Another issue that wasn't so easy to solve was conversational, dialogue-style data, which wasn't well represented in the training data.
I've always wanted to come back to this problem, because I think it's very interesting, and we're going to have a lot of unstructured text coming out of STT models like whisper, which do a great job of transcribing/translating but generally don't format anything.
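To make that concrete, here's a minimal sketch of what I mean, assuming the open-source whisper package and a placeholder file name ("audio.mp3" is my example, not anything specific): the transcript comes back as one flat string, with no paragraphs, headings, or speaker turns to chunk on.

```python
# Sketch: transcribing audio with the open-source whisper package.
# "audio.mp3" is a placeholder file name for illustration.
import whisper

model = whisper.load_model("base")       # small multilingual model
result = model.transcribe("audio.mp3")   # returns a dict with "text" and "segments"

# One long, unformatted string: exactly the kind of unstructured text
# that later needs chunking before it can go into a RAG pipeline.
print(result["text"])
```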
It literally split the text in the middle of related passages while keeping unrelated passages together, even though the chunks were nowhere near the embedding model's input limit.
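For context, the kind of embedding-based chunker I'm describing looks roughly like this. This is a minimal sketch, not what I actually ran: sentence-transformers, the model name, and the 0.5 threshold are all illustrative stand-ins. The idea is to start a new chunk whenever the cosine similarity between adjacent sentences drops below a threshold.

```python
# Minimal sketch of embedding-similarity chunking. The library, model name,
# and threshold are illustrative assumptions, not the exact setup I used.
from sentence_transformers import SentenceTransformer
import numpy as np

def semantic_chunks(sentences, threshold=0.5):
    model = SentenceTransformer("all-MiniLM-L6-v2")
    # normalize_embeddings=True lets a plain dot product act as cosine similarity
    embs = model.encode(sentences, normalize_embeddings=True)
    chunks, current = [], [sentences[0]]
    for i in range(1, len(sentences)):
        sim = float(np.dot(embs[i - 1], embs[i]))  # cosine similarity of neighbors
        if sim < threshold:                        # similarity drop -> cut here
            chunks.append(" ".join(current))
            current = []
        current.append(sentences[i])
    chunks.append(" ".join(current))
    return chunks
```

The failure mode is exactly what you'd expect: two adjacent sentences can be lexically dissimilar but still belong together ("Hey" vs. "I forgot my password"), so the similarity drop fires precisely where you don't want a cut, and stretches of boilerplate that happen to look alike get glued into one chunk.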
I genuinely wanted this to work. I mean it. But nope. This shit did not work at all.
RAG is still fcked because of chunking issues. GraphRAG doesn't work correctly either, unless you're willing to throw a lot of money at it at ingestion time.
Chonk("Hey I forgot my password, this is Tom from X Company") = ("Hey", "I forgot my password", "this is Tom from X Company")
Even then, the result doesn't look particularly helpful.