RAG and LLMs

I was doing some research on RAG (Retrieval-Augmented Generation) for work and read through this AWS documentation. It came complete with a perfect description of how LLMs fall short and how those shortcomings come across to users.

"You can think of the Large Language Model as an over-enthusiastic new employee who refuses to stay informed with current events but will always answer every question with absolute confidence. Unfortunately, such an attitude can negatively impact user trust and is not something you want your chatbots to emulate!"

RAG seeks to correct some of these problems by connecting the LLM to sources of up-to-date information. However, I can see potential issues with the approach.

The documentation notes that developers "can use RAG to connect the LLM directly to live social media feeds, news sites, or other frequently-updated information sources."
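To make the mechanics concrete, here is a minimal sketch of the retrieve-then-generate loop. Everything in it is a stand-in I invented for illustration: the tiny document list, the keyword-overlap retriever, and the prompt format. A real system would use a vector index and send the augmented prompt to an actual LLM.

```python
# Minimal retrieve-then-generate sketch. Documents, retriever, and prompt
# format are all invented for illustration; real RAG systems use embedding
# search and pass the augmented prompt to a live model.

DOCUMENTS = [
    {"source": "https://news.example.com/launch", "text": "Officials confirmed the launch moved to Friday."},
    {"source": "https://wiki.example.com/launch", "text": "The launch was originally scheduled for Monday."},
]

def retrieve(query, k=2):
    """Naive keyword-overlap scoring; a stand-in for vector search."""
    words = set(query.lower().split())
    def score(doc):
        return len(words & set(doc["text"].lower().split()))
    return sorted(DOCUMENTS, key=score, reverse=True)[:k]

def build_prompt(query, docs):
    """Prepend retrieved, current context so the model doesn't rely on stale training data."""
    context = "\n".join(f"- {d['text']} (source: {d['source']})" for d in docs)
    return (
        "Answer using only the context below, and cite sources.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

query = "When is the launch?"
print(build_prompt(query, retrieve(query)))  # this augmented prompt goes to the LLM
```

The model still generates the final answer, so the quality of what comes back depends heavily on what the retriever pulled in.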

If the model depends on social media for corrective information, it could introduce more problems than it solves. Social media is notoriously rife with misinformation, and some of the biggest players in the space refuse to do anything about it, or even amplify it. The one saving grace for determining the veracity of the information is that RAG includes source attribution, so readers can at least see where a claim came from. Caveat emptor, YMMV, etc.
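On that last point, attribution only helps if it survives all the way to the reader. As a toy illustration (the URLs and trust labels below are invented), here is how source metadata can be carried through retrieval and surfaced alongside the answer:

```python
from urllib.parse import urlparse

# Toy illustration of surfacing attribution: each retrieved passage keeps its
# source URL, and the answer lists them with an (invented) note on origin so
# the reader can apply their own skepticism.

TRUST_NOTES = {
    "news.example.com": "edited newsroom",
    "social.example.com": "unvetted social media post",
}

def cite(docs):
    lines = []
    for i, doc in enumerate(docs, start=1):
        host = urlparse(doc["source"]).netloc
        lines.append(f"[{i}] {doc['source']} ({TRUST_NOTES.get(host, 'unknown origin')})")
    return "\n".join(lines)

docs = [
    {"source": "https://social.example.com/post/123", "text": "Launch moved to Friday!!"},
    {"source": "https://news.example.com/launch", "text": "Officials confirmed a Friday launch."},
]
print("The launch is on Friday.\nSources:\n" + cite(docs))
```

Whether readers actually click through and weigh those sources is another matter, but at least the trail is there.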

tech
