Day 76 of 100 Days of AI
Today was another day of AI reading as I was travelling a bunch. I looked over the arguments for when to use retrieval-augmented generation (RAG) and when to fine-tune large language models. Both approaches have pros and cons, and combining them can offset some of the weaknesses of each, though it doesn't necessarily solve for cost. Here’s a brief take on the trade-offs to consider:
- RAG is cheap, fast, and less prone to hallucinations (though it isn’t entirely hallucination-free!). However, you are still working with a generic underlying model that will lack the nuances of a particular niche or area of expertise. (There’s a minimal sketch of a RAG flow after this list.)
- Fine-tuning an LLM produces a new model that is more of a domain expert, and it’s more likely to give superior results in the area of specialism it was tuned on. However, fine-tuning is a more expensive and time-intensive process (see the second sketch below for roughly what kicking off a job looks like), and hallucinations are still an issue.
- A blend of both strategies (RAG on top of a fine-tuned LLM) can be superior in terms of performance, but it takes more time and resources to achieve.
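For concreteness, here’s a minimal sketch of what a RAG flow can look like in Python. It assumes the `openai` client (v1+) with an API key set in the environment, uses a toy in-memory document list in place of a real vector store, and the model names (`text-embedding-3-small`, `gpt-4o-mini`) are purely illustrative, not a recommendation:

```python
# Minimal RAG sketch: embed documents, retrieve the closest ones to a
# question, and stuff them into the prompt before generating an answer.
import numpy as np
from openai import OpenAI

client = OpenAI()

# A toy "knowledge base" standing in for a real document/vector store.
documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Premium support is available to enterprise customers only.",
    "The API rate limit is 100 requests per minute per key.",
]

def embed(texts):
    """Embed a list of strings into vectors."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(documents)

def answer(question, top_k=2):
    # 1. Retrieve: rank documents by cosine similarity to the question.
    q_vec = embed([question])[0]
    scores = doc_vectors @ q_vec / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vec)
    )
    context = "\n".join(documents[i] for i in np.argsort(scores)[::-1][:top_k])

    # 2. Generate: pass the retrieved context to a generic model.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("How long do customers have to return a product?"))
```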
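And for contrast, a rough sketch of what kicking off a fine-tuning job looks like via the same client. The training examples, file name, and base model ID (`gpt-4o-mini-2024-07-18`) are placeholder assumptions; the real effort goes into curating a good dataset, and the job itself runs asynchronously, which is where the extra time and cost come in:

```python
# Rough fine-tuning sketch: prepare chat-format training data as JSONL,
# upload it, and start a fine-tuning job (data and model are placeholders).
import json
from openai import OpenAI

client = OpenAI()

examples = [
    {"messages": [
        {"role": "system", "content": "You are a contract-law assistant."},
        {"role": "user", "content": "What is a force majeure clause?"},
        {"role": "assistant", "content": "A clause excusing performance when events beyond the parties' control occur."},
    ]},
]
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Upload the data and start the job; training runs asynchronously.
training_file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # assumed fine-tunable base model
)
print(job.id)
```

For the blended approach in the last bullet, the RAG pipeline above would simply point at the resulting fine-tuned model ID instead of the generic one.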