Everything You Need To Know About Preventing LLM Hallucinations

February 1, 2024

Hallucination in large language models — a feature or a bug?

Large Language Models, no matter the model, the training, the reinforcement learning, or the number of parameters, all, by their very nature, “hallucinate.” In essence, an LLM predicts the next word in a sentence. As a simplified definition, they’re really advanced, almost miraculous, autocomplete engines.  

But in a less simplified definition, they’re essentially applying an algorithm that finds a point in a multidimensional (as in many billions of variables) vector space. This algorithm then responds to a prompt, i.e. some form of input from a user, by predicting the best way to complete that response.  

As it turns out, this ability enables some pretty amazing results. For example, a viral tweet asked ChatGPT to write a biblical verse in the style of the King James Bible explaining how to remove a peanut butter sandwich from a VCR.  

The result was mesmerizing. It found the space in between all of these ideas — “King James Bible style,” “peanut butter sandwich,” “VCR,” and with no irony or pretension it generated a blended response:  

This ability to generate something new from the parameters of its training data, to essentially predict something that would exist in latent space, has led many to consider the model itself intelligent or creative or able to effectively do what many humans cannot. Yet, at its core, this act of creation is at odds with what makes for effective, actionable information inside of an enterprise.  

This is not to diminish the importance of a model’s ability to create. Indeed, the generative aspects of an LLM are amazing. Yet, let’s imagine a pretty basic business scenario, one that thousands of users of our free financial tool, hila.ai, run into on a daily basis — they want to know what Tim Cook said in the most recent earnings call.  

In this case, predicting the next word or finding a space between things Tim Cook has said in the past is deeply problematic. In fact, should someone rely on an unadulterated response from an LLM, without any techniques to improve or measure that response, the result could be disastrous. The answer, like the peanut butter sandwich verse, would be entirely hallucinated.  

So, is hallucination a feature or a bug? It is fundamental to the way these models work. In some use cases, it’s absolutely a feature, but in others, like any task that relies on facts or precision, it’s a complete bug.  

This is an example of a hallucination in ChatGPT on an earnings release. The cash it claims on hand is wrong. hila Enterprise, our product, is in the middle and provides the right answer.

What to do about hallucination?

Recently, a comprehensive paper from researchers at the Islamic University of Technology, the University of South Carolina, Stanford University, and Amazon’s AI department surveyed the various methods available today to mitigate hallucination.  

These are divided into two camps — methods that mitigate hallucination via prompt engineering and methods that mitigate it within the model itself.  

Diagram: the survey’s branching taxonomy of hallucination mitigation techniques.

Chief among the techniques that alter the prompt is RAG, retrieval augmented generation, which essentially takes known information and appends it to the message sent to the model. This approach goes a long way in mitigating hallucinations, as it essentially ties the model to a certain, specific set of context. Many of hila Enterprise’s methods focus on these techniques.  
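To make the idea concrete, here is a minimal sketch of prompt-level RAG in Python. The toy keyword retriever, the made-up transcript chunks, and the prompt wording are all illustrative assumptions, not hila Enterprise’s pipeline; the point is simply that retrieved context gets appended to the message before it reaches the model.

```python
# Minimal prompt-level RAG sketch: retrieve known context, append it to the prompt,
# and instruct the model to answer only from that context. Everything here
# (retriever, chunks, wording) is illustrative.

def retrieve(question: str, chunks: list[str], top_k: int = 3) -> list[str]:
    """Toy retriever: rank chunks by naive keyword overlap with the question."""
    q_terms = set(question.lower().split())
    ranked = sorted(chunks, key=lambda c: len(q_terms & set(c.lower().split())), reverse=True)
    return ranked[:top_k]

def build_rag_prompt(question: str, chunks: list[str]) -> str:
    """Tie the model to a specific set of context by appending it to the prompt."""
    context = retrieve(question, chunks)
    context_block = "\n\n".join(f"[Source {i + 1}] {c}" for i, c in enumerate(context))
    return (
        "Answer the question using ONLY the sources below. "
        "If the sources do not contain the answer, say you do not know.\n\n"
        f"{context_block}\n\nQuestion: {question}\nAnswer:"
    )

# Example usage with made-up transcript chunks; the resulting prompt replaces the
# raw question in the call to whichever LLM API is being used.
transcript_chunks = [
    "CEO remarks: revenue grew in the services segment this quarter.",
    "CFO remarks: operating margin was roughly flat year over year.",
]
prompt = build_rag_prompt("What did the CEO say about services revenue?", transcript_chunks)
print(prompt)
```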

These RAG techniques can come at multiple points in the processing of the question — before the question goes to the model, during the generation of the answer, and after the generation of the answer. The initial RAG process, proposed in a paper by Lewis et al. in 2020, was an end-to-end process that uses a retriever to supply relevant documents, based on an input, to a model that produces the final output. This process, and many processes of this type, require access to the model itself — an API alone often does not provide enough information from a public model to engage in several of these methodologies.  

This restraint on API-based and public models also extends to the development of the model itself, which covers the second branch of the tree. Many of these techniques work while training the model from the outset. A few of them work as supervised fine-tuning — a method that Vianai uses in hila Enterprise, and one that improves the anti-hallucination properties of some public models and all available open-source models. This process can include fine-tuning for factuality, attribution, quantifying hallucination severity, and information faithfulness (among others).  
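As a rough illustration of what supervised fine-tuning toward grounded, factual answers can look like, the sketch below fine-tunes a small open-source model on answers that stick strictly to their context. The base model, the tiny fictional dataset, and the training settings are assumptions for illustration only and do not reflect Vianai’s actual training setup.

```python
# Sketch of supervised fine-tuning toward grounded answers using the Hugging Face
# Trainer. Base model, data, and hyperparameters are illustrative only.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

model_name = "gpt2"  # small stand-in for an open-source base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tiny fictional dataset: each example pairs a context with an answer that sticks to it.
examples = [
    {"text": "Context: Acme Corp reported $10M in revenue in Q3.\n"
             "Question: What was Acme Corp's Q3 revenue?\n"
             "Answer: According to the context, Acme Corp's Q3 revenue was $10M."},
]

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)
    out["labels"] = [ids.copy() for ids in out["input_ids"]]  # causal LM: labels mirror inputs
    return out

dataset = Dataset.from_list(examples).map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="factuality-ft", num_train_epochs=1,
                           per_device_train_batch_size=1, report_to=[]),
    train_dataset=dataset,
)
trainer.train()
```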

Finally, there are two groups of techniques that rely on improving the prompt. This comes in two forms — first, improving the prompt by using a lightweight technique to essentially replace a prompt with one that yields a better, more factual response. The second uses various forms of feedback to the model to improve its responses, for example by generating alternative responses or by having LLMs judge whether a piece of context can properly support a response.  
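A sketch of the first, lightweight style of prompt improvement is below: one LLM call rewrites a vague question into a more precise one before it enters the pipeline. The OpenAI client is just one possible backend, and the model name and rewrite instructions are illustrative assumptions, not a description of hila Enterprise’s method.

```python
# Sketch of lightweight prompt rewriting: an LLM rewrites the user's prompt into a
# more precise, more answerable one before the real question is asked.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

REWRITE_INSTRUCTIONS = (
    "Rewrite the user's question so it is specific, unambiguous, and answerable "
    "from factual sources. Return only the rewritten question."
)

def rewrite_prompt(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice; any chat model works here
        messages=[
            {"role": "system", "content": REWRITE_INSTRUCTIONS},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content.strip()

# The rewritten question then flows through the normal retrieval and answering pipeline.
better_question = rewrite_prompt("what did tim say about money last time")
print(better_question)
```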

How hila Enterprise fits

Another example of hila Enterprise providing the right answer in a place where ChatGPT hallucinated. Again, both were using a RAG process.

hila Enterprise uses fine-tuned models, post-processing of hallucinations, and enhancements to the retrieval process. These methods are holistic — they work on our own models or public models — and they’re comprehensive. hila also highlights potential hallucinations, directly removes the hallucinations from the original answers, or improves the answer based on the appropriate sources.  

More specifically, hila Enterprise utilizes:

  1. Entailment-based detection — A proprietary Vianai technique: we determine whether each sentence in the LLM-generated answer is entailed by the most similar sentences in the context, and from that hila Enterprise arrives at a measure of hallucination (a generic sketch of this idea appears after this list). The platform applies a thorough process that extends beyond purely using entailment and leverages additional techniques that capture sentence variability and nuances in text. We can then provide a hallucination score that accounts for differences in sentence structure between an LLM-generated answer and the sources the LLM uses to generate that answer.
  2. Iterative refinement technique — An LLM determines whether an LLM-generated answer contains a hallucination with regard to a provided piece of context. This is done multiple times for each piece of context, and the number of “yes” answers is divided by the total number of detection prompts to produce a hallucination score. The hila Enterprise platform averages these scores across all pieces of context and applies additional steps that refine the final answer based on the detected hallucinations. This technique not only produces a metric for hallucination (the hallucination rate), but also provides an enhanced answer that overcomes hallucinations.
  3. Multistep verification technique — Each LLM-generated answer is sent to an LLM to generate follow-up questions. The hila Enterprise platform uses these follow-up questions to retrieve more context from the retrieval system, and then uses that context to answer them. The follow-up responses are then used to modify the original answer via an LLM, leading to a more refined answer.
  4. MARAG — A proprietary Vianai technique inspired by the way humans retrieve information when answering questions during long research processes. It is a complex retrieval process that leverages language models to enhance retrieval. This technique not only enables us to find the best possible sources to answer a question, but also to determine which sources not to consider when prompting an LLM. Although the process is computationally expensive and slow, it yields much greater factual correctness, as the final answer is generated by an LLM using the best possible context retrieved from the hila Enterprise retrieval system. This technique is most useful for report generation, where longer wait times are expected.
  5. Fine-tuned models — We have brought in key methodologies to improve models, fine-tuning based on task and function and to improve various aspects of factuality and retrieval. These models now serve in txt2sql, translation, and anti-hallucination. Our fine-tuned model, for example, took execution accuracy on an SAP HANA database from 0% (with mostly empty responses) to 96.42%; our additional services took it to 100%.
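The sketch below shows the generic core of entailment-based detection (item 1 above) using an off-the-shelf NLI model: score each answer sentence against the context sentences and count how many are not entailed. It is a simplified, public approximation under assumed model and threshold choices, not hila Enterprise’s proprietary implementation, which layers additional steps on top to handle sentence variability and structural nuance.

```python
# Generic sketch of sentence-level entailment scoring with an off-the-shelf NLI model.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

nli_name = "facebook/bart-large-mnli"  # illustrative NLI model choice
tokenizer = AutoTokenizer.from_pretrained(nli_name)
nli_model = AutoModelForSequenceClassification.from_pretrained(nli_name)
ENTAILMENT_ID = nli_model.config.label2id.get("entailment", 2)

def entailment_prob(premise: str, hypothesis: str) -> float:
    """Probability that the premise (context sentence) entails the hypothesis (answer sentence)."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = nli_model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, ENTAILMENT_ID].item()

def hallucination_score(answer_sentences: list[str], context_sentences: list[str]) -> float:
    """Fraction of answer sentences not entailed by any context sentence (0.5 threshold assumed)."""
    not_entailed = 0
    for ans in answer_sentences:
        best = max(entailment_prob(ctx, ans) for ctx in context_sentences)
        if best < 0.5:
            not_entailed += 1
    return not_entailed / max(len(answer_sentences), 1)

# Made-up example: the claimed cash figure is not supported by the source sentence.
score = hallucination_score(
    ["The company ended the quarter with $75 billion in cash."],
    ["The company reported cash and cash equivalents of $30 billion at quarter end."],
)
print(f"hallucination score: {score:.2f}")
```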

Taken together, these techniques have taken hallucinations from 66 percent of sentences down to zero. And they have improved the answers — instead of leaving empty responses or simply deleting sentences, they provide more robust answers drawn from the original sources.  

Fundamentally, hila Enterprise is a helpful assistant, not a creative agent. This has pushed us to advance the state of the art and propelled us to continue incorporating cutting-edge anti-hallucination methods that maintain privacy and aid in performing a variety of work tasks.

Interested in learning more about how you can bring non-hallucinating GenAI into your enterprise? Get in touch here.