
The debut of GPT-3 marked a pivotal moment, not just for artificial intelligence but for our collective imagination about what technology can achieve. It pushed our idea of what machines can do beyond processing data at lightning speed or solving complex equations. Now, language models can craft a narrative, inject humor into a conversation, and, in essence, mimic the creative prowess of the human mind. However, translating the nuances of human emotion, humor, and thought into binary code remained a puzzle. Enter reinforcement learning from human feedback (RLHF), a groundbreaking approach poised to bring us closer to solving this mystery.

RLHF is about fine-tuning LLMs to grasp the subtle nuances of human communication. It's a move towards making language models not only mimic human interactions but also understand and adapt to them. By integrating human feedback directly into the learning process, RLHF aims to make interactions with AI as natural and intuitive as talking to another person. In this blog post, we'll dive into the nuts and bolts of RLHF, see how it works, explore tools, and look at alternatives to the method.

What is RLHF?

Reinforcement learning from human feedback (RLHF) is a technique where AI improves by learning directly from human feedback. This way, you enrich AI's learning process with real human insights. In RLHF, AI doesn't just produce what it thinks is best based on data alone but also considers what people actually find useful or relevant. RLHF is especially handy for natural language processing tasks requiring a human touch, like creating content that genuinely resonates with us. By integrating our feedback, language models become more adept at delivering results that align with human goals and preferences, marking a significant step forward in generative AI applications, including large language models.

Understanding RLHF meaning

Picture this: you're fine-tuning a language model to summarize text. Take this brief text as an example: "The internet revolutionized how we share information, making it instant and accessible worldwide. It has become a crucial tool for communication, education, and entertainment." Here are two different summaries of the previous text.

Summary 1: "The internet changed communication by making information sharing instant and global."

Summary 2: "The internet's impact includes transforming communication, enhancing education, and providing entertainment globally."


While both summaries capture the essence, they focus on different aspects. The first is concise, emphasizing the revolution in communication. The second expands on the internet's broader impacts, touching on education and entertainment. Which one is "better" depends on what details and focus we value more.

Given the variety in language and human choice, it's clear that preferences in summaries can vary widely among individuals. This variability is exactly why summarization isn't a one-size-fits-all task. While some natural language processing tasks have straightforward answers, summarization is subjective, often leading to multiple "correct" summaries based on individual preferences. Collecting human feedback on which summary people prefer is precisely what produces the preference data that RLHF later uses to fine-tune the LLM.

RLHF can be useful even if you're not training an LLM from scratch. Let's say you're building an application whose values you want to set. While fine-tuning is one way to do this, sometimes RLHF is a better solution. When given a question like "Where is Times Square?", an LLM can reply simply "New York" or "Times Square is in New York." Some of these responses will feel more natural than others, and RLHF gathers human feedback on which responses people prefer, then uses it to train the model to generate the kind of responses humans favor.

And it's not only about summarization—many LLM applications require diverse opinions to collect comprehensive data. Reinforcement learning from human feedback is the solution for this.

In a nutshell, RLHF helps us improve an LLM's ability to solve complex tasks where the desired output is difficult to explain or describe; in other words, problems with no single correct answer, which is the case for many LLM problems. RLHF doesn't solve all of the problems of truthfulness and toxicity in language models, but it has been a key part of improving LLM quality.

RLHF approaches

The first thing you do for RLHF is collect text samples and have people summarize them. There's rarely just one way to summarize a text, reflecting the personal touch language inherently has.

To address this, we focus on understanding what people actually like. By showing labelers two different summaries and asking which they prefer, we shift from finding a single "correct" answer to aligning AI outputs with human guidance. This approach is key to reinforcement learning from human feedback (RLHF). SuperAnnotate's model comparison template (one of our many LLM templates) focuses exactly on that. We offer a data collection pipeline for fine-tuning a language model by having our expert workforce, or labelers, compare different model outputs and choose the preferred ones. Rather than tuning the model on a single "correct" answer, this comparison data drives a reinforcement learning process that guides the AI to produce outputs that better match what people want to see or hear.

In this process, you start with an LLM that's already been trained with instructions and learned to follow them. You then gather a dataset that indicates a human labeler's preferences between multiple completions of the same prompts and use this dataset as a reward signal to fine-tune an instruction-tuned LLM. The result is a tuned LLM that generates outputs that better align with human guidance.

How does RLHF work?

RLHF is an evolving area of research, and there are many variations of how it may be implemented, but the high-level themes are the same. RLHF consists of three stages: we first create a preference dataset. Then, we use this preference dataset to train a reward model with supervised learning. Afterwards, we use the reward model in a reinforcement learning loop to fine-tune our base LLM.


Here are the RLHF stages in detail:


Stage 1: Preference dataset

We start by choosing the large language model (LLM) that needs refinement. We then give the pre-trained model various prompts, such as requests to summarize specific texts, setting the stage for further tuning. Human labelers play a crucial role at this point; they evaluate pairs of model-generated responses to each prompt and select the more suitable option. This comparison builds our preference dataset, which captures human preferences among the model's outputs. Creating this dataset is essential but requires clear goals for the model's tuning, like enhancing accuracy, reducing bias, or increasing user engagement.
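
Concretely, a single preference record usually pairs a prompt with the completion labelers preferred and the one they did not. The snippet below is a minimal sketch of what such a record might look like; the field names are illustrative assumptions rather than a fixed schema.

```python
# A minimal, illustrative preference record. Field names are assumptions,
# not a fixed schema; real pipelines usually add metadata (labeler ID, task, etc.).
preference_example = {
    "prompt": "Summarize: The internet revolutionized how we share information...",
    "chosen": "The internet changed communication by making information sharing instant and global.",
    "rejected": "The internet's impact includes transforming communication, enhancing "
                "education, and providing entertainment globally.",
}

# A preference dataset is simply a collection of such comparisons from human labelers.
preference_dataset = [preference_example]
print(len(preference_dataset), "comparison(s) collected")
```

Note that "rejected" here just means less preferred by the labeler, not that the summary is wrong.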


Stage 2: Reward model

Next, we take the preference data we've gathered and get down to training a reward model. This model's job is essentially to act as the judge during the training process, scoring the LLM's responses using a reward function based on how well they align with what our human labelers prefer. This step turns qualitative judgments into quantifiable scores, offering a way to measure how close an LLM's response is to the ideal.


Training this model involves feeding it examples of prompts paired with two different responses: the preferred one and the less preferred one. From there, it learns to assign scores that reflect the preferences it's been trained on. The reward score isn't about right or wrong but about aligning more closely with human values and preferences.
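
To show how those qualitative judgments become quantifiable scores, here is a minimal sketch of a pairwise (Bradley-Terry style) loss, assuming the reward model has already mapped each prompt-response pair to a scalar score. The tensors below are toy values, not real model outputs.

```python
import torch
import torch.nn.functional as F

def pairwise_reward_loss(chosen_scores: torch.Tensor,
                         rejected_scores: torch.Tensor) -> torch.Tensor:
    """Push the reward model to score the human-preferred response above the other one:
    -log(sigmoid(r_chosen - r_rejected)), averaged over the batch."""
    return -F.logsigmoid(chosen_scores - rejected_scores).mean()

# Toy scalar scores for a batch of three comparisons
chosen = torch.tensor([1.2, 0.4, 2.0])
rejected = torch.tensor([0.3, 0.9, 1.1])
print(pairwise_reward_loss(chosen, rejected))  # smaller when chosen scores are higher
```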

Stage 3: Fine-tuning

The final step involves fine-tuning the base language model with the insights from the reward model. The aim here is to adjust the LLM's output to reflect human preferences better, as indicated by higher scores from the reward model.


This step uses a separate dataset of prompts and applies a reinforcement learning algorithm, typically proximal policy optimization (PPO), to improve the model's output, guiding it toward generating responses that humans favor.
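
Many RLHF implementations shape the reward used in this loop by combining the reward model's score for a completion with a penalty for drifting too far from the original model. The helper below is a minimal sketch of that idea, assuming per-token log-probabilities from the tuned policy and the reference model are already available; it is illustrative, not a specific library's API.

```python
import torch

def rlhf_step_reward(reward_model_score: torch.Tensor,
                     policy_logprobs: torch.Tensor,
                     reference_logprobs: torch.Tensor,
                     kl_coef: float = 0.1) -> torch.Tensor:
    """Reward model score minus a KL-style penalty that keeps the tuned policy
    close to the reference (pre-RLHF) model."""
    # Naive per-completion KL estimate: sum of per-token log-prob differences
    kl_penalty = (policy_logprobs - reference_logprobs).sum(dim=-1)
    return reward_model_score - kl_coef * kl_penalty

# Toy tensors standing in for one completion of five tokens
score = torch.tensor([0.8])
policy_lp = torch.randn(1, 5)
reference_lp = torch.randn(1, 5)
print(rlhf_step_reward(score, policy_lp, reference_lp))
```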


Reinforcement learning component

Reinforcement learning (RL) comes into play when you have a complex and not strictly defined task. Imagine trying to teach someone a game without explicitly telling them the rules but instead rewarding them for good moves. That's the essence of RL—it's about guiding the model towards making a series of decisions that lead to the best outcome, even when the "best" isn't clearly defined from the start.

In reinforcement learning, the model, or "agent," learns by doing. It interacts with its environment, makes decisions (or "actions"), sees how the environment responds and receives rewards or penalties. This process helps the agent figure out the environment's rules. A famous example is AlphaGo, which mastered the game of Go by experimenting with different strategies and learning from the outcomes.

This learning process differs from what we see in supervised learning, where the model learns from clear examples of what to do. In reinforcement learning, there's no set path. The agent explores, tries different actions, and learns from the results. It keeps track of what actions lead to better rewards in different situations, storing this information in a policy. Like the agent's decision-making brain, this policy maps the current state of the environment to the actions the agent should take next, aiming to maximize rewards.

For instance, when tuning a large language model with RL, the "current state" might include the prompt given to the model and any text it has generated up until that point. The "actions" are the next tokens or words the model chooses to generate. Each choice the model makes is evaluated by a reward model, which scores how well the generated text aligns with what we're looking for. The goal is to learn a policy that gets the LLM to produce highly scored completions, effectively teaching the model to generate text that matches human preferences more closely.
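
To formalize that goal, one common way to write the RLHF policy objective (as in InstructGPT-style training) is shown below; this is a standard formulation from the literature, not a quote from any particular implementation:

```latex
\max_{\pi_\theta}\;
\mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x)}
\big[\, r_\phi(x, y) \,\big]
\;-\;
\beta\, \mathbb{D}_{\mathrm{KL}}\!\left[\, \pi_\theta(y \mid x) \;\|\; \pi_{\mathrm{ref}}(y \mid x) \,\right]
```

Here \(\pi_\theta\) is the LLM being tuned, \(\pi_{\mathrm{ref}}\) is the original model, \(r_\phi\) is the reward model, and \(\beta\) controls how strongly the policy is kept close to the reference model.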

RLHF in SuperAnnotate

SuperAnnotate's LLM playground offers ready-made templates for building datasets based on human feedback. RLHF is a big part of this process, where our expert workforce labels data according to their domain knowledge and preferences and crafts a dataset ready for RLHF fine-tuning.


Here are a few reasons why enterprises choose to trust SuperAnnotate for their RLHF language model projects:

  • We've put time and effort into gathering a world-class team of experts who meticulously work with clients' data, ensuring top-quality human feedback – the cornerstone of any RLHF project.
  • The interface is fully customizable – you can build your own use case beyond the ready-made templates!
  • Our platform offers analytics and insights that give clients full control over and visibility into their data.
  • API integrations make it easy to set up a model in the loop, AI feedback, and much more.

Alternatives to RLHF

While reinforcement learning from human feedback offers a robust way to align LLM outputs with human preferences, it's not the only method on the table. In fact, it has some notable drawbacks that make people think of more efficient alternatives. Some challenges include the scalability of gathering human feedback, potential biases introduced by the feedback providers, and the complexity of effectively integrating this feedback into the AI training process. Let's delve into a couple of RLHF alternatives and see if and how they address these issues.

RLHF vs. DPO

RLHF is a complicated process. You first fit a reward model based on human feedback, then fine-tune the unsupervised language model using RL to maximize the reward score while staying close to the original model. Stanford researchers recently came up with a new parametrization of the reward model that enables optimal policy extraction in closed form. This allows solving the RLHF problem with only a simple classification loss. The resulting algorithm, direct preference optimization (DPO), is computationally lightweight, stable, and performant. DPO eliminates the need for sampling from the language model during fine-tuning or performing significant hyperparameter tuning.
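
For reference, the classification-style loss from the DPO paper can be written as follows, where \(y_w\) is the preferred completion, \(y_l\) the rejected one, \(\pi_{\mathrm{ref}}\) the reference model, and \(\beta\) a scaling hyperparameter:

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\, \pi_{\mathrm{ref}}) =
-\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}
\left[
\log \sigma\!\left(
\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
\;-\;
\beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
\right)
\right]
```

Because the loss is computed directly on the preference pairs, no separate reward model or RL sampling loop is needed.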


DPO shows some impressive results, establishing itself as a method that fine-tunes an LM to fit human feedback as well as or even better than existing methods. Results show that DPO performs better than PPO-based RLHF in terms of controlling the sentiment of generations. In summarization and single-turn dialogue tasks, it matches or improves on RLHF while being substantially simpler to implement and train.

RLHF vs. RLAIF

Human labeling for RLHF training is time-consuming and expensive. That's a significant motivation for a technique that gained popularity last year – RLAIF, which uses a ready-made LLM to mimic the job of human annotators, creating AI-generated preferences instead.
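
To make the idea concrete, here is a minimal sketch of RLAIF-style preference labeling. The `llm` argument is a placeholder callable (an assumption, not a specific vendor API) that takes a text prompt and returns a text answer.

```python
def ai_preference_label(llm, prompt: str, response_a: str, response_b: str) -> str:
    """Ask an off-the-shelf LLM which of two candidate responses it prefers."""
    question = (
        "Given the task prompt and two candidate responses, reply with only 'A' or 'B' "
        "to indicate the better response.\n\n"
        f"Prompt: {prompt}\n\nResponse A: {response_a}\n\nResponse B: {response_b}"
    )
    answer = llm(question).strip().upper()
    return "A" if answer.startswith("A") else "B"

# Toy stand-in for a real model call, used only so the sketch runs end to end.
def toy_llm(text: str) -> str:
    return "A"

print(ai_preference_label(toy_llm, "Summarize the text about the internet.",
                          "Short, focused summary.", "Longer, broader summary."))
```

The AI-labeled pairs are then used exactly like human-labeled ones: to train a reward model or, in some variants, to query the LLM for reward scores directly.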

When it comes to tasks like summarization and crafting helpful or non-offensive dialogue, RLAIF keeps up and sometimes races ahead of RLHF. It beats the standard approach of fine-tuning with supervision, impressively doing so with a preference labeler the same size as the policy model it's training.

Intriguingly, simply asking the LLM directly for reward scores can lead to better results than the typical RLAIF approach, which involves turning LLM-generated preferences into a reward model first. After a thorough exploration of different methods to generate AI preferences that align with human values, the findings hint at RLAIF's capacity to outperform human annotators. This breakthrough points to a way around the tricky issue of scaling RLHF, offering a glimpse into a future where aligning AI with human preferences might not be so daunting after all.

Fine-grained human feedback

Language models sometimes mess up by creating content that's misleading, harmful, or just plain irrelevant. To make these models better listeners and speakers, researchers have turned to the now-familiar RLHF.

But, traditional RLHF is like getting a single report card for an entire year's subjects—it's too broad and doesn't pinpoint where the model needs to improve. Fine-grained RLHF is a sophisticated approach that breaks down feedback into more detailed, bite-sized pieces. It's like getting a report card that not only tells you how you did in each subject but also gives you feedback on every assignment and test.


Fine-grained RLHF enables training and learning from reward functions that are fine-grained in two respects (a toy sketch of combining such rewards follows the list):

  1. Density, providing a reward after every segment (e.g., a sentence) is generated.
  2. Incorporating multiple reward models associated with different feedback types (e.g., factual incorrectness, irrelevance, and information incompleteness).
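
As a rough illustration of both ideas, the sketch below combines scores from several placeholder reward functions into one reward per generated segment. The reward functions and weights are purely illustrative assumptions standing in for trained fine-grained reward models.

```python
from typing import Callable, List

def combine_fine_grained_rewards(
    segments: List[str],
    reward_fns: List[Callable[[str], float]],
    weights: List[float],
) -> List[float]:
    """Return one combined reward per generated segment (e.g., per sentence)."""
    return [
        sum(w * fn(seg) for fn, w in zip(reward_fns, weights))
        for seg in segments
    ]

# Placeholder "reward models" for illustration only
factuality = lambda s: 1.0 if "internet" in s.lower() else 0.0
relevance = lambda s: min(len(s.split()) / 20.0, 1.0)

print(combine_fine_grained_rewards(
    ["The internet made information sharing instant.", "Bananas are yellow."],
    [factuality, relevance],
    [0.7, 0.3],
))
```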

You can find all the related data, collected human feedback, and code on GitHub.

Wrapping up

The arrival of language models like GPT-3 opened up a world of possibilities for making AI understand and generate human-like language. But here's the real challenge: fine-tuning AI to grasp the nuances of what we really mean or prefer. This is where RLHF comes in, blending the best of LLMs' learning abilities with the irreplaceable insights from human feedback. It's all about making AI not just smart but also sensitive to our preferences.

RLHF shines by steering AI towards outcomes that resonate more authentically with us, especially in scenarios without clear-cut answers. However, perfecting this approach has its hurdles, like ensuring we can scale up without losing the personal touch or introducing biases. That's why alternatives like direct preference optimization (DPO) and reinforcement learning from AI feedback (RLAIF) are getting attention. DPO simplifies the fine-tuning process, and RLAIF introduces a clever workaround for RLHF's scalability challenge by using AI to simulate human feedback, both showing promising strides toward achieving nuanced AI interactions.

As we explore these paths, the end goal is crystal clear: to evolve artificial intelligence systems that are not only efficient but deeply aligned with human values and thoughts. RLHF's journey and its alternatives showcase our drive towards creating AI that genuinely understands and interacts with us on a human level. It's an exciting time, with each step forward bringing us closer to seamlessly integrated AI-human interactions.
