Anthony Alford
AI research groups LAION and CarperAI have released OpenAssistant and trlX, open-source implementations of reinforcement learning from human feedback (RLHF), the algorithm used to train ChatGPT. Independent AI developer Phil Wang has also open-sourced his own implementation of the algorithm.
LAION, the Large-scale Artificial Intelligence Open Network, is a non-profit machine learning research organization dedicated to making AI models, datasets, and code available to the public. In 2022, InfoQ covered LAION’s release of LAION-5B, an AI training dataset containing over five billion image-text pairs. LAION’s latest project is OpenAssistant, which is intended to “give everyone access to a great chat based large language model.” The planned MVP implementation of OpenAssistant will be based on OpenAI’s InstructGPT paper: a dataset of human-generated instructions, a dataset of machine-generated responses and their human rankings, and an implementation of RLHF. According to LAION:
We are not going to stop at replicating ChatGPT. We want to build the assistant of the future, able to not only write email and cover letters, but do meaningful work, use APIs, dynamically research information, and much more, with the ability to be personalized and extended by anyone. And we want to do this in a way that is open and accessible, which means we must not only build a great assistant, but also make it small and efficient enough to run on consumer hardware.
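The InstructGPT recipe LAION plans to follow has three ingredients: human-written instructions for supervised fine-tuning, human rankings of machine-generated responses for training a reward model, and RLHF itself. To illustrate the middle step, here is a minimal, pure-Python sketch of fitting a reward model from pairwise human preferences with the pairwise logistic (Bradley-Terry) loss. The linear model, feature vectors, and preference data are all invented for illustration; in practice the reward model is a large transformer, not a linear scorer.

```python
import math

# Toy reward model: a linear score over a fixed feature vector.
# This only illustrates the pairwise-ranking objective used to fit
# a reward model from human preference data.

def score(weights, features):
    """Scalar reward for one response, represented by a feature vector."""
    return sum(w * f for w, f in zip(weights, features))

def train_reward_model(pairs, dim, lr=0.1, epochs=200):
    """Fit weights so that score(preferred) > score(rejected) for each
    (preferred, rejected) pair.  Loss: -log sigmoid(s_p - s_r)."""
    w = [0.0] * dim
    for _ in range(epochs):
        for preferred, rejected in pairs:
            margin = score(w, preferred) - score(w, rejected)
            # gradient of -log(sigmoid(margin)) with respect to margin
            grad = -(1.0 - 1.0 / (1.0 + math.exp(-margin)))
            for i in range(dim):
                w[i] -= lr * grad * (preferred[i] - rejected[i])
    return w

# Invented preference data: responses are 2-d feature vectors, and the
# (simulated) human annotator prefers responses with a larger first feature.
pairs = [([1.0, 0.2], [0.1, 0.9]),
         ([0.8, 0.5], [0.3, 0.5]),
         ([0.9, 0.1], [0.2, 0.8])]
w = train_reward_model(pairs, dim=2)
assert score(w, [1.0, 0.0]) > score(w, [0.0, 1.0])
```

Once fitted, such a reward model scores candidate responses so that the RLHF stage can optimize the language model against it.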
CarperAI is a new lab within the EleutherAI research group, tasked with “improving the performance and safety of large language models (LLMs) with reinforcement learning.” InfoQ previously covered EleutherAI’s development of open-source language model GPT-NeoX. In October 2022, the lab announced a project to train and publicly release “instruction-tuned” models using RLHF. The project is a cooperative effort of several organizations, including HuggingFace, Scale, and Humanloop. As part of this project, CarperAI open-sourced Transformer Reinforcement Learning X (trlX), a framework for fine-tuning HuggingFace language models using RLHF.
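RLHF frameworks such as trlX typically fine-tune the policy with PPO, where the quantity being maximized is the reward-model score minus a KL penalty that keeps the fine-tuned model close to the original frozen model. The sketch below shows that per-sequence objective in pure Python; the function name, coefficient, and log-probability values are invented for illustration and are not trlX's API.

```python
def kl_penalized_reward(reward_model_score, policy_logprobs, ref_logprobs, beta=0.1):
    """Per-sequence RLHF objective: reward-model score minus a KL penalty.

    policy_logprobs / ref_logprobs are the log-probabilities that the
    fine-tuned and the original (frozen) models assign to each generated
    token.  The penalty discourages the policy from drifting far from the
    reference model, which helps avoid reward hacking and degenerate text.
    """
    kl = sum(p - r for p, r in zip(policy_logprobs, ref_logprobs))
    return reward_model_score - beta * kl

# If the fine-tuned policy assigns the same token probabilities as the
# reference model, the penalty is zero and the reward is unchanged.
same = kl_penalized_reward(1.0, [-1.2, -0.5], [-1.2, -0.5])
# A policy that has drifted toward its own outputs pays a penalty.
drifted = kl_penalized_reward(1.0, [-0.2, -0.1], [-1.2, -0.5])
assert same == 1.0
assert drifted < same
```

The coefficient `beta` trades off raw reward against staying close to the reference model; tuning it is one of the practical difficulties of RLHF training.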
Phil Wang, an AI developer known for open-source implementations of deep learning research models such as Imagen and Make-A-Video, shared PaLM + RLHF, his work-in-progress implementation of RLHF for the PaLM language model. Wang notes that there is no pre-trained model, only a framework for users to train their own. He also recommends that users interested in replicating ChatGPT join the LAION Discord channel.
Although these open-source projects include implementations of ChatGPT's training methods, none of them currently offers a trained model. Wang's project FAQ suggests that training might require "millions of dollars of compute + data" to complete. LAION's roadmap for OpenAssistant lists efforts to collect data and train models, but does not specify when trained models might be released. CarperAI's Twitter account noted:
We haven’t released any RLHF models yet officially, just a few small replication efforts of hh-RLHF, learning to summarize, etc in our discord. We can match performance reported in respective papers on these.
Several prominent members of the AI community have discussed these efforts on social media. On Twitter, HuggingFace CTO Julien Chaumond predicted that in six months there will be “10 open reproductions of ChatGPT.” AI researcher Sebastian Raschka replied:
Agreed, there will be many open source implementations of ChatGPT. But there won’t be many high-quality models. I think we underestimate how much people hate labeling (or worse: writing) training data by hand.
StabilityAI’s founder Emad Mostaque tweeted that his company is “working on open chatGPT.” He also said:
Toughest part of open chatGPT creation (aside from millions of bucks for RL bit) is the governance aspect…The nice thing is once all the blood sweat and tears go into creating the models and frameworks they can proliferate like crazy as a new type of dev primitive.