Hugging Face Presents HuggingChat, Open Source Alternative to ChatGPT

Sergio De Simone

Article originally posted on InfoQ.

HuggingChat is a new AI-powered chatbot available for testing on Hugging Face. HuggingChat can carry out many of the tasks that have made ChatGPT attract so much interest recently, including drafting articles, solving coding problems, and answering questions.

HuggingChat has 30 billion parameters and is, according to Hugging Face, currently the best open-source chat model. The AI startup, however, plans to expose all chat models available on the Hub in the long term.

The goal of this app is to showcase that it is now (April 2023) possible to build an open source alternative to ChatGPT.

As Hugging Face clarifies, HuggingChat is currently based on the latest LLaMA-based model developed by the OpenAssistant project.

OpenAssistant has the rather ambitious goal of going beyond ChatGPT:

We are not going to stop at replicating ChatGPT. We want to build the assistant of the future, able to not only write email and cover letters, but do meaningful work, use APIs, dynamically research information, and much more, with the ability to be personalized and extended by anyone.

An additional goal they have in mind is making this AI-based assistant small and efficient enough to run on consumer hardware.

OpenAssistant itself is managed under LAION, a non-profit organization that provides open datasets, tools, and models to foster machine learning research, including the LAION-5B dataset on which Stable Diffusion is based.

Currently, HuggingChat enforces a strict privacy model, whereby messages are stored only to display them to the user and are not shared even for research or training purposes. Additionally, users are not authenticated nor identified using cookies. This will likely change in the future, though, to let users share their conversations with researchers.

As mentioned, HuggingChat is fully open source. The code for the UI can be found on GitHub, while the inference backend runs text-generation-inference on Hugging Face's Inference API infrastructure.

This means that the app can be deployed to a Hugging Face Space and customized in a number of ways, including swapping models, modifying the UI, changing the policy for storing user messages, and so on.
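Since the backend is served through Hugging Face's Inference API, the underlying model can in principle also be queried directly over HTTP. The snippet below is a minimal sketch rather than HuggingChat's actual client code: the model ID, prompt format, and generation parameters are assumptions based on how OpenAssistant models are typically served, and a personal Hugging Face access token is assumed to be available.

```python
import os
import requests

# Illustrative model ID: OpenAssistant's LLaMA-based 30B chat model on the Hub (assumed).
MODEL_ID = "OpenAssistant/oasst-sft-6-llama-30b"
API_URL = f"https://api-inference.huggingface.co/models/{MODEL_ID}"

# A Hugging Face access token is assumed to be set in the environment.
headers = {"Authorization": f"Bearer {os.environ['HF_API_TOKEN']}"}

# OpenAssistant-style chat prompt (assumed format): the user turn is wrapped
# in special tokens and the assistant turn is left open for the model to complete.
prompt = "<|prompter|>How do I reverse a list in Python?<|endoftext|><|assistant|>"

payload = {
    "inputs": prompt,
    "parameters": {
        "max_new_tokens": 200,   # illustrative generation settings
        "temperature": 0.7,
    },
}

response = requests.post(API_URL, headers=headers, json=payload)
response.raise_for_status()

# The Inference API returns a list of generations for text-generation models.
print(response.json()[0]["generated_text"])
```

Note that this goes through the shared Inference API rather than a dedicated text-generation-inference deployment, so latency and rate limits will differ from what the HuggingChat UI provides.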
