OpenChatKit Online

The first open-source ChatGPT alternative.

OpenChatKit provides a powerful, open-source base for creating both specialized and general-purpose chatbots for a variety of applications. We collaborated with LAION and Ontocord to create the training dataset.

This is not just a model release; it is the start of an open-source project. We have released a set of tools and processes for continuous improvement and community contributions.

This is an online playground for OpenChatKit; you are welcome to try it out and give feedback.


An open-source chatbot project for various applications

Instruction-tuned large language model

A 20-billion-parameter model, fine-tuned for chat from EleutherAI’s GPT-NeoX on over 43 million instructions.
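As a minimal illustration of how the chat model is used, its model card describes conversations formatted with `<human>:` and `<bot>:` turn markers, with a trailing `<bot>:` to prompt the reply. A small helper for building such prompts might look like this (the turn labels follow the published model card, but treat the exact format as an assumption to verify against the repository):

```python
def build_chat_prompt(turns):
    """Format (speaker, text) turns into the <human>/<bot> dialogue
    format that GPT-NeoXT-Chat-Base-20B was fine-tuned on.

    The final bare "<bot>:" marker cues the model to generate its reply.
    """
    lines = [f"<{speaker}>: {text}" for speaker, text in turns]
    lines.append("<bot>:")
    return "\n".join(lines)

prompt = build_chat_prompt([
    ("human", "What is a language model?"),
    ("bot", "A model that predicts the next token in text."),
    ("human", "Name one example."),
])
print(prompt)
```

The resulting string is what you would pass to the tokenizer when generating with the model.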

Various natural language tasks

OpenChatKit can handle a variety of natural language tasks, such as dialogue, question answering, classification, extraction, and summarization.

Large dataset

OpenChatKit is trained on the OIG-43M dataset, created by Together, LAION, and Ontocord.

Extensible retrieval system

A system that enables the chatbot to augment responses with information from a document repository, API, or other live-updating source.

Live-updating source

It provides the context for the model to answer questions with up-to-date information.

Sample code

The kit also includes a Wikipedia index and sample code showing how to call a web search API during retrieval.
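The retrieval flow described above can be sketched as follows. This is an illustrative mock, not OpenChatKit’s actual retrieval code: `search_web` is a hypothetical stand-in for whatever live-updating source you wire in (a web search API or the Wikipedia index), and the prompt format mirrors the `<human>`/`<bot>` convention from the model card.

```python
def search_web(query, top_k=2):
    """Hypothetical stand-in for a web search API or document index.

    A real implementation would call out to a live-updating source;
    here we match against a tiny in-memory corpus by shared words.
    """
    corpus = {
        "OpenChatKit": "OpenChatKit is an open-source base for building chatbots.",
        "OIG-43M": "OIG-43M is a dataset of 43 million instructions.",
    }
    words = set(query.lower().split())
    hits = [text for title, text in corpus.items()
            if words & set(text.lower().split())]
    return hits[:top_k]

def augment_prompt(question):
    """Prepend retrieved context so the model can answer with
    information it was not trained on."""
    context = "\n".join(search_web(question))
    return f"Context:\n{context}\n\n<human>: {question}\n<bot>:"

prompt = augment_prompt("What is OpenChatKit?")
print(prompt)
```

The retrieved passages are simply placed ahead of the user’s question, so the model conditions on them when generating its answer.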


Thanks for testing! Feedback helps improve the bot and AI research.

Frequently asked questions

If you can’t find what you’re looking for, email our support team and someone will get back to you.

    • What is OpenChatKit and what does it provide?

      OpenChatKit is an open-source project that provides a powerful base for creating both specialized and general-purpose chatbots for a variety of applications. It consists of four key components: an instruction-tuned large language model, customization recipes to fine-tune the model, an extensible retrieval system to augment the model with live-updating information, and a moderation model to filter inappropriate or out-of-domain questions.

    • Who are the collaborators behind OpenChatKit and the training datasets?

      OpenChatKit is a collaboration between Together, LAION, and Ontocord. Together is a company that provides open-source foundation models for natural language understanding and generation. LAION is a non-profit organization that creates open datasets, tools, and models. Ontocord is a company focused on data curation and legal compliance for machine-learning datasets. Jointly, they created the OIG-43M dataset, a collection of 43 million high-quality instructions for conversational interactions, and the moderation dataset, a collection of inappropriate questions for chatbots.

    • How can I try out OpenChatKit and give feedback?

      You can try out OpenChatKit and give feedback through the OpenChatKit feedback app. You can also join the OpenChatKit community on GitHub, Discord, Twitter, and Medium, and share your ideas, suggestions, and questions.

    • What is the base model of OpenChatKit and how is it fine-tuned?

      The base model of OpenChatKit is GPT-NeoXT-Chat-Base-20B, a 20-billion-parameter large language model based on EleutherAI’s GPT-NeoX. It is fine-tuned on the OIG-43M dataset, with a focus on tasks such as multi-turn dialogue, question answering, classification, extraction, and summarization.

    • How does OpenChatKit perform on different natural language tasks?

      OpenChatKit performs well on a broad set of natural language tasks, especially question answering, extraction, and classification. It also does well with few-shot prompts, leveraging its instruction tuning to adapt to different tasks. However, it still needs improvement in areas such as knowledge-based closed-book question answering, coding tasks, avoiding repetition, context switching, and creative writing or longer answers.

    • How can I cite or reference OpenChatKit or the training datasets in my work?

      You can cite or reference OpenChatKit or the training datasets in your work by using the provided BibTeX entries in the GitHub repository.

    • How does OpenChatKit compare with other large language models or chatbots?

      OpenChatKit compares favorably with other large language models and chatbots in its versatility, customizability, and extensibility. It handles a wide range of natural language tasks with strong performance, and it can be fine-tuned and adapted for specific applications or domains with the provided tools and recipes. It can also incorporate live-updating information from external sources through the extensible retrieval system, and filter inappropriate or out-of-domain questions with the moderation model.

    • What is the license of OpenChatKit and how can I modify or inspect the weights?

      OpenChatKit is licensed under the Apache License 2.0, which allows you to freely use, modify, and distribute the software. You can also inspect the weights of the model using the Hugging Face Transformers library or the Jupyter notebooks provided in the GitHub repository.

    • How can I access the source code, model weights and training datasets of OpenChatKit?

      You can access the source code, model weights, and training datasets of OpenChatKit on GitHub. You can also download the model weights and datasets from Hugging Face.