Welcome to the ODI Labs AI Assistants, developed as part of the SAGE-RAI project in collaboration with the Open University.
The ODI Labs AI Assistant is not a single assistant: it is a service that allows permitted users to create their own AI Assistants for specific tasks, including (but not limited to):
The majority of Generative AI models you may have heard of are general purpose Large Language Models (LLMs). These are trained on vast swathes of data to help them understand the patterns in language. When you give them a task, they apply a statistical model to your input to match it against those patterns, and then generate a response that should fit them. As a result, a model might simply make up an answer, a problem known as hallucination.
Task-focussed assistants still make use of general purpose LLMs, but rather than relying on the general language model exclusively, they first try to retrieve relevant knowledge from a specific knowledge base that matches the user's prompt. This process is known as Retrieval-augmented Generation (RAG).
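The retrieval step described above can be sketched in miniature. This is a purely illustrative example, not the ODI Labs implementation: the knowledge base, the word-overlap scoring, and the prompt template are all hypothetical stand-ins (a real RAG system would typically use vector embeddings and a proper LLM call).

```python
# A minimal, illustrative sketch of Retrieval-augmented Generation (RAG).
# Everything here is a hypothetical stand-in for illustration only.

KNOWLEDGE_BASE = [
    "The ODI publishes guidance on open data licensing.",
    "Retrieval-augmented Generation grounds LLM answers in a knowledge base.",
    "General purpose LLMs are trained on vast amounts of text.",
]

def retrieve(prompt: str, k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the prompt."""
    words = set(prompt.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Prepend the retrieved context so the LLM answers from it, not from
    its general training data alone."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What is Retrieval-augmented Generation?"))
```

The key idea is that the final prompt sent to the LLM already contains the relevant facts, so the model is grounding its answer in your knowledge base rather than guessing from its general training.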
AI Assistants that use Retrieval-augmented Generation are not new; however, they often remain shrouded in as much mystery as the general purpose LLMs and can usually only be created by those with advanced technical knowledge. Our objective is to lower the barrier to entry for people to create their own assistants, and to lift the lid on how you can build advanced task-specific AI assistants.
We make use of a modular architecture. This means you can build anything from an assistant that runs entirely locally (including the LLM) to an assistant that uses a shared cloud architecture and an LLM from providers such as OpenAI, Anthropic and others. You can find out more about how to deploy our architecture yourself on the documentation page.
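One common way to achieve this kind of modularity is to hide each LLM backend behind a shared interface, so the rest of the assistant does not care whether the model is local or hosted. The class and method names below are illustrative assumptions, not the ODI Labs API:

```python
# Hypothetical sketch of swapping LLM backends behind one interface.
# Names (LLMBackend, LocalModel, CloudModel) are illustrative only.

from abc import ABC, abstractmethod

class LLMBackend(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class LocalModel(LLMBackend):
    """Stand-in for a model running entirely on your own hardware."""
    def complete(self, prompt: str) -> str:
        return f"[local model reply to: {prompt}]"

class CloudModel(LLMBackend):
    """Stand-in for a hosted provider such as OpenAI or Anthropic."""
    def __init__(self, provider: str):
        self.provider = provider
    def complete(self, prompt: str) -> str:
        return f"[{self.provider} reply to: {prompt}]"

def build_assistant(backend: LLMBackend):
    # The assistant depends only on the interface, so backends
    # are interchangeable without touching the rest of the code.
    return lambda question: backend.complete(question)

assistant = build_assistant(LocalModel())
print(assistant("Hello"))
```

Because the assistant is constructed against the abstract interface, moving from a fully local deployment to a cloud provider is a one-line change at construction time.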