ODI Chat

ODI Labs AI Assistant(s)

Welcome to the ODI Labs AI Assistant(s), developed as part of the SAGE-RAI project in collaboration with the Open University.

Fast deployment of task-focussed AI assistants

The ODI Labs AI Assistant is not a single assistant; it is a service that allows permitted users to create their own AI Assistants for specific tasks.

Whether it is an assistant managed centrally by a company, or one helping an individual carry out a literature study or extract insights from multiple lengthy PDFs, our approach makes it easy to build AI assistants that are task-focussed.

Why task-focussed assistants?

The majority of Generative AI models you may have heard of are general-purpose Large Language Models (LLMs). These are trained on vast swathes of data to help them learn the patterns in language. When you ask one to do a task, it applies a statistical model to your input, matches it against those learned patterns, and generates a response that should fit them. As a result it might make up the answer, a problem known as hallucination.

Task-focussed assistants still make use of general-purpose LLMs, but rather than relying on the general language model exclusively, they first try to retrieve relevant knowledge from a specific knowledge base that matches the user's prompt. This process is known as Retrieval-Augmented Generation (RAG).
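The retrieval step can be sketched in a few lines: score each knowledge-base entry against the prompt, keep the best matches, and prepend them to the prompt as context. The keyword-overlap scoring, sample documents and prompt template below are illustrative assumptions, not the ODI Labs implementation (real systems typically use vector embeddings rather than word overlap).

```python
# Minimal sketch of the RAG pattern: retrieve the knowledge-base
# entries that best match the prompt, then hand them to the LLM
# as context. Scoring and prompt wording are illustrative only.
import re

DOCS = [
    "Solar panels convert sunlight into electricity.",
    "Bread is baked from flour, water and yeast.",
]

def tokens(text):
    """Lower-case word set with punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query, docs, k=2):
    """Return the k documents sharing the most words with the query."""
    return sorted(docs, key=lambda d: len(tokens(query) & tokens(d)), reverse=True)[:k]

def build_prompt(query, docs, k=2):
    """Assemble the augmented prompt that would be sent to the LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Because the answer is grounded in the retrieved context rather than the model's general training data alone, the assistant is far less likely to hallucinate about the task at hand.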

Our approach

AI Assistants that use Retrieval-Augmented Generation are not new; however, they often remain shrouded in as much mystery as the general-purpose LLMs, and can typically only be created by those with advanced technical knowledge. Our objective is to lower the barrier to entry for people creating their own assistants, and to lift the lid on how advanced task-specific AI assistants are built.

We make use of a modular architecture. This means you can build anything from an assistant that runs entirely locally (including the LLM) to one that uses a shared cloud architecture and an LLM from providers such as OpenAI, Anthropic and others. You can find out more about how to deploy our architecture yourself on the documentation page.
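As an illustration of what such modularity can look like, the sketch below treats the LLM as a pluggable component so a locally hosted model and a cloud provider's model are interchangeable. All names and structures here are hypothetical assumptions for illustration, not the actual ODI Labs architecture; see the documentation page for the real deployment details.

```python
# Hypothetical sketch of a modular assistant: the LLM backend and the
# retriever are pluggable components, so swapping local for cloud
# changes nothing else. Names are illustrative, not ODI Labs code.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class AssistantConfig:
    llm: Callable[[str], str]              # turns a prompt into a reply
    retriever: Callable[[str], List[str]]  # fetches relevant documents

def local_llm(prompt: str) -> str:
    # stand-in for a model running entirely locally
    return f"[local model reply to: {prompt}]"

def cloud_llm(prompt: str) -> str:
    # stand-in for a hosted provider such as OpenAI or Anthropic
    return f"[cloud model reply to: {prompt}]"

def make_assistant(config: AssistantConfig):
    """Wire retriever and LLM together behind one ask() function."""
    def ask(question: str) -> str:
        context = "; ".join(config.retriever(question))
        return config.llm(f"Context: {context}\nQuestion: {question}")
    return ask

# Only the config changes between a local and a cloud deployment:
local_assistant = make_assistant(AssistantConfig(local_llm, lambda q: ["notes extract"]))
cloud_assistant = make_assistant(AssistantConfig(cloud_llm, lambda q: ["notes extract"]))
```

The design choice this illustrates is that each component sits behind a narrow interface, so individual pieces can be replaced without rebuilding the rest of the assistant.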