Knowledge Base (RAGs)

Your project’s knowledge base is the long-term memory for your LLMs. You can populate it with any type of document; each document is split into chunks, and each chunk is vectorized into an embedding.
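The ingestion step above can be sketched conceptually: split the text into chunks, then map each chunk to an embedding vector. The chunk size and the word-count "embedding" below are toy choices for illustration only, not Moonlit's actual chunking strategy or embedding model.

```python
from collections import Counter

def split_into_chunks(text, chunk_size=200):
    """Split text into roughly chunk_size-character chunks on word boundaries."""
    chunks, current, length = [], [], 0
    for word in text.split():
        if length + len(word) > chunk_size and current:
            chunks.append(" ".join(current))
            current, length = [], 0
        current.append(word)
        length += len(word) + 1
    if current:
        chunks.append(" ".join(current))
    return chunks

def embed(chunk):
    """Toy embedding: lowercase word counts (a stand-in for a real embedding model)."""
    return Counter(chunk.lower().split())

# Build a tiny in-memory index of (chunk, embedding) pairs.
document = "Moonlit stores your documents as chunks with embeddings. " * 20
index = [(chunk, embed(chunk)) for chunk in split_into_chunks(document)]
```

In a real pipeline the embeddings come from a learned model and are stored in a vector database, but the shape of the process is the same.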

Adding Documents

You can add documents from the knowledge base page, either by uploading files or by importing them from a website.

Using your long term memory in Apps

In the app editor you will find a ‘Retrieve Knowledge’ node in your Logic section. Given a text query, it performs a semantic search and fetches the most relevant chunks of text, which you can then use in your LLM prompts.
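The semantic search behind such a retrieval step can be sketched as: embed the query, score every stored chunk by cosine similarity, and return the best matches. The word-count "embedding" here is a toy stand-in for a real embedding model, and none of these function names are Moonlit's API.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: lowercase word counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, top_k=5):
    """Return the top_k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:top_k]

chunks = [
    "The moon orbits the Earth.",
    "Embeddings map text to vectors.",
    "Paris is the capital of France.",
]
print(retrieve("vector embeddings of text", chunks, top_k=1))
# → ['Embeddings map text to vectors.']
```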

Implementing RAGs in Moonlit

From the 'Retrieve Knowledge' function configuration, you can also select the number of chunks to return (the default is 5) and narrow the search down to specific documents.
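The two options above can be sketched as parameters on a retrieval function: a configurable number of chunks to return, and a filter that restricts the search to specific documents. The parameter names, the word-overlap relevance score, and the index structure are all illustrative assumptions, not Moonlit's actual API.

```python
def retrieve(query, index, num_chunks=5, documents=None):
    """index: list of (document_name, chunk_text) pairs.
    documents: optional set of document names to restrict the search to."""
    candidates = [
        (doc, chunk) for doc, chunk in index
        if documents is None or doc in documents
    ]
    # Toy relevance score: number of lowercase words shared with the query.
    query_words = set(query.lower().split())
    def score(item):
        return len(query_words & set(item[1].lower().split()))
    return sorted(candidates, key=score, reverse=True)[:num_chunks]

index = [
    ("faq.md", "Pricing is based on monthly usage."),
    ("guide.md", "Upload documents on the knowledge base page."),
    ("guide.md", "Each chunk is given an embedding."),
]
# Restrict the search to guide.md and return at most 2 chunks.
results = retrieve("how do I upload documents", index,
                   num_chunks=2, documents={"guide.md"})
```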

Need more Help?

Please reach out to us through the live chat widget in the bottom-right corner, or feel free to book a call with us. We're more than excited to explore and help you with your use case!