Your project’s knowledge base is the long-term memory for your LLMs. You can populate it with any type of document: each document is split into chunks, and each chunk is vectorized into an embedding.
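Conceptually, ingestion looks roughly like the sketch below. This is a minimal illustration, not the platform’s actual implementation: the chunk size, overlap, and the hash-based `toy_embed` function are all placeholder assumptions standing in for a real text splitter and embedding model.

```python
import hashlib
import math

def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks (sizes are illustrative)."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

def toy_embed(chunk: str, dim: int = 8) -> list[float]:
    """Placeholder embedding: hash words into a fixed-size vector and normalize.
    A real pipeline would call an embedding model here instead."""
    vec = [0.0] * dim
    for word in chunk.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

doc = "Your project's knowledge base is the long-term memory for your LLMs. " * 5
# The index stores each chunk alongside its embedding vector.
index = [(chunk, toy_embed(chunk)) for chunk in chunk_text(doc)]
print(len(index))
```

The overlap between consecutive chunks helps preserve context that would otherwise be cut in half at a chunk boundary.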
You can add documents from the knowledge base page, either through file upload or by importing content from a website.
In the app editor, you will find a ‘Retrieve Knowledge’ node in the logic section. Given a text query, it performs a semantic search and fetches the most relevant chunks of text, which you can then use in your LLM prompts as shown below.
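The retrieval step can be sketched as follows. This is a toy illustration of the mechanics only: the bag-of-words `embed` function is an assumption standing in for a real embedding model, which is what makes the platform’s search semantic rather than merely lexical.

```python
import math

def embed(text: str, vocab: list[str]) -> list[float]:
    """Toy bag-of-words vector over a shared vocabulary, normalized to unit length.
    A real pipeline would call a neural embedding model here."""
    words = [w.strip(".,?!") for w in text.lower().split()]
    vec = [float(words.count(term)) for term in vocab]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def retrieve(query: str, chunks: list[str], top_k: int = 5) -> list[str]:
    """Rank stored chunks by cosine similarity to the query; return the best top_k."""
    vocab = sorted({w.strip(".,?!")
                    for text in chunks + [query]
                    for w in text.lower().split()})
    q = embed(query, vocab)

    def score(chunk: str) -> float:
        # Dot product equals cosine similarity because both vectors are unit length.
        return sum(a * b for a, b in zip(q, embed(chunk, vocab)))

    return sorted(chunks, key=score, reverse=True)[:top_k]

chunks = [
    "Refunds are processed within 5 business days.",
    "Our office is open Monday to Friday.",
    "To request a refund, email support with your order number.",
    "The company was founded in 2019.",
]
print(retrieve("How do I get a refund?", chunks, top_k=2))
```

The retrieved chunks would then be interpolated into your LLM prompt as context for answering the query.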
From the 'Retrieve Knowledge' node's configuration, you can also choose the number of chunks to return (the default is 5) and narrow the search down to specific documents.
Please reach out to us through the live chat widget in the bottom-right corner, or feel free to book a call with us. We'd be more than happy to explore your use case and help you get started!