Guide Book

Building Apps

The Build Editor Interface

Every app consists of Inputs, Logic, and Output. Execution starts from the Inputs on the left and flows through the Logic to the Outputs. You can start adding nodes from the ‘Add’ button next to each of the three section titles.

💡 Keep in mind that the horizontal ordering of logic nodes dictates their execution order; logic nodes placed in the same column run in parallel.

In this example the ‘Scrape Page’ functions run in parallel, and once both of them finish executing, the OpenAI GPT function starts running.

Planning your App

The Moonlit framework makes it simple to plan out your app. For our example of building a cover letter generator, we can start by deciding on our inputs and outputs. We want two inputs: the first is a file upload for the user’s CV, and the second is the job description. For the output we just need a text output. (Quick note: the Text output parses Markdown syntax as well.) Lastly, for our logic we will just use an OpenAI text model, so the app should look like this:

Configuring Nodes

Each node here has its own set of options. Notice the red warning indicator on the configuration button for each node; this tells us that the node is missing configuration we need to set up. So let’s configure our Logic node: either double-click the node or click the gear button:

There are a few things to unpack here:

We wrote the prompt template for the ChatGPT model. Each logic node has at least one ‘Dynamic Field’, indicated by the bolt button next to the field name. While this is active, the field can reference the output of preceding nodes. So in our example we referenced our two input fields.

Testing our App

After configuring our nodes we can test our app; here we used a sample CV and a job description from Google’s job openings. If anything goes wrong during execution, you’ll be able to see which node caused the issue and the traceback of the error.

Extending Your Apps

This is obviously a very simple app, but it can be extended with more capabilities to improve the results significantly. From here you can:

  • Use the knowledge base to upload relevant documents such as certificates or your portfolio.
  • Add more input options such as Word Count, Example Cover Letter, Tone and Style, etc.
  • Test different prompts
  • Fine-tune the used model

Techniques & Best Practices

LLM Chaining

LLM Chaining is a technique that links multiple LLMs together to produce more complex outputs. This is done by using the "Include Message History" option, so the output of one LLM is passed as input to the next. You can chain any number of LLMs, but it’s recommended to keep it to 2–3 for best results. For example, we can have a chain where the first LLM is asked to create an outline of a blog and the second LLM is asked to follow that outline to write the blog.
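Conceptually, the chain above works like this. The sketch below is a hypothetical illustration in Python, not Moonlit’s internals: `call_llm` is a stand-in stub for whatever chat-completion API you use, and the key idea is that the second call receives the full message history, including the first LLM’s output.

```python
def call_llm(messages):
    """Placeholder for a real LLM API call (stubbed for illustration)."""
    last = messages[-1]["content"]
    if last.startswith("Create an outline"):
        return "1. Intro\n2. Body\n3. Conclusion"
    return "Draft blog post that follows the given outline."

def chain_blog(topic):
    # Step 1: first LLM produces an outline.
    history = [{"role": "user",
                "content": f"Create an outline for a blog about {topic}."}]
    outline = call_llm(history)

    # Step 2: include the message history so the second LLM sees the
    # outline, then ask it to write the full post.
    history += [
        {"role": "assistant", "content": outline},
        {"role": "user", "content": "Follow this outline to write the blog."},
    ]
    return call_llm(history)
```

Appending the outline as an `assistant` message mirrors what the ‘Include Message History’ checkbox does for you: the downstream node sees the upstream node’s output as prior conversation.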

💡 For LLM Chains you can tick the ‘Include Message History’ checkbox to pass the entire message history to the next LLM Node

Few-Shot Prompting

Few-Shot Prompting is a technique that gives the model a few examples of what you want it to do. This is done by using the "Message History" option and passing the examples as a list of messages. You can provide any number of examples, but it’s recommended to keep it to 1–3 for best results.

To take a very simple example, let's say you want it to act as a calculator and only respond with a number:

[
  { "role": "user", "content": "What's two plus two?" },
  { "role": "assistant", "content": "4" },
  { "role": "user", "content": "What's 5 times itself?" },
  { "role": "assistant", "content": "25" }
]

Without the examples, the model would likely respond with a sentence and an explanation. Given the examples, it will understand that you want it to respond with just a number.
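If you were assembling this message history in code rather than in the Moonlit UI, it would be a plain list in the format OpenAI-style chat APIs expect; this hypothetical helper just prepends the few-shot examples before the real question.

```python
# Few-shot examples: paired user/assistant turns showing the desired format.
FEW_SHOT = [
    {"role": "user", "content": "What's two plus two?"},
    {"role": "assistant", "content": "4"},
    {"role": "user", "content": "What's 5 times itself?"},
    {"role": "assistant", "content": "25"},
]

def build_messages(question):
    # Examples go first so the model infers the answer format,
    # then the actual question is appended as the final user turn.
    return FEW_SHOT + [{"role": "user", "content": question}]
```

The ordering matters: the model reads the history top to bottom, so the examples must precede the question they are meant to shape.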

Need more Help?

Please reach out to us through the live chat widget in your project dashboard, or feel free to book a call with us; we're excited to explore your unique business requirements!