Tutorial Overview

The next step in our tutorial is preparing our LLM calls.

Using the LLM class

Trellis comes with a pre-built LLM tool that already handles rate limits and errors, so we’ll use that. Trellis currently only supports OpenAI, so if you want to use a different provider, you’ll have to extend the Node class and write your own tool for it (see the hypothetical sketch below). Each Trellis LLM node is effectively one call to the OpenAI API, so we’ll need two LLM nodes for our DAG.
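
If you do go the custom route, a node wrapping another provider might look roughly like the sketch below. To be clear, this is hypothetical: the Node constructor and the execute hook are assumptions about Trellis’s extension API (check the Node reference for the real interface); only the Anthropic SDK calls are real.

# Hypothetical sketch only: the Node constructor and the execute() hook
# are assumptions about Trellis's extension API, not documented behavior.
import anthropic

from trellis_dag import Node


class AnthropicLLM(Node):
    def __init__(self, name: str, messages: list[dict]):
        super().__init__(name)  # assumption: Node is initialized with a name
        self.messages = messages

    async def execute(self) -> dict:
        # Real Anthropic SDK call; the node plumbing around it is the assumption
        client = anthropic.AsyncAnthropic()
        response = await client.messages.create(
            model="claude-3-haiku-20240307",
            max_tokens=256,
            messages=self.messages,
        )
        return {"content": response.content[0].text}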

Imports

First, we’ll import the LLM class from the trellis package.

example_dag.py
from trellis_dag import LLM

Initializing the LLM generating the cat fact

Next, we’ll initialize the LLM that’s generating the cat fact.

example_dag.py
generate_cat_fact_llm_msgs = [
    {
        "role": "user",
        "content": "Tell me a random cat fact, as a sentence.",
    }
]
generate_cat_fact_llm = LLM(
    "generate_cat_fact_llm", messages=generate_cat_fact_llm_msgs
)

The LLM class only requires a name to be initialized. messages is also essential, but Trellis lets you set it either through the constructor or with set_messages. Here we’ll use the constructor, and we’ll use set_messages for the next LLM call. Since our prompt doesn’t take any variable input, we can leave the input schema input_s blank. Aside from stream, you can pass any other argument you’d expect from the OpenAI API spec for chat completions.
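
For instance, if you wanted more varied facts, you could pass standard chat completion parameters alongside messages. The parameter names below (model, temperature) come straight from the OpenAI spec, as described above; the node name and values are just an illustration.

# Illustrative only: forwarding standard OpenAI chat completion
# arguments through the LLM constructor, per the spec described above.
creative_cat_fact_llm = LLM(
    "creative_cat_fact_llm",
    messages=generate_cat_fact_llm_msgs,
    model="gpt-4",
    temperature=1.2,  # a higher temperature yields more varied facts
)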

Initializing the LLM judging the cat fact

Now, we’ll initialize the last Node needed for our DAG, the LLM that’s judging the cat fact.

example_dag.py
distinguish_cat_fact_llm_msgs = [
    {
        "role": "user",
        "content": "Which of these was generated by an LLM? 1. {cat_fact_1} 2. {cat_fact_2} Give your answer as 1 or 2.",
    }
]
distinguish_cat_fact_llm = LLM("distinguish_cat_fact_llm")
distinguish_cat_fact_llm.set_messages(distinguish_cat_fact_llm_msgs)

This time we’ll set the messages with set_messages. In the messages, {cat_fact_1} and {cat_fact_2} are placeholders that reference the outputs of the previous Nodes; they’ll be filled in when we connect the nodes together through edges in the next section.
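
Conceptually, the substitution behaves like Python’s str.format: once the upstream outputs arrive over the edges, the placeholders resolve to concrete text. The snippet below only illustrates that idea with made-up values; it isn’t Trellis’s actual templating code.

# Illustration of placeholder filling -- not Trellis internals.
template = distinguish_cat_fact_llm_msgs[0]["content"]
filled = template.format(
    cat_fact_1="Cats sleep for roughly two-thirds of their lives.",
    cat_fact_2="A group of cats is called a clowder.",
)
print(filled)
# Which of these was generated by an LLM? 1. Cats sleep for roughly
# two-thirds of their lives. 2. A group of cats is called a clowder.
# Give your answer as 1 or 2.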

Putting it all together

That’s it for the LLM code! Visit the LLM reference to learn more. Here’s the full code from this section:

example_dag.py
from trellis_dag import LLM

generate_cat_fact_llm_msgs = [
    {
        "role": "user",
        "content": "Tell me a random cat fact, as a sentence.",
    }
]
generate_cat_fact_llm = LLM(
    "generate_cat_fact_llm", messages=generate_cat_fact_llm_msgs
)

distinguish_cat_fact_llm_msgs = [
    {
        "role": "user",
        "content": "Which of these was generated by an LLM? 1. {cat_fact_1} 2. {cat_fact_2} Give your answer as 1 or 2.",
    }
]
distinguish_cat_fact_llm = LLM("distinguish_cat_fact_llm")
distinguish_cat_fact_llm.set_messages(distinguish_cat_fact_llm_msgs)

Move on to the next section to connect the Nodes together in a DAG.