Tutorial Overview
The next step in our tutorial is preparing our LLM calls.
Using the LLM class
Trellis comes with a pre-built LLM tool that already handles rate limits and errors, so we’ll be using that. Trellis currently only supports OpenAI, so if you want to use a different provider you’ll have to extend Node and write your own tool for it.
Each Trellis LLM node is effectively one call to the OpenAI API, so we’ll need two LLM nodes for our DAG.
Imports
First, we’ll import the LLM class from the trellis package.
example_dag.py
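A minimal sketch of the import, assuming LLM is exposed at the top level of the trellis package:

```python
# Pull in Trellis's pre-built LLM tool, which handles rate limits and errors.
from trellis import LLM
```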
Initializing the LLM generating the cat fact
Next, we’ll initialize the LLM that’s generating the cat fact.
example_dag.py
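A sketch of what this initialization could look like; the node name, model choice, and prompt wording are illustrative assumptions, not values prescribed by Trellis:

```python
# Sketch: the LLM node that generates a cat fact.
# The name, model, and prompt below are illustrative assumptions.
cat_fact_llm = LLM(
    name="cat_fact_2",  # hypothetical name, assuming the judge's {cat_fact_2} placeholder refers to this node
    messages=[
        {"role": "user", "content": "Tell me one surprising fact about cats."}
    ],
    model="gpt-4o",  # any OpenAI chat completions argument (except stream) can go here
)
# input_s is left blank since the prompt takes no variadic input.
```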
The LLM class only requires a name to be initialized. messages is also very important, but Trellis lets you set it either through the constructor or with set_messages. In this case, we’ll use the constructor, and we’ll use set_messages for the next LLM call.
Since our prompt doesn’t have any variadic input, we can leave the input schema input_s blank.
Other than stream, you can set any of the other arguments you’d expect from the OpenAI chat completions API spec.
Initializing the LLM judging the cat fact
Now, we’ll initialize the last Node needed for our DAG: the LLM that’s judging the cat fact.
example_dag.py
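A sketch of the judging node, again with an assumed name, model, and prompt:

```python
# Sketch: the LLM node that judges the cat facts (name, model, and prompt assumed).
cat_fact_judge = LLM(name="cat_fact_judge", model="gpt-4o")

# {cat_fact_1} and {cat_fact_2} reference the outputs of the previous Nodes;
# they are filled in once the Nodes are connected through edges.
cat_fact_judge.set_messages([
    {
        "role": "user",
        "content": (
            "Here are two cat facts:\n"
            "1. {cat_fact_1}\n"
            "2. {cat_fact_2}\n\n"
            "Which fact is more surprising? Explain briefly."
        ),
    }
])
```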
We’ll use set_messages this time to set the messages. In the messages, we’re using {cat_fact_1} and {cat_fact_2} to reference the outputs of the previous Nodes.
These will be filled in when we connect the nodes together through edges in the next section.
Putting it all together
That’s it for the LLM code! Visit the LLM reference to learn more. Here’s the full code for this tutorial:
example_dag.py
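As a sketch, the pieces above combine into something like this (names, models, and prompts remain illustrative assumptions):

```python
from trellis import LLM

# LLM node that generates a cat fact; messages set through the constructor.
cat_fact_llm = LLM(
    name="cat_fact_2",
    messages=[
        {"role": "user", "content": "Tell me one surprising fact about cats."}
    ],
    model="gpt-4o",
)

# LLM node that judges the cat facts; messages set with set_messages.
cat_fact_judge = LLM(name="cat_fact_judge", model="gpt-4o")
cat_fact_judge.set_messages([
    {
        "role": "user",
        "content": (
            "Here are two cat facts:\n"
            "1. {cat_fact_1}\n"
            "2. {cat_fact_2}\n\n"
            "Which fact is more surprising? Explain briefly."
        ),
    }
])
```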
In the next section, we’ll put these Nodes together in a DAG.
