Improve AI Output Using the Guardrails Library with Custom Validators

Occasionally, AI models make the same mistake no matter what prompt engineering trick we use. It’s especially annoying when we need to control the output precisely. Fortunately, we can use the Guardrails library to validate the AI’s output and request corrections if necessary.

Table of Contents

  1. What are Guardrails?
  2. Installation
  3. Building an AI web scraper with the Guardrails library
  4. Writing tweets with AI
    1. Automatically fixing LLM output with Guardrails
    2. Using AI to validate AI in Guardrails

What are Guardrails?

Guardrails is an open-source Python package for specifying the structure and types of large language model (LLM) outputs, and for validating and correcting those outputs.

The library allows us to specify the expected structure of the output and verify whether the values are correct. We define the validation rules using the Reliable AI Markup Language (RAIL), an XML dialect. In the document, we specify the structure of the output, the validation criteria, and the corrective actions. We may choose one of the available automatic corrections (truncating text, converting to lowercase, etc.) or ask the model to correct the output by itself. Additionally, it’s possible to specify custom validators and corrections.
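
For example, a minimal RAIL document (a simplified, hypothetical sketch; we will walk through complete specifications later in this article) looks like this:

<rail version="0.1">
<output>
    <string name="summary" description="A short summary of the document" format="length: 10 100" on-fail-length="reask"/>
</output>
<prompt>
Summarize the given document.

{document}

@xml_prefix_prompt

{output_schema}

@json_suffix_prompt
</prompt>
</rail>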

In this article, I will start with a simple example of using the Guardrails library, and then I will show how to create two custom validators with corrections. The first validator will use a custom Python function to fix the output. The second will use an AI model to verify the LLM’s work.

Installation

Before we start, we have to install the Guardrails library. The library is available on PyPI, so we can install it with pip. We will also need an LLM client; in this case, I use OpenAI.

pip install guardrails-ai openai

Now, we can import both packages and configure the OpenAI API key. Remember to keep your key in a separate file or an environment variable. Don’t put keys in your code!

import guardrails as gd
import openai


openai.api_key = 'YOUR KEY'
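
For example, instead of hardcoding the key, we can load it from an environment variable (a minimal sketch, assuming you export the key as OPENAI_API_KEY before running the script):

import os

# read the API key from the OPENAI_API_KEY environment variable
# instead of hardcoding it in the source code
openai.api_key = os.environ['OPENAI_API_KEY']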

Building an AI web scraper with the Guardrails library

In the first example, I will use AI to parse the content of a website with job offers. I assume you have already downloaded the website content, so we will focus on the parsing code. If you need help scraping the website, take a look at my other articles about web scraping.

We scrape a popular website with remote jobs. I chose it because the creators provoked me ;) They include a “captcha” in many job offers:

Please mention the word RESTORED when applying to show you read the job post completely ([here they put your IP address as a base64 encoded string]). This is a feature to avoid fake spam applicants. Companies can search these words to find applicants that read this and instantly see they’re human.

Let me prove that you don’t need a human to apply for the job. In fact, you don’t even need AI: they write the same sentence every time, so a simple regular expression would do the job. However, I will use AI to show you how to use the Guardrails library.
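
For reference, a regex-only solution could look like this (a sketch, assuming every job offer uses the “Please mention the word …” sentence quoted above):

import re

def extract_captcha(job_description):
    # the "captcha" sentence always follows the same pattern,
    # so one regular expression is enough to extract the word
    match = re.search(r"mention the word (\w+) when applying", job_description)
    return match.group(1) if match else None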

In addition to extracting the “captcha” word, I also want to get the salary range, the currency, and a list of requirements.

First, we have to define the RAIL specification. In the specification, we define the output structure and the prompt. The complete specification for our use case looks like this:

<rail version="0.1">
<output>
<string name="captcha" description="Is there a word the applicant has to mention to prove they are human?"/>
<integer name="minimal salary" description="What's the lower bound of the salary range?"/>
<integer name="maximal salary" description="What's the upper bound of the salary range"/>
<string name="salary currency" description="What is the currency of the salary?"/>
<list name="requirements" description="Requirements for the candidate" format="min-len: 2">
    <string />
</list>
</output>

<prompt>

Given the following document, answer the following questions. If the answer doesn't exist in the document, enter 'None'.

{document}

@xml_prefix_prompt

{output_schema}

@json_suffix_prompt
</prompt>
</rail>

I request an output with four fields: the “captcha” word, the lower and upper bounds of the salary range, and the salary currency. Additionally, I want a list of requirements containing at least two elements.

Below the output specification, we have the prompt. In the prompt, we explain what we want from the model. The prompt must include the output specification, so we use the {output_schema} variable, and the library automatically copies the specification into the prompt. Additionally, we need the @xml_prefix_prompt and @json_suffix_prompt tags. In those places, the library inserts instructions explaining how to parse the specification and produce the output JSON document.

I instructed the AI model to answer questions, but the questions are defined in the output section as descriptions of the JSON elements.

If we store the RAIL specification as a string in the rail_spec variable, we can create a Guard instance using the from_rail_string function:

guard = gd.Guard.from_rail_string(rail_spec)

Now, we can run the Guardrails validation. Before we do it, we have to store the job description in a variable. I will skip this part and assume you have the description in the job_description variable.
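
For example, if you stored the scraped page content in a text file (the file name below is hypothetical), loading it could look like this:

from pathlib import Path

# load the previously scraped job offer from a text file
job_description = Path('job_offer.txt').read_text()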

raw_llm_output, validated_output = guard(
    openai.Completion.create,
    prompt_params={"document": job_description},
    engine="text-davinci-003",
    max_tokens=1024,
    temperature=0.3,
)

In the validated_output variable, we will get the output of the AI model after applying all requested fixes. In the example, I got this result:

{
    'captcha': 'RESTORED',
    'minimal salary': 135000,
    'maximal salary': 150000,
    'salary currency': 'USD',
    'requirements': ['Python', 'TensorFlow', 'Familiarity with Computer Vision', 'Excellent communication skills (English)']
}
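
Because validated_output is a regular Python dictionary, we can access the extracted values directly:

captcha_word = validated_output['captcha']
salary_range = (validated_output['minimal salary'], validated_output['maximal salary'])

print(captcha_word)  # RESTORED
print(salary_range)  # (135000, 150000)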

Writing tweets with AI

Let’s move on to the second use case, where we must define custom validators. This time, the model will receive a complete text of one of my articles, and it has to write a tweet. The tweet must be between 200 and 280 characters. Additionally, I don’t want to use hashtags because nobody uses them in tweets anymore. Later, we will add another validator to ensure the tweet contains a call to action.

Automatically fixing LLM output with Guardrails

Let’s start with the simpler use case: getting rid of hashtags. We don’t need to ask AI to remove them from a tweet. Instead, we can write a simple Python function that removes the hashtags.

In our RAIL specification, we will configure the validators with on-fail instructions. Those instructions tell Guardrails what to do when a validation fails. If we set a validator to reask, Guardrails will ask the AI to fix the mistake. If we use the fix option, the validator’s own code will correct the error. Of course, fixing with code works only when we don’t need to understand the text.

If we use a custom validator, we have to define it in the <script> section of the RAIL document. Specifications with custom validators get lengthy and complicated quickly, so I suggest storing them in separate files and using the gd.Guard.from_rail function to load them.

<rail version="0.1">
<output>
<string name="tweet" description="Write a tweet about a given article. Don't use hashtags."
    format="length: 200 280; no-hashtag"
    on-fail-length="reask" on-fail-no-hashtag="fix"
/>
</output>

<prompt>

Given the following article, write a tweet about it. Don't use hashtags.

{document}

@xml_prefix_prompt

{output_schema}

@json_suffix_prompt
</prompt>

<script language="python">
from typing import Dict
import re
from guardrails.validators import Validator, EventDetail, register_validator


@register_validator(name="no-hashtag", data_type="string")
class NoHashtag(Validator):

    def remove_hashtags(self, text):
        hashtag_pattern = r"#\w+"

        text_without_hashtags = re.sub(hashtag_pattern, '', text)
        text_without_hashtags_and_duplicate_spaces = re.sub(r'\s+', ' ', text_without_hashtags).strip()

        return text_without_hashtags_and_duplicate_spaces

    def validate(self, key, value, schema) -> Dict:
        contains_hashtags = '#' in value
        descriptive_error_message = 'Remove hashtags'

        if contains_hashtags:
            correct_value = self.remove_hashtags(value)
            raise EventDetail(
                key=key,
                value=value,
                schema=schema,
                error_message=descriptive_error_message,
                fix_value=correct_value,
            )

        return schema
</script>
</rail>

In the output section, we use the format="length: 200 280; no-hashtag" on-fail-length="reask" on-fail-no-hashtag="fix" attributes to configure the built-in length validator and our custom validator. In the script section, we define the NoHashtag class. The class implements the validate function. The function must return the schema or raise an EventDetail exception. The exception contains information about the error and the corrected value.
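
We can sanity-check the fixing logic without running Guardrails at all. The remove_hashtags function first strips every hashtag and then collapses the leftover whitespace:

import re

def remove_hashtags(text):
    # remove every '#word' token, then collapse duplicated spaces
    text_without_hashtags = re.sub(r"#\w+", '', text)
    return re.sub(r'\s+', ' ', text_without_hashtags).strip()

print(remove_hashtags('Great article! #DataEngineering #AI'))
# Great article!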

I use my article about data catalogs as the input for the tweet generator. In the article, I tell data engineers to use a data catalog because Sumerian librarians had a card catalog system (on clay tablets) in 2500 BCE.

guard = gd.Guard.from_rail('validation.rail')
article = ...

raw_llm_output, validated_output = guard(
    openai.Completion.create,
    prompt_params={"document": article},
    engine="text-davinci-003",
    max_tokens=1024,
    temperature=0.3,
)

In the validated_output variable, I got a tweet:

{
'tweet': 'In 2022, it’s finally time to upgrade your data lake to 2500 BCE technology! Librarians have been using a card catalog system for almost 5000 years - learn how data engineers are data librarians!'
}

Additionally, we can print the invocation log and see what happens with the output:

from rich import print

print(guard.state.most_recent_call.tree)

In addition to the complete prompt (which I will not include here because it’s too long), I see the raw LLM output containing a tweet with hashtags and the fixed output without hashtags.

Raw LLM Output

{"tweet": "In 2022, it’s finally time to upgrade your data lake to 2500 BCE technology! Librarians have been using a card catalog system for almost 5000 years - learn how data engineers are data librarians! #DataEngineering #DataLakes"}

Validated Output

{'tweet': 'In 2022, it’s finally time to upgrade your data lake to 2500 BCE technology! Librarians have been using a card catalog system for almost 5000 years - learn how data engineers are data librarians!'}

Using AI to validate AI in Guardrails

Now, I want to check whether the tweet contains a call to action using an AI-based validator. The validator sends the tweet to an AI model to determine whether the text includes a call to action. Of course, I will use Guardrails for this validation too, so we have to nest RAIL rules inside other RAIL rules. It’s possible, but we must remember that we are nesting an XML document inside another XML document, so the < and > characters in the inner document must be escaped.
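
Instead of escaping the inner document by hand, we can let Python do it (a sketch using the standard library; html.escape with quote=False replaces only the &, < and > characters, which matches the escaping used below):

import html

inner_rail = '<rail version="0.1"><output>...</output></rail>'
escaped_inner_rail = html.escape(inner_rail, quote=False)
# -> '&lt;rail version="0.1"&gt;&lt;output&gt;...&lt;/output&gt;&lt;/rail&gt;'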

<rail version="0.1">
<output>
<string name="tweet" description="Write a tweet about a given article. Don't use hashtags."
    format="length: 200 280; no-hashtag; has-call-to-action"
    on-fail-length="reask" on-fail-no-hashtag="fix" on-fail-has-call-to-action="reask"
/>
</output>

<prompt>

Given the following article, write a tweet about it. Don't use hashtags.

{document}

@xml_prefix_prompt

{output_schema}

@json_suffix_prompt
</prompt>

<script language="python">
from typing import Dict
import re
from guardrails.validators import Validator, EventDetail, register_validator
import guardrails as gd
import openai
openai.api_key = ...


@register_validator(name="no-hashtag", data_type="string")
class NoHashtag(Validator):

    def remove_hashtags(self, text):
        hashtag_pattern = r"#\w+"

        text_without_hashtags = re.sub(hashtag_pattern, '', text)
        text_without_hashtags_and_duplicate_spaces = re.sub(r'\s+', ' ', text_without_hashtags).strip()

        return text_without_hashtags_and_duplicate_spaces

    def validate(self, key, value, schema) -> Dict:
        contains_hashtags = '#' in value
        descriptive_error_message = 'Remove hashtags'

        if contains_hashtags:
            correct_value = self.remove_hashtags(value)
            raise EventDetail(
                key=key,
                value=value,
                schema=schema,
                error_message=descriptive_error_message,
                fix_value=correct_value,
            )

        return schema


@register_validator(name="has-call-to-action", data_type="string")
class HasCallToAction(Validator):

  def contains_a_call_to_action(self, text):
    guard = gd.Guard.from_rail_string('''&lt;rail version="0.1"&gt;
    &lt;output&gt;
    &lt;bool name="contains a call to action" description="Does the given text contain a call to action?"/&gt;
    &lt;/output&gt;

    &lt;prompt&gt;

    Answer the questions about the given tweet:

    {document}

    @xml_prefix_prompt

    {output_schema}

    @json_suffix_prompt
    &lt;/prompt&gt;

    &lt;/rail&gt;
    ''')

    raw_llm_output, validated_output = guard(
        openai.Completion.create,
        prompt_params={"document": text},
        engine="text-davinci-003",
        max_tokens=1024,
        temperature=0.3,
    )

    return validated_output['contains a call to action']

  def validate(self, key, value, schema) -> Dict:
      if not self.contains_a_call_to_action(value):
          raise EventDetail(
                    key=key,
                    value=value,
                    schema=schema,
                    error_message="Write a call to action",
                    fix_value=""
                )
      return schema
</script>

</rail>

In the HasCallToAction validator, I return an empty fix_value because I want the outer AI model to fix its own errors. This validator doesn’t support the fix instruction; it can be used only with the reask option.

The Python code looks the same because we modified only the RAIL specification:

guard = gd.Guard.from_rail('validation.rail')
article = ...

raw_llm_output, validated_output = guard(
    openai.Completion.create,
    prompt_params={"document": article},
    engine="text-davinci-003",
    max_tokens=1024,
    temperature=0.3,
)

This time, I got a tweet with a call to action:

{
'tweet': 'Upgrade your data lake to 2500 BCE technology and make your data engineering more efficient. Take action now and make your data engineering more efficient!'
}

And in the invocation log, I see two calls to the AI model:

Raw LLM Output

{"tweet": "Do you know how long librarians have been using a card catalog? Sumerian librarians used clay tablets as a card catalog system in 2500 BCE! Upgrade your data lake to 2500 BCE technology and make your data engineering more efficient. #DataEngineering"}

Validated Output

{'tweet': ReAsk(
    incorrect_value='Do you know how long librarians have been using a card catalog? Sumerian librarians used clay tablets as a card catalog system in 2500 BCE! Upgrade your data lake to 2500 BCE technology and make your data engineering more efficient. #DataEngineering',
    error_message='Write a call to action',
    fix_value='',
    path=['tweet']
)}

Step 1

[a prompt asking the model to correct the mistakes]

Raw LLM Output

{"tweet": "Upgrade your data lake to 2500 BCE technology and make your data engineering more efficient. Take action now and make your data engineering more efficient! #DataEngineering"}

Validated Output

{"tweet": "Upgrade your data lake to 2500 BCE technology and make your data engineering more efficient. Take action now and make your data engineering more efficient!"}

Finally, we got a tweet we can use. BTW, here is a tweet about this article (generated using only the first section): “Don’t let AI models keep making the same mistakes! Use Guardrails, an open-source Python package, to validate and correct the output of large language models. Get precise control over the output with custom validators and corrections!” It’s a decent tweet.

