Azure Pipelines

The Okareo Azure Pipelines integration allows you to run evaluations, generate synthetic scenarios, or validate models directly inside your CI/CD flows using the Okareo Python or TypeScript SDKs. This guide assumes general knowledge of Azure DevOps and permission to create and manage pipelines.

To use the SDKs, install the appropriate packages in your pipeline and provide your Okareo API Token as a secure environment variable.

tip

The SDK requires an API Token. Refer to the Okareo API Token guide for more information.
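Because a missing or empty API token typically surfaces as a confusing authentication error later in the run, it can help to fail the pipeline step fast with a clear message. The sketch below is an optional pattern, not part of the Okareo SDK; the function name is illustrative.

```python
import os
import sys


def require_env(name: str) -> str:
    """Return the value of a required environment variable.

    Exits nonzero if the variable is unset or empty, so the Azure
    Pipelines step fails immediately with an actionable message.
    """
    value = os.environ.get(name)
    if not value:
        print(f"ERROR: {name} is not set; configure it as a secret pipeline variable.")
        sys.exit(1)
    return value


if __name__ == "__main__":
    api_key = require_env("OKAREO_API_KEY")
    print("OKAREO_API_KEY found")
```

Calling this at the top of your evaluation script makes the failure mode obvious in the pipeline console output.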

Azure Pipeline

Usage

The following examples show how to integrate Okareo into your azure-pipelines.yml file using either Python or TypeScript. You will need to configure your own self-hosted agent or ensure that your project has Microsoft-hosted parallelism enabled.

YAML Configuration

# azure-pipelines.yml
trigger:
- main

pool:
  name: Self-Hosted

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.x'
    addToPath: true

- script: |
    python3 -m pip install --upgrade pip
    pip install okareo
  displayName: 'Install Okareo Python SDK'

- script: |
    python3 example.py
  env:
    OKAREO_API_KEY: $(OKAREO_API_KEY)
  displayName: 'Run example.py with Env Var'

Example Automated Evaluation

# example.py
import os

from okareo import Okareo
from okareo.model_under_test import OpenAIModel
from okareo_api_client.models.seed_data import SeedData
from okareo_api_client.models.test_run_type import TestRunType
from okareo_api_client.models.scenario_set_create import ScenarioSetCreate

# Set your Okareo API key
OKAREO_API_KEY = os.environ.get("OKAREO_API_KEY")
okareo = Okareo(OKAREO_API_KEY)

# Create a scenario for the evaluation
scenario = okareo.create_scenario_set(ScenarioSetCreate(
    name="Azure Scenario Example",
    seed_data=[
        SeedData(
            input_="What is the capital of France?",
            result="Paris",
        ),
        SeedData(
            input_="What is one-hundred and fifty times 3?",
            result="450",
        ),
    ],
))

# Register the model to use in the test run
model_under_test = okareo.register_model(
    name="Azure Model Example",
    model=OpenAIModel(
        model_id="gpt-4o-mini",
        temperature=0,
        system_prompt_template="Always return numeric answers backwards. e.g. 1234 becomes 4321.",
        user_prompt_template="{scenario_input}",
    ),
    update=True,
)

# Run the evaluation
evaluation = model_under_test.run_test(
    name="Azure Evaluation Example",
    scenario=scenario,
    test_run_type=TestRunType.NL_GENERATION,
    api_key=os.environ.get("OPENAI_API_KEY"),
    checks=[
        "context_consistency",
    ],
)

# Output the results link
print(f"See results in Okareo: {evaluation.app_link}")

Azure Python Job
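The example above prints a results link but always exits zero, so the pipeline step passes regardless of evaluation quality. If you want the build to fail when check scores fall below a bar, one generic pattern is to gate on the scores and exit nonzero. How you retrieve scores depends on your Okareo SDK version, so this helper operates on a plain dict of check scores; the function name and default threshold are assumptions for illustration.

```python
import sys


def gate(scores: dict[str, float], threshold: float = 0.8) -> bool:
    """Return True if every check score meets the threshold.

    Prints the failing checks so they appear in the pipeline console log.
    """
    failing = {name: score for name, score in scores.items() if score < threshold}
    if failing:
        print(f"Failing checks (below {threshold}): {failing}")
    return not failing


if __name__ == "__main__":
    # Example: scores gathered from your evaluation results.
    example_scores = {"context_consistency": 0.91}
    if not gate(example_scores):
        sys.exit(1)  # nonzero exit marks the Azure Pipelines step as failed
```

Exiting nonzero is all Azure Pipelines needs to mark the script step, and therefore the run, as failed.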

Viewing Results in Okareo

After execution, results will be available in the Okareo platform:

  • Scenarios:
    https://app.okareo.com/project/<project UUID>/scenario/<scenario UUID>

  • Evaluations:
    https://app.okareo.com/project/<project UUID>/eval/<evaluation UUID>

Links will be shown in your pipeline’s console output when using the SDK or CLI.
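If you want to compose these links yourself, for example to publish them in a pipeline summary, the URL patterns above can be built from the project and resource UUIDs. A minimal sketch (function names are illustrative; the path structure follows the patterns listed above):

```python
BASE = "https://app.okareo.com"


def scenario_url(project_uuid: str, scenario_uuid: str) -> str:
    """Build a link to a scenario in the Okareo app."""
    return f"{BASE}/project/{project_uuid}/scenario/{scenario_uuid}"


def eval_url(project_uuid: str, eval_uuid: str) -> str:
    """Build a link to an evaluation in the Okareo app."""
    return f"{BASE}/project/{project_uuid}/eval/{eval_uuid}"
```

In practice, `evaluation.app_link` (as printed in the example above) already gives you the evaluation URL directly.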