# Installation

## Install with CLI (recommended)

```bash
gh skills-hub install arize-instrumentation
```

Don't have the extension? Run `gh extension install samueltauil/skills-hub` first.

## Install manually

Download and extract the ZIP to `.github/skills/` in your repository:

```
.github/skills/arize-instrumentation/
```

The folder name must match `arize-instrumentation` for Copilot to auto-discover it.
## Skill files

### `SKILL.md`
---
name: arize-instrumentation
description: "INVOKE THIS SKILL when adding Arize AX tracing to an application. Follow the Agent-Assisted Tracing two-phase flow: analyze the codebase (read-only), then implement instrumentation after user confirmation. When the app uses LLM tool/function calling, add manual CHAIN + TOOL spans so traces show each tool's input and output. Leverages https://arize.com/docs/ax/alyx/tracing-assistant and https://arize.com/docs/PROMPT.md."
---
# Arize Instrumentation Skill
Use this skill when the user wants to **add Arize AX tracing** to their application. Follow the **two-phase, agent-assisted flow** from the [Agent-Assisted Tracing Setup](https://arize.com/docs/ax/alyx/tracing-assistant) and the [Arize AX Tracing - Agent Setup Prompt](https://arize.com/docs/PROMPT.md).
## Quick start (for the user)
If the user asks you to "set up tracing" or "instrument my app with Arize", you can start with:
> Follow the instructions from https://arize.com/docs/PROMPT.md and ask me questions as needed.
Then execute the two phases below.
## Core principles
- **Prefer inspection over mutation:** understand the codebase before changing it.
- **Do not change business logic:** tracing is purely additive.
- **Use auto-instrumentation where available:** add manual spans only for custom logic not covered by integrations.
- **Follow existing code style** and project conventions.
- **Keep output concise and production-focused:** do not generate extra documentation or summary files.
- **NEVER embed literal credential values in generated code:** always reference environment variables (e.g., `os.environ["ARIZE_API_KEY"]`, `process.env.ARIZE_API_KEY`). This includes API keys, space IDs, and any other secrets. The user sets these in their own environment; the agent must never output raw secret values.
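A minimal Python sketch of the credential rule (the helper name and warning text are illustrative, not part of any Arize API):

```python
import os

def read_arize_env() -> dict:
    """Read Arize credentials by reference, never as literals in code."""
    creds = {
        "api_key": os.environ.get("ARIZE_API_KEY"),
        "space_id": os.environ.get("ARIZE_SPACE_ID"),
    }
    missing = [name for name, value in creds.items() if not value]
    if missing:
        # Warn instead of crashing: tracing is additive, the app should still run.
        print(f"warning: missing Arize env vars: {', '.join(missing)}")
    return creds
```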
## Phase 0: Environment preflight
Before changing code:
1. Confirm the repo/service scope is clear. For monorepos, do not assume the whole repo should be instrumented.
2. Identify the local runtime surface you will need for verification:
- package manager and app start command
- whether the app is long-running, server-based, or a short-lived CLI/script
- whether `ax` will be needed for post-change verification
3. Do NOT proactively check `ax` installation or version. If `ax` is needed for verification later, just run it when the time comes. If it fails, see references/ax-profiles.md.
4. Never silently replace a user-provided space ID, project name, or project ID. If the CLI, collector, and user input disagree, surface that mismatch as a concrete blocker.
## Phase 1: Analysis (read-only)
**Do not write any code or create any files during this phase.**
### Steps
1. **Check dependency manifests** to detect stack:
- Python: `pyproject.toml`, `requirements.txt`, `setup.py`, `Pipfile`
- TypeScript/JavaScript: `package.json`
- Java: `pom.xml`, `build.gradle`, `build.gradle.kts`
2. **Scan import statements** in source files to confirm what is actually used.
3. **Check for existing tracing/OTel:** look for `TracerProvider`, `register()`, `opentelemetry` imports, `ARIZE_*`, `OTEL_*`, `OTLP_*` env vars, or other observability config (Datadog, Honeycomb, etc.).
4. **Identify scope** โ for monorepos or multi-service projects, ask which service(s) to instrument.
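The manifest scan in steps 1-2 can be sketched as a small detector. The signal keywords below are illustrative only; the canonical routing table lives in PROMPT.md:

```python
from pathlib import Path

# Illustrative signals; extend from the PROMPT.md routing table.
SIGNALS = {
    "openai": "OpenAI", "anthropic": "Anthropic", "litellm": "LiteLLM",
    "langchain": "LangChain", "langgraph": "LangGraph", "llama-index": "LlamaIndex",
}
MANIFESTS = ["pyproject.toml", "requirements.txt", "setup.py", "Pipfile", "package.json"]

def detect_stack(repo_root: str) -> set:
    """Return provider/framework names mentioned in dependency manifests."""
    found = set()
    for name in MANIFESTS:
        path = Path(repo_root) / name
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore").lower()
        found.update(label for key, label in SIGNALS.items() if key in text)
    return found
```

Remember this only detects declared dependencies; still scan imports to confirm what the code actually uses.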
### What to identify
| Item | Examples |
|------|----------|
| Language | Python, TypeScript/JavaScript, Java |
| Package manager | pip/poetry/uv, npm/pnpm/yarn, maven/gradle |
| LLM providers | OpenAI, Anthropic, LiteLLM, Bedrock, etc. |
| Frameworks | LangChain, LangGraph, LlamaIndex, Vercel AI SDK, Mastra, etc. |
| Existing tracing | Any OTel or vendor setup |
| Tool/function use | LLM tool use, function calling, or custom tools the app executes (e.g. in an agent loop) |
**Key rule:** When a framework is detected alongside an LLM provider, inspect the framework-specific tracing docs first and prefer the framework-native integration path when it already captures the model and tool spans you need. Add separate provider instrumentation only when the framework docs require it or when the framework-native integration leaves obvious gaps. If the app runs tools and the framework integration does not emit tool spans, add manual TOOL spans so each invocation appears with input/output (see **Enriching traces** below).
### Phase 1 output
Return a concise summary:
- Detected language, package manager, providers, frameworks
- Proposed integration list (from the routing table in the docs)
- Any existing OTel/tracing that needs consideration
- If monorepo: which service(s) you propose to instrument
- **If the app uses LLM tool use / function calling:** note that you will add manual CHAIN + TOOL spans so each tool call appears in the trace with input/output (avoids sparse traces).
If the user explicitly asked you to instrument the app now, and the target service is already clear, present the Phase 1 summary briefly and continue directly to Phase 2. If scope is ambiguous, or the user asked for analysis first, stop and wait for confirmation.
## Integration routing and docs
The **canonical list** of supported integrations and doc URLs is in the [Agent Setup Prompt](https://arize.com/docs/PROMPT.md). Use it to map detected signals to implementation docs.
- **LLM providers:** [OpenAI](https://arize.com/docs/ax/integrations/llm-providers/openai), [Anthropic](https://arize.com/docs/ax/integrations/llm-providers/anthropic), [LiteLLM](https://arize.com/docs/ax/integrations/llm-providers/litellm), [Google Gen AI](https://arize.com/docs/ax/integrations/llm-providers/google-gen-ai), [Bedrock](https://arize.com/docs/ax/integrations/llm-providers/amazon-bedrock), [Ollama](https://arize.com/docs/ax/integrations/llm-providers/llama), [Groq](https://arize.com/docs/ax/integrations/llm-providers/groq), [MistralAI](https://arize.com/docs/ax/integrations/llm-providers/mistralai), [OpenRouter](https://arize.com/docs/ax/integrations/llm-providers/openrouter), [VertexAI](https://arize.com/docs/ax/integrations/llm-providers/vertexai).
- **Python frameworks:** [LangChain](https://arize.com/docs/ax/integrations/python-agent-frameworks/langchain), [LangGraph](https://arize.com/docs/ax/integrations/python-agent-frameworks/langgraph), [LlamaIndex](https://arize.com/docs/ax/integrations/python-agent-frameworks/llamaindex), [CrewAI](https://arize.com/docs/ax/integrations/python-agent-frameworks/crewai), [DSPy](https://arize.com/docs/ax/integrations/python-agent-frameworks/dspy), [AutoGen](https://arize.com/docs/ax/integrations/python-agent-frameworks/autogen), [Semantic Kernel](https://arize.com/docs/ax/integrations/python-agent-frameworks/semantic-kernel), [Pydantic AI](https://arize.com/docs/ax/integrations/python-agent-frameworks/pydantic), [Haystack](https://arize.com/docs/ax/integrations/python-agent-frameworks/haystack), [Guardrails AI](https://arize.com/docs/ax/integrations/python-agent-frameworks/guardrails-ai), [Hugging Face Smolagents](https://arize.com/docs/ax/integrations/python-agent-frameworks/hugging-face-smolagents), [Instructor](https://arize.com/docs/ax/integrations/python-agent-frameworks/instructor), [Agno](https://arize.com/docs/ax/integrations/python-agent-frameworks/agno), [Google ADK](https://arize.com/docs/ax/integrations/python-agent-frameworks/google-adk), [MCP](https://arize.com/docs/ax/integrations/python-agent-frameworks/model-context-protocol), [Portkey](https://arize.com/docs/ax/integrations/python-agent-frameworks/portkey), [Together AI](https://arize.com/docs/ax/integrations/python-agent-frameworks/together-ai), [BeeAI](https://arize.com/docs/ax/integrations/python-agent-frameworks/beeai), [AWS Bedrock Agents](https://arize.com/docs/ax/integrations/python-agent-frameworks/aws).
- **TypeScript/JavaScript:** [LangChain JS](https://arize.com/docs/ax/integrations/ts-js-agent-frameworks/langchain), [Mastra](https://arize.com/docs/ax/integrations/ts-js-agent-frameworks/mastra), [Vercel AI SDK](https://arize.com/docs/ax/integrations/ts-js-agent-frameworks/vercel), [BeeAI JS](https://arize.com/docs/ax/integrations/ts-js-agent-frameworks/beeai).
- **Java:** [LangChain4j](https://arize.com/docs/ax/integrations/java/langchain4j), [Spring AI](https://arize.com/docs/ax/integrations/java/spring-ai), [Arconia](https://arize.com/docs/ax/integrations/java/arconia).
- **Platforms (UI-based):** [LangFlow](https://arize.com/docs/ax/integrations/platforms/langflow), [Flowise](https://arize.com/docs/ax/integrations/platforms/flowise), [Dify](https://arize.com/docs/ax/integrations/platforms/dify), [Prompt flow](https://arize.com/docs/ax/integrations/platforms/prompt-flow).
- **Fallback:** [Manual instrumentation](https://arize.com/docs/ax/observe/tracing/setup/manual-instrumentation), [All integrations](https://arize.com/docs/ax/integrations).
**Fetch the matched doc pages** from the [full routing table in PROMPT.md](https://arize.com/docs/PROMPT.md) for exact installation and code snippets. Use [llms.txt](https://arize.com/docs/llms.txt) as a fallback for doc discovery if needed.
> **Note:** `arize.com/docs/PROMPT.md` and `arize.com/docs/llms.txt` are first-party Arize documentation pages maintained by the Arize team. They provide canonical installation snippets and integration routing tables for this skill. These are trusted, same-organization URLs, not third-party content.
## Phase 2: Implementation
Proceed **only after the user confirms** the Phase 1 analysis.
### Steps
1. **Fetch integration docs:** read the matched doc URLs and follow their installation and instrumentation steps.
2. **Install packages** using the detected package manager **before** writing code:
- Python: `pip install arize-otel` plus `openinference-instrumentation-{name}` (hyphens in package name; underscores in import, e.g. `openinference.instrumentation.llama_index`).
- TypeScript/JavaScript: `@opentelemetry/sdk-trace-node` plus the relevant `@arizeai/openinference-*` package.
- Java: OpenTelemetry SDK plus `openinference-instrumentation-*` in pom.xml or build.gradle.
3. **Credentials:** the user needs an **Arize Space ID** and **API Key** from [Space API Keys](https://app.arize.com/organizations/-/settings/space-api-keys). Check `.env` for `ARIZE_API_KEY` and `ARIZE_SPACE_ID`. If not found, instruct the user to set them as environment variables; never embed raw values in generated code. All generated instrumentation code must reference `os.environ["ARIZE_API_KEY"]` (Python) or `process.env.ARIZE_API_KEY` (TypeScript/JavaScript).
4. **Centralized instrumentation:** create a single module (e.g. `instrumentation.py`, `instrumentation.ts`) and initialize tracing **before** any LLM client is created.
5. **Existing OTel:** if there is already a TracerProvider, add Arize as an **additional** exporter (e.g. a BatchSpanProcessor with the Arize OTLP endpoint). Do not replace the existing setup unless the user asks.
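Putting steps 2-4 together, a centralized `instrumentation.py` for an OpenAI-based app might look like the sketch below. The `register` and `OpenAIInstrumentor` calls follow the Arize/OpenInference docs; the project name is a placeholder, and imports are deferred so the app still runs when tracing is disabled:

```python
import os

def setup_tracing():
    """Initialize Arize tracing once, before any LLM client is created.

    Returns the tracer provider, or None when credentials are missing
    (fail gracefully: warn, don't crash).
    """
    space_id = os.environ.get("ARIZE_SPACE_ID")
    api_key = os.environ.get("ARIZE_API_KEY")
    if not (space_id and api_key):
        print("warning: ARIZE_SPACE_ID / ARIZE_API_KEY not set; tracing disabled")
        return None

    # Deferred imports: the app remains runnable without the tracing packages.
    from arize.otel import register
    from openinference.instrumentation.openai import OpenAIInstrumentor

    tracer_provider = register(
        space_id=space_id,
        api_key=api_key,
        project_name="my-app",  # placeholder: use the real project name
    )
    OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
    return tracer_provider
```

Call `setup_tracing()` at the top of the app entrypoint, before constructing any LLM client, per the import-order rule below.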
### Implementation rules
- Use **auto-instrumentation first**; manual spans only when needed.
- Prefer the repo's native integration surface before adding generic OpenTelemetry plumbing. If the framework ships an exporter or observability package, use that first unless there is a documented gap.
- **Fail gracefully** if env vars are missing (warn, do not crash).
- **Import order:** register tracer → attach instrumentors → create LLM clients.
- **Project name attribute (required):** Arize rejects spans with HTTP 500 if the project name is missing; `service.name` alone is not accepted. Set it as a **resource attribute** on the TracerProvider (recommended: one place, applies to all spans). Python: `register(project_name="my-app")` handles it automatically (it sets `"openinference.project.name"` on the resource). TypeScript: Arize accepts both `"model_id"` (shown in the official TS quickstart) and `"openinference.project.name"` via `SEMRESATTRS_PROJECT_NAME` from `@arizeai/openinference-semantic-conventions` (shown in the manual instrumentation docs); both work. For routing spans to different projects in Python, use `set_routing_context(space_id=..., project_name=...)` from `arize.otel`.
- **CLI/script apps must flush before exit:** call `provider.shutdown()` (TS) or `provider.force_flush()` then `provider.shutdown()` (Python) before the process exits; otherwise async OTLP exports are dropped and no traces appear.
- **When the app has tool/function execution:** add manual CHAIN + TOOL spans (see **Enriching traces** below) so the trace tree shows each tool call and its result; otherwise traces look sparse (only LLM API spans, no tool input/output).
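One way to guarantee the flush-before-exit rule for CLI/script apps is an `atexit` hook (a sketch; `provider` is whatever the tracing setup returned):

```python
import atexit

def install_flush_on_exit(provider):
    """Flush pending OTLP exports before a short-lived process exits.

    Without this, the batch processor's async export queue is dropped
    at interpreter shutdown and no traces reach Arize.
    """
    def _flush():
        provider.force_flush()
        provider.shutdown()
    atexit.register(_flush)
    return _flush  # also returned so callers can flush early if needed
```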
## Enriching traces: manual spans for tool use and agent loops
### Why doesn't the auto-instrumentor do this?
**Provider instrumentors (Anthropic, OpenAI, etc.) only wrap the LLM *client*: the code that sends HTTP requests and receives responses.** They see:
- One span per API call: request (messages, system prompt, tools) and response (text, tool_use blocks, etc.).
They **cannot** see what happens *inside your application* after the response:
- **Tool execution:** your code parses the response, calls `run_tool("check_loan_eligibility", {...})`, and gets a result. That runs in your process; the instrumentor has no hook into your `run_tool()` or the actual tool output. The *next* API call (sending the tool result back) is just another `messages.create` span; the instrumentor doesn't know that the message content is a tool result or what the tool returned.
- **Agent/chain boundary:** the idea of "one user turn → multiple LLM calls + tool calls" is an *application-level* concept. The instrumentor only sees separate API calls; it doesn't know they belong to the same logical "run_agent" run.
So TOOL and CHAIN spans have to be added **manually** (or by a *framework* instrumentor like LangChain/LangGraph that knows about tools and chains). Once you add them, they appear in the same trace as the LLM spans because they use the same TracerProvider.
---
To avoid sparse traces where tool inputs/outputs are missing:
1. **Detect** agent/tool patterns: a loop that calls the LLM, then runs one or more tools (by name + arguments), then calls the LLM again with tool results.
2. **Add manual spans** using the same TracerProvider (e.g. `opentelemetry.trace.get_tracer(...)` after `register()`):
- **CHAIN span:** wrap the full agent run (e.g. `run_agent`): set `openinference.span.kind` = `"CHAIN"`, `input.value` = user message, `output.value` = final reply.
- **TOOL span:** wrap each tool invocation: set `openinference.span.kind` = `"TOOL"`, `input.value` = JSON of arguments, `output.value` = JSON of result. Use the tool name as the span name (e.g. `check_loan_eligibility`).
**OpenInference attributes (use these so Arize shows spans correctly):**
| Attribute | Use |
|-----------|-----|
| `openinference.span.kind` | `"CHAIN"` or `"TOOL"` |
| `input.value` | string (e.g. user message or JSON of tool args) |
| `output.value` | string (e.g. final reply or JSON of tool result) |
**Python pattern:** Get the global tracer (same provider as Arize), then use context managers so tool spans are children of the CHAIN span and appear in the same trace as the LLM spans:
```python
import json

from opentelemetry.trace import get_tracer

tracer = get_tracer("my-app", "1.0.0")

# In your agent entrypoint:
with tracer.start_as_current_span("run_agent") as chain_span:
    chain_span.set_attribute("openinference.span.kind", "CHAIN")
    chain_span.set_attribute("input.value", user_message)
    # ... LLM call ...
    for tool_use in tool_uses:
        with tracer.start_as_current_span(tool_use["name"]) as tool_span:
            tool_span.set_attribute("openinference.span.kind", "TOOL")
            tool_span.set_attribute("input.value", json.dumps(tool_use["input"]))
            result = run_tool(tool_use["name"], tool_use["input"])
            # Attribute values must be strings: serialize the result as JSON.
            tool_span.set_attribute("output.value", json.dumps(result))
    # ... append tool result to messages, call LLM again ...
    chain_span.set_attribute("output.value", final_reply)
```
See [Manual instrumentation](https://arize.com/docs/ax/observe/tracing/setup/manual-instrumentation) for more span kinds and attributes.
## Verification
Treat instrumentation as complete only when all of the following are true:
1. The app still builds or typechecks after the tracing change.
2. The app starts successfully with the new tracing configuration.
3. You trigger at least one real request or run that should produce spans.
4. You either verify the resulting trace in Arize, or you provide a precise blocker that distinguishes app-side success from Arize-side failure.
After implementation:
1. Run the application and trigger at least one LLM call.
2. **Use the `arize-trace` skill** to confirm traces arrived. If empty, retry shortly. Verify spans have expected `openinference.span.kind`, `input.value`/`output.value`, and parent-child relationships.
3. If no traces: verify `ARIZE_SPACE_ID` and `ARIZE_API_KEY`, ensure the tracer is initialized before instrumentors and clients, check connectivity to `otlp.arize.com:443`, and inspect app/runtime exporter logs so you can tell whether spans are being emitted locally but rejected remotely. For debugging, set `GRPC_VERBOSITY=debug` or pass `log_to_console=True` to `register()`. Common gotchas:
   - A missing project name resource attribute causes HTTP 500 rejections; `service.name` alone is not enough. Python: pass `project_name` to `register()`. TypeScript: set `"model_id"` or `SEMRESATTRS_PROJECT_NAME` on the resource.
   - CLI/script processes exit before OTLP exports flush; call `provider.force_flush()` then `provider.shutdown()` before exit.
   - CLI-visible spaces/projects can disagree with a collector-targeted space ID; report the mismatch instead of silently rewriting credentials.
4. If the app uses tools: confirm CHAIN and TOOL spans appear with `input.value` / `output.value` so tool calls and results are visible.
When verification is blocked by CLI or account issues, end with a concrete status:
- app instrumentation status
- latest local trace ID or run ID
- whether exporter logs show local span emission
- whether the failure is credential, space/project resolution, network, or collector rejection
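The connectivity part of the checklist can be scripted with the standard library (the host and port come from the section above; the timeout value is arbitrary):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    Helps separate network failures from credential or payload failures:
    if this returns False, fix connectivity before debugging API keys.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. can_reach("otlp.arize.com", 443)
```

Note this only proves the TCP path is open; a reachable collector can still reject spans (401 for bad credentials, 500 for a missing project name).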
## Leveraging the Tracing Assistant (MCP)
For deeper instrumentation guidance inside the IDE, the user can enable:
- **Arize AX Tracing Assistant MCP:** instrumentation guides, framework examples, and support. In Cursor: **Settings → MCP → Add** and use:
```json
"arize-tracing-assistant": {
"command": "uvx",
"args": ["arize-tracing-assistant@latest"]
}
```
- **Arize AX Docs MCP:** searchable docs. In Cursor:
```json
"arize-ax-docs": {
"url": "https://arize.com/docs/mcp"
}
```
Then the user can ask things like: *"Instrument this app using Arize AX"*, *"Can you use manual instrumentation so I have more control over my traces?"*, *"How can I redact sensitive information from my spans?"*
See the full setup at [Agent-Assisted Tracing Setup](https://arize.com/docs/ax/alyx/tracing-assistant).
## Reference links
| Resource | URL |
|----------|-----|
| Agent-Assisted Tracing Setup | https://arize.com/docs/ax/alyx/tracing-assistant |
| Agent Setup Prompt (full routing + phases) | https://arize.com/docs/PROMPT.md |
| Arize AX Docs | https://arize.com/docs/ax |
| Full integration list | https://arize.com/docs/ax/integrations |
| Doc index (llms.txt) | https://arize.com/docs/llms.txt |
## Save Credentials for Future Use
See references/ax-profiles.md ยง Save Credentials for Future Use.
### `references/ax-profiles.md`
# ax Profile Setup
Consult this when authentication fails (401, missing profile, missing API key). Do NOT run these checks proactively.
Use this when there is no profile, or a profile has incorrect settings (wrong API key, wrong region, etc.).
## 1. Inspect the current state
```bash
ax profiles show
```
Look at the output to understand what's configured:
- `API Key: (not set)` or missing: the key needs to be created or updated
- No profile output or "No profiles found": no profile exists yet
- Connected but getting `401 Unauthorized`: the key is wrong or expired
- Connected but wrong endpoint/region: the region needs to be updated
## 2. Fix a misconfigured profile
If a profile exists but one or more settings are wrong, patch only what's broken.
**Never pass a raw API key value as a flag.** Always reference it via the `ARIZE_API_KEY` environment variable. If the variable is not already set in the shell, instruct the user to set it first, then run the command:
```bash
# If ARIZE_API_KEY is already exported in the shell:
ax profiles update --api-key $ARIZE_API_KEY
# Fix the region (no secret involved; safe to run directly)
ax profiles update --region us-east-1b
# Fix both at once
ax profiles update --api-key $ARIZE_API_KEY --region us-east-1b
```
`update` only changes the fields you specify; all other settings are preserved. If no profile name is given, the active profile is updated.
## 3. Create a new profile
If no profile exists, or if the existing profile needs to point to a completely different setup (different org, different region):
**Always reference the key via `$ARIZE_API_KEY`, never inline a raw value.**
```bash
# Requires ARIZE_API_KEY to be exported in the shell first
ax profiles create --api-key $ARIZE_API_KEY
# Create with a region
ax profiles create --api-key $ARIZE_API_KEY --region us-east-1b
# Create a named profile
ax profiles create work --api-key $ARIZE_API_KEY --region us-east-1b
```
To use a named profile with any `ax` command, add `-p NAME`:
```bash
ax spans export PROJECT_ID -p work
```
## 4. Getting the API key
**Never ask the user to paste their API key into the chat. Never log, echo, or display an API key value.**
If `ARIZE_API_KEY` is not already set, instruct the user to export it in their shell:
```bash
export ARIZE_API_KEY="..." # user pastes their key here in their own terminal
```
They can find their key at https://app.arize.com/admin under **API Keys**. Recommend they create a **scoped service key** (not a personal user key): service keys are not tied to an individual account and are safer for programmatic use. Keys are space-scoped; make sure they copy the key for the correct space.
Once the user confirms the variable is set, proceed with `ax profiles create --api-key $ARIZE_API_KEY` or `ax profiles update --api-key $ARIZE_API_KEY` as described above.
## 5. Verify
After any create or update:
```bash
ax profiles show
```
Confirm the API key and region are correct, then retry the original command.
## Space ID
There is no profile flag for space ID. Save it as an environment variable:
**macOS/Linux** โ add to `~/.zshrc` or `~/.bashrc`:
```bash
export ARIZE_SPACE_ID="U3BhY2U6..."
```
Then `source ~/.zshrc` (or restart terminal).
**Windows (PowerShell):**
```powershell
[System.Environment]::SetEnvironmentVariable('ARIZE_SPACE_ID', 'U3BhY2U6...', 'User')
```
Restart terminal for it to take effect.
## Save Credentials for Future Use
At the **end of the session**, if the user manually provided any credentials during this conversation **and** those values were NOT already loaded from a saved profile or environment variable, offer to save them.
**Skip this entirely if:**
- The API key was already loaded from an existing profile or `ARIZE_API_KEY` env var
- The space ID was already set via `ARIZE_SPACE_ID` env var
- The user only used base64 project IDs (no space ID was needed)
**How to offer:** Use **AskQuestion**: *"Would you like to save your Arize credentials so you don't have to enter them next time?"* with options `"Yes, save them"` / `"No thanks"`.
**If the user says yes:**
1. **API key:** run `ax profiles show` to check the current state. Then run `ax profiles create --api-key $ARIZE_API_KEY` or `ax profiles update --api-key $ARIZE_API_KEY` (the key must already be exported as an env var; never pass a raw key value).
2. **Space ID:** see the Space ID section above to persist it as an environment variable.
## License (MIT)

MIT License. Copyright GitHub, Inc.

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.