API Keys

pwnkit’s api runtime (the default) makes direct HTTP calls to an LLM provider. You need to set an API key as an environment variable.

| Provider | Environment Variable | Notes |
| --- | --- | --- |
| OpenRouter | `OPENROUTER_API_KEY` | Recommended. One key, access to many models (Claude, GPT-4, Llama, Mistral, and more). Includes free-tier models. Get a key at openrouter.ai. |
| Anthropic | `ANTHROPIC_API_KEY` | Direct access to Claude models. Get a key at console.anthropic.com. |
| Azure OpenAI | `AZURE_OPENAI_API_KEY` | Azure-hosted OpenAI models. See Azure configuration below for additional settings. |
| OpenAI | `OPENAI_API_KEY` | Direct access to GPT models. Get a key at platform.openai.com. |

When multiple API keys are set, pwnkit uses this priority:

  1. OPENROUTER_API_KEY (highest priority)
  2. ANTHROPIC_API_KEY
  3. AZURE_OPENAI_API_KEY
  4. OPENAI_API_KEY (lowest priority)

Only one key is needed. If you set multiple, the highest-priority one is used.
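For example, if you want a single run to use a lower-priority provider without editing your shell profile, you can drop the higher-priority key from that one command's environment. A minimal sketch, reusing the `review ./my-repo` invocation from the CLI examples further down:

```bash
# Both OPENROUTER_API_KEY and ANTHROPIC_API_KEY are exported, but this run
# should talk to Anthropic directly: env -u removes the OpenRouter key from
# the child process only, so the Anthropic key becomes the highest-priority
# key pwnkit sees.
env -u OPENROUTER_API_KEY npx pwnkit-cli review ./my-repo
```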

```bash
# Add to your shell profile (~/.zshrc, ~/.bashrc, etc.)
export OPENROUTER_API_KEY="sk-or-v1-..."
```

Then reload your shell or run source ~/.zshrc.
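A quick way to confirm the key is actually exported in your current shell (for example, right after reloading your profile):

```bash
# Prints a confirmation only if the variable is set; never echoes the key itself
printenv OPENROUTER_API_KEY > /dev/null && echo "OPENROUTER_API_KEY is set"
```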

For GitHub Actions, add the key as a repository secret, then reference it in your workflow:

```yaml
- uses: peaktwilight/pwnkit@main
  with:
    mode: review
    path: .
  env:
    OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
```
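If you use the GitHub CLI, the same secret can be added from the terminal instead of the repository settings page (replace the placeholder value with your own key):

```bash
# Store the key as a repository secret named OPENROUTER_API_KEY
gh secret set OPENROUTER_API_KEY --body "sk-or-v1-..."
```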

OpenRouter acts as a unified gateway to many LLM providers. Benefits:

  • One key, many models — access Claude, GPT-4, Llama, Mistral, and others
  • Free-tier models available — useful for testing and CI
  • Automatic fallback — if one provider is down, OpenRouter can route to another
  • Usage dashboard — track costs across all models in one place
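To sanity-check an OpenRouter key before wiring it into CI, you can send a one-token request to a free-tier model through OpenRouter's OpenAI-compatible chat completions endpoint. This is a sketch: the free model slug below is an example and changes over time, so substitute any model marked `:free` on openrouter.ai.

```bash
# A 200 response with a completion means the key works; a 401 means it is
# missing or invalid. The model slug is an assumption and may no longer exist.
curl -s https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "meta-llama/llama-3.1-8b-instruct:free", "max_tokens": 1, "messages": [{"role": "user", "content": "ping"}]}'
```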

Azure OpenAI is stricter than the other providers. The API key alone is not enough. pwnkit needs:

  • an Azure base URL
  • an Azure deployment/model name

You can provide those explicitly via env vars, or let pwnkit reuse them from ~/.codex/config.toml when Codex is already configured against Azure.

| Variable | Required | Description |
| --- | --- | --- |
| `AZURE_OPENAI_API_KEY` | Yes | Your Azure OpenAI API key |
| `AZURE_OPENAI_BASE_URL` | Yes, unless pwnkit can read it from Codex config | Base URL for your Azure deployment. For the Responses API this should include `/openai/v1`. |
| `AZURE_OPENAI_MODEL` | Yes, unless pwnkit can read it from Codex config | Azure deployment/model name (not just a generic model family string) |
| `AZURE_OPENAI_WIRE_API` | No | Wire API format: `chat_completions` (default) or `responses` |
```bash
export AZURE_OPENAI_API_KEY="your-azure-key"
export AZURE_OPENAI_BASE_URL="https://your-resource.openai.azure.com/openai/v1"
export AZURE_OPENAI_MODEL="gpt-4o"
export AZURE_OPENAI_WIRE_API="responses"
```
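With those four variables exported, a scan run through the default api runtime should select Azure, assuming no higher-priority key (such as `OPENROUTER_API_KEY` or `ANTHROPIC_API_KEY`) is also set:

```bash
# Uses the default api runtime; Azure is picked because it is the
# highest-priority (here, the only) key present in the environment
npx pwnkit-cli review ./my-repo
```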

If you rely on Codex config instead of env vars, make sure ~/.codex/config.toml points at Azure and contains a usable Azure base URL plus model/deployment. If the selected Azure runtime is incomplete, pwnkit stops immediately with a configuration error instead of silently falling through to a broken scan.
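A quick pre-flight check if you rely on the Codex config: confirm the file actually contains the base URL, model/deployment, and wire API that pwnkit would otherwise take from the environment variables above. The exact TOML key names are an assumption about your Codex setup.

```bash
# Show the relevant lines from the Codex config, if any
grep -nE 'base_url|model|wire_api' ~/.codex/config.toml
```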

If you prefer not to use API keys at all, you can use the CLI runtimes instead. These use your existing subscription to Claude, Codex, or Gemini:

```bash
# Use Claude Code CLI (requires Claude subscription)
npx pwnkit-cli scan --target https://api.example.com/chat --runtime claude

# Use Codex CLI
npx pwnkit-cli review ./my-repo --runtime codex

# Use Gemini CLI
npx pwnkit-cli review ./my-repo --runtime gemini
```

No API key environment variable is needed for CLI runtimes — authentication is handled by the respective CLI tool.
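Conversely, to switch back to the key-based default on a machine where one of these CLIs is installed, you can select the api runtime explicitly. This assumes `--runtime api` names the default runtime the same way `claude`, `codex`, and `gemini` name the CLI runtimes above.

```bash
# Force the default HTTP-based runtime, which reads the API key priority list above
npx pwnkit-cli review ./my-repo --runtime api
```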