Quick Configuration
- OpenAI Compatible
- Anthropic Compatible
- Environment Variables
Use this setup for gpt-4o, gpt-4o-mini, gemini-1.5-pro, gemini-1.5-flash, passy/deepseek-v3, passy/mistral-nemo, and other models returned by https://api.passy.ai/v1/models.

Set Provider
- Select OpenAI.
- Set Base URL to https://api.passy.ai/v1.
- Paste your Passy API key from dash.passy.ai/keys.
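The base URL and key from these steps can be sanity-checked with a request to the models endpoint. A minimal sketch using Python's standard library; it only builds the request (nothing is sent), and it assumes the key is exported as `PASSY_API_KEY`:

```python
import os
import urllib.request

# Base URL from the setup steps above; keep the /v1 suffix.
BASE_URL = "https://api.passy.ai/v1"

def build_models_request(api_key: str) -> urllib.request.Request:
    """Build an authenticated GET request for the OpenAI-compatible model list."""
    return urllib.request.Request(
        f"{BASE_URL}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = build_models_request(os.environ.get("PASSY_API_KEY", "sk-placeholder"))
print(req.full_url)  # https://api.passy.ai/v1/models
```

Passing the request to `urllib.request.urlopen(req)` should return the same JSON model list you see in the browser; a 401 here means the key, not the editor setup, is the problem.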
For the OpenAI-compatible setup, keep /v1 in the base URL.

Recommended Models
Use exact model IDs as returned by the API.

| Use Case | Setup | Recommended Model |
|---|---|---|
| Quick coding tasks | OpenAI-compatible | gpt-4o-mini |
| General-purpose coding | OpenAI-compatible | gpt-4o |
| Long-context work | OpenAI-compatible | gemini-1.5-pro |
| Low-cost generation | OpenAI-compatible | passy/mistral-nemo |
| Open source coding | OpenAI-compatible | passy/llama-3.1-70b-instruct |
| Strong reasoning | OpenAI-compatible | passy/deepseek-v3 |
| Claude workflow | Anthropic-compatible | claude-3-sonnet-20240229 |
| Highest Claude tier | Anthropic-compatible | claude-3-opus-20240229 |
Browse the public catalog at passy.ai/models or fetch the live OpenAI-compatible list from https://api.passy.ai/v1/models.

Verification
After saving your settings:
- Open the Cline panel.
- Start a new chat.
- Ask: What model are you using?
- Ask: Write a hello world function in Python.
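Under the hood, the second prompt corresponds to a standard OpenAI-style chat completion call. A minimal sketch of the request body Cline would send to `https://api.passy.ai/v1/chat/completions`; the payload shape is the standard OpenAI chat format, with one of the recommended default model IDs:

```python
import json

def hello_world_payload(model: str = "gpt-4o-mini") -> dict:
    """OpenAI-style chat completion body for the verification prompt."""
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": "Write a hello world function in Python."}
        ],
    }

# POST this JSON body to {base_url}/chat/completions with the same
# Authorization: Bearer <key> header used for the models endpoint.
print(json.dumps(hello_world_payload(), indent=2))
```

If this request succeeds from a script but fails inside the editor, the problem is the extension configuration rather than the account or key.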
Common Configurations
Fast Default
Claude Setup
DeepSeek Setup
Troubleshooting
No response or model initialization failed
- Check that the provider matches the base URL format.
- For OpenAI-compatible config, use https://api.passy.ai/v1.
- For Anthropic-compatible config, use https://api.passy.ai.
- Confirm the model ID exists in https://api.passy.ai/v1/models.
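The last check — confirming the model ID exists — is easy to script once you have the `/v1/models` response. A small sketch, assuming the standard OpenAI-style response shape with a `data` list of objects carrying an `id` field (the sample IDs below are illustrative):

```python
def model_available(model_id: str, models_response: dict) -> bool:
    """Return True if model_id appears in an OpenAI-style /v1/models response."""
    return any(m.get("id") == model_id for m in models_response.get("data", []))

sample = {"data": [{"id": "gpt-4o-mini"}, {"id": "passy/deepseek-v3"}]}
print(model_available("passy/deepseek-v3", sample))       # True
print(model_available("claude-3-opus-20240229", sample))  # False
```

Matching is exact, which is why copying model IDs verbatim from the endpoint matters: a close-but-wrong ID fails this check the same way it fails the real API.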
Invalid API key or 401 errors
- Regenerate or copy the key again from dash.passy.ai/keys.
- Reload VS Code after changing environment variables.
- Test the key manually with curl against https://api.passy.ai/v1/models.
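Before reloading VS Code, it can also help to confirm the key actually reached your environment. A quick sanity check, assuming the key is exported as `PASSY_API_KEY` (the classifications below are this sketch's own, not Passy error codes):

```python
import os

def check_api_key(env: dict = os.environ) -> str:
    """Classify the PASSY_API_KEY value so a 401 can be traced to the env var."""
    key = env.get("PASSY_API_KEY")
    if not key:
        return "missing: export PASSY_API_KEY, then reload VS Code"
    if key != key.strip():
        return "whitespace: re-copy the key from dash.passy.ai/keys"
    return "ok"

print(check_api_key({"PASSY_API_KEY": "sk-example"}))  # ok
```

Stray whitespace from a copy-paste is a common cause of 401s that "should" work, since the pasted token no longer matches the one on the dashboard.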
Claude models are not working
Make sure the provider is anthropic and the base URL is https://api.passy.ai without /v1.

GPT, Gemini, DeepSeek, or Llama models are not working
Make sure the provider is openai and the base URL is https://api.passy.ai/v1.

Terminal integration is not executing commands
Check Cline’s approval prompts inside VS Code and confirm the workspace is trusted. If terminal actions are blocked, review your Cline and VS Code terminal permissions.
Best Practices
Use Env Vars
Keep the API key in ${env:PASSY_API_KEY} instead of hardcoding it in settings.

Use Exact IDs
Copy model IDs exactly as returned by the Passy models endpoint.
Pick the Right Provider
Use openai for /v1 models and anthropic for Claude-style requests.

Start with Fast Models
gpt-4o-mini and passy/mistral-nemo are good default starting points.

Next Steps
Models Catalog
Browse currently available model IDs.
Cursor Setup
Configure Cursor with the same Passy endpoint.
Roo Code Setup
Use Passy with Roo Code as well.
Kilo Code Setup
Configure another coding assistant with Passy.
API Reference
Review authentication and request format details.
Dashboard
Manage keys and account settings.

