
Supported Models

Convoy supports a wide range of models. Use the Convoy Model ID in your requests — Convoy handles routing to the underlying provider automatically.

Browse by Provider

  • Anthropic — Claude 3, 3.5, 3.7, Sonnet 4, Opus 4, and more
  • Amazon Nova — Nova Micro, Lite, Pro, Premier, Nova 2 Lite
  • Meta Llama — Llama 3.1, 3.2, 3.3, Llama 4
  • Mistral AI — Mistral Small, Large, Ministral, Devstral, Voxtral
  • DeepSeek — DeepSeek V3.1, V3.2
  • Google — Gemma 3 (4B, 12B, 27B)
  • MiniMax — MiniMax M2, M2.1
  • NVIDIA — Nemotron Nano
  • Moonshot AI — Kimi K2, K2.5
  • OpenAI — GPT OSS 20B, 120B
  • Qwen — Qwen3 32B, 235B, Coder, VL
  • Z.AI — GLM 4.7, GLM 4.7 Flash

Model IDs

Each model has a Convoy Model ID — a stable, provider-agnostic identifier you use in all API requests. Convoy translates this to the correct provider-specific ID (e.g. a Bedrock ARN) at routing time.

convoy-model-id          → provider-specific-id
claude-3-haiku           → anthropic.claude-3-haiku-20240307-v1:0
amazon-nova-pro          → amazon.nova-pro-v1:0
llama-3.3-70b-instruct   → meta.llama3-3-70b-instruct-v1:0

This means if Convoy adds support for a new provider in the future, your code doesn’t change — the same model ID continues to work.
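As a rough sketch of what this translation looks like, the lookup below uses only the example mappings shown above (the table is illustrative, not exhaustive, and the function name is hypothetical, not part of the Convoy API):

```python
# Example mappings from the table above; Convoy maintains the full set internally.
PROVIDER_IDS = {
    "claude-3-haiku": "anthropic.claude-3-haiku-20240307-v1:0",
    "amazon-nova-pro": "amazon.nova-pro-v1:0",
    "llama-3.3-70b-instruct": "meta.llama3-3-70b-instruct-v1:0",
}

def to_provider_id(convoy_model_id: str) -> str:
    """Resolve a Convoy Model ID to its provider-specific ID (e.g. a Bedrock model ID)."""
    try:
        return PROVIDER_IDS[convoy_model_id]
    except KeyError:
        raise ValueError(f"Unsupported Convoy Model ID: {convoy_model_id}")

print(to_provider_id("claude-3-haiku"))
# anthropic.claude-3-haiku-20240307-v1:0
```

Because the translation happens server-side at routing time, your requests always use the stable Convoy Model ID on the left-hand side.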

Example Request

curl -X POST https://api.cnvy.ai/cargo/load \
  -H "Content-Type: application/json" \
  -H "X-API-Key: convoy_sk_your_key_here" \
  -d '{
    "params": { "model": "claude-3-haiku" },
    "records": [
      {
        "recordId": "rec_001",
        "modelInput": {
          "anthropic_version": "bedrock-2023-05-31",
          "max_tokens": 1024,
          "messages": [
            { "role": "user", "content": "Summarize this document." }
          ]
        }
      }
    ]
  }'
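The same request can be built in Python with only the standard library. This is a sketch equivalent to the curl example above, assuming the same endpoint, headers, and payload shape; the actual send is left commented out:

```python
import json
import urllib.request

API_KEY = "convoy_sk_your_key_here"  # placeholder key from the docs

# Same payload as the curl example above.
payload = {
    "params": {"model": "claude-3-haiku"},
    "records": [
        {
            "recordId": "rec_001",
            "modelInput": {
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": 1024,
                "messages": [
                    {"role": "user", "content": "Summarize this document."}
                ],
            },
        }
    ],
}

request = urllib.request.Request(
    "https://api.cnvy.ai/cargo/load",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "X-API-Key": API_KEY},
    method="POST",
)
# response = urllib.request.urlopen(request)  # uncomment to send the request
```

Note that `params.model` takes the Convoy Model ID (`claude-3-haiku`), not the provider-specific ID.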