Only OpenAI models are supported via Azure at this time.
Prerequisites
- An Azure account with an active subscription
- Emby account (Pro plan required for provider keys) or self-hosted instance (free)
Overview
Azure provides enterprise-grade access to OpenAI models with enhanced security, compliance, and regional availability. Emby integrates seamlessly with Azure deployments.
Create an Azure OpenAI Resource
- Log into the Azure Portal (https://portal.azure.com)
- Click Create a resource
- Search for Azure OpenAI and select it
- Click Create
- Configure the resource:
- Subscription: Select your Azure subscription
- Resource group: Create new or select existing
- Region: Choose a region (e.g., East US, West Europe)
- Name: Enter a unique resource name (this will be your `<resource-name>`)
- Pricing tier: Select Standard S0
- Click Review + create, then Create
- Wait for deployment to complete
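Once the resource is deployed, its endpoint follows a fixed pattern derived from the resource name. A minimal sketch (the helper name is illustrative, not an Emby or Azure API):

```python
def azure_endpoint(resource_name: str) -> str:
    """Return the base endpoint URL for an Azure OpenAI resource."""
    return f"https://{resource_name}.openai.azure.com"

print(azure_endpoint("my-openai-resource"))
# https://my-openai-resource.openai.azure.com
```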
Your resource endpoint will be `https://<resource-name>.openai.azure.com`.
Deploy Models
- Navigate to your Azure resource in the Azure Portal
- Click Go to Azure OpenAI Studio or visit https://oai.azure.com
- In Azure OpenAI Studio, select Deployments from the left sidebar
- Click Create new deployment
- Configure your deployment:
- Model: Select a model (e.g., gpt-5, gpt-5-mini, gpt-4-turbo)
- Deployment name: Keep the pre-filled name so it exactly matches the model identifier you'll use in requests
- Model version: Select the latest version
- Deployment type: Global Standard
- Click Create
- Repeat for additional models you want to use
- For `gpt-5-mini` → the deployment name should be `gpt-5-mini`
- For `gpt-5` → the deployment name should be `gpt-5`, etc.
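A mismatched deployment name is the most common setup error, so it is worth checking up front. A quick sanity-check sketch (a hypothetical helper, not part of Emby or Azure):

```python
def validate_deployment_name(model_id: str, deployment_name: str) -> None:
    """The deployment name must exactly match the model identifier used in requests."""
    if deployment_name != model_id:
        raise ValueError(
            f"deployment name {deployment_name!r} must match model id {model_id!r}"
        )

validate_deployment_name("gpt-5-mini", "gpt-5-mini")  # ok: names match, no exception
```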
Get API Key
- In the Azure Portal, go to your Azure resource
- Click Keys and Endpoint in the left sidebar
- Copy Key 1 or Key 2
- Note your Endpoint URL (it should be `https://<resource-name>.openai.azure.com`)
Add to Emby
Navigate to Provider Keys
- Log into Emby Dashboard
- Select your organization and project
- Go to Provider Keys in the sidebar
Add Azure Provider Key
- Click Add for Azure
- Enter your API Key from Azure Portal
- Enter your Resource Name (the name from your Azure endpoint URL)
- Example: If your endpoint is `https://my-openai-resource.openai.azure.com`, enter `my-openai-resource`
- Select your preferred type (Azure OpenAI or AI Foundry)
- Set the Validation Model to a model you have already deployed and that is available. This is a one-time check to ensure the API key is valid and the model can be accessed.
- Click Add Key
Test the Integration
Test your integration with a simple API call, replacing EMBY_API_KEY with your Emby API key.
Available Models
Once configured, you can access your Azure deployments through Emby:
- GPT-5: `azure/gpt-5`
- GPT-5 Mini: `azure/gpt-5-mini`
- GPT-4o: `azure/gpt-4o`
- GPT-4o Mini: `azure/gpt-4o-mini`
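Assuming Emby exposes an OpenAI-compatible chat completions endpoint (an assumption; check the Emby API docs for the actual path), a test call for one of these models could be sketched as follows. `EMBY_BASE_URL` is a placeholder, not a real endpoint:

```python
import json
import urllib.request

EMBY_BASE_URL = "https://api.emby.example/v1"  # placeholder, not a real endpoint
EMBY_API_KEY = "your-emby-api-key"             # replace with your Emby API key

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload for an Azure-backed model."""
    return {
        "model": model,  # the azure/ prefix routes the request to your Azure key
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("azure/gpt-5-mini", "Say hello")
req = urllib.request.Request(
    f"{EMBY_BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {EMBY_API_KEY}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) would send the request; it is left out here
# because the base URL above is only a placeholder.
print(payload["model"])
```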
Troubleshooting
"Deployment not found" error
- Verify you’ve created a deployment in Azure Studio
- Ensure the deployment name exactly matches the model name you’re requesting
- Check that the deployment is in the same resource as your API key
"Resource not found" error
- Verify the resource name is correct (check your Azure Portal endpoint URL)
- Ensure your API key belongs to the correct Azure resource
- Confirm the resource is in an active state in Azure Portal
Rate limiting
- Azure has Tokens Per Minute (TPM) quotas per deployment
- Monitor usage in Azure Studio under Quotas
- Request quota increases through Azure Portal if needed for high-volume workloads
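When a deployment's TPM quota is exceeded, requests are rejected with HTTP 429. A simple exponential backoff schedule (a generic client-side sketch, not an Emby feature) helps stay within quota:

```python
def backoff_delays(max_retries: int = 5, base: float = 1.0, cap: float = 30.0) -> list:
    """Delays in seconds to wait before each retry after an HTTP 429 response."""
    # Double the wait on each attempt, capped; production clients should add jitter.
    return [min(cap, base * (2 ** attempt)) for attempt in range(max_retries)]

print(backoff_delays())
# [1.0, 2.0, 4.0, 8.0, 16.0]
```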
Region availability
- Not all models are available in all Azure regions
- Check Azure model availability for your region
- Consider creating resources in multiple regions for better availability

