POST /completions
curl --request POST \
  --url https://dev.emby.ai/v1/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "llama-3.3-70b-versatile",
    "prompt": "Say this is a test",
    "max_tokens": 100,
    "temperature": 0.7
  }'
{
  "id": "cmpl-7QyqpwdfhqwajicIEznoc6Q47XAyW",
  "object": "text_completion",
  "created": 1677664795,
  "model": "llama-3.3-70b-versatile",
  "choices": [
    {
      "text": " This is indeed a test",
      "index": 0,
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 5,
    "completion_tokens": 7,
    "total_tokens": 12
  }
}
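
A minimal Python sketch of working with this endpoint. The helper names (`build_payload`, `first_completion_text`) are illustrative, not part of any SDK; the URL and field names come from the curl example and the sample response above. An actual call would POST the payload to the URL with the `Authorization: Bearer <token>` header; here we only assemble the body and parse the documented sample response, so the snippet runs offline.

```python
import json

# Endpoint URL taken from the curl example above.
EMBY_URL = "https://dev.emby.ai/v1/completions"

def build_payload(model, prompt, max_tokens=16, temperature=1.0):
    """Assemble a request body using the documented fields and defaults."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def first_completion_text(response_json):
    """Pull the generated text out of the first choice of a response."""
    return response_json["choices"][0]["text"]

payload = build_payload("llama-3.3-70b-versatile", "Say this is a test",
                        max_tokens=100, temperature=0.7)

# Parsing the sample response shown above:
sample = json.loads("""
{
  "id": "cmpl-7QyqpwdfhqwajicIEznoc6Q47XAyW",
  "object": "text_completion",
  "created": 1677664795,
  "model": "llama-3.3-70b-versatile",
  "choices": [
    {"text": " This is indeed a test", "index": 0,
     "logprobs": null, "finish_reason": "stop"}
  ],
  "usage": {"prompt_tokens": 5, "completion_tokens": 7, "total_tokens": 12}
}
""")
print(first_completion_text(sample))  # → " This is indeed a test"
```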

Authorizations

Authorization
string
header
required

Enter your Emby API key. Get your key from the Emby dashboard at https://dev.emby.ai

Body

application/json
model
string
required

ID of the model to use

Example:

"llama-3.3-70b-versatile"

prompt
required

The prompt(s) to generate completions for

Example:

"Say this is a test"

max_tokens
integer
default:16

The maximum number of tokens to generate

Required range: x >= 1
Example:

100

temperature
number
default:1

Sampling temperature between 0 and 2

Required range: 0 <= x <= 2
Example:

0.7

top_p
number
default:1

Nucleus sampling parameter

Required range: 0 <= x <= 1
Example:

1

n
integer
default:1

Number of completions to generate

Required range: 1 <= x <= 128
Example:

1

stream
boolean
default:false

Whether to stream back partial progress

Example:

false
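
When `stream` is `true`, the response arrives as partial chunks rather than one JSON object. This page does not spell out the wire format; the sketch below assumes OpenAI-style server-sent events (`data: {json}` per chunk, terminated by `data: [DONE]`), which is an assumption, and runs against a simulated stream:

```python
import json

def iter_stream_texts(lines):
    """Yield the text of each streamed chunk from SSE-style lines.

    Assumes OpenAI-style server-sent events ('data: {json}' per chunk,
    'data: [DONE]' terminator) — this framing is not documented on
    this page and should be verified against a live response.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        yield chunk["choices"][0]["text"]

# Simulated stream for illustration:
fake_stream = [
    'data: {"choices": [{"text": " This", "index": 0}]}',
    'data: {"choices": [{"text": " is a test", "index": 0}]}',
    "data: [DONE]",
]
print("".join(iter_stream_texts(fake_stream)))  # → " This is a test"
```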

stop

Up to 4 sequences where the API will stop generating

Example:

"\n"

presence_penalty
number
default:0

Penalty for new tokens based on presence in text so far

Required range: -2 <= x <= 2
Example:

0

frequency_penalty
number
default:0

Penalty for new tokens based on frequency in text so far

Required range: -2 <= x <= 2
Example:

0
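
The documented ranges above can be checked client-side before sending a request, which surfaces mistakes without spending an API call. A sketch (the function name is illustrative):

```python
def validate_completion_params(params):
    """Check a request body against the ranges documented above.

    Raises ValueError on the first violation; returns params unchanged.
    """
    checks = {
        "max_tokens":        lambda v: v >= 1,
        "temperature":       lambda v: 0 <= v <= 2,
        "top_p":             lambda v: 0 <= v <= 1,
        "n":                 lambda v: 1 <= v <= 128,
        "presence_penalty":  lambda v: -2 <= v <= 2,
        "frequency_penalty": lambda v: -2 <= v <= 2,
    }
    for name, ok in checks.items():
        if name in params and not ok(params[name]):
            raise ValueError(f"{name}={params[name]!r} is out of range")
    # 'stop' accepts up to 4 sequences.
    stop = params.get("stop")
    if isinstance(stop, list) and len(stop) > 4:
        raise ValueError("stop accepts at most 4 sequences")
    return params

validate_completion_params({
    "model": "llama-3.3-70b-versatile",
    "prompt": "Say this is a test",
    "max_tokens": 100,
    "temperature": 0.7,
})  # passes silently
```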

Response

Successfully created completion

id
string
required

Unique identifier for the completion

Example:

"cmpl-7QyqpwdfhqwajicIEznoc6Q47XAyW"

object
enum<string>
required

Object type, always 'text_completion'

Available options:
text_completion
created
integer
required

Unix timestamp (in seconds) when the completion was created

Example:

1677664795

model
string
required

The model used for completion

Example:

"llama-3.3-70b-versatile"

choices
object[]
required

List of completion choices

usage
object