Creates a completion for the provided prompt and parameters. This endpoint supports both streaming and non-streaming responses.
Enter your Emby API key. Get your key from the Emby dashboard at https://dev.emby.ai
ID of the model to use
"llama-3.3-70b-versatile"
The prompt(s) to generate completions for
"Say this is a test"
The maximum number of tokens to generate
x >= 1
Sampling temperature between 0 and 2
0 <= x <= 2
Nucleus sampling parameter
0 <= x <= 1
Number of completions to generate
1 <= x <= 128
Whether to stream back partial progress
false
Up to 4 sequences where the API will stop generating
"\n"
Penalty for new tokens based on presence in text so far
-2 <= x <= 2
Penalty for new tokens based on frequency in text so far
-2 <= x <= 2
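The request parameters above can be sketched as a small helper that validates the documented ranges and builds a JSON-serializable request body. This is a minimal sketch, not a confirmed client: the endpoint URL is an assumption, and the field names follow the parameter descriptions above.

```python
import json

# Assumed endpoint URL -- not confirmed by this reference.
API_URL = "https://api.emby.ai/v1/completions"

def build_completion_request(prompt, model="llama-3.3-70b-versatile",
                             max_tokens=16, temperature=1.0, top_p=1.0,
                             n=1, stream=False, stop=None,
                             presence_penalty=0.0, frequency_penalty=0.0):
    """Validate the documented parameter ranges and return a request body."""
    if max_tokens < 1:
        raise ValueError("max_tokens must be >= 1")
    if not 0 <= temperature <= 2:
        raise ValueError("temperature must be between 0 and 2")
    if not 0 <= top_p <= 1:
        raise ValueError("top_p must be between 0 and 1")
    if not 1 <= n <= 128:
        raise ValueError("n must be between 1 and 128")
    if not -2 <= presence_penalty <= 2:
        raise ValueError("presence_penalty must be between -2 and 2")
    if not -2 <= frequency_penalty <= 2:
        raise ValueError("frequency_penalty must be between -2 and 2")
    if stop is not None and len(stop) > 4:
        raise ValueError("at most 4 stop sequences are allowed")
    body = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "top_p": top_p,
        "n": n,
        "stream": stream,
        "presence_penalty": presence_penalty,
        "frequency_penalty": frequency_penalty,
    }
    if stop is not None:
        body["stop"] = stop
    return body

request_body = build_completion_request("Say this is a test", stop=["\n"])
print(json.dumps(request_body, indent=2))
```

In a real call, this body would be POSTed with an `Authorization` header carrying your API key; with `stream=True`, the response arrives as partial chunks instead of a single object.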
Successfully created completion
Unique identifier for the completion
"cmpl-7QyqpwdfhqwajicIEznoc6Q47XAyW"
Object type, always 'text_completion'
text_completion
Unix timestamp (in seconds) when the completion was created
1677664795
The model used for completion
"llama-3.3-70b-versatile"
List of completion choices
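The response fields above can be read back from a non-streaming response as plain JSON. The sketch below parses a sample shaped like the documented fields (`id`, `object`, `created`, `model`, `choices`); the inner structure of each choice (`text`, `index`, `finish_reason`) is an assumption beyond this excerpt.

```python
import json

# Sample response using the documented field names and example values;
# the per-choice keys are assumed, not taken from this reference.
sample = json.loads("""
{
  "id": "cmpl-7QyqpwdfhqwajicIEznoc6Q47XAyW",
  "object": "text_completion",
  "created": 1677664795,
  "model": "llama-3.3-70b-versatile",
  "choices": [
    {"text": "This is a test.", "index": 0, "finish_reason": "stop"}
  ]
}
""")

# The object type is documented as always 'text_completion'.
assert sample["object"] == "text_completion"
print(sample["id"], sample["model"])
for choice in sample["choices"]:
    print(choice["index"], choice["text"])
```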