Open Dev Kit Documentation :: Open AI :: AI Base Options
OpenAI.AIBaseOptions
Properties
- Max Tokens Integer
- The maximum number of tokens to generate in the completion.
- Temperature Float
- What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or Top Probability but not both.
- Top Probability Float
- An alternative to sampling with temperature, called nucleus sampling, in which the model considers only the tokens comprising this probability mass. For example, 0.1 means only the tokens comprising the top 10% probability mass are considered.
- Reasoning Effort AI Model Reasoning Effort
- Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
- Service Tier AI Model Service Tier
- Specifies the latency tier to use for processing the request.
- Response Schema Variant Dictionary
- A JSON schema describing the desired response format, used to generate structured responses via Structured Outputs.
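As a rough illustration of how these options fit together, here is a minimal Python sketch that assembles a request-options dictionary and enforces the documented constraints. The function name `build_base_options` and the snake_case keys are assumptions modeled on the OpenAI REST API that these properties mirror; this is not the actual Open Dev Kit API.

```python
def build_base_options(max_tokens=None, temperature=None, top_p=None,
                       reasoning_effort=None, service_tier=None,
                       response_schema=None):
    """Assemble a request-options dict, validating the documented ranges.

    Hypothetical helper; field names follow the OpenAI REST parameters
    that the properties on this page correspond to.
    """
    if temperature is not None and not 0 <= temperature <= 2:
        # Temperature is documented as a float between 0 and 2.
        raise ValueError("temperature must be between 0 and 2")
    if temperature is not None and top_p is not None:
        # The docs recommend altering Temperature or Top Probability,
        # but not both.
        raise ValueError("set temperature or top_p, not both")
    options = {
        "max_tokens": max_tokens,           # cap on tokens in the completion
        "temperature": temperature,         # higher = more random output
        "top_p": top_p,                     # nucleus-sampling probability mass
        "reasoning_effort": reasoning_effort,  # e.g. "low" / "medium" / "high"
        "service_tier": service_tier,       # latency tier for the request
        "response_format": response_schema, # JSON schema for Structured Outputs
    }
    # Drop unset fields so server-side defaults apply.
    return {k: v for k, v in options.items() if v is not None}

# Example: a short, focused completion.
opts = build_base_options(max_tokens=256, temperature=0.2)
```

Omitting unset fields, rather than sending explicit nulls, lets the service apply its own defaults for anything the caller does not specify.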
