W/Y expert Description
Create your custom assistant and experiment with all the available parameters of ChatGPT.
Additionally, you can choose to keep the context of your conversation and manage your personal assistant's memory.
Your personal data will NOT be collected. To use this app, you must have an account with OpenAI (openai.com), the world's most popular AI service provider.
Thank you all for your positive comments and feedback. Have fun!
--
Current parameters to play with:
- model: gpt-4 or gpt-3.5-turbo
- endpoint: editable, current default /v1/chat/completions
- max_tokens: The maximum number of tokens to generate in the completion.
- temperature: Higher values make the output more random; lower values make it more focused and deterministic.
- top_p: Nucleus sampling; the model considers only the tokens comprising the top_p probability mass.
- n: How many completions to generate for each prompt.
- stop: Sequences at which the API will stop generating further tokens.
- presence_penalty: Positive values increase the model's likelihood to talk about new topics.
- frequency_penalty: Positive values decrease the model's likelihood to repeat the same line verbatim.
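As a rough illustration, here is a sketch of how these parameters map onto a Chat Completions request body. The specific values (model choice, 0.7 temperature, the stop sequence, and so on) are hypothetical defaults chosen for the example, not the app's actual settings:

```python
import json

# Sketch of a request body for the /v1/chat/completions endpoint,
# using the parameters listed above. Values are illustrative only.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are a helpful personal assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "max_tokens": 256,         # maximum number of tokens to generate
    "temperature": 0.7,        # higher = more random, lower = more deterministic
    "top_p": 1.0,              # nucleus sampling probability mass
    "n": 1,                    # number of completions per prompt
    "stop": ["\n\n"],          # stop sequence(s)
    "presence_penalty": 0.0,   # > 0 encourages new topics
    "frequency_penalty": 0.0,  # > 0 discourages verbatim repetition
}

# In practice this body would be POSTed to the endpoint with an
# "Authorization: Bearer <your OpenAI API key>" header (omitted here).
print(json.dumps(payload, indent=2))
```

Tweaking one parameter at a time (for example, raising temperature while leaving top_p at 1.0) is the easiest way to see what each one does.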