Can AI chatbots supported by large language models, like ChatGPT with GPT-4, be directed to be deterministic?

 

AI chatbots supported by large language models (LLMs) like ChatGPT with GPT-4 can be guided to behave more deterministically, but achieving true determinism is challenging due to the inherent stochastic nature of LLMs and their reliance on probabilistic algorithms. Here's a breakdown:

Why LLMs Are Naturally Non-Deterministic

1.   Probabilistic Nature: LLMs assign probabilities to candidate next tokens and generate text by sampling from that distribution. Even small variations in inputs or internal state can lead to different outputs.

2.   Sampling Settings: The randomness in generation is controlled by decoding settings such as temperature and top-p (nucleus) sampling; a toy illustration follows this list.
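
As a rough illustration of where this randomness comes from, the sketch below samples a next token from a toy distribution using temperature scaling and top-p filtering. The four candidate tokens and their logits are invented for illustration; real models choose among tens of thousands of tokens.

```python
import numpy as np

def sample_next_token(logits, temperature=0.7, top_p=0.9, rng=None):
    """Toy next-token sampler: temperature-scaled softmax plus top-p (nucleus) filtering."""
    rng = rng or np.random.default_rng()
    if temperature == 0:
        # Greedy decoding: always take the highest-scoring token (no randomness).
        return int(np.argmax(logits))
    # Temperature scaling: lower values sharpen the distribution, higher values flatten it.
    probs = np.exp(np.array(logits, dtype=float) / temperature)
    probs /= probs.sum()
    # Top-p filtering: keep the smallest set of tokens whose cumulative probability reaches top_p.
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    keep = order[: int(np.searchsorted(cumulative, top_p)) + 1]
    kept_probs = probs[keep] / probs[keep].sum()
    # The random draw below is why identical prompts can yield different continuations.
    return int(rng.choice(keep, p=kept_probs))

logits = [2.0, 1.5, 0.5, -1.0]  # hypothetical scores for four candidate tokens
print([sample_next_token(logits, temperature=0.7) for _ in range(5)])  # may differ between runs
print([sample_next_token(logits, temperature=0.0) for _ in range(5)])  # always the same index
```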

How to Make LLMs More Deterministic

1.   Set Temperature to Zero:

-    The temperature parameter controls the randomness of the output. At 0, the model effectively always picks the most likely next token, which greatly reduces (though in practice does not completely eliminate) variability; see the API sketch after this list.

-    Example: A temperature of 0.7 allows for some randomness, while 0 makes responses more predictable.

2.   Fix a Seed (if supported):

-    Some implementations allow fixing a random seed for response generation, which makes outputs for the same input far more reproducible (providers generally describe this as best-effort rather than guaranteed).

-    This feature is more common in programming interfaces such as the OpenAI API (the seed parameter in the sketch after this list) than in chat front ends.

3.   Provide Clear and Consistent Inputs:

-    Small changes in phrasing, punctuation, or context can lead to different outputs. Use identical inputs for repeatable results; the second sketch after this list repeats one fixed prompt to check this.

4.   Limit Output Tokens:

-    Capping the number of output tokens (for example with a max_tokens limit) or narrowing the response scope leaves fewer opportunities for responses to diverge.

5.   Turn Off Certain Features:

-    Features such as chat memory, browsing, or tool use can pull session-dependent context into the conversation and introduce variability. Disabling these (where possible) may help.

6.   Use Structured Contexts or Prompts:

-    Providing highly structured prompts with explicit instructions and a fixed output format reduces ambiguity, making outputs more consistent (see the second sketch below).
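
As a concrete illustration of steps 1, 2, and 4, the sketch below shows how these settings map onto the OpenAI Chat Completions API via the official Python SDK. The model name, prompt, and seed value are placeholders, and the seed parameter is only honored on a best-effort basis by models and endpoints that support it.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_deterministically(prompt: str) -> str:
    """Request a completion with settings chosen to minimize run-to-run variation."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; substitute whichever model you actually target
        messages=[
            {"role": "system", "content": "Answer concisely and follow the requested format exactly."},
            {"role": "user", "content": prompt},
        ],
        temperature=0,   # step 1: greedy decoding, no sampling randomness
        seed=12345,      # step 2: fixed seed, best-effort reproducibility where supported
        max_tokens=200,  # step 4: cap output length to narrow the response scope
    )
    return response.choices[0].message.content

print(ask_deterministically("List three causes of non-determinism in LLM outputs."))
```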
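
For steps 3 and 6, a quick way to see how consistent the results actually are is to send one fixed, highly structured prompt several times and compare the answers. The snippet below reuses the hypothetical ask_deterministically helper from the previous sketch.

```python
# A fixed, structured prompt: explicit task, input, and output format.
STRUCTURED_PROMPT = (
    "Task: classify the sentiment of the sentence below.\n"
    "Sentence: The battery life is excellent but the screen is dim.\n"
    "Output: exactly one word, either POSITIVE, NEGATIVE, or MIXED."
)

# Repeat the identical request and check whether the answers agree.
answers = [ask_deterministically(STRUCTURED_PROMPT) for _ in range(3)]
if len(set(answers)) == 1:
    print("All runs matched:", answers[0])
else:
    print("Runs diverged despite fixed settings:", answers)
```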

Limitations to Determinism

Even with temperature set to 0, a fixed seed, and identical prompts, exact reproducibility is not guaranteed: floating-point arithmetic on GPUs and server-side batching can shift token probabilities slightly, the hosted ChatGPT interface does not expose these decoding controls, and provider-side model updates can change behavior between sessions.

Practical Determinism

For practical purposes, combining a temperature of 0, a fixed seed where available, and identical inputs can make AI chatbots behave effectively deterministically in most scenarios, though absolute determinism in all cases is unlikely without specialized adjustments.

 

From <https://chatgpt.com/>