
How Should AI Systems Behave and Who Should Decide?

Unlike ordinary software, our models are massive neural networks. Their behavior is learned from a broad range of data, not programmed explicitly. Though not a perfect analogy, the process is more similar to training a dog than to ordinary programming. First comes a "pre-training" phase, in which the model learns to predict the next word in a sentence, informed by its exposure to a large amount of Internet text (and to a wide variety of viewpoints). This is followed by a second phase in which the model is "fine-tuned" to narrow down system behavior.
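The core idea of the pre-training phase, learning to predict the next word from raw text, can be illustrated with a deliberately tiny sketch. This is a toy bigram counter, not how GPT models actually work internally (they use deep neural networks, not lookup tables), but it shows the same "predict what comes next" objective:

```python
from collections import Counter, defaultdict

# Toy "training corpus"; real pre-training uses vast amounts of Internet text.
corpus = "the cat sat on the mat the cat ate the food".split()

# "Learn" by counting which word follows which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Everything the toy model "knows" comes from statistics of its training text, which is why the data a model sees (and the viewpoints in it) shapes its behavior.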

As of today, this process is imperfect. Sometimes the fine-tuning process falls short of both our goal (producing a safe and useful tool) and the user's goal (getting a helpful output in response to a given input). Improving our methods for aligning AI systems with human values is a top priority for the company, especially as AI systems become more capable.

We are committed to broadening access to, use of, and influence over AI and AGI. We believe there are at least three building blocks required to achieve these goals in the context of AI system behavior.

  1. Improve default behavior. We want as many users as possible to find AI systems useful to them "out of the box" and to feel that our technology understands and respects their values.

    To this end, we are investing in research and engineering to reduce both glaring and subtle biases in how ChatGPT responds to different inputs. In some cases ChatGPT currently refuses outputs that it shouldn't, and in other cases it fails to refuse when it should. There is also room for improvement in other dimensions of system behavior, such as the system "making things up." Feedback from users is invaluable for making these improvements.

  2. Define your AI's values, within broad bounds. We believe that AI should be a useful tool for individual people, and therefore customizable by each user up to limits defined by society. For this reason, we are developing an upgrade to ChatGPT that allows users to easily customize its behavior.

    This will mean allowing system outputs that other people (ourselves included) may strongly disagree with. Striking the right balance here will be challenging; taking customization to the extreme would risk enabling malicious uses of our technology and sycophantic AIs that mindlessly reinforce people's existing beliefs.

    Therefore there will always be some limits on system behavior. The challenge is to define what these limits are. 

  3. Public input on defaults and hard limits. One way to avoid excessive concentration of power is to give people who use or are affected by systems like ChatGPT the ability to influence their rules.
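The relationship described in items 2 and 3, per-user customization constrained by society-defined hard limits, can be sketched in a few lines. Every name below (`HARD_LIMITS`, `effective_policy`) is hypothetical and purely illustrative, not a real ChatGPT API:

```python
# Hypothetical hard limits that no user setting may override.
HARD_LIMITS = {"disallow_malware_instructions", "disallow_targeted_harassment"}

def effective_policy(user_preferences: dict) -> dict:
    """Merge a user's customizations with society-defined hard limits.

    Users may freely adjust defaults (tone, verbosity, default stances),
    but the hard limits are re-asserted last, so no user preference can
    disable them.
    """
    policy = dict(user_preferences)  # start from the user's choices
    for limit in HARD_LIMITS:
        policy[limit] = True         # the bounds always win
    return policy
```

The design point the sketch makes is ordering: user preferences are applied first and the shared limits last, so customization operates only inside the bounds.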
