ChatGPT maker OpenAI has proposed a new international body for regulating artificial intelligence (AI). Led by CEO Sam Altman, the company said that within the next ten years, AI systems could exceed expert skill levels in most domains and carry out as much productive activity as one of today's largest corporations.
In a blog post on Monday, OpenAI explained its reasoning for regulating AI. The company said, “The governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight. We believe people around the world should democratically decide on the bounds and defaults for AI systems.”
“We don’t yet know how to design such a mechanism, but we plan to experiment with its development,” OpenAI added.
The blog post was written by OpenAI CEO Sam Altman, President Greg Brockman and Chief Scientist Ilya Sutskever. The post compares ‘superintelligence’ to nuclear energy and suggests the creation of an authority similar to the International Atomic Energy Agency to mitigate the risks of AI.
OpenAI’s plan for tackling the challenges posed by AI:
OpenAI proposed a three-point agenda to mitigate the risks of superintelligent AI systems of the future.
1) Coordination among AI makers: OpenAI’s blog post suggests that companies behind AI systems such as Bard, Bing and Anthropic’s Claude should make a coordinated effort to ensure that the development of ‘superintelligence’ happens in a way that ensures safety and helps the smooth integration of these systems into society.
The ChatGPT maker has suggested two ways in which this coordination could take place: governments around the world could set up a regulatory system involving leading AI manufacturers, or these companies could agree among themselves to limit AI growth to a certain rate per year.
2) International regulatory body: OpenAI has suggested a new international body, much like the International Atomic Energy Agency, to mitigate the existential risks posed by superintelligent AI systems. As per OpenAI, the proposed body should have the authority to inspect systems, require audits, test for compliance with safety standards, and place restrictions on degrees of deployment and levels of security.
3) Safer superintelligence: OpenAI says it is working on making artificial intelligence systems safer and more aligned with human values and intent.