Countries around the world are drawing up rules and guidelines to govern artificial intelligence (AI) tools. The technology is advancing rapidly; OpenAI's ChatGPT, for example, can hold human-like conversations, and that pace makes it difficult for governments to write laws that keep up. Here is what some countries are doing to regulate AI:
Australia Wants Advice on How to Deal With AI
Australia's government has asked its main science advisory body for advice on how to handle AI and is considering its next steps, a spokesperson for the industry and science minister said.
India Has No Plans To Regulate AI Growth
"The Indian government is not considering bringing a law or regulating the growth of artificial intelligence in the country," said IT Minister Ashwini Vaishnaw in a written reply to parliament. Instead, the government is working to standardize responsible AI development practices.
US Seeks Public Comments on Accountability for AI Systems
The Biden administration said on April 11 that it is seeking public comments on possible ways to hold AI systems accountable. Earlier, President Biden told his science and technology advisers that AI could help with healthcare and climate change, but that it was also important to address the potential risks AI poses to society, national security, and the economy.
Italy to Lift Temporary Ban on OpenAI’s ChatGPT Only If Demands Are Met
Italy's data protection agency temporarily banned OpenAI's ChatGPT on March 31 over concerns that the chatbot might violate privacy rules and that it did not verify, as the agency had required, that users were at least 13 years old. On April 13, the agency gave OpenAI until the end of April to meet its demands on data protection and privacy; only then can ChatGPT operate in Italy again.
Britain Plans to Divide Responsibility for Controlling AI
Britain said in March that it plans to split responsibility for governing AI among the regulators that already oversee human rights, health, safety, and fair competition, rather than creating a new body dedicated to AI.
China Makes Rules to Manage AI Systems That Generate Content
China's internet regulator issued draft rules on April 11 to manage generative AI systems that produce content such as text or images. The agency wants companies to submit security assessments of their AI systems before offering them to the public. In February, Beijing's economy and IT department said it would support leading companies in building AI models that can compete with ChatGPT.
EU Plans to Introduce AI Law to Govern AI Use
EU lawmakers are discussing a proposed AI law for the European Union. The law would apply to anyone who sells or provides a product or service that uses AI, covering systems that generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with. Lawmakers have proposed classifying AI tools by how risky they are, from low risk to unacceptable.
France Investigates Complaints About ChatGPT
France's privacy regulator, CNIL, said on April 11 that it is investigating several complaints about ChatGPT, following the chatbot's temporary ban in Italy over suspected privacy violations. Separately, France's National Assembly approved in March the use of AI-assisted video surveillance during the 2024 Paris Olympics.
Japan Wants G7 to Discuss AI Technologies
Japan's digital transformation minister Taro Kono said on April 10 that he wants the upcoming G7 digital ministers’ meeting on April 29-30 to discuss AI technologies like ChatGPT. He wants the G7 to issue a joint statement on AI.
Spain Raises Privacy Concerns About ChatGPT
Spain's data protection agency has asked the EU's privacy regulator to assess privacy concerns about ChatGPT, the agency told Reuters on April 11.
In summary, as governments around the world move to regulate AI, OpenAI's ChatGPT has drawn scrutiny in several countries over possible privacy violations. The push to regulate AI stems from concerns about the harm it could cause to society, national security, and the economy. As the technology continues to advance rapidly, governments will need to keep pace with developments and craft effective rules to ensure AI is used responsibly.