Microsoft asks for AI rules to minimize risk

Microsoft on Thursday endorsed a set of regulations for artificial intelligence, as the company navigates concerns from governments around the world about the risks of the rapidly evolving technology.

Microsoft, which has promised to embed artificial intelligence in many of its products, has proposed regulations that include a requirement that systems used in critical infrastructure can be fully turned off or slowed down, much like an emergency braking system on a train. The company also called for laws to clarify when additional legal obligations apply to an AI system and for labels that clearly indicate when an image or video was produced by a computer.

“Companies need to step up,” Microsoft President Brad Smith said in an interview about the push for regulation. “The government needs to act faster.” He presented the proposals to an audience that included lawmakers at an event in downtown Washington on Thursday morning.

The call for regulation punctuates an AI boom, with the release of the chatbot ChatGPT in November sparking a flurry of interest. Companies such as Microsoft and Google’s parent company Alphabet have since rushed to integrate the technology into their products. This has fueled fears that companies are sacrificing safety to reach the next big breakthrough before their competitors.

Lawmakers have publicly expressed concern that these AI products, which can generate text and images on their own, will create a flood of misinformation, be used by criminals and put people out of work. Washington regulators have pledged to be vigilant about scammers using AI and instances in which the systems perpetuate discrimination or make decisions that violate the law.

In response to this scrutiny, AI developers have increasingly called for shifting some of the burden of technology oversight onto government. Sam Altman, the chief executive of OpenAI, which makes ChatGPT and counts Microsoft as an investor, told a Senate subcommittee this month that the government should regulate the technology.

The maneuver echoes calls for new privacy or social media laws by internet companies such as Google and Facebook’s parent company Meta. In the United States, lawmakers have moved slowly on such calls, with few new federal rules on privacy or social media in recent years.

In the interview, Mr. Smith said that Microsoft was not trying to absolve itself of responsibility for managing the new technology, as it was offering specific ideas and was committed to carrying out some of them whether or not the government acted.

“There is not one iota of abdication of responsibility,” he said.

He endorsed the idea, supported by Mr. Altman during his congressional testimony, that a government agency should require companies to obtain licenses to deploy “high-performance” AI models.

“That means you let the government know when you start testing,” Smith said. “You have to share the results with the government. Even when it is cleared for deployment, you have a duty to continue to monitor it and report to the government if any unexpected issues arise.”

Microsoft, which made more than $22 billion from its cloud computing business in the first quarter, also said these high-risk systems should be allowed to operate only in “licensed AI data centers.” Mr. Smith acknowledged that the company would not be “badly placed” to offer such services, but said many American competitors could also provide them.

Microsoft added that governments should designate certain AI systems used in critical infrastructure as “high risk” and require that they have a “safety brake.” The company compared this feature to “braking systems that engineers have long incorporated into other technologies such as elevators, school buses, and high-speed trains.”

In certain sensitive cases, Microsoft said, companies that provide AI systems should know certain information about their customers. And to protect consumers from deception, AI-created content should be required to carry a special label, the company said.

Mr. Smith said companies should bear legal “responsibility” for harms associated with AI. In some cases, he said, the liable party could be the developer of an app, like Microsoft’s Bing search engine, that uses someone else’s underlying AI technology. Cloud companies could be responsible for complying with security regulations and other rules, he added.

“We don’t necessarily have the best information or the best answer, or we may not be the most credible speaker,” Smith said. “But, you know, right now, especially in Washington, D.C., people are looking for ideas.”
