Microsoft President Brad Smith added his name this week to the growing list of tech industry giants sounding the alarm and calling on governments to regulate artificial intelligence (AI).
“Government needs to move faster,” Smith said during a Thursday morning panel discussion in Washington, D.C., that included policymakers, The New York Times reported.
Microsoft’s call for regulation comes at a time when the rapid development of artificial intelligence—specifically generative AI tools—has come under increased scrutiny from regulators.
AI may be the most consequential technology advance of our lifetime. Today we announced a 5-point blueprint for Governing AI. It addresses current and emerging issues, brings the public and private sector together, and ensures this tool serves all society. https://t.co/zYektkQlZy
— Brad Smith (@BradSmi) May 25, 2023
Generative AI refers to an artificial intelligence system capable of generating text, images, or other media in response to user-provided prompts. Prominent examples include the image generator platform Midjourney, Google’s Bard, and OpenAI’s ChatGPT.
The call for AI regulation has grown louder since the public launch of ChatGPT in November. Prominent figures, including Warren Buffett, Elon Musk, and even OpenAI CEO Sam Altman, have spoken out about the potential dangers of the technology. A key factor in the ongoing WGA writers’ strike is the fear that AI could be used to replace human writers, a sentiment shared by video game artists now that game studios are looking into the technology.
Smith endorsed requiring developers to obtain a license before deploying advanced AI projects, and suggested that what he called “high-risk” AI should operate only in licensed AI data centers.
The Microsoft executive also called on companies to take responsibility for managing the technology that has taken the world by storm, suggesting that the onus isn’t solely on governments to handle the potential societal impact of AI.
“That means you notify the government when you start testing,” Smith said. “Even when it’s licensed for deployment, you have a duty to continue to monitor it and report to the government if there are unexpected issues that arise.”
Despite the concerns, Microsoft has bet big on AI, reportedly investing over $13 billion in ChatGPT developer OpenAI and integrating the popular chatbot into its Bing search engine.
“We are committed and determined as a company to develop and deploy AI in a safe and responsible way,” Smith wrote in a post on AI governance. “We also recognize, however, that the guardrails needed for AI require a broadly shared sense of responsibility and should not be left to technology companies alone.”
In March, Microsoft released its Security Copilot, the first specialized tool for its Copilot line that uses AI to help IT and cybersecurity professionals identify cyber threats using large data sets.
Smith’s comments echo those given by OpenAI CEO Sam Altman during a hearing before the U.S. Senate Committee on the Judiciary last week. Altman suggested creating a federal agency to regulate and set standards for AI development.
“I would form a new agency that licenses any effort above a certain scale of capabilities, and that can take that license away and ensure compliance with safety standards,” Altman said.
Microsoft did not immediately respond to Decrypt’s request for comment.