AI programs will likely have applications across industries, Chopra said, and “the fact that big tech companies now overlap with major foundation models of AI raises even more questions about what we are doing to ensure they do not have disproportionate power,” he added.
He’s not the only one worried. For months, policymakers from the White House to the Securities and Exchange Commission have been strategizing to address some of the headaches that AI could cause financial institutions and markets.
SEC Chairman Gary Gensler expressed alarm that Wall Street firms will likely rely on a limited number of AI platforms, which could lead to sudden market instability. And Rostin Behnam, chairman of the Commodity Futures Trading Commission, recently unveiled a working group that could lead to new rules or guidance regarding AI in derivatives markets.
Meanwhile, Sen. Mark Warner (D-Va.) is working on legislation that would task the Financial Stability Oversight Council — an interagency body made up of top regulators — with responding to AI risks.
Chopra, an ally of Sen. Elizabeth Warren (D-Mass.), focused on how artificial intelligence has been used by lenders to automate decisions about access to credit. The bureau is also developing rules addressing how data brokers use the technology, in an effort to stem abuses.
While there has been much discussion about the need for AI-specific regulation, developing a rulebook will take time. Discussions about how to address the existential challenges that the rapidly evolving technology could pose to the economy have become more urgent now that internal debates over OpenAI’s future have spilled into public view.
“I don’t think it’s really clear what all the risks are that are out there,” Christy Goldsmith Romero, a longtime regulator who now serves on the CFTC, said in an interview.
Goldsmith Romero, who sponsored an advisory committee to help the derivatives regulator chart a path forward on AI, said the technology “is evolving so quickly that I think the first thing to do is to start from high-level principles that still apply every time we look at things: risk management, governance.”
Fears that the potential abuse of generative AI could lead to runaway computer programs often sound like science fiction. In the context of financial markets, AI programs could put the automated trading and lending capabilities of financial institutions “on steroids,” Chopra said.
If these programs make their own decisions based on the data they receive, they could “actually lead to very procyclical effects that would amplify the tremors and turn them into much larger financial earthquakes,” he said.
Emmett Shear, interim CEO of OpenAI, wrote on X that the board did not remove Altman over “any specific disagreement on safety” regarding OpenAI’s technology.
Questions are also being raised about what Microsoft’s hiring of Altman and other OpenAI executives could mean for the competitive landscape around AI, with some speculating that the personnel moves are akin to an acquisition.
This might be a difficult argument to make. OpenAI’s technology is still owned by OpenAI, and replicating its early successes under Microsoft’s auspices, rather than as an independent startup answering to a nonprofit board, will pose a challenge for Altman.
Still, FTC Chair Lina Khan wants to determine whether big tech companies have resorted to strategic investments in artificial intelligence startups to avoid regulatory scrutiny or harm competition.
In the meantime, Altman’s move to Microsoft benefits from at least one policy designed to keep labor markets competitive.
“I am sure OpenAI management and staff are grateful that non-competes are unenforceable in California,” an FTC official told POLITICO.