The State of AI Regulations
Nov 05, 2024

Until recently, researchers and developers of artificial intelligence technologies have been able to pursue their work unfettered by government regulation. In the absence of meaningful regulatory scrutiny, academic researchers and for-profit developers alike have been able to “fly under the radar” and produce some amazing AI-based applications.
Legislators and regulators ignored the progress of AI because, until a couple of years ago, most AI applications were useful only for niche tasks with narrowly defined boundaries. But then a few things happened to attract government attention to AI:
- Companies started using AI algorithms to make decisions about credit, insurance, and job offers, and customers and job applicants were not pleased to have machines making these kinds of decisions about them.
- ChatGPT and other generative AI applications started making headlines for their usefulness as well as for their shortcomings.
- A well-publicized “AI safety” movement emerged, with an open letter calling for a “pause” in the development of powerful large AI models. Its signatories claimed that failure to pause could result in an existential threat to the human race.
- High-profile AI company leaders such as OpenAI’s Sam Altman testified before the U.S. Senate Judiciary Committee last year about the potential dangers of AI and the need for government oversight. (How much of that testimony reflects genuine concern for safety, and how much is an effort to raise barriers to entry against potential competitors, is an open question.)
So governments around the world began to take notice, and various regulatory frameworks have been proposed; some have already been enacted into law. Will these regulations keep humans safe from rogue AI models, or will they go too far, stifling AI research or driving it underground?
Current and Proposed Regulations
First, let’s have a look at the current state of AI-related regulations around the world.
Europe
As is often the case with regulating new technologies, the European Union (EU) is further along than other governments. The AI Act, which gained final approval in May 2024, subjects AI-based applications to different regulatory standards according to their risk to health, safety, fundamental rights, the environment, fair and free elections, and the rule of law.
Those applications judged to have “unacceptable risk” (such as social-scoring applications and real-time facial recognition) are banned outright. Other applications face a review process before being allowed on the market, and some high-risk applications will be registered in an EU database.
Generative AI applications are not considered high risk, but their developers must meet transparency requirements, including publishing summaries of the copyrighted data used for training.
The law’s provisions apply to any AI application developed or deployed in any of the EU’s 27 member nations.
United States
The U.S. federal government currently has no comprehensive AI legislation pending. In late 2023, President Joe Biden issued an executive order regarding AI technology, but its scope is limited and seems intended to inspire Congress to take steps of its own.
For its part, Congress has been slow to take any specific actions, although the Senate did issue a “roadmap for AI policy” that suggests possible regulatory paths as well as federal R&D funding for AI.
The California legislature recently passed its own AI regulation, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), but Gov. Gavin Newsom vetoed it at the end of September 2024. The bill would have applied to AI companies operating in California and was aimed mainly at safety issues: it would have required developers to implement testing procedures to prevent their models from “causing or enabling a critical harm.”
China
China also has not yet enacted comprehensive nationwide AI legislation, although its Cyberspace Administration has already issued interim measures governing generative AI services, and at least one broader proposal is making its way through the legislative process. Look for more news from China in the coming months regarding its AI regulations.
Takeaways
Just as it’s still early days for AI, legislation around it is not settled and won’t be for some time. Even laws that are now on the books, and bills that never made it that far, have been subject to criticism. The California bill that finally passed the legislature was far different from its original proposal, with many of its stricter provisions watered down in response to feedback from AI companies, researchers, and other observers. Some fear that Europe’s law will gut the region’s small but growing AI industry with excessive bureaucracy.
In any case, the next few years will be marked by an uncertain regulatory landscape as laws are passed and court cases around them are adjudicated. Companies in the AI space, whether developing applications for internal use or for sale to customers, would be wise to keep their fingers on the pulse of pending legislation and be ready to pivot in response to new regulations.
One way to do so is to leverage a flexible, scalable, and readily available AI cloud development environment such as the one TensorWave provides, built on AMD’s MI300X GPU accelerators. Our always-available service shortens development and test cycles, so when new or proposed regulations require a design change or a compliance assessment, you can re-train and re-test in short order rather than wait for oversubscribed GPUs to become available.
To learn more about how TensorWave can help you navigate an uncertain regulatory environment, book a demo today.