With the rapid advancements in artificial intelligence (AI), the necessity for regulatory frameworks has become more pressing than ever. Despite some strides made in state-level regulation and efforts from federal entities, the U.S. still grapples with how best to regulate AI effectively. The complexity of this task is underscored by several recent legislative developments—some successful, others thwarted—revealing the contentious and multifaceted nature of AI governance.
As states confront the implications of AI technology, Tennessee took a pioneering step in March 2024 by becoming the first state to pass legislation protecting voice artists against unauthorized AI cloning. That landmark measure, the ELVIS Act, reflects growing recognition of the potential for misuse of AI technologies. Similarly, in May 2024 Colorado enacted a risk-based policy framework that categorizes AI systems according to their potential hazards. These early efforts represent critical shifts in thinking about how state-level legislation can assert control over emerging technologies.
California, known for its tech-centric policies, has seen a flurry of activity aimed at regulating AI. Governor Gavin Newsom signed a series of AI-related safety bills in September 2024, including measures requiring companies to disclose details about how they train their AI models. Even in this proactive environment, however, significant obstacles remain. Newsom vetoed Senate Bill 1047, which would have imposed broad safety and transparency requirements on developers of large AI models. The decision highlighted the influence of special interests and the ongoing debate over how to impose regulatory measures without stifling innovation.
While states take tentative steps, the absence of a cohesive federal policy akin to the European Union’s AI Act leaves a regulatory vacuum. This gap raises concerns about the U.S.’s capacity to effectively regulate AI technologies before they proliferate unchecked. Jessica Newman, co-director of the AI Policy Hub at UC Berkeley, emphasizes that while the U.S. may have been described as a “Wild West” in AI regulation, the reality is more nuanced. Existing legislation, such as anti-discrimination and consumer protection laws, can in fact be applied to AI, albeit in a piecemeal manner.
Federal agencies are also beginning to make strides. The Federal Trade Commission (FTC) has taken action against companies it says harvested data unlawfully for AI models, and its investigations into the antitrust implications of AI startup acquisitions by major tech firms signal growing awareness of the need for oversight. The Federal Communications Commission (FCC), for its part, has moved to classify robocalls that use AI-generated voices as illegal, a sign that regulatory measures are evolving, albeit slowly.
In a significant move last year, President Biden signed an AI executive order establishing the U.S. AI Safety Institute (AISI) within the National Institute of Standards and Technology (NIST). The AISI studies AI risks and collaborates with leading AI laboratories such as OpenAI and Anthropic. However, the institute's future hangs in the balance: its existence is contingent on the executive order's survival. In October, a coalition of organizations urged Congress to enact legislation permanently codifying the AISI to ensure long-term stability in AI oversight.
Elizabeth Kelly, director of the AISI, has said that Americans share an interest in mitigating the risks of rapid technological advancement, and that an informed regulatory approach is crucial. Despite the challenges, there remains a sense of cautious optimism among some policymakers and experts that comprehensive AI regulation is achievable.
The setbacks encountered, such as the veto of SB 1047, aren't entirely discouraging. California State Senator Scott Wiener, who sponsored the bill, remains hopeful that the extensive dialogue around AI hazards, including industry leaders' acknowledgment of genuine risks, could pave the way for more robust legislation. Some tech leaders argue that regulation should be collaborative rather than adversarial.
Conversely, powerful voices in the technology sector have vehemently opposed stringent regulation, arguing that it threatens innovation and their financial interests. Prominent investors such as Vinod Khosla have publicly dismissed the qualifications of the policymakers seeking to address AI risks, underscoring how contested the path to regulation remains.
However, as more than 700 pieces of AI legislation emerge across states this year, Newman suggests that the collective pressure to unify regulations may inspire stronger, more cohesive federal solutions. As stakeholder discussions continue, the imperative remains: how to balance the dual needs of fostering innovation while ensuring public safety in an increasingly AI-driven world.
The regulatory framework for AI in the U.S. is still a work in progress, marked by promising initiatives and considerable challenges. The achievement of meaningful oversight will require a concerted effort across federal and state levels, driven by collaborative dialogue among regulators, technologists, and the broader public. As the landscape evolves, the overarching goal must remain clear: to cultivate an AI ecosystem that prioritizes ethical considerations, public safety, and innovation.