The U.S. Senate will hold the second in a series of bipartisan AI Insight Forums on Tuesday, Oct. 24, where senators will hear from some of the most influential tech leaders to help inform regulations around the technology.
This follows last month’s first-ever AI Insight Forum, which brought together senators and big tech leaders — including Sam Altman (CEO, OpenAI), Bill Gates, Elon Musk and Mark Zuckerberg — at the U.S. Capitol to discuss the impact of artificial intelligence and the potential threats it poses.
The forum was preceded by a series of public hearings on regulating AI in areas such as privacy, transparency and public trust. One such hearing introduced what is being called the “Bipartisan Framework for U.S. AI Act,” which aims to lay the groundwork for AI regulation.
Here are the five points outlined in the proposed framework:
- Establish a licensing regime administered by an independent oversight body.
- Ensure legal accountability for harms.
- Defend national security and international competition.
- Promote transparency.
- Protect consumers and kids.
ASU News spoke with Paulo Shakarian, associate professor at Arizona State University’s School of Computing and Augmented Intelligence, to learn more about what this means.
Question: Why are government officials calling on tech leaders for expertise in crafting legislation on artificial intelligence?
Answer: Congress needs to balance growth and regulation in an area that is rapidly evolving. It is very difficult to forecast what types of regulation will be required because technology often evolves in unexpected ways. For example, many in the industry had predicted fully autonomous vehicles on the roads by 2021, yet that did not occur, and neither did many of the predicted effects on the economy; the technologists ran into impediments that prevented progress. On the other hand, the emergence of large language models has been more rapid than expected, which can pose threats to the information space. What makes this even more difficult is that, in both of these examples, the trends could potentially reverse.
Q: How can Congress promote responsibility and due diligence by requiring transparency from the companies developing and deploying AI systems?
A: In my view, a key aspect missing from the broader discussion on regulation is how to encourage companies to drive development toward stronger and more socially acceptable artificial intelligence. For example, many large companies focus their AI efforts on advertising, which can produce great profits but where the cost of an algorithm’s failure is very low. When those same ideas are applied to more mission-critical applications, failures start to occur that cannot be resolved by engineering alone. Congress needs to use regulation to encourage companies to fund research that promotes scientific advances in AI safety, fairness, explainability and modularity.
Q: Should the government create a new AI regulator? Or should this be in the hands of existing agencies?
A: AI regulation is not going to be straightforward and will likely be a moving target. For example, calls to prohibit companies from training models above a certain size without a license may not be effective because of techniques such as neural network distillation, which can transfer much of a large model’s capability into a far smaller model that falls below any size threshold. Further, incumbents have an incentive to promote regulation as a barrier to entry for startups. Because these questions revolve around very specific technical and business concerns, it may make sense to have a dedicated AI regulator where such expertise can reside. There should also be strict ethical guidelines for regulators and separation from those being regulated; otherwise, we risk such an agency being used to further corporate goals instead of the public good.
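To illustrate the distillation point: in knowledge distillation, a large “teacher” model’s soft predictions are used to train a much smaller “student” model, which can then reproduce much of the teacher’s behavior while staying under a parameter-count threshold. Below is a minimal sketch, assuming a PyTorch-style setup; the model sizes and the training data are hypothetical placeholders.

```python
# Minimal knowledge-distillation sketch (hypothetical sizes, placeholder data).
# A small "student" is trained to match the softened output distribution of a
# large, already-trained "teacher," transferring capability to a model that
# would fall under any parameter-count licensing threshold.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(512, 4096), nn.ReLU(), nn.Linear(4096, 100))  # large model, assumed pretrained
student = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 100))    # far smaller model
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
T = 2.0  # temperature: softens the teacher's distribution so it carries more signal

for step in range(1000):
    x = torch.randn(32, 512)  # stand-in for a batch of real training inputs
    with torch.no_grad():
        teacher_logits = teacher(x)  # the teacher provides the supervision signal
    student_logits = student(x)
    # KL divergence between the softened teacher and student distributions
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The practical upshot is the one Shakarian raises: a rule keyed to model size alone can be sidestepped, because the risk lies in the capability rather than the parameter count.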
Q: How can AI companies boost transparency and the public’s trust?
A: AI companies are currently overinvested in traditional neural architectures that are black box in nature, do not allow for constraints and are not modular. These are inherent limitations of the approach; overcoming them requires scientific advancement, not simply more engineering effort or computing power. Congress should set requirements that become more stringent over time to force companies to continually invest in these areas.
Q: What are the “safety breaks” that AI companies should be required to implement?
A: The concept of a safety break is somewhat flawed because current AI systems (e.g., generative AI) cannot enforce stringent constraints, so the companies building these systems resort to a variety of tests to determine where safety breaks are needed. This differs from standard engineering practice: if a system is not designed to provide a guarantee, a test plan cannot be constructed rigorously enough to ensure the guarantee is met. Today, AI firms design their systems with no firm guarantees at the model level; ad hoc testing is then conducted to decide whether there should be any safety breaks, which are implemented at the product level. This is a fundamentally incomplete way of implementing safety breaks, and it differs greatly from other systems engineered by people.
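To make the model-level versus product-level distinction concrete, here is a minimal sketch (the function names and filter patterns are hypothetical) of a product-level safety break: a filter bolted on after generation. It can only reject outputs matching patterns someone thought to test for; nothing in the underlying model guarantees that an unsafe output is never produced.

```python
# Hypothetical sketch of a product-level "safety break": the underlying
# generative model offers no guarantees of its own, so safety is a filter
# applied after the fact. The filter catches only what its authors anticipated.
import re

BLOCKED_PATTERNS = [r"(?i)synthesize a toxin", r"(?i)social security number"]  # ad hoc list built from testing

def generate(prompt: str) -> str:
    """Stand-in for a call to a generative model with no built-in constraints."""
    return "model output for: " + prompt

def safe_generate(prompt: str) -> str:
    """Product-level safety break: generate first, then filter the result."""
    output = generate(prompt)
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, output):
            return "[response withheld by product-level filter]"
    return output  # anything the pattern list does not recognize passes through unchecked

print(safe_generate("summarize the new framework"))
```

A model-level guarantee, by contrast, would make certain outputs impossible by construction, which is what Shakarian argues current generative architectures cannot yet provide.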
Q: How would this framework impact AI innovation?
A: Whether regulation is light or absent altogether, the most advanced AI systems will continue to be built for the most profit-producing applications — which almost by definition have a low false-positive cost. Regulation can also go in the other direction, imposing restrictions in an overly burdensome manner that limits innovation — especially for emerging startups without the resources to obtain licenses or other certifications. A happy medium would be regulations that focus on the shortcomings of the profit-producing applications and become more stringent over time — which will drive innovation and result in novel startups that push the science to address newly imposed government requirements.