Vianai's Thoughts on the AI Bill of Rights

October 4, 2022

The White House recently released a blueprint for an AI Bill of Rights. The blueprint marks a significant step toward substantive action on AI by the U.S. federal government.

With this blueprint, the White House's Office of Science and Technology Policy offers well-considered, well-researched recommendations on everything from how human intervention could mitigate many of the more frightening ramifications of AI to how to build safe and effective AI systems going forward.

This builds on the AI Risk Management Framework put forward by NIST earlier this year, on which Vianai also provided public comment.

Summary of the AI Bill of Rights

There's a lot to like. The blueprint provides a comprehensive guide that will enable companies that previously sat on the sidelines to begin to build AI systems because now there is clarity around some of the key risks and how to mitigate them. This is an important step in the right direction for shifting AI from a "nice to have" aspiration to a real-world amplifier of our human potential.

Take, for example, monitoring. The blueprint all but mandates monitoring and automatic retraining of AI models as they stray from the ground truth. This practice can go a long way toward mitigating the risks around AI. Best-in-class monitoring solutions not only "watch" AI models but also predict when they will need to be retrained, and have access to the data necessary to retrain them.
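The monitoring-and-retraining loop described above can be sketched in a few lines. The class name, window size, and accuracy threshold below are illustrative assumptions for the sketch, not anything the blueprint prescribes:

```python
# Minimal sketch of drift monitoring: track a model's recent accuracy against
# ground-truth labels as they arrive, and flag retraining when performance
# strays below a threshold.

from collections import deque

class DriftMonitor:
    """Watches recent prediction outcomes and signals when retraining is needed."""

    def __init__(self, window_size=100, accuracy_threshold=0.9):
        # Keep only the most recent outcomes (True = prediction matched truth).
        self.window = deque(maxlen=window_size)
        self.accuracy_threshold = accuracy_threshold

    def record(self, prediction, ground_truth):
        """Log one prediction alongside its eventual ground-truth label."""
        self.window.append(prediction == ground_truth)

    def rolling_accuracy(self):
        """Accuracy over the recent window; 1.0 when no evidence has arrived yet."""
        if not self.window:
            return 1.0
        return sum(self.window) / len(self.window)

    def needs_retraining(self):
        """True when recent accuracy has drifted below the threshold."""
        return self.rolling_accuracy() < self.accuracy_threshold
```

In practice, `record` would be fed as ground-truth labels trickle in after predictions are made, and a `needs_retraining()` signal would trigger the retraining pipeline against the freshly collected data.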

Next, the blueprint encourages several steps for reporting, risk identification and mitigation, and independent evaluation. These three categories put a significant onus on a company engaged in AI to actively identify and eliminate risk. While there is often a worry that regulation will inhibit the growth of an industry, this type of regulation is not without precedent: it has been present in the financial industry for years, where it has enabled responsible growth in AI. Financial services firms must audit and report their high-risk models on a regular basis. And the financial services industry is one of the most advanced in terms of AI - many enterprises run thousands of models and are seeing significant financial gains from them.

Finally, the section of keen interest - and perhaps the most nebulous - is the notion of human alternatives. Here at Vianai, we believe fundamentally in human-centered AI. The blueprint places heavy emphasis on providing a human alternative, so that opting out of an automated system is reasonable and not burdensome.

Vianai's Thoughts

This centers on a fundamental issue with AI: in many cases, a system that is mostly accurate isn't enough. There are large, critical applications in which AI could play a much larger role. Still, without a human alternative or the ability to rapidly troubleshoot an issue, the technology will languish and potentially become dangerous.

For example, autonomous driving, specifically Level 5 autonomy, wherein the system needs no human intervention, is very far off technologically, if not impossible. Level 4 autonomy, though, is possible with the right infrastructure around the vehicle and the ability for a human to intercede should the car confront circumstances that it cannot understand. Such infrastructure could include widespread, high-bandwidth WiFi, which enables a remote operator to take control of the vehicle should something happen that confuses the algorithms.

But what we cannot accept is a future where we become desensitized to the fallout from AI systems. Autonomous vehicles have resulted in fatalities. Unchecked recommendation systems have enabled the mass spread of misinformation. Computer vision systems have enabled systemic hiring biases. This technology is ground-breaking and can help streamline societal processes, but it needs to be human-led and watched closely to work in the right way.

We need monitoring, explainability, ease of interpretation, and so much more. Some of these systems defy any understanding beyond technical explanations: they are simply black boxes into which massive amounts of data go and out of which a potentially baffling output comes. The fact that only a handful of people today can even explain these systems doesn't mean that such high-stakes decisions should rest in so few hands.

So while we need this type of bill of rights, and we need the even greater debate about AI that this blueprint will inspire, it's not enough. Today the industry is largely self-regulating, and a robust public debate will help put in place commonsense provisions that protect humans from the inevitable side effects of well-intentioned (or not) efforts.

AI is ready for this. It's ready for a real, clear-eyed debate around its capabilities and its limitations, as well as how humans can intervene to prevent worst-case actions. This debate, and the subsequent education among the population, will advance the technology beyond proof of concept into the next phase of real-world implementation and growth.

Vianai, a Human-Centered AI company

Our AI platform and products put humans at the center of AI. This is why we were founded - to bring AI to enterprises in ways that amplify the work of an organization's most valuable asset: its people. This includes building and delivering solutions that bring real value to business owners, decision-makers, practitioners, and others in an organization - solutions that enable companies to build AI that works, that is trustworthy, and that is scalable. We hope more companies will join this conversation to ensure that humans are at the center of driving what we want from AI in enterprises, and in our society.

Learn more about our platform here.