Dr. Sanjay Rajagopalan Discusses How Human-Centered AI is Essential to Adopting AI Responsibly Across the Enterprise

by | Mar 15, 2023

Dr. Sanjay Rajagopalan, Chief Design & Strategy Officer of Vianai, joined a panel at the 2022 Ai4: Artificial Intelligence Conference last fall for a lively discussion on responsible AI, a topic that has been part of Vianai's mission since the beginning.

Titled "Human-Centered AI: Adopting AI Responsibly Across the Enterprise," the panel discussed how adopting responsible AI is critical to the long-term success of businesses. The panelists explored how to build trust between humans and AI by ensuring AI solutions are responsible and designed with employee and user needs in mind.

During the panel, Dr. Rajagopalan discussed the risks a company faces if it does not have sound systems in place to ensure the responsible deployment of AI solutions. Infrastructure for explainability, monitoring, transparency, validation, and governance in AI systems is crucial. These tools, processes, and policies must be in place to avoid catastrophic events that could have a disproportionate financial impact on the business, derailing ROI among many other things.

Key Takeaways from the Panel:

Human-Centered AI and Responsible AI Are Interlinked and Dependent

Dr. Rajagopalan highlighted that human-centered AI and responsible AI are deeply interlinked and mutually dependent. AI needs to work side by side with humans to amplify their abilities while preserving the things only people are capable of. There is already a trust deficit in this area because of the notion that AI will replace humans, so the enterprise must work toward closing this gap.

Dr. Rajagopalan acknowledged that the vast majority of use cases that involve AI are not replacement use cases; instead, they are assistance and application use cases, especially in the enterprise. Human-centered AI entails having tools and frameworks to build solutions with longevity and sustainability. This is where the enterprise starts to see responsible AI becoming a critical aspect of building trust between humans and AI. Dr. Rajagopalan shared the idea that responsible AI is a necessary capability for actualizing the vision of human-centered AI.

He detailed the three qualities enterprises need in order to practice responsible AI:

    1. Being reactive
    2. Being proactive
    3. Being designful

Being reactive entails identifying the root causes of issues quickly and retraining models when needed. To succeed, companies must have monitoring tools that identify issues, along with the tools and frameworks to fix any problems they face, preferably in near real time. Being proactive, by contrast, means having tools in place that alert you before something happens, so the company can take corrective action. A proactive AI system requires mechanisms that examine trends and provide a diagnosis before an issue causes harm.
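The reactive/proactive distinction described above can be illustrated with a minimal sketch. This is a hypothetical toy, not Vianai's implementation: a reactive check fires once a model's error rate has already breached a threshold, while a proactive check fits a simple trend line and fires when the error rate is projected to breach the threshold within a few future steps.

```python
def reactive_alert(errors, threshold):
    """Fire once the latest error rate has already crossed the threshold."""
    return errors[-1] > threshold

def proactive_alert(errors, threshold, horizon=3):
    """Fit an ordinary least-squares trend over the error history and fire
    if the projected error rate crosses the threshold within `horizon`
    future time steps."""
    n = len(errors)
    if n < 2:
        return False
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(errors) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, errors))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    projected = intercept + slope * (n - 1 + horizon)
    return projected > threshold

# A steadily drifting error rate: not yet over the 10% threshold,
# but clearly trending toward it.
history = [0.02, 0.03, 0.05, 0.07]
print(reactive_alert(history, 0.10))   # False: no breach yet
print(proactive_alert(history, 0.10))  # True: trend projects a breach
```

The point of the sketch is that the proactive check raises an alert while there is still time to take corrective action, whereas the reactive check only fires after the harm has begun.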

Being designful means building systems that do not fail in the first place, which is essential to responsible AI. Building systems, frameworks, and products for longevity, sustainability, responsibility, and reliability helps Vianai show organizations how specific capabilities drive reactive, proactive, and designful behavior. These tools are what ultimately lead to increased trust between humans and AI systems.

The Importance of Operating at Scale

On the panel, Dr. Rajagopalan also discussed how operating at scale causes risks to compound exponentially. At scale, there are many aspects you can no longer track individually, so you must rely on the processes, policies, and frameworks your system has in place. These frameworks bring AI systems into production, monitor and govern them, and take them offline if they are not performing well.

Dr. Rajagopalan, and Vianai as a company, believe AI complements humans, offering scale and repeatability that humans cannot easily replicate. If we bring these systems to work at scale, responsibly combined with human judgment and reasoning, we can increase the value of AI.

Our Approach

Vianai was advocating for human-centered and responsible AI long before today's enterprise buzz. H+AI is the philosophy that underpins all of the work we do at Vianai, the products we build, and how we work with customers.

Our ML monitoring solution enables high-performance ML operations at scale across the enterprise: detailed monitoring, root-cause analysis, retraining, and model validation in a continuous loop across large, complex, feature-rich models, ensuring models are trustworthy, explainable, and transparent.
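The continuous loop described above can be caricatured in a few lines. This is an illustrative sketch only, with hypothetical names, and not a description of Vianai's product: a deployed model is scored on each incoming batch, and when its accuracy drops below a floor, a retrained candidate is promoted only if it passes a validation gate.

```python
from dataclasses import dataclass

@dataclass
class ToyModel:
    version: int

    def score(self, batch):
        # Toy stand-in: pretend the batch carries the measured accuracy.
        return batch["accuracy"]

def monitoring_loop(model, batches, train, validate, accuracy_floor=0.9):
    """Monitor -> diagnose -> retrain -> validate, in a continuous loop."""
    for batch in batches:
        if model.score(batch) < accuracy_floor:   # monitoring detects decay
            candidate = train(batch)               # retrain on fresh data
            if validate(candidate):                # validation gate
                model = candidate                  # promote the new model
    return model

# Usage: accuracy degrades on the third batch, triggering one retrain.
batches = [{"accuracy": 0.95}, {"accuracy": 0.93}, {"accuracy": 0.80}]
final = monitoring_loop(
    ToyModel(version=1),
    batches,
    train=lambda batch: ToyModel(version=2),
    validate=lambda m: True,
)
print(final.version)  # 2
```

The validation gate is the key design choice: a retrained candidate is never promoted automatically, which keeps a bad retrain from replacing a merely degraded model.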

Our performance acceleration technology aims to bring down the cost and resources needed to run AI, increasing access and making AI more responsible in terms of cost-performance and environmental impact.

Dealtale brings conversational AI that sits on top of marketing, CRM, and advertising platforms, along with causal inference, an advanced AI technique, directly to marketing professionals.

Finally, hila, our AI-powered financial research assistant, was built from scratch with reliability in mind: our document-centric approach helps ensure that answers are accurate and include citations from the underlying financial text.

To learn more about our high-performance ML monitoring capabilities that can help your business tackle AI’s reliability problems, request a demo here. To learn more about all of our products, get in touch here. We would love to connect to discuss how your enterprise can adopt AI responsibly.
