

Verizon executive unveils responsible AI strategy in ‘Wild West’ landscape






Verizon is using generative AI applications to improve customer support and experience for its more than 100 million phone customers, and is expanding its responsible AI team to mitigate risk.

Michael Raj, a vice president overseeing AI for Verizon’s network support, said the company is implementing several measures as part of this initiative. These include requiring data scientists to register AI models with a central data team to ensure security reviews, and increasing scrutiny of the types of large language models (LLMs) used in Verizon’s applications to minimize bias and prevent toxic language.

AI auditing is like the “Wild West”

Raj spoke at the VentureBeat AI Impact event in New York City last week, where the focus was on auditing generative AI applications, whose underlying LLMs can be notoriously unpredictable. He and other speakers agreed that the field of AI auditing is still in its early stages and that companies need to step up their efforts in this area, as regulators have not yet established specific guidelines.

The steady drumbeat of major errors by AI customer support agents, from big names like Chevy, Air Canada and even New York City, and even by leading LLM providers such as Google, whose image generator depicted Black Nazis, has brought renewed focus to the need for greater reliability.


The technology is advancing so quickly that government regulators are only publishing high-level guidance, leaving it to private companies to define the details behind it, said Justin Greenberger, senior vice president at UiPath, which helps large companies with automation, including generative AI. “In some ways it feels like the Wild West,” said Rebecca Qian, co-founder of Patronus AI, a company that helps companies audit their LLM projects.

Many companies are currently focused on the first step of AI governance: defining rules and policies for the use of generative AI. Audits are the next step, ensuring applications comply with these policies, but few companies have the resources to do this properly, the speakers noted.

A recent Accenture report found that while 96% of organizations support some level of government regulation around AI, only 2% have fully operationalized responsible AI across all their operations.

Verizon’s focus is on supporting agents with smart AI

Raj stated that Verizon aims to be a leading player in applied AI, with a significant focus on equipping frontline employees with a smart conversational assistant to help them manage customer interactions. These customer support and retail agents face information overload, but a generative AI-based assistant can ease this burden. It can instantly provide agents with personalized information about a customer’s plan and preferences and handle “80 percent of repetitive things” such as details about different devices and phone plans. This allows agents to focus on the “20 percent of issues that actually require human intervention” and provide personalized recommendations.
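The 80/20 split Raj describes can be sketched as a simple routing layer: repetitive plan and device questions get an automated answer, while everything else is escalated to a human agent. This is a minimal illustration, not Verizon’s actual system; the intents, keywords and canned answers below are invented for the example.

```python
# Hypothetical sketch of an assistant that auto-answers repetitive
# questions and escalates the rest to a human. All intent names,
# keywords and answers are illustrative assumptions.

CANNED_ANSWERS = {
    "plan_details": "Here are the details of your current plan...",
    "device_specs": "Here are the specifications for that device...",
    "upgrade_eligibility": "Here is when you become upgrade-eligible...",
}

INTENT_KEYWORDS = {
    "plan_details": ("plan", "data allowance"),
    "device_specs": ("device", "phone specs"),
    "upgrade_eligibility": ("upgrade",),
}

def route(question: str) -> tuple[str, str]:
    """Return ('auto', answer) for repetitive questions, else ('human', question)."""
    q = question.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(word in q for word in keywords):
            return ("auto", CANNED_ANSWERS[intent])
    # The roughly 20% of issues that need human judgment.
    return ("human", question)
```

A real assistant would use an LLM-based intent classifier rather than keyword matching, but the dispatch structure (automate the repetitive bulk, escalate the remainder) is the same idea.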

Verizon also uses generative AI and other deep learning technologies to improve the customer experience on its network and website, and to learn more about its products and services. Raj said the company has implemented models to predict churn propensity among its more than 100 million customers. (See video of his full remarks below).

Verizon has made substantial investments in AI management, including model drift tracking, Raj said. This has been made possible by consolidating all governance functions into a single ‘AI and Data’ organization, which also includes the ‘Responsible AI’ unit. Raj noted that this unit is “scaling up” to promote norms around privacy and respectful language. He said the unit is a necessary “single point of contact” to assist with all things AI safety, working closely with the CISO office and with purchasing managers. Verizon published its responsible AI roadmap earlier this year in a white paper produced in collaboration with Northeastern University (PDF download).

To ensure AI models are properly managed, Verizon has made datasets available to developers and engineers so they can interact directly with the models instead of using unapproved models, Raj said.

This trend of registering AI models is expected to become more established among other B2C companies over time, UiPath’s Greenberger said. Models will have to be version-controlled, similar to the way pharmaceutical companies handle medicines. He suggested that companies should evaluate their risk profiles more frequently, given the rapid pace of technological change. Legislation to enforce model registration is being debated in the US and other countries, given the way these models are trained on publicly available data, Greenberger added.
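The registration practice Raj and Greenberger describe can be illustrated with a minimal registry sketch: each model version is recorded with metadata before use, so a central team can run its security review and later audits. The class and field names (owner, base LLM, review status) are assumptions for illustration, not any real registry’s schema.

```python
# Minimal sketch of a central model registry: data scientists register
# each model version, and the central team approves it after review.
# All field names are illustrative assumptions.
import datetime

class ModelRegistry:
    def __init__(self):
        self._models = {}  # (name, version) -> metadata dict

    def register(self, name: str, version: str, owner: str, base_llm: str):
        key = (name, version)
        if key in self._models:
            raise ValueError(f"{name} v{version} is already registered")
        self._models[key] = {
            "owner": owner,
            "base_llm": base_llm,
            "registered_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "security_review": "pending",  # set by the central data team
        }

    def approve(self, name: str, version: str):
        self._models[(name, version)]["security_review"] = "approved"

    def is_approved(self, name: str, version: str) -> bool:
        meta = self._models.get((name, version))
        return bool(meta) and meta["security_review"] == "approved"
```

Keying the registry on (name, version) pairs is what makes the pharmaceutical-style version control possible: an updated model is a new, separately reviewed entry rather than a silent replacement.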

The Rise of ‘AI Governance’ Units

Most advanced companies are setting up centralized AI teams similar to Verizon’s, Greenberger said. The rise of ‘AI Governance’ groups is also gaining momentum in many companies. Working with third-party LLM providers also forces companies to rethink their approach to collaboration, as each provider offers multiple LLMs with diverse and fast-changing options.

The nature of generative AI applications is fundamentally different from other technologies, making it difficult to legislate the audit process. LLMs inherently produce unpredictable results, said Patronus AI’s Qian, leading to security failures, biases, hallucinations and unsafe outcomes. This requires regulation for each of these failure categories, as well as sector-specific rules, she said. In industries like transportation or healthcare, failures can mean “life or death,” while in e-commerce recommendations the stakes are lower, Qian explained.

In the emerging field of AI auditing, creating transparency in models is a significant challenge. Traditional AI can be understood by examining its code, but generative AI is more complex. Even getting the basics of auditing right is a challenge that most companies have not taken on. Only about 5% have completed pilot projects focusing on bias and responsible AI, Greenberger estimates.

As the AI landscape continues to evolve at a breakneck pace, Verizon’s commitment to responsible AI can serve as an industry benchmark, while the many ways LLMs can fail underscore the critical need for greater governance, transparency and ethical standards in their deployment. Watch the video of the speaker’s full Q&A below.

Full disclosure: UiPath sponsored this New York stop of VentureBeat’s AI Impact Tour, but the speakers from Verizon and Patronus were independently selected by VentureBeat. Check out the next stops on the AI Impact Tour, including how to request an invite to the next events in SF on July 9-11 (our flagship, VB Transform) and Boston on August 7.