Towards responsible AI: Why building diverse voices is essential to guard its safety and fairness

Artificial intelligence has been powering many of the applications we use daily, but the technology is now entering a new, more sophisticated phase.

As it does, we need new guardrails to ensure it develops in a way that is safe and fair.

The best way to achieve this is to limit human biases from creeping into the technology, which will require a great deal of work and investment.

Some form of artificial intelligence (AI) has long been powering many of the applications we use every day.

To name a few, AI works in the background to deliver music and shopping recommendations and to help us write text messages. We are now entering the next, more sophisticated evolution of AI, in which the technology comes to the forefront through its ability to generate open-ended responses.

Broadly speaking, technology has been able to transform our lives for the better. But some innovations require guardrails and best practices to mitigate risk or harm to individuals, communities and society, and even then, there can be unintended consequences. AI is no exception.

While we’ve been developing and researching responsible AI solutions for traditional AI and machine learning (ML) services, this new era of generative AI poses unique challenges. And because of how quickly generative AI is evolving, there is a more significant potential for risk and unintended outcomes unless we take intentional and proactive steps to minimize those risks.

A critical step is promoting fairness in generative AI. One way to do that is to address the bias that can be found in the data used, in the algorithms themselves, and among the people involved in their design, development and deployment.

While technical solutions are critical to mitigate bias in AI, we must think about this holistically, investing in approaches that involve people and processes as well.
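To make the technical side of this concrete, here is a minimal sketch of one common fairness check, demographic parity difference, which compares the rate of positive model predictions across demographic groups. The function name, group labels and prediction values below are hypothetical illustrations, not part of any specific toolkit mentioned in this article.

```python
# A minimal, illustrative bias check: demographic parity difference.
# The group labels and predictions are hypothetical stand-ins for a
# real model's outputs on a real dataset.

def demographic_parity_difference(predictions, groups):
    """Return the gap in positive-prediction rates between groups."""
    counts = {}  # group -> (total, positives)
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + (1 if pred == 1 else 0))
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

# Hypothetical binary predictions for two demographic groups, A and B.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"demographic parity difference: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

A large gap like this would prompt a closer look at the training data and features; a metric alone cannot say why the disparity exists, which is why the people- and process-centred approaches discussed here matter alongside the technical ones.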

You can’t build AI responsibly without the inclusion of diverse voices

At the end of the day, it’s people who will be the stewards of responsible AI. That’s why one of the priorities is inclusion – the intentional consideration of the perspectives, voices and characteristics of all stakeholders or consumers that may be affected by or use the AI product.

Ideally, diverse perspectives should be baked into the AI product lifecycle from the beginning. But diversity in tech, AI and data science remains a challenge. Women make up only 26% of data and AI positions in the workforce; only 4.2% of data scientists are Black or African American, and 6.9% are Hispanic or Latino. So we must find other ways to build inclusion into the process and keep the end user at the centre.

One creative way to do this is by leveraging a company’s employee resource groups in various product lifecycle stages. Members of these groups are often willing to share their perspectives, knowing they can positively influence how the product operates, ultimately benefitting their community and customers.

In tandem, we must increase and invest in diversity in the field by building an earlier pipeline. That’s why democratizing AI education and making it accessible to underrepresented groups is paramount. For instance, AWS offers programmes like the AI and ML Scholarship and a free AI education curriculum for Historically Black Colleges and Universities (HBCUs), Minority Serving Institutions (MSIs) and community colleges.

Finally, as humans, we all have implicit biases, and it’s important to understand how they impact the design and decision-making processes, data collection, feature engineering, evaluation and testing of models. Training should be designed to help raise awareness of the existence and impact of bias in AI systems among developers, data scientists and other stakeholders and arm them with ways to reduce it.

Failing to plan is planning to fail

A focus area of the process is the need for a comprehensive organizational plan that helps companies proactively scope, test and respond to issues such as possible biases as they arise when deploying generative AI systems.

As part of the plan, companies will want to answer questions such as: who are the key customer personas, and who is most affected? Is our data representative of those stakeholders? What are the potential risks and threats? What are the rollback and recovery options? What feedback loops will be employed? What is the communications plan?

By instituting an AI operating model with people, processes and technical solutions that aid in anticipating and managing risks effectively, businesses can reduce the likelihood and impact of unintended risks and consequences, demonstrate a commitment to resilience and improvement, and manage their reputation, leading to improved customer trust.

Uncertainty calls for greater investment

Because generative AI is still evolving and some of its underlying models are very complex, there is a lot we still don’t know about how it produces the content it’s generating. That’s why it’s unclear when we’ll be able to eradicate bias completely (if ever). But what is clear is the need for greater investment in this area as more companies and organizations are experimenting with generative AI to transform their businesses and industries. In addition to the people-centric and process-centric ideas outlined above, we also need ongoing collaboration, research and involvement from legislators, researchers and experts alike, even beyond the companies pioneering this technology.

There’s no doubt that uncertainty can be scary, but we can’t let fear paralyze us. If anything, it should galvanize us to have a renewed sense of urgency to gain a deeper understanding of generative AI, advocate for developing necessary guardrails, and be intentional about employing responsible AI best practices.

And I truly believe we can harness this emerging technology for good. In fact, AI and ML have been playing a role in addressing some of the world’s most pressing problems, like hunger and poverty. For example, the International Rice Research Institute (IRRI) used advanced ML methods to enable rapid understanding of genomic data and, in turn, to develop rice that is more tolerant of climate change. But harnessing generative AI for good means we can’t stay on the sidelines. We all have a role to play in the responsible AI journey, regardless of what job or industry we are in.

Source: World Economic Forum
