Generative AI vs. Inclusion: Bridging the Next Digital Divide
Generative Artificial Intelligence (AI) has an inclusivity problem. The opportunities and advancements these tools promise are undeniable, but so are their shortcomings.
Tainted Technology?
Whether you’re using your face to unlock your smartphone, commanding your smart home device to play that one song that’s stuck in your head, frantically clicking to minimize a chatbot, or using doorbell cameras to keep porch pirates at bay, AI is inescapable. Its widespread use and usefulness, however, are starting to be overshadowed by significant limitations.
AI tools that generate content and personalize user experiences leverage information about how users behave. The potential of generative AI seems limitless, but it relies on the quality of the data it receives. Herein lies the challenge.
AI tools will, by definition, carry forward the biases of the data and algorithms used to build them. This means the information and recommendations they provide often reflect the prejudices and inequalities present in our society. As a result, the output is not always accurate—it may lack cultural, social, and emotional understanding.
As AI becomes more widespread and, ultimately, as brands and strategists grow more dependent on it, they’ll need better ways to overcome its threat to inclusion.
The first step: recognizing that AI won’t change on its own.
Clear and Present Danger
Generative AI isn’t all bad. The technology frees us from some of the mundane tasks that eat up our time and limit productivity. It has demonstrated capabilities in fraud detection, risk assessment, ad optimization, supply chain management, brand sentiment analysis, and beyond. One North has even created an AI-powered Relevance Engine that allows organizations to offer their website users hyper-relevant related content in real time.
Still, this transformative force mustn’t exclude key audience segments or damage brand perception, and guarding against that starts at the organizational level.
According to IBM’s 2022 Global AI Adoption Index, the majority of organizations that have adopted AI haven’t taken key steps, such as reducing unintended bias, to ensure their AI is trustworthy and responsible.
Big tech companies like Meta and Google have stepped back from offering certain AI technologies due to concerns about spreading inaccuracies or harming society. These concerns are not unfounded. As generative AI tools like OpenAI’s ChatGPT gain popularity and usher in the next wave of machine learning, instances of bias are also being documented:
- Amazon, for example, scrapped a secret AI recruiting tool after discovering it penalized women candidates when evaluating them for developer and other technical roles.
- Amazon also developed a facial recognition system that performed best on lighter-skinned men and struggled to identify darker-skinned individuals.
- In August 2023, iTutorGroup settled a case with the EEOC after its AI-powered hiring tool automatically rejected women applicants over 55 and men over 60.
AI bias stems from historical bias, which can be amplified within AI systems. It can also result from unrepresentative or incomplete training data or algorithms. The fault lies with the humans who supply the data and neglect to give the machine-learning system a balanced picture.
While human bias stems from the shortcuts people take when filling in missing information, systemic biases are rooted in institutional behaviors that disenfranchise certain populations, including discrimination based on race, gender, sexual identity, and religion. When these biases combine with technical limitations, the result is inaccurate information generated by AI.
Several organizations have avoided these biases simply by refraining from using AI systems at all. But AI isn’t going anywhere anytime soon. Now, organizations, like those guided by One North, are learning how to use generative AI to bolster inclusivity and safeguard against biases.
Breaking Bias
AI is the new Wild West, except the horses have been replaced by hard drives and the lassos by computational parsing tools. Taking the reins of the often unwieldy generative AI tools available to us means we have a responsibility to minimize bias at every turn.
Organizations can begin to break bias by following a few action items:
- Before your organization deploys AI tools, mitigate bias by creating a review process rooted in the assumption that all AI-generated information is limited by bias in some way. Establishing internal councils to perform peer reviews and implementing third-party audits will help ensure target audiences are not excluded.
- Use tools with advanced sampling methods that give every individual in a data set an equal chance of being selected, so training data stays representative.
- Work to make the AI and tech field more diverse. A more diverse tech community is better able to anticipate and respond to bias in real time, and that starts with increased education and training of team members.
- Test algorithms and data sets for inclusivity. Devise scenarios using diverse users or inputs, measure how they perform, and re-examine your tools if you don’t receive the well-balanced, inclusive responses you expect. One lightweight way to start is sketched below.
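To make that last step concrete, here is a minimal sketch of one possible check: export a sample of an AI tool’s decisions alongside the demographic attributes you care about, compare outcome rates across groups, and flag any group that falls well below the best-performing one. The file name, column names, and the four-fifths threshold are illustrative assumptions, not features of any particular tool.

```python
# Minimal sketch: compare an AI tool's outcome rates across demographic groups.
# Assumes a CSV export of decisions with illustrative columns "group" and "selected";
# the 0.8 threshold follows the common "four-fifths" rule of thumb.
import csv
from collections import defaultdict


def selection_rates(rows, group_col="group", outcome_col="selected"):
    """Compute the share of positive outcomes for each demographic group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for row in rows:
        group = row[group_col]
        totals[group] += 1
        positives[group] += int(row[outcome_col].strip().lower() in ("1", "true", "yes"))
    return {g: positives[g] / totals[g] for g in totals}


def flag_disparities(rates, threshold=0.8):
    """Return groups whose rate falls below `threshold` times the best group's rate."""
    best = max(rates.values(), default=0)
    if best == 0:
        return {}
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}


if __name__ == "__main__":
    with open("ai_tool_decisions.csv", newline="") as f:  # hypothetical export file
        rows = list(csv.DictReader(f))
    rates = selection_rates(rows)
    print("Selection rates by group:", rates)
    print("Groups below the four-fifths threshold:", flag_disparities(rates))
```

A flagged group doesn’t prove the tool is biased, but it is exactly the kind of signal that should trigger the re-examination described above.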
Generative AI is changing the way we work and engage with products and services, but systemic inequalities in the technology threaten to limit its true potential. Organizations will only truly be able to unlock its power by working to minimize—and eventually eradicate—its inherent biases.
If you’re looking for ways to responsibly incorporate AI into your current or future projects, our strategists can help. Contact us to learn more about how you can unlock insights that will optimize your AI experience with inclusivity in mind.
Photo Credit: Lucas K | Unsplash