Better Together Insight Report: Navigating Biases in Generative AI
Acknowledging and Addressing the Impact of Generative AI Biases on Innovation and Society
How Addressing Biases in Generative AI Positively Impacts a Company's Bottom Line:
Our survey uncovered biases in generative AI, highlighting public concerns and guiding companies toward equitable solutions.
In this ever-changing AI landscape, our focus is clear: to ensure generative AI innovation is matched with fairness and inclusivity. Companies must use effective communications to address and mitigate these biases, reinforcing our collective commitment to developing generative AI technologies that serve the interests and needs of all.
It is time for technology companies to build generative AI tools that serve as many people as possible, free of bias.
- Invite Catharine Montgomery, Founder and CEO of Better Together, to speak about pioneering fair and equitable generative AI. Contact us at hello@thebettertogether.agency
- Interested in learning how to use communications to address biases in generative AI? Contact us at hello@thebettertogether.agency
To gather survey respondent insights, we used SurveyMonkey Audience, a platform that allows precise targeting of a diverse set of survey participants. Through this program, we tailored our survey to reach individuals across the country with varied demographics, including age, gender, income, and employment status, ensuring a comprehensive and representative sample.
Custom balancing was used to fine-tune the gender and age group targeting, and screening questions helped further refine our audience to match our research criteria. This approach helped ensure our survey results were both accurate and relevant to our study’s objectives.
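As a hypothetical illustration of the kind of custom balancing described above, the sketch below re-weights respondents so the sample's gender mix matches an assumed population target (post-stratification weighting). The respondent records and the 50/50 target shares are invented for the example.

```python
from collections import Counter

# Invented respondent records; only the demographic field matters here.
respondents = [
    {"id": 1, "gender": "female"},
    {"id": 2, "gender": "female"},
    {"id": 3, "gender": "female"},
    {"id": 4, "gender": "male"},
]

# Assumed population shares the sample should be balanced toward.
target_share = {"female": 0.5, "male": 0.5}

counts = Counter(r["gender"] for r in respondents)
n = len(respondents)

# Post-stratification weight = target share / observed sample share,
# so over-represented groups are weighted down and under-represented
# groups are weighted up.
weights = {
    r["id"]: target_share[r["gender"]] / (counts[r["gender"]] / n)
    for r in respondents
}
```

In practice, a panel platform such as SurveyMonkey Audience performs this balancing at the sampling stage; the arithmetic above only illustrates the principle.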
When asked if they knew what generative AI is, 77.24% of respondents answered yes.
Generative AI is a type of artificial intelligence technology that can produce various types of content, including text, imagery, audio and synthetic data.
The strong correlation between awareness of generative AI and concern about racism and sexism suggests that the people most informed about the technology are also the most likely to worry about these specific biases. This finding underscores the importance of education and awareness in recognizing and addressing biases in AI.
The survey focused on the types of biases respondents are most concerned about in generative AI. The distribution of concerns found:
- Racism: This bias received 221 mentions, highlighting it as a significant concern among respondents.
- Sexism: With 180 mentions, sexism is another major concern, indicating worries about generative AI perpetuating gender biases.
- Classism: Received 156 mentions, pointing to concerns about AI biases related to socio-economic status.
- Conservatism and accessibility: These biases were mentioned less frequently and were not the main concerns of those surveyed.
The emphasis on racism and sexism suggests that biases in generative AI are closely aligned with broader societal issues of discrimination. The significant mention of classism also underscores worries about AI exacerbating socio-economic inequalities.
When asked which biases they were most concerned about in generative AI, respondents provided a variety of answers.
Of those surveyed, 159 opted to skip this question.
Generative AI Biases Across Age Groups
Our comprehensive survey revealed significant variations in the perception of biases in generative AI across age groups.
- Younger Respondents (18-29): This group showed heightened sensitivity toward Discriminatory Content Generation, with more than 60 percent expressing concerns. This age group, more engaged with digital content, is acutely aware of the potential for generative AI to perpetuate harmful stereotypes.
- Middle-aged Groups (30-44 and 45-60): While still concerned, these groups reported slightly lower levels of apprehension — 45 percent and 50 percent, respectively. Their concerns are balanced with an understanding of generative AI’s potential benefits, coupled with caution over its misuse.
- Older Respondents (60+): This group expressed concern at a rate similar to the youngest demographic, suggesting a broad awareness of discrimination issues, possibly driven by concerns over fairness and equality.
Our findings highlight the urgent need to tackle biases in generative AI from every angle. Younger people express significant concerns about discrimination and privacy, while older age groups prioritize accessibility and inclusivity. Such varied perspectives underscore the essential need for technological advancements that are truly inclusive, taking into account the full spectrum of potential biases and their varied impacts across society.
The exploration of diverse perspectives on generative AI biases unfolds into broader discussions on the implications of these biases. Public sentiment evaluation reveals nuanced views on the commitment of technology companies to DEI in AI development.
Insights derived from the study highlight critical areas for enhancement and mirror the community’s aspirations for generative AI innovation that is both responsible and inclusive:
Generative AI does not include biases toward people.
Technology companies have diversity, equity and inclusion in mind when creating generative AI tools.
Technology companies must implement rigorous testing and validation processes to ensure AI tools accurately reflect historical contexts and diverse perspectives.
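One way to sketch the kind of rigorous testing described above is a simple automated audit: sample a model repeatedly for an occupation prompt and flag it for human review when gendered terms appear at very different rates. Everything here (the `generate` stand-in, the term lists, the 0.25 tolerance) is an invented assumption, not any company's actual process.

```python
MALE_TERMS = {"man", "he", "him", "his"}
FEMALE_TERMS = {"woman", "she", "her", "hers"}

def generate(prompt: str) -> str:
    # Placeholder for a real model API call; returns a canned
    # response so the example is self-contained and deterministic.
    canned = {"Describe a CEO.": "A confident man leading a board meeting."}
    return canned[prompt]

def mention_rate(outputs, terms):
    """Fraction of outputs containing any of the given terms as a word."""
    hits = sum(
        any(t in o.lower().replace(".", "").split() for t in terms)
        for o in outputs
    )
    return hits / len(outputs)

def flag_for_review(prompt, n_samples=20, tol=0.25):
    # Sample the model several times and compare how often male- vs.
    # female-coded terms appear; a large gap routes the prompt to a human.
    outputs = [generate(prompt) for _ in range(n_samples)]
    gap = abs(mention_rate(outputs, MALE_TERMS)
              - mention_rate(outputs, FEMALE_TERMS))
    return gap > tol
```

A real validation pipeline would cover many prompts, demographic dimensions, and statistical tests; this sketch shows only the shape of one such check.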
Every day generative AI’s susceptibility to bias is on display.
Google recently launched its Gemini-powered image generation tool to compete with OpenAI’s DALL·E 3 and Microsoft’s Copilot text-to-image generators. After several text prompts generated images with historical inaccuracies, Google quickly apologized and paused the tool to “fix the issues immediately.”
Google has previously declared that the company takes “representation and bias seriously” but has since admitted that “historical contexts have more nuance to them.” While Google plans to re-release an improved version soon, the company paused only the text-to-image tool; other Gemini functionalities and features remain fully operational.
Google’s embarrassing debacle underscores the need to address potential biases in generative AI development, particularly regarding responsible representation.
Source: https://techcrunch.com/2024/02/22/google-gemini-image-pause-people
Respondents were asked if they would be more inclined to use generative AI if they knew biases were taken into account when technology companies created the tool.
Of those surveyed, 159 opted to skip this question.
When asked what technology companies could do to garner trust that generative AI tools are free of biases, respondents provided a variety of answers.
Of those surveyed, 159 opted to skip this question.
When asked about the dangers of using generative AI, respondents answered:
Of those surveyed, 159 opted to skip this question.
How we drive change to address biases in generative AI:
- Implement internal feedback mechanisms to ensure technology is built with a commitment to impartiality: By integrating diverse perspectives from the outset, these mechanisms can help pinpoint and mitigate potential biases, ensuring technology development aligns with values of fairness and inclusivity.
- Encourage consumers to demand technology companies address bias through company feedback and products and services that are unbiased: This empowers users to play a pivotal role in shaping the technology they use, fostering an ecosystem where companies are held accountable for the social impact of their products.
- Press for an evaluation and certification framework that requires companies to assess and certify their generative AI technologies for bias: Establishing industry-wide standards for bias assessment promotes transparency and drives innovation toward more equitable technology solutions.
- Initiate targeted communications campaigns designed to showcase the company's dedicated efforts toward eliminating bias in generative AI: Through transparent and engaging dialogue, these campaigns can build trust with stakeholders, demonstrating the company's commitment to ethical AI practices and its role as a leader in fostering positive change.
To navigate these challenges and bring these solutions to life, engage with Better Together to elevate our collective impact through communications campaigns that spotlight the imperative for unbiased generative AI. As a social impact agency, Better Together is dedicated to championing responsible generative AI development through strategic communications that emphasize diversity, equity and inclusion, ensuring that generative AI technologies benefit all members of society equally.
"Recognizing biases in AI is the first step toward creating a more equitable future. By employing effective communication strategies and methods, AI system developers can mitigate bias's harmful effects, paving the way for technology that reflects and promotes diversity and fairness."
– James T. McKim, Jr., PMP, ITIL
Managing Partner, Organizational Ignition, LLC
Interested in learning more about biases in generative AI and solutions to those biases?
Creativity & Robots – AI Bias: Can We Fix It? Ft. Catharine Montgomery
In an episode of Creativity & Robots with host Juan Faisal, Better Together Founder and CEO Catharine Montgomery dove into the world of AI-driven…
Top 5 ChatGPT 4o Prompts for Communicators
As a public relations firm committed to inclusive marketing and diverse representation, Better Together understands the power of effective communication. With the advancements of ChatGPT 4o, communicators…
AI Marketers Guild – Unveiling Bias in AI: Insights from Catharine Montgomery
Catharine Montgomery, Founder and CEO of Better Together, presented to the AI Marketers Guild about biases in generative AI, emphasizing the importance of spotting, addressing…
Let’s Work Better, Together.
Better Together is a trusted partner in achieving inclusivity in generative AI technology.
Take the first step: connect with us, and together, we will support you in getting your message to the world.