Better Together Insight Report: Navigating Biases in Generative AI

Acknowledging and Addressing the Impact of Generative AI Biases on Innovation and Society

How Addressing Biases in Generative AI Positively Impacts a Company's Bottom Line:

  • Companies in the top quartile for racial and ethnic diversity are 35% more likely to have financial returns above their national industry medians.
  • Companies in the top quartile for gender diversity are 15% more likely to have financial returns above their respective national industry medians.
  • Diverse teams have a 19% increase in innovation revenue, suggesting that varied perspectives drive creative solutions and contribute to growth.
  • Companies seen as ethical are 22% more likely to be recommended by employees and customers, directly impacting sales and recruitment.
  • Organizations committed to inclusivity and ethics in AI report up to 20% higher rates of employee satisfaction, which translates into lower turnover and reduced hiring costs.

Source: PwC

To gather survey respondent insights, we leveraged SurveyMonkey Audience, a platform that allows for precise targeting of a diverse set of survey participants. Through this program, we tailored our survey to reach individuals across the country with varied demographics, including age, gender, income, and employment status, ensuring a comprehensive and representative sample.

Custom balancing was used to fine-tune the gender and age group targeting, and screening questions helped further refine our audience to match our research criteria. This method helped ensure our survey results were both accurate and relevant to our study’s objectives.

When asked if they knew what generative AI is, 77.24% of respondents answered yes.

Generative AI is a type of artificial intelligence technology that can produce various types of content, including text, imagery, audio and synthetic data.

Given the strong correlation between awareness of generative AI and concerns about racism and sexism, respondents who are informed about generative AI are also more likely to express concerns about these specific biases. This finding underscores the importance of education and awareness in recognizing and addressing biases in AI.

The survey focused on the types of biases respondents are most concerned about in generative AI. The distribution of concerns was as follows:

  • Racism: This bias received 221 mentions, highlighting it as a significant concern among respondents.
  • Sexism: With 180 mentions, sexism is another major concern, indicating worries about generative AI perpetuating gender biases.
  • Classism: Received 156 mentions, pointing to concerns about AI biases related to socio-economic status.
  • Conservatism and Accessibility: These biases were mentioned less frequently and were not the primary concerns of those surveyed.


The emphasis on racism and sexism suggests that biases in generative AI are closely aligned with broader societal issues of discrimination. The significant mention of classism also underscores worries about AI exacerbating socio-economic inequalities.

When asked what biases they were most concerned about in generative AI, respondents provided a variety of answers.

Of those surveyed, 159 opted to skip this question.

Generative AI Biases Across Age Groups

Our comprehensive survey revealed significant variations in the perception of biases in generative AI across age groups.

  • Younger Respondents (18-29): This group showed heightened sensitivity toward Discriminatory Content Generation, with more than 60 percent expressing concerns. This age group, more engaged with digital content, is acutely aware of the potential for generative AI to perpetuate harmful stereotypes.
  • Middle-aged Groups (30-44 and 45-60): While still concerned, these groups reported slightly lower levels of apprehension — 45 percent and 50 percent, respectively. Their concerns are balanced with an understanding of generative AI’s potential benefits, coupled with caution over its misuse.
  • Older Respondents (60+): This group expressed concern at a rate similar to the youngest demographic, suggesting a broad awareness of discrimination issues, possibly driven by concerns over fairness and equality.

Our findings highlight the urgent need to tackle biases in generative AI from every angle. Younger people express significant concerns about discrimination and privacy, while older age groups prioritize accessibility and inclusivity. Such varied perspectives underscore the essential need for technological advancements that are truly inclusive, taking into account the full spectrum of potential biases and their varied impacts across society.

The exploration of diverse perspectives on generative AI biases unfolds into broader discussions on the implications of these biases. Public sentiment evaluation reveals nuanced views on the commitment of technology companies to diversity, equity and inclusion (DEI) in AI development.

Insights derived from the study highlight critical areas for enhancement and mirror the community’s aspirations for generative AI innovation that is both responsible and inclusive:

  • Generative AI does not include biases toward people.
  • Technology companies have diversity, equity and inclusion in mind when creating generative AI tools.
  • Technology companies must implement rigorous testing and validation processes to ensure AI tools accurately reflect historical contexts and diverse perspectives.

Every day generative AI’s susceptibility to bias is on display.

Google recently launched its Gemini-powered image generation tool to compete with OpenAI’s DALL·E 3 and Microsoft’s Copilot text-to-image generators. After several text prompts generated images with historical inaccuracies, Google was quick to apologize and then paused the tool to “fix the issues immediately.”

Google has previously declared that the company takes “representation and bias seriously” but has since admitted that “historical contexts have more nuance to them.” While Google plans to re-release an improved version soon, the company paused only the text-to-image tool; other Gemini functionalities and features remain fully operational.

Google’s embarrassing debacle underscores the need to identify and address potential biases in generative AI development, particularly regarding responsible representation.

Source: https://techcrunch.com/2024/02/22/google-gemini-image-pause-people

Respondents were also asked whether they would be more inclined to use generative AI if they knew technology companies took biases into account when creating the tools.

Of those surveyed, 159 opted to skip this question.

When asked what technology companies could do to garner trust that generative AI tools are free of biases, respondents provided a variety of answers.

Of those surveyed, 159 opted to skip this question.

When asked about the dangers of using generative AI, respondents answered:

Of those surveyed, 159 opted to skip this question.

How we drive change to solve biases in generative AI:

To navigate these challenges and bring these solutions to life, engage with Better Together to elevate our collective impact through communications campaigns that spotlight the imperative for unbiased generative AI. As a social impact agency, Better Together is dedicated to championing responsible generative AI development through strategic communications that emphasize diversity, equity and inclusion, ensuring that generative AI technologies benefit all members of society equally.

Let’s Work Better, Together.

Better Together is a trusted partner in achieving inclusivity in generative AI technology.

Take the first step: connect with us, and together, we will support you in getting your message to the world.

Is your company ready to address biases in generative AI?

Review our first-ever biases in generative AI insights report to learn how Better Together guides companies in creating powerful messages that spotlight their dedication to removing bias from generative AI.