In Part I of this series, we said bias in AI systems could come about for different reasons and in unexpected ways—and that businesses often introduce it inadvertently (though they should guard against it). We also said AI bias can negatively affect certain groups of individuals more than others, and that companies should be aware of this and take measures to prevent the resultant discrimination. Finally, we explored how AI hiring systems—in the context of pre-interview social media profiling—might connect and interpret information in ways that are far from correct.
It’s axiomatic that AI will continue to shape the future of commerce; this presents the challenge of balancing innovation with ethics. Here in Part II, we’ll look at how AI bias—and the use of AI in general—affects society at large in far-reaching ways. We’ll look at technical considerations around AI bias, the relationship between bias and reinforcement (which is fundamental to learning in AI systems), accountability in the use of AI by businesses, and how the responsible use of AI can mitigate its negative effects on society.
The Societal Impact of Biased AI Systems
In both of the examples we’ll now look at—examples of how bias in AI systems results in unfairness and other negative outcomes—you’ll notice the idea of reinforcement, which we’ll soon explore in detail.
Example #1: Healthcare and Socioeconomic Considerations
You’re probably aware that AI is increasingly being used in healthcare, particularly in diagnosis and treatment recommendations. The primary benefit, of course, is that AI systems can process vast amounts of medical data quickly; the issue of bias rears its head here as well. Consider that an AI system trained on medical records that predominantly represent one demographic may be less accurate in diagnosing or treating conditions for people from other demographics. If the first demographic is “middle-aged white men,” the other demographics could include women as well as ethnic minorities—who have different health risks and respond differently to treatments.
The bias here can result in misdiagnoses or suboptimal care for underrepresented groups; this in turn perpetuates health disparities. Studies have shown that some AI diagnostic tools have been less accurate at detecting certain conditions, such as skin cancer, in darker-skinned individuals because the systems were trained predominantly on images of lighter-skinned patients. More pointedly, a 2019 study found that a leading software tool used to identify high-risk patients for complex health needs consistently assigned lower risk scores to black patients who had the same risk levels as their white counterparts. The tool used past healthcare costs as a proxy for health needs; because less money had historically been spent on black patients with the same level of need, the algorithm understated their risk.
The cycle here is clear: AI, which is designed to improve healthcare outcomes, can inadvertently reinforce the inequities in healthcare access and quality that already exist. As we’ll reiterate later, it is vital for healthcare providers and AI developers to ensure that datasets are diverse, representative, and continuously updated so AI systems don’t unintentionally marginalise vulnerable populations.
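Here is a stylised sketch of that cost-proxy problem in Python. The scoring function and all numbers are invented for illustration; the point is only the shape of the failure.

```python
def risk_score_by_cost(past_annual_cost, max_cost=50_000):
    """Hypothetical model: health need proxied purely by historical spending."""
    return min(past_annual_cost / max_cost, 1.0)

# Two patients with the same number of chronic conditions; the second
# has historically had less access to care, hence lower recorded costs.
patient_a = {"chronic_conditions": 4, "past_annual_cost": 30_000}
patient_b = {"chronic_conditions": 4, "past_annual_cost": 12_000}

for p in (patient_a, patient_b):
    print(f"{p['chronic_conditions']} conditions -> "
          f"risk score {risk_score_by_cost(p['past_annual_cost']):.2f}")
```

Both patients carry the same illness burden, yet the cost proxy cuts the second patient’s score by more than half—and with it, their access to whatever extra care the score gates.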
Example #2: Filter Bubbles and Consumers
Filter bubbles, and their effects, are most easily understood with examples. Consider personalised content recommendations: AI algorithms analyse past behaviour and preferences to recommend content that aligns with individual tastes. Positive aspects aside, this can also limit exposure to diverse viewpoints—which ends up reinforcing existing beliefs and interests. Then, think of search engines, which use AI to prioritise results based on previous searches and on stated preferences. This can reinforce filter bubbles: Users are shown content similar to what they’ve previously engaged with—again, limiting access to new perspectives.
The easiest example perhaps comes from one’s choice of YouTube videos (a toy simulation of the loop follows these steps):
1. You believe a certain idea, X.
2. You notice a YouTube video whose title suggests X is true.
3. You click and watch it.
4. The related videos at the end will most likely also suggest X is true. You watch a couple.
5. YouTube’s algorithm, designed to maximise viewing time, notices your preference for videos supporting X.
6. It then recommends more such videos. Driven by your natural human tendency to seek confirmation, you’re likely to click on those videos.
7. From then on, each time you visit YouTube.com, your list of recommended videos—insofar as they involve Idea X at all—will almost all hold that X is true!
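Here is a minimal sketch of that loop in Python. Everything in it is invented for illustration: the click probabilities stand in for the human tendency to seek confirmation, and the update rule stands in for whatever engagement-maximising logic a real platform uses.

```python
import random

random.seed(42)

# Toy recommender: it surfaces a mix of videos supporting idea X ("pro")
# and videos challenging it ("anti"), and shifts that mix towards
# whatever the user clicks on.
pro_share = 0.5                        # the slate starts out balanced
learning_rate = 0.1                    # how strongly clicks nudge the mix
p_click = {"pro": 0.8, "anti": 0.3}    # assumed confirmation-bias click rates

for visit in range(1, 21):
    slate = ["pro" if random.random() < pro_share else "anti" for _ in range(10)]
    clicked = [v for v in slate if random.random() < p_click[v]]
    if clicked:
        # Feedback: move the mix towards the share of clicks that were "pro".
        observed = sum(v == "pro" for v in clicked) / len(clicked)
        pro_share += learning_rate * (observed - pro_share)
    if visit % 5 == 0:
        print(f"visit {visit:2d}: {pro_share:.0%} of recommendations support X")
```

Run it and the share of X-supporting recommendations climbs steadily. Nothing in the code “wants” a one-sided slate; the feedback loop produces one anyway.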
We can now understand the effects of filter bubbles in the context of consumer choice:
- Individuals tend to seek out information that confirms existing beliefs; filter bubbles exacerbate this confirmation bias. AI systems that prioritise user preferences mirror and amplify the tendency, narrowing the range of information consumers encounter.
- Filter bubbles mean consumers are less likely to encounter information that challenges their views or exposes them to new options. This can lead to a narrowing of choices—in entertainment, shopping, and even political views.
- AI-powered platforms prioritise content or products that users have already engaged with—which limits the potential for new, serendipitous discoveries. This in turn potentially hinders innovation in consumer choices.
- AI algorithms often prioritise certain content based on commercial interests in addition to consumer preferences—shaping consumer choices in subtle but powerful ways. (Think of Amazon’s “sponsored recommendations,” which certainly are relevant—and likely to appeal based on past purchases—but they appear higher up in the list of recommendations because, well, they’re sponsored.)
Reinforcement Reinforces
Coming back to the idea of reinforcement: Reinforcement is often the engine that drives AI bias. AI systems—designed, for instance, to improve personalisation and decision-making—have the inherent potential to amplify existing inequalities or pre-existing beliefs, creating a feedback loop that reinforces the current state of affairs rather than challenging or improving it.
Consider our examples above—healthcare and filter bubbles—to see why this is the case.
Many AI systems—especially those used for recommendations or decision-making—employ reinforcement learning. This means they learn by receiving feedback based on their actions. If the AI’s actions align with existing biases, it is “rewarded”—such as through higher engagement. This, naturally, reinforces the biases.
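A toy two-armed bandit (a standard reinforcement-learning setup, though not any real platform’s algorithm) makes this concrete. The click rates below are invented; note that the system estimates both engagement rates honestly, yet ends up showing the “confirming” option almost exclusively.

```python
import random

random.seed(0)

# The system can show "confirming" or "challenging" content, observes
# engagement (1 = click, 0 = no click), and updates its value estimates.
click_rate = {"confirming": 0.7, "challenging": 0.4}   # invented true rates
value = {"confirming": 0.0, "challenging": 0.0}        # learned estimates
shown = {"confirming": 0, "challenging": 0}

for step in range(1000):
    # Epsilon-greedy: explore 10% of the time, otherwise exploit.
    if random.random() < 0.1:
        action = random.choice(list(value))
    else:
        action = max(value, key=value.get)
    reward = 1 if random.random() < click_rate[action] else 0  # engagement
    shown[action] += 1
    value[action] += (reward - value[action]) / shown[action]  # running mean

print(value)   # estimates land near the true click rates...
print(shown)   # ...but "confirming" content is shown far more often
```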
In our healthcare example, if an AI system were trained on data where underserved communities have historically received less comprehensive care, it might learn to prioritise resources for wealthier areas, perpetuating the cycle. More pointedly: If the system’s recommendations lead to better outcomes for already privileged groups, its “perception” that its decisions are “correct” is reinforced—even though the recommendations will exacerbate inequities.
Consider the filter-bubble case. If the system recommends content that aligns with a user’s existing views, it receives positive feedback such as clicks and increased engagement. (There is a strong human angle to this: People tend to engage more with content that aligns with their views, as we mentioned.) The AI ends up further prioritising similar content—and it can’t be blamed!
The larger point is that AI doesn’t just replicate existing biases; it amplifies them—which is what makes the problem of bias as difficult as it is. You can imagine that once a bias is embedded in an AI system, the system will continue to replicate that bias over time—which is especially concerning because AI systems operate at a scale and speed that humans cannot match.
…And then there’s the “black box” nature of some AI systems (where no one individual really knows what’s going on inside the system). This makes it difficult to identify and correct biases, and obscures the reinforcement mechanisms at play.
The reinforcement of bias through AI is, ultimately, an ethical challenge—and the phenomenon makes the ethical management of AI crucial. Consider that one result of bias-reinforcement is that AI can deepen societal divides. The “problematic power” of reinforcement makes it all the more important to mitigate existing biases in AI systems and break the cycle of reinforcement. This can be done (though it’s easier said than done) by using diverse and more representative datasets, developing algorithms that are fair and transparent (so corrective actions can be taken if reinforcement of bias occurs), and implementing mechanisms to detect and correct bias. We’ll soon elaborate upon this.
As food for thought, AI is a mirror that reflects the biases in our world—including social, economic, and political inequalities. As a tool, AI reflects the society that creates it; “rectifying” bias in AI systems means societal changes are needed, too. More plainly, the reinforcement of bias by AI is not a technical flaw but a societal issue.
Addressing the Roots of Bias in AI
We’ve spoken about how an AI system can inherit bias from its training data; we’ve emphasised that unless the training data is “perfectly” objective—representative of an ideal world!—the AI system will perpetuate stereotypes and existing inequalities. The most significant contributors to AI bias include:
- Biased Training Data: We’ve explored how AI systems, if trained on biased historical data, replicate the inequities of the past.
- Unrepresentative Data: This refers to training data that represents only a certain segment of the population (or other relevant sample). For instance, a facial recognition system trained predominantly on faces of white people will be less accurate in identifying people of colour. (A simple check for this is sketched just after this list.)
- Algorithmic Design Choices: The manner in which AI models are designed also influences their potential biases. Developers may inadvertently prioritise certain features over others, amplifying bias within the system; for instance, an AI hiring system that prioritises age may discriminate against older candidates.
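A first, concrete defence against the first two contributors is simply to compare each group’s share of the training data with its share of a reference population. Below is a minimal sketch using only the Python standard library; the dataset and reference shares are invented.

```python
from collections import Counter

def representation_gaps(records, group_key, reference_shares):
    """Compare each group's share of the data with its share of a
    reference population; negative gaps flag underrepresentation."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: counts.get(group, 0) / total - ref
            for group, ref in reference_shares.items()}

# Invented toy dataset that skews heavily towards one group.
training = [{"group": "A"}] * 800 + [{"group": "B"}] * 150 + [{"group": "C"}] * 50
population = {"A": 0.60, "B": 0.25, "C": 0.15}   # assumed reference shares

for group, gap in representation_gaps(training, "group", population).items():
    print(f"group {group}: {gap:+.0%} relative to population share")
```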
These issues are not just technical problems; they are ethical challenges that require responsible management.
As a first step, developers—and others involved in the creation of AI systems—need to recognise that training data can be biased or unrepresentative, and that the design of a certain algorithm might be discriminatory.
Addressing these biases requires proactive solutions at every stage of the AI development lifecycle. First, bias can be mitigated by using diverse datasets that reflect the full spectrum of society. These can be obtained by actively seeking out data from underrepresented groups and ensuring that demographic diversity—whether by race, gender, age, or other factors—is represented. Second, AI systems should be regularly audited to identify any gaps in the data or disproportionate outcomes affecting specific groups.
Third—and very importantly—developers must prioritise transparency in their models; this includes making the design choices, data sources, and decision-making processes clear and understandable to both internal stakeholders and the public. The ideal situation is one where, when an AI system does demonstrate bias, the source of the problem can be traced—whether it lies in the training data or in the algorithm’s design.
We need to mention fairness-aware algorithms, which developers can incorporate to counteract bias. Such algorithms may prioritise equality alongside accuracy, ensuring that predictions or classifications do not unfairly disadvantage a specific group.
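Demographic parity is one of the simpler criteria such algorithms optimise for or are audited against: it asks whether each group receives positive predictions at a similar rate. A minimal sketch, with invented predictions:

```python
def demographic_parity_gap(predictions, groups, positive=1):
    """Difference between the highest and lowest rate of positive
    predictions across groups; 0 means parity on this metric."""
    totals = {}
    for pred, group in zip(predictions, groups):
        hits, n = totals.get(group, (0, 0))
        totals[group] = (hits + (pred == positive), n + 1)
    rates = {g: hits / n for g, (hits, n) in totals.items()}
    return max(rates.values()) - min(rates.values()), rates

# Invented example: a model that approves group "A" far more often.
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)                       # {'A': 0.8, 'B': 0.2}
print(f"parity gap: {gap:.0%}")    # 60% -- worth investigating
```

Demographic parity is not always the right criterion; alternatives such as equalised odds compare error rates across groups instead, but the shape of the check is similar.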
Beyond data, algorithms, and development, there is a need for accountability mechanisms—such as corrective actions or recalibrations to improve fairness—to address discriminatory outcomes. Finally, consider that real-time monitoring can identify biases early and address them quickly—and that continual evaluation can verify that AI systems remain fair, ethical, and unbiased.
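As one illustration of what such monitoring could look like (a sketch under assumptions, not a production design), the snippet below tracks per-group positive-outcome rates over a sliding window of decisions and flags when they diverge beyond a threshold.

```python
import random
from collections import deque

class BiasMonitor:
    """Sliding-window sketch: track per-group positive-outcome rates over
    a stream of decisions and flag when they drift too far apart."""
    def __init__(self, window=200, threshold=0.2, min_samples=30):
        self.window = deque(maxlen=window)
        self.threshold = threshold
        self.min_samples = min_samples

    def record(self, group, outcome):
        self.window.append((group, outcome))
        totals = {}
        for g, o in self.window:
            pos, n = totals.get(g, (0, 0))
            totals[g] = (pos + o, n + 1)
        # Only compare groups with enough samples, to avoid noisy alerts.
        rates = {g: pos / n for g, (pos, n) in totals.items()
                 if n >= self.min_samples}
        if len(rates) > 1 and max(rates.values()) - min(rates.values()) > self.threshold:
            return rates
        return None

# Toy stream in which group "A" receives favourable outcomes more often.
random.seed(1)
monitor = BiasMonitor()
for i in range(300):
    group = random.choice(["A", "B"])
    outcome = 1 if random.random() < (0.7 if group == "A" else 0.4) else 0
    alert = monitor.record(group, outcome)
    if alert:
        print(f"decision {i}: outcome rates diverged: {alert}")
        break
```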
Whence Accountability?
To recapitulate the most important ideas we’ve discussed thus far: AI has tremendous potential to enhance the consumer experience—but it can also create filter bubbles, limit exposure to diverse choices, and amplify biases that affect consumer decision-making. The widespread use of AI in commerce has broader implications for society: Discriminatory algorithms can exacerbate existing inequalities and deepen social divisions.
Consider now that as more and more commercial activity begins to depend on AI systems, the ethical concerns the systems raise become more significant—given that AI-driven decisions influence corporate actions, consumer behaviour, societal structures, and more. In the extreme, the role of AI in shaping public opinion and consumer behaviour raises questions about autonomy and even free will: It is not a stretch to say that consumers who are constantly guided by biased algorithms have lost—to a good extent—the ability to make independent choices.
Questionable ethics. Limited exposure to choices. Involuntary discrimination. Perpetuation of stereotypes. Consumer behaviour shaped by black-box algorithms. Reduced autonomy in consumer choice. The loss of free will. If all of these are automated, who is accountable?
Towards Responsible AI
The issues we just touched upon—limited exposure to choices, involuntary discrimination, reinforcement of stereotypes, diminished consumer autonomy, and the ambiguity of accountability—are profound. Addressing them requires deliberate, proactive measures.
Transparency and Explainability
An essential step towards accountability is to increase transparency in AI systems. Organisations must prioritise clear explanations of how their algorithms function, what data they utilise, and how decisions are arrived at. Consumers should be provided with comprehensible information about why they receive certain recommendations or face certain decisions—which can restore autonomy and reduce the impact of filter bubbles. They should have controls over personalisation algorithms—controls that allow them to opt out, or to customise settings. Such transparency also allows external experts and regulators to evaluate the fairness and ethical standards of AI systems.
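To make “comprehensible information” concrete: for a simple linear scoring model, an explanation can be as direct as listing the feature contributions that pushed a recommendation to the top. The weights and features below are hypothetical; real systems would need explanation techniques appropriate to their models.

```python
def explain_recommendation(weights, features, top_n=3):
    """For a linear scorer, rank the feature contributions behind a
    recommendation -- a minimal 'why am I seeing this?' explanation."""
    contributions = {name: weights.get(name, 0.0) * value
                     for name, value in features.items()}
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))[:top_n]

# Invented weights/features for a hypothetical content recommender.
weights = {"watched_similar": 2.0, "same_creator": 1.2,
           "trending": 0.4, "sponsored": 0.9}
features = {"watched_similar": 1.0, "same_creator": 1.0,
            "trending": 0.2, "sponsored": 0.0}

for name, contribution in explain_recommendation(weights, features):
    print(f"{name}: {contribution:+.2f}")
```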
Regular and Independent Auditing
Towards preventing bias—or at least identifying it so it can be rectified—organisations should commit to regular audits of their AI systems. Independent, ongoing third-party audits can uncover hidden biases, unrepresentative datasets, or discriminatory outcomes; those issues can and should be addressed before they become embedded in organisational systems.
Inclusive Data Practices
Organisations must actively seek diverse, representative data that reflect real-world populations and scenarios. Incorporating data from groups diverse in terms of ethnicity, gender, and socioeconomic status will help prevent AI systems from perpetuating inequities. Along similar lines, organisations should promote diversity and inclusion in their AI development teams to ensure a broad range of perspectives at the development stage.
Fairness-Aware Algorithms and Ethical Design
Designers and developers of AI systems bear responsibility not only for performance metrics such as accuracy but also for fairness and ethics. By integrating fairness-aware algorithms, ethical guidelines, and anti-discrimination standards into the AI design and training processes, organisations can reduce the risk of perpetuating harmful biases. Ethical AI frameworks must become a foundational part of technological design rather than an afterthought.
Clear Governance Structures
Ambiguity around accountability can be addressed by establishing clear governance structures. These involve guidelines and standards for AI development that encompass the principles of fairness, transparency, and accountability. Organisations that use AI must clearly define roles and responsibilities such that they can always answer the question of who is accountable if an AI system makes a biased decision. This might involve establishing independent ethical review boards that scrutinise AI systems before they are deployed, ensuring they align with societal values. Such boards should be tasked with ongoing AI-system monitoring, updating ethical guidelines, and managing responses to ethical dilemmas.
Industry-wide Collaboration and Regulation
Finally, responsible AI requires collaboration across industries and with policymakers. Common ethical standards and regulatory frameworks can, to a good extent, ensure accountability and fairness. Companies need to openly share insights and challenges, contributing to industry-wide improvements in AI ethics and accountability.
A Final Word
Ethical AI in commerce is not just a technical challenge—it is a moral imperative.
Prioritising ethics, transparency, and accountability is necessary to harness the transformative potential of AI while mitigating its risks.
Accountability isn’t a question of assigning blame; it’s about committing to a continuous, transparent, and collective effort to ensure AI benefits all of society.
Finally, moving towards responsible AI is not just a technological challenge; it is a societal imperative that requires a collaborative effort involving policymakers, researchers, industry leaders, and the public.
References and Further Reading
- FAI: Fairness-Aware Algorithms for Network Analysis (Michigan State University)
- Sponsored Products (Amazon Ads)
- Understanding the AI Auditing Framework (Codewave)
- Stop Screening Job Candidates’ Social Media (Harvard Business Review)
- Study finds gender and skin-type bias in commercial artificial-intelligence systems (MIT News)
- Improving Skin Color Diversity in Cancer Detection: Deep Learning Approach (National Library of Medicine)
- The potential for artificial intelligence in healthcare (National Library of Medicine)
- Ethnicity and psychopharmacology (National Library of Medicine)
- How to Develop an Effective AI Governance Framework? (Securiti)
- Women Are Still Under-Represented in Medical Research. Here’s Where the Gender Gap Is Most Pronounced (Time)
- Can Artificial Intelligence (AI)-driven personalization influence customer experiences? (Department of Business Studies, Uppsala University)
- Confirmation bias (Britannica)
- Health Equity and Ethical Considerations in Using Artificial Intelligence in Public Health and Medicine (CDC)
- Personalization Vs. Privacy: Balancing Consumer Interests (Forbes)
- AI in Healthcare (ForeSee Medical)
- Responsible AI: Compliant, ethical and innovative (FRISS)
- What is AI bias? (IBM)
- AI transparency: What is it and why do we need it? (TechTarget)
- There is no such thing as race in health-care algorithms (The Lancet)