
What is AI Bias? Understanding How Data Shapes AI Decisions

Sometimes known as algorithmic or machine learning bias, AI bias occurs when an artificial intelligence system privileges one group of users over others. It stems from the data used to train the chatbot or other AI system and can result in unfair outcomes.

The scenario: You're preparing for a job interview. The stakes feel high. You want — you need — this job. So you do what millions of people now do: you ask an AI chatbot for advice on salary negotiation. You upload the job description, describe your qualifications and ask what number you should propose. The chatbot response comes back with confident, specific advice: you should aim for $65,000 to $70,000.

What you don't know is that another applicant — same credentials, same position — just received different advice from the same chatbot. Their number: $75,000 to $82,000.

The gap isn't random. It tracks with something the algorithm noticed about you, something you may have mentioned in an earlier conversation, or that it inferred from patterns in how you write.

What just happened?

Without realizing it, you just encountered AI bias. Sometimes known as algorithmic or machine learning bias, it occurred in this scenario because the chatbot's training data privileged one group of users over others. The result can be unfair outcomes like the gap in starting salary you just witnessed.

Where Does AI Bias Come From?

Shawn Powers, EdD, Senior Director of AI Policy at Southern New Hampshire University

You might ask how an AI system can have bias. The fact is that the AI chatbot you consulted didn’t create the bias. Instead, its output reflects the bias of multiple sources that the chatbot draws from.

What are those multiple sources? We can start with the training data, which contains decades of salary information reflecting persistent wage gaps. The AI chatbot treats these patterns as features of how the world works. When it generated your salary recommendation, the tool was predicting what's likely based on historical patterns — which can mean reproducing inequality in its recommendations, noted Brookings, a research organization.
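The mechanism is easier to see in miniature. Here is a deliberately simplified, hypothetical sketch (not any real chatbot's code, and the numbers are invented): a model that "predicts" a salary by averaging historical salaries will faithfully echo whatever gaps exist in its training data.

```python
# Hypothetical illustration of pattern-based prediction reproducing bias.
# The groups and salaries below are toy, invented data.
from statistics import mean

# Toy training data: identical roles, but group B was historically underpaid.
historical_salaries = {
    "group_a": [78000, 80000, 82000, 79000],
    "group_b": [66000, 68000, 70000, 67000],
}

def recommend_salary(group: str) -> float:
    """Predict "what's likely" by averaging past salaries for the group."""
    return mean(historical_salaries[group])

# Same credentials, different inferred group, different advice.
print(recommend_salary("group_a"))  # 79750
print(recommend_salary("group_b"))  # 67750
```

Nothing in this sketch is malicious: the model is accurately summarizing its data. The unfairness lives in the data and in the design choice to treat historical averages as recommendations.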

There are also design elements in the chatbot that weigh different factors, optimize for one choice over another, and decide when something is "similar enough" to past examples to pass muster. These mechanisms, designed by humans, encode assumptions about what constitutes a "good answer." However, a good answer derived from those assumptions is not neutral and likely is not fair, according to an article in the Boston Review.

And you need to question the foundational belief that past patterns predict future outcomes reliably. Are the past salaries of people like you the only factor to determine what's possible for you? Does your tomorrow look like someone else’s yesterday?

This matters because you live in an AI-infused world. The advice from the chatbot, and the many AI systems you encounter every day, can mediate your access to employment, credit and more.


What is a Real-Life Example of AI Bias?


You accepted the job offer and negotiated for a starting salary of $68,000 — the figure the chatbot recommended. Your gut said higher, but the chatbot had data you didn't have and patterns you couldn't see. The chatbot was confident. You trusted the AI system.

Six months later, you apply for an apartment. The landlord's screening software flags you as higher risk because your income sits just below the threshold it uses to judge whether you'll pay rent on time. You're approved, barely, but with a higher security deposit.

Two years on, you seek a car loan. The lending algorithm considers your salary history and your rent-to-income ratio. Both numbers are lower than they could have been had you followed your gut when negotiating that starting salary, and your interest rate reflects it.

In time come the credit card limit, the insurance premium, the mortgage pre-qualification: lower approvals and higher premiums at every step of your future. These are real-life examples of AI bias, and they affect countless people.

And so, you've learned to expect less. When the systems consistently approve you for smaller amounts, charge you more and flag you as riskier, you begin to internalize the assessment. The algorithms don't just predict your worth. They teach you what to believe about it and about yourself.

What’s at Stake


Though it matters profoundly, this isn't just about fairness to individuals. When algorithms systematically miscalculate worth across entire populations, they contribute to shaping who gets to participate meaningfully in economic and civic life. Hiring algorithms can determine who enters professions, while credit algorithms can determine who can start businesses and own homes.

Each biased decision compounds the structural advantages for some and the structural exclusion of others.

This can lead to an erosion of trust in institutions. When people discover the systems claiming objectivity are reproducing the very biases the systems promised to transcend, those who can might exit the system entirely. Those who can't might comply while trusting less. Either way, meaningful civic agency recedes.

But fixing AI bias isn't simply an engineering problem. Some of what we call bias reflects genuine disagreement about competing values: what researchers call the "trade-offs" inherent to AI systems. Should algorithms optimize for efficiency or equity? Maximize prediction or preserve dignity? Pursue optimization or honor justice? These aren't technical questions with technical answers.

This is not to say that technical efforts don't matter. They can be effective at auditing AI systems for discriminatory patterns, requiring transparency so people can understand why they received certain decisions, and establishing governance frameworks that determine who's accountable when algorithms cause harm. But these are process improvements, not solutions. They help us build better biased systems; they don't answer the harder question of what fairness is.
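To make "auditing for discriminatory patterns" concrete, here is a minimal, hypothetical sketch of one common audit step: comparing average outcomes across groups and flagging large disparities. The data are invented, and the 80% threshold borrows the "four-fifths" guideline used in some employment-selection contexts purely as an illustrative cutoff, not as a universal standard.

```python
# Hypothetical bias-audit sketch: surface group disparities worth investigating.
# Data and threshold are illustrative assumptions, not a real auditing standard.
from statistics import mean

def disparity_ratio(outcomes_a: list, outcomes_b: list) -> float:
    """Ratio of group B's average outcome to group A's (1.0 means parity)."""
    return mean(outcomes_b) / mean(outcomes_a)

# Invented salary recommendations the audited system gave to each group.
recommendations_a = [79750, 80500, 78250]
recommendations_b = [63000, 62000, 64000]

ratio = disparity_ratio(recommendations_a, recommendations_b)
print(f"parity ratio: {ratio:.2f}")  # prints "parity ratio: 0.79"
if ratio < 0.8:  # illustrative four-fifths cutoff
    print("flag: disparity exceeds the four-fifths guideline")
```

An audit like this can reveal that a disparity exists; it cannot say whether the disparity is unjust, or what a fair recommendation would have been. That is precisely the gap between process improvements and solutions.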

Ideally, society will support efforts by:

  • Individuals questioning algorithmic authority
  • Institutions being held accountable for AI-assisted decision-making
  • Collective movements demanding that AI systems serve democratic values rather than merely predict historical patterns



The Choice Ahead


That salary advice from the chatbot? It came from a system doing exactly what it was designed to do: predict based on patterns. The algorithm processed your credentials, the language you used in your prompts, perhaps signals you didn't know you were sending when interacting with the chatbot. It consulted decades of salary data that reflect persistent inequalities and generated a number. The other candidate, who was positioned differently in those historical patterns, received a different number.

The system worked as intended. And that's the problem.

As AI systems become infrastructure in shaping hiring, lending, housing and healthcare, the dilemma isn't just technical anymore. The dilemma is about leadership: who decides what counts as fair, and whether those most affected get a meaningful voice in that decision.

If you're interested in AI ethics, discover more about SNHU’s generative AI concentration: Find out what courses you'll take, skills you’ll learn and how to request information about the program.

Shawn Powers, EdD (she/her) serves as the senior director of AI policy at SNHU. Her work in academic policy extends back to her nearly 10-year role as an associate dean at the School of Arts, Sciences, and Education at SNHU. As senior director, Powers oversees the guidelines and policies of effective and ethical AI use throughout the university. Her vantage point is enhanced by her extended work in philosophy and ethics in education as both a teacher and researcher.

Powers has presented widely on the issues of ethical AI, including the AIxHEART conference where she presented her paper, “Prompting a Dialectic of Freedom in AI.” She is also a 2025 EDSAFE Women in AI Fellow. Connect with her on LinkedIn.

