
When AI Can’t See You: Bias and Misrepresentation


When people talk about AI bias and misrepresentation, the focus often falls on generative tools such as chatbots or image generators. But bias in technology is not new.


For years, automated systems have classified, labelled and interpreted information in ways that reflect the biases present in their training data.


As AI becomes more widely used, these issues are becoming more visible.


In this post, I explore how bias and representation issues can emerge in AI systems and why these questions matter for researchers and evaluators working with evidence.


Why Representation Matters

AI systems learn from existing data. If that data does not represent the full diversity of human experience, the outputs produced by those systems can reinforce existing inequalities.


This can affect how people are represented in images, how information is interpreted, and even how decisions are supported.


For researchers and evaluators, this raises important questions about evidence and representation.


Questions Worth Asking

As AI tools become more embedded in research workflows, it can be helpful to ask questions such as:

  • Who is represented in the data?

  • Who might be missing?

  • What assumptions are embedded in the tools we use?

  • How might AI outputs influence interpretation?


These questions are familiar in research practice, but they take on new significance when AI is involved.
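As a rough illustration of the first two questions, a short script can surface which groups appear in a dataset and which are scarce. This is only a sketch, not a substitute for careful sampling analysis; the `representation_report` helper, the `region` field, and the survey records below are all hypothetical:

```python
from collections import Counter

def representation_report(records, field, threshold=0.2):
    """Count how often each group appears in `records[field]` and
    flag groups whose share of the total falls below `threshold`."""
    counts = Counter(r[field] for r in records if r.get(field))
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {
            "count": n,
            "share": round(share, 2),
            "underrepresented": share < threshold,
        }
    return report

# Hypothetical survey responses, heavily skewed towards urban participants
responses = [{"region": "urban"}] * 9 + [{"region": "rural"}]

print(representation_report(responses, "region"))
```

Here the rural group makes up only 10% of responses and is flagged, while the urban group is not. A real check would also consider who never made it into the data at all, which no script can see.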


Keeping Human Meaning-Making at the Centre

AI can assist with analysis and summarisation, but human judgement remains essential.

Researchers and evaluators are responsible for interpreting evidence, understanding context and ensuring that findings reflect real experiences.


Technology can support this work, but it should not replace it.


Thinking About AI in Your Organisation

If your organisation is beginning to explore AI but you are unsure how it should be used in practice, you can book a call to discuss your team’s needs.


Takeaway: Responsible AI use begins with asking thoughtful questions about data and representation.

Looking For Support?


  • Subscribe to my Career Development Newsletter and download my Free Coaching Tools pack.


  • Get in touch for my free Online Career and Impact Workshops Brochure and Online AI Workshops Brochure. Get your university to bring me in to deliver AI training, or workshops on researcher career development, leaving academia well, and research impact!


  • Book a 1:1 Coaching Call and get my bespoke advice and support.


  • For more tips, join my Alt Ac Careers Facebook Group.


  • Connect with me on LinkedIn and grow your network through mine!
