
Training and Ethics

Introduction

AI is transforming every sector we studied in our group — from medicine and education to entertainment. While other members explored AI's potential to improve human life, my segment focused on its ethical consequences in the job market. I thought this topic would be clear and straightforward, but I realised that it was complicated. The issue raised uneasy questions — not just about the future of employment, but about fairness, control, and trust in AI systems. Within our group, aligning research topics wasn’t always easy. Communication delays, missed meetings, and different approaches slowed things down. Some groupmates were thinking more about the looks of our website, while I spent most of my time looking at job loss and discrimination. But in the end, our range of topics helped us connect ideas across sectors, especially around accountability and fairness in how AI is designed and used.

Frey and Osborne’s (2017) prediction that 47% of U.S. jobs are at high risk of automation dominated our early group conversations. It gave our Artefact a strong starting point and helped those working on AI in healthcare or education realise that automation wasn’t limited to factories. However, their model assumes that entire jobs are automatable, which can exaggerate the threat. Arntz, Gregory, and Zierahn (2016) offer a more cautious view: only 9% of jobs are at risk when the analysis looks at individual tasks rather than whole roles. Their method, based on OECD data, felt more realistic and helped me take a balanced approach in my review. These two sources taught me that different methods give different answers, and that the way research is designed shapes the ethical conclusions we draw from it.

To understand how AI is changing global work, I looked at Berg et al. (2018), who show how digital labour platforms are growing but often exploit workers through low pay, poor conditions, and no protection. Their research added a global and economic layer to what I had mostly viewed as a tech issue. It made me think about how AI affects people differently, especially hurting those who can’t afford protection or legal help.

I also worried about unfair AI systems. Dastin (2018) covered Amazon’s hiring tool that was biased against female job seekers, showing how biased training data produces discriminatory results when no one checks the system. West, Whittaker, and Crawford (2019) go deeper, explaining that these issues stem from a lack of diversity in AI teams. This source made me reflect on how systems meant to be neutral often reinforce existing inequalities because of who builds them and whose interests guide them.

Binns (2018) helped me understand fairness from a theoretical perspective. He argues that AI fairness isn’t just about treating everyone the same but about addressing the deeper values that define fairness. His work was hard to explain to the group at first, especially when compared to the more practical sources, but it shaped the way I talked about ethics in meetings. For instance, when someone mentioned how AI improves grading in schools, I was able to raise the point that it might mislabel students based on biased data. A live example was putting this very text into an AI detector to see how accurate such detectors are; unsurprisingly, it returned a high score for what is an original piece of work.

Noble’s (2018) Algorithms of Oppression stood out most for me. She shows how AI, specifically in search engines, can perpetuate racism and sexism. At first, this seemed unrelated to employment, but her analysis of how systems reflect social power structures helped me understand hiring algorithms in a new light. Noble’s emphasis on lived experience, alongside academic research, made her arguments feel grounded. Even though one group member thought the book was “too political,” it helped me see why ethical discussions must consider who is being impacted and how.

To explore potential solutions, I looked at ethical frameworks. Floridi et al. (2018) propose five key principles for ethical AI: beneficence, non-maleficence (avoiding harm), justice, autonomy, and explicability. These ideas helped structure discussions in other group areas too, such as using AI in surgery or education. However, I noticed that many guidelines assume companies or governments will act ethically, which is rarely guaranteed: good in theory, but hard to rely on in practice.

The European Commission’s High-Level Expert Group on AI (2019) offers more practical guidance, including principles like transparency, human agency, and technical reliability. These ideas were directly useful when building our Artefact. We used the “transparency” concept to show how AI systems must explain their decisions, whether they’re diagnosing patients or filtering job applicants. Although their guidelines are policy-focused and specific to Europe, they offer a clear step toward building trust in AI.

Eubanks (2018) provided a strong counterpoint by showing how automated systems in social services can target and punish poor people. Her case studies were emotional and honest. Even though her book isn’t only about jobs, it raises critical questions about how AI systems can dehumanise people. I used her examples when talking with the group about low-wage and marginalised workers, especially when discussing fairness in AI beyond just functionality.

Throughout this process, I applied the CRAAP test (Currency, Relevance, Authority, Accuracy, and Purpose) to every source. Most were current and relevant, but they varied in purpose and authority. Academic articles like Binns (2018) and Floridi et al. (2018) were detailed and well-structured, while others like Dastin (2018) or West et al. (2019) offered real-world examples and detailed reporting. Together, they gave me a balanced view of the ethical issues.

Working in a group brought both benefits and challenges. I had to explain abstract ethical ideas in simpler ways, which helped me become clearer and more confident in my arguments. Some members didn’t initially take ethics seriously, but once we reviewed the different sources together, they began to see why it mattered.

During the early planning phase, I used ChatGPT (2023) to brainstorm topic angles and organise some of my thoughts. It didn’t create content, but it helped clarify structure, especially when I was stuck on how to link certain ideas. That experience made me reflect on how even AI tools used in research can shape the direction of your thinking, for better or worse.

What affected me most was realising that AI is not neutral; it is moulded by human choices. As a student who will soon enter the job market, I now know that my CV might be filtered by an algorithm before anyone sees it. This project showed me that the ethics of AI isn’t just a technical issue — it’s a human one. And it matters now more than ever.