The Effects of AI on HR Principles and on Employees in the Workplace

Where HR Finds Itself in the Age of AI

Walk into almost any organisation today and you’ll find AI quietly sitting in the background of HR work. CV-screening tools rank candidates before a human ever sees their applications. Chatbots answer basic HR queries at midnight. Systems predict who is “at risk” of resigning. Generative AI drafts job adverts, training materials and even performance feedback.

For South African HR teams, this is happening on top of an already complex environment: transformation targets, skills shortages, tight budgets, hybrid work, strong unions and a demanding regulatory framework. AI is being sold as the answer to “do more with less” – faster turnaround times, richer analytics and slicker employee self-service.

At the same time, the core principles of HR have not changed. Good people practice is still anchored in:

• Fairness and equity – treating people consistently and without discrimination.
• Respect and dignity – recognising the person behind the employee number.
• Confidentiality and privacy – handling personal information with care.
• Transparency and due process – being able to explain decisions and follow a clear procedure.
• Inclusion and transformation – actively addressing historic and present-day inequality.

The real question is therefore not whether AI will affect HR – it already has. The question is whether AI will strengthen these principles or quietly erode them in the background while we chase efficiency.

When Algorithms Collide with HR Principles

AI in HR is powerful precisely because it works with data about people. That is also where the complications start.

1. Bias and Discrimination in a Transformed – but still Unequal – Labour Market

AI systems “learn” from historical data. In South Africa, that history reflects deep inequalities in education, employment and access to opportunity. If an algorithm is trained on past hiring or promotion decisions, it may learn to favour certain universities, suburbs, language patterns or career paths that are strongly associated with particular race, gender or age groups.

On the surface the system appears neutral: it doesn’t “see” race; it just ranks CVs. In practice, it can easily reproduce the very patterns the Employment Equity Act is trying to dismantle. Without careful design and monitoring, AI can give a polished, “objective” layer to decisions that are still structurally biased.
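To make the proxy-variable problem concrete, here is a minimal, hypothetical sketch (invented data, not a real screening tool): a rule "learned" from biased historical hiring decisions reproduces that bias through suburb alone, even though race never appears in the data.

```python
# Hypothetical illustration (invented data, not a real tool): a screening
# rule "learned" from biased historical hiring decisions reproduces that
# bias through a proxy variable (suburb), even though race never appears.

history = [
    {"suburb": "A", "hired": True},
    {"suburb": "A", "hired": True},
    {"suburb": "A", "hired": False},
    {"suburb": "B", "hired": True},
    {"suburb": "B", "hired": False},
    {"suburb": "B", "hired": False},
]

def learn_hire_rate(records):
    """Naive 'model': score each suburb by its historical hire rate."""
    counts, hires = {}, {}
    for r in records:
        counts[r["suburb"]] = counts.get(r["suburb"], 0) + 1
        hires[r["suburb"]] = hires.get(r["suburb"], 0) + int(r["hired"])
    return {s: hires[s] / counts[s] for s in counts}

model = learn_hire_rate(history)

# Two equally qualified candidates who differ only in suburb are now
# ranked differently - the past pattern has become the "objective" score.
score_a, score_b = model["A"], model["B"]
assert score_a > score_b
```

Nothing in the features mentions race, yet if suburb correlates with race (as it often does in South Africa), the "neutral" ranking carries the old pattern forward.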

2. Transparency and Explainability – “The System Decided”

Traditional HR processes at least allow a conversation. A candidate can ask for interview feedback. An employee can challenge a performance rating. With AI, the reason for a decision can be hidden inside complex models or vendor tools. If a candidate is screened out by an algorithm, can the organisation clearly explain why? If a retrenchment selection process uses a risk-scoring tool, can HR demonstrate how the score was calculated and whether it was fair?

South African labour law expects decisions to be justifiable and procedurally fair. POPIA also gives data subjects rights relating to automated decision-making. Simply saying “that’s how the system works” does not satisfy those requirements – and it certainly doesn’t build trust.
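One practical reading of "explainability" is a score built from explicit, weighted criteria, so each factor's contribution can be stated in plain language. A minimal sketch, using invented weights and features, of what an explainable score looks like in contrast to a black box:

```python
# Minimal sketch with invented weights and features: a transparent
# screening score where every contribution can be explained to the
# candidate, unlike an opaque vendor model.

WEIGHTS = {
    "years_experience": 2.0,        # hypothetical weightings
    "relevant_qualification": 5.0,
    "assessment_result": 3.0,
}

def score_with_explanation(candidate):
    """Return the total score and each feature's contribution to it."""
    contributions = {f: WEIGHTS[f] * candidate[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

total, why = score_with_explanation(
    {"years_experience": 4, "relevant_qualification": 1, "assessment_result": 0.8}
)
for feature, value in why.items():
    print(f"{feature}: contributed {value:.1f} points")
```

A structure like this lets HR answer "why was I screened out?" factor by factor, which is exactly what an unexplainable model cannot do.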

3. Privacy, Surveillance and the Psychological Impact on Employees

Many AI tools feed on detailed data about how people work: keystrokes, e-mails, call recordings, GPS locations, website usage, even tone of voice. Used carelessly, this drifts into digital surveillance. Employees may feel constantly watched and scored, unsure what is being tracked or how it might be used later in disciplinary hearings, bonus decisions or restructuring exercises. Over time this can drive anxiety, absenteeism and a breakdown in psychological safety – even if the intention was simply to boost productivity.

In the South African context, intrusive monitoring can easily clash with POPIA and with the broader constitutional commitment to dignity and privacy, especially where consent is assumed rather than truly given.

4. Job Redesign, Skills Shifts and Fears about Replacement

AI tools are particularly good at routine, repetitive and rules-based tasks – exactly the kind of work done by many administrative and entry-level roles, both in HR and across the business. As chatbots answer queries and systems generate standard documents, it is logical for organisations to question how many people they still need doing this work.

While new roles do emerge (HR Data Analyst, AI Product Owner, Prompt Specialist), employees do not automatically transition into them. Without a deliberate plan for reskilling and redeployment, AI can be experienced as a threat rather than an opportunity – especially in a country with high unemployment and a painful history of restructuring.

5. Responsibility and Accountability – Who Owns the Decision?

When an AI-driven process leads to an unfair outcome, it can be tempting for line managers or HR to point to the tool: “The shortlist came from the system”, “The risk score said this person is a flight risk”, “The model flagged this employee as underperforming”.

But legally – and ethically – responsibility for employment decisions cannot be outsourced to software vendors. HR remains accountable for ensuring that processes are fair, defensible and aligned with company values. Relying blindly on AI undermines one of HR’s most important roles: acting as a steward of people and principle in the organisation.

Keeping the “Human” in Human Resources

The challenge is not to stop AI at the door, but to shape it deliberately so that it supports – rather than undermines – HR principles and employee experience. Below are practical steps HR teams can take.

1. Start with Principles, Not with Tools

Before signing a contract with any AI vendor, HR should be clear on:

• What business problem we are actually trying to solve.
• Which HR principles must not be compromised in the process.
• Where human judgement is non-negotiable.

A simple rule of thumb is: AI may inform decisions, but people remain responsible for them.

Link AI projects back to the organisation’s values, its employment equity goals, and its obligations under legislation such as the LRA, BCEA, EEA and POPIA. This anchors the conversation in something more solid than glossy demos.

2. Build an AI Governance Framework for People Decisions

Treat AI in HR as a governance topic, not just a technology rollout. This typically includes:

• A cross-functional steering group (HR, IT, legal, risk, data specialists and, ideally, employee or union representatives).
• Clear criteria for approving any AI tool that touches employment decisions.
• Periodic reviews of how these tools are performing, including unintended side effects.

For external solutions, HR should feel comfortable asking tough questions about training data, bias testing, explainability and data security – and should insist on answers in plain language, not just technical jargon in a slide deck.

3. Develop or Update an AI Policy

Many South African organisations already have policies for social media, data privacy and IT use. AI now needs similar treatment. An effective AI policy should, at minimum:

• Set out which tools are permitted, which are prohibited and who can authorise new tools.
• Clarify acceptable use (e.g. drafting documents, idea generation, coding support) and non-acceptable use (e.g. feeding confidential employee information into public tools).
• Explain how POPIA and internal confidentiality obligations apply when using AI.
• Spell out consequences for misuse.

For HR specifically, it may be useful to have supporting guidelines that deal with recruitment, performance management, learning, employee relations and workforce planning.
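The "permitted versus prohibited" rules above can even be encoded so that requests are checked consistently rather than argued case by case. A rough sketch, with invented tool and use-case names:

```python
# Rough sketch (invented tool and use-case names): encoding an AI policy's
# allowlist and prohibited uses so requests are checked consistently.

POLICY = {
    "permitted_tools": {"internal_hr_copilot", "approved_cv_parser"},
    "prohibited_uses": {"personal_employee_data_in_public_tools"},
}

def check_request(tool, use_case):
    """Return (allowed, reason) for a proposed use of an AI tool."""
    if tool not in POLICY["permitted_tools"]:
        return False, "Tool is not on the approved list - seek authorisation"
    if use_case in POLICY["prohibited_uses"]:
        return False, "Use case is prohibited by the AI policy"
    return True, "Allowed under the AI policy"

allowed, reason = check_request("approved_cv_parser", "draft_job_advert")
```

The point is not the code itself but the discipline it forces: the policy must name its approved tools, its prohibited uses and who can change either list.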

4. Keep Humans in the Loop at Critical Points

To protect fairness and dignity, AI outputs should be treated as input, not as final answers. Practical examples include:

• Recruiters review and adjust AI-generated shortlists, rather than accepting them blindly.
• Managers sense-check algorithmic performance flags against real-world context.
• Any decision that significantly affects a person’s job, pay or future is signed off by a human who understands both the data and the context.

This both reduces risk and reminds employees that they are being evaluated by people who can listen and adapt, not by a black-box machine.
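As a sketch of what "human in the loop" can mean in practice (a hypothetical workflow, not a real HR system), an AI flag can be recorded as an input that only becomes a decision once a named person signs it off with a rationale:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical workflow, not a real HR system: an AI recommendation is
# only an input; a significant decision requires a named human sign-off.

@dataclass
class Decision:
    employee_id: str
    ai_recommendation: str          # e.g. "flag: possible underperformance"
    reviewer: Optional[str] = None
    rationale: Optional[str] = None

    def sign_off(self, reviewer, rationale):
        """A named human must record their reasoning before the decision stands."""
        if not reviewer or not rationale:
            raise ValueError("A human reviewer and a rationale are required")
        self.reviewer, self.rationale = reviewer, rationale

    @property
    def is_final(self):
        return self.reviewer is not None

d = Decision("E123", "flag: possible underperformance")
assert not d.is_final               # the AI flag alone decides nothing
d.sign_off("T. Manager", "Flag not supported by project context; no action taken")
assert d.is_final
```

The useful by-product is an audit trail: every significant decision carries a human name and a written rationale, which is precisely what procedural fairness requires.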

5. Make Transparency and Voice Part of the Design

Employees are far more likely to accept AI if they feel informed and respected. HR can:

• Tell employees where and how AI is being used in the organisation.
• Explain, in plain language, what data is collected, how it is processed and what safeguards exist.
• Provide channels to query or challenge AI-informed decisions, with a human reviewing the case.
• Include AI in consultation processes with unions, workplace forums and Employment Equity Committees.

This approach aligns with South Africa’s emphasis on participation and procedural fairness, and it helps to prevent AI from becoming something that is “done to” employees.

6. Invest in Skills, Not Just Software

AI will change the skills profile of HR and the broader workforce. Instead of letting this happen by accident, HR can:

• Build AI literacy for HR practitioners and line managers – not to turn them into data scientists, but so they can ask good questions and spot risks.
• Use skills audits and learning programmes to help employees move into higher-value work as routine tasks are automated.
• Integrate AI-related skills into existing talent, succession and development plans, especially for younger employees entering the workforce.

Done well, AI becomes a catalyst for growth, not a trigger for redundancy.

7. Reimagine the Role of HR

If AI can take over much of the transactional work in HR – data capturing, scheduling, monitoring deadlines – that frees up capacity. The opportunity is to reinvest that time into:

• Coaching and supporting managers.
• Strategic workforce and skills planning.
• Culture, engagement and wellbeing initiatives.
• Deepening transformation and inclusion work.

In other words, AI can allow HR teams to spend more time being truly human – listening, mediating, designing better work – instead of being buried under administration.

8. Review and Refine Continuously

AI tools evolve quickly, and so do organisational needs. HR should treat AI as an ongoing journey, not a once-off project. Regularly review:

• Whether the tool is still solving the right problem.
• How employees are experiencing it.
• Whether it is supporting or undermining core HR principles.

If the data shows that an AI-driven process is creating unfair outcomes, the answer is not to “trust the model more”. It is to pause, adjust or even switch the tool off.

Conclusion

AI will undoubtedly reshape HR work and the day-to-day experience of employees in South African organisations. It can help us spot patterns we would otherwise miss, personalise learning at scale and remove tedious admin. It can also, if left unchecked, hard-code old biases, create new forms of surveillance and distance HR from the very people it is meant to serve.

The real effect of AI on HR principles – and on employees – will therefore not be determined by the technology itself, but by the choices HR leaders and business owners make now. If we approach AI with clarity, humility and courage, it can become a tool that amplifies fairness, dignity and inclusion rather than eroding them.

That is the kind of future-of-work story worth telling.

Written by Jason van Rooyen