Anwar Khalil · Jul 22, 2025 · 4 min read

AI And The Law: What HR Can't Afford To Miss

Hold on a second. 

The question among HR leaders these days isn’t whether to adopt AI; it’s how. 


Because AI is coming for HR in a big way. It’s reshaping everything from hiring to performance reviews to people decisions. The thing is, the law hasn’t caught up to the technology. 

We assume a human is making the call on general HR and hiring decisions, from filtering resumes and analysing interviews to recommending terminations. That’s fine in principle, but in reality, few people are doing the legal legwork that should come with these decisions. 

And that’s a pretty dangerous game to play. Decisions around hiring, termination and promotions require a fair bit of human discretion and weighing up. So who’s accountable when bias creeps in? Who explains a decision if no one can trace how it was made?


HR managers need to push for smarter, safer ways to use AI at work. Because it’s how you protect your organisation's reputation and how you protect employees from being unfairly screened, tracked, monitored or excluded by systems they didn’t even know were running.

We’re not suggesting you hit pause on AI. But you need to know what the risks are and where they sit, and have a practical answer ready when someone asks, “Can we do this?” before it’s already done.

In this blog, we’ll explore the guardrails HR needs to implement to make sure AI is used responsibly across hiring, recruitment and general governance. 

 

Bias and fairness in recruitment

If you’re using AI in recruitment, promotion, or even performance decisions, you need to make sure you’re not discriminating against candidates. Even if the bias was buried deep in the data set, it becomes your problem.


Because AI is a tool that learns from whatever data it’s given. If you feed it 30 years of your organisation’s hiring decisions, it might skew towards a certain gender or age. So it’s important to check for that before you hand over the data set.

Courts in the US have ruled that it doesn’t matter if your algorithm was too complex to explain. If you can’t justify how a decision was made, you shouldn’t be making it that way.

So what can HR do? Audit your AI tools for bias.


Start by knowing what your tools are doing. Ask vendors for transparency, test the outputs, and flag problems before they hit the front page or the Federal Court. If a system screens out people with gaps in their work history, you need to know who that disproportionately affects. If your chatbot answers applicants differently based on how they write or what names they use, you need to fix it.
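
If you want a concrete way to “test the outputs”, one well-known check is the four-fifths rule from US adverse-impact analysis: compare selection rates across groups and flag any group whose rate falls below 80% of the highest. Here’s a minimal sketch in Python; the groups and results are made up, and the 0.8 threshold is a US convention, not an Australian legal standard:

```python
from collections import Counter

def adverse_impact_check(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times
    the highest group's rate (the US 'four-fifths rule')."""
    applied = Counter(group for group, _ in outcomes)
    selected = Counter(group for group, shortlisted in outcomes if shortlisted)
    rates = {g: selected[g] / applied[g] for g in applied}
    best = max(rates.values())
    return {g: (rate, rate / best < threshold) for g, rate in rates.items()}

# Hypothetical screening results: (group, was_shortlisted)
results = [
    ("gap_in_history", True), ("gap_in_history", False), ("gap_in_history", False),
    ("no_gap", True), ("no_gap", True), ("no_gap", False),
]

for group, (rate, flagged) in adverse_impact_check(results).items():
    print(f"{group}: selection rate {rate:.0%}" + (" <- review" if flagged else ""))
```

If candidates with gaps in their work history are shortlisted at half the rate of everyone else, that’s the conversation to have with your vendor before a regulator has it with you.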


Recruitment decisions affect people’s careers. You can’t outsource that to a tool you don’t understand. Currently, there isn’t a requirement to disclose how AI is being used in recruitment, but that could change! So while you can and should use AI in all parts of the process, just remember you can’t outsource accountability.

 

Surveillance and privacy

Tracking keystrokes, logging break times, mining Slack messages for “tone”: it all sounds like productivity gold. But if you’re not careful, it becomes a legal and psychological nightmare.


Workplace surveillance laws vary across Australia. If your team works remotely across states, what’s legal in one location could be illegal in another. And even if it is legal, you still need to disclose it properly, use the data fairly, and avoid doing anything that could be seen as oppressive.

But legality isn’t your only concern. Psychosocial safety is now a compliance obligation too. If your AI system is pressuring people to stay online, avoid breaks, or second-guess every word they type, you’re walking straight into a WorkSafe claim.

None of this means you have to ditch AI altogether. But it does mean you need to put guardrails in place. Be clear about what’s tracked, why, and how it’ll be used. 
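
One simple guardrail is to make that disclosure a gate the system actually enforces: keep a register of what’s tracked and refuse to collect anything that isn’t in it. A rough sketch of the idea in Python; the data types, register and `collect` function are all hypothetical, not any real product’s API:

```python
# Hypothetical disclosure register: what's tracked, why, and how it's used.
DISCLOSED_MONITORING = {
    "login_hours": {"why": "rostering", "used_for": "workload planning"},
    "badge_swipes": {"why": "site safety", "used_for": "evacuation lists"},
}

def collect(data_type: str, record: dict) -> None:
    """Refuse to store monitoring data staff haven't been told about."""
    if data_type not in DISCLOSED_MONITORING:
        raise PermissionError(
            f"'{data_type}' is not in the disclosure register; "
            "update the register and notify staff before collecting it."
        )
    # ...actual storage would go here...
    print(f"Stored {data_type} for {DISCLOSED_MONITORING[data_type]['used_for']}.")

collect("login_hours", {"user": "jo", "hours": 7.5})  # disclosed: fine

try:
    collect("keystrokes", {"user": "jo", "count": 9000})  # undisclosed: blocked
except PermissionError as err:
    print(err)
```

The point isn’t the code; it’s that disclosure becomes something the system enforces, not a paragraph buried in a policy no one reads.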


And if you’re tempted to use AI to catch people slacking off, ask yourself this: would you be comfortable defending that system in front of a judge, a journalist, or your staff?

 

Don’t wait for IT or legal

When AI goes wrong, everyone looks at HR. Not IT. Not marketing. Not even legal. 


That means HR needs to lead the governance charge. You don’t need to write the AI policy from scratch, but you do need a seat at the table and, ideally, a veto when something doesn’t stack up.

Start with your current tools. What decisions are they making? What data are they using? Who trained them, and with what? If you don’t know, find out. And if no one can tell you, hit pause.
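
Even a simple register forces those questions to get answered. A rough sketch of one in Python, with hypothetical fields and a made-up entry:

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One row in an AI-tool register: what it decides, on what data,
    and who answers for it. The fields are illustrative, not a standard."""
    name: str
    decisions_made: str     # e.g. "shortlists candidates"
    data_used: str          # e.g. "resumes, historical hiring outcomes"
    trained_by: str         # vendor or internal team
    owner: str              # the person accountable
    last_audited: str = ""  # empty until someone actually checks

register = [
    AIToolRecord("ResumeScreener", "shortlists candidates",
                 "resumes, historical hiring outcomes",
                 "Vendor X", "Head of Talent"),
]

# Anything unaudited or unowned is a "hit pause" candidate.
for tool in register:
    if not tool.last_audited or not tool.owner:
        print(f"Pause: {tool.name} needs an owner and a completed audit.")
```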


Make sure there’s an AI policy that covers procurement, implementation, review, and escalation. Assign ownership. Set up audits. And make it someone’s job to stay on top of the legislation, because it’s changing fast.

The takeaway? If HR doesn’t own this space, no one will. And that’s a risk no company can afford.

 

About us 

At Martian Logic, we help HR teams navigate complex compliance issues with simple, scalable solutions. Our all-in-one HRIS gives you full visibility and control across recruitment, onboarding, performance, and engagement without the guesswork. Whether you're rolling out AI tools, updating your privacy policies, or just trying to keep up with evolving legislation, we make it easier to stay ahead of the risks and focus on your people. Contact us today to see how we can support your HR strategy, safely and smartly.

Anwar Khalil
Founder and CEO at Martian Logic - Tech entrepreneur and outdoor lover