
Beware the Robots: #AI and the Changing Role of Technology in a Digital World

Our guests speaking to tech start-ups in Sydney

A friend applied for a job at a well-known multinational company.

She was excited to receive a response within an hour, then quickly bemused by the speed at which she had been rejected.

“It took me hours to write that application,” she said. “How could they make a decision so fast?”

I recently sold some used goods on eBay. The next day my Facebook feed suggested I might be interested in a “private” group for buyers and sellers of a branded product I had just got rid of.

A bit late, I thought. And creepy…

Those of us who are not tech-savvy often frame the risks associated with the use of AI as a problem of the future: after they take all of our jobs, super-intelligent robots will become self-aware and either enslave the entire human race while we remain blissfully ignorant or run around town with a sawn-off shotgun looking for John Connor.

In contrast, corporations present a utopian vision of the future where, free from assembling IKEA furniture and other mundane tasks, humans can realise their full potential and live the lives we have always dreamed of. Who is right?

Fortunately, the UK is leading global research into the ethical use of AI and has already created institutions to start addressing these challenges and opportunities. So we invited two experts to Australia to share UK insights and discuss the need for constructive and effective regulation of new and emerging technologies.

Dr Mariarosaria Taddeo is a philosopher who co-leads research in digital ethics with Professor Luciano Floridi at the Oxford Internet Institute. Roger Taylor is the chair of the UK’s Centre for Data Ethics and Innovation. The exam question was: should we be worried about the risks of AI?

During a packed programme across Melbourne, Canberra and Sydney, our experts met with government officials, industry representatives and academics. They reassured us that fears about robots taking over the world are pure science fiction: the machines are not going to turn against us – at least not without human input. However, technological disruption is not “coming”; it is already here, rapidly transforming our environment and influencing human behaviour in ways we don’t fully understand. This raises important ethical issues that we need to consider as a society. For example:

  • There is clearly an efficiency gain in using AI to filter thousands of job applications. However, if we delegate some or all stages of recruitment to AI, how can we be sure that the data used by the algorithm doesn’t perpetuate unwanted bias in the workforce?

  • Online advertising companies use micro-targeting to tailor our internet experience to us as individuals, based on our interests and preferences. These helpful nudges enable us to cut through the white noise, but at what point does persuasion become manipulation?

  • Furthermore, if an algorithm is found to discriminate against protected groups, or online targeting inadvertently directs a vulnerable person towards a terrorist recruitment organisation, what will be our response?

Dr Taddeo outlining the enormous opportunity to use AI for good

These are not problems for future generations but issues many countries are grappling with right now, particularly medium-sized democracies like the UK and Australia, which share concerns about the use of the internet to promote harmful content and about how micro-targeting is influencing our politics. This is why the UK is adopting a world-leading package of online safety measures to tackle Online Harms and ensure companies take better care of our citizens online.

On the flip side, AI presents a real opportunity to do enormous good for society, create high-quality jobs and drive economic growth – potentially contributing an estimated £232bn to the UK economy by 2030. However, to date we have been passive in accepting AI technologies into our lives without fully exploring how they can be used to make our lives better. Without a clear approach that informs industry partners, we may miss the chance to realise these gains.

Our guests couldn’t offer a silver bullet, but they left us with three issues to explore further:

  • Done well, regulation can be an enabler for innovation. It can provide clarity for companies about what is acceptable in society and help to create new industries.

  • Reducing the friction in data sharing will also support innovation but will need to be carefully balanced with public concerns around privacy and trust.

  • Regulation needs to be developed in a local context. International collaboration is important and global guidelines are helpful: both the UK and Australia have adopted the OECD principles on AI. However, it is ultimately the domestic legal, cultural and political setting that should inform the use of AI in society.

And their final thought? Now is the time to decide how we want to govern new technologies. Because when it comes to AI, humans are both the problem and the solution.

