From education to employment

A call for ethics in the curriculum

Silvia Lanza Castelli

The ethical implications of #AI 

Silvia (co-author) is far more sophisticated in her use of LinkedIn than I am.

She ran a quick survey of her network: “Do you consider ethical aspects important in the development of artificial intelligence?”

Given that Sil’s contacts are likely to be a cut above the run of the mill, we were disturbed to find that 20% neither agreed nor disagreed. A further survey was opened by 83 contacts, but only five responded, indicating a general level of apathy.

Given the power of AI to penetrate, and do serious damage to, our lives, we will argue that ethical considerations are paramount, and that we must, in our colleges and universities, seek to raise awareness of the ethical implications of AI.

I have been working with colleagues in New York and Saudi Arabia on a Policy Brief for the G20 [1]. It concerns cybersecurity.

Our argument is that the pandemic and the subsequent lockdown have intensified processes that were already in train, speeding up the penetration of AI into our lives and thereby increasing our vulnerability to cybercrime.

One of the biggest unexpected consequences of the COVID-19 pandemic – and the large number of business and school closures associated with efforts to curb the virus – has been the speeding up of technological transformation.

We call for urgent, global action.

But there is also a longer-term issue. As new technologies emerge, the capacity for exploitation is enhanced, until society, at some point, is able to develop a new ethical framework. This is urgent.

In the first industrial revolution, steam engines enabled factories, which then exploited vulnerable populations, including children. In the UK it took 142 years, from the Cotton Mills and Factories Act 1819 to the consolidating Factories Act 1961, to achieve a settlement. In the intervening years there was a series of Acts of Parliament as society struggled to control exploitation by the unscrupulous.

We face a similar struggle today, though hopefully on a shorter timescale. New abuses of AI emerge regularly. Of particular concern at the moment is the creation of malware: malicious viruses and worms that can be used to spy, eavesdrop or hold a victim to ransom.

Our data is being harvested all the time (of no concern to Trump until China joined in). Your phone is reporting what you say, and Alexa shares your gossip and indiscretions with her masters in the ether.

We are inundated with ‘clickbait’ that takes us to fake news and murky spaces. These are the new streets that need policing. Yet Sil’s research indicates that very few systems engineers consider the ethical implications of their work.

The recent Twitter hack [2] shows how vulnerable we have become.

A G20 colleague, Muhammad Khurram Khan, noted a link to the increase in working from home [3]:

“Among many other possibilities in the Twitter hack, the work-from-home policy could be one of the reasons in the security breach as it is easier for hackers to exploit vulnerabilities and launch social engineering attack in less-controlled environments.

“According to a survey of 6,000 employees conducted by the cybersecurity company Kaspersky, 73% of employees working remotely have not yet received any cybersecurity awareness guidance or training from their employers.”

In our G20 Policy Brief we point out that:

“Previous patterns of proximity are unlikely to return and virtual communications will increasingly become a permanent fixture of life. But as reliance on technology intensifies, so too does the opportunity for threat actors to carry out cybercrimes or distribute disinformation to the detriment of personal or civic life…

“In April 2020, Google observed around 18 million daily malware and phishing emails related to COVID-19, which was in addition to more than 240 million COVID-themed daily spam messages.”

Falling victim to a cyber scam has very real physical, financial and emotional consequences that can be particularly devastating for ‘digitally vulnerable’ populations.

Areas we feel require urgent attention are:

  • The security of online data
  • Privacy of the individual in cyberspace
  • Guidelines and protocols on safety issues
  • Encouraging the development of low-cost security

Factors contributing to the increase in cyber-attack incidents during the pandemic include displacement, opportunism, proliferation, susceptibility and hostility. We appear not to have a moral base from which to confront these.

In 2019 the European Commission produced its Ethics Guidelines for Trustworthy AI [4], a worthy and thorough publication.

It identifies three components of trustworthy AI, which should be met throughout a system’s entire lifecycle:

  1. it should be lawful;
  2. it should be ethical; and
  3. it should be robust.

Of necessity this involves human agency and oversight. Given the complexity of certain recent systems, this is something we are in danger of losing sight of.

This is a matter of great concern. The Guidelines identify four ethical principles:

  1. Respect for human autonomy
  2. Prevention of harm
  3. Fairness
  4. Explicability

A quick check of university curricula concerning AI shows that they focus on technical and business issues. Lamentably few, as the Said Business School commendably does, include a unit on social and ethical issues. Most remain resolutely silent on the big ethical questions: what is a good action? What is justice?

Producing good quality AI requires a high degree of technical competence, skills beyond the reach of most of us. Like the child labourers of the early Victorian period, we find it hard to defend ourselves against exploitation. It is therefore troubling that a whole network of systems engineers appears not to engage with ethical concerns.

Presumably ethics was absent from their training. It is vital that this changes: those who devise the systems that will drive the Fourth Industrial Revolution must do so from a perspective grounded in those human qualities that cannot be automated. We should campaign to restore philosophy to the curriculum of the new normal, and to include ethical considerations in the design of all systems.

Silvia Lanza Castelli and Paul Grainger, Co-director of the Centre for Education and Work, and Head of Innovation and Enterprise for the Department of Education, Practice and Society, UCL

Silvia Lanza Castelli holds a Master’s in Strategic Management in Software Engineering from the European University Miguel de Cervantes in Valladolid, Spain, and a degree in Information Systems Engineering from the National Technological University in Córdoba, Argentina. She is currently a teacher and director of a research project on knowledge management and Agile methodologies in engineering education. She has participated in an international collaborative project with UCL on topics related to skills for the future of work and education, has coordinated computer technical support for a social area of government, and has participated in the management of technological projects.

References:

[1] Heightening Cybersecurity to Promote Safety and Fairness for Citizens in the Post-COVID-19 Digital World, Muhammad Khurram Khan (Global Foundation for Cyber Studies and Research, USA), Paul Grainger (UCL, UK), Bhushan Sethi (PwC US) and Stefanie Goldberg (PwC US)

[2] https://www.bbc.co.uk/news/technology-53439585, 17 July 2020

[3] https://www.saudigazette.com.sa/article/595614/BUSINESS/Lessons-learned-from-the-historic-Twitter-hack

[4] https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

