Combining machine and human intelligence to tackle complex social issues

A new report examining how to tackle complex social issues by combining machine and human intelligence is published today by Nesta’s Centre for Collective Intelligence Design.

The Collective Intelligence Grants Programme report shares the findings from 15 collective intelligence experiments that were co-funded by Nesta, the Wellcome Trust, Omidyar Network and the Patrick J. McGovern Foundation.

Collective intelligence is the enhanced capacity that is created when people work together, often with the help of technology, to mobilise a wider range of information, ideas and insights.

The experiments were chosen as part of a £500,000 grant scheme focused on generating actionable insight into how to advance collective intelligence to solve social problems.

Each experiment falls under one of four themes:

  1. exploring AI-crowd interaction;
  2. making better collective decisions;
  3. understanding the dynamics of collective behaviour; and
  4. gathering better data.

The experiments demonstrate the breadth of potential applications for collective intelligence, with case studies such as:

  • Using serious games to train AI models for medical diagnosis

Undertaken by Spotlab, this experiment asked whether citizens playing online games could be as effective as physicians at training AI models to diagnose tropical diseases. It found that AI models trained on images annotated by adults and school children can achieve results similar to those trained on physician annotations, at around 93 per cent accuracy.

  • Examining how humans and AIs might work together to reduce cyber violence

Samurai Labs tested whether AI-based detection systems and humans could work together to reduce levels of online harassment on Reddit. It used an AI bot to detect cyber violence but relied on volunteers from the Reddit community to respond and discourage further abuse. While the approach did reduce harassment, the team also found that volunteers were less creative in their responses than anticipated and that exposure to the harassment took its toll on participants. It’s an example of a case where automated AI responses work as well as, or better than, those involving humans.

  • Experimenting with robot swarms to decrease polarisation in group discussions around topics like climate change

The University of Bristol team built a swarm of 100 robots and tested whether the robots could help a crowd reach an inclusive and informed consensus by communicating the diversity of opinions in the group. Participants entered their responses to a question into a robot, which then displayed them for other participants to see before they answered the same question. The experiment showed that robot swarms can be used to engage people on challenging topics, diffuse and influence opinions, serve as a prompt to launch conversations, and empower introverts to share their opinions.

  • Sustaining behaviour change through collective action on air pollution

This experiment, by Umbrellium in collaboration with Loop Labs and Tower Hamlets Council, tested whether collective environmental assessment and collective action would enable people to sustain behaviour change for actions that are known to reduce air pollution, even though the direct individual effects of these actions might not be immediately noticeable. It found that communication and collaboration among citizens led to the sustained adoption of these actions, with the experimental groups saving, on average, four times more carbon dioxide emissions than the control groups.

Kathy Peach, Co-Director at the Centre for Collective Intelligence Design, said: 

“At the Centre, we believe that to tackle problems we need to mobilise all the resources of intelligence available to us, which is why we work to understand how to best combine the complementary strengths of machine intelligence and collective human intelligence. These experiments are starting to show how we can change the way we work together with machines.

“We shouldn’t pit humans against machines, but rather design AI that improves our ability to cooperate with technology and each other, extending human intelligence rather than attempting to replicate it. This work also highlights how we can involve diverse groups of people to help create more representative AI, and how we can leverage AI to overcome human biases.

“The Collective Intelligence Grants Programme report is a vital example of the research we need to undertake in order to better understand how best to combine these forms of intelligence, and we need to ensure that investment is targeted at the right places to continue innovation in this field.”

The report also recommends priorities for future research and experimentation within the field of collective intelligence. These are:

  1. The need to establish better partnerships to ensure collective intelligence is done well
  2. The need for more research into how to effectively recruit participants and sustain engagement in experiments
  3. The need to target investment to fund innovation in tools for collective decision-making
  4. The need for practical experience to understand how best to integrate collective intelligence tools into established workflows
  5. The need for further exploration to develop cooperative human-machine systems
  6. The need for more research to design and test systems that enable positive collective behaviours.

Haidee Bell, Strategic Design and Innovation Lead, and Zaichen Mallace-Lu, Strategic Design and Innovation Manager at the Wellcome Trust, said:

“At Wellcome, we recognise the need for collective knowledge to understand and tackle the urgent global health challenges we target with our work: mental health, infectious disease and the health impacts of climate change. 

“We invested in the programme to support and learn about collective intelligence in health and through research. We’re hopeful that this can help point to the value of the collective in enhancing science and allow us to learn about human-level effects, such as how collective intelligence prompts people to participate and to act.”

Claudia Juech, Vice President of Data and Society at the Patrick J. McGovern Foundation, said:

“The portfolio of projects shows that community involvement adds to every step of the data lifecycle (collection, analysis and use), across geographies, and among fields as varied as online behaviour and farming. Supporting Nesta’s Collective Intelligence Grants programme is an opportunity for philanthropies to join forces to seed more discoveries across a broad spectrum of use cases for more changemakers to build upon.”

Sonny Bardhan, Head of Strategy at Omidyar Network, said:

“Omidyar Network is a social change venture that reimagines critical systems, and the ideas that govern them, to build more inclusive and equitable societies. A key aspiration is to build a global technological ecosystem that reaches and works for everyone: one that balances innovation with responsibility. So we were pleased to be able to invest in Nesta’s Collective Intelligence Grants programme to support the incubation of an emergent area of technology that seeks to combine human and machine intelligence to address social challenges.”

Case studies included in the report:

Theme one: exploring AI-crowd interaction (pg9)

  • Making more accurate open maps with the help of an AI assistant (pg10)

By the Humanitarian OpenStreetMapping Team in collaboration with Netherlands Red Cross.

  • Harnessing the wisdom of crowds for more accurate medical diagnosis (pg13)

By the Istituto di Scienze e Tecnologie della Cognizione (ISTC) at the Italian National Research Council (CNR) in collaboration with Max Planck Institute for Human Development and The Human Diagnosis Project (Human Dx).

  • Encouraging creativity with a search engine that delivers unexpected results (pg16)

By neu (Augmented Thinking) in collaboration with City, University of London.

  • Volunteers and AI working together to counter cyber violence (pg19)

By Samurai Labs.

  • Using serious games to train AI models for medical diagnosis (pg22)

By Spotlab.

Theme two: making better collective decisions (pg25)

  • Hacking the wisdom of crowds to improve collective forecasting (pg26)

By the Centre for Cognition, Computation and Modelling (Birkbeck, University of London).

  • How swarms of robots can improve group debates (pg29)

By the University of Bristol.

Theme three: understanding the dynamics of collective behaviour (pg32)

  • Inspiring better health through peer-to-peer learning (pg33)

By the Istituto di Elettronica e di Ingegneria dell’Informazione e delle Telecomunicazioni (IEIIT) at the Italian National Research Council (CNR).

  • How to help groups share scarce resources more fairly (pg36)

By the University of Nottingham, RMIT University and University of Tasmania.

  • Sustaining behaviour change through collective action on air pollution (pg39)

By Umbrellium in collaboration with Loop Labs and Tower Hamlets Council.

Theme four: gathering better data (pg42)

  • Bringing the public into AI ethics debates (pg43)

By Dovetail Labs.

  • Collecting hyper-local sanitation data to predict cholera outbreaks (pg46)

By Kenya Flying Labs in collaboration with Kenya Red Cross Society.

  • Matchmaking patients to power their own medical research

By Just One Giant Lab (JOGL) and Open Humans Foundation. 

  • Crowdsourcing weather and pest alerts in the Andes

By Swisscontact in collaboration with Banco de Desarrollo Productivo and Latin American Centre for Rural Development.

  • Harnessing community feedback and AI to validate tools in humanitarian operations (pg55)

By the International Organization for Migration.

