Tag Archives: ethics

Licensing AI Engineers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/03/licensing-ai-engineers.html

The debate over professionalizing software engineers is decades old. (The basic idea is that, like lawyers and architects, there should be some professional licensing requirement for software engineers.) Here’s a law journal article recommending the same idea for AI engineers.

This Article proposes another way: professionalizing AI engineering. Require AI engineers to obtain licenses to build commercial AI products, push them to collaborate on scientifically-supported, domain-specific technical standards, and charge them with policing themselves. This Article’s proposal addresses AI harms at their inception, influencing the very engineering decisions that give rise to them in the first place. By wresting control over information and system design away from companies and handing it to AI engineers, professionalization engenders trustworthy AI by design. Beyond recommending the specific policy solution of professionalization, this Article seeks to shift the discourse on AI away from an emphasis on light-touch, ex post solutions that address already-created products to a greater focus on ex ante controls that precede AI development. We’ve used this playbook before in fields requiring a high level of expertise where a duty to the public welfare must trump business motivations. What if, like doctors, AI engineers also vowed to do no harm?

I have mixed feelings about the idea. I can see the appeal, but it never seemed feasible. I’m not sure it’s feasible today.

Ethical Problems in Computer Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/06/ethical-problems-in-computer-security.html

Tadayoshi Kohno, Yasemin Acar, and Wulf Loh wrote an excellent paper on ethical thinking within the computer security community: “Ethical Frameworks and Computer Security Trolley Problems: Foundations for Conversation”:

Abstract: The computer security research community regularly tackles ethical questions. The field of ethics / moral philosophy has for centuries considered what it means to be “morally good” or at least “morally allowed / acceptable.” Among philosophy’s contributions are (1) frameworks for evaluating the morality of actions—including the well-established consequentialist and deontological frameworks—and (2) scenarios (like trolley problems) featuring moral dilemmas that can facilitate discussion about and intellectual inquiry into different perspectives on moral reasoning and decision-making. In a classic trolley problem, consequentialist and deontological analyses may render different opinions. In this research, we explicitly make and explore connections between moral questions in computer security research and ethics / moral philosophy through the creation and analysis of trolley problem-like computer security-themed moral dilemmas and, in doing so, we seek to contribute to conversations among security researchers about the morality of security research-related decisions. We explicitly do not seek to define what is morally right or wrong, nor do we argue for one framework over another. Indeed, the consequentialist and deontological frameworks that we center, in addition to coming to different conclusions for our scenarios, have significant limitations. Instead, by offering our scenarios and by comparing two different approaches to ethics, we strive to contribute to how the computer security research field considers and converses about ethical questions, especially when there are different perspectives on what is morally right or acceptable. Our vision is for this work to be broadly useful to the computer security community, including to researchers as they embark on (or choose not to embark on), conduct, and write about their research, to program committees as they evaluate submissions, and to educators as they teach about computer security and ethics.

The paper will be presented at USENIX Security.

Data ethics for computing education through ballet and biometrics

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/data-ethics-computing-education-ballet-biometrics-research-seminar/

For our seminar series on cross-disciplinary computing, it was a delight to host Genevieve Smith-Nunes this September. Her research work involving ballet and augmented reality was a perfect fit for our theme.

Genevieve Smith-Nunes

Genevieve has a background in classical ballet and was also a computing teacher for several years before starting Ready Salted Code, an educational initiative around data-driven dance. She is now coming to the end of her doctoral studies at the University of Cambridge, in which she focuses on raising awareness of data ethics using ballet and brainwave data as narrative tools, working with student Computing teachers.

Why dance and computing?

You may be surprised that there are links between dance, particularly ballet, and computing. Genevieve explained that classical ballet follows strict, repetitive routines and rule-based choreography, which are algorithmic in nature. Her work on data-driven dance began around the time the new Computing curriculum was announced in England, when she noticed the lack of gender balance in her computing classroom. As an expert in both ballet and computing, she was driven by a desire to share the more creative elements of computing with her learners.

Two of Genevieve’s data-driven ballet dances: [arra]stre and [PAIN]byte

Genevieve has been working with a technologist and a choreographer for several years to develop ballets that generate biometric data and include visualisation of such data — hence her term ‘data-driven dance’. This has led to her developing a second focus in her PhD work on how Computing students can discuss questions of ethics based on the kind of biometric and brainwave data that Genevieve is collecting in her research. Students need to learn about the ethical issues surrounding data as part of their Computing studies, and Genevieve has been working with student teachers to explore ways in which her research can be used to give examples of data ethics issues in the Computing curriculum.

Collecting data during dances

Throughout her talk, Genevieve described several examples of dances she had created. One example was [arra]stre, a project that involved a live performance of a dance, plus a series of workshops breaking down the computer science theory behind the performance, including data visualisation, wearable technology, and images triggered by the dancers’ data.

A presentation slide describing technologies necessary for motion capture of ballet.

Much of Genevieve’s seminar was focused on the technologies used to capture movement data from the dancers and the challenges this involves. For example, some existing biometric tools don’t capture foot movement — which is crucial in dance — and also can’t capture movements when dancers are in the air. For some of Genevieve’s projects, dancers also wear headsets that allow collection of brainwave data.

A presentation slide describing technologies necessary for turning motion capture data into 3D models.

Due to interruptions to her research design caused by the COVID-19 pandemic, much of Genevieve’s PhD research took place online via video calls. New tools had to be created to capture dance performances in an online setting. Her research uses webcams and mobile phones to record the biometric data of dancers at 60 frames per second. A number of processes are then followed to create a digital representation of the dance: isolating the dancer in the raw video; tracking the skeleton data; applying pose estimation machine learning algorithms; and using additional software to map the joints to the correct position and rotation.
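As an illustration of the skeleton-tracking step in a pipeline like this, the short Python sketch below reads a recorded video and runs an off-the-shelf pose estimation model on each frame. This is not Genevieve’s actual toolchain: the file name is hypothetical, and OpenCV plus MediaPipe Pose is simply one common, freely available combination for this kind of task.

```python
# A minimal sketch of the skeleton-tracking step described above.
# Assumptions: the video file name is hypothetical, and MediaPipe Pose is
# used purely as an example of a pose estimation ML model.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def extract_skeleton(video_path: str):
    """Return a list of per-frame pose landmarks (x, y, z, visibility)."""
    frames = []
    capture = cv2.VideoCapture(video_path)
    with mp_pose.Pose(static_image_mode=False) as pose:
        while True:
            ok, frame_bgr = capture.read()
            if not ok:
                break  # end of video
            # MediaPipe expects RGB images; OpenCV reads frames as BGR.
            frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
            result = pose.process(frame_rgb)
            if result.pose_landmarks:
                frames.append(
                    [(lm.x, lm.y, lm.z, lm.visibility)
                     for lm in result.pose_landmarks.landmark]
                )
            else:
                frames.append(None)  # dancer not detected in this frame
    capture.release()
    return frames

skeleton = extract_skeleton("dance_recording.mp4")  # hypothetical file
print(f"Tracked {sum(f is not None for f in skeleton)} of {len(skeleton)} frames")
```

The per-frame landmark coordinates produced this way could then feed the later stages of such a pipeline, for example mapping joints onto a 3D model.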

A presentation slide describing technologies necessary for turning a 3D computer model into an augmented reality object.

Are your brainwaves personal data?

It’s clear that Genevieve is collecting a lot of data from her research participants, particularly the dancers. Her projects involve collecting both biometric data and brainwave data. Ethical issues tied to brainwave data fall within the field of neuroethics, which covers the ethical questions raised by our increasing understanding of the biology of the human brain.

A graph of brainwaves placed next to ethical questions related to brainwave data.

Teaching learners to be mindful about how to work with personal data is at the core of the work that Genevieve is doing now. She mentioned that there are a number of ethics frameworks that can be used in this area, and highlighted the UK government’s Data Ethics Framework as being particularly straightforward with its three guiding principles of transparency, accountability, and fairness. Frameworks such as this can help to guide a classroom discussion around the security of the data, and whether the data can be used in discriminatory ways.

Brainwave data visualisation using the Emotiv software.

Data ethics provides lots of material for discussion in Computing classrooms. To exemplify this, Genevieve recorded her own brainwaves during dance, research, and rest activities, and then shared the data during workshops with student computing teachers. In our seminar she showed two visualisations of her own brainwave data (see the images above) and discussed how the student teachers in her workshops had felt that one was more “personal” than the other. The same brainwave data can be presented as a spreadsheet, a moving graph, or an image. The student teachers felt that the graph seemed more medical, and more like permanent personal data, than the visualisation, but that the actual raw spreadsheet data felt the most personal and intrusive.
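One simple way to recreate this comparison in a classroom would be to load a recording and show the same numbers first as a raw table and then as a line graph. The sketch below, using pandas and matplotlib, assumes a hypothetical CSV file of band-power readings over time; the file name and column names are invented for the example and are not taken from Genevieve’s dataset.

```python
# Illustrative sketch: the same brainwave recording viewed two ways.
# "brainwaves.csv" and its column names are hypothetical examples.
import pandas as pd
import matplotlib.pyplot as plt

# Load a (hypothetical) recording: one row per time step,
# one column per EEG frequency band.
data = pd.read_csv("brainwaves.csv")  # columns: time_s, alpha, beta, theta

# View 1: the raw spreadsheet-style table.
print(data.head(10))

# View 2: exactly the same numbers as a line graph over time.
data.plot(x="time_s", y=["alpha", "beta", "theta"])
plt.xlabel("Time (s)")
plt.ylabel("Band power")
plt.title("The same data, presented as a graph")
plt.show()
```

Showing both views side by side makes it easy to ask learners which representation feels more “personal”, echoing the discussion from Genevieve’s workshops.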

Watch the recording of Genevieve’s seminar to see her full talk:

You can also access her slides and the links she shared in her talk.

More to explore

There are a variety of online tools you can use to explore augmented reality: for example, try out PoseNet with the camera of your device.

Genevieve’s seminar used the title ME++, which refers to the data self and the human self: both are important and of equal value. Genevieve’s use of this term is inspired by William J. Mitchell’s book Me++: The Cyborg Self and the Networked City. Within his framing, the ‘I’ of the digital world is more than the ‘I’ of the physical world, highlighting the posthuman blurring of the boundary between the human and the non-human.

Genevieve’s work is also inspired by Luciano Floridi’s philosophical work, and his book The Ethics of Information might be something you want to investigate further. You can also read ME++ Data Ethics of Biometrics Through Ballet and AR, a paper by Genevieve about her doctoral work.

Join our next seminar

In our final two seminars for this year we are exploring further aspects of cross-disciplinary computing. Just this week, Conrad Wolfram of Wolfram Technologies joined us to present his ideas on maths and a core computational curriculum. We will share a summary and recording of his talk soon.

On 2 November, Tracy Gardner and Rebecca Franks from our team will close out this series by presenting work we have been doing on computing education in non-formal settings. Sign up now to join us for this session:

We will shortly be announcing the theme of a brand-new series of seminars starting in January 2023.  


What’s a kangaroo?! AI ethics lessons for and from the younger generation

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/ai-ethics-lessons-education-children-research/

Between September 2021 and March 2022, we’re partnering with The Alan Turing Institute to host speakers from the UK, Finland, Germany, and the USA presenting a series of free research seminars about AI and data science education for young people. These rapidly developing technologies have a huge and growing impact on our lives, so it’s important for young people to understand them both from a technical and a societal perspective, and for educators to learn how to best support them to gain this understanding.

Mhairi Aitken.

In our first seminar we were beyond delighted to hear from Dr Mhairi Aitken, Ethics Fellow at The Alan Turing Institute. Mhairi is a sociologist whose research examines social and ethical dimensions of digital innovation, particularly relating to uses of data and AI. You can catch up on her full presentation and the Q&A with her in the video below.

Why we need AI ethics

The increased use of AI in society and industry is bringing some amazing benefits. In healthcare for example, AI can facilitate early diagnosis of life-threatening conditions and provide more accurate surgery through robotics. AI technology is also already being used in housing, financial services, social services, retail, and marketing. Concerns have been raised about the ethical implications of some aspects of these technologies, and Mhairi gave examples of a number of controversies to introduce us to the topic.

“Ethics considers not what we can do but rather what we should do — and what we should not do.”

Mhairi Aitken

One such controversy in England took place during the coronavirus pandemic, when an AI system was used to make decisions about school grades awarded to students. The system’s algorithm drew on grades awarded in previous years to other students at the same school to upgrade or downgrade grades given by teachers; this was seen as deeply unfair and raised public consciousness of the real-life impact that AI decision-making systems can have.

An AI system was used in England last year to make decisions about school grades awarded to students — this was seen as deeply unfair.

Another high-profile controversy was caused by biased machine learning-based facial recognition systems, as explored in Shalini Kantayya’s documentary Coded Bias. Such facial recognition systems have been shown to be much better at recognising a white male face than a black female one, demonstrating the inequitable impact of the technology.

What should AI be used for?

There is a clear need to consider both the positive and negative impacts of AI in society. Mhairi stressed that using AI effectively and ethically is not just about mitigating negative impacts but also about maximising benefits. She told us that bringing ethics into the discussion means that we start to move on from what AI applications can do to what they should and should not do. To outline how ethics can be applied to AI, Mhairi first outlined four key ethical principles:

  • Beneficence (do good)
  • Nonmaleficence (do no harm)
  • Autonomy
  • Justice

Mhairi shared a number of concrete questions that ethics raises about new technologies, including AI:

  • How do we ensure the benefits of new technologies are experienced equitably across society?
  • Do AI systems lead to discriminatory practices and outcomes?
  • Do new forms of data collection and monitoring threaten individuals’ privacy?
  • Do new forms of monitoring lead to a Big Brother society?
  • To what extent are individuals in control of the ways they interact with AI technologies or how these technologies impact their lives?
  • How can we protect against unjust outcomes, ensuring AI technologies do not exacerbate existing inequalities or reinforce prejudices?
  • How do we ensure diverse perspectives and interests are reflected in the design, development, and deployment of AI systems? 

Who gets to inform AI systems? The kangaroo metaphor

To mitigate negative impacts and maximise benefits of an AI system in practice, it’s crucial to consider the context in which the system is developed and used. Mhairi illustrated this point using the story of an autonomous vehicle, a self-driving car, developed in Sweden in 2017. It had been thoroughly safety-tested in that country, including tests of its ability to recognise wild animals that may cross its path, for example, elk and moose. However, when the car was used in Australia, it was not able to recognise kangaroos that hopped into the road! Because the system had not been tested with kangaroos during its development, it did not know what they were. As a result, the self-driving car’s safety and reliability significantly decreased when it was taken out of the context in which it had been developed, jeopardising people and kangaroos.

Mitigating negative impacts and maximising benefits of AI systems requires actively involving the perspectives of groups that may be affected by the system — ‘kangaroos’ in Mhairi’s metaphor.

Mhairi used the kangaroo example as a metaphor to illustrate ethical issues around AI: the creators of an AI system make certain assumptions about what an AI system needs to know and how it needs to operate; these assumptions always reflect the positions, perspectives, and biases of the people and organisations that develop and train the system. Therefore, AI creators need to include metaphorical ‘kangaroos’ in the design and development of an AI system to ensure that their perspectives inform the system. Mhairi highlighted children as an important group of ‘kangaroos’. 

AI in children’s lives

AI may have far-reaching consequences in children’s lives, where it’s being used for decision-making around access to resources and support. Mhairi explained the impact that AI systems are already having on young people’s lives through these systems’ deployment in children’s education, in apps that children use, and in children’s lives as consumers.

AI systems are already having an impact on children’s lives.

Children can be taught not only that AI impacts their lives, but also that it can get things wrong and that it reflects human interests and biases. However, Mhairi was keen to emphasise that we need to find out what children know and want to know before we make assumptions about what they should be taught. Moreover, engaging children in discussions about AI is not only about them learning about AI; it’s also about ethical practice: what can people making decisions about AI learn from children by listening to their views and perspectives?

AI research that listens to children

UNICEF, the United Nations Children’s Fund, has expressed concerns about the impact of new AI technologies on children and young people, and has developed the UNICEF Requirements for Child-Centred AI.

UNICEF Requirements for Child-Centred AI:

  • Support children's development and well-being.
  • Ensure inclusion of and for children.
  • Prioritise fairness and non-discrimination for children.
  • Protect children's data and privacy.
  • Ensure safety for children.
  • Provide transparency, explainability, and accountability for children.
  • Empower governments and businesses with knowledge of AI and children's rights.
  • Prepare children for present and future developments in AI.
  • Create an enabling environment for child-centred AI.
  • Engage in digital cooperation.
UNICEF’s requirements for child-centred AI, as presented by Mhairi.

Together with UNICEF, Mhairi and her colleagues working on the Ethics Theme in the Public Policy Programme at The Alan Turing Institute are engaged in new research to pilot UNICEF’s Child-Centred Requirements for AI, and to examine how these impact public sector uses of AI. A key aspect of this research is to hear from children themselves and to develop approaches to engage children to inform future ethical practices relating to AI in the public sector. The researchers hope to find out how we can best engage children and ensure that their voices are at the heart of the discussion about AI and ethics.

We all learned a tremendous amount from Mhairi and her work on this important topic. After her presentation, we had a lively discussion in which many of the participants relayed conversations they had had about AI ethics and shared their own concerns, experiences, and links to resources. The Q&A with Mhairi is included in the video recording.

What we love about our research seminars is that everyone attending can share their thoughts, and as a result we learn so much from attendees as well as from our speakers!

It’s impossible to cover more than a tiny fraction of the seminar here, so I do urge you to take the time to watch the seminar recording. You can also catch up on our previous seminars through our blogs and videos.

Join our next seminar

We have six more seminars in our free series on AI, machine learning, and data science education, taking place every first Tuesday of the month. At our next seminar on Tuesday 5 October at 17:00–18:30 BST / 12:00–13:30 EDT / 9:00–10:30 PDT / 18:00–19:30 CEST, we will welcome Professor Carsten Schulte, Yannik Fleischer, and Lukas Höper from the University of Paderborn, Germany, who will be presenting on the topic of teaching AI and machine learning (ML) from a data-centric perspective (find out more here). Their talk will raise the questions of whether and how AI and ML should be taught differently from other themes in the computer science curriculum at school.

Sign up now and we’ll send you the link to join on the day of the seminar — don’t forget to put the date in your diary.

I look forward to meeting you there!

In the meantime, we’re offering a brand-new, free online course that introduces machine learning with a practical focus — ideal for educators and anyone interested in exploring AI technology for the first time.
