
Tutorial 2: Ethics#

Week 2, Day 5: Mysteries

By Neuromatch Academy

Content creators: Megan Peters, Joshua Shepherd, Jana Schaich Borg

Content reviewers: Samuele Bolotta, Lily Chamakura, RyeongKyung Yoon, Yizhou Chen, Ruiyi Zhang

Production editors: Konstantine Tsafatinos, Ella Batty, Spiros Chavlis, Samuele Bolotta, Hlib Solodzhuk


Tutorial Objectives#

Estimated timing of tutorial: 30-50 minutes (depends on chosen trajectory; see below)

By the end of this tutorial, participants will be able to:

  1. Understand the relationship between consciousness, intelligence, and moral status.

  2. Discuss responsible, moral, ethical, and safe artificial intelligence.


Setup#

Install and import feedback gadget#

# @title Install and import feedback gadget

!pip install vibecheck --quiet

from vibecheck import DatatopsContentReviewContainer

def content_review(notebook_section: str):
    # Render a feedback widget tied to the given notebook section.
    return DatatopsContentReviewContainer(
        "",  # No text prompt
        notebook_section,
        {
            "url": "https://pmyvdlilci.execute-api.us-east-1.amazonaws.com/klab",
            "name": "neuromatch_neuroai",
            "user_key": "wb2cxze8",
        },
    ).render()

feedback_prefix = "W2D5_T2"

Section 1: Ethics Intro & Moral Status#

Video 1: Ethics Lecture 1#

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_Video_1")

Discussion activity: moral status#

There are many possible grounds for ascribing moral status to a system, and which reasons you find compelling depends on your view of what moral status is based on.

Discuss! What is your view (or your intuition) about what is important for moral status – consciousness, affective consciousness, cognitive sophistication, etc. – and what would this view imply about how we approach design of and interaction with different forms of AI?

Both rooms discuss the same topic.


Section 2: Ethical AI#

Before starting the next sections, see how much time you have left in today’s schedule.

If you have at least 30 minutes left, work through both of the following sections together as one group. If you have less than 30 minutes left, split into two groups, work through the next two sections in parallel, then come back together and discuss.

Ethics roadmap.

Video 2: Ethics Lecture 2#

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_Video_2")

Discussion activity: Can AI be safe? Can it respect privacy? Can AI (or its creators/users) be responsible?#

Discuss!

  • Room 1: How can we maximize AI safety?

  • Room 2: How can we protect our privacy from AI threats?

  • Room 3: How can we decide who is responsible for AI behavior?


Section 3: Fair AI#

Video 3: Ethics Lecture 3#

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_Video_3")

Discussion activity: Can AI be fair? Can it exhibit human-like morality?#

Discuss!

  • Room 1: What can we do to help AI be more fair? Better training data? Interpretable/explainable AI? Alignment?

  • Room 2: What else can we do to help AI be more moral? Is the top-down, bottom-up, or hybrid approach more promising? Why?


Summary#

Video 4: Ethics Lecture 4#

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_Video_4")