
Intro#

Install and import feedback gadget#

# @title Install and import feedback gadget

!pip install vibecheck datatops --quiet

from vibecheck import DatatopsContentReviewContainer
def content_review(notebook_section: str):
    return DatatopsContentReviewContainer(
        "",  # No text prompt
        notebook_section,
        {
            "url": "https://pmyvdlilci.execute-api.us-east-1.amazonaws.com/klab",
            "name": "neuromatch_neuroai",
            "user_key": "wb2cxze8",
        },
    ).render()

feedback_prefix = "W2D1_Intro"

Prerequisites#

To get the most out of today’s tutorials, it helps to have experience building simple neural network models in PyTorch. We will also use some concepts from linear algebra, so familiarity with that domain will come in handy. Finally, we will study a specific reinforcement learning (RL) algorithm, the Actor-Critic model, so some background in RL is useful. We touched on RL in W1D2 (“Comparing Tasks”), specifically in Tutorial 3 (“Reinforcement Learning Across Temporal Scales”); it is worth revisiting that tutorial and the two videos on Meta-RL in that notebook. A small sketch of the kind of PyTorch background we assume is shown below.
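As a quick reminder of the assumed background, here is a minimal sketch (illustrative only, not part of today’s tutorial code) of a small PyTorch network with separate actor and critic heads, the basic structure used by Actor-Critic methods; the module name, sizes, and dummy inputs are arbitrary choices for illustration.

# Minimal sketch of assumed PyTorch background (not used in the tutorials):
# a small network with an actor head (action logits) and a critic head
# (state-value estimate), as in Actor-Critic methods.
import torch
import torch.nn as nn

class TinyActorCritic(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.actor = nn.Linear(hidden, n_actions)   # policy logits
        self.critic = nn.Linear(hidden, 1)          # state value V(s)

    def forward(self, obs: torch.Tensor):
        h = self.body(obs)
        return self.actor(h), self.critic(h)

# Example forward pass on a dummy observation
model = TinyActorCritic(obs_dim=4, n_actions=2)
logits, value = model(torch.zeros(1, 4))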

Today is more technical and theory-driven than previous days, but it will give you the skills and perspective to work with these ideas in NeuroAI. As you go, keep in mind how this material connects to generalization, the overarching theme of the course: many points today show how learning dynamics arrive at solutions that generalize well!

Video#

Intro Video#

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_intro_video")

Slides#

Intro Video Slides#