
Intro#

Welcome to our day on Microcircuits. What are ‘microcircuits’, you might ask? They are computations that live at a small scale: core elementary operations repeated over and over within bigger networks. We’re going to investigate some of these interesting ideas, and by the end of the day you will be able to relate much of what you have likely already heard of, but in a totally different way. If you have a background in AI or machine learning, you will probably have encountered sparsity, attention, and normalization. Today, we’re going to change the way we view these elementary operations (microcircuits) by showing how they are linked to phenomena observed in the brain.
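
To make these concrete, here is a minimal sketch of the three operations named above, using NumPy with toy values chosen purely for illustration: a ReLU that induces sparsity, a softmax that produces attention-style weights, and a standardizing normalization.

import numpy as np

x = np.array([-1.0, 0.5, 2.0, -0.3])  # toy activations, chosen for illustration

# Sparsity: a ReLU zeroes out negative activations, leaving a sparse vector
sparse = np.maximum(x, 0.0)

# Attention: a softmax turns scores into nonnegative weights that sum to one
scores = np.array([1.0, 2.0, 0.5])
weights = np.exp(scores) / np.exp(scores).sum()

# Normalization: rescale activations to zero mean and unit variance
normalized = (x - x.mean()) / x.std()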

We’ve previously been adding new skills and tools to our NeuroAI toolkit. However, a new NeuroAI researcher also needs to be able to take ideas and concepts that are already familiar and see them in a new light, through a different lens, with a different justification. This often happens when relating two methods to a shared biological principle reveals how similarly they work. Once that link is there, it frees us up to use ideas from one field (e.g., neuroscience) or the other (e.g., AI) to generate hypotheses, tests, and potential advances. That’s what NeuroAI is all about, and today is a great chance for you to build extra familiarity with some common concepts.

Xaq will now introduce you to the topics of the day in a lot more detail in the video below.

Install and import feedback gadget#

# @title Install and import feedback gadget

!pip install vibecheck datatops --quiet

from vibecheck import DatatopsContentReviewContainer

def content_review(notebook_section: str):
    # Render an inline widget that collects feedback for a notebook section
    return DatatopsContentReviewContainer(
        "",  # No text prompt
        notebook_section,
        {
            "url": "https://pmyvdlilci.execute-api.us-east-1.amazonaws.com/klab",
            "name": "neuromatch_neuroai",
            "user_key": "wb2cxze8",
        },
    ).render()

feedback_prefix = "W1D5_Intro"

Prerequisites#

The first two tutorials of this day don’t use specific frameworks or modeling techniques; they discuss fundamental operations using the most popular Python libraries for data processing. The last tutorial, however, explores the attention mechanism used in Transformers. It may help to have an idea of this architecture type, which was already presented in W1D1; beyond that, no further specific knowledge is assumed.
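
If you would like a quick refresher before that tutorial, below is a minimal sketch of scaled dot-product attention, the core operation inside Transformers, written in plain NumPy with toy shapes chosen purely for illustration.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Similarity scores between queries and keys, scaled by sqrt(d_k)
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns each row of scores into attention weights summing to one
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted average of the value vectors
    return weights @ V

# Toy example: 3 tokens with 4-dimensional embeddings
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)  # shape (3, 4)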

Video#

Intro Video#

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_intro_video")

Slides#

Intro Video Slides#