
Robots in Disguise: Fundamental AI Reading Group

Public repo for The Alan Turing Institute's reading group on fundamental AI research.

If you're based at the Turing, follow #robots-in-disguise on the Turing Slack for the most recent updates.

All slides and reading materials for previous sessions are available in the archive.

Note that this group originated from the Research Engineering Team's reading group on Transformers.

Overview

The group meets every week on Mondays, 11:00-12:00. Everyone is welcome to join! If you have any questions, email Ryan Chan, Fede Nanni, or Giulia Occhini, and remember to go through our Code of Conduct before joining.

Please get in touch if you would like to give a talk (either about your research or a topic you think is relevant to the reading group), and add suggestions and emoji preferences to the list of proposed topics on HackMD!

Upcoming Schedule

| Date | Topic | Room | Lead |
|------|-------|------|------|
| 18/11/24 | Biological neural networks | David Blackwell | Balázs Mészáros, Jess Yu |
| 25/11/24 | Application of foundation models in time series tasks | David Blackwell | Gholamali Aminian |
| 02/12/24 | Can language models play the Wikipedia game? | David Blackwell | Alex Hickey, Jo Knight |
| 03/12/24 | Mechanistic Interpretability | David Blackwell | Neel Nanda |
| 09/12/24 | Scaling laws of neural networks | David Blackwell | Edmund Dable-Heath |
| 16/12/24 | TBC | David Blackwell | TBC |

Material for sessions

18/11/24

Biological neural networks

Two talks:

  • Event-Based Learning of Synaptic Delays in Spiking Neural Networks
  • Information-theoretic Analysis of Brain Dynamics & Neural Network Models Informed by Information Theory

02/12/24

Can language models play the Wikipedia game?

This project examines how language models can navigate Wikipedia, which tests their ability to link semantically similar topics in a practical way. We have run experiments with a wide variety of sentence-embedding models and large language models for comparison. We have also examined how performance varies when traversing Wikipedia in other languages and when navigating between scientific papers, which allows an assessment of the breadth of the models' abilities.
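
To make the setup concrete, here is a minimal sketch (not the project's actual code) of one plausible embedding-based strategy: greedily following the outgoing link whose title is most similar to the target page under a sentence-embedding model. The model name and the `get_links` helper are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the project's code): greedily walk from a
# start page towards a target page by embedding link titles with a sentence
# model and following the link most similar to the target.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence-embedding model

def get_links(title: str) -> list[str]:
    """Return the outgoing link titles of a Wikipedia page.
    Placeholder: in practice this would call the Wikipedia API."""
    raise NotImplementedError

def greedy_walk(start: str, target: str, max_steps: int = 20) -> list[str]:
    target_emb = model.encode(target, convert_to_tensor=True)
    path, current = [start], start
    for _ in range(max_steps):
        if current == target:
            break
        links = get_links(current)
        if not links:
            break
        link_embs = model.encode(links, convert_to_tensor=True)
        scores = util.cos_sim(target_emb, link_embs)[0]  # cosine similarity per link
        current = links[int(scores.argmax())]            # follow the most target-like link
        path.append(current)
    return path
```

A greedy walk like this can loop or dead-end, which is part of what makes the game a useful probe of how well embeddings capture semantic proximity between topics.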

09/12/24

Scaling laws of neural networks