Link to sign up

Getting here: Enter the lobby at 100 University Ave (right next to St Andrew subway station), then message Giles Edkins on the Meetup app or call him at 647-823-4865 to be let up to room 6H.

We all know that large language models (LLMs) can confidently emit falsehoods, a phenomenon known as hallucination. Joshua Carpeggiani will tell us about some interpretability methods - techniques for peering inside a model and making sense of what we find there - that might help detect and correct hallucinations.

We welcome people of all backgrounds, opinions, and experience levels.