When we see someone performing at the top of their craft, we often marvel at both their observable achievements and the hidden internal expertise that they've accumulated. Something similar is true, in a very different way, with generative AI and large language models: their successes involve both powerful observable behavior and deep internal representations of the world that they construct for their own uses. How do these internal representations work, and to what extent are they similar to or different from the representations of the world that we build as humans?

Location: Melvin Calvin Laboratory, UC Berkeley