
Luigi Acerbi, University of Helsinki: Probabilistic machine and biological learning under resource constraints
Yasser Roudi, Kavli Institute for Systems Neuroscience: Artificial Intelligence may beat us in chess, but not in memory
Tuesday, 23 February at 10:00-11:15, join URL https://aalto.zoom.us/j/67072679004 (passcode: brainmind)

Luigi Acerbi

Probabilistic machine and biological learning under resource constraints

In this talk, I will cover two areas of research in my group whose common thread is the ability of intelligent systems to deal with extreme resource constraints.

In the first part of the talk, I will discuss the problem of performing Bayesian inference when only a small number of model evaluations are available, such as when a researcher is fitting a complex computational model. To address this issue, I recently proposed a sample-efficient framework for approximate Bayesian inference, Variational Bayesian Monte Carlo (VBMC) [1,2].
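For readers who want a feel for the workflow, below is a minimal sketch of what running VBMC on a toy problem could look like. It is written against PyVBMC, the Python port of the MATLAB toolbox linked in [1]; the class and method names follow PyVBMC's documented interface but may differ between versions, and the Gaussian target is just a cheap stand-in for an expensive model.

```python
# Minimal sketch of sample-efficient posterior inference with (Py)VBMC.
# Interface follows PyVBMC's documentation; details may vary by version.
import numpy as np
from scipy.stats import multivariate_normal
from pyvbmc import VBMC

D = 2  # number of model parameters

def log_joint(theta):
    # Unnormalized log posterior (log likelihood + log prior); a cheap
    # Gaussian stands in here for an expensive simulator or model fit.
    return multivariate_normal.logpdf(theta, mean=np.zeros(D))

x0 = np.zeros((1, D))                          # starting point
lb, ub = np.full(D, -10.0), np.full(D, 10.0)   # hard bounds
plb, pub = np.full(D, -3.0), np.full(D, 3.0)   # plausible bounds

vbmc = VBMC(log_joint, x0, lb, ub, plb, pub)
vp, results = vbmc.optimize()   # variational posterior + diagnostics dict
samples, _ = vp.sample(1000)    # draws (and component indices) from the posterior
print(results["elbo"], samples.mean(axis=0))
```

The point of the framework is that the loop inside optimize() calls log_joint only a few dozen to a few hundred times, rather than the many thousands typical of MCMC.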

In the second part of the talk, I will present joint theoretical work on how rational but limited agents should normatively allocate expensive memory resources in reinforcement learning [3], with predictions for human and animal behavior.
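As a concrete (and deliberately simplified) illustration of the trade-off studied in [3], the toy script below is my own construction, not the paper's algorithm: action values are recalled with Gaussian noise whose precision is bought from a fixed budget, and the budget is allocated to maximize the expected payoff of the resulting choices.

```python
# Toy illustration (not the model of ref. [3]): allocate a fixed precision
# budget across noisy Q-value memories to maximize expected reward.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

dQ = np.array([1.0, 0.1])        # value gap between actions in two states
visit_p = np.array([0.5, 0.5])   # how often each state is encountered
budget = 4.0                     # total memory precision to distribute

def alloc(z):
    # Map unconstrained parameters to a precision allocation on the budget.
    w = np.exp(z - z.max())
    return budget * w / w.sum()

def expected_advantage(prec):
    # If both action values are recalled with noise N(0, 1/prec), the
    # recalled gap has std sqrt(2/prec); P(correct choice) = Phi(dQ/std).
    p_correct = norm.cdf(dQ / np.sqrt(2.0 / prec))
    return np.sum(visit_p * p_correct * dQ)

res = minimize(lambda z: -expected_advantage(alloc(z)), x0=np.zeros(2))
print(alloc(res.x))  # precision flows to where it changes choices and payoffs
```

Even this caricature shows the qualitative behavior: memory resources are not spread evenly but concentrated where precision actually improves decisions.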

[1] Acerbi L (2018). Variational Bayesian Monte Carlo. NeurIPS ’18. Code: https://github.com/lacerbi/vbmc

[2] Acerbi L (2020). Variational Bayesian Monte Carlo with Noisy Likelihoods. NeurIPS ’20.

[3] Patel N, Acerbi L, Pouget A (2020). Dynamic allocation of limited memory resources in reinforcement learning. NeurIPS ’20.

Bio:

Luigi Acerbi is an Assistant Professor at the Department of Computer Science of the University of Helsinki, where he leads the Machine and Human Intelligence research group. He is a member of the Finnish Center for Artificial Intelligence (FCAI), an affiliate researcher of the International Brain Laboratory, and an off-site visiting scholar at New York University.

Group website: http://www.helsinki.fi/machine-and-human-intelligence

Yasser Roudi

Artificial Intelligence may beat us in chess, but not in memory

A large body of theoretical work on neural networks (artificial or biological) has focused on networks of units with simple activation functions, most prominently binary units. Analysing such networks has led to some general conclusions. For instance, there is a long-held consensus that local biological learning mechanisms such as Hebbian learning are very inefficient compared with the iterative, non-local learning rules used in machine learning. In this talk, I will show that when it comes to memory operations, this conclusion is an artefact of analysing networks of binary neurons: when neurons with graded responses, more reminiscent of real neurons, are considered, memory storage in neural networks with Hebbian learning can be very efficient and close to optimal performance.
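To make the setting concrete, here is a minimal sketch, not the exact model analysed in the reference below: a small autoassociative network of threshold-linear (ReLU) units stores sparse graded patterns with a one-shot, purely local Hebbian covariance rule, and then retrieves a stored pattern from a degraded cue.

```python
# Illustrative toy only: Hebbian storage and retrieval in a
# threshold-linear autoassociative network with graded, sparse patterns.
import numpy as np

rng = np.random.default_rng(0)
N, P, a = 500, 10, 0.2                       # units, patterns, sparseness

# Sparse graded patterns: a fraction a of units active, with graded rates.
xi = (rng.random((P, N)) < a) * rng.exponential(1.0, (P, N))
mean_act = xi.mean()

# Hebbian covariance rule: one pass over the patterns, purely local.
J = (xi - mean_act).T @ (xi - mean_act) / N
np.fill_diagonal(J, 0.0)

def retrieve(cue, steps=50):
    v = cue.copy()
    for _ in range(steps):
        v = np.maximum(J @ v, 0.0)                 # threshold-linear update
        v /= (np.linalg.norm(v) + 1e-12)           # keep activity bounded
    return v

cue = xi[0] * (rng.random(N) < 0.5)                # half the pattern deleted
v = retrieve(cue / (np.linalg.norm(cue) + 1e-12))
target = xi[0] / np.linalg.norm(xi[0])
print("overlap with stored pattern:", float(v @ target))
```

The graded (threshold-linear) response is the key ingredient: the same one-shot local rule applied to binary units is what underlies the pessimistic conclusions mentioned above.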

Ref: Schönsberg F, Roudi Y, Treves A (2021). Efficiency of local learning rules in threshold-linear associative networks. Physical Review Letters, 126(1), 018301.

Bio:

Yasser Roudi is a Professor in the SPINOr group of the Kavli Institute for Systems Neuroscience and Centre for Neural Computation in Trondheim, Norway. His research focuses on understanding information processing in biological and artificial systems, primarily using methods from statistical mechanics and information theory. He studied at Sharif University of Technology, Tehran, and at SISSA, Trieste. Prior to joining the Kavli Institute, he worked at the Gatsby Unit, University College London; NORDITA in Stockholm; and the Weill Medical College of Cornell University in New York.

For more info see: www.spinorkavli.org


Subscribe to the mailing list to receive updates directly by email!
