Helsinki Distinguished Lecture Series on Future Information Technology

HIIT is the organiser of the Helsinki Distinguished Lecture Series on Future Information Technology. The series focuses on the research challenges and solutions facing current and future information technology, as seen by internationally leading experts in the field. The lectures are intended to be approachable for people with a scientific education in fields other than information technology, whilst at the same time offering information technology experts new viewpoints on their own discipline. The series was launched in 2012, and in 2018 it continued with five lectures.

Machine Learning Systems: On-device AI and beyond

Nic Lane

University of Oxford

February 27, 2018

Video

Abstract

In just a few short years, breakthroughs in deep learning have transformed how computational models perform a wide variety of tasks, such as recognizing a face, driving a car or translating a language. Unfortunately, deep models and algorithms typically place severe demands on local device resources, and this conventionally limits their adoption within mobile and embedded platforms. Because sensor perception and reasoning are so fundamental to this class of computation, I believe the evolution of devices like phones, wearables and things will be crippled until current – and future – deep learning innovations can be simply and efficiently integrated into these systems.

In this talk, I will describe our progress towards developing general-purpose support for deep learning on resource-constrained mobile and embedded devices. Primarily, this requires a radical reduction in the resources (viz. energy, memory and computation) consumed by these models – especially at inference time. I will highlight various, largely complementary, approaches we have invented to achieve this goal, including binary “on-the-fly” networks, sparse layer representations, dynamic forms of compression, and the scheduling of partitioned model architectures. Collectively, these techniques rethink how deep learning algorithms execute, not only to better cope with mobile and embedded device conditions, but also to increase the utilization of commodity processors (e.g., DSPs, GPUs, CPUs) as well as emerging purpose-built deep learning accelerators.
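As a rough illustration of the scale of savings at stake (a generic XNOR-Net-style sketch, not the speaker's actual methods), binarizing a layer's weights replaces 32-bit values with signs plus a single scaling factor:

```python
import numpy as np

def binarize_weights(w):
    """Approximate a real-valued weight matrix by sign(w) scaled by the
    mean absolute value, so each weight needs 1 bit instead of 32."""
    alpha = np.abs(w).mean()          # per-tensor scaling factor
    return alpha * np.sign(w), alpha

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)
w_bin, alpha = binarize_weights(w)

# Memory at inference time: 256*256 float32 weights (~256 KiB) versus
# one bit per weight plus one float scale (~8 KiB), a ~32x reduction.
x = rng.standard_normal(256).astype(np.float32)
approx_output = w_bin @ x
```

Real binary networks also replace multiply-accumulates with cheap bitwise operations; this sketch only shows the storage side of the idea.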

About the Speaker

Nic Lane is an Associate Professor in the Computer Science Department at the University of Oxford. Before joining Oxford, he held dual appointments at University College London (UCL) and Nokia Bell Labs; at Nokia, as a Principal Scientist, he founded and led DeepX – an embedded-focused deep learning unit at the Cambridge location. Nic specializes in the study of efficient machine learning under computational constraints and, over the last three years, has pioneered a range of embedded and mobile forms of deep learning. This work formed the basis for his 2017 Google Faculty Award in machine learning. Nic’s work has received multiple best paper awards, including at ACM/IEEE IPSN 2017 and twice at ACM UbiComp (2012 and 2015). This year he serves as PC Chair of ACM SenSys 2018. Prior to moving to England, Nic spent four years as a Lead Researcher at Microsoft Research in Beijing. He received his PhD from Dartmouth College in 2011. More information about Nic is available from http://niclane.org.

The Time-Traveling Networking Researcher: Witnessing the consolidation and fragmentation of the Internet?

Mostafa Ammar

Georgia Institute of Technology

June 4, 2018

Video

Abstract

A networking researcher traveling forward in time from 1985 to the present would be shocked by many things – not least the fact that people are still doing networking research some 33 years later. While some of the terminology would sound familiar, the networks themselves and what we use them for would be totally unrecognizable – yet quite impressive. Those of us who could not afford a time machine have instead observed a more gradual evolution, interspersed with the occasional shocking development. In this talk I will first argue that a Service-Infrastructure Cycle has been fundamental to networking evolution. Four decades’ worth of iterations of this Cycle have yielded the Internet as we know it today: a common, shared global networking infrastructure that delivers almost all services. I will further argue, using brief historical case studies, that the success of network mechanism deployments often hinges on whether mechanism evolution follows the iterations of this Cycle. Many have observed that this network, the Internet, has become ossified and unable to change in response to new demands. However, novel service requirements and scale increases continue to exert significant pressure on this ossified infrastructure. The result, I will conjecture, will be a fragmentation – the beginnings of which are evident today – that will ultimately change the character of the network infrastructure in a fundamental way. By ushering in a ManyNets world, this fragmentation will lubricate the Service-Infrastructure Cycle so that it can continue to govern the evolution of networking. I will conclude with a brief discussion of the possible implications of this emerging ManyNets world for networking research.

About the Speaker

Mostafa Ammar is a Regents’ Professor in the School of Computer Science at the Georgia Institute of Technology, where he has been on the faculty since 1985. Dr. Ammar received the S.B. and S.M. degrees from the Massachusetts Institute of Technology in 1978 and 1980, respectively, and the Ph.D. from the University of Waterloo, Ontario, Canada in 1985. His research interests are in network architectures, protocols and services. He has made contributions in the areas of multicast communication and services, multimedia streaming, content distribution networks, network simulation, disruption-tolerant networks and, most recently, mobile cloud computing and network virtualization, and has published extensively in these areas. To date, 35 PhD students have completed their degrees under his supervision; many have gone on to distinguished careers in academia and industry. Dr. Ammar has served the networking research community in multiple roles. Most notably, he served as Editor-in-Chief of the IEEE/ACM Transactions on Networking (ToN) from 1999 to 2003, and as co-TPC Chair for IEEE ICNP 1997, ACM CoNEXT 2006 and ACM SIGMETRICS 2007. He currently serves on the steering committee of the IEEE Transactions on Mobile Computing. His awards include the IBM Faculty Partnership Award (1996), the Best Paper Award at the 7th WWW conference (1998), the GT Outstanding Doctoral Thesis Advisor Award (2006), the Outstanding Service Award from the IEEE Technical Committee on Computer Communications (2010), the ACM MobiHoc Best Paper Award (2012), and the GT College of Computing Faculty Mentor Award (2015). Dr. Ammar was elected Fellow of the IEEE in 2002 and Fellow of the ACM in 2003.

Self-Supervised Visual Learning and Synthesis

Alexei A. Efros

University of California, Berkeley

June 15, 2018

Abstract

Computer vision has made impressive gains through the use of deep learning models trained with large-scale labeled data. However, labels require expertise and curation and are expensive to collect. Can one discover useful visual representations without the use of explicitly curated labels? In this talk, I will present several case studies exploring the paradigm of self-supervised learning – using raw data as its own supervision. Several ways of defining objective functions in high-dimensional spaces will be discussed, including the use of Generative Adversarial Networks (GANs) to learn the objective function directly from the data. Applications in image synthesis will be shown, including automatic colorization, paired and unpaired image-to-image translation (aka pix2pix and CycleGAN), curiosity-based exploration, and, terrifyingly, #edges2cats.
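A minimal sketch of "raw data as its own supervision" (illustrative only, not the speaker's code): in the colorization pretext task, each unlabeled RGB image yields a training pair for free – the grayscale version is the input and the discarded color is the target.

```python
import numpy as np

def make_colorization_pair(rgb):
    """Turn one unlabeled RGB image into an (input, target) pair.
    No human labels are needed: the color channels the model must
    predict come from the image itself."""
    rgb = rgb.astype(np.float32) / 255.0
    # Standard luma weights; a real system would typically work in Lab space.
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return gray[..., None], rgb   # (H, W, 1) input, (H, W, 3) target

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
x, y = make_colorization_pair(image)
```

A network trained to map `x` to `y` over millions of such pairs is forced to learn semantics (grass is green, sky is blue), which is what makes the learned representation useful beyond colorization.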

About the Speaker

Alexei (Alyosha) Efros joined UC Berkeley in 2013. Prior to that, he spent nine years on the faculty of Carnegie Mellon University, and has also been affiliated with École Normale Supérieure/INRIA and the University of Oxford. His research is in the areas of computer vision and computer graphics, especially at the intersection of the two. He is particularly interested in using data-driven techniques to tackle problems where large quantities of unlabeled visual data are readily available. Efros received his PhD in 2003 from UC Berkeley. He is a recipient of the CVPR Best Paper Award (2006), the NSF CAREER award (2006), a Sloan Fellowship (2008), a Guggenheim Fellowship (2008), an Okawa Grant (2008), the Finmeccanica Career Development Chair (2010), the SIGGRAPH Significant New Researcher Award (2010), an ECCV Best Paper Honorable Mention (2010), three Helmholtz Test-of-Time Prizes (2013, 2017), and the ACM Prize in Computing (2016).

Using Edge Computing for Privacy in Real-Time Video Analytics

Mahadev Satyanarayanan

Carnegie Mellon University

September 24, 2018

Video

Abstract

Live video offers several advantages over other sensing modalities. It is flexible and open-ended: new image and video processing algorithms can extract new information from existing video streams. It offers high resolution, wide coverage, and low cost relative to other sensing modalities. Its passive nature means that a participant does not have to wear a special device, install an app, or do anything special; he or she merely has to be visible to a camera. Privacy is clearly a major concern with video in public spaces. In this talk, I will describe how Edge Computing can be used to denature live video, thereby making it “safe” from a privacy point of view. Using OpenFace, our new open-source face recognition system that approaches state-of-the-art accuracy, we are able to selectively obscure faces according to user-specified policies at full frame rate. This enables privacy management for live video analytics, while providing a secure approach for handling retrospective policy exceptions.
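To make the idea of policy-driven denaturing concrete, here is a hedged sketch of only the obscuring step (the function name, box format and policy format are illustrative assumptions; face detection and recognition – in the talk, via OpenFace – are assumed to happen upstream):

```python
import numpy as np

def denature(frame, faces, policy):
    """Obscure faces a policy marks as private by flattening their
    bounding boxes to a single average color. `faces` is a list of
    (name, x, y, w, h) tuples produced by an upstream recognizer;
    unknown names default to "obscure" (privacy-safe by default)."""
    out = frame.copy()
    for name, x, y, w, h in faces:
        if policy.get(name, "obscure") == "obscure":
            region = out[y:y + h, x:x + w]
            out[y:y + h, x:x + w] = region.mean(axis=(0, 1)).astype(out.dtype)
    return out

frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[20:40, 20:40] = 200                      # a bright "face" region
frame[28:32, 28:32] = 50                       # detail inside the face
policy = {"alice": "allow", "bob": "obscure"}  # user-specified policy
safe = denature(frame, [("bob", 20, 20, 20, 20)], policy)
```

Running this at full frame rate on an edge node, rather than in the cloud, is what keeps raw (un-denatured) frames from ever leaving the camera's vicinity.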

About the Speaker

Satya’s multi-decade research career has focused on the challenges of performance, scalability, availability and trust in information systems that reach from the cloud to the mobile edge of the Internet. In the course of this work, he has pioneered many advances in distributed systems, mobile computing, pervasive computing, and the Internet of Things (IoT). Most recently, his seminal 2009 publication “The Case for VM-based Cloudlets in Mobile Computing” and the ensuing research have led to the emergence of Edge Computing (also known as “Fog Computing”). Satya is the Carnegie Group Professor of Computer Science at Carnegie Mellon University. He received his PhD in Computer Science from Carnegie Mellon, after Bachelor’s and Master’s degrees from the Indian Institute of Technology, Madras. He is a Fellow of the ACM and the IEEE. For a more detailed bio, see Satya’s Wikipedia entry.

AI for Neuroscience & Neuroscience for AI

Irina Rish

IBM T.J. Watson Research Center

October 11, 2018

Abstract

AI and neuroscience share the same age-old goal: to understand the essence of intelligence. Thus, despite the different tools used and different questions explored by the two disciplines, each has a lot to learn from the other. In this talk, I will summarize some of our recent projects that explore both directions: AI for neuro and neuro for AI. AI for neuro involves using machine learning to recognize mental states and to identify statistical biomarkers of various mental disorders from heterogeneous data (neuroimaging, wearables, speech), as well as applying our recently proposed hashing-based representation learning to dialog generation in depression therapy. Neuro for AI means drawing inspiration from neuroscience to develop better machine learning algorithms. In particular, I will focus on the continual (lifelong) learning objective and discuss several examples of neuro-inspired approaches: (1) neurogenetic online model adaptation in nonstationary environments; (2) more biologically plausible alternatives to backpropagation, e.g., local optimization for neural network learning via alternating minimization with auxiliary activation variables, and co-activation memory; (3) modeling reward-driven attention and attention-driven reward in a contextual bandit setting; and (4) modeling and forecasting the behavior of coupled nonlinear dynamical systems such as the brain (from calcium imaging and fMRI) using a combination of an analytical van der Pol model with LSTMs, especially in small-data regimes, where such a hybrid approach outperforms both of its components used separately.
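The van der Pol oscillator mentioned in (4) is a classical nonlinear model, x'' = mu*(1 - x^2)*x' - x, whose self-sustained limit-cycle oscillations make it a natural analytical prior for oscillatory brain signals. A minimal forward-Euler integration (illustrative only; the talk's hybrid approach couples such a model with LSTMs) looks like:

```python
import numpy as np

def van_der_pol(x0, v0, mu=1.0, dt=0.01, steps=5000):
    """Integrate x'' = mu*(1 - x^2)*x' - x with forward Euler.
    For mu > 0 the damping term pumps energy in when |x| < 1 and
    dissipates it when |x| > 1, so trajectories settle onto a
    stable limit cycle regardless of the starting point."""
    xs = np.empty(steps)
    x, v = x0, v0
    for i in range(steps):
        a = mu * (1.0 - x * x) * v - x   # nonlinear damping + restoring force
        x, v = x + dt * v, v + dt * a
        xs[i] = x
    return xs

trajectory = van_der_pol(x0=0.5, v0=0.0)
```

In a hybrid scheme, a few interpretable parameters (here just mu) capture the dominant oscillatory dynamics, leaving the data-hungry learned component only the residual to model – which is why such combinations can help in small-data regimes.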

About the Speaker

Irina Rish is a researcher in the AI Foundations department of the IBM T.J. Watson Research Center. She received an MS in Applied Mathematics from the Moscow Gubkin Institute, Russia, and a PhD in Computer Science from the University of California, Irvine. Her areas of expertise include artificial intelligence and machine learning, with a particular focus on probabilistic graphical models, sparsity and compressed sensing, active learning, and their applications to various domains, ranging from diagnosis and performance management of distributed computer systems (“autonomic computing”) to predictive modeling and statistical biomarker discovery in neuroimaging and other biological data. Irina has published over 70 research papers, several book chapters, two edited books, and a monograph on Sparse Modeling; she has taught several tutorials and organized multiple workshops at machine-learning conferences, including NIPS, ICML and ECML. She holds over 26 patents and several IBM awards. Irina currently serves on the editorial board of the Artificial Intelligence Journal (AIJ). As an adjunct professor in the EE Department of Columbia University, she has taught several advanced graduate courses on statistical learning and sparse signal modeling.