
Abstract

In this talk I will describe our recent work on effectively using graph-structured data. Specifically, I will discuss how to compress graphs to facilitate predictions, how to understand the capacity of algorithms operating on graphs, and how to infer interaction graphs so as to predict deliberative outcomes.

Our approach to graph compression builds on the idea of optimal transport, which specifies the cost of mapping a large graph to a smaller one. This cost decomposes as a flow on the edges, and the selection of the subgraph to retain can be optimized via linear programming relaxations.
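To make the transport view concrete, here is a minimal sketch (not the formulation from the talk) that poses a toy node-mapping step as a small linear program with scipy.optimize.linprog: uniform mass on the nodes of a large graph is transported onto the nodes of a hypothetical smaller graph under an assumed random cost matrix, and the LP returns the optimal plan and its cost.

    import numpy as np
    from scipy.optimize import linprog

    # Toy illustration only: map mass from the nodes of a "large" graph onto a
    # candidate "small" graph. C[i, j] is an assumed cost of sending large-graph
    # node i to small-graph node j (random here; in practice it would reflect
    # the graphs' structure).
    n_large, n_small = 6, 3
    rng = np.random.default_rng(0)
    C = rng.random((n_large, n_small))

    mu = np.full(n_large, 1.0 / n_large)   # mass on large-graph nodes
    nu = np.full(n_small, 1.0 / n_small)   # mass on small-graph nodes

    # Transport plan T (flattened row-major), with marginal constraints:
    # sum_j T[i, j] = mu[i] and sum_i T[i, j] = nu[j].
    A_eq, b_eq = [], []
    for i in range(n_large):
        row = np.zeros(n_large * n_small)
        row[i * n_small:(i + 1) * n_small] = 1.0
        A_eq.append(row)
        b_eq.append(mu[i])
    for j in range(n_small):
        col = np.zeros(n_large * n_small)
        col[j::n_small] = 1.0
        A_eq.append(col)
        b_eq.append(nu[j])

    res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None))
    plan = res.x.reshape(n_large, n_small)
    print("optimal transport cost:", res.fun)

In the work described above the costs are tied to the edge structure and the retained subgraph is itself part of the optimization; here both are fixed so that the example stays self-contained.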

Graph neural networks (GNNs) are naturally suited to making predictions based on graphs, but they remain poorly understood in terms of what they can and cannot do. We introduce constructions that reveal how GNNs fail to distinguish many graph properties, such as cycles and diameter, so long as the graphs have similar local structure. We also extend RNN generalization analysis to GNNs via computation trees.
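As a minimal numerical illustration of this point (using a hand-rolled sum-aggregation network rather than any particular architecture from the talk): two disjoint triangles and a single six-cycle are both 2-regular, so a purely local message-passing readout produces identical embeddings for them even though their cycle lengths, diameters, and number of components differ.

    import numpy as np

    def cycle_adj(lengths):
        """Adjacency matrix of a disjoint union of cycles of the given lengths."""
        n = sum(lengths)
        A = np.zeros((n, n))
        start = 0
        for length in lengths:
            for k in range(length):
                i, j = start + k, start + (k + 1) % length
                A[i, j] = A[j, i] = 1.0
            start += length
        return A

    def gnn_readout(A, layers=3, seed=0):
        """Untrained sum-aggregation message passing with a sum readout."""
        rng = np.random.default_rng(seed)
        h = np.ones((A.shape[0], 4))          # identical initial node features
        for _ in range(layers):
            W = rng.standard_normal((4, 4))   # shared layer weights
            h = np.tanh((h + A @ h) @ W)      # aggregate neighbours, transform
        return h.sum(axis=0)                  # permutation-invariant readout

    G1 = cycle_adj([3, 3])   # two triangles: girth 3, two components
    G2 = cycle_adj([6])      # one hexagon: girth 6, one component
    print(np.allclose(gnn_readout(G1), gnn_readout(G2)))   # True

Every node in both graphs sees the same multiset of neighbour features at every round, so any readout built purely from such local aggregation assigns both graphs the same embedding.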

In many cases the graph structure is not given but must be inferred. This is the case, for example, when trying to understand how a set of players arrive at their decisions. We study the role of structure, namely the interaction graph, in predicting outcomes of such deliberative games. We characterize conditions under which players converge to an equilibrium, and when the unknown interaction graph can be identified from a data set of context-dependent outcomes.
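A hypothetical linear best-response game, sketched below only to make the convergence and identification questions concrete (it is not the talk's model): each player's action combines a private context term with the actions of its neighbours on an interaction graph, and because the assumed response map is a contraction, repeated best responses converge to the unique equilibrium.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 5

    # Assumed (unknown to the analyst) undirected interaction graph.
    A = (rng.random((n, n)) < 0.4).astype(float)
    np.fill_diagonal(A, 0.0)
    A = np.maximum(A, A.T)
    deg = np.maximum(A.sum(axis=1), 1.0)
    W = 0.5 * A / deg[:, None]        # influence weights; contraction by design

    b = rng.standard_normal(n)        # context: each player's private term
    x = np.zeros(n)                   # initial actions
    for _ in range(100):              # repeated best responses (deliberation)
        x = b + W @ x

    x_star = np.linalg.solve(np.eye(n) - W, b)   # fixed point of x = b + W x
    print(np.allclose(x, x_star))                # True: converged to equilibrium

In this toy model, identifying the interaction graph would amount to recovering W from context-outcome pairs (b, x) observed across many games, which mirrors the identification question posed above.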

 

Bio

Vikas Garg is a PhD student in Computer Science at MIT, where his research is supervised by Prof. Tommi Jaakkola. His research interests in machine learning include generative models, graphical models, the theory of deep learning, and learning under uncertainty or resource constraints, as well as their intersections with optimization and game theory.