At the beginning of the talk, Zoubin took an interesting look back at the early 1990s, when he attended NIPS for the first time:

  • At that time, neural networks were hip, Hamiltonian Monte Carlo had just been introduced (Radford Neal), as had Laplace approximations for neural networks (David MacKay), and SVMs were on the rise.
  • Neural networks had the same problems we have today: local optima, choice of architectures, long training times, …
  • Radford Neal showed that Bayesian neural networks with a single hidden layer converge to a Gaussian process in the limit of infinitely many hidden units (see the sketch below). He also analyzed infinitely deep neural networks.
  • New ideas that came about at that time: EM, graphical models, variational inference.
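
To unpack the bullet on Neal's result: for a one-hidden-layer network with suitably scaled priors, the argument is essentially the central limit theorem. A minimal sketch (the notation is mine, not from the slides):

    f(x) = b + \sum_{j=1}^{H} v_j \, h(x; u_j),
    \qquad b \sim \mathcal{N}(0, \sigma_b^2),
    \quad v_j \sim \mathcal{N}(0, \sigma_v^2 / H)

    \mathrm{Cov}\big(f(x), f(x')\big)
      = \sigma_b^2 + \sigma_v^2 \, \mathbb{E}_{u}\big[ h(x; u) \, h(x'; u) \big]
      =: k(x, x')

For any finite set of inputs, f is a sum of H i.i.d. contributions, so its joint distribution becomes Gaussian as H → ∞; in other words, f converges to a Gaussian process with covariance function k.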

Since then, many of these ideas have gained, lost, and re-gained momentum, but they have definitely shaped machine learning.

Rich Sutton gave a tutorial on function approximation in RL. As Rich is one of the pioneers of RL, I was looking forward to his insights. He started off with some inherent properties of reinforcement learning (a toy sketch of function approximation itself follows the list), which include:

  • Evaluative Feedback
  • Delayed consequences
  • Trial and error learning
  • Non-stationarity
  • Sequential problems
  • No human instruction
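
Since the tutorial was about function approximation, here is a toy sketch of what that means in the simplest case: semi-gradient TD(0) with a linear value function. The environment (a dummy random transition model) and the one-hot features are placeholders of mine, not anything from the talk:

    import numpy as np

    n_features, alpha, gamma = 8, 0.05, 0.99
    w = np.zeros(n_features)  # linear value function: v(s) ~ w @ x(s)

    def features(state):
        # Hypothetical one-hot encoding; real applications use coarser features.
        x = np.zeros(n_features)
        x[state] = 1.0
        return x

    rng = np.random.default_rng(0)
    state = 0
    for _ in range(10_000):
        next_state = int(rng.integers(n_features))  # dummy transition dynamics
        reward = 1.0 if next_state == n_features - 1 else 0.0
        x, x_next = features(state), features(next_state)
        td_error = reward + gamma * (w @ x_next) - w @ x
        w += alpha * td_error * x  # semi-gradient update along the feature vector
        state = next_state

The same update works unchanged with any feature map; that is the point of the linear setting.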

I just returned from the Gaussian Process Summer School in Sheffield, followed by a Workshop on Bayesian Optimization. The aim of the GPSS was to expose people to Gaussian processes. This was done through some introductory lectures on GP regression, followed by some generalizations (e.g., classification, the GP-LVM, sparse GPs, Bayesian optimization) and some talks on current GP research.
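
For context, GP regression in its plainest form fits in a few lines. Here is a toy numpy sketch along the lines of the intro lectures (the RBF kernel, data, and hyperparameters are all made-up placeholders):

    import numpy as np

    def rbf(A, B, lengthscale=1.0, variance=1.0):
        # Squared-exponential kernel between two sets of inputs.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return variance * np.exp(-0.5 * d2 / lengthscale**2)

    X = np.linspace(0, 5, 10)[:, None]    # training inputs
    y = np.sin(X).ravel()                 # toy targets
    Xs = np.linspace(0, 5, 100)[:, None]  # test inputs
    noise = 1e-2                          # observation noise variance

    K = rbf(X, X) + noise * np.eye(len(X))
    Ks, Kss = rbf(X, Xs), rbf(Xs, Xs)

    L = np.linalg.cholesky(K)
    a = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ a                       # posterior mean at the test inputs
    V = np.linalg.solve(L, Ks)
    cov = Kss - V.T @ V                   # posterior covariance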

Samuel J. Gershman, Eric J. Horvitz, Joshua B. Tenenbaum:
Computational Rationality: A Converging Paradigm for Intelligence in Brains, Minds, and Machines
Science 349(6245): 273-278, 2015

The article provides a shared, computation-based view of concepts in AI, cognitive science, and neuroscience, and discusses advances that address the challenges of perception and action under uncertainty.

Manuel Watter, Jost Tobias Springenberg, Joschka Boedecker, Martin Riedmiller:
Embed to Control: A Locally Linear Latent Dynamics Model for Control from Raw Images
arXiv, 2015

This paper deals with the problem of model-based reinforcement learning (RL) from images. The idea behind model-based RL is to learn a model of the transition dynamics of the system/robot and use this model as a surrogate simulator. This is helpful if we want to minimize experiments with a (physical/mechanical) system. The added difficulty addressed in this paper is that this predictive transition model should be learned from raw images where only pixel information is available.
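
As a reminder of the overall pattern (this is the generic model-based RL loop, not the paper's E2C architecture), a fully made-up toy version might look as follows; the 1D dynamics, the linear model, and the random-shooting planner are all placeholders of mine:

    import numpy as np

    rng = np.random.default_rng(0)

    def true_step(s, a):
        # The unknown real system; we only get to sample it.
        return 0.9 * s + 0.5 * a + 0.01 * rng.normal()

    # 1. Collect real transitions (the expensive part we want to minimize).
    data, s = [], 0.0
    for _ in range(200):
        a = rng.uniform(-1.0, 1.0)
        s_next = true_step(s, a)
        data.append((s, a, s_next))
        s = s_next

    # 2. Fit a transition model s' ~ theta @ [s, a] by least squares.
    X = np.array([[s, a] for s, a, _ in data])
    y = np.array([s_next for _, _, s_next in data])
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)

    # 3. Use the model as a surrogate simulator: random-shooting search for
    #    an action sequence that drives the predicted state to a target.
    target, horizon = 1.0, 5
    best_cost, best_plan = np.inf, None
    for _ in range(1000):
        plan = rng.uniform(-1.0, 1.0, size=horizon)
        s_sim, cost = 0.0, 0.0
        for a in plan:
            s_sim = theta @ np.array([s_sim, a])  # simulated, no real experiments
            cost += (s_sim - target) ** 2
        if cost < best_cost:
            best_cost, best_plan = cost, plan

The paper's contribution is to make step 2 work when the state is only observed through raw pixels, by learning a low-dimensional latent space in which the dynamics are locally linear.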

Kurt T. Miller, Thomas L. Griffiths and Michael I. Jordan:
Nonparametric Latent Feature Models for Link Prediction
NIPS 2009

The objective of this paper is to predict links in social networks. The working assumption is that links depend on latent relational features of the entities. The paper simultaneously infers the number of these features and learns which entities have each feature.
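
The usual formulation of this kind of model puts a binary latent feature vector z_i on each entity and models the probability of a link i → j as sigma(z_i' W z_j), with an Indian Buffet Process prior on the feature matrix so that the number of features need not be fixed in advance. A minimal sketch of just the likelihood part, with made-up Z and W (the IBP prior and the inference are omitted):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    n_entities, n_features = 5, 3
    rng = np.random.default_rng(0)
    Z = rng.integers(0, 2, size=(n_entities, n_features))  # binary latent features
    W = rng.normal(size=(n_features, n_features))          # feature interaction weights

    link_probs = sigmoid(Z @ W @ Z.T)  # link_probs[i, j] = P(edge from i to j)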

arXiv has become a major source of information for statistics and machine learning. Daily email digests tell me which papers have been uploaded since yesterday, including authors, abstracts, and a link to each paper. On the receiving side, this is invaluable for me.

However, on the producing/publishing side, not everybody thinks that uploading papers to arXiv is a good idea, and there are several good reasons for this.