Archive for December, 2015

Yoshua Bengio and Yann LeCun gave this tutorial as a tandem talk.

The tutorial started off by looking at what we need in machine learning and AI in general. Two key points were identified (a toy illustration follows the list below):

  • Distributed representation
  • Compositional models
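
To make the first point a bit more concrete, here is a toy illustration (my own, not taken from the tutorial): a distributed representation with n binary features can distinguish 2^n inputs, whereas a local (one-hot) representation needs a dedicated unit per input.

    # Toy contrast between distributed and local (one-hot) representations
    # (my own illustration, not from the tutorial).
    import numpy as np

    def distributed_code(i, n_features):
        # represent input i by the bits of its index: n_features units cover 2**n_features inputs
        return np.array([(i >> k) & 1 for k in range(n_features)])

    def one_hot_code(i, n_inputs):
        # local representation: one dedicated unit per input
        x = np.zeros(n_inputs, dtype=int)
        x[i] = 1
        return x

    n_features = 10
    n_inputs = 2 ** n_features
    print("units needed, distributed:", n_features)   # grows linearly with the number of bits
    print("units needed, one-hot:    ", n_inputs)     # grows exponentially
    print("distributed code for input 5:", distributed_code(5, n_features))
    print("one-hot code for input 5 has", one_hot_code(5, n_inputs).size, "units")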

(more…)

This is a brief summary of the first part of the Deep RL workshop at NIPS 2015. I couldn’t get a seat for the second half…

(more…)

I attended the Bayesian Optimization workshop at NIPS 2015, and the following summarizes what was going on in the workshop from my perspective. This post primarily serves my own interest in not losing these notes, but it may be useful to others as well.

The workshop was effectively run by Bobak Shahriari and Roberto Calandra. At the beginning of the workshop, Bobak Shahriari gave a brief introduction to Bayesian Optimization (BO), motivating the entire setting of data-efficient global black-box optimization and the gap that the workshop would address. (more…)
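
To make the setting concrete, here is a minimal sketch of a BO loop (my own illustration, not taken from the talk): fit a GP surrogate to the evaluations gathered so far, then choose the next query point by maximizing an acquisition function such as expected improvement. The toy objective, the candidate grid, and the small evaluation budget are assumptions made purely for illustration.

    # Minimal Bayesian optimization loop (illustrative sketch, not from the workshop):
    # fit a GP surrogate to past evaluations, pick the next point by expected improvement.
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def objective(x):
        # toy black-box objective, assumed for illustration; in practice this is expensive to evaluate
        return np.sin(3 * x) + 0.1 * x ** 2

    def expected_improvement(mu, sigma, best):
        # EI for minimization; guard against zero predictive variance
        sigma = np.maximum(sigma, 1e-9)
        z = (best - mu) / sigma
        return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    rng = np.random.default_rng(0)
    grid = np.linspace(-2, 2, 500).reshape(-1, 1)   # candidate set (cheap stand-in for an inner optimizer)
    X = rng.uniform(-2, 2, size=(3, 1))             # a few initial random evaluations
    y = objective(X).ravel()

    for _ in range(10):                             # small budget: data efficiency is the whole point
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
        mu, sigma = gp.predict(grid, return_std=True)
        x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next).item())

    print("best x:", X[np.argmin(y)].item(), "best value:", y.min())

Each new evaluation is placed where the surrogate's predicted value and uncertainty together suggest the most progress, which is what makes the approach data-efficient compared to grid or random search.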

At the beginning of the talk, Zoubin took an interesting look back at the early 90s, when he attended NIPS for the first time:

  • At that time, neural networks were hip, Hamiltonian Monte Carlo was introduced (Radford Neal), Laplace approximations for neural networks were introduced (David MacKay), and SVMs were coming up.
  • Neural networks had the same problems we have today: local optima, choice of architectures, long training times, …
  • Radford Neal showed that Bayesian neural networks with a single hidden layer converge to a Gaussian process in the limit of infinitely many hidden units (a small numerical sketch of this limit follows below). He also analyzed infinitely deep neural networks.
  • New ideas that came about at that time: EM, graphical models, variational inference.

Since then, many of these ideas have gained/lost/re-gained momentum, but they have definitely shaped machine learning.
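
As a small numerical sketch of Neal's result (my own illustration, not from the talk): with i.i.d. priors on the weights and a 1/sqrt(H) scaling of the output layer, the prior over the outputs of a one-hidden-layer network at a fixed set of inputs approaches a multivariate Gaussian with a stable covariance as the number of hidden units H grows.

    # Numerical sketch of Neal's result (my own illustration, not from the talk):
    # with i.i.d. weight priors and 1/sqrt(H) output scaling, the prior over outputs of a
    # one-hidden-layer tanh network at fixed inputs approaches a Gaussian as H grows.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.array([[-1.0], [0.5]])        # two fixed input points
    n_networks = 4000                    # networks drawn from the prior

    for H in (1, 10, 100, 1000):         # hidden-layer widths
        W = rng.normal(size=(n_networks, 1, H))   # input-to-hidden weights
        b = rng.normal(size=(n_networks, 1, H))   # hidden biases
        v = rng.normal(size=(n_networks, H, 1))   # hidden-to-output weights
        h = np.tanh(x @ W + b)                    # hidden activations, shape (n_networks, 2, H)
        f = (h @ v)[..., 0] / np.sqrt(H)          # outputs at the two inputs, shape (n_networks, 2)
        print(f"H={H:5d}  empirical prior covariance of (f(x1), f(x2)):")
        print(np.cov(f.T).round(2))

As H increases, the empirical covariance settles towards the covariance of the limiting Gaussian process that Neal analysed, and the marginals of the outputs become Gaussian.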

(more…)

Rich Sutton gave a tutorial on function approximation in RL. As Rich is one of the pioneers of RL, I was looking forward to his insights. He started off with some inherent properties of reinforcement learning, which include:

  • Evaluative feedback
  • Delayed consequences
  • Trial-and-error learning
  • Non-stationarity
  • Sequential problems
  • No human instruction

(more…)
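
To make "function approximation" concrete, here is a minimal sketch (my own, not taken from the tutorial's slides) of semi-gradient TD(0) with a linear value function v(s) ≈ wᵀx(s), run on the classic five-state random walk; one-hot features are used only to keep the example tiny, and any other feature map x(s) would plug in the same way.

    # Semi-gradient TD(0) with linear function approximation (my own minimal sketch,
    # not from the tutorial): estimate v(s) ~ w . x(s) on the five-state random walk.
    import numpy as np

    n_states, gamma, alpha = 5, 1.0, 0.05
    rng = np.random.default_rng(0)

    def features(s):
        # one-hot features to keep the example tiny; any feature map x(s) works the same way
        x = np.zeros(n_states)
        x[s] = 1.0
        return x

    w = np.zeros(n_states)                       # weights of the linear value function
    for episode in range(2000):
        s = n_states // 2                        # start in the middle of the chain
        while True:
            s_next = s + (1 if rng.random() < 0.5 else -1)
            if s_next < 0:                       # terminate on the left with reward 0
                r, done = 0.0, True
            elif s_next >= n_states:             # terminate on the right with reward 1
                r, done = 1.0, True
            else:
                r, done = 0.0, False
            v_s = w @ features(s)
            v_next = 0.0 if done else w @ features(s_next)
            w += alpha * (r + gamma * v_next - v_s) * features(s)   # semi-gradient TD(0) update
            if done:
                break
            s = s_next

    print("estimated state values:", w.round(2))   # true values are 1/6, 2/6, ..., 5/6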