Archive for the ‘research’ Category

On the way back from NIPS 2015, we got the idea of organizing an ICML workshop on data-efficient machine learning. Data-efficient machine learning currently sits somewhat outside the focus of the deeply hyped machine learning community, but there are many applications where you simply cannot collect enough data, e.g., personalized healthcare.

The workshop will take place at ICML this year, and we are quite excited about the quality of the submitted papers, the invited speakers, and the breadth of topics that fall under data-efficient machine learning.

More information can be found here.

On March 4, I was contacted by the Xinhua News Agency to comment on the upcoming Go match between Google DeepMind’s AlphaGo algorithm and top Go player Lee Sedol. I am posting the questions and answers below:

(more…)

Yoshua Bengio and Yann LeCun gave this tutorial as a tandem talk.

The tutorial started off by looking at what we need in Machine Learning and AI in general. Two key points were identified:

  • Distributed representation
  • Compositional models

(more…)

This is a brief summary of the first part of the Deep RL workshop at NIPS 2015. I couldn’t get a seat for the second half…

(more…)

I attended the Bayesian Optimization workshop at NIPS 2015, and the following summarizes what went on in the workshop from my perspective. This post primarily serves my own interest in not losing these notes, but it may be useful for others as well.

The workshop was effectively run by Bobak Shahriari and Roberto Calandra. At the beginning of the workshop, Bobak Shahriari gave a brief introduction to Bayesian Optimization (BO), motivating the entire setting of data-efficient global black-box optimization and the gap that the workshop aimed to address. (more…)
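Since the excerpt stops at the introduction, here is a minimal sketch of the kind of loop that introduction refers to: a GP surrogate models an expensive black-box objective, and an expected-improvement acquisition function picks the next evaluation point. The toy objective, the RBF kernel, and the scikit-learn usage below are my own illustration, not material from the workshop.

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def objective(x):
        # Stand-in for an expensive black-box function we want to minimize.
        return np.sin(3 * x) + 0.1 * x ** 2

    def expected_improvement(x_cand, gp, y_best):
        # EI under the GP posterior; we are minimizing, hence (y_best - mu).
        mu, sigma = gp.predict(x_cand, return_std=True)
        sigma = np.maximum(sigma, 1e-9)
        gamma = (y_best - mu) / sigma
        return sigma * (gamma * norm.cdf(gamma) + norm.pdf(gamma))

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(3, 1))   # a few initial evaluations
    y = objective(X).ravel()

    for _ in range(10):
        # Fit the surrogate, maximize the acquisition on a grid, evaluate the objective there.
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
        gp.fit(X, y)
        x_cand = np.linspace(-3, 3, 500).reshape(-1, 1)
        x_next = x_cand[np.argmax(expected_improvement(x_cand, gp, y.min()))].reshape(1, 1)
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next).ravel())

    print("best x:", X[np.argmin(y), 0], "best f(x):", y.min())

The only expensive step in practice is the call to objective; the surrogate and the acquisition function exist to keep the number of such calls small, which is the data-efficiency argument in a nutshell.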

At the beginning of the talk, Zoubin took an interesting look back at the early 90s, when he attended NIPS for the first time:

  • At that time, neural networks were hip, Hamiltonian Monte Carlo was introduced (Radford Neal), Laplace approximations for neural networks were introduced (David MacKay), and SVMs were coming up.
  • Neural networks had the same problems we have today: local optima, choice of architectures, long training times, …
  • Radford Neal showed that a Bayesian neural network with a single hidden layer converges to a Gaussian process in the limit of infinitely many hidden units (a short sketch of the argument follows the list). He also analyzed infinitely deep neural networks.
  • New ideas that came about at that time: EM, graphical models, variational inference.
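
A compressed sketch of Neal’s argument, in my own notation rather than the talk’s: consider a single-hidden-layer network

    f(x) = b + \sum_{j=1}^{H} v_j \, h(x; u_j), \qquad b \sim \mathcal{N}(0, \sigma_b^2), \quad v_j \sim \mathcal{N}(0, \sigma_v^2 / H),

with i.i.d. priors over the hidden-unit parameters u_j. For any fixed inputs x_1, …, x_n, the outputs f(x_1), …, f(x_n) are sums of H i.i.d. contributions, so by the central limit theorem they become jointly Gaussian as H → ∞. The prior over functions is therefore a Gaussian process with covariance

    k(x, x') = \sigma_b^2 + \sigma_v^2 \, \mathbb{E}_{u}\!\left[ h(x; u) \, h(x'; u) \right].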

Since then, many of these ideas have gained, lost, and regained momentum, but they have definitely shaped machine learning.

(more…)

I just returned from the Gaussian Process Summer School in Sheffield, followed by a Workshop on Bayesian Optimization. The aim of the GPSS was to expose people to Gaussian processes. This was done through introductory lectures on GP regression, followed by some generalizations (e.g., classification, the GP-LVM, sparse GPs, Bayesian optimization) and some talks on current GP research.
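
For reference, the standard GP regression result those introductory lectures build on (textbook material, not the school’s slides): with training inputs X, noisy observations y = f(X) + \varepsilon, \varepsilon \sim \mathcal{N}(0, \sigma_n^2 I), and a zero-mean GP prior with kernel k, the predictive distribution at a test input x_* is Gaussian with

    \mu(x_*) = k(x_*, X) \left[ K(X, X) + \sigma_n^2 I \right]^{-1} y,
    \sigma^2(x_*) = k(x_*, x_*) - k(x_*, X) \left[ K(X, X) + \sigma_n^2 I \right]^{-1} k(X, x_*).

The generalizations mentioned above mostly change what sits around these two equations: a non-Gaussian likelihood for classification, learned latent inputs for the GP-LVM, and inducing points for sparse GPs.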

(more…)