An organizer’s reflection

In 2019, Arthur Gretton and I organized the Machine Learning Summer School (MLSS) 2019. A year on, I’m writing this post to support others organizing similar events by reflecting on the process and some of the challenges. I will not write much about the MLSS itself. Others have done this already (and better).


Every academic who decides to take a sabbatical faces the same decision: what is the most effective way to spend my long-sought-after time? Often a tension emerges: should I spend the time on high-impact, career-progressing work, like a long-planned book, or in an international lab with a different research focus to learn new things, or should I work toward community development and social impact?

I am a lecturer at Imperial College London, and as part of my sabbatical, I decided to spend four months at the African Institute for Mathematical Sciences (AIMS) in Rwanda where I taught a course on Foundations of Machine Learning. I’m posting this article on my last day at AIMS.


On July 20, I was contacted by MIT Technology Review to comment on OpenAI’s efforts to train robots to do tasks in the home using reinforcement learning. I am posting the questions and answers below; the corresponding article is here.


EU Membership in Pints

Posted: 2016-06-07 in Uncategorized

On June 23, the UK will decide whether or not to remain in the EU (“Brexit”). Arguments go back and forth about whether being part of the EU is a good thing. Many of these arguments are based on predictions and more-or-less justified assumptions about what could happen (in a positive or negative way) if Brexit goes ahead. Many numbers are mentioned in the campaigns (e.g., how much the economy would shrink, the pound would fall, or house prices would drop), but these numbers are usually (educated) guesses.

However, a non-speculative but recurring number is 350 million. This is the figure the Leave campaign propagates as the amount of money the UK would save by leaving the EU, simply by no longer paying its contribution to the EU budget (the “membership fee”). Per week, by the way.

350 million, or 350,000,000, is such an unimaginable and unreal number that I want to look at it a bit more closely: how does one arrive at this number, what does it mean for the UK taxpayer, and how does it fit into the general context of the other taxes we pay? I will try to relate this number to units that make more sense to most people than the number of zeros attached to 35.
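To make the arithmetic concrete, here is a quick back-of-the-envelope sketch. The population and pint-price figures are my own rough assumptions (about 65 million people in the UK and about £3.50 for a pub pint in 2016), not official statistics:

```python
# Back-of-the-envelope: what does £350 million per week mean per person?
# Assumptions (rough 2016 figures, my own, not official statistics):
weekly_contribution = 350_000_000   # the Leave campaign's claimed figure, in £
uk_population = 65_000_000          # roughly the 2016 UK population
pint_price = 3.50                   # rough average price of a pub pint, in £

per_person_per_week = weekly_contribution / uk_population
pints_per_person_per_week = per_person_per_week / pint_price

print(f"£{per_person_per_week:.2f} per person per week")
print(f"≈ {pints_per_person_per_week:.1f} pints per person per week")
```

In other words, under these rough assumptions the headline figure works out to the price of a pint or two per person per week.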


On the way back from NIPS 2015, we had the idea of organizing an ICML workshop on data-efficient machine learning. Data-efficient machine learning currently sits somewhat outside the focus of the deep-learning-hyped machine learning community, but there are many applications where you simply cannot collect enough data, e.g., personalized healthcare.

The workshop will be happening at ICML this year, and we are quite excited about the quality of the papers submitted, the invited speakers, and the breadth of topics that fall into the category of data-efficient machine learning.

More information can be found here.

On March 4, I was contacted by the Xinhua News Agency to comment on the upcoming Go match between Google DeepMind’s AlphaGo algorithm and the top Go player Lee Sedol. I am posting the questions and answers below:


Yoshua Bengio and Yann LeCun gave this tutorial as a tandem talk.

The tutorial started off by looking at what we need in Machine Learning and AI in general. Two key points were identified:

  • Distributed representation
  • Compositional models
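As a toy illustration of the first point (my own sketch, not code from the tutorial): in a one-hot code every concept gets its own dimension, so no two concepts share any structure, whereas a distributed code lets concepts share feature dimensions and thus exposes similarity:

```python
# One-hot: one dimension per concept; no shared structure.
one_hot = {
    "cat": [1, 0, 0, 0],
    "dog": [0, 1, 0, 0],
    "car": [0, 0, 1, 0],
    "bus": [0, 0, 0, 1],
}

# Distributed: concepts share feature dimensions
# (features here, purely illustrative: is_animal, is_vehicle, is_small).
distributed = {
    "cat": [1, 0, 1],
    "dog": [1, 0, 0],
    "car": [0, 1, 1],
    "bus": [0, 1, 0],
}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# One-hot: every pair of distinct concepts is equally (un)related.
print(dot(one_hot["cat"], one_hot["dog"]))          # 0
# Distributed: cat and dog share the is_animal feature.
print(dot(distributed["cat"], distributed["dog"]))  # 1
```

The distributed code also needs far fewer dimensions: k binary features can in principle distinguish 2^k concepts, versus one dimension per concept for the one-hot code.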


This is a brief summary of the first part of the Deep RL workshop at NIPS 2015. I couldn’t get a seat for the second half…


I attended the Bayesian Optimization workshop at NIPS 2015, and the following summarizes what went on in the workshop from my perspective. This post primarily serves my own interest in not losing these notes, but it may be useful for others as well.

The workshop was effectively run by Bobak Shahriari and Roberto Calandra. At the beginning of the workshop, Bobak Shahriari gave a brief introduction to Bayesian Optimization (BO), motivating the setting of data-efficient global black-box optimization and the gap the workshop aimed to address.
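For readers unfamiliar with the setting, here is a minimal, illustrative sketch of a BO loop: a Gaussian-process surrogate over a discrete candidate grid with an upper-confidence-bound acquisition. The squared-exponential kernel, lengthscale, and toy objective are arbitrary choices of mine, not anything presented at the workshop:

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def kern(a, b, ell=0.2):
    # Squared-exponential kernel; the lengthscale is an arbitrary choice.
    return math.exp(-(a - b) ** 2 / (2 * ell ** 2))

def f(x):
    # Toy "expensive" black-box objective (maximum at x = 0.3).
    return -(x - 0.3) ** 2

grid = [i / 50 for i in range(51)]    # candidate inputs on [0, 1]
X, y = [0.0, 1.0], [f(0.0), f(1.0)]   # two initial observations

for _ in range(8):
    n = len(X)
    K = [[kern(X[i], X[j]) + (1e-6 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, y)
    best_x, best_ucb = None, -float("inf")
    for xs in grid:
        if xs in X:
            continue                  # don't re-query observed points
        ks = [kern(xs, xi) for xi in X]
        mu = sum(ks[i] * alpha[i] for i in range(n))   # GP posterior mean
        v = solve(K, ks)
        var = max(kern(xs, xs) - sum(ks[i] * v[i] for i in range(n)), 0.0)
        ucb = mu + 2.0 * math.sqrt(var)  # upper-confidence-bound acquisition
        if ucb > best_ucb:
            best_ucb, best_x = ucb, xs
    X.append(best_x)
    y.append(f(best_x))

x_best = X[y.index(max(y))]           # best input found so far
```

The loop embodies the data-efficiency argument: each evaluation of f is chosen where the surrogate's upper confidence bound is highest, trading off exploration (high posterior variance) against exploitation (high posterior mean), so few function evaluations are needed.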

At the beginning of the talk, Zoubin took an interesting look back at the early 90s, when he attended NIPS for the first time:

  • At that time, neural networks were hip, Hamiltonian Monte Carlo was introduced (Radford Neal), Laplace approximations for neural networks were introduced (David MacKay), and SVMs were coming up.
  • Neural networks had the same problems we have today: local optima, choice of architectures, long training times, …
  • Radford Neal showed that a Bayesian neural network with a single hidden layer converges to a Gaussian process in the limit of infinitely many hidden units. He also analyzed infinitely deep neural networks.
  • New ideas that came about at that time: EM, graphical models, variational inference.

Since then, many of these ideas have gained/lost/re-gained momentum, but they have definitely shaped machine learning.
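Neal's infinite-width result mentioned above can be checked numerically. The following sketch (my own, with arbitrary unit-variance Gaussian priors on all weights) draws many random single-hidden-layer tanh networks with the 1/sqrt(width) output scaling and looks at the distribution of the output at a fixed input; by the central limit theorem it approaches a zero-mean Gaussian, the marginal of the limiting GP:

```python
import math
import random

random.seed(0)

def sample_output(width, x=1.0):
    # Draw one random single-hidden-layer tanh network and evaluate it at x.
    # Output weights are scaled by 1/sqrt(width) -- the scaling under which
    # the prior over functions converges to a Gaussian process (Neal).
    total = 0.0
    for _ in range(width):
        w = random.gauss(0.0, 1.0)   # input-to-hidden weight
        a = random.gauss(0.0, 1.0)   # hidden bias
        b = random.gauss(0.0, 1.0)   # hidden-to-output weight
        total += b * math.tanh(w * x + a)
    return total / math.sqrt(width)

# Distribution of f(x) over many independently drawn random networks:
# for large width it is approximately zero-mean Gaussian with a variance
# determined by the activation and the weight priors.
samples = [sample_output(width=200) for _ in range(2000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

With width 200 the empirical mean is close to zero and the variance is stable, consistent with a GP marginal at a single input; extending this to pairs of inputs would exhibit the limiting kernel.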
