At the beginning of the talk, Zoubin gave an interesting look back to the early 90s, when he attended NIPS for the first time:

- At that time, neural networks were hip, Hamiltonian Monte Carlo had just been introduced (Radford Neal), Laplace approximations for neural networks had just been introduced (David MacKay), and SVMs were coming up.
- Neural networks had the same problems we have today: local optima, choice of architectures, long training times, …
- Radford Neal showed that a Bayesian neural network with a single hidden layer converges to a Gaussian process in the limit of infinitely many hidden units. He also analyzed infinitely deep neural networks.
- New ideas that came about at that time: EM, graphical models, variational inference.
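Neal's infinite-width result can be illustrated numerically. The sketch below (an illustrative simulation, not Neal's original code; the scaling and priors are my assumptions) draws many random single-hidden-layer networks with i.i.d. Gaussian weights and checks that, as the number of hidden units H grows, the output at a fixed input becomes increasingly Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_net_outputs(x, hidden_units, n_samples=2000):
    """Sample f(x) = sum_j v_j * tanh(w_j * x + b_j) / sqrt(H)
    over n_samples random networks of width H (all weights i.i.d. N(0, 1))."""
    H = hidden_units
    w = rng.normal(size=(n_samples, H))  # input-to-hidden weights
    b = rng.normal(size=(n_samples, H))  # hidden biases
    v = rng.normal(size=(n_samples, H))  # hidden-to-output weights
    # The 1/sqrt(H) scaling keeps the output variance finite as H grows.
    return (v * np.tanh(w * x + b)).sum(axis=1) / np.sqrt(H)

for H in [1, 10, 1000]:
    f = random_net_outputs(x=0.5, hidden_units=H)
    # Excess kurtosis tending to 0 is one signature of a Gaussian limit.
    kurt = np.mean((f - f.mean()) ** 4) / f.var() ** 2 - 3.0
    print(f"H={H:5d}  var={f.var():.3f}  excess kurtosis={kurt:+.3f}")
```

For H = 1 the output is a product of a Gaussian and a bounded nonlinearity and is visibly non-Gaussian; by H = 1000 the excess kurtosis is close to zero, consistent with convergence to a Gaussian process marginal.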

Since then, many of these ideas have gained/lost/re-gained momentum, but they definitely shaped machine learning.

