
ML Sessions 2018


Interpretable ML #4 - 12. Dec. 2018

A Regularized Framework for Sparse and Structured Neural Attention

https://papers.nips.cc/paper/6926-a-regularized-framework-for-sparse-and-structured-neural-attention.pdf

Interpretable ML #3 - 29. Nov. 2018

Interpretable Explanations of Black Boxes by Meaningful Perturbation

https://www.robots.ox.ac.uk/~vedaldi//assets/pubs/fong17interpretable.pdf

Interpretable ML #2 - 15. Nov. 2018

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

https://arxiv.org/pdf/1602.04938v1.pdf

Interpretable ML #1 - 1. Nov. 2018

Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning

https://arxiv.org/pdf/1806.00069.pdf

RNNs #6 - Applications - 18. Oct. 2018


RNNs #5 - 04. Oct. 2018

Sequential Neural Models with Stochastic Layers 

http://papers.nips.cc/paper/6039-sequential-neural-models-with-stochastic-layers.pdf

RNNs #4 - 20. Sep. 2018

An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling

https://arxiv.org/pdf/1803.01271.pdf

RNNs #3 - 06. Sep. 2018

Neural Turing Machines

https://arxiv.org/pdf/1410.5401.pdf

RNNs #2 - 14. June. 2018

Sequence Modeling: Recurrent and Recursive Nets (pages 388-415)

http://www.deeplearningbook.org/contents/rnn.html

RNNs #1 - 31. May. 2018

Sequence Modeling: Recurrent and Recursive Nets (pages 367-388)

http://www.deeplearningbook.org/contents/rnn.html

--------------------------------------------------------------------------------

Something else - 17. May. 2018

The limits and potentials of deep learning for robotics

http://journals.sagepub.com/doi/full/10.1177/0278364918770733

Optimization for ML #5 - 03. May. 2018

The Supervised Learning No-Free-Lunch Theorems

Optimization for ML #4 - 19. Apr. 2018

Maximum Likelihood from Incomplete Data via the EM Algorithm

http://web.mit.edu/6.435/www/Dempster77.pdf

Optimization for ML #3 - 05. Apr. 2018

Sharp Minima Can Generalize For Deep Nets

https://arxiv.org/pdf/1703.04933.pdf

Optimization for ML #2 - 08. Mar. 2018

Support Vector Machines

http://cs229.stanford.edu/notes/cs229-notes3.pdf

Optimization for ML #1 - 22. Feb. 2018

Large-Scale Machine Learning with Stochastic Gradient Descent

https://link.springer.com/content/pdf/10.1007%2F978-3-7908-2604-3_16.pdf

--------------------------------------------------------------------------------

Probabilistic Deep Learning #6 - 08. Feb. 2018

Application Session

  • Bayesian Recurrent Neural Networks
  • Learning & policy search in stochastic dynamical systems with BNNs
  • Deep Probabilistic Programming
  • Neural Discrete Representation Learning
  • Deep Bayesian Active Learning with Image Data

Probabilistic Deep Learning #5 - 25. Jan. 2018

Weight Uncertainty in Neural Networks

http://proceedings.mlr.press/v37/blundell15.pdf

Probabilistic Deep Learning #4 - 11. Jan. 2018

Priors for infinite networks

https://link.springer.com/content/pdf/10.1007%2F978-1-4612-0745-0_2.pdf
