
# Applied Deep Learning Resources

Contents:

- CNN
- DQN
- RNN
- Other promising or useful architectures
- Framework benchmarks
- Other resources
- Credits

A collection of research articles, blog posts, slides and code snippets about deep learning in applied settings. Including trained models and simple methods that can be used out of the box. Mainly focusing on Convolutional Neural Networks (CNN) but Recurrent Neural Networks (RNN), deep Q-Networks (DQN) and other interesting architectures will also be listed.


AlexNet properties: 8 weight layers (5 convolutional and 3 fully connected), 60 million parameters, Rectified Linear Units (ReLUs), Local Response Normalization, Dropout
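The 60-million figure can be recovered by summing per-layer weight counts. A quick back-of-the-envelope sketch (layer shapes as in the AlexNet paper; the grouped conv2/4/5 layers each see only half of the preceding layer's channels):

```python
def conv_params(n_filters, k, in_channels):
    """Weights plus biases for a k x k convolution layer."""
    return n_filters * (k * k * in_channels + 1)

def fc_params(n_out, n_in):
    """Weights plus biases for a fully connected layer."""
    return n_out * (n_in + 1)

total = sum([
    conv_params(96, 11, 3),        # conv1
    conv_params(256, 5, 48),       # conv2 (2 groups: 48 of 96 channels each)
    conv_params(384, 3, 256),      # conv3
    conv_params(384, 3, 192),      # conv4 (grouped)
    conv_params(256, 3, 192),      # conv5 (grouped)
    fc_params(4096, 6 * 6 * 256),  # fc6
    fc_params(4096, 4096),         # fc7
    fc_params(1000, 4096),         # fc8 (1000 ImageNet classes)
])
print(total)  # 60965224 -- roughly 61 million
```

The fully connected layers dominate: fc6 alone contributes about 38 of the ~61 million parameters.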


VGG properties: 19 weight layers, 144 million parameters, 3x3 convolution filters, L2 regularisation, Dropout, no Local Response Normalization
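The all-3x3 design is cheaper than it looks: two stacked 3x3 convolutions cover the same 5x5 receptive field as a single 5x5 layer, but with fewer weights (and an extra nonlinearity in between). A small sanity check, assuming C input and output channels and ignoring biases:

```python
C = 256  # channels in and out (illustrative value, not from the paper)

single_5x5 = 5 * 5 * C * C          # one 5x5 conv layer
stacked_3x3 = 2 * (3 * 3 * C * C)   # two 3x3 conv layers, same receptive field

print(single_5x5, stacked_3x3)  # 1638400 1179648
assert stacked_3x3 < single_5x5
```

The saving is the ratio 18/25 per pair of layers, and three stacked 3x3 layers beat a single 7x7 by an even wider margin (27 vs. 49 weights per channel pair).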

GoogLeNet properties: 22 layers, 7 million parameters, Inception modules, 1x1 convolution layers, ReLUs, Dropout, mid-level auxiliary outputs

Inception modules:

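The 1x1 layers inside an Inception module act as cheap channel-reduction ("bottleneck") steps before the expensive 3x3 and 5x5 convolutions, which is a large part of how the whole network stays at about 7 million parameters. Illustrative arithmetic (the channel counts here are made up, not GoogLeNet's actual ones):

```python
in_ch, reduce_ch, out_ch = 256, 64, 128  # hypothetical channel counts

# 5x5 convolution applied directly to all input channels:
direct = 5 * 5 * in_ch * out_ch

# 1x1 reduction to fewer channels first, then the 5x5 convolution:
reduced = 1 * 1 * in_ch * reduce_ch + 5 * 5 * reduce_ch * out_ch

print(direct, reduced)  # 819200 221184
```

With these numbers the bottleneck path needs under a third of the weights of the direct path, and the same trick applies to the 3x3 branch.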

The main building block of the network is the residual unit: a shortcut connection adds a block's input back to its output, so the stacked layers only have to learn a correction to the identity.

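A residual unit computes y = x + F(x). A minimal pure-Python sketch (the transforms here are stand-ins for the real convolution stack):

```python
def relu(v):
    return [max(0.0, x) for x in v]

def residual_block(x, transform):
    """y = x + F(x): the shortcut adds the input back to the transform's output."""
    return [xi + fi for xi, fi in zip(x, transform(x))]

# If F outputs zeros, the block is exactly the identity mapping --
# this is what makes very deep stacks of such blocks easy to optimize.
identity_out = residual_block([1.0, -2.0, 3.0], lambda v: [0.0] * len(v))
print(identity_out)  # [1.0, -2.0, 3.0]

# With a simple nonlinear F, the block adds a learned correction to x:
out = residual_block([1.0, -2.0, 3.0], lambda v: relu([0.5 * x for x in v]))
print(out)  # [1.5, -2.0, 4.5]
```

Because each block defaults to the identity, gradients pass through the shortcut unchanged, which is what lets these networks grow to hundreds of layers.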

Features are also very good and transferable with (faster) R-CNNs (see below).


Deep dreaming from noise: images can be generated from pure random noise by iteratively amplifying the activations the network responds to.

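The dreaming loop is gradient ascent on the input itself rather than on the weights. A toy 1-D stand-in for the idea (the "activation" is a hand-picked function with a known gradient, not a real network, where backprop would supply the gradient):

```python
import random

def activation(x):
    # Toy stand-in for a neuron's response; it peaks at x = 3.
    return -(x - 3.0) ** 2

def activation_grad(x):
    # d/dx of the toy activation (a real setup obtains this via backprop).
    return -2.0 * (x - 3.0)

x = random.uniform(-1.0, 1.0)  # the "noise" starting point
for _ in range(200):
    x += 0.1 * activation_grad(x)  # climb the activation's gradient

print(round(x, 3))  # converges to 3.0, the input this "neuron" likes most
```

With images, the same loop runs over every pixel at once, usually with smoothness regularization so the result stays visually coherent.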

Top selfies according to the ConvNet.


Their summary: From our experiments, we observe that Theano and Torch are the most easily extensible frameworks. We observe that Torch is best suited for any deep architecture on CPU, followed by Theano. It also achieves the best performance on the GPU for large convolutional and fully connected networks, followed closely by Neon. Theano achieves the best performance on GPU for training and deployment of LSTM networks. Finally Caffe is the easiest for evaluating the performance of standard deep architectures.

<http://deeplearning.net/>

