七月在线 DL Translation Group: 100+ top DL papers, from basics to advanced


The 七月在线 DL Translation Group is made up of graduate students and PhDs who love translation, love DL, and have passed English CET-6 or above; some are 七月在线 students, others are not.
All papers translated by the group are for study and exchange only. Our motto: gather top content to help more people around the world. You are welcome to recommend good papers in this thread at any time, so we can learn together.

Round 1: Five GAN papers
1.1 Ian Goodfellow's original 2014 paper proposing generative adversarial networks
First draft: 杨
Revision: 彭
Review: 张

1.2 Ian Goodfellow's early-2017 survey based on his NIPS 2016 tutorial: NIPS 2016 Tutorial: Generative Adversarial Networks
First draft: 范
Revision: lanpay
Review: 李

1.3 The original paper on Conditional Generative Adversarial Nets
First draft: 张
Revision: 路
Review: 管

1.4 The original InfoGAN paper
First draft: 晓
Revision: 胥
Review: smile

1.5 The original paper on Deep Convolutional Generative Adversarial Networks (DCGAN)
First draft: 寒
Revision: 陈

Round 2
2.1
icml07-selftaughtlearning (Self-taught Learning: Transfer Learning from Unlabeled Data, ICML 2007)
First draft: 兴园
Revision: 吴
Review: 彭

2.2
Multi-Scale CNN for Time Series
First draft: sunny
Revision: 永彬
Review: Seed

2.3
Sequence to Sequence Learning with Neural Networks
First draft: GC
Revision: 岳
Review: 任

2.4
1409.2944v2
First draft: 张
Revision: 路
Review: 尘

2.5
First draft: Catherine
sigir12-p661-chen
Revision: 刘
Review: 红

Round 3

3.2
[v1] Going Deeper with Convolutions, 6.67% test error, http://arxiv.org/abs/1409.4842
First draft: darren
Revision: 王
Review: 彭博

3.3
[v2] Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 4.8% test error, http://arxiv.org/abs/1502.03167
First draft: kenny
Revision: 管
Review: 任

3.4
[v3] Rethinking the Inception Architecture for Computer Vision, 3.5% test error, http://arxiv.org/abs/1512.00567
First draft: 杨
Revision: 老宅
Review: SnailTyan

3.5
[v4] Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning, 3.08% test error, http://arxiv.org/abs/1602.07261
First draft: 张
Revision: zhangdotcn
Review: 彭博

3.6
[1] Scene Text Detection via Holistic, Multi-Channel Prediction, https://arxiv.org/abs/1606.09002
First draft: 路
Revision: 寒
Review:

Round 4

4.1
Batch Renormalization: Towards Reducing Minibatch Dependence in Batch-Normalized Models
https://arxiv.org/abs/1702.03275
First draft: smile
Revision: 管
Review: seed

4.2
RAISR: Rapid and Accurate Image Super Resolution
https://arxiv.org/abs/1606.01299
First draft: 陈
Revision: 寒
Review: 吴

4.3
LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. "Deep learning." Nature 521.7553 (2015): 436-444. pdf (Three Giants' Survey)
First draft: 白
Revision: 豆
Review: timcompp

4.4 Deep belief networks (DBN)
Hinton, Geoffrey E., Simon Osindero, and Yee-Whye Teh. "A fast learning algorithm for deep belief nets." Neural Computation 18.7 (2006): 1527-1554. pdf
First draft: 詹
Revision: 张
Review: 杰

4.5 Demonstrating the promise of deep learning
Hinton, Geoffrey E., and Ruslan R. Salakhutdinov. "Reducing the dimensionality of data with neural networks." Science 313.5786 (2006): 504-507. pdf
First draft: 张
Revision: seed
Review: 吴

4.6 AlexNet
Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. "ImageNet classification with deep convolutional neural networks." Advances in Neural Information Processing Systems. 2012. pdf (AlexNet, the deep learning breakthrough)
First draft: sunny
Revision: James
Review: 寒

4.7 VGGNet
Simonyan, Karen, and Andrew Zisserman. "Very deep convolutional networks for large-scale image recognition." arXiv preprint arXiv:1409.1556 (2014). pdf (VGGNet: neural networks become very deep)
First draft: 郑
Revision: 彭
Review: 王洋

4.8 GoogLeNet
Szegedy, Christian, et al. "Going deeper with convolutions." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015. pdf (GoogLeNet)
First draft: 程
Revision: 翟
Review: Tyan

4.9 ResNet
He, Kaiming, et al. "Deep residual learning for image recognition." arXiv preprint arXiv:1512.03385 (2015). pdf (ResNet: very, very deep networks; CVPR best paper)
First draft: 彭
Revision: 菜
Review: 范

Round 5 (started March 1, 2017)
5.1 How backpropagation works: http://www.offconvex.org/2016/12/20/backprop/
First draft: 范
Revision: 寒
Review: 杨

5.2 Fei-Fei Li's latest paper: estimating the demographic makeup of the US with deep learning and Google Street View
https://arxiv.org/abs/1702.06683
First draft: 张
Revision: Faye
Review: 庞

5.3 Speech recognition
Hinton, Geoffrey, et al. "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups." IEEE Signal Processing Magazine 29.6 (2012): 82-97. pdf (breakthrough in speech recognition)
First draft:
Revision:
Review:

5.4 RNN
Graves, Alex, Abdel-rahman Mohamed, and Geoffrey Hinton. "Speech recognition with deep recurrent neural networks." 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2013. [https://arxiv.org/pdf/1303.5778.pdf] (RNN)
First draft: 杨
Revision: 谢
Review:

5.5
Graves, Alex, and Navdeep Jaitly. "Towards End-To-End Speech Recognition with Recurrent Neural Networks." ICML. Vol. 14. 2014. pdf
First draft:
Revision:
Review:

5.6 Google's speech recognition system
Sak, Haşim, et al. "Fast and accurate recurrent neural network acoustic models for speech recognition." arXiv preprint arXiv:1507.06947 (2015). pdf (Google speech recognition system)
First draft: 王
Revision: Vlan
Review: Jason

Round 6
6.1 Google's evolutionary algorithm automatically discovers neural network architectures. Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Quoc Le, Alex Kurakin. "Large-Scale Evolution of Image Classifiers." 2017. [https://arxiv.org/pdf/1703.01041.pdf]
First draft: 王浩帆
Revision: hw
Review: darren郑

6.2
Last year Sebastian Ruder wrote a blog post surveying the stochastic-gradient-descent optimization algorithms commonly used in deep learning: http://sebastianruder.com/opti ... cent/
He also published an arXiv version of the survey: https://arxiv.org/abs/1609.04747
First draft: 彭博
Revision: 管枫
Review: zhangdotcn

6.3 Hinton, Geoffrey E., et al. "Improving neural networks by preventing co-adaptation of feature detectors." arXiv preprint arXiv:1207.0580 (2012). [pdf] (Dropout)
First draft: 吴玉霞
Revision: 熊先明
Review: 杨圣

6.4 Srivastava, Nitish, et al. "Dropout: a simple way to prevent neural networks from overfitting." Journal of Machine Learning Research 15.1 (2014): 1929-1958. [pdf]
First draft: 王丽媛
Revision: 豆浆
Review: 菜头

6.5
First draft:
Revision:
Review:

6.6 Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. "Layer normalization." arXiv preprint arXiv:1607.06450 (2016). [pdf] (update of batch normalization)
First draft: 杨圣
Revision: Achilles_cn
Review: 范诗剑

6.7 Courbariaux, Matthieu, et al. "Binarized Neural Networks: Training Neural Networks with Weights and Activations Constrained to +1 or −1." [pdf] (new model, fast)
First draft:
Revision:
Review:

6.8 Jaderberg, Max, et al. "Decoupled neural interfaces using synthetic gradients." arXiv preprint arXiv:1608.05343 (2016). [pdf] (innovative training method, amazing work)
First draft:
Revision:
Review:

6.9
First draft:
Revision:
Review:

6.10 Wei, Tao, et al. "Network Morphism." arXiv preprint arXiv:1603.01670 (2016). [pdf] (modify a previously trained network to reduce training epochs)
First draft:
Revision:
Review:

July - obsessing over details and experience; no compromises, no settling.



DL roadmap
https://github.com/songrotek/D ... admap

The most-cited deep learning papers of recent years
https://github.com/terryum/awe ... apers

In short, starting from Round 4:
1 Group nicknames change to: Name_Research/interest area_Company/School
2 Material is selected mainly from DL, then ML, CV, and NLP
3 Each paper generally gets three people, responsible for first draft, revision, and review respectively
4 Before formally starting, the three people on a paper agree up front whether this round's piece gets a full sentence-by-sentence translation or a summary translation
5 The whole group completes one translation round per month, ten pieces per round
6 We do not translate only papers: papers, news, and tutorials are all translation material, in the spirit of gathering top content to help more people around the world. Whatever the material, pieces are still claimed by choice. Besides a free course for every piece translated, anyone who completes ten translations in total receives a big gift
The gift pool currently includes a VIP annual membership, a Cherry red-switch mechanical keyboard, Beats headphones, and more, and it keeps growing
7 Sources: recommendations from anyone in the group + the DL roadmap + the most-cited DL papers
8 Requirements for the translation group: English CET-6 or above, graduate student or PhD, proficient in DL, a love of translation
9 At the end of every five rounds, anyone who has not claimed a single piece in any of those five rounds may need to step out for now; there will be plenty of chances to rejoin later
