## Abstract
We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions.
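Concretely, if $\mathcal{H}(x)$ denotes the desired underlying mapping, the stacked layers are made to fit the residual $\mathcal{F}(x) := \mathcal{H}(x) - x$, so each block computes (Eq. 1 in the paper):

$$
y = \mathcal{F}(x, \{W_i\}) + x
$$

where $x$ and $y$ are the block's input and output, $\{W_i\}$ are the block's weights, and the addition is performed by an identity shortcut connection that adds neither parameters nor computational cost.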
## Key Contributions
- Residual Blocks: Identity skip connections that let gradients flow directly to earlier layers, countering the degradation problem in very deep networks (see the sketch after this list)
- ResNet-50/101/152: Architectures that won 1st place in the ILSVRC 2015 classification task
- Enabling Deep Learning: Made networks of 100+ layers trainable in practice
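A minimal sketch of such a block in PyTorch, assuming equal input and output channels so the shortcut can be a pure identity; the class name `BasicBlock` and the exact layer layout here are illustrative, not the paper's reference code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BasicBlock(nn.Module):
    """Minimal residual block: two 3x3 convs plus an identity shortcut."""

    def __init__(self, channels: int):
        super().__init__()
        # bias=False because each conv is immediately followed by batch norm.
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # F(x): the residual function fit by the stacked layers.
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # y = F(x) + x: the shortcut adds the input back, so gradients
        # reach earlier layers directly through the identity path.
        return F.relu(out + x)


block = BasicBlock(64)
x = torch.randn(1, 64, 56, 56)   # (batch, channels, height, width)
y = block(x)                     # same shape as x: torch.Size([1, 64, 56, 56])
```

For the deeper ResNet-50/101/152 variants, the paper replaces this two-conv body with a 1x1-3x3-1x1 bottleneck, and when the shortcut must change dimensions it uses a projection (a 1x1 convolution) instead of the identity.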
## Results

ImageNet classification error rates (lower is better):
| Architecture | Top-1 Error (%) | Top-5 Error (%) |
|---|---|---|
| VGG-16 | 28.5 | 9.9 |
| ResNet-152 | 21.7 | 5.9 |