Rethinking the Inception Architecture for Computer Vision (Inception V3) (2015) Review
"Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although incr…" (arxiv.org)
0. Key Summary: Increasing model size and computational cost does improve quality, but for use in a wide range of settings, computational efficiency and…
2024. 1. 13.

EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks (EfficientNet) (2019) Review
"Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing n…" (arxiv.org)
0. Key Summary: Compound Scaling M…, the rule that jointly determines a network's depth, width, and resolution
2024. 1. 11.

Densely Connected Convolutional Networks (DenseNet) (2016) Review
"Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observa…" (arxiv.org)
0. Key Summary: Every layer preceding a given layer is used as its input; that layer's output is in turn used as input by all subsequent layers; the above…
2024. 1. 9.

Deep Residual Learning for Image Recognition (ResNet) (2015) Review
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with…" (arxiv.org)
0. Key Summary: Proposes a residual learning framework that allows networks to go deeper than before; with 152 layers, I…
2024. 1. 7.
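The ResNet summary above mentions a residual learning framework. The core idea is that a block outputs F(x) + x rather than F(x) alone, so the stacked layers only have to learn a residual. A minimal sketch of that shortcut connection, using plain Python lists as stand-in "tensors" and a toy function in place of the paper's stacked conv layers (both are illustrative assumptions, not the actual architecture):

```python
def residual_block(x, f):
    """Return f(x) + x elementwise: the layers learn the residual F(x) = H(x) - x,
    while the identity shortcut carries x through unchanged."""
    fx = f(x)
    return [a + b for a, b in zip(fx, x)]

# Toy "layer": a simple scaling, standing in for a small stack of conv layers.
double = lambda x: [2.0 * v for v in x]

print(residual_block([1.0, 2.0, 3.0], double))  # → [3.0, 6.0, 9.0]
```

Because the shortcut is the identity, a block can fall back to a near-identity mapping by driving F(x) toward zero, which is what makes very deep stacks (such as the 152-layer model the summary mentions) trainable.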
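The EfficientNet entry refers to compound scaling, the rule that ties depth, width, and resolution to a single coefficient φ. A short sketch of that rule, using the coefficients reported in the paper (α = 1.2, β = 1.1, γ = 1.15, found by grid search on the base network):

```python
def compound_scale(phi, alpha=1.2, beta=1.1, gamma=1.15):
    """Return (depth, width, resolution) multipliers alpha^phi, beta^phi, gamma^phi."""
    return alpha ** phi, beta ** phi, gamma ** phi

d, w, r = compound_scale(phi=1)

# The paper constrains alpha * beta^2 * gamma^2 ≈ 2, so total FLOPs roughly
# double with each unit increase of phi.
print(round(1.2 * 1.1**2 * 1.15**2, 3))  # → 1.92 (≈ 2)
```

Scaling all three dimensions together by one shared φ, instead of tuning depth, width, or resolution independently, is the balancing act the review's summary line alludes to.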