NAS-BENCH-201: EXTENDING THE SCOPE OF REPRODUCIBLE NEURAL ARCHITECTURE SEARCH

Posted on May 31, 2022

Introduction

The paper introduces NAS-Bench-201, an extension of NAS-Bench-101.

Motivation

NAS-Bench-101 and NAS-HPO-Bench were proposed earlier. However,

  1. Some NAS algorithms cannot be applied directly to NAS-Bench-101.

    NAS-Bench-101 constrains the number of nodes and edges in each cell, so NAS algorithms based on weight sharing cannot be applied to it directly.

  2. NAS-HPO-Bench contains only 144 candidate architectures, which may be insufficient for evaluating NAS algorithms.

Contributions

In summary, the paper makes the following contributions:

  1. The search space: cell-based; each cell is a DAG with 4 nodes, and each of its 6 edges selects one of 5 candidate operations, which results in 5^6 = 15,625 candidate architectures in total (see the counting sketch after this list).
  2. Each architecture is trained on three datasets (CIFAR-10, CIFAR-100, ImageNet-16-120), and the paper records its loss, accuracy, number of parameters, and FLOPs.
  3. The paper benchmarks 10 NAS algorithms on this search space.
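To make the size of the search space concrete, here is a minimal counting sketch (not the benchmark's own code): a cell is a DAG over 4 nodes, every one of its 6 edges carries exactly one of the 5 candidate operations, so there are 5^6 = 15,625 architectures. The operation names follow the ones NAS-Bench-201 uses in its architecture strings, as far as I recall them.

```python
from itertools import product

# The five candidate operations of NAS-Bench-201 ("zeroize" is encoded as "none").
OPS = ["none", "skip_connect", "nor_conv_1x1", "nor_conv_3x3", "avg_pool_3x3"]

# A cell is a DAG over 4 nodes; every edge (i -> j) with i < j carries exactly
# one operation, which gives 3 + 2 + 1 = 6 edges.
NUM_NODES = 4
EDGES = [(i, j) for j in range(NUM_NODES) for i in range(j)]
assert len(EDGES) == 6

# Every assignment of an operation to each edge is a distinct architecture.
num_archs = sum(1 for _ in product(OPS, repeat=len(EDGES)))
assert num_archs == len(OPS) ** len(EDGES) == 15_625
print(f"{num_archs} candidate architectures")  # 5 ** 6 = 15625
```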

NAS-Bench-201

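The benchmark also ships with a query API (the `nas_201_api` Python package from the official repository), so NAS algorithms can look up the pre-computed training statistics instead of retraining each architecture. The sketch below is written from memory of that package: the checkpoint filename, the `hp` argument, and the exact result keys are assumptions to verify against the official repository.

```python
from nas_201_api import NASBench201API as API

# The benchmark file is distributed separately; this filename is an assumption.
api = API("NAS-Bench-201-v1_1-096897.pth")
print(len(api))  # 15625 architectures

# Architectures are addressed either by index or by their string encoding.
arch_index = api.query_index_by_arch(
    "|nor_conv_3x3~0|+|nor_conv_3x3~0|avg_pool_3x3~1|"
    "+|skip_connect~0|nor_conv_3x3~1|skip_connect~2|"
)

# Look up the recorded results on one of the three datasets
# (CIFAR-10, CIFAR-100, ImageNet-16-120) instead of retraining the network.
info = api.get_more_info(arch_index, "cifar10", hp="200")
print(info["test-accuracy"])  # key name is an assumption
```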

Discussion








