EfficientNet
Scaled neural network.
EfficientNets are a family of neural networks whose baseline model was constructed with Neural Architecture Search (NAS). Neural Architecture Search is a technique for automating the design of artificial neural networks; the type of network it can produce depends on the search space.
Using a neural architecture search that optimizes both accuracy and FLOPs (floating-point operations), we first create the baseline model, EfficientNet-B0. Starting from this baseline, we apply compound scaling to create the family of models EfficientNet-B1 through B7.
Compound scaling was the central contribution of the paper that introduced EfficientNet: it scales the network's depth, width, and input resolution together to increase accuracy.
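The idea behind compound scaling can be sketched in a few lines. The paper fixes per-dimension coefficients alpha (depth), beta (width), and gamma (resolution) by a small grid search under the constraint alpha · beta² · gamma² ≈ 2, so each unit increase of a single compound coefficient phi roughly doubles the FLOPs. The sketch below uses the coefficients reported in the paper; the actual B1–B7 configurations in released implementations are additionally hand-rounded, so treat this as an illustration rather than the exact recipe.

```python
# Compound scaling (Tan & Le, 2019): depth, width and resolution are
# scaled together by a single coefficient phi.
alpha, beta, gamma = 1.2, 1.1, 1.15  # grid-searched values from the paper

def compound_scale(phi):
    """Return (depth, width, resolution) multipliers for a given phi."""
    return alpha ** phi, beta ** phi, gamma ** phi

# phi = 0 is the EfficientNet-B0 baseline; larger phi gives B1, B2, ...
for phi in range(4):
    d, w, r = compound_scale(phi)
    print(f'phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}')

# The constraint alpha * beta**2 * gamma**2 ~= 2 means FLOPs grow
# roughly as 2**phi.
print(round(alpha * beta**2 * gamma**2, 2))
```

Scaling all three dimensions jointly is what distinguishes compound scaling from earlier work that scaled only depth (e.g. ResNets) or only width.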
These networks proved to achieve higher accuracy with fewer parameters than other state-of-the-art neural networks.
Comparison of EfficientNet with other neural networks on ImageNet. Source: https://arxiv.org/abs/1905.11946
We can see that the number of parameters in the EfficientNet family is significantly lower than in the other models.

Parameters

EfficientNet subtype

The Hasty tool lets you choose from these different EfficientNet variants.
Note that there is a trade-off between the number of parameters and accuracy: going from EfficientNet-B1 to B7, accuracy increases but so does the parameter count.

Weight

This is the weight used for model initialization. Here, we use the weights of EfficientNet-B0 pretrained on the ImageNet dataset.

Code Implementation

import os
import pandas as pd
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
%matplotlib inline

import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, Dataset
from torchvision import transforms

import warnings
warnings.filterwarnings("ignore")

data_dir = '../input'
train_dir = data_dir + '/train/train/'
test_dir = data_dir + '/test/test/'

labels = pd.read_csv("../input/train.csv")
labels.head()

# Dataset that reads an image from disk and returns (image, label)
class ImageData(Dataset):
    def __init__(self, df, data_dir, transform):
        super().__init__()
        self.df = df
        self.data_dir = data_dir
        self.transform = transform

    def __len__(self):
        return len(self.df)

    def __getitem__(self, index):
        img_name = self.df.id[index]
        label = self.df.has_cactus[index]

        img_path = os.path.join(self.data_dir, img_name)
        image = mpimg.imread(img_path)
        image = self.transform(image)
        return image, label

data_transf = transforms.Compose([transforms.ToPILImage(), transforms.ToTensor()])
train_data = ImageData(df=labels, data_dir=train_dir, transform=data_transf)
train_loader = DataLoader(dataset=train_data, batch_size=64)

from efficientnet_pytorch import EfficientNet
model = EfficientNet.from_name('efficientnet-b1')

# Unfreeze model weights
for param in model.parameters():
    param.requires_grad = True

# Replace the classification head with a single-logit output
num_ftrs = model._fc.in_features
model._fc = nn.Linear(num_ftrs, 1)

model = model.to('cuda')

optimizer = optim.Adam(model.parameters())
loss_func = nn.BCELoss()

# Train model
loss_log = []

for epoch in range(5):
    model.train()
    for ii, (data, target) in enumerate(train_loader):
        data, target = data.cuda(), target.cuda()
        target = target.float()

        optimizer.zero_grad()
        output = model(data)

        # Squeeze the [batch, 1] logits to [batch] so the shape
        # matches the target before applying BCELoss
        m = nn.Sigmoid()
        loss = loss_func(m(output).squeeze(1), target)
        loss.backward()

        optimizer.step()

        if ii % 1000 == 0:
            loss_log.append(loss.item())

    print('Epoch: {} - Loss: {:.6f}'.format(epoch + 1, loss.item()))