MobileNet V2 Trained on ImageNet Competition Data
Identify the main object in an image

Resource retrieval

Get the pre-trained net:
In[]:=
NetModel["MobileNet V2 Trained on ImageNet Competition Data"]
Out[]=

NetModel parameters

This model consists of a family of individual nets, each identified by a specific parameter combination. Inspect the available parameters:
In[]:=
NetModel["MobileNet V2 Trained on ImageNet Competition Data","ParametersInformation"]
Out[]=
Parameter   Description                                Default   Allowed values
"Depth"     The depth multiplier of the network        1.4       {1.4, 1.3, 1., 0.75, 0.5, 0.35}
"Width"     The width (input image size), in pixels    224       {224, 192, 160, 128, 96}
Pick a non-default net by specifying the parameters:
In[]:=
NetModel[{"MobileNet V2 Trained on ImageNet Competition Data","Depth"1.,"Width"224}]
Out[]=
Pick a non-default uninitialized net:
In[]:=
NetModel[{"MobileNet V2 Trained on ImageNet Competition Data","Depth"0.75,"Width"224},"UninitializedEvaluationNet"]
Out[]=

Basic usage

Classify an image:
In[]:=
pred=NetModel["MobileNet V2 Trained on ImageNet Competition Data"][…] (* … is the input image *)
Out[]=
peacock
The prediction is an Entity object, which can be queried:
In[]:=
pred["Definition"]
Out[]=
male peafowl; having a crested head and very large fanlike tail marked with iridescent eyes or spots
Get a list of available properties of the predicted Entity:
In[]:=
pred["Properties"]
Out[]=

{alternate names, broader concepts, definition, entity classes, equivalent entity, image, name, narrower concepts, similar entities, subset concepts, superset concepts, WordData senses, WordNet ID}

Obtain the probabilities of the ten most likely entities predicted by the net:
In[]:=
NetModel["MobileNet V2 Trained on ImageNet Competition Data"][…,{"TopProbabilities",10}]
Out[]=
{peacock -> 0.788642, brain coral -> 0.00965616, sea urchin -> 0.00699543, spider web -> 0.00461089, mushroom -> 0.00398465, sea anemone -> 0.002988, coral reef -> 0.00295784, shower cap -> 0.00265471, coil -> 0.00244829, pillow -> 0.00237125}
An object outside the list of ImageNet classes will be misidentified:
In[]:=
NetModel["MobileNet V2 Trained on ImageNet Competition Data"][…]
Out[]=
vacuum cleaner
Obtain the list of names of all available classes:
In[]:=
EntityValue[NetExtract[NetModel["MobileNet V2 Trained on ImageNet Competition Data"],"Output"][["Labels"]],"Name"]
Out[]=
{other, tench, Carassius auratus, great white shark, tiger shark, hammerhead, electric ray, stingray, cock, hen, ostrich, brambling, european goldfinch, house finch, Junco, ⋯971⋯, daisy, yellow lady's slipper, corn, acorn, rose hip, conker, coral fungus, agaric, gyromitra, carrion fungus, earthstar, Grifola frondosa, bolete, capitulum, bathroom tissue}

Feature extraction

Remove the last three layers of the trained net so that the net produces a vector representation of an image:
In[]:=
extractor=NetTake[NetModel["MobileNet V2 Trained on ImageNet Competition Data"],"fc7"]
Out[]=
NetChain[…]
  Input port: image
  Output port: array (size: 1001×1×1)
  Number of layers: 21

Get a set of images:
Visualize the features of a set of images:
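A minimal sketch, using built-in test images in place of the original set and the extractor defined above as the feature extractor (the ExampleData names are illustrative choices; any list of images works):
In[]:=
imgs=ExampleData[{"TestImage",#}]&/@{"House","Mandrill","Peppers","Splash","Tree","Girl","Couple","Aerial"};
FeatureSpacePlot[imgs,FeatureExtractor->extractor]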

Visualize convolutional weights

Extract the weights of the first convolutional layer in the trained net:
Visualize the weights as a list of 48 images of size 3×3:
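A possible form of these two steps, assuming the first element of the chain is the initial ConvolutionLayer (the layer position 1 is an assumption; 48 output channels correspond to the default depth 1.4):
In[]:=
weights=NetExtract[NetModel["MobileNet V2 Trained on ImageNet Competition Data"],{1,"Weights"}]
In[]:=
(* each 3×3×3 kernel is rendered as a small RGB image *)
ImageAdjust[Image[#,Interleaving->False]]&/@Normal[weights]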

Transfer learning

Use the pre-trained model to build a classifier for telling apart images of dogs and cats. Create a test set and a training set:
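The original data is not included here; a minimal sketch, assuming a hypothetical local folder pets with dog and cat subfolders of JPEG images:
In[]:=
classes={"dog","cat"};
(* build image -> class rules from the hypothetical layout pets/dog/*.jpg and pets/cat/*.jpg *)
data=Flatten[Table[Import[file]->class,{class,classes},{file,FileNames["*.jpg",FileNameJoin[{"pets",class}]]}]];
{trainSet,testSet}=TakeDrop[RandomSample[data],Round[0.8 Length[data]]];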
Remove the linear layer from the pre-trained net:
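One hedged possibility is to drop the classification head by position; dropping the last four layers here is an assumption about where the linear layer sits in this chain:
In[]:=
tempNet=NetDrop[NetModel["MobileNet V2 Trained on ImageNet Competition Data"],-4]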
Create a new net composed of the pre-trained net followed by a linear layer and a softmax layer:
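A possible construction, using tempNet and classes from the sketches above:
In[]:=
newNet=NetChain[{tempNet,LinearLayer[Length[classes]],SoftmaxLayer[]},"Output"->NetDecoder[{"Class",classes}]]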
Perfect accuracy is obtained on the test set:
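One way to reproduce this result, assuming the sets and net defined above; freezing the pre-trained part with LearningRateMultipliers so that only the new linear layer is trained is an assumption about the original setup:
In[]:=
trainedNet=NetTrain[newNet,trainSet,LearningRateMultipliers->{1->0,_->1}];
In[]:=
ClassifierMeasurements[trainedNet,testSet,"Accuracy"]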

Net information

Inspect the number of parameters of all arrays in the net:
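A likely form of this step uses NetInformation (the property names below are assumed to be the standard ones):
In[]:=
NetInformation[NetModel["MobileNet V2 Trained on ImageNet Competition Data"],"ArraysElementCounts"]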
Obtain the total number of parameters:
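Similarly, for the total element count:
In[]:=
NetInformation[NetModel["MobileNet V2 Trained on ImageNet Competition Data"],"ArraysTotalElementCount"]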
Obtain the layer type counts:
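And for the counts by layer type:
In[]:=
NetInformation[NetModel["MobileNet V2 Trained on ImageNet Competition Data"],"LayerTypeCounts"]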
Display the summary graphic:
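And for the summary graphic:
In[]:=
NetInformation[NetModel["MobileNet V2 Trained on ImageNet Competition Data"],"SummaryGraphic"]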

Export to MXNet

Get the size of the parameter file:
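A possible sketch: export the net in the "MXNet" format, which writes a JSON architecture file together with a .params parameters file next to it (the file names and temporary-directory path below are illustrative), then measure that file:
In[]:=
jsonPath=Export[FileNameJoin[{$TemporaryDirectory,"net.json"}],NetModel["MobileNet V2 Trained on ImageNet Competition Data"],"MXNet"];
paramPath=FileNameJoin[{DirectoryName[jsonPath],"net.params"}];
FileByteCount[paramPath]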