Classifiers are models used to classify images—who would have thought? They take an image as input and output a vector of probabilities. Each probability corresponds to one class, e.g., 'elephant' or 'giraffe', and indicates the model's confidence that the image belongs to that class. This list of classes is often referred to as an ontology.
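To make this concrete, here is a minimal sketch of how a probability vector over an ontology comes about. The class names and raw scores are hypothetical; real classifiers typically produce such scores with their final layer and normalize them with a softmax:

```python
import math

def softmax(logits):
    """Turn raw model scores (logits) into a probability vector that sums to 1."""
    # Subtract the max for numerical stability before exponentiating
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical ontology and raw scores from a classifier's final layer
ontology = ["elephant", "giraffe", "zebra"]
logits = [2.1, 0.3, -1.0]

probs = softmax(logits)
for cls, p in zip(ontology, probs):
    print(f"{cls}: {p:.3f}")
```

Each entry of `probs` is the model's confidence for the corresponding class in the ontology, and the entries sum to 1.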
You can interpret the output in two ways:
You keep only the class with the highest confidence and assign it to the whole image. Then you're doing single-label classification.
Or you keep every class with a confidence above a certain threshold and assign all of those classes to the image. Then you're doing multi-label classification, which many people also refer to as image tagging.
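The two interpretations above can be sketched in a few lines. The ontology, probabilities, and threshold below are made-up values for illustration:

```python
def single_label(ontology, probs):
    # Keep only the class with the highest confidence
    best = max(range(len(probs)), key=probs.__getitem__)
    return ontology[best]

def multi_label(ontology, probs, threshold=0.5):
    # Keep every class whose confidence clears the threshold
    return [c for c, p in zip(ontology, probs) if p >= threshold]

ontology = ["elephant", "giraffe", "savannah"]
probs = [0.91, 0.07, 0.62]  # hypothetical classifier output

print(single_label(ontology, probs))      # -> elephant
print(multi_label(ontology, probs, 0.5))  # -> ['elephant', 'savannah']
```

Note that the threshold is a design choice: a lower value tags images more generously, a higher one only keeps classes the model is very confident about.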
If you use Hasty's annotation tool to create your ground truth data, you can use classifiers within two different assistants. The tagger-assistant works on the image level, exactly as explained in this entry. The classification-assistant works on the annotation level: it corrects the class of a label for you if you accidentally assigned the wrong one 🦔
Classifiers are the simplest model family and were the starting point for research on applying deep learning to computer vision. Both unsupervised and supervised learning approaches have been explored for classification tasks. The most successful ones are Convolutional Neural Networks (CNNs), which belong to the supervised learning approach, i.e., they need to be trained on labeled data.