Semantic segmentation models create pixel-level masks of a given image, assigning each pixel to a class. The resulting masks are typically stored either in a JSON format or as PNG files.
When exporting PNGs, each pixel is assigned a color representing its class. Usually, the background itself is not annotated and is represented by the pixel value 0, which is displayed as black.
Sometimes, semantic masks appear to be all black after visualizing them. This happens when the data structure defines masks with only one channel instead of three (RGB). In this case, the background class typically has the value 0, the first class the value 1, and so on. When you visualize such a mask as a grayscale image, all these low pixel values look black, since white corresponds to the value 255.
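The effect described above is easy to reproduce. The sketch below uses a small, made-up single-channel mask (the class layout and color palette are illustrative assumptions, not from any particular dataset) and shows how to remap class IDs to distinct RGB colors so the mask becomes visible:

```python
# Sketch: why single-channel class-ID masks look black, and one way to
# colorize them. The mask contents and palette here are hypothetical.
import numpy as np

# A 4x4 single-channel mask: background (0) plus three classes (1-3).
mask = np.array([
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [3, 3, 2, 0],
    [3, 0, 0, 0],
], dtype=np.uint8)

# Viewed as grayscale, values 0-3 are nearly indistinguishable from
# black, because white corresponds to 255.
print(mask.max())  # 3 -- far below 255, so the whole image looks black

# Remap each class ID to a distinct RGB color via a lookup table:
# indexing the palette with the mask broadcasts it to an RGB image.
palette = np.array([
    [0, 0, 0],      # class 0: background, black
    [255, 0, 0],    # class 1: red
    [0, 255, 0],    # class 2: green
    [0, 0, 255],    # class 3: blue
], dtype=np.uint8)

color_mask = palette[mask]  # shape (4, 4, 3), ready to save as RGB PNG
print(color_mask.shape)
```

The resulting `color_mask` array can be saved as a regular PNG (e.g. with Pillow's `Image.fromarray`), giving each class a clearly visible color.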
It is possible to combine semantic segmentation and instance segmentation in one image; this is called panoptic segmentation. The idea would be to label the road ahead as a semantic class and each car as a separate instance. In practice, though, this is not widely used (yet).