The system did not recognize me. What is the reason?

Written by NSoft Vision

Face coverage - The system estimates face attributes by analyzing key facial zones. If one of those important zones is covered, the system has trouble identifying the subject. It is highly sensitive to sunglasses, large hair coverage, hands in front of the face, or any other obstructing object.


Head position - If the face is not turned frontally toward the camera, the system has to make certain corrections in order to detect it. If the face is turned away from the camera lens by an angle of more than 30°, the currently set threshold, the system will perform neither face detection nor recognition. Here is why this is the case:

The system places 3 dots on a detected face, in the area of the eyes and nose, as a signal that it has located the face and knows its orientation as a 2D surface in 360-degree space. Face alignment is important because the system performs face recognition only on a frontal face, and alignment can compensate only for in-plane rotation and scaling when normalizing the detection for the subsequent steps.
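The 30° rule above can be sketched as a simple gate, assuming the system exposes an estimated yaw angle for each detected face (the function and constant names here are illustrative, not the product's actual API):

```python
# Hypothetical sketch: skip recognition when the head is turned too far.
YAW_THRESHOLD_DEG = 30.0  # currently set threshold mentioned in the article

def should_attempt_recognition(yaw_deg: float) -> bool:
    """Return True only when the face is within 30 degrees of frontal.

    yaw_deg is the assumed estimated rotation of the head away from the
    camera lens, in degrees (0 = looking straight at the camera).
    """
    return abs(yaw_deg) <= YAW_THRESHOLD_DEG
```

Anything past the threshold is dropped before recognition, because alignment can only compensate for in-plane rotation, not for a face turned to the side.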


Camera Distance - If the camera is too far from the subject, the system won't be able to recognize them. Although stream quality also affects the distance at which the camera can detect faces, it is important not to position the camera too far from the location where subjects are usually captured. If the face in the detection sub-frame, the zone used for face detection, is less than 20 pixels tall, then even a known identity cannot be recognized. Identifying unknown identities requires a detected face height of at least 70 pixels.
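The two pixel thresholds above can be summarized in a small sketch; the thresholds come from the article, while the function name and the capability labels are illustrative assumptions:

```python
# Face-height thresholds from the article (in pixels of the detection sub-frame).
MIN_KNOWN_FACE_PX = 20    # below this, even a known identity cannot be matched
MIN_UNKNOWN_FACE_PX = 70  # unknown identities need at least this face height

def recognition_capability(face_height_px: int) -> str:
    """Classify what the system can do at a given detected face height."""
    if face_height_px < MIN_KNOWN_FACE_PX:
        return "none"            # too small for any recognition
    if face_height_px < MIN_UNKNOWN_FACE_PX:
        return "known-only"      # enough to match an enrolled identity
    return "known-and-unknown"   # enough to also identify new identities
```

In practice this means camera placement should keep faces at 70 pixels or more where newly appearing subjects need to be identified.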


Face changes - If a subject identified earlier has undergone a sudden change to their face and appears in front of the camera again, it is highly likely that they will not be recognized. Registered cases include beard growth or shaving, hair bangs, and surgery.


Speed of motion - Although the system is designed to handle fast movement of subjects or their body parts, it can happen that it did not have enough time, or adequate stream quality, to perform the recognition.


Light conditions - Both light deprivation and over-exposure of the subject can cause the system to make mistakes or wrong estimations in face recognition. Light deficiency can hide certain facial lines or the entire face, while excessive light can distort the face.


Stream quality - A low frame rate can cause difficulties for the AI services; stream quality directly affects their performance. For example, if a subject stands in front of the camera for some time on a low-quality stream, the system can create three sightings instead of one. The subject may also go undetected, or fail to be recognized as an existing identity. In other words, the higher the stream quality, the higher the chance of successful face recognition.
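To illustrate how a low frame rate can split one continuous appearance into several sightings, here is a simplified sketch (not the product's actual logic): assume consecutive detections are merged into one sighting as long as the gap between them stays under a fixed limit. When frames drop, the gaps grow and the single appearance fragments.

```python
# Illustrative assumption: detections closer than GAP_LIMIT_S seconds apart
# belong to the same sighting; larger gaps start a new one.
GAP_LIMIT_S = 2.0

def count_sightings(detection_times: list[float]) -> int:
    """Count sightings from a sorted list of detection timestamps (seconds)."""
    if not detection_times:
        return 0
    sightings = 1
    for prev, cur in zip(detection_times, detection_times[1:]):
        if cur - prev > GAP_LIMIT_S:
            sightings += 1  # gap too large: the appearance fragments
    return sightings

# A healthy stream detects the face every few frames, so one visit stays one
# sighting; a degraded stream leaves multi-second gaps and produces several.
```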