Smart surveillance is an emerging field. We have been inspired by movies that perform facial recognition on the basis of facial features, which can be edges, skin texture, or other kinds of biometric features.
Some of my work on an algorithm for biometric analysis and recognition of faces shows good accuracy, but it is still not a very powerful tool for facial recognition in the wild, as it does not cope with changes in illumination conditions, and remember that this is one of the major issues.
The biometric algorithm finds faces, locates landmarks such as the eyes, nose, and jaw points, and creates a face graph for the face as shown. It determines the distances between the landmark points and tries to find the person with whom it can most accurately match. You might have seen this kind of recognition in movies like Resident Evil; even the newer movie Jason Bourne has some scenes that use such face recognition techniques.
There are other methods, such as LBP and EBGM, which work well for facial recognition but only on small data sets. LBP stands for Local Binary Patterns; it tries to capture the skin texture component pixel by pixel. For more detailed information, please refer to this link: http://ieeexplore.ieee.org/document/6725919/?arnumber=6725919&tag=1
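The post does not include LBP code, so here is a minimal sketch of the basic operator in plain NumPy (the function names are my own, not from any library): each pixel's eight neighbours are thresholded against the centre pixel, and the resulting bits are packed into an 8-bit texture code.

```python
import numpy as np

def lbp_code(patch):
    """Basic 8-neighbour LBP code for the centre pixel of a 3x3 patch.
    Each neighbour >= centre contributes one bit to the 8-bit code."""
    center = patch[1, 1]
    # Neighbours read clockwise from the top-left corner
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for i, n in enumerate(neighbours):
        if n >= center:
            code |= (1 << i)
    return code

def lbp_image(img):
    """Apply lbp_code to every interior pixel of a grayscale image."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y - 1, x - 1] = lbp_code(img[y - 1:y + 2, x - 1:x + 2])
    return out
```

A production implementation would vectorise this loop (or use scikit-image's `local_binary_pattern`), but the pixel-by-pixel comparison above is the whole idea.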
Similarly, EBGM (Elastic Bunch Graph Matching) is another face recognition technique. It uses Gabor jets to extract features from the face at landmark points, which can be the eye pupils, nose tip, jaw, and more, and stores them in the form of a bunch, or cluster. As shown in the image, I have extracted a total of 68 landmark points from the faces (the green dots). The images are taken in the wild, i.e. with a mobile camera in a normal environment, to show that the algorithm works in any environment under any given illumination conditions. A Gabor jet is a set of Gabor filter responses extracted at each of these fiducial points; the energy and amplitude values at each point are then used to identify the person.
A few points regarding EBGM: it became quite famous because of its use of Gabor filters, which extract features at different frequencies and different phase angles. In simpler words, it works much like the rod and cone structures in our eyes, which extract features in order to differentiate among various patterns.
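To make the Gabor filter idea concrete, here is a small sketch of how a single kernel is built. This is written in plain NumPy for illustration (it mirrors what OpenCV's `cv2.getGaborKernel` computes, with the usual parameter names); convolving the patch around a fiducial point with a bank of such kernels, at several orientations `theta` and wavelengths `lambd`, produces the responses that make up a Gabor jet.

```python
import numpy as np

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5, psi=0.0):
    """Even-symmetric (cosine) Gabor kernel: a Gaussian envelope
    multiplied by a sinusoidal carrier at orientation theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate the coordinate frame by theta
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / lambd + psi)
    return envelope * carrier
```

Varying `lambd` changes the spatial frequency the kernel responds to, and varying `theta` changes its preferred edge orientation, which is exactly the frequency/phase-angle selectivity described above.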
So it seems that feature extraction is one of the vital parts of any algorithm for face or object recognition: the better and more accurate the features, the better the accuracy of the algorithm. Most algorithms use a combination of analytical and biometric feature calculation to analyse the person. One of my works shows recognition in grayscale, using a multi-Haar cascade for detection and LBP with some machine learning and other signal processing algorithms to accurately determine the person in live video sequences.
The video sequence is of an office environment, in which an algorithm runs to determine the person in the video. It analyses the video and shows the two best matches; right now both matches are of the same person, i.e. it shows the name of the person twice. There are also camera constraints, but those are part of the system.
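The matching step of such a pipeline is not shown in this post, so the sketch below is an assumption on my part rather than the actual algorithm: a common, simple way to compare LBP textures is to histogram the LBP codes per face region and compare histograms with the chi-square distance, where a smaller distance means a more similar texture.

```python
import numpy as np

def lbp_histogram(lbp_img, bins=256):
    """Normalised histogram of 8-bit LBP codes for one image region."""
    hist, _ = np.histogram(lbp_img, bins=bins, range=(0, bins))
    total = hist.sum()
    return hist / total if total else hist.astype(float)

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two normalised histograms (0 = identical)."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```

In a full system, one histogram is usually computed per cell of a grid over the detected face, the per-cell distances are summed, and a classifier (nearest neighbour, SVM, etc.) picks the best-matching identity.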
Is it not beautiful? The system is understanding, continuously learning features, and recognizing. But these feature extraction methods are not very useful when it comes to a very large database of hundreds or thousands of people. For that, a newer technique is in its development phase: deep learning. Some of our posts talk about it, but not in much depth. In this technique we let the machine understand the features and decide for itself which feature extraction method is best suited to the problem. We will discuss it in much more detail in upcoming updates and will share some code as well. Deep learning is used in all these domains, and that is why it has become popular.
As per your requests, we will walk through Python code for landmark generation and for annotating the landmarks. Throughout the code I assume we have an image, "Test Image", and all operations will be performed on that image only. For face detection and landmark estimation, I am using the very popular machine learning library dlib. If you want to install dlib, just use this command in the terminal:
sudo pip install dlib
Step 1: Load the image

import cv2
import numpy as np

test_img = cv2.imread('RGB_image.jpg')
test_img = cv2.resize(test_img, (1400, 1280), interpolation=cv2.INTER_AREA)
# face_detection is defined in Step 3 below
face_detection(test_img)
Step 2: Import libraries for face detection and face landmark estimation

import dlib

predictor_path = 'shape_predictor_68_face_landmarks.dat'
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(predictor_path)
Step 3: Face detection

def face_detection(test_img):
    dets = detector(test_img, 1)
    print("Number of faces detected: {}".format(len(dets)))
    if len(dets) == 0:
        pass
    else:
        # Reuse the detections instead of running the detector a second time
        facesCoords = np.array([[d.left(), d.top(), d.right(), d.bottom()] for d in dets])
        for (x, y, w, h) in facesCoords:
            # (x, y) is the top-left corner; (w, h) here are the right/bottom coordinates
            cv2.rectangle(test_img, (x, y), (w, h), (0, 0, 255), 2)
Step 4: Landmark detection

def get_landmarks(test_img):
    dets = detector(test_img)
    for k, d in enumerate(dets):
        landmarks = np.matrix([[p.x, p.y] for p in predictor(test_img, d).parts()])
    return landmarks
Step 5: Annotate landmarks

def annotate_landmarks(test_img, landmarks):
    for idx, point in enumerate(landmarks):
        pos = (point[0, 0], point[0, 1])
        cv2.putText(test_img, str(idx), pos,
                    fontFace=cv2.FONT_HERSHEY_SCRIPT_SIMPLEX,
                    fontScale=0.4,
                    color=(0, 255, 255))
    return test_img
Step 6: Extract coordinates of landmarks

# Index ranges for each landmark group in the 68-point model (indices 0-67)
FACE_POINT = list(range(0, 68))
RIGHT_EYE_POINTS = list(range(36, 42))
LEFT_EYE_POINTS = list(range(42, 48))
NOSE_POINTS = list(range(27, 36))
MOUTH_POINTS = list(range(48, 68))
RIGHT_BROW_POINTS = list(range(17, 22))
LEFT_BROW_POINTS = list(range(22, 27))
RIGHT_EAR_POINTS = list(range(1, 2))
LEFT_EAR_POINTS = list(range(15, 16))
def find_coordinate(landmarks, point):
    x = landmarks[point, 0]
    y = landmarks[point, 1]
    coordinate = (x, y)
    return coordinate
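To illustrate how these index groups and find_coordinate fit together, here is a small sketch with a fabricated landmark matrix (a real one would come from get_landmarks above; the `group_center` helper is my own addition, and index 30 is the nose tip in the 68-point model):

```python
import numpy as np

NOSE_TIP = 30  # nose-tip index in the 68-point model
LEFT_EYE_POINTS = list(range(42, 48))

# Fabricated 68x2 landmark matrix, for illustration only
landmarks = np.matrix([[i, 2 * i] for i in range(68)])

def find_coordinate(landmarks, point):
    """(x, y) of a single landmark index."""
    return (landmarks[point, 0], landmarks[point, 1])

def group_center(landmarks, points):
    """Mean (x, y) of a group of landmark indices, e.g. the left-eye centre."""
    pts = np.asarray(landmarks)[points]
    return tuple(pts.mean(axis=0))

print(find_coordinate(landmarks, NOSE_TIP))       # (30, 60)
print(group_center(landmarks, LEFT_EYE_POINTS))   # (44.5, 89.0)
```

Group centres like this are what distance-based matching (as in the biometric algorithm described earlier) is computed from: e.g. the distance between the two eye centres, or from an eye centre to the nose tip.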
Please write comments, share your reviews, and tell us about the blog. Feel free to ask about the implementation or the code, as the complete algorithm is written in a Python environment.