2017-05-12

Face alignment check with DLIB

I'm simply wondering whether it is possible to detect whether the face is correctly aligned, directly from the camera, with DLIB and OpenCV?


I tried this code to detect the face and get the facial landmark points:


import argparse

import cv2
import dlib
import imutils
from imutils import face_utils
from imutils.video import VideoStream

ap = argparse.ArgumentParser()
ap.add_argument("-p", "--shape-predictor", required=True,
    help="path to the dlib facial landmark predictor file")
args = vars(ap.parse_args())

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(args["shape_predictor"])

vs = VideoStream(0).start()

while True:
    # grab the frame from the threaded video stream, resize it to
    # have a maximum width of 400 pixels, and convert it to grayscale
    frame = vs.read()
    frame = imutils.resize(frame, width=400)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # detect faces in the grayscale frame
    rects = detector(gray, 0)

    # loop over the face detections
    for rect in rects:
        # determine the facial landmarks for the face region, then
        # convert the facial landmark (x, y)-coordinates to a NumPy array
        shape = predictor(gray, rect)
        shape = face_utils.shape_to_np(shape)

        # loop over the (x, y)-coordinates for the facial landmarks
        # and draw them on the image
        for (x, y) in shape:
            print(x, y)
            cv2.circle(frame, (x, y), 1, (0, 0, 255), -1)
        cv2.putText(frame, "Aptiktas veidas", (10, 30),
            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)

    # show the frame
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF

Updated the question with code and an image. – Streem

Answer

Here is a function I wrote to find the pose the person's head is oriented toward. Here, p1 and p2 define the pose vector. Computing the angle between them is trivial, and based on that angle you can decide which frames to accept or reject.

import cv2
import numpy as np

def pose_estimate(image, landmarks):
    """
    Given an image and a set of 68 facial landmarks, estimate the direction
    of the head pose. Returns (p1, p2): the nose tip in image coordinates and
    the projection of a point 1000 units in front of the nose.
    """
    size = image.shape

    # 2D image points picked from the dlib 68-landmark layout
    image_points = np.array([
        (landmarks[33, 0], landmarks[33, 1]),   # Nose tip
        (landmarks[8, 0], landmarks[8, 1]),     # Chin
        (landmarks[36, 0], landmarks[36, 1]),   # Left eye left corner
        (landmarks[45, 0], landmarks[45, 1]),   # Right eye right corner
        (landmarks[48, 0], landmarks[48, 1]),   # Left mouth corner
        (landmarks[54, 0], landmarks[54, 1])    # Right mouth corner
        ], dtype="double")

    # Corresponding 3D points of a generic head model
    model_points = np.array([
        (0.0, 0.0, 0.0),            # Nose tip
        (0.0, -330.0, -65.0),       # Chin
        (-225.0, 170.0, -135.0),    # Left eye left corner
        (225.0, 170.0, -135.0),     # Right eye right corner
        (-150.0, -150.0, -125.0),   # Left mouth corner
        (150.0, -150.0, -125.0)     # Right mouth corner
        ])

    # Approximate the camera intrinsics: focal length ~ image width,
    # principal point at the image center
    focal_length = size[1]
    center = (size[1]/2, size[0]/2)
    camera_matrix = np.array([
        [focal_length, 0, center[0]],
        [0, focal_length, center[1]],
        [0, 0, 1]
        ], dtype="double")

    # Assume no lens distortion
    dist_coeffs = np.zeros((4, 1))
    success, rotation_vector, translation_vector = cv2.solvePnP(
        model_points, image_points, camera_matrix, dist_coeffs)

    # Project a point 1000 units in front of the nose back onto the image
    (nose_end_point2D, jacobian) = cv2.projectPoints(
        np.array([(0.0, 0.0, 1000.0)]), rotation_vector,
        translation_vector, camera_matrix, dist_coeffs)
    p1 = (int(image_points[0][0]), int(image_points[0][1]))
    p2 = (int(nose_end_point2D[0][0][0]), int(nose_end_point2D[0][0][1]))
    return p1, p2
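As a rough sketch of the accept/reject step: since p2 is the projection of a point 1000 units in front of the nose, the projected vector p1→p2 is short when the face looks straight at the camera and grows as the head turns. The helper below (the name `is_frontal`, the focal length of 400, and the 15-degree threshold are my own assumptions, not part of the answer above) converts that projected length into an approximate tilt angle and thresholds it:

```python
import math

def is_frontal(p1, p2, focal_length=400.0, max_angle_deg=15.0):
    """Decide whether the head pose is roughly frontal.

    p1, p2: the two 2D points returned by pose_estimate.
    focal_length: assumed camera focal length in pixels (here ~ the
        400 px frame width used in the question's code).
    max_angle_deg: arbitrary tolerance; tune it for your application.
    """
    # Length of the projected pose vector in pixels
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]
    length = math.hypot(dx, dy)
    # Convert projected length to an approximate deviation angle from
    # the camera axis (small-angle pinhole approximation)
    angle = math.degrees(math.atan2(length, focal_length))
    return angle <= max_angle_deg
```

A short vector such as (100, 100) → (105, 102) passes, while a long one such as (100, 100) → (300, 100) is rejected; in a capture loop you would call this on every frame and keep only the frames where it returns True.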