
I want to add a metric between two sets of points on the face (in two dimensions, as shown below) and use it for object detection in digital images, for face recognition.

I was able to recognize the facial features, as illustrated in the image below, using:

-(void)markFaces:(UIImageView *)facePicture 
{ 
    // draw a CI image with the previously loaded face detection picture 
    CIImage* image = [CIImage imageWithCGImage:facePicture.image.CGImage]; 

    // create a face detector - since speed is not an issue we'll use a high accuracy 
    // detector 
    CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace 
              context:nil options:  [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]]; 

    // create an array containing all the detected faces from the detector 
    NSArray* features = [detector featuresInImage:image]; 

    // we'll iterate through every detected face. CIFaceFeature provides us 
    // with the bounds of the entire face, the coordinates of each eye and 
    // the mouth if detected, and BOOLs for the eyes and mouth so we can 
    // check whether they were found. 
    for(CIFaceFeature* faceFeature in features) 
    { 
     // get the width of the face 
     CGFloat faceWidth = faceFeature.bounds.size.width; 

     // create a UIView using the bounds of the face 
     UIView* faceView = [[UIView alloc] initWithFrame:faceFeature.bounds]; 

     // add a border around the newly created UIView 
     faceView.layer.borderWidth = 1; 
     faceView.layer.borderColor = [[UIColor redColor] CGColor]; 

     // add the new view to create a box around the face 
    [self.view addSubview:faceView]; 

     if(faceFeature.hasLeftEyePosition) 
     { 
      // create a UIView with a size based on the width of the face 
      UIView* leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.leftEyePosition.x-faceWidth*0.15, faceFeature.leftEyePosition.y-faceWidth*0.15, faceWidth*0.3, faceWidth*0.3)]; 
      // change the background color of the eye view 
      [leftEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]]; 
      // set the position of the leftEyeView based on the face 
      [leftEyeView setCenter:faceFeature.leftEyePosition]; 

      // round the corners 
      leftEyeView.layer.cornerRadius = faceWidth*0.15; 
      // add the view to the window 
      [self.view addSubview:leftEyeView]; 
     } 

     if(faceFeature.hasRightEyePosition) 
     { 
      // create a UIView with a size based on the width of the face 
      UIView* rightEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.rightEyePosition.x-faceWidth*0.15, faceFeature.rightEyePosition.y-faceWidth*0.15, faceWidth*0.3, faceWidth*0.3)]; 
      // change the background color of the eye view 
      [rightEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]]; 
      // set the position of the rightEyeView based on the face 
      [rightEyeView setCenter:faceFeature.rightEyePosition]; 
      // round the corners 
      rightEyeView.layer.cornerRadius = faceWidth*0.15; 
      // add the new view to the window 
      [self.view addSubview:rightEyeView]; 
     } 

     if(faceFeature.hasMouthPosition) 
     { 
      // create a UIView with a size based on the width of the face 
      UIView* mouth = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.mouthPosition.x-faceWidth*0.2, faceFeature.mouthPosition.y-faceWidth*0.2, faceWidth*0.4, faceWidth*0.4)]; 
      // change the background color for the mouth to green 
      [mouth setBackgroundColor:[[UIColor greenColor] colorWithAlphaComponent:0.3]]; 

      // set the position of the mouthView based on the face 
      [mouth setCenter:faceFeature.mouthPosition]; 

       // round the corners 
      mouth.layer.cornerRadius = faceWidth*0.2; 

      // add the new view to the window 
      [self.view addSubview:mouth]; 
     } 
    } 
} 

-(void)faceDetector 
{ 
    // Load the picture for face detection 
    //UIImageView* image = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"facedetectionpic.jpg"]]; 
    UIImageView* image = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"timthumb.png"]]; 
    // Draw the face detection image 
    [self.view addSubview:image]; 

    // Run markFaces: on the main thread - it creates and adds UIViews, 
    // and UIKit must only be used from the main thread 
    [self performSelectorOnMainThread:@selector(markFaces:) withObject:image waitUntilDone:NO]; 

    // flip image on y-axis to match coordinate system used by core image 
    [image setTransform:CGAffineTransformMakeScale(1, -1)]; 

    // flip the entire window to make everything right side up 
    [self.view setTransform:CGAffineTransformMakeScale(1, -1)]; 
} 

Now I want to add points that locate reference features (eyes, nose, etc.) before uploading to the database. Later, these images could be compared against the stored ones based on these measured point locations, as shown below.

(images: example faces with the facial landmark points and the distances to be measured between them)
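In other words, I am after something along these lines. With only the three landmark points CIDetector exposes (left eye, right eye, mouth), one simple, scale-invariant measurement would be to normalise the pairwise distances by the inter-ocular distance and compare the resulting vectors. The sketch below is just that idea; the helper names (FeatureVectorForFace, FeatureDistance) are only illustrative, not part of any framework:

#import <CoreImage/CoreImage.h>
#include <math.h>

// Build a small, scale-invariant feature vector from the three landmarks
// CIDetector exposes (left eye, right eye, mouth).
static NSArray *FeatureVectorForFace(CIFaceFeature *face)
{
    CGPoint le = face.leftEyePosition;
    CGPoint re = face.rightEyePosition;
    CGPoint m  = face.mouthPosition;

    // inter-ocular distance, used to normalise so the metric ignores image scale
    CGFloat eyeDist = hypot(re.x - le.x, re.y - le.y);
    if (eyeDist < 1e-6) return nil;

    CGFloat leftEyeToMouth  = hypot(m.x - le.x, m.y - le.y) / eyeDist;
    CGFloat rightEyeToMouth = hypot(m.x - re.x, m.y - re.y) / eyeDist;

    return @[@(leftEyeToMouth), @(rightEyeToMouth)];
}

// Euclidean distance between two feature vectors; smaller means more similar.
static CGFloat FeatureDistance(NSArray *a, NSArray *b)
{
    CGFloat sum = 0;
    for (NSUInteger i = 0; i < MIN(a.count, b.count); i++) {
        CGFloat d = [a[i] doubleValue] - [b[i] doubleValue];
        sum += d * d;
    }
    return sqrt(sum);
}

Inside the for loop of markFaces: I could store FeatureVectorForFace(faceFeature) together with the uploaded image and later rank stored faces by FeatureDistance, but three landmarks make a very weak signature, which is why I want to locate more reference points (eye corners, nose, etc.) first.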

I went through This Link but could not implement it. If anyone knows how, please point me in the right direction.

Thank you.

Answer


I'm afraid this is not simple. Looking at the documentation, CIDetector does not include detectors for any additional facial landmarks. You would have to train your own on a set of manually annotated images. There are a few open-source projects for doing this; a very good one (accurate and fast) is dlib: http://blog.dlib.net/2014/08/real-time-face-pose-estimation.html
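For reference, here is a minimal sketch of what wrapping dlib's pre-trained 68-point shape predictor could look like in an iOS project. It assumes dlib has been built into the app and that the shape_predictor_68_face_landmarks.dat model from dlib's examples is bundled; the DlibLandmarkWrapper class and its method are illustrative names, not an existing API. The file has to be compiled as Objective-C++ (.mm) so it can include dlib's C++ headers.

// DlibLandmarkWrapper.mm - compiled as Objective-C++ so dlib's C++ headers can be used
#import <UIKit/UIKit.h>
#include <vector>
#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/image_processing.h>
#include <dlib/image_io.h>

// Illustrative wrapper: detects one face in an image file and returns its
// 68 landmark points as NSValue-wrapped CGPoints.
@interface DlibLandmarkWrapper : NSObject
- (NSArray *)landmarksInImageFile:(NSString *)path;
@end

@implementation DlibLandmarkWrapper {
    dlib::frontal_face_detector _detector;   // HOG-based face detector
    dlib::shape_predictor       _predictor;  // 68-point landmark model
}

- (instancetype)init
{
    if ((self = [super init])) {
        _detector = dlib::get_frontal_face_detector();
        // pre-trained model from dlib's examples, assumed to be in the app bundle
        NSString *model = [[NSBundle mainBundle] pathForResource:@"shape_predictor_68_face_landmarks"
                                                          ofType:@"dat"];
        dlib::deserialize(model.UTF8String) >> _predictor;
    }
    return self;
}

- (NSArray *)landmarksInImageFile:(NSString *)path
{
    dlib::array2d<dlib::rgb_pixel> img;
    dlib::load_image(img, path.UTF8String);   // needs jpeg/png support compiled into dlib

    NSMutableArray *points = [NSMutableArray array];
    std::vector<dlib::rectangle> faces = _detector(img);
    if (!faces.empty()) {
        dlib::full_object_detection shape = _predictor(img, faces[0]);
        for (unsigned long i = 0; i < shape.num_parts(); ++i) {
            [points addObject:[NSValue valueWithCGPoint:
                               CGPointMake(shape.part(i).x(), shape.part(i).y())]];
        }
    }
    return points;
}

@end

The 68 points returned cover the eye corners, eyebrows, nose, mouth outline and jaw line, which gives far more measurements to build a comparison metric from than the three points CIDetector provides.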


thanks for the reply. But I want to implement this using Objective-C – Sujania


dlib is a C++ library, but you can still call it from your Objective-C code (http://www.drdobbs.com/cpp/interoperating-between-c-and-objective-c/240165502). –
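Concretely, with the interop pattern that comment describes, only the .mm implementation of a wrapper like the DlibLandmarkWrapper sketched above ever sees dlib's C++ types; the rest of the project stays plain Objective-C. A hedged usage sketch (the header name and the method it is called from are again illustrative):

// Plain Objective-C caller (.m) - e.g. inside the view controller from the question.
// It only imports the wrapper's Objective-C header, so it never sees any C++ types.
#import "DlibLandmarkWrapper.h"   // hypothetical header declaring the wrapper sketched above

- (void)measureLandmarksForImageAtPath:(NSString *)path
{
    DlibLandmarkWrapper *wrapper = [[DlibLandmarkWrapper alloc] init];
    NSArray *landmarks = [wrapper landmarksInImageFile:path];

    // each element is an NSValue-wrapped CGPoint; these 68 points can feed the
    // normalised-distance metric sketched earlier in the question
    for (NSValue *value in landmarks) {
        CGPoint p = [value CGPointValue];
        NSLog(@"landmark at (%.1f, %.1f)", p.x, p.y);
    }
}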