Thursday, December 25, 2014

Face Detection From Photograph

In this tutorial, I am going to write an app that detects faces in a photograph.

Create a new Xcode single-view application project called FaceDetectionFromPhoto.

In the storyboard, add a UIImageView with the following constraints for adaptive layout:


The constraints are shown in the Size Inspector above (right of screen). Also, set the View Mode to Aspect Fit, as seen in the Attributes Inspector below (right of screen):



Then add a group photo to your Supporting Files folder by right-clicking the Supporting Files folder and selecting Add Files to... :


I added obamafamily.jpg in the above example.

Next, download the haarcascade_frontalface_alt.xml cascade file:



Add the xml file to the Supporting Files folder the same way you did for the jpg file above.

Then, add the following frameworks to your project:

opencv2.framework
UIKit.framework
CoreGraphics.framework

In your Build Settings, under Apple LLVM 6.0 - Language, set Compile Sources As:
Objective-C++

Hook up your storyboard's UIImageView to an IBOutlet in ViewController.h, create an instance variable called faceDetector, and import the opencv2 header files. Your ViewController.h file should look like this:

//
//  ViewController.h
//  FaceDetectionFromPhoto
//
//  Created by Paul Chin on 12/26/14.
//  Copyright (c) 2014 Paul Chin. All rights reserved.
//

#import <UIKit/UIKit.h>
#import <opencv2/opencv.hpp>
#import <opencv2/highgui/ios.h>

@interface ViewController : UIViewController{
    cv::CascadeClassifier faceDetector;
}

@property (weak, nonatomic) IBOutlet UIImageView *imageView;


@end


And your ViewController.m should look like this:

//
//  ViewController.m
//  FaceDetectionFromPhoto
//
//  Created by Paul Chin on 12/26/14.
//  Copyright (c) 2014 Paul Chin. All rights reserved.
//

#import "ViewController.h"

@interface ViewController ()

@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    
    NSString *cascadePath = [[NSBundle mainBundle]
                   pathForResource:@"haarcascade_frontalface_alt"
                            ofType:@"xml"];
    faceDetector.load([cascadePath UTF8String]);
    
    UIImage *image = [UIImage imageNamed:@"obamafamily.jpg"];
    
   
    cv::Mat faceImage;
    UIImageToMat(image, faceImage);
    
    cv::Mat gray;
    cvtColor(faceImage, gray, CV_RGB2GRAY); // UIImageToMat gives RGB(A) channel order
    
    std::vector<cv::Rect> faces;
    faceDetector.detectMultiScale(gray, faces, 1.1, 2,
                        0 | CV_HAAR_SCALE_IMAGE, cv::Size(30, 30));
    
    
    for (unsigned int i = 0; i < faces.size(); i++) {
        const cv::Rect face = faces[i];
        cv::Point topLeft(face.x, face.y);
        cv::Point bottomRight = topLeft + cv::Point(face.width, face.height);
        
        // Draw rectangle around the face
        cv::Scalar magenta = cv::Scalar(255, 0, 255);
        cv::rectangle(faceImage, topLeft, bottomRight, magenta, 4, 8, 0);
    }
    
    // Show resulting image
    self.imageView.image = MatToUIImage(faceImage);
}

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

@end


Then, run your app in the iOS Simulator:




I've set the ViewController background to green so that it stands out when I post to this blog.








Canny Edge Detection

Modify the ViewController.m file from the earlier InstantOpenCv project:

//
//  ViewController.m
//  InstantOpenCv
//
//  Created by Paul Chin on 12/25/14.
//  Copyright (c) 2014 Paul Chin. All rights reserved.
//

#import "ViewController.h"

@interface ViewController ()

@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    //self.imageView.image=[UIImage imageNamed:@"StBernard.jpg"];
    
    UIImage *image = [UIImage imageNamed:@"StBernard.jpg"];
    UIImageToMat(image, cvImage);
    if(!cvImage.empty()){
        using namespace cv;
        Mat gray;
        cvtColor(cvImage, gray, CV_RGB2GRAY);  // convert to single channel
        GaussianBlur(gray, gray, cv::Size(5, 5), 1.2, 1.2);  // remove small details
        
        Mat edges;
        Canny(gray, edges, 0, 50);  //detect edges
        cvImage.setTo(Scalar::all(255)); //fill image to white
        cvImage.setTo(Scalar(0,128,255,255),edges);//add edges
        
        self.imageView.image=MatToUIImage(cvImage);
    }
}

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}


@end



Run the app on iOS simulator:


How to Add OpenCV to XCode

In this tutorial, I am going to convert an image called StBernard.jpg (a dog) to grayscale and then apply a Gaussian blur, learning how to add OpenCV to Xcode in the process. It is modified from:

https://github.com/Itseez/opencv_for_ios_book_samples

by Alexander Shishkov. Credits to Alexander for his original code.
However, you should read Alexander's book, Instant OpenCV for iOS, for the detailed explanation.

Alexander's original code may not work on Xcode 6.1.1, hence this tutorial. I hope it helps someone.


Note that my platform is:

Mavericks, Xcode 6.1.1 and iOS 8.1

Download the OpenCV framework for iOS:

http://opencv.org/downloads.html

Select Version 2.4.10 - OpenCV for iOS

Then create a new Xcode single-view application project and call it InstantOpenCV, or anything you like.
Then add these three frameworks (the same opencv2.framework, UIKit.framework and CoreGraphics.framework listed above):



Then, in the storyboard, add a UIImageView:


Then, in Build Settings, under Apple LLVM 6.0 - Language, set Compile Sources As: Objective-C++ :


Hook up your IBOutlet to the UIImageView, create the imageView variable, and add the OpenCV header files opencv2/opencv.hpp and opencv2/highgui/ios.h. Your ViewController.h should look like this:

//
//  ViewController.h
//  InstantOpenCv
//
//  Created by Paul Chin on 12/25/14.
//  Copyright (c) 2014 Paul Chin. All rights reserved.
//

#import <UIKit/UIKit.h>
#import <opencv2/opencv.hpp>
#import <opencv2/highgui/ios.h>

@interface ViewController : UIViewController{
    cv::Mat cvImage;
}
@property (weak, nonatomic) IBOutlet UIImageView *imageView;


@end



Then add code to ViewController.m:

//
//  ViewController.m
//  InstantOpenCv
//
//  Created by Paul Chin on 12/25/14.
//  Copyright (c) 2014 Paul Chin. All rights reserved.
//

#import "ViewController.h"

@interface ViewController ()

@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    //self.imageView.image=[UIImage imageNamed:@"StBernard.jpg"];
    
    UIImage *image = [UIImage imageNamed:@"StBernard.jpg"];
    UIImageToMat(image, cvImage);
    if(!cvImage.empty()){
        using namespace cv;
        Mat gray;
        cvtColor(cvImage, gray, CV_RGB2GRAY);  // convert to single channel
        GaussianBlur(gray, gray, cv::Size(5, 5), 1.2, 1.2);  // remove small details
        self.imageView.image=MatToUIImage(gray);
    }
}

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

@end


Notice I use StBernard.jpg as the image, but you can use your own. Just add it to your Supporting Files folder: right-click the Supporting Files folder, then select Add Files to... :


Then, run your app in iOS Simulator:


Note that I'm using Adaptive Layout:



The constraints are shown in the Size Inspector on the right, above.