OpenFace: predicting unknown faces using the real-time web demo

I am working with a heavily modified version of the OpenFace real-time web demo and needed to be able to detect an unknown face. The standard version of the demo, once trained with at least two people, always chooses the best fit, even when that fit is poor. It turns out it isn't too hard to modify the demo to expose extra information that helps decide whether a face is actually known.

If you look at demos/web/websocketserver.py in the OpenFace GitHub repo, line 243 looks like this:

    self.svm = GridSearchCV(SVC(C=1), param_grid, cv=5).fit(X,y)

To get the extra information, the estimator constructor needs to be changed to:

    self.svm = GridSearchCV(SVC(C=1, probability=True), param_grid, cv=5).fit(X,y)

Setting probability=True tells the SVC to fit probability estimates (via Platt scaling, which adds an internal cross-validation pass and slows training somewhat) alongside the plain class prediction. The predictor is called on line 306:

    identity = self.svm.predict(rep)[0]

With that change in place, the same representation can instead be passed to predict_proba, which returns a probability for each known identity:

    probs = self.svm.predict_proba(rep)[0]

The probs list contains one entry per identity, and the best match is the one with the highest probability. I am thresholding this at 0.85, which seems to work reasonably well: if the highest probability is below 0.85, I set the identity to -1 (Unknown); otherwise, identity is set to the index of the highest value in probs.
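Putting this together, a minimal sketch of the thresholding logic might look like the following. The classify helper, the UNKNOWN constant, and the 0.85 default are my own additions for illustration; svm and rep correspond to the fitted GridSearchCV and the face representation used in the demo code above.

    import numpy as np

    UNKNOWN = -1              # identity reported when no match is confident
    UNKNOWN_THRESHOLD = 0.85  # seems to work reasonably well; tune for your data

    def classify(svm, rep, threshold=UNKNOWN_THRESHOLD):
        """Return the index of the best-matching identity, or UNKNOWN (-1)
        if the highest probability falls below the threshold."""
        probs = svm.predict_proba(rep)[0]  # one probability per identity
        best = int(np.argmax(probs))       # index of the most probable identity
        return best if probs[best] >= threshold else UNKNOWN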


2 thoughts on “OpenFace: predicting unknown faces using the real-time web demo”

  1. rogerclarkmelbourne

    What hardware are you running this on?

    Do you think it would be suitable for some sort of embedded device?

    I presume something like the RPi Zero would be a candidate, or for that matter a smartphone.

    1. richards-tech (Post author)

      You probably don’t want to know this :-). It’s running on an i7 2700K system and uses around 60% of the total CPU. That’s on a video stream though. If you just wanted to recognize a face in a single image, it might not be too impractical.

      Face detection might well be a candidate for an RPi; face recognition with the kind of pose reconstruction this is doing, not so much. It's true that it could be made a lot more efficient (as noted on the OpenFace web site). The demo code first finds faces (locates eyes and nose), then extracts features, then matches against the known faces, and it does all of that for every frame it processes. An easy optimization would be to track the detected eye and nose locations once a face has been recognized, and assume it's still the same face if the distance the features have moved between frames is below some threshold (a rough sketch follows at the end of this reply). I do plan to add this at some point. That would make an enormous difference.

      That said, I have no idea if all the components involved in OpenFace will run on an ARM. Judging by comments regarding Torch (one of the dependencies), it would not be easy.
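
      To make the idea concrete, here is a rough sketch of that tracking shortcut. None of this is in the demo code; the landmark layout and the MOVE_THRESHOLD value are assumptions for illustration. The caller would cache the previous frame's landmarks and identity, and skip feature extraction and matching whenever same_face() returns True.

          import numpy as np

          MOVE_THRESHOLD = 20.0  # max mean pixel movement between frames (assumed)

          def same_face(prev_landmarks, cur_landmarks):
              """Return True if the detected landmarks (e.g. [left_eye,
              right_eye, nose] as (x, y) points) moved less than the
              threshold since the previous frame, so the previous
              identity can be reused without re-matching."""
              if prev_landmarks is None:
                  return False  # no previous frame to compare against
              prev = np.asarray(prev_landmarks, dtype=float)
              cur = np.asarray(cur_landmarks, dtype=float)
              movement = np.linalg.norm(cur - prev, axis=1).mean()
              return movement < MOVE_THRESHOLD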

