Great tutorial here showing a fun use of TensorFlow to fill in missing parts in images of faces. The training part took just over two hours on my i7-5820K/GTX 1070 machine (80 epochs). Clearly the GPU was getting used a lot – nvidia-smi reported over 150W of power being used at times. Nice to see. I used the LFW image set for training as it can be downloaded without filling in forms and things.
I had a few small issues getting things running properly – probably my own errors. My system needed a few extra prerequisites:
sudo apt-get install libboost-python-dev
sudo pip install --upgrade dlib
sudo pip install --upgrade scipy
It seems that the SciPy version has to be at least 0.18; 0.16 didn’t work.
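A quick way to confirm what you have is import scipy; print(scipy.__version__). If you script the check, compare the version components numerically rather than as strings – a plain string compare would call 0.9 newer than 0.18. A minimal sketch (the helper name is mine, not from the tutorial):

```python
# Compare dotted version strings on their major.minor components as
# integers, so e.g. "0.9" correctly sorts below "0.18".
def version_at_least(installed, required):
    to_tuple = lambda v: tuple(int(p) for p in v.split(".")[:2])
    return to_tuple(installed) >= to_tuple(required)

print(version_at_least("0.16.1", "0.18"))  # False - the version that failed for me
print(version_at_least("0.18.0", "0.18"))  # True
```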
Also, the models have to be downloaded for OpenFace:
cd <path to openface>/models
./get-models.sh
With all that done, training worked just fine and it was time to move on to testing the completion code. Since I was using Python 2.7, I had to make a change in model.py, as otherwise it would blow up on an unsupported parameter when creating the output directories (os.makedirs() only gained the exist_ok parameter in Python 3). The start of the complete() function should look like this to work with Python 2.7:
if not os.path.exists(os.path.join(config.outDir, 'hats_imgs')):
    os.makedirs(os.path.join(config.outDir, 'hats_imgs'))
if not os.path.exists(os.path.join(config.outDir, 'completed')):
    os.makedirs(os.path.join(config.outDir, 'completed'))
Another problem I had with running completion was that model.py could not find the checkpoint from the training session. The load() function in model.py was appending model_dir to checkpoint_dir, so it looked in the wrong directory. Just commenting out that line fixed the problem. The results of the completion process are shown in the gif. I was lazy and used some faces from the training set – of course I really should have tested with images the network had never seen.
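For reference, here is a sketch of that change, assuming load() resolves the checkpoint directory roughly as below – the exact line is from my copy of model.py and may differ in other versions; the surrounding TensorFlow calls (tf.train.get_checkpoint_state and the saver restore) are omitted:

```python
import os

# Hedged sketch of the load() fix: with the offending join commented out,
# the checkpoint directory passed on the command line is used as-is.
def resolve_checkpoint_dir(checkpoint_dir, model_dir):
    # Original line, which made load() look in the wrong place:
    # checkpoint_dir = os.path.join(checkpoint_dir, model_dir)
    return checkpoint_dir  # use the directory exactly as supplied
```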