Covid-HANDS: Use AI To Keep Your Hands Away From Your Face And Prevent Covid Infection


               The idea was simple… or so I thought.  Maybe it’s still simple, but the execution was a nightmare.

Perhaps I should start at the beginning.


          The idea occurred to me while Cheerios and milk were dripping from my mouth in a mid-morning stupor.  Why don't I make a computer vision program that can detect hands and faces, and can warn a person when they are touching their own face?

The CDC says that touching one's face, especially the mouth, nose, or eyes, can lead to the spread of disease, because we touch so many different things with our hands.

          A research paper[i] states that the average person touches their face 28 times per hour, or almost once every two minutes.

I felt like there would be a good case for the usefulness of a program that reminds users to limit self-touching.

Never one to settle for the easy, I set about training my own computer vision model using the latest and greatest object detection software: YOLOv5.

          Unfortunately, training one’s own object detector takes a massive number of photos.  I found a dataset of public figures (Labeled Faces in the Wild) and another dataset of hands (EgoHands), and was able to select only the photos of a person and his or her hands.  I was even able to find a good number of photos where the person in the photo was touching his or her face.  I took care to select balanced numbers of all ethnicities, as every data scientist should do, to avoid bias and make sure the project generalizes to people of all skin tones.


          All told, after selecting and labeling all the pictures using labelImg, I had around 894 photos in my dataset.  That may sound like a lot, but to train a deep learning model you usually want on the order of 10,000 labeled instances.


          I started looking at ways to augment my data, or to get more mileage from my same dataset.  I stumbled upon Roboflow, and I am really glad that I did.  It shaved hours and hours off my process.

When you augment data, you modify it in some way so that your machine learning program sees it as new data.  For instance, if you have 500 pictures of cats, and then you copy those pictures and rotate them just a bit, the machine learning program will be better able to recognize cats at different angles.
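Here is a minimal sketch of that idea in plain Python, with the image reduced to a row-major grid of pixel values and a YOLO-style bounding box; the values are made up for illustration. Note that when you flip an image, you have to flip its bounding boxes too:

```python
def flip_horizontal(pixels, bbox):
    """Flip an image (row-major grid of pixels) and its YOLO-format
    bounding box (x_center, y_center, width, height, all in 0-1)."""
    flipped = [row[::-1] for row in pixels]
    x, y, w, h = bbox
    # Only the x-center mirrors; y, width, and height are unchanged.
    return flipped, (1.0 - x, y, w, h)

# A tiny 2x3 "image" and a box whose center sits at x = 0.25.
pixels = [[1, 2, 3],
          [4, 5, 6]]
flipped, new_box = flip_horizontal(pixels, (0.25, 0.5, 0.4, 0.6))
print(flipped)   # [[3, 2, 1], [6, 5, 4]]
print(new_box)   # (0.75, 0.5, 0.4, 0.6)
```

In a real pipeline the same transform runs over every image, so each flip, rotation, or crop doubles as a fresh training example.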


          The problem is, this can be pretty difficult to do for beginners.

With Roboflow, I just uploaded my 894 images.  What’s really cool is that it recognized when I had misspelled a label and warned me; there were a couple of those.  I was able to check my class balances with the dataset health check, and to visualize each image as a small thumbnail with its bounding boxes to make sure everything was working OK.  I actually noticed a lot of images I had mislabeled, so I had to go back and label them again.
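Roboflow's health check does this for you, but the class-balance part is easy to picture by hand.  Each line in a YOLO-format label file starts with a class id, so counting them is a one-liner; the lines and the face/hand class mapping below are hypothetical examples, not my actual labels:

```python
from collections import Counter

# Each labelImg/YOLO label line is: class_id x_center y_center width height
label_lines = [
    "0 0.51 0.40 0.20 0.30",   # class 0 = face (hypothetical mapping)
    "1 0.30 0.62 0.15 0.18",   # class 1 = hand
    "1 0.72 0.60 0.14 0.17",
]

counts = Counter(line.split()[0] for line in label_lines)
print(dict(counts))  # {'0': 1, '1': 2}
```

If one class badly outnumbers the others, the detector will tend to favor it, which is exactly what a health check is there to catch.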

After all my data had been cleaned and uploaded, Roboflow asked if I wanted to augment my dataset and how.  I resized the images to match the size my machine learning program was expecting, I rotated the images, I flipped them, and I added little noise dots here and there on the photos so the model would learn to cope with imperfect images instead of memorizing clean ones.
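The noise-dot augmentation is conceptually simple.  As a rough stdlib sketch (a real tool works on actual image files, not toy grids like this one):

```python
import random

def add_noise_dots(pixels, n_dots, seed=0):
    """Scatter a few white dots (value 255) over a grayscale pixel grid,
    a crude stand-in for the 'noise' augmentation."""
    rng = random.Random(seed)
    noisy = [row[:] for row in pixels]  # copy so the original is untouched
    h, w = len(noisy), len(noisy[0])
    for _ in range(n_dots):
        r, c = rng.randrange(h), rng.randrange(w)
        noisy[r][c] = 255
    return noisy

image = [[0] * 4 for _ in range(4)]
noisy = add_noise_dots(image, n_dots=3)
print(sum(v == 255 for row in noisy for v in row))  # at most 3 dots
```

Because the dots land in random spots, every augmented copy is slightly different, which is the point.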

This gave me a dataset of closer to 5,000 photos, much better than the 894 I started off with.  Even better, I didn’t have to download anything.

          I use Google Colab for my work, and so Roboflow just gave me a link for downloading the dataset, I put that into Colab, and that huge dataset of images never touched my computer’s hard drive at all.  It just got piped directly over the magical internet tubes, straight into my Colab notebook.  I was able to train my model from there using TensorFlow, but I quickly got frustrated and switched over to PyTorch.  After training the model (YOLOv5) I was able to get excellent accuracy at near-realtime speeds.  I even made a list of sounds to play to alert users when they touch their faces.
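I won't reproduce the whole detection loop here, but the alert logic boils down to a box-overlap test: the detector returns bounding boxes for hands and faces each frame, and if any hand box intersects a face box, a sound plays.  This is a simplified sketch with hypothetical pixel coordinates, not my exact code:

```python
def boxes_overlap(a, b):
    """Boxes are (x1, y1, x2, y2) in pixels.  True if they intersect."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def should_alert(hand_boxes, face_boxes):
    """Fire the reminder when any detected hand intersects any detected face."""
    return any(boxes_overlap(h, f) for h in hand_boxes for f in face_boxes)

# Hypothetical detections: one hand off to the side, one over the face.
faces = [(100, 80, 220, 240)]
hands = [(300, 300, 380, 380), (180, 150, 260, 230)]
print(should_alert(hands, faces))  # True
```

In practice you would likely require the overlap to persist for a few frames (or exceed an IoU threshold) before playing a sound, to avoid false alarms from a hand merely passing in front of the camera.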

          In my experience, GUI-based machine learning tooling was not only easier for this part of the project, it was faster and caught mistakes I would have missed by hand.  Roboflow does that by automating the data augmentation and preview process, and by letting you examine class imbalances (which matters for any kind of classification).  There are plenty of long tutorials online for writing your own augmentation code; you can spend a couple of hours working through one and then repeat the effort every time you need to augment a dataset, or you can use Roboflow and be done in a couple of minutes.  It’s great, I highly recommend it, and I will be using it for my data augmentation needs in the future.

I create AI projects professionally.  If you are stuck or need some instruction, contact me on Fiverr.
