Of Matchboxes and Movies

You may remember that our initial goal was to emphasize the “Minimum” in Minimum Viable Product. And that’s what we did – scraped some resumes for keywords and built a rudimentary machine learning model.

But now that Smart Staffing was a real project with real team members and real funding and real architecture and a bunch of other real stuff, we had to de-emphasize “Minimum” and shift more towards “Viable.”

Translation: We needed to develop a more robust machine learning model.

This is where the whole machine learning thing gets tricky. There are all kinds of algorithms out there capable of doing all kinds of things. Some are better in some situations; some are better in other situations. What does your data look like? How much do you have? Are you discovering or predicting? What kind of output do you want?

And just to make things even more completely and totally maddening, they all have multi-syllable names, oozing with mathematical and whimsical buzzwords. A few of my favorite gibberish examples include Boosted Decision Tree Regression, PCA-based Anomaly Detection, Fast Forest Quantile Regression, and Multi-class Decision Jungle.

So what’s a girl to do? Obviously, Google the crap out of it.

Fortunately, I found plenty of resources on the interweb to hold your hand during the machine learning algorithm selection process. Unfortunately, most required a depth of understanding that I was still in the process of developing.

And, just to add another round of complexity, the Artificial Intelligence field is evolving feverishly, and thus no one source is authoritative or comprehensive. Some sources listed some algorithms, but left out others. Some sources contradicted others. Some sources made no sense at all.

Slowly, we stepped through the options:

  • Are we trying to detect anomalies? Nope. We’ll save outlier detection for a rainy day.
  • Are we trying to predict values or categories? None of the above, I guess…
  • Do we have lots of data? Finally, one I can answer! Small data all the way. (Although I wish we had big data.)
  • What’s more important: improved accuracy or reduced training time? Since we have a small data set, training time is less important. Accuracy FTW!

The eventual winner… Matchbox Recommender!
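
Since I think better in code than in prose, here is the walkthrough above boiled down to a toy Python sketch. The goal strings and the question-to-family mapping are my own shorthand for illustration, not an official Azure decision chart.

```python
# My shorthand for the algorithm-selection questions above -- purely
# illustrative, not an official decision chart. The goal strings are made up.

def pick_algorithm_family(goal, small_data=True):
    """Map a loosely worded goal to a rough algorithm family."""
    families = {
        "find outliers": "anomaly detection (e.g., PCA-based Anomaly Detection)",
        "predict a number": "regression (e.g., Boosted Decision Tree Regression)",
        "predict a category": "classification (e.g., Two-class Averaged Perceptron)",
        "match projects to people": "recommendation (e.g., Matchbox Recommender)",
    }
    family = families.get(goal, "keep Googling")
    # With a small data set, training time barely matters, so favor accuracy.
    priority = "accuracy" if small_data else "training time"
    return family, priority

print(pick_algorithm_family("match projects to people"))
# ('recommendation (e.g., Matchbox Recommender)', 'accuracy')
```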

Four things to know about the Matchbox Recommender:

  1. I have no idea why it is called that. But it is kind of a fun name. I will take saying that over “Two-class Averaged Perceptron” any day.
  2. It is basically a movie recommendation algorithm. Probably fitting, since this whole project was inspired by Netflix.
  3. We are treating our projects as “Movie Watchers” aka Users, and our resources as “Movies” aka Items. Netflix matches Users to Items based on past viewing history. We match projects to resources based on past staffing history.
  4. Here is a much fancier explanation from the folks at Microsoft:

“The main aim of a recommendation system is to recommend one or more items to users of the system. Examples of an item could be a movie, restaurant, book, or song. A user could be a person, group of persons, or other entity with item preferences.

There are two principal approaches to recommender systems. The first is the content-based approach, which makes use of features for both users and items. Users may be described by properties such as age and gender, and items may be described by properties such as author and manufacturer. Typical examples of content-based recommendation systems can be found on social matchmaking sites. The second approach is collaborative filtering, which uses only identifiers of the users and the items and obtains implicit information about these entities from a (sparse) matrix of ratings given by the users to the items. We can learn about a user from the items they have rated and from other users who have rated the same items.

The Matchbox recommender combines collaborative filtering with a content-based approach. It is therefore considered a hybrid recommender. When a user is relatively new to the system, predictions are improved by making use of the feature information about the user, thus addressing the well-known “cold-start” problem. However, once you have collected a sufficient number of ratings from a particular user, it is possible to make fully personalized predictions for them based on their specific ratings rather than on their features alone. Hence, there is a smooth transition from content-based recommendations to recommendations based on collaborative filtering. Even if user or item features are not available, Matchbox will still work in its collaborative filtering mode.”
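
To make those paragraphs concrete for myself, here is a toy sketch of the hybrid idea in plain Python. To be clear, this is not the actual Matchbox math (Azure ML Studio hides that inside a drag-and-drop module); it just illustrates collaborative filtering with a content-based fallback for the cold-start case, using made-up projects, resources, and ratings.

```python
# Toy illustration of a hybrid recommender -- NOT the real Matchbox algorithm.
# Projects play the role of "users", resources play the role of "movies",
# and past staffing history plays the role of ratings. All data is invented.

# Past staffing history: project -> {resource: how well the pairing worked (1-5)}
history = {
    "project_a": {"alice": 5, "bob": 3},
    "project_b": {"alice": 4, "carol": 5},
    "project_c": {"bob": 2, "carol": 4},
}

# Content features, used when a project is brand new (the "cold start" case).
project_features = {
    "project_a": {"skill": "python"},
    "project_d": {"skill": "python"},  # new project with no staffing history
}
resource_features = {
    "alice": {"skill": "python"},
    "bob": {"skill": "java"},
    "carol": {"skill": "python"},
}


def similar_projects(project):
    """Collaborative filtering: projects that have staffed the same resources."""
    staffed = set(history.get(project, {}))
    return [p for p, r in history.items() if p != project and staffed & set(r)]


def recommend(project):
    """Suggest resources: collaborative filtering when history exists,
    content-based matching when it does not."""
    if project in history:
        # Average the ratings that similar projects gave to resources we
        # have not staffed yet, and rank by that average.
        scores = {}
        for other in similar_projects(project):
            for resource, rating in history[other].items():
                if resource not in history[project]:
                    scores.setdefault(resource, []).append(rating)
        return sorted(scores, key=lambda r: -sum(scores[r]) / len(scores[r]))
    # Cold start: fall back to matching on features (e.g., skills).
    wanted = project_features.get(project, {}).get("skill")
    return [r for r, f in resource_features.items() if f.get("skill") == wanted]


print(recommend("project_a"))  # history exists -> collaborative filtering: ['carol']
print(recommend("project_d"))  # brand new -> content-based: ['alice', 'carol']
```

If you squint, the `if project in history` check is the “smooth transition” from the quote: lean on staffing history once it exists, lean on features until then.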

In all honesty, I have read those Microsoft paragraphs a kazillion times, gaining 1-2% of insight each time. I estimate I have finally crept over the 50% comprehension threshold.

Making decisions is always tough. And it is even tougher when you are still learning the underlying technology required by those decisions.

Innovation projects layer on yet another degree of difficulty. It is like you are asked to make a decision while on a treadmill. You would prefer to hit the pause button, take a deep breath, and make sure you have done a comprehensive evaluation before making a decision. But instead, the treadmill of new technology rolls on at 10 MPH. You have to make the best decision you can at that moment, hold onto the guardrails, and hope your legs keep up with the pace.

Next Up: Turning negative (data examples) into positive (predictions)

. . . .

We have an algorithm. Woot! Other good news on the homefront…

  • What’s making me happy – In the last installment, I shared how I got distracted and built a machine learning model to predict employee utilization. We now have it in production, as part of a broader project to reinvent our staffing process. And the results have been fantastic! Utilization is way up, which means profitability is too. That makes everyone happy 🙂
  • What’s making me feel smarter – On a whim, I submitted a speaker proposal to the Predictive Analytics World conference on the aforementioned Employee Utilization prediction model. The conference organizer reached out to me to learn more, and we traded a few emails back and forth. I never heard back after that, so I assumed that my proposal was rejected, like 95% of my applications are. Imagine my delight when I found out last week that it was accepted! Come see me in Vegas June 5 🙂
  • What’s making me feel dumb – Immediately after the rush of excitement came a huge wave of uncertainty. The people at this conference are the real deal (Uber, Twitter, YouTube). Am I totally out of my league? Am I going to make a fool of myself? Imposter syndrome is real, y’all. I currently have the following mantra on repeat in my head: “I am not an idiot. They wouldn’t have picked me if I was an idiot. I can do this. I am not an idiot.”  :/

This is the eighth installment of my real-time case study on my first AI project. I plan to share what we are working on, what is going well, what is sucking at the moment – everything – as it happens.

My hope is that by sharing our project’s small victories and painful bruises, you will be encouraged to tackle a project that scares the sh?! out of you too.