Finding balance on innovation projects is squirrely.
Like Doug the Talking Dog from Up, I find it ridiculously easy to get distracted by deviations from my original plan. While Doug got off track chasing actual squirrels, my squirrels have been other internal applications of Machine Learning.
As my ML knowledge and ability grow, I have a better understanding of where and how the technology can impact other parts of our business. I still want to help us staff smarter, the original goal of this project. But I also see how we could leverage ML to improve utilization, predict employee exits, and tackle a whole host of other business problems just waiting to be solved.
For instance, utilization is one of the key metrics, and pain points, of all professional services companies. Higher utilization means more of your consultants are busy on projects (which makes them happy). And it means they are billing more hours to a client (which makes the C-suite happy). When utilization drops, everyone gets grumpy.
While utilization is certainly a product of business development (sell more projects, keep more consultants busy), many less obvious factors have influence.
Over the summer, I was asked, along with a colleague, to do a deep dive into those factors. If we could understand what drives utilization, we could tweak the relevant business processes, and hopefully further improve profitability.
We approached the problem very traditionally. First, we collected lots of data, everything ranging from consultant skills to travel preferences to performance reviews. Next, we loaded everything into Excel and PowerBI. Finally, we sliced and diced and analyzed for hours on end.
At the project’s conclusion, we had a solid analysis and identified some key utilization influencers. It was a lot of work, but I was proud of what we had produced.
But the whole time, ML squirrels were running through my head. ML is supposed to identify patterns in data, so why couldn’t we use it to identify utilization influencers also?
I had a solid dataset thanks to the Excel/PowerBI exercise. It would not take much to throw it into Azure ML Studio and see what pops out.
So that’s what I did. I created a very simple classification model, trained it on past utilization data for our consultants, and hit “run.”
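In Azure ML Studio this was all drag-and-drop modules, but the underlying idea is easy to sketch. Here is a toy, standard-library-only version using a k-nearest-neighbors classifier on made-up data – the feature names (`skill_score`, `travel_flexibility`) and values are invented stand-ins, not our actual consultant dataset:

```python
import random

# Invented stand-in for past utilization data: each row is
# ([skill_score, travel_flexibility], "high"/"low" utilization label).
random.seed(0)

def make_row(label):
    if label == "high":
        return ([random.gauss(8, 1.5), random.gauss(0.8, 0.1)], label)
    return ([random.gauss(4, 1.5), random.gauss(0.4, 0.1)], label)

train = [make_row("high") for _ in range(50)] + [make_row("low") for _ in range(50)]

def predict(features, k=5):
    """Classify by majority vote of the k nearest training rows (k-NN)."""
    nearest = sorted(
        train,
        key=lambda row: sum((a - b) ** 2 for a, b in zip(row[0], features)),
    )[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

# A consultant resembling the "high" cluster should classify as high.
print(predict([8.5, 0.85]))
```

A real classifier (logistic regression, boosted trees, or Azure ML Studio's built-in two-class modules) does something more sophisticated than nearest-neighbor voting, but the shape of the exercise – past labeled rows in, predicted label out – is the same.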
It took us hours upon hours to do the traditional data-crunching analysis. It took the ML model less than one minute to do the same work. And the results of the two endeavors were stunningly similar.
The ML utilization classification model was able to predict whether a consultant would have high or low utilization with a high degree of accuracy – better than even our most seasoned operations leaders could.
And the model identified the influencing factors just as well as our traditional analysis. But it did it with more finesse than we did, as it was able to quantify the impact of those influencers in a way that we could not.
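That "quantify the impact" step is worth a quick illustration. One crude way to put a number on an influencer is to compare its average value between the high- and low-utilization groups. The snippet below does exactly that on a tiny invented dataset – the feature names (`certifications`, `remote_days_per_week`) and all the numbers are hypothetical, not our real analysis:

```python
# Invented rows: (certifications, remote_days_per_week, utilization_label).
rows = [
    (5, 1, "high"), (6, 2, "high"), (4, 1, "high"), (7, 3, "high"),
    (2, 4, "low"),  (1, 3, "low"),  (3, 5, "low"),  (2, 4, "low"),
]

def mean_gap(feature_index):
    """Average feature value for 'high' minus 'low' consultants:
    a crude, signed measure of how strongly the feature tracks utilization."""
    highs = [r[feature_index] for r in rows if r[2] == "high"]
    lows = [r[feature_index] for r in rows if r[2] == "low"]
    return sum(highs) / len(highs) - sum(lows) / len(lows)

for i, name in enumerate(["certifications", "remote_days_per_week"]):
    print(f"{name}: {mean_gap(i):+.2f}")
```

A trained model gives you richer versions of this (feature weights, permutation importance), but even this mean-gap sketch shows the kind of per-influencer number that our manual Excel slicing never produced.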
Machine Learning had worked. It took a business problem, analyzed the data, made accurate predictions, and revealed dependencies. And it did it far faster and more elegantly than a human could ever do.
I was already bullish on the potential use of ML on internal business processes. But this confirmed it for me. Our main ML project, Smart Staffing, could have a real impact on how we do things.
I have had a few more “squirrel” distraction projects over the last few months, snuck in during lulls in our main project. Each time I come away learning something new that I can apply to Smart Staffing.
And I come away reinvigorated. Innovation projects can be fun, but they can also be incredibly frustrating. Since you are tackling new stuff, answers do not always come easily. The constant fiddling and fussing as you look for solutions is draining. Taking a side step allows you to take a breather without getting too far from the action.
Distractions are not always bad. I encourage you to chase a few squirrels here and there.
P.S. Now that I have completed several ML projects, can I still call this blog series “My First AI Project”? 😉
Even with all these diversions, we have made solid progress on our original Smart Staffing project:
- What’s making me feel smarter – After a few too many “head pounding on desk” moments, we finally have Azure ML Studio playing nicely with our internal databases. With that essential piece in place, we can focus on the fun stuff – creating the ML model. All along we struggled with which algorithm to use (Clustering? Classification? Regression?). Then, in a moment of divine inspiration, a clever way to address our challenge came to me. And it just might work. (I’m keeping you in suspense for now – you will need to keep reading these posts to find out more!)
- What’s making me feel dumb – Even with the most clever of solutions, you have nagging “gotchas.” The model is not giving me quite the results I want yet – which means rethinking our feature selection and training dataset. Sometimes I feel like for every smart idea I have, I make two mistakes and overlook three things.
- What’s keeping me up at night – As the model looks more and more real, I dare allow myself to think about real-world implementation. We have a broad-strokes plan for integrating the model into our existing software. But now I am starting to think about those pesky, and fuzzy, implementation details. Just as you gain (something like) mastery of one phase of a project, you move into a new, less comfortable phase.
This is the seventh installment of my real-time case study on my first AI project. I plan to share what we are working on, what is going well, what is sucking at the moment – everything – as it happens.
My hope is by sharing our project’s small victories and painful bruises, you will be encouraged to tackle a project that scares the sh?! out of you too.