HR Technology Conference Reactions: Predictive Analytics

I’ve always thought I was pretty good at analytics.  Not being a practitioner sitting in the middle of data all the time, I get more time to just think about the type of analytics it takes to really run the business.  It’s been a long time since I discounted the usefulness of measures like time to hire in favor of things like quality of hire (efficiency versus effectiveness measures).  But I’ve always fought with predictive analytics.  In my opinion, they don’t really exist in HR yet.  We can trend our data and draw a trend line, but that does not predict our future – it simply tells us that, directionally, something is going to happen if we don’t change course.  I’ll admit that I walked into this session with a great deal of skepticism, but I walked out with some great insights.

The panel was made up of some great speakers.   Moderator: Jac Fitz-enz, Ph.D., (CEO, Human Capital Source), Laurie Bassi, Ph.D., (CEO, McBassi & Company), John R. Mattox II, Ph.D., (Director of Research, KnowledgeAdvisors), Eugene Burke, (Chief Science & Analytics Officer, SHL), Natalie Tarnopolsky, (SVP, Analytics and Insights, Wells Fargo Bank).

Theme #1:  Descriptive, Predictive, Prescriptive. Let’s start with some definitions as the panel did, but I’ll use a tennis example.  I don’t know if anyone has been watching the last few grand slams, but they have been using a good mix of all these types of analytics.  Descriptive is simple.  Roger Federer has won 16 tennis grand slams.  (I’m guessing as I’m on a plane typing this.)  Predictive is next and basically tells us what our destiny is going to be.  Roger’s record against Nadal in grand slam finals has not been particularly good.  If Rafa is on his game, hitting his ground strokes with the huge topspin he has, Roger is going to have to figure something out or lose again.  Here is where the last few opens have been interesting.  The broadcasters will sit there with the stats and say things like, “If Roger can get 67% of his first serves in, he has a 73% chance of winning” or “Roger needs to win 55% of Rafa’s second serves to have a 59% chance of winning.”  Now we have prescriptive – the specifics of what to do in order to change our destiny.

Theme #2:  Engagement. We probably focus on this a bit too much.  It’s not that engagement isn’t important, but it’s not specific or defined enough.  I mean, we all have a definition in our heads, but for 99% of us, it’s fluff.  My definition of engagement is the intangible quality that makes an employee want to provide that extra hour of discretionary work when other non-work opportunities exist.  Total fluff, right?  We can provide some correlations around engagement: if engagement increases by 1%, then turnover decreases X%, and so on.  What it provides is a great predictive measure, high level as it may be.  We know we need to increase engagement, and it is indeed important.  But it’s not the key measurement we have all been led to believe will solve all our problems.

Theme #3:  Predict winning. OK, so if engagement is not the key metric, then what is?  Well, I have no idea.  I’m not being snide, I’m just saying that it will change for every single organization.  If you are a (mall) retail organization, then having really good salespeople might be what hits the bottom line.  You could run the numbers and find out that if you rehire salespeople who worked for you last summer/holiday season, those salespeople are 20% more productive, whereas engagement reduces turnover by 1.3%.  Which metric are you going to focus on?  Right – how do you get those experienced salespeople back?  Instead of spending $1 on engagement, you could get 5 times the ROI on that same dollar elsewhere.  What we want to do is not predict outcomes.  We want to predict winning and understand what our highest contributors to winning will be.

Let’s take another example, this one from the panel.   Let’s say 5% of your workforce are high performers, but you can only give 3% of them promotions this year.  You also know that the 2% of top performers who don’t get promotions will likely leave the organization.  Now you have a problem.  You can’t afford to promote these people, but the cost of replacing top performers is extraordinary.  Analysis like this quickly leads you to decisions which are actionable.  At the end of the day, we need to compare our top drivers against our weaknesses to really figure out our greatest opportunities to invest in.
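The panel’s promote-or-lose tradeoff can be sketched with some numbers.  To be clear, the headcount, salary, raise, and replacement-cost figures below are my own illustrative assumptions, not the panel’s:

```python
# Hypothetical numbers for the panel's promote-or-lose tradeoff.
headcount = 1000
high_performers = int(headcount * 0.05)        # 5% are high performers
promotions_available = int(headcount * 0.03)   # budget only covers 3%
at_risk = high_performers - promotions_available  # the 2% likely to leave

# Assumed cost figures (purely illustrative)
avg_salary = 90_000
promotion_raise = 0.10        # a promotion costs ~10% of salary per year
replacement_multiple = 1.5    # replacing a top performer costs ~150% of salary

cost_to_promote_all = high_performers * avg_salary * promotion_raise
cost_of_attrition = at_risk * avg_salary * replacement_multiple

print(f"Promote all {high_performers}: ${cost_to_promote_all:,.0f}/yr in raises")
print(f"Lose the {at_risk} unpromoted top performers: ${cost_of_attrition:,.0f} to replace")
```

Even with made-up numbers, the shape of the answer is what makes the decision actionable: the replacement cost dwarfs the cost of the raises.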

Theme #4:  HR can’t do it. This part sucks.  Towards the end of the session, we walked through a statistical model.  Yeah, we could end this post right here, but I’ll continue.  The model – rather brilliant by HR standards – was presented by Wells Fargo.  Go figure that an ex-finance person working at a bank would have this all put together.  The point being, this was an ex-finance person, and the bank part is not wholly irrelevant.  All the stuff I said above really makes great sense.  But when it comes down to executing it, HR in most organizations does not have the skill set.  We don’t have very many statisticians on our HR staffs, and even if we did, HR executives would have a hard time seeing the vision and mustering the willingness to implement these technologies and models.  All is not lost, however.  Finance has been doing this stuff forever.  I mean, I’ll bet you anything that if interest rates drop by 1 basis point, Wells Fargo knows within seconds what the impact on profits is for savings, mortgages, etc.  Can’t we have/borrow/hire just a few of these guys?


Real Time Activity Analysis

Not only am I a geek, I’m a workout geek.  The latest geeky gadget I’m lusting over is a Withings E-Scale.  For just $165, I can get up in the morning, weigh myself, stand on the scale for a few seconds, have the scale measure my body fat, lean body mass, hydration level, and a few other things, and then have all of this uploaded through the home wi-fi to the internet.  I can then go online and see the trend of all of these factors over the time that I’ve been using the scale.  ((I actually weigh myself 4 times a day when I’m home.  It’s a California thing.))  Heck, I already have all of my bike ride data online for the last 4 years – I mean, I can compare how fast I pedal the crank arms on my bike today versus 2 summers ago in August.  Adding some health statistics seems reasonable to me.  Since it supports 8 users, my wife can get the same data on herself, although I’m pretty sure she would not want to, and I would be the focus of much derision for months.  Overall, while my bike stats tell me where I’m getting fitter, the scale would tell me the nuances of my body that contribute to fitness.

If I can get all of this personally for $165, I’m trying to figure out why it feels like I don’t have access to this type of data as an HR professional.

  • Case management tools are readily available:  Call centers do it.  If I go to my HR call center, they are probably tracking the number of cases each rep takes, how long it takes them to clear a call, etc.
  • Transaction data is usually available, but takes some effort:  I suppose I could audit my database tables to see how many employee name change processes there are and exactly how long they are taking.  But it’s not like I’m going to make my data entry staff use an extra minute to create a case for a 2 minute transaction.  Adding 50% effort to a small transaction is rather silly.
  • Data would be pretty impossible to get in an automated way: I mean, how much time does my staff spend in department meetings?  Not project meetings or something useful, I mean department get togethers, communications for what’s going on in the organization.  I’m not saying that this is not important stuff, but I had to run an activity analysis once just to prove to an organization that some of their people were spending 5% of their time in department meetings.

All I’m saying is that I feel like I should have a much better handle on my organization.  If I want to measure effectiveness, we seem to have dashboards for that.  Similar to my example where my bike stats can measure fitness, our dashboards can measure performance, talent acquisition, turnover, etc.  But similar to how a scale would then measure the minuscule core details of why I’m getting fitter (or not), I don’t feel like I have a dashboard for that.  For another example, we track training really well, but I think most of us would acknowledge that learning happens outside of training, and we don’t track real learning activities well at all.

We’ve come a long way in the last [number] of years.  I’m hoping that in 3 more years, we can look back at 2011 and think, “god, I’m glad we have this stuff (in 2014).”  But no matter how far we go in the next [number] years, there will still be critical gaps.

Better Measures for Engagement

Is it Gallup that has the “Do you have a best friend at work” question?  We’re so into doing employee surveys to measure employee engagement.  They provide us with a statistically validated measurement of our workforce once or twice a year.  We can look at the engagement studies, and if we have any luck at all, capture some high level data about the organization and then correlate the data back to turnover and productivity in specific population groups.  My question is this: Isn’t waiting 6 or 12 months for engagement measurements rather a long time in today’s world of real time analytics?

How about this:  ((The idea for this post came from:  Ariely, Dan.  “CEO’s probably think of their employees as more like rats in a maze than as people.”  Wired Magazine, UK Edition.  April 11, 2011.  Page 44))

  1. Measure the time of day employees log into their PC in the morning.
  2. Measure the time of day employees log out of their PC in the afternoon.
  3. Measure the cost per day per trip (expenses) calibrated to some standard.
  4. Measure the number of sick days on Monday and Friday.

I mean, why would you wait 6 or 12 months?

  • If your employees are (on average) coming to work later or leaving earlier, they are less engaged.
  • If the aggregated cost of a trip to NYC costs more per day, employees are “fudging” their expenses, and they are less engaged.
  • If Monday and Friday sick time is increasing (faked sick time), they are less engaged.

I mean, come on, we want to have close to real time measures, right?  I’m not saying that employee engagement actually changes on a day to day basis, but charted weekly, you could get some really cool trending data and identify exactly when the engagement curve increases or decreases.  You could then correlate all of the events that happened in that timeframe and figure out what is actually causing increases or decreases in engagement.  You could also isolate specific groups and populations (sample size would have to be large enough).  Say a VP leaves and is replaced, and 6 months later employees are staying at work later.   Or, the cost of a meal in NYC seems to be getting higher for a specific project team – are they celebrating, or are they all depressed and eating more?
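A minimal sketch of what that weekly trending could look like, using made-up daily login/logout and sick-day records (the dates, hours, and data shape are all invented for illustration):

```python
from collections import defaultdict
from datetime import date

# Hypothetical daily records: (day, login_hour, logout_hour, sick_flag)
records = [
    (date(2011, 10, 3), 8.5, 17.5, False),   # Monday
    (date(2011, 10, 4), 8.4, 17.8, False),
    (date(2011, 10, 7), 9.1, 16.9, True),    # Friday sick day
    (date(2011, 10, 10), 9.0, 16.8, False),  # Monday
    (date(2011, 10, 14), 9.3, 16.5, True),   # Friday sick day
]

# Aggregate to weekly buckets: hours on site and Monday/Friday sick days.
weekly = defaultdict(lambda: {"hours": [], "mon_fri_sick": 0})
for day, login, logout, sick in records:
    week = day.isocalendar()[1]             # ISO week number
    weekly[week]["hours"].append(logout - login)
    if sick and day.weekday() in (0, 4):    # Monday or Friday
        weekly[week]["mon_fri_sick"] += 1

for week, stats in sorted(weekly.items()):
    avg_hours = sum(stats["hours"]) / len(stats["hours"])
    print(f"Week {week}: avg {avg_hours:.1f}h on site, "
          f"{stats['mon_fri_sick']} Mon/Fri sick day(s)")
```

Charted week over week, a shrinking average day plus rising Monday/Friday sick time is exactly the kind of curve you could then line up against organizational events.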

How cool would it be to then look at performance in correlation with a weekly trend in engagement?  This is assuming that we start managing and developing our employees on an ongoing basis rather than once a year, but the possibilities are out there.

Serendipity versus Decision Support

Would I be where I am today if I had all the facts every time?  I’m actually confident that if it were up to me, I would be digging ditches for a living somewhere (not to demean ditch diggers).  Let’s face it, I started off all on the wrong foot.  Being an Asian American kid with a prodigy brother, I was definitely the stupid one (I’ll assume you’ve all read about the Asian “Tiger Mom” thing lately).  I was the kid who, at the age of 6, was told by my piano teacher to quit.  I was the kid who was told by my 5th grade teacher, “too bad you’re not your brother.”  I was the Asian kid who graduated high school with only a 4.2 GPA.  (All of that is true btw).  I was also the kid who by some miraculous stroke of good fortune, managed to get accepted to my first choice college.  Being of relatively low income, my parents were quite pleased when I got a significant financial aid package (nothing compared to the brother, who incidentally got into every single Ivy League – also true).  At some point in the summer, I was sent a letter from my college of first choice and informed that they would no longer be able to offer me the amount of aid that I required.  With quite a large amount of desperation, I called around to various colleges, and was re-admitted to my (I think) 4th choice school with the financial aid that I needed.

It was at this school (one of the Claremont Colleges in S. California), where rather than hordes of students in large auditoriums being lectured to (a system that had clearly failed me so miserably to this point), I was instead surrounded by classes of maybe 15.  OK, maybe 20 max.  Rather than being lectured to, we sat around a table and talked about the book we read in the prior week.  I sat on committees where I literally had a vote as a student to decide whether professors got tenure or not.  Rather than simply learning, I began understanding.  I really do consider this to be the first of several unplanned turning points.  Listen, I’m serious when I say that I was not a good student.  But learning for me happened a different way than for most.

We often talk about analytics and how it changes how we operate in HR.  High quality data leads to high quality choices – and often times that is true.  But it is also true that we don’t always have all of the data that we need at any specific point in time – if we had everything we needed to know, we might make vastly different choices.

I’ll take succession planning as an example.  We know who the top 10 succession candidates are for top positions (hopefully).  We know when they will be ready, what their relative skills and competencies are, and how their strengths compare to one another.  But we don’t know which of them are going to jump ship and go to another company before the position becomes vacant.  We don’t know which of them are going to stop growing, regardless of our best efforts to continue developing them.  The best that we can do is to invest in a pool of candidates, and hope that one of them, the right one, is ready when the time comes.

We use decision support and analytics to crunch the numbers for us, but at the end of the day, it’s still serendipity – it’s still luck.  The hope here is that while analytics and decision support can’t be a perfect predictor, we can in fact “make our own luck.”  We can improve our odds of getting the best outcomes.  At the end of the day, it is not serendipity versus decision support, but a combination of the two that will make our best data work for us.

Recruiting Effectiveness Measurement

Last post I wrote about recruiting efficiency measures.  From the effectiveness side, we’re all used to things like first year turnover rates and performance rates.  Once again, we’ve been using these metrics forever, but they don’t necessarily measure actual effectiveness.  You’d like to think that quality of hire metrics tells us about effectiveness, but I’m not sure it really does.

When we look at the standard quality of hire metrics, they usually have something to do with the turnover rate and performance scores after 90 days or 1 year.  Especially when those two metrics are combined, you wind up with a decent view of short term effectiveness.  The more people who stay, and the higher the average performance score, the better the effectiveness, right?

Not so quick.  While low turnover rates are absolutely desirable, they should also be assumed.  High turnover rates don’t merely indicate a lack of effectiveness; they indicate a completely dysfunctional recruiting operation.  Second of all, the use of performance scores doesn’t seem to tell me anything.

Organizations that are using 90 or 180 day performance scores have so much new hire recency bias that those scores are completely irrelevant.  It’s pretty rare that you have a manager review a new hire poorly after just 3 or 6 months.  In most organizations, you expect people to observe and soak in the new company culture before really doing much of anything, and this process usually takes at least 3 months.  So while the average performance score in the organization might be “3,” your 90 and 180 day performance scores are often going to be marginally higher than “3” even though those new hires have not actually done anything yet.  You’ll wind up with a performance score that compares favorably to the overall organizational score, making you think that your recruiters are heroes.  Instead, all you have is a bunch of bias working on your metrics.

I’m not sure I have any short term metrics for recruiter effectiveness though.  Since we don’t get a grasp of almost any new hire within the first year, short term effectiveness is really pretty hard to measure.  I’m certainly not saying that turnover and performance are the wrong measures.  I’m just saying that you can’t measure effectiveness in the short term.

First of all, we need to correlate the degree of recruiting impact that we have on turnover versus things like manager influence.  If we’re looking at effectiveness over 3 years, we need to be able to isolate what impact recruiting actually has in selecting applicants who will stick around in your organizational culture.  Second, we need to pick the right performance scores.  Are we looking at the actual performance score, goal attainment, competency growth, or career movement in # years?  Picking the right metrics is pretty critical, and it’s easy to pick the wrong ones just because they’re what everyone else is using.  Depending on your talent strategy, you might be less interested in performance and more interested in competency growth.  You might want to look at performance for lower level positions while the number of career moves in 5 years is the metric for senior roles.  One size fits all does not work for recruiting effectiveness, because the recruiting strategy changes from organization to organization and even between business units within the same organization.
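As a sketch of that last point, the effectiveness metric itself could be a per-tier configuration rather than a single universal number.  The tier names and metric labels here are invented for illustration:

```python
# Hypothetical mapping of role tier -> effectiveness metric, reflecting the
# idea that the right measure depends on talent strategy, not a single standard.
EFFECTIVENESS_METRIC = {
    "entry_level": "performance_score_year_1",
    "professional": "competency_growth_year_1",
    "senior": "career_moves_in_5_years",
}

def metric_for(role_tier: str) -> str:
    """Return the effectiveness metric configured for a role tier."""
    return EFFECTIVENESS_METRIC[role_tier]

print(metric_for("senior"))
```

The point of making this explicit configuration is that two business units can legitimately disagree on the mapping without either being wrong.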

Overall, recruiter effectiveness is not as simple as it seems, and unfortunately there isn’t a good way to predict effectiveness in the short term.  In fact, short term effectiveness may be one of those oxymorons.

Recruiting Efficiency Measurement

If you look through Saratoga, there are all sorts of metrics around measuring our HR operations.  For recruiting, these include all the standard metrics like cost/hire, cost/requisition, time to fill, fills per recruiter, etc.   Unfortunately, I’m not a fan of most of these metrics.  They give us a lot of data, but they don’t tell us how effective or efficient we really are.  You’d like to think that there is going to be a correlation between fills per recruiter to efficiency, and there probably is some correlation, but true efficiency is a bit harder to get a handle on.

When I’m thinking about efficiency, I’m not thinking about how many requisitions a recruiter can get through in any given year or month.  I’m not even sure I care too much about the time to fill.  All of these things are attributes of your particular staffing organization and the crunch you put on your recruiters.  If you have an unexpected ramp-up, your recruiters will be forced to work with higher volumes and perhaps at faster fill rates.  Once again, I’m sure there is a correlation with recruiter efficiency, but it may not be as direct as we think.

Back to the point: when I think about recruiting efficiency, I’m thinking about the actual recruiting process, not how fast you get from step one to step ten, or how many of those step-one-to-ten cycles you can get through.  Recruiting efficiency is about how many times you touch a candidate between each of those steps.  Efficiency is about optimizing every single contact point between every constituency in the recruiting process – recruiters, sourcers, candidates, and hiring managers.

The idea is that you should be able to provide high quality results without having to interview the candidate 20 times or have the hiring manager review 5 different sets of resumes.  If you present a set of 8 resumes to the hiring manager and none of them are acceptable, you just reduced your recruiting efficiency by not knowing the core attributes of the job well enough and not sourcing/screening well enough.  If you took a candidate through 20 interviews, you just reduced your efficiency by involving too many people who probably don’t all need to participate in the hiring decision and who are all asking the same questions to the candidate.  Sure, there is a correlation between the total “touches” in the recruiting process to time to fill, but “touches” is a much better metric.
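A toy sketch of counting touches per requisition from an assumed event log (the requisition IDs, event types, and log shape are all invented; a real system would pull this from the ATS):

```python
# Hypothetical event log: (requisition_id, event_type) per candidate contact.
events = [
    ("REQ-1", "recruiter_screen"), ("REQ-1", "hm_resume_review"),
    ("REQ-1", "interview"), ("REQ-1", "interview"), ("REQ-1", "offer"),
    ("REQ-2", "recruiter_screen"), ("REQ-2", "hm_resume_review"),
    ("REQ-2", "hm_resume_review"), ("REQ-2", "hm_resume_review"),
    ("REQ-2", "interview"), ("REQ-2", "interview"), ("REQ-2", "interview"),
    ("REQ-2", "interview"), ("REQ-2", "offer"),
]

# Count total touches per requisition.
touches = {}
for req, _ in events:
    touches[req] = touches.get(req, 0) + 1

for req, count in sorted(touches.items()):
    print(f"{req}: {count} touches")
```

REQ-2’s extra resume rounds and interviews show up directly in the count, even if its time to fill looked perfectly normal – which is exactly why touches is the better efficiency metric.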

I know we’ve been using the same efficiency metrics for ages upon ages, and most of us actually agree that we dislike these.  Touches within the recruiting process makes a whole lot more sense to me, as it gets to the actual root of the efficiency measurement.

Measuring the Temperature of your Workforce

For those of you who have read Dan Brown’s new book “The Lost Symbol,” there is an interesting concept of measuring the temperature of a population. I’ll provide a quick couple of paragraphs, noting that it’s really not critical in any way to the plot of the story:

Trish laughed. “Yeah, sounds crazy, I know. What I mean is that it quantified the nation’s emotional state. It offered a kind of cosmic consciousness barometer, if you will.” Trish explained how, using a data field on the nation’s communications, one could assess the nation’s mood based on the “occurrence density” of certain keywords and emotional indicators in the data field. Happier times had happier language, and stressful times vice versa. In the event, for example, of a terrorist attack, the government could use data fields to measure the shift in America’s psyche and better advise the president on the emotional impact of the event.

“Fascinating,” Katherine said, stroking her chin. “So essentially, you’re examining a population of individuals… as if it were a single organism.” ((Brown, Dan (2009). “The Lost Symbol.” New York, Doubleday.))

Admittedly, it is an interesting enough idea that as we enter a world of enterprise social media, we could install software that looks at the density of search terms, keywords, or even tags, and makes some meaning out of them that translates into good or bad moods. We could even do this today with emails, but it might be a bit more problematic to have all of our client related emails searched and cataloged for terms. Since social media postings within the enterprise are theoretically all open to the entire population, they are probably much easier to digest.

Theoretically, it should really not be all that hard to do. I mean, all you’re doing is having a search engine run in the background for a specific set of terms, and counting the number of occurrences. This can yield an emotion indicator that, given the size of your organization, probably does have some statistical degree of accuracy. Especially if you have discussion boards that allow employees to ask about the organization and what’s going on, you’ll certainly have some good data based on the language people are using. I’m sure there are linguists out there who can help translate the overall mood based on language and word choice, even though conversations may not be intended to display any mood at all.
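The occurrence-density idea can be sketched in a few lines.  The word lists and sample posts below are made up; a real deployment would need a proper sentiment lexicon (and, as I said, a linguist):

```python
import re
from collections import Counter

# Hypothetical emotional keyword lists -- stand-ins for a real lexicon.
POSITIVE = {"great", "excited", "win", "thanks", "love"}
NEGATIVE = {"frustrated", "blocked", "layoff", "worried", "broken"}

# Invented enterprise social media posts.
posts = [
    "Really excited about the new project, great kickoff!",
    "Still blocked on approvals, getting frustrated.",
    "Thanks everyone for the help this week.",
]

# Count every word across all posts.
words = Counter()
for post in posts:
    words.update(re.findall(r"[a-z']+", post.lower()))

pos = sum(words[w] for w in POSITIVE)
neg = sum(words[w] for w in NEGATIVE)
total = sum(words.values())

# "Occurrence density": net emotional keywords per 100 words.
mood = 100 * (pos - neg) / total
print(f"{pos} positive / {neg} negative terms in {total} words; mood index {mood:+.1f}")
```

Run daily or weekly over the whole enterprise feed, that single mood index becomes exactly the near-real-time engagement barometer the survey can’t give you.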

What intrigues me is the almost real time nature of this tracking. Rather than running the employee engagement survey once a year or quarter, you could have this assessment on a daily or weekly basis, again depending on the number of employees and the volume of traffic on your enterprise social media sites. While I have no idea if there is currently software out there that does this stuff, the possibilities seem to be quite simple and realistic. HA!! If anyone does it, let me know.