Common Sense KPIs Gone Wrong

I love dashboards.  I have a goals list on my phone tracking how many miles I’m supposed to run or ride on my bike.  I have a trending graph on the device that tells me how much I weigh.  The only reason I have not bought one of those fitness wristbands yet is because I just can’t stand things on my wrist!  I was just on the company call, where we went through the company performance dashboard, stack ranked the all-time leaders for ideation, and looked at all sorts of other visual and gamified graphics.  As employees, we should be managing our goals and goal progress, and some systems now have cool mobile components that can visually show where we are with each performance goal.  It’s great to be able to track where we are at any given time in almost any area of our lives.

Sometimes our KPIs go desperately wrong, even though they seem to make sense.  My current personal goal is to get back to 10% body fat.  For those of you who don’t know me, let’s just say I’m already one “skinny ass dude.”  The problem is that less fat, for a person who is generally athletic and outdoors as often as possible, sounds like a good thing.  The question is, is it the right thing?  We actually face the same problem in our HCM KPIs.  Here are a few examples.

Employee Referral %:  

  • Employees who are referred to us by other employees are our best people.  Right?  Almost all of us would agree that this is true, as these employees will have higher levels of engagement, are pre-screened as people we’d want to work with, and are capable and smart.  The referrer has a stake in the person’s success, but their credibility is also on the line, so they probably won’t be referring crappy people.
  • Often, we’ll see that companies want to achieve as high a referral % as possible.  This allows the company to get more great people, but also reduce recruiting costs.  The problem is that there’s also a tendency to refer people who are similar to us.  This is a problem on a couple of fronts.  First of all, there is an ideation and innovation problem.  If you recruit people who are similar to you, who have similar experiences, and who have worked in the same places, you are not getting your company’s due in diverse thinking.  Second, people like us are often not demographically diverse.  If you have a lot of “white dudes” and you want a 100% referral rate, you’re still going to be a bunch of “white dudes” in 5 years.

Employee Turnover %:

  • This one is fun.  Some organizations are SO proud of the low turnover that they have.  I’ll walk into a new project and within days I’m inundated with how they have achieved 5% turnover.  I mean, having employee engagement so great that nobody ever wants to leave is a great thing, right?  We see targets of 8% and lower all the time.
  • Depending on which philosophy you subscribe to, there is such a thing as “desirable turnover.”  Those are the Jack Welch bottom 10%, or the bottom 5-15% in your forced bell curve performance ratings.  Let’s just say that there are 10% of people in your organization at any time who SHOULD leave, and whom you should be encouraging to leave.  So if your desirable turnover is up to 10%, and your target is less than 10%, something is pretty much wrong.  Right?
  • The key is to figure out how to shift the conversation to unwanted turnover rate instead of total turnover rate.  A very high performing organization could have a total turnover rate at 12%, but if their unwanted turnover is only 2%, I’d say they are doing fantastically.

We all want high referral rates, and we also want low turnover rates.  These are great KPIs, but we take too much for granted and at face value.  Going to extremes just because there’s a number to hit impacts our organizations in a pretty negative way, and in HR, it usually means that we have some of the wrong people working for us.

Jocks vs. Nerds

“There’s this idea of the jocks vs. nerds thing. That sort of ended when the nerds won decisively. We now live in this era where your big summer tentpole movies can be hobbits and minor Marvel Comics superheroes and boy wizards. If you had told me when I was in junior high there would be a $200 million movie about Hawkeye and Black Widow, I’d be like, ‘Hawkeye — that guy’s lame!'” Jennings says. “Those nerds started running Hollywood studios, and our captains of industry became Asperger types with acne scars.”

I was reading about what Ken Jennings (of Jeopardy! fame) was up to these days, and there was the above quote that I found hilarious.  It’s totally true though, especially for a guy like me who lives in Silicon Valley.  High school might have been a time when the jocks ruled the world, and college was a transition time, but once you get into the workforce, while there are really great charismatic guys running businesses, the people who are really redefining the world and how we change our behaviors to adapt to it on a daily basis are quite clearly the nerds.

(Credit to the HR Technology Conference and Bill Kutik for bringing IBM’s Watson computer, which made me think about Ken Jennings)

I’m continuously thinking about analytics these days, and I’m starting to think that HR has also begun the transition from “people people” to something a bit nerdier.  Maybe we are in that college stage I mentioned above – we’re no longer just the people that you go to for benefits and worker’s comp – that was over a couple decades ago.  We’ve started down the path of Talent Management, and we’re probably still trying to figure that out.  We keep talking about really great analytics, but we really don’t do it well.  I think that to get HR to a truly mature level as a profession, we need to get nerdier.

Talent Management:
It’s entirely possible we’ve been wrong for the last decade.  We’ve built these incredible competency models, tracked how and when a goal should cascade, and automated all of our talent processes.  I don’t think the business is convinced that we’ve actually improved the core employee’s ability to get developed.  Think about what you yourself did 10 years ago.  Big deal that you can now enroll yourself in training on-line and you have a cooler performance tool that is not a piece of paper.  Have the majority of employees in any company really experienced a perceptible difference in talent and development outcomes?  I’m guessing not.

It’s entirely possible we need some nerds to take over.  I don’t care how much HR shepherds the process along – if the employee and manager don’t own their own talent, it’s game over.  The only way to do this (that I can currently think of) is to create easy-to-use, social, real-time talent engines.  I’m thinking of an engine that quickly allows a manager to give feedback or development instructions when and where they think of it, then have seamless execution (again in real time) by the employee.  All of this has to happen without the HR practitioner and then roll up at the end of the quarter or year so we get that macro view of progress.  Without real-time integration with the employee and manager though, all we have is another failed HR process.
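To make that less abstract, here is a minimal sketch of what the building blocks of such an engine might look like: a feedback event captured in the moment, and a quarterly roll-up for the macro view.  Everything here (the field names, the roll-up shape) is my own invention for illustration, not any particular vendor’s model.

```python
from dataclasses import dataclass
from datetime import date
from collections import defaultdict

@dataclass
class FeedbackEvent:
    """One piece of manager feedback or a development instruction, captured in the moment."""
    employee_id: str
    manager_id: str
    note: str
    created: date
    completed: bool = False  # flipped by the employee when they act on it

def quarterly_rollup(events: list[FeedbackEvent]) -> dict[str, dict[str, int]]:
    """Aggregate per-employee progress so HR gets the macro view without touching each event."""
    summary: dict[str, dict[str, int]] = defaultdict(lambda: {"given": 0, "completed": 0})
    for e in events:
        summary[e.employee_id]["given"] += 1
        summary[e.employee_id]["completed"] += int(e.completed)
    return dict(summary)

events = [
    FeedbackEvent("emp1", "mgr1", "Shadow the pricing call next week", date(2013, 1, 15), True),
    FeedbackEvent("emp1", "mgr1", "Draft the Q2 territory plan", date(2013, 2, 3), False),
]
print(quarterly_rollup(events))  # {'emp1': {'given': 2, 'completed': 1}}
```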

Where HR gets involved is not in shepherding the process, but instead in managing success.  If we can mine the data and understand who is doing what, what works, and where we are missing the mark, that is where the value is.  Somewhere along the way, process people are still important as we make the transition, and certainly we need great change people to get manager adoption, but what we really need are analytical nerds who get how to interpret data.

HR Analytics:
I really hope we don’t have illusions that we are any good at this – we’re not.  We have technical people and we have functional reporting people helping our organizations create reports.  We have vendors feeding us cool dashboards that we then flip and roll out to our managers and executives.  What seems to be missing to me… the statisticians.

Have you ever talked to the finance guys about what they are doing in their analytics functions?  The stuff they produce is absolutely amazing – and they are set up in a pretty different way than we are.  Financial models are very complicated, but shouldn’t our models of people resources be just as robust?  In fact, if anything we have more complex, more dynamic, and more diverse data sets.  If we were only dealing with numbers, our lives would be easier – but we deal with more complexity with less sophistication.  No wonder we walk into the executive boardroom and don’t get much credibility or respect.

It’s interesting – when I look at the type of people HR hires, we automatically know we’re not going to have the best friend relationships with Comp, Payroll, IT, etc…  Those guys are just different from us.  I mean, my God, they are analytical in a totally different way.  Embrace the difference – it’s what HR needs, and it’s not even enough.  I’d love to see us start to hire the nerds – math majors and people who can come up with complex statistical understandings of the HR world.  We are in our infancy for understanding HR, but it’s because we don’t structure our organizations in a way that creates deeper understanding.

Get used to the fact that the nerds have won.

Bread & Butter

It always frustrates me when I’m dieting – I have to forego one of my favorite food items:  Butter.  Butter (fat), along with bacon fat (fat), is one of those amazing joys of life.  When butter is great, a bit salty, a lot smooth, and a lot fatty, it is a wonderful thing.  Unfortunately, one cannot generally eat butter straight off the spoon without incurring some ridicule from friends.  Therefore, one must also eat bread.  To me, bread is not just a necessary evil.  Great bread on its own is also a joy of life.  It can be beautifully crusty on the outside, warm on the inside.  But sometimes when the bread is not great, it’s just a delivery system for the butter.  Perfect harmony ensues when both the bread and butter are great.

HR service delivery (you knew it was coming, don’t roll your eyes) is quite like bread and butter.  Imagine your HR business partners as the bread and butter as amazing data and insights.  When the HRBP is great, you have a wonderful partnership of a person who actively gets to know the business, builds great relationships, communicates, plans and collaborates effectively.  Unfortunately, the HRBP is often paired with crappy systems, inaccurate data, and poor reporting capabilities.  The business wants a partner, but they also want a partner that can help diagnose what is going on with their people.

Butter, on the other hand, is like great data.  When systems and data are in good order, access to reporting and discovering insights becomes possible.  Insights into the organization and people don’t mean anything, however, if all you have is some people at corporate who don’t have relationships with each business segment.  Data and insights get lost in the fray, lost on the wrong people, poorly communicated, and otherwise rendered meaningless.

Just as you can’t eat butter straight (again, without incurring ridicule), you need a good delivery system.  That’s the bread.  In this case, the insights can’t even be consumed without great HRBP’s to deliver them.  In a prior consulting firm that I worked for, we used to have a line at the bottom of each PowerPoint that said something like, “content should be considered incomplete without contextual dialog.”

We’ve been so caught up in data, big data, business intelligence, and predictive analytics that we’ve been on a quest to spend millions of dollars to fix all of our foundational data systems.  In a few years, we’re hoping to deliver amazing insights into the organization and pair processes with real-time intelligence that allows managers to know exactly what actions to take with people.  I’m the downer guy here to tell you that without the context of the great HRBP who understands the business, 80% of that cool data analysis is meaningless.  You don’t get insight without understanding the business – all you have is a cool analytic.

That poses the second problem.  Do we actually have great HRBP’s?  The analysis of that has been done in many other places, but the answer for the vast majority of us is “no.”  We’re spending millions of dollars on the data, but we still have not figured out how to transform our HRBP’s.  I’m not saying they are the HR generalists they were 10 years ago, but they still don’t usually have the full trust of the business, the ability to make business, people, financial, operations… correlations, and they still don’t understand the business the way they understand HR.  We still have work to do here, so realize that we can deliver the data, but whether we can make it meaningful is still uncertain.

In our quest for great data delivery to the business, let’s not forget that it’s the pairing of two great elements, working effectively together, that makes the data meaningful.

(this post was made possible during the consumption of some pretty good bread and butter)
(I thought about using “meat & potatoes” but I’m not quite as passionate about that)

HR, Twitter and Osama bin Laden

Yeah – I’m going to write about this.  I just finished watching Zero Dark Thirty on the plane, and I’m thinking back to that day.  I remember landing in the Chicago airport, booting up my phone and checking Twitter.  Scrolling through the feed, one tweet caught my eye: “bin Laden is down.”  The tweet was more than a couple hours old at that point, but I noticed it came from a friend of mine in India.  I then proceeded straight to the United lounge where I was in absolute disbelief – they had some random Court TV channel on or something.  I asked everyone to change channels to CNN, saying something like, “Guys, bin Laden is down, we need some news.”  I got blank stares and a, “Who are you and what are you smoking?”  By the time I left the club, everyone was hanging out next to the TV’s – it had finally made US media more than 4 hours after the event.

There are all sorts of Twitter analogies I love.  I love that Twitter can figure out the mood of the country every single day (probably every single minute) based on keywords.  I know that we don’t all use Twitter (hey, I’m totally a late adopter and I still barely use it to this day), but this post is really about social media and the pulse of your organization.  Hopefully you have something running whether it’s Sharepoint, SFDC Chatter, Jive or anything else.  The question is, “are you listening?”

There are all sorts of stories these days about customers who don’t go to the vendor customer service call center, but tweet problems on-line.  Service organizations are starting to get pretty good at monitoring Twitter and responding to people to fix problems.  I’m not saying that your HR service center needs to allow tickets to come in via social media, but when there is a thread about how bad the health insurance is, or that managers are not listening to employees, do you find out about that first, or does someone else bring it to your attention 3 days later?  You have the ability to get a view into the problem before it explodes into something bigger that execs are now worried about, but you have to be listening in the first place.  Seriously, do you want to bring it to your exec that there is a problem, or do you want your exec to bring it to you?

Mass Collaboration:
You can’t get this on email.  Even if you are using large distribution lists, most of the people on those lists ignore those emails.  Take it from me – I’m one of them.  You can get really interesting ideas out there, but if it’s in an email thread where the content is not managed, it’s not owned by the enterprise.  Social collaboration forums not only allow mass storage of insights, but they do it in perpetuity (until someone cleans up or archives).  If we’re all sitting in front of the news waiting 4 hours to get it, that’s pretty slow and we’re dependent on the distribution channel to tell us what’s important.  If we take to the user owned collaboration forums, we get to filter insights in real time.

Back to this idea of pissed off employees – there doesn’t always have to be a thread about something that is upsetting a group of people.  How cool would it be if you could create an algorithm that gives you a measure of employee engagement on a daily basis (OK, maybe weekly)?  Apologies to the vendors who sell engagement surveys, but if you could put together an algorithm that gave you engagement, split up on dimensions of level, job family, pay grade, and organization, you’d have a pretty powerful tool.  You might complain that you don’t have specific actions, but I’d disagree.  What is the use of an engagement survey that gives you a report every year?  Just like the crap about performance management not being meaningful, if it’s a year later, it’s too late.  On a weekly basis, you could dig into what comments are causing lower engagement scores, deal with them in the specific populations, and create engagement and solutions before things escalate.
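To be clear about what I mean by “algorithm,” here is a hedged sketch: score each internal social comment with a toy keyword model (a real implementation would use proper sentiment analysis) and average by segment each week.  The word lists, segments, and posts below are all made up.

```python
from collections import defaultdict
from statistics import mean

# Toy stand-in for a real sentiment model; the word lists are illustrative only.
POSITIVE = {"great", "love", "excited", "thanks"}
NEGATIVE = {"frustrated", "broken", "unfair", "ignored"}

def comment_score(text: str) -> int:
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def weekly_engagement(posts: list[dict]) -> dict[str, float]:
    """Average comment sentiment by segment (job family, level, org unit, and so on)."""
    by_segment = defaultdict(list)
    for p in posts:
        by_segment[p["segment"]].append(comment_score(p["text"]))
    return {seg: round(mean(scores), 2) for seg, scores in by_segment.items()}

posts = [
    {"segment": "Sales", "text": "Love the new comp plan, thanks"},
    {"segment": "Sales", "text": "The expense tool is broken again"},
    {"segment": "Engineering", "text": "Frustrated that managers ignored the survey"},
]
print(weekly_engagement(posts))  # Sales trending positive, Engineering trending negative
```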

Talent Management:
I wrote about this years ago, but I think it might actually be time.  I’m totally intrigued by the idea that you can get rid of your entire competency model and just use social media.  LinkedIn is getting closer, but it’s nowhere near perfect.  I don’t want anyone tagging me with skills.  What I do want is for HR to figure out what I’m good at by looking at my social media posts inside the corporate firewall.  If I post about HR Analytics and 20 people respond, that gives HR an idea that I might be interested in the subject.  If someone posts a question about HR Analytics and I respond, and I also get 20 “likes” for my answer, I might have some expertise.  As you aggregate all the social data over time, create a taxonomy to apply against business conversations, and apply all that data against employees, you have a pretty good idea of what people are thinking about and what they are good at.
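As a rough sketch of that aggregation, with completely arbitrary weights for posting versus answering and for likes (none of this comes from any real system), it might look like this:

```python
from collections import defaultdict

# Arbitrary weights: answering a question earns more credit than posting; likes amplify both.
WEIGHTS = {"post": 1.0, "answer": 2.0}

def expertise_scores(activity: list[dict]) -> dict[tuple, float]:
    """Aggregate (person, topic) scores from internal social activity."""
    scores = defaultdict(float)
    for a in activity:
        scores[(a["person"], a["topic"])] += WEIGHTS[a["kind"]] * (1 + a["likes"])
    return dict(scores)

activity = [
    {"person": "dl", "topic": "HR Analytics", "kind": "post",   "likes": 20},
    {"person": "dl", "topic": "HR Analytics", "kind": "answer", "likes": 20},
]
print(expertise_scores(activity))  # {('dl', 'HR Analytics'): 63.0}
```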

I’ll acknowledge that listening is only part of the solution – much of the other part is figuring out how to listen, what to listen to, and how to decipher what you are hearing.  There is a lot of static out there and you need good tools to get good insights back.  I also don’t know how far off social listening is for HR, but hopefully this gets us thinking.  It’s something we need to do as our organizations get more diverse globally, disconnected geographically, and technologically savvy.  Conversations are moving to social, and we have an opportunity.  Let’s grab it.

Is Big Data An HR Directive?

I have an argument with my wife every few years.  I tend to like cars with a bit more horsepower.  I mean, that 1 time a year when there is a really stupid driver about to crash into you, a few extra horses come in handy when you really need to speed away.  The problem is that 99.99% of the time, that extra horsepower is a luxury you really don’t need.  You’d get from point A to point B just as safely, and probably just as fast.  Sometimes though, that engine really does matter.  (My wife wins 90% of arguments, by the way)

Everyone in HRIT is talking about big data these days.  Unless I completely don’t get it, I thought this was what we’ve been working towards for years.  I mean, having ALL of our talent, core HR, learning, recruiting, payroll, benefits, compensation, safety, etc. data all in the same place and running analytics against it all was always part of the data warehouse plan.  I mean come on, what else is ETL for if not to grab data from all over the place, aggregate it into the ODS, and then figure out how to make sense of it all?  We’ve built a nice engine that caters to our needs 99.99% of the time.

I’m going to propose something:  Big data does not matter to HR.  It’s just a new name for something that does matter.  Business intelligence and truly focused analytics are what make us focus our actions in the right places.  BI, Big Data, I don’t care what we call it.  Just do it.  Either way, HR does not have a big data need at this point.  I’d propose that we can use Big Data technology to speed up our analytics outcomes, but that’s about all we need for the next few years.

In my simplified definition of Big Data, it comes down to two major attributes: the use of external data sources, and the lack of need to normalize data across sources.  If we look at it from this point of view, the reality would state that almost no HR department on the face of the earth is ready to take HR data and compare it with government census data, or employment data.  Let’s get really creative and take local population health statistics combined with local census to get some really interesting indicators on our own employee population health.  Right, we’re just not there yet.

Let me reverse the thinking for a moment though.  What about the other 0.01% of the time that our traditional BI tools just won’t help us out?  Going back to benefits examples, how many global organizations can really directly compare benefit costs across the entire world?  How many of those same global organizations have a great handle on every payroll code?  Much of the problem is that the data is often outsourced, and definitely not standardized.  Collecting the information is problematic in the first place, and standardizing annual changes is next to impossible in the second.  The beauty of Big Data is that in these cases, you’d actually be able to gather all of that data and not worry about how to translate it all into equal meanings.  The data might aggregate in a more “directional” way than you’d like, but you’d probably still have an acceptable view of what global benefits or payroll is doing.  It seems to me that this puts us quite a bit further ahead of where we are now.
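Here is a trivial illustration of that “directional” aggregation: each country’s extract keeps its own field names, and a best-effort mapping on read gets you a global number without a normalization project first.  The field names, FX factor, and figures are all invented for the example.

```python
# Each country extract arrives with its own schema; we map fields on read instead of
# forcing a single global standard up front. Mappings and figures are illustrative only.
extracts = {
    "US": {"records": [{"benefit_cost_usd": 1200.0}, {"benefit_cost_usd": 950.0}]},
    "DE": {"records": [{"kosten_eur": 800.0}]},
}
cost_fields = {"US": ("benefit_cost_usd", 1.0), "DE": ("kosten_eur", 1.3)}  # rough FX factor

total = 0.0
for country, extract in extracts.items():
    field_name, fx = cost_fields[country]
    total += sum(rec.get(field_name, 0.0) for rec in extract["records"]) * fx

print(f"Directional global benefit cost: ~${total:,.0f}")  # ~$3,190 in this toy data
```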

Listen, I know that HR has some place in Big Data at some point in the future, but the reality is that the current use cases for Big Data are so few and far between, and that we have so many other data projects to work on that we should continue investing in the current report and analytics projects.  Big Data will come back our way in a few years.

As I said, my wife usually wins the arguments.  We end up buying a car that has 175 horses under the hood, and I end up wishing we had more once a year.  But inevitably, automakers seem to up the game every few model years, and come 5 years down the road, that same car model now has 195 horses.  If I just wait long enough, those extra horses in the engine just become standard.





How To Give All The Wrong Answers

As per my last post, at the end of 2012 I was doing a family vacation in Taiwan.  Being with family for 2 weeks is quite an exposé of the mannerisms each of us has.  I was particularly intrigued by my brother’s questioning of my mother.  My brother would constantly ask my mother things like “why are we going to [city_name]?” instead of “what are we planning to do when we get there?” and “how much time will I need to prepare the kids to sit in the car?”  Luckily, we had my mother there fueling the ridiculous line of questioning.  90% of the time, her answers had nothing to do with the questions he was asking.

  • “Why are we going to [city_name]?” “Oh, let me tell you, when I was growing up, I used to play with my cousins there.”
  • “Mom, why are we going to [city_name]?” “Oh, did you see that beautiful view over there?”
  • “Mom, can you please just tell me why we are going to [city_name]?” “Don’t worry, you will love it.  It’s beautiful there.”

There are two items I’d like to diagnose.  First, are we actually listening to the question?  Second, did we understand the question?

The first is fascinating to me because I’m not sure we actually are listening.  Many of our reporting organizations are pure intake, create, output engines.  We grab the data that is asked for, create the report, and send it out hoping we got it right.  Basically, we are spec takers.  The second question follows right after the first.  Much of the time, we don’t know why report requesters want the data at all.  We could be asking ourselves why they want to know, and whether the data we are providing helps them solve a problem.  If we are really cool, we could be asking if they are even trying to solve the right problem or not.

Here are a few questions you should explore when data requests come your way:

  • How are you going to use the data?
  • What is the core problem you are trying to solve for?
  • Are there other data elements or analysis that we have that can help further?
  • Are there other correlated problems that we should try to answer at the same time?

For all intents and purposes, this post is the exact corollary of the prior one on how to ask the right questions.  The problem with being a non-strategic reporting organization is that if the wrong questions get asked, the output is doomed to be the wrong information as well.  But even worse, sometimes the wrong question gets asked and we still give the requester the wrong data back.  All this does is create churn – another report request, or bad data going to managers (who in turn trust HR a little less the next time around).

In the case of my brother, he asked the wrong question in the first place.  It would have been much more advantageous had he explained why it was important for him to prepare the children for the outing, have the right clothes, have enough food along, and maybe get them extra sleep.  I’ll never know if my mother would have given him the right information in return: “yes, it usually rains on that side of the island, it’s 40 minutes away, and we will be in a friend’s house so they can’t get too wild.”  But crafting the right answer is a tight collaboration, with both sides creating an understanding of what the objectives are.


How To Ask All The Wrong Questions

At the end of 2012, I was doing a family vacation in Taiwan.  When I say family vacation, I mean not just my wife and me, but my brother’s family along with my parents, visiting all of the senior members of the family (an important thing in Asian cultures).  There is an incredible exposure of habits and an interesting (but sometimes undesirable) analysis of where my brother and I got those habits from.  I was particularly intrigued by my brother’s questioning of my mother.  Let’s just say that getting 2 grown sons, their spouses, and our parents together creates a certain amount of strife.

Let’s also just say that my brother’s hauling around of two young children may have added to the stress – he really needed to understand the daily schedules and what was going to happen when.  Back to the questions: my brother would constantly ask my mother things like “why are we going to [city_name]?” instead of “what are we planning to do when we get there?” and “how much time will I need to prepare the kids to sit in the car?”  (more on my mom’s response in the next post)

The problem was not in the questions themselves, but in the thought process.  All too often, we ask questions about what we think we are supposed to know.  We want to know about turnover, headcount, spending per employee.  This information is useful, but it does not actually inform us about what our next actions are.  Being “strategic” to me means that we have a plan, and we are actively managing our programs towards that plan.  If we’re using data that just skims the surface, we have no ability to adjust course and keep moving in the right direction.

I’ve often heard stories about HR executives who go into the CEO’s office for a meeting to present data, and all they get are questions back that cannot be answered.  Some HR teams go into those meetings with huge binders (sometimes binders that I’ve sent with them), and those teams come out still not having answered the questions.  The problem is not with the data.  The problem is that the team has not figured out what the actionable metric is, and what the possible actions are.  No CEO cares about the data – they want action that ties back to what the strategic objective is.  In other words, why do they care?

Here are a couple things you can do to craft better questions:

  • Always think about the root of the question:  HR tends to analyze at the surface more than some other functions.  We have finance doing complex correlations and marketing doing audience analysis.  We’re reporting headcount and turnover to executives.  What kind of crap is that?
  • Be a child:  Ask why/what/how up to 3 times.  Why 1: “Why are we going to [city_name]?”  Why 2: “Why do I want to know what we are going to do there?” What 3: “What do the kids need to be prepared with?”
  • Take action:  If you ask a question that can be answered in such a way that you can’t take action, you asked the wrong question.
  • Create an intake form that customers can request through: make sure you ask the right questions here to ensure they think through the process and understand what they need.

Many of the organizations I consult with have some pretty robust analytics organizations.  When I dig under the covers, they are reacting to create ad hoc reports for managers and HR business partners.  Once a quarter they scramble to create a CEO report card to depict the state of HR programs.  This state is sad to me.  We should be doing deeper analysis and diagnosis on a daily basis.  If we asked the HRBP’s what and why they wanted data for, we’d probably find there is a huge amount of quality analysis being performed in silos that could be leveraged organizationally.


Using HR Analytics to WIN

So I have to admit it’s hard to come off the election and not write about this.  This election was defined by some pretty deep population analysis, incredible forecasting and pretty significant actions to try to address the most important populations.  All of this went right along with a prioritization model that was pretty strong.  If we look at Obama/Romney, they each had target demographics: Obama needed the youth vote, managed to capture more than his fair share of the female vote (because the GOP was being stupid, and a couple contributions by Romney as well), and he also predictably got the “minority” vote.  As the election neared, both candidates narrowed down their activities to a few key states (Ohio, Florida, Virginia, etc.)  They had incredible models about what percentage of each population’s vote they would get, and if they followed those models, all they had to do was figure out how to get those people to the polls.  It turns out that in the end, the Obama machine was far better, and the Romney machine actually broke down (the phone app they were going to use literally went down on election day, and thousands of faithful Romney campaign volunteers had no idea which houses to go to and make sure people actually voted).

At the end of the day, the story is the same as the one we have in HR.  It’s about winning.  The difference is that we all want to win at different things.  Some of us want to win at engaging our employees.  Others want to win by having the best IP.  Still others want to win by producing the best products in our category in the world.  What is great is that if we’re good at what we do, we’re not focused on running analysis about turnover and headcount (although we’ll do that anyway), but instead we’re focused on understanding exactly what 5 things increase engagement in our populations.  Taking that a step further, we’d know exactly which populations are the most impactful on the entire workforce and target those people.  A 1% increase in engagement in the 30-something sales guys might yield an advantageous network effect, while it takes a 3% increase in another population for the same to happen.

Similarly, if I want the best IP, I need to understand the roots of this.  Chances are it’s not just the standard talent management equations we need to figure out.  I mean, performance and succession are only going to take us so far.  Instead, if we’re figuring out the profile of the employees that created the most patent applications, the people who have the highest levels of trust in their subject matter, and specifically what are the attributes that made them successful, then we can start recruiting for those people, aligning our performance reviews not to achievements, but to the competencies we know work, and doing talent reviews that direct us to the right employee profiles rather than who we think is ready from a job perspective.

My assumption is that if you have created your HR strategy in the right way, and you have aligned that strategy with the corporate strategy, then HR is designed to make the organization WIN.  Therefore every analytic you are running should be directing your organization to that win.  Don’t forget about the mundane operational reports, but understand that focusing on that isn’t really helping you.

At the end of the day, Romney actually did almost everything right, and he really thought he was going to win.  The problem was that he made a few wrong assumptions.  The GOP really thought that all the pollsters were wrong – that they were over-emphasizing the Democratic vote that would turn out on election day – that Democrats, the youth, and Hispanics were really not as excited about Obama as they were 4 years ago.  Obama, on the other hand, had callers identifying who the voters were, whether they were voting for Obama, and then instructing the voter where their polling place was, good times to go, and what the plan was to get them there.  At the end of the day, forget about the other guy.  Just go run your analytics, make sure you are focused on what matters, and go out there and WIN!

Note:  Please assume no political commentary, simply an example of how analytics can be put into action.



Why Aren’t We Using Our Analytics?

There are everyday annoyances, and then there are just things that piss you off.  Usually the things that make me really angry are when something should have been fixed, but it hasn’t been.  Or something is happening too slowly where I should have just done something myself and been done with it.  Personally, I think I’m a pretty patient guy, but every now and then I’m shocked into anger.  Today’s topic?  NPR last week did a piece about the male and female wage gap.  Apparently it’s still there:

So, as the Washington Post notes, the authors tried to make everything as similar as possible. They tracked graduates with identical collegiate experiences, limited familiarity with the work world, and those who didn’t have spouses or children.

But the wage gap persisted.

The study found that in teaching, female college graduates earned 89 percent of what men did. In business, women earned 86 percent compared to men. In sales occupations, women earned 77 percent of what men took home.   (1)


If we were back in 1989, I get it – it was hard to pull and aggregate all of that data.  If it was 1995, only half of us were on PeopleSoft and the other half of us were managing something so clunky I’ve probably never seen it.  By 2001, everyone had upgraded due to the Y2K non-event and we all had basic reports.  We’re a decade after this and if we all just ran a report that had 2 rows on it, we’d have fixed this already.  Row 1 = Average Wage of Males in Org.  Row 2 = Average Wage of Females in Org.  You can get more sophisticated and run this by jobs or departments.  You could run this by job AND tenure.  You could get really creative in whatever you want to do.  But here’s the bottom line:

EFFING RUN THE REPORT!!!!!  (and while you’re at it, RUN IT ON ETHNICITY TOO!!!)
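And just so nobody can claim the report is hard, here is roughly what it looks like in a few lines of pandas.  The column names and numbers are placeholders for whatever your HRIS extract actually contains.

```python
import pandas as pd

# Column names are illustrative; swap in whatever your HRIS extract actually uses.
emps = pd.DataFrame({
    "gender":      ["F", "M", "F", "M", "F", "M"],
    "department":  ["Sales", "Sales", "Sales", "Eng", "Eng", "Eng"],
    "annual_wage": [62000, 70000, 64000, 98000, 101000, 112000],
})

# Row 1 / Row 2: average wage by gender across the org...
print(emps.groupby("gender")["annual_wage"].mean())

# ...and the slightly more sophisticated cut, by department.
print(emps.pivot_table(values="annual_wage", index="department", columns="gender", aggfunc="mean"))
```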

And helpful hint number 1:  You might want to run the thing and give it to managers right before/during merit time so they can see just what bastards they are.  And just so we are clear, there is no hiding behind “we don’t have the budget to give you more than 3%” on this one.  This is called the equity increase.  What you don’t have budget for is the $1M lawsuit that I hope is coming if you don’t act on it.

Helpful hint number 2:  Organizations I’ve worked with on analytics always had a gender and ethnicity pay, hiring and tenure scorecard as part of their monthly or quarterly deck.  Companies who do this right are looking at this every month/quarter and know exactly where they stand.  They won’t get it right 100% of the time because we’re still looking at aggregated data on a scorecard, but at least they know they are on target as a whole.

What’s sad is that while there are a variety of reasons this stuff happens (some managers really do suck, sometimes women may not negotiate as strongly, whatever), the effects of this are pretty serious.  Whether it’s gender or race, those impacted have about 10% less money for doing the same job to pay their bills, buy a nicer house, and send their kids to better colleges.  Guys, this is a big, big deal, and we are actually empowered to do something to expose it and bring it to light.  There’s no excuse if we don’t.

</end rant>

  1. Korva Coleman, “Equal Pay For Equal Work: Not Even College Helps Women,” NPR, October 24, 2012

(apparently I’ve forgotten how to APA footnote and I don’t care to look it up…)

HR Technology Conference Reactions: Predictive Analytics

I’ve always thought I was pretty good at analytics.  Not being a practitioner who is sitting in the middle of data all the time, I get more time to just think about the type of analytics that it takes to really run the business.  It’s been a really long time since I discounted the usefulness of things like time to hire, preferring things like quality of hire (efficiency versus effectiveness measures).  But I’ve always fought with predictive analytics.  In my opinion, they don’t really exist in HR yet.  We can trend our data and draw a trend line, but that does not predict our future – it simply tells us that directionally, something is going to happen if we don’t change course.  I’ll admit that I walked into this session with a great deal of skepticism, but I walked out with some great insights.

The panel was made up of some great speakers.   Moderator: Jac Fitz-enz, Ph.D., (CEO, Human Capital Source), Laurie Bassi, Ph.D., (CEO, McBassi & Company), John R. Mattox II, Ph.D., (Director of Research, KnowledgeAdvisors), Eugene Burke, (Chief Science & Analytics Officer, SHL), Natalie Tarnopolsky, (SVP, Analytics and Insights, Wells Fargo Bank).

Theme #1:  Descriptive, Predictive, Prescriptive. Let’s start with some definitions as the panel did, but I’ll use a tennis example.  I don’t know if anyone has been watching the last few grand slams, but they have been using a good mix of all these types of analytics.  Descriptive is simple.  Roger Federer has won 16 tennis grand slams.  (I’m guessing as I’m on a plane typing this.)  Predictive is next and basically tells us what our destiny is going to be.  Roger’s record against Nadal in grand slam finals has not been particularly good.  If Rafa is on his game, hitting his ground strokes with the huge topspin he has, Roger is going to have to figure something out or lose again.  Here is where the last few opens have been interesting.  The broadcasters will sit there with the stats and say things like, “If Roger can get 67% of his first serves in, he has a 73% chance of winning” or “Roger needs to win 55% of Rafa’s second serves to have a 59% chance of winning.”  Now we have prescriptive – the specifics of what to do in order to change our destiny.
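If you want to see the three layers side by side, here is a toy version with made-up match data: count what happened (descriptive), estimate the win rate above a first-serve threshold (predictive), and back out the serve percentage that historically tips the odds (prescriptive).  None of these numbers are real stats.

```python
# Made-up match history: (first-serve percentage, match won?).
matches = [(0.58, 0), (0.61, 0), (0.64, 1), (0.66, 0), (0.68, 1), (0.71, 1), (0.73, 1)]

# Descriptive: what happened.
wins = sum(w for _, w in matches)
print(f"Won {wins} of {len(matches)} matches")

# Predictive: empirical win rate when the first-serve percentage clears a threshold.
def win_rate(threshold: float) -> float:
    subset = [w for pct, w in matches if pct >= threshold]
    return sum(subset) / len(subset) if subset else 0.0

print(f"Win rate when first-serve % >= 67%: {win_rate(0.67):.0%}")  # 100% in this toy data

# Prescriptive: the smallest first-serve % that historically tipped the odds past 70%.
target = next(t for t in sorted(pct for pct, _ in matches) if win_rate(t) >= 0.7)
print(f"Get at least {target:.0%} of first serves in")
```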

Theme #2:  Engagement. We probably focus in on this a bit too much.  It’s not because it’s not important, but it’s not specific or defined enough.  I mean, we all have a definition in our heads, but for 99% of us, it’s fluff.  My definition of engagement is the intangible quality that makes an employee want to provide that extra hour of discretionary work when other non-work opportunities exist.  Total fluff, right?  We can provide some correlations around engagement.  If engagement increases by 1%, then turnover decreases X%, and so on.  What it provides is a great predictive measure, high level as it may be.  We know we need to increase engagement, and it is indeed important.  But it’s not the key measurement we have all been led to believe will solve all our problems.

Theme #3:  Predict winning. OK, so if engagement is not the key metric, then what is?  Well, I have no idea.  I’m not being snide, I’m just saying that it will change for every single organization.  If you are a (mall) retail organization, then having really good salespeople might be what hits the bottom line.  You could run the numbers and find out that if you rehire salespeople who worked for you the summer/holiday season last year, those salespeople are 20% more productive, whereas engagement reduces turnover by 1.3%.  Which metric are you going to focus on?  Right, how do you get those experienced salespeople back?  Instead of spending $1 on engagement, you could get 5 times the ROI on that same dollar elsewhere.  What we want to do is not predict outcomes.  We want to predict winning and understand what our highest contributors to winning will be.

Let’s take another example, this one from the panel.  Let’s say 5% of your workforce are high performers, but you only have promotions for 3% of the workforce this year.  You also know that the remaining 2%, top performers who don’t get promotions, will likely leave the organization.  Now you have a problem.  You can’t afford to promote these people, but the cost of replacing top performers is extraordinary.  Analysis like this quickly leads you to decisions which are actionable.  At the end of the day, we need to compare our top drivers against our weaknesses to really figure out our greatest opportunities to invest in.
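Here is the back-of-the-envelope version of that example.  Every number is an assumption I made up, but it shows how quickly this framing turns into an actionable decision.

```python
# All figures are illustrative assumptions, not benchmarks.
headcount        = 1000
high_performers  = int(headcount * 0.05)                  # 50 people
promotion_budget = int(headcount * 0.03)                  # room to promote 30
at_risk          = high_performers - promotion_budget     # 20 likely leavers

promotion_cost_each   = 15_000    # assumed incremental comp per promotion
replacement_cost_each = 150_000   # assumed recruiting + ramp-up for a lost top performer

cost_if_we_stand_pat = at_risk * replacement_cost_each
cost_to_promote_all  = at_risk * promotion_cost_each

print(f"Expected replacement cost: ${cost_if_we_stand_pat:,}")  # $3,000,000
print(f"Cost to promote the rest:  ${cost_to_promote_all:,}")   # $300,000
```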

Theme #4:  HR can’t do it. This part sucks.  Towards the end of the session, we walked through a statistical model.  Yeah, we can end this post right here, but I’ll continue.  The model, rather brilliant by HR standards, was presented by Wells Fargo.  Go figure, an ex-finance person working at a bank would have this all put together.  The point being, this was an ex-finance person, and the bank part is not wholly irrelevant.  All the stuff I said above really makes great sense.  But when it comes down to executing it, HR in most organizations does not have the skillset to do so.  We don’t have very many statisticians on our HR staffs, and even if we did, HR executives would have a hard time seeing the vision and having the willingness to implement these technologies and models.  All is not lost, however.  Finance has been doing this stuff forever.  I mean, I’ll bet you anything that if interest rates drop by 1 basis point, Wells Fargo knows within seconds what the impact on profits is for savings, mortgages, etc.  Can’t we have/borrow/hire just a few of these guys?


Commonizing Meaning

I have some favorite phrases that I’ve been picking up for years.

  • “Eh, voila!” universal for “eh, voila!”
  • “Ah, asodeska” Japanese for “I understand”  (sp?)
  • “Bo ko dien” is Taiwanese for highly unlikely or that’s ridiculous. (sp?)
  • “Oh shiitake” (shitzu is also appropriate), is an imperfectly polite way of saying “oh &#!+”

Basically, these are phrases that I love, but at least the latter two are meaningless to most people I say them to.  I could of course go to Japan and most people will know what I’m talking about when I tell them I understand them, but they will then look at me funny when I exclaim in the name of a mushroom in anger.

We face the same problems when we talk about data calculations in HR.  The most common of these is the simple headcount calculation.  “Simple?” you ask.  I mean, how hard can it be to count a bunch of heads that are working in the organization on any particular day, right?  The smart data guys out there are scoffing at me at this very moment.

First, we put on the finance hat.  Exactly how many heads is a part time person?  HR exclaims that this is why we have headcount versus FTE.  But finance does not really care, and they are going to run a headcount using a fraction either way.

Second, we put on our function and division hat.  Every division seems to want to run the calc in a different way.  And then there are realistic considerations to be made, such as the one country out there that outsources payroll and does not have a field to differentiate a PT versus FT person.  Or the country that has a mess of contractors on payroll and can’t sort them out.

Then you put on the analytics hat, and realize that when you integrated everything into your hypothetical data warehouse, the definitions for other fields had not been standardized around the organization, and you can’t get good headcounts of specific populations like managers, executives, and diversity.  I mean, is someone in management a director and above?  Or is she just a people manager?  How many people does she have to manage to be in management?  Are we diverse as an organization simply because we have a headcount that says we are more than 50% people of color, even though 2000 of those people are in Japan, where the population is so homogenous that any talk of non-Japanese minorities is simply silly?

Then you put on your math hat and some statistician in the organization tells you that you can’t average an average, or some nonsense like that.
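The statistician is right, by the way.  Here is the classic illustration, with invented numbers: averaging two divisional averages gives a very different answer than weighting by headcount.

```python
# Two divisions report their own average tenure; their headcounts are very different.
divisions = [
    {"name": "Manufacturing", "headcount": 9000, "avg_tenure_yrs": 12.0},
    {"name": "Retail",        "headcount": 1000, "avg_tenure_yrs": 2.0},
]

naive = sum(d["avg_tenure_yrs"] for d in divisions) / len(divisions)
weighted = sum(d["avg_tenure_yrs"] * d["headcount"] for d in divisions) / sum(
    d["headcount"] for d in divisions
)

print(f"Average of averages: {naive:.1f} years")     # 7.0 -- the wrong answer
print(f"Headcount-weighted:  {weighted:.1f} years")  # 11.0 -- the right answer
```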

So the Board of Directors comes to HR and asks what the headcount of the organization is. You tell them that you have 100,000 employees, plus or minus 10%. Yep, that’s going to go over really well.

I’m not saying it’s an easy discussion, but all it really takes is getting everyone into the same room once (OK, maybe over the course of a couple of weeks) to get this figured out.  I’ve rarely seen an organization that is so vested in its own headcount method that it can’t see the benefits of a standardized calculation.  In fact, most of the segments within the organization are usually clamoring for this and we just have not gotten around to it yet, or we think they are resistant.  In the end, it’s really not so hard, and we should just get to it.


Serendipity versus Decision Support

Would I be where I am today if I had all the facts every time?  I’m actually confident that if it were up to me, I would be digging ditches for a living somewhere (not to demean ditch diggers).  Let’s face it, I started off all on the wrong foot.  Being an Asian American kid with a prodigy brother, I was definitely the stupid one (I’ll assume you’ve all read about the Asian “Tiger Mom” thing lately).  I was the kid who, at the age of 6, was told by my piano teacher to quit.  I was the kid who was told by my 5th grade teacher, “too bad you’re not your brother.”  I was the Asian kid who graduated high school with only a 4.2 GPA.  (All of that is true btw).  I was also the kid who by some miraculous stroke of good fortune, managed to get accepted to my first choice college.  Being of relatively low income, my parents were quite pleased when I got a significant financial aid package (nothing compared to the brother, who incidentally got into every single Ivy League – also true).  At some point in the summer, I was sent a letter from my college of first choice and informed that they would no longer be able to offer me the amount of aid that I required.  With quite a large amount of desperation, I called around to various colleges, and was re-admitted to my (I think) 4th choice school with the financial aid that I needed.

It was at this school (one of the Claremont Colleges in S. California) where, rather than hordes of students in large auditoriums being lectured to (a system that had clearly failed me so miserably to this point), I was instead surrounded by classes of maybe 15.  OK, maybe 20 max.  Rather than being lectured to, we sat around a table and talked about the book we read in the prior week.  I sat on committees where I literally had a vote as a student to decide whether professors got tenure or not.  Rather than simple learning, I began understanding.  I really do consider this to be the first of several unplanned turning points.  Listen, I’m serious when I say that I was not a good student.  But learning for me happened a different way than for most.

We often talk about analytics and how it changes how we operate in HR.  High quality data leads to high quality choices – and often times that is true.  But it is also true that we don’t always have all of the data that we need at any specific point in time – if we had everything we needed to know, we might make vastly different choices.

I’ll take succession planning as an example.  We know who the top 10 succession candidates are for top positions (hopefully).  We know when they will be ready, what their relative skills and competencies are, and how their strengths compare to one another.  But we don’t know which of them are going to jump ship and go to another company before the position becomes vacant.  We don’t know which of them are going to stop growing, regardless of our best efforts to continue developing them.  The best that we can do is to invest in a pool of candidates, and hope that one of them, the right one, is ready when the time comes.

We use decision support and analytics to crunch the numbers for us, but at the end of the day, it’s still serendipity – it’s still luck.  The hope here is that while analytics and decision support can’t be a perfect predictor, we can in fact “make our own luck.”  We can improve our odds of getting the best outcomes.  At the end of the day, it is not serendipity versus decision support, but a combination of the two that will make our best data work for us.

Core HR is not a Hub

I came across this quite a few years ago as we started to tinker around with core HR systems needing to integrate with more systems than just payroll and benefits, and as analytics engines started to take off en masse.  Back in the first part of the millennium, as talent applications started to get a serious look from the industry, the idea was that core HR applications could be the system of record for everything.  While some philosophical disputes existed back then (and continue to be pervasive), I think we fairly successfully resolved that core HR should not be the system of record for everything.

The easy stuff is around transactional systems.  We would never assume that core HR would be the system of record for things like employee taxes or benefit deductions.  It’s unlikely (assuming point solutions) that you’d want anything but the talent systems to be the master of performance and scores since the transactions take place outside of core HR.  We’ve pretty much determined that the core transactional system is going to be the system of record for almost all data elements, and I think this is a good thing.

The problem is that back in the day, we had it easy from an integration standpoint.  We really didn’t have that many systems, and so you really had core HR data going outbound to payroll and benefits, and you might have had recruiting inbound.  There was little doubt that Payroll had the outbound file to the GL and all the other payroll “stuff” like NACHA and taxes.  Clearly, integration has gotten a bit more complex over the years.  Rather than the obvious choices we had a decade ago, I now get to hear little debates about whether all the data from TM should be sent back so that you can run reports, and so that all data can be interfaced to other areas from core HR.  The answer is a big strong “NO!”

There are a couple things at play here.  Let’s talk about analytics first.  The idea in today’s world is that you’re supposed to have a data warehouse.  Sure, this was aspirational for many of you 3-5 years ago, but if you don’t have one today, you’re flat out lagging the adoption curve.  A data warehouse usually has this thing called an ETL tool, which assumes you are going to bring data into the warehouse environment from many sources.  Bringing data into the core HR system and then into the data warehouse is simply counterintuitive.  You don’t need to do it.  Certainly I have no objection to bringing in small amounts of data that may be meaningful for HR transactions in core HR, but bringing over data in large quantities is really unnecessary.

Second, let’s talk about integration to other systems.  I’ll be the first to admit that if you have SOA up and running, you are ahead of the curve.  In fact, if you are in HR, you are at least 2 years ahead of the curve, and maybe 3-5 years ahead of mass adoption.  The simple idea remains that integration should continue to grow simpler over time and not require the level of strenuous effort that it took in the past when we managed dozens of flat files.  However, I have a philosophical problem with trying to manage integration out of core HR.  The fact is simply that you are distributing information which core HR may not own.  The management and quality of the data cannot be guaranteed in a non-system of record, and your data owners for the element cannot be expected to manage data quality in 2 separate systems.

The whole idea of core HR as a data hub keeps popping up, and I see the whole discussion as problematic.  It stinks from a governance perspective, and it stinks from a technology perspective.  Well, now you know where I stand at least.

Recruiting Effectiveness Measurement

Last post I wrote about recruiting efficiency measures.  From the effectiveness side, we’re all used to things like first year turnover rates and performance rates.  Once again, we’ve been using these metrics forever, but they don’t necessarily measure actual effectiveness.  You’d like to think that quality of hire metrics tells us about effectiveness, but I’m not sure it really does.

When we look at the standard quality of hire metrics, they usually have something to do with the turnover rate and performance scores after 90 days or 1 year.  Especially when those two metrics are combined, you wind up with a decent view of short term effectiveness.  The more people that are left, and the higher the average performance score, the better the effectiveness, right?

Not so quick.  While low turnover rates are absolutely desirable, they should also be assumed.  High turnover rates don’t indicate a lack of effectiveness.  High turnover rates instead indicate a completely dysfunctional recruiting operation.  Second of all, the utilization of performance scores doesn’t seem to indicate anything for me.

Organizations that are using 90 or 180 day performance scores have so much new hire recency bias that the scores are completely irrelevant.  It’s pretty rare that you have a manager review a new hire poorly after just 3 or 6 months.  For most organizations, you expect people to observe and soak in the new company culture before really doing much of anything.  This process usually takes at least 3 months.  And while the average performance score in the organization might be “3,” your 90 and 180 day performance scores are often going to be marginally higher than “3,” even though those new hires have not actually done anything yet.  So you’ll have a performance score that compares favorably to the overall organizational score, making you think that your recruiters are heroes.  Instead, all you have is a bunch of bias working on your metrics.

I’m not sure I have any short term metrics for recruiter effectiveness though.  Since we don’t get a grasp of almost any new hire within the first year, short term effectiveness is really pretty hard to measure.  I’m certainly not saying that turnover and performance are the wrong measures.  I’m just saying that you can’t measure effectiveness in the short term.

First of all, we need to correlate the degree of recruiting impact that we have on turnover versus things like manager influence.  If we’re looking at effectiveness over 3 years, we need to be able to isolate what impact recruiting actually has in selecting applicants who will stick around in your organizational culture.  Second, we need to pick the right performance scores.  Are we looking at the actual performance score, goal attainment, competency growth, or career movement over a number of years?  Picking the right metrics is pretty critical, and it’s easy to pick the wrong ones just because it’s what everyone else is using.  However, depending on your talent strategy, you might be less interested in performance and more interested in competency growth.  You might want to look at performance for lower level positions, while the number of career moves in 5 years is the metric for senior roles.  A one-size-fits-all approach does not work for recruiting effectiveness because the recruiting strategy changes from organization to organization and even between business units within the same organization.

Overall, recruiter effectiveness is not as simple as it seems, and unfortunately there isn’t a good way to predict effectiveness in the short term.  In fact, short term effectiveness may be one of those oxymorons.

HR’s Correlation to Business

When we talk about the impact of HR activities on our business’s operational production, we don’t usually think there is a direct correlation.  In fact, some of our activities probably have a relatively high correlation with business outcomes, which might surprise us.  In defining correlation, we usually think about it on a –1 to +1 scale, with –1 being perfectly negatively correlated, 0 being no correlation, and +1 being perfectly positively correlated.  From an HR point of view, if we were able to show a positive correlation from our activities to business outcomes, that would be a pretty big win.

Personally, I don’t have any metrics since I don’t work in your organizations with your data.  However, with modern business intelligence tools and statistical analysis, it’s certainly possible to discover how our HR activities are impacting business outcomes on a day to day basis.

Take a couple of examples.  We know that high employee engagement leads to increased productivity, but we don’t always have great metrics around it.  Sure, we can point to some industry survey showing a #% productivity increase for every point the engagement surveys go up, but that is an industry survey, not our own numbers.  Especially in larger organizations, we should be able to continue this analysis and localize it to our own companies.  Similarly, we should be able to link succession planning efforts to actual mobility and actual results.  Hopefully we’d be showing that our efforts in promoting executives internally are resulting in better business leadership; but if we showed a negative correlation here, it would mean our development activities are lagging the marketplace, and we might be better served getting execs from the external market while we redefine our executive development programs.
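
A minimal sketch of that localization, assuming you can line up your own monthly engagement scores against a productivity measure for the same unit.  Both series below are made up for illustration.

import numpy as np

# Hypothetical monthly series for one business unit:
# average engagement survey score and units produced per employee.
engagement   = np.array([3.1, 3.4, 3.3, 3.6, 3.8, 3.7, 4.0, 4.1])
productivity = np.array([102, 108, 105, 112, 118, 115, 124, 127])

# Pearson correlation on the -1 to +1 scale described above.
r = np.corrcoef(engagement, productivity)[0, 1]
print(f"Correlation between engagement and productivity: {r:+.2f}")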

I’ll take a more concrete example.  Let’s say we’re trying to measure manager productivity.  We might write a simplified equation that looks something like this:

Manager Unit Productivity = High Talent Development Activity / (Low Recruiting Activity + Low Administrative Burden)

If this is true, we should be able to show a correlation between the amount of time a manager spends on development activities with her employees and increased productivity over time.  As the equation also expresses, recruiting activity should be negatively correlated with the manager’s team performance.  If the manager is spending less time recruiting, she is keeping employees longer and spending more time developing those employees, so any time spent recruiting is a drag on productivity.
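
A small sketch of how you might test that, assuming you can assemble per-manager activity hours and a team productivity index.  All of the columns and figures here are hypothetical.

import pandas as pd

# Hypothetical per-manager data: weekly hours spent on each activity
# and that manager's team productivity index.
managers = pd.DataFrame({
    "development_hours": [6, 4, 8, 2, 5, 7],
    "recruiting_hours":  [2, 5, 1, 6, 3, 2],
    "admin_hours":       [4, 6, 3, 8, 5, 4],
    "team_productivity": [112, 95, 120, 88, 104, 115],
})

# If the equation above holds, development hours should correlate positively
# with productivity, while recruiting and admin hours correlate negatively.
print(managers.corr()["team_productivity"].drop("team_productivity"))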

I’m not saying that any of these things are the right measures or the right equations.  What I am saying is that we now have the tools to prove our impact on business outcomes, and we should not be wasting these analytical resources on the same old metrics and the newfangled dashboards.  Instead, we should be investing in real business intelligence, proving our case and our value, and understanding what we can do better.

Recruiting Efficiency Measurement

If you look through Saratoga, there are all sorts of metrics for measuring our HR operations.  For recruiting, these include all the standard metrics like cost per hire, cost per requisition, time to fill, fills per recruiter, and so on.  Unfortunately, I’m not a fan of most of these metrics.  They give us a lot of data, but they don’t tell us how effective or efficient we really are.  You’d like to think there is going to be a correlation between fills per recruiter and efficiency, and there probably is some correlation, but true efficiency is a bit harder to get a handle on.

When I’m thinking about efficiency, I’m not thinking about how many requisitions a recruiter can get through in any given year or month.  I’m not even sure I care too much about the time to fill.  All of these things are attributes of your particular staffing organization and the crunch you put on your recruiters.  If you have an unexpected ramp-up, your recruiters will be forced to work with higher volumes and perhaps at faster fill rates.  Once again, I’m sure there is a correlation with recruiter efficiency, but it may not be as direct as we think.

Back to the point: when I think about recruiting efficiency, I’m thinking about the actual recruiting process, not how fast you get from step one to step ten, or how many times you can get through steps one to ten.  Recruiting efficiency is about how many times you touch a candidate between each of those steps.  Efficiency is about optimizing every single contact point between every constituency in the recruiting process – recruiters, sourcers, candidates, and hiring managers.

The idea is that you should be able to deliver high quality results without having to interview the candidate 20 times or have the hiring manager review 5 different sets of resumes.  If you present 8 resumes to the hiring manager and none of them are acceptable, you just reduced your recruiting efficiency by not knowing the core attributes of the job well enough and not sourcing and screening well enough.  If you took a candidate through 20 interviews, you just reduced your efficiency by involving too many people who probably don’t all need to participate in the hiring decision and who are all asking the candidate the same questions.  Sure, there is a correlation between the total “touches” in the recruiting process and time to fill, but “touches” is a much better metric.
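
A minimal sketch of counting touches, assuming your ATS can export a simple event log of contact points per requisition.  The requisition IDs and touch types below are made up.

import pandas as pd

# Hypothetical recruiting event log: every contact point is one row.
events = pd.DataFrame({
    "requisition": ["R1", "R1", "R1", "R1", "R2", "R2", "R2", "R2", "R2", "R2"],
    "touch_type":  ["screen", "resume_review", "interview", "offer",
                    "screen", "resume_review", "resume_review", "interview",
                    "interview", "offer"],
})

# Total touches per filled requisition -- the efficiency measure described above.
print(events.groupby("requisition").size())

# Breakdown by touch type shows *where* the extra touches are piling up
# (e.g., repeated resume reviews or redundant interview rounds).
print(events.groupby(["requisition", "touch_type"]).size().unstack(fill_value=0))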

I know we’ve been using the same efficiency metrics for ages upon ages, and most of us actually agree that we dislike them.  Counting touches within the recruiting process makes a whole lot more sense to me, because it gets to the actual root of the efficiency measurement.

Misinterpreting Apple and Microsoft

As a global community, we all hate Bill Gates.  Actually, we all hate Microsoft.  While we might depend on things like Microsoft Outlook and the MS Office suite of products, most of us probably believe the company has a bit too much of a monopoly and has bullied other companies around to keep a strong foothold in its industry.  Alternatively, we all love Steve Jobs and Apple.  This guy reinvented Apple and gave us some great products with amazing usability.  Instead of the “blue screen of death” from Microsoft, we have the incredibly usable iPhone.  Apple is also all about community, and its technologies have brought us closer together, creating now-ubiquitous applications (also on Google Android phones) that help us better connect in real time.

But sometimes image and marketing are everything.  The Bill and Melinda Gates Foundation is one of the largest philanthropic institutions in the world, providing funding for diverse programs in health, global economic development, and education.  Steve Jobs, on the other hand, had a foundation for about 15 months, but it was shut down after never doing much of anything.  Comparing the two business leaders, it appears that Gates is rather selfless in his charitable intentions, while Jobs (in the rare circumstances that he endorsed a cause) only mentioned one when it served the purposes of Apple and the growth of his personal wealth.

It’s easy to look at something, especially a set of data, and be swayed by our own personal experiences with it.  Events and our relationships with the business often supply opinions that may or may not be close to the truth.  In the Gates and Jobs example, we even have clean, quantitative data to “prove” that Jobs is the better guy:  Apple has overtaken Microsoft in market capitalization, and therefore the consumers have voted, not for the big corporate giant providing software to other big corporate giants, but for the provider of tools to everyday individual consumers.

I’m not arguing about which set of technologies is more deserving of our approval in terms of market cap, but about how these images affect our ability to interpret data in the absence of external influences.  The reason I’m an advocate of business intelligence analysts who view data from a function that is not touched by the externalities of the “real world” is that they can touch and feel the data with an objective eye.  Those of us who operate in the business and its processes can be blinded by prior results that are not directly related to the data, even though we think they are.  I’ve seen organizations completely disregard employee engagement surveys that identified terrible managers simply because production or sales happened to be “hot.”  We’ve been influenced by circumstantial evidence that very senior managers can’t be “messed” with, or that diversity data is better than it really is.  We’ve completely missed the mark on interpreting trend lines because we don’t have analysts who know how to look at data from a mathematical perspective.

At the end of the day, situational evidence is critical to how we interpret our version of reality.  We can’t ignore it, but at the same time, we have to be able to see through it and look at the data objectively before we reach a conclusion.  We can’t let our judgment about data be clouded by what our perceived reality already is, because if we do, our role in the business’s decision support chain becomes completely irrelevant.

Explaining Facts and Dimensions

Fact:  A turnover rate based on a calculation

Dimension:  Historical time trend of turnover

Fact:  The number of heads in your HR database

Dimension:  The breakdown of headcount by organization

Fact:  The number of FTE’s in any given job code

Dimension:  The demographic analysis of FTE’s in job code

I honestly have no idea how many of you have ever heard of a “star schema.”  I’m not sure I ever wanted to, and I’m not sure knowing what it’s called has ever really helped me out.  But somehow I picked it up along the way (not surprising for a data warehouse and analytics guy), and every once in a while I am reminded how difficult it is to explain analytical reporting to HR people who don’t already have it.  And the simple fact is that most of us don’t really have it yet, or only have small bits of it in a couple of our applications.

Facts are all those things that we can quantify.  They are… well, facts.  They are the things we run ad hoc or operational reports against, and are usually the things that we have reported against for years.  They are those turnover reports, or the headcount reports.

Dimensions, on the other hand, are the attributes we want to dynamically apply to the facts.  Let’s say I get a turnover report.  I’d love to right-click on one of the bars in the bar chart and see a historical trend for turnover; in this case, the dimension is time.  Or that same right-click might give me the option to see the turnover by business unit, so now the dimension is organization.  Perhaps we want turnover by age or gender instead, so now the dimension is demographic.

Sometimes, a data element can be a fact in one report and a dimension in another.  So age, a dimension in our turnover report, can also be a fact.  We run a report on the headcount by age groupings.  However, when we see that report, we decide that we want to know the race allocation within our 30-39 band, so now age is the fact and race is the dimension.

I have no idea if all that makes sense in text (and without actually seeing it), and I’m really avoiding trying to explain a “cube,” which I’m fairly sure I’ve tried to do before.  But there’s a reason we talk about this stuff.  When you set up analytics, you set up these star schemas first.  After you drop the data into your data warehouse, you create the schemas that form the basis of your analytics.  If you don’t know beforehand how you might want to see your data, you’re going to wind up with a very limited set of dimensions, and 2 months down the road you’ll be wondering why your implementer didn’t ask you if you wanted turnover by age, race, and gender.  Implementers implement what you tell them to.  It’s up to you to understand your own requirements.  The problem with data warehouses is that understanding the requirements only happens when you understand the capabilities of the technology.
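
For those who would rather see facts and dimensions in something other than prose, here is a tiny sketch using a made-up termination fact table.  The point is only that the same fact (termination counts) gets sliced by whatever dimensions you defined up front; every name and number below is hypothetical.

import pandas as pd

# Hypothetical fact table: one row per termination grouping.
terminations = pd.DataFrame({
    "year":          [2012, 2012, 2012, 2013, 2013, 2013],
    "business_unit": ["Sales", "IT", "Sales", "Sales", "IT", "IT"],
    "gender":        ["F", "M", "M", "F", "F", "M"],
    "terms":         [4, 2, 3, 5, 1, 2],
})

# The fact is the termination count; year, business unit, and gender are
# dimensions we can slice it by without rewriting the report.
print(terminations.groupby("year")["terms"].sum())                # time dimension
print(terminations.pivot_table(index="business_unit",
                               columns="year",
                               values="terms",
                               aggfunc="sum"))                     # org x time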

The Error of Operational Reports

Julian Lennon had a childhood friend named Lucy.  One day, he came home with one of those kindergarten drawings of a little girl and diamonds and presented it to his father.  John asked him what the drawing was, and Julian replied, “It’s Lucy in the sky with diamonds.”  Thus, the song.

You can take some core facts and, given those facts, make some assumptions, correlations, and logical guesses.  But in the end, assumptions, correlations, and logical guesses get you nothing more than a certain probability of correctness.  With John Lennon and the Beatles song, you have “Lucy in the Sky with Diamonds” (LSD) in an age when the Beatles were probably high every night.  For decades, that has been the accepted thinking about what was behind the song.  In reality, the truth is more innocent.

Operational reports present the same challenge.  I define operational reports in contrast to analytics.  Operational reports are typically one or two dimensional in nature and present a set of data that is static in time.  While you can make some pretty good guesses about what is going on in your organization, operational reports don’t allow the kind of deep digging that you can do in analytics to really identify root causes.

Turnover is often the easiest example.  If you have a turnover rate by department, you can figure out where your highest turnover is and do something about it.  But if you have trended analytics, you can see how turnover is shifting over time throughout your organization, dig into each organization by a number of factors such as gender, age, tenure, and specific job types, or look into the turnover performance of individual managers.  It’s not that you can’t do this with operational reports, but each report has to be run individually rather than coming out of a single “cube” of data.

Any reporting is great, and operational reports used correctly present a spectrum of facts that can then be interpreted with a fairly close degree of statistical accuracy.  But it still amazes me how many organizations are not going out there and spending the money to do real analytics and decision support.

We’ve been talking about the “seat at the table” for too long – how talent strategies make HR a real business partner to the core needs of our organizational leaders.  But as much as we talk about that seat, 80% or more of the large organizations I visit still don’t have good, detailed decision support tools deployed in a comprehensive and coherent manner.  You have to have a strategy around these tools, implement them well, and then actually know how to use them in a cross-functional manner with some business (not technical) sense behind it.  Let’s get moving already, people!