HR Technology Conference Reactions: Talent Management Panel

The talent management panel at the HR Technology Conference was all about diversity.  Not diversity in terms of workforce, but diversity in the approaches different companies take to deploying talent processes and technologies in pursuit of their goals.  With Jason Averbook hosting, we had Walmart (2+ million employees), Motorola Solutions (which called itself an 84 year old startup), Merck (single global system in 84 countries) and ETS Lindgren (900 employees).  At one end of the table we had 2.2 million employees; at the other end we had 900.  We had SAP globally, and we had Rypple/Work.com.

Here are some highlights (not direct quotes in most cases):

Theme #1:  Ongoing feedback. When even Walmart says it needs to deploy ongoing feedback for a workforce that is 2.2 million strong, this is something to watch.  Generally when we think of retail, we’re thinking about a population with a full range of competencies, from some very senior talent to some fairly low paid employees.  Saying that real time feedback is important for the entire population is a big deal, where many of us would traditionally just focus on the top tier of talent.  ETS Lindgren said much the same and has experienced a huge jump in positive feedback.  They have shown that social can really assist in the engagement equation, but realize that constructive feedback still happens either in private messaging or in manager conversations.

Theme #2:  Focus on what matters. Having just said in Theme #1 that you spread the wealth, there did seem to be a consistent theme around making sure that the roles that really drive revenue in your organization are the ones you focus on disproportionately.  There was a discussion about the “peanut butter spread,” and there seemed to be broad agreement that you provide some global focus, but your time is really spent managing the interactions with the employees who will impact your bottom line most directly.  I also want to add a Theme #2.5 here.  Merck had an important call-out, I think: they are starting with a revamp of their job structure.  For any deployment, be it TM, HCM or social, if your foundation sucks, you are not going anywhere.  You can roll things out, and you might get adoption, but you won’t have great measurement.  Merck had this to say: “If someone allowed the choice of getting the basics right or deploying collaboration tools, I’d say to look at the foundation.” More on measurement later.

Theme #3:  Things still need to get easier. Walmart had a nice example with talent reviews.  They used to walk into a room of executives with volumes of huge binders.  Instead of that, they now give everyone an iPad with the employee data preloaded.  This makes the discussion more dynamic and flexible, and you have significantly more data at your disposal than the binders ever held.  This is an example where it’s working, but there are still areas where getting at the data does not work.  Motorola put it this way: “If I want a restaurant recommendation, I ask my friends on Facebook and get immediate answers.  If I need a best practice, there should be an app for that too.”

Theme #4:  Flexibility. This one goes hand in hand with ongoing feedback.  One of the companies stated that it will go without formal reviews and formal ratings.  WHAT?!?!?!  Not having reviews and ratings is an experiment that some have tried in smaller organizations, but I’ll be excited to see how it works in a socially based larger organization.  This theme is also about the social thread that would not stop coming up in this panel.  Almost everyone seemed to have a social strategy that included not only conversations but also some ideas around recognition.

Theme #5:  Data and analytics. We talked a bit about Merck in Theme #2.  I also liked the blended TM/Social/Analytics theme that ETS Lindgren brought up:  We want to know who is having conversations and about what at any given time.  If we can figure out what our talent is talking about, how to connect others, and measure the impact of quality interactions on our bottom lines, then we can also figure out how to invest in growing those specific conversations.  (tie in to Theme #1).

Theme #6:  Sponsorship. Motorola had this to say, “Our CEO has 2 jobs.  Managing the bottom line, and managing talent.”  ETS Lindgren had this to say, “Our Rypple tool came from the CEO.  We wanted to do something different.”  Either way you cut it, they had great sponsorship to ignite and create change.  It doesn’t always have to be the CEO, but if you don’t have top level sponsorship at all, you’re sunk.

 

Annual HR Technology Survey

There are some things that inform all of us in HR about what is going on with our industry and where things are headed.  Of all these useful tools, the annual CedarCrestone HR Technology Survey is one of the top at making us all smarter.  Therefore, if I’m going to come out of sabbatical for anything, it’s going to be to plug the survey.

Over the years, Lexy Martin and the CedarCrestone survey have provided some of the most thought provoking ideas, content and industry insight I’ve seen, and I know that many others share this experience.  The thing is, while the overall results are always valid due to the sheer number of respondents Lexy gets compared to other surveys, that validity does not always carry over when we look at certain cuts of the data.  Therefore, while it’s the biggest survey in HR technology, more is ALWAYS better.

This year, Lexy has told me that if 100 people fill out the survey using the systematicHR link, all of those people will get a free iPad 3.  OK, so I’m lying about the iPad 3 thing, but you will have the gratitude of one of HR’s industry giants (Lexy) and from feeble little me as well.  So click the link and take the survey.  🙂

www.cedarcrestone.com/survey/systematicHR.html

Thanks!!!

-Wes

Real Time Activity Analysis

Not only am I a geek, I’m a workout geek.  The latest geeky gadget I’m lusting over is a Withings E-Scale.  For just $165, I can get up in the morning, weigh myself, stand on the scale for a few seconds, have the scale measure my body fat, lean body mass, hydration level, and a few other things, and then have all of this uploaded through the home wi-fi to the internet.  I can then go online and see the trend of all of these factors over the time that I’ve been using the scale.  ((I actually weigh myself 4 times a day when I’m home.  It’s a California thing.))  Heck, I already have all of my bike ride data online for the last 4 years – I mean, I can compare how fast I pedal the crank arms on my bike today versus 2 summers ago in August.  Adding some health statistics seems reasonable to me.  Since it supports 8 users, my wife can get the same data on herself, although I’m pretty sure she would not want to, and I would be the focus of much derision for months.  Overall, while my bike stats tell me where I’m getting fitter, the scale would tell me the nuances of my body that contribute to fitness.

If I can get all of this personally for $165, I’m trying to figure out why it feels like I don’t have access to this type of data as an HR professional.

  • Case management tools are readily available:  Call centers do it.  If I go to my HR call center, they are probably tracking the number of cases each rep takes, how long it takes them to clear a call, etc.
  • Transaction data is usually available, but takes some effort:  I suppose I could audit my database tables to see how many employee name change processes there are and exactly how long they are taking.  But it’s not like I’m going to make my data entry staff use an extra minute to create a case for a 2 minute transaction.  Adding 50% effort to a small transaction is rather silly.
  • Some data would be pretty much impossible to get in an automated way: I mean, how much time does my staff spend in department meetings?  Not project meetings or something useful; I mean department get-togethers, communications about what’s going on in the organization.  I’m not saying that this is not important stuff, but I once had to run an activity analysis just to prove to an organization that some of their people were spending 5% of their time in department meetings.

All I’m saying is that I feel like I should have a much better handle on my organization.  If I want to measure effectiveness, we seem to have dashboards for that.  Similar to my example where my bike stats can measure fitness, our dashboards can measure performance, talent acquisition, turnover, etc.  But similar to how a scale would then measure the minuscule core details of why I’m getting fitter (or not), I don’t feel like I have a dashboard for that.  For another example, we track training really well, but I think most of us would acknowledge that learning happens outside of training, and we don’t track real learning activities well at all.

We’ve come a long way in the last [number] of years.  I’m hoping that in 3 more years, we can look back at 2011 and think, “god, I’m glad we have this stuff (in 2014).”  But no matter how far we go in the next [number] years, there will still be critical gaps.

Annual Plug for the HR Technology Survey

Here’s the link – just go take it.

www.cedarcrestone.com/survey/systematicHR.html

For those of you who are actually going to read this, there are several surveys out there that the industry counts on for trending year over year. How much are people spending, what systems are being bought, what functions are getting the most focus, and how are users reacting? We’re always interested in finding out about emerging trends and the last few years have seen us go from talking about things like collaboration to actually implementing them.

I’m a big proponent of benchmarking. While I realize that any benchmark is to be taken with a grain of salt, or many grains, it’s still invaluable as a directional check on whether you are moving in the right direction at the right pace. I’ve decided over the years that saying an organization should peg itself to the “75th percentile” only leads to trouble, but being able to say “we’re ahead of the curve, and we’re focusing on many of the same things as our peers” is both comforting and strategically helpful.

The point being, CedarCrestone’s survey is the largest survey out there. They get not hundreds, but thousands of respondents. If any of you out there ever hire a guy like me, you depend not only on my experience gained from prior clients, but also on my keeping up with all of you through tools like this survey. It’s only as good as the number and mix of people who respond, though.

In the meantime, there was a very interesting Bill Kutik show a few weeks back where Lexy Martin discussed the origins, objectives and surprises of the HR Systems Survey: www.billkutikradioshow.com.

Deception and Selling Your Data

US President Obama is a Muslim, right?  Raised in Kenya, he’s a Mau Mau sympathizer, and actually not even a US citizen (masterfully covered up I must add).

Apparently in a new poll, fully 50% of the US population registered with the Republican party believes this. ((Joe Klein, “Huckabucking.”  Time Magazine, March 11, 2011.  Page 15.))  I mean, COME ON!!!! Seriously?  I remember having an argument with my uncle back in early 2002.  Before the US “shocked and awed” Iraq, there was a pretty large part of the US population that was quite sure we would never find WMDs there.  The evidence that Saddam didn’t have WMDs was actually stronger than the evidence that he did.  And listen, I’m sure the other side does it too.  I just can’t give you examples because, as a left leaning Democrat, I’m sure I’ve bought into all of the left’s version of crap.  (Please don’t stop reading because of that.)  At least I’ll admit it.

We lie.  Does not matter what side we’re on.  We lie to get what we want.  No, don’t call it manipulating the truth.  There is no truth in the tools that many politicians use to coerce their voting populace to give them money and votes.

How many of our organizations do we describe as “political”?  We politic to get ahead, to get funding, to get systems, to get employees, to get what we want.  Even though we all decry how much we hate it, we all play the game to some degree. In some cases (Mau Mau sympathizer??) we’re just making up crap.  In other cases (WMDs), we might actually believe we have the right information, or have manipulated the data to say what we want it to say.  ROI studies are probably the best example.  We’re big believers in ROI not just so we can get funding, but so we can get executive sponsorship.

There is an art to presentation and telling a story.  Crafting an effective story is truly the difference between getting change and not getting it, whether it’s a sale, executive sponsorship, funding, etc.  At the end of the day, the story is about conveying emotion, not data.  We use data as a tool, but we realize that the data can skew the emotion of what we’re trying to change by many degrees.  Knowing this, we sometimes  manipulate the data.

I’ll be honest in saying that every now and then the data surprises me.  I’ll go back to colleagues saying, “is this really telling me what I think it’s telling me?”  The result is going back and re-cutting the data several more ways to validate the results, and then, instead of crafting new results and “spinning the message,” going back to the client and admitting, “this is not what we expected.”

I recently did a survey with one of my clients which was supposed to tell me what their functional and system weaknesses were.  Instead, the data I got back was crap (self described).  It took me a couple days and many conversations to realize that the result I wanted (knowing what areas they needed to target to improve service delivery) was never going to materialize.  Instead, a completely new and unexpected story started to unfold in front of me.  Upon reanalysis and looking at the data in a completely different way, the client had deeper issues than functionality and technology.

Sometimes we know what we want, we get some data, and it’s just not corroborating our story.  But we end up telling our story anyway.  You know what, though? Somewhere in the data lies a truth waiting to be uncovered, and that truth is stronger than any fictional story we want to tell.  It’s hard work, and it isn’t always the direction we want to go, but get to the bottom of it.  There’s no point being 3 years down the road and wondering why anyone thought the Mau Mau thing was anywhere close to real.

Better Measures for Engagement

Is it Gallup that has the “Do you have a best friend at work” question?  We’re so into doing employee surveys to measure employee engagement.  They provide us with a statistically validated measurement of our workforce once or twice a year.  We can look at the engagement studies, and if we have any luck at all, capture some high level data about the organization and then correlate the data back to turnover and productivity in specific population groups.  My question is this: Isn’t waiting 6 or 12 months for engagement measurements rather a long time in today’s world of real time analytics?

How about this:  ((The idea for this post came from:  Ariely, Dan.  “CEO’s probably think of their employees as more like rats in a maze than as people.”  Wired Magazine, UK Edition.  April 11, 2011.  Page 44))

  1. Measure the time of day employees log into their PC in the morning.
  2. Measure the time of day employees log out of their PC in the afternoon.
  3. Measure the cost per day per trip (expenses) calibrated to some standard.
  4. Measure the number of sick days on Monday and Friday.

I mean, why would you wait 6 or 12 months?

  • If your employees are (on average) coming to work later or leaving earlier, they are less engaged.
  • If the aggregated cost per day of a trip to NYC goes up, employees are “fudging” their expenses, and they are less engaged.
  • If Monday and Friday sick time is increasing (faked sick time), they are less engaged.

I mean, come on, we want to have close to real time measures, right?  I’m not saying that employee engagement actually changes on a day to day basis, but charted weekly, you could get some really cool trending data and identify exactly when the engagement curve increases or decreases.  You could then correlate all of the events that happened in that timeframe and figure out what is actually causing increases or decreases in engagement.  You could also isolate specific groups and populations (sample size would have to be large enough).  Say a VP leaves and is replaced, and 6 months later employees are staying at work later.   Or, the cost of a meal in NYC seems to be getting higher for a specific project team – are they celebrating, or are they all depressed and eating more?
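Charted weekly, the measures above are really just a rollup job. Here is a minimal sketch of what that could look like, assuming you can export daily login/logout timestamps and sick-day flags; every field name and number below is invented for illustration:

```python
from datetime import date, time
from collections import defaultdict

# Hypothetical daily records: (employee_id, day, login, logout, called_in_sick)
records = [
    ("e1", date(2011, 5, 2), time(8, 55), time(17, 40), False),  # Monday
    ("e1", date(2011, 5, 6), None, None, True),                  # Friday, "sick"
    ("e2", date(2011, 5, 2), time(9, 20), time(16, 50), False),
    ("e2", date(2011, 5, 3), time(9, 5), time(17, 10), False),
]

def weekly_engagement_proxies(records):
    """Roll daily signals up to ISO weeks: average hours at the desk plus
    the share of employee-days lost to Monday/Friday sick calls."""
    weeks = defaultdict(lambda: {"hours": [], "mon_fri_sick": 0, "days": 0})
    for _emp, day, login, logout, sick in records:
        week = weeks[day.isocalendar()[1]]
        week["days"] += 1
        if sick and day.weekday() in (0, 4):  # Monday or Friday absence
            week["mon_fri_sick"] += 1
        if login and logout:
            minutes = (logout.hour - login.hour) * 60 + logout.minute - login.minute
            week["hours"].append(minutes / 60)
    return {
        wk: {
            "avg_hours": round(sum(w["hours"]) / len(w["hours"]), 2) if w["hours"] else None,
            "mon_fri_sick_rate": round(w["mon_fri_sick"] / w["days"], 2),
        }
        for wk, w in weeks.items()
    }

print(weekly_engagement_proxies(records))
```

If average hours start sliding week over week, or the Monday/Friday sick rate creeps up after, say, a VP change, you have an early flag months before the next survey cycle.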

How cool would it be to then look at performance in correlation with a weekly trend in engagement?  This is assuming that we start managing and developing our employees on an ongoing basis rather than once a year, but the possibilities are out there.

Commonizing Meaning

I have some favorite phrases that I’ve been picking up for years.

  • “Eh, voila!” universal for “eh, voila!”
  • “Ah, sou desu ka” (which comes out as “asodeska”) is Japanese for “I understand”
  • “Bo ko dien” is Taiwanese for highly unlikely or that’s ridiculous. (sp?)
  • “Oh shiitake” (shitzu is also appropriate), is an imperfectly polite way of saying “oh &#!+”

Basically, these are phrases that I love, but at least the latter two are meaningless to most people I say them to. I could of course go to Japan, and most people will know what I’m talking about when I tell them I understand them, but they will then look at me funny when I exclaim in the name of a mushroom in anger.

We face the same problems when we talk about data calculations in HR, the most common of which is the simple headcount calculation. “Simple?” you ask. I mean, how hard can it be to count a bunch of heads that are working in the organization on any particular day, right? The smart data guys out there are scoffing at me at this very moment.

First, we put on the finance hat. Exactly how many heads is a part time person? HR exclaims that this is why we have headcount versus FTE. But finance does not really care, and they are going to run a headcount using a fraction either way.

Second, we put on our function and division hat. Every division seems to want to run the calc in a different way. And then there are realistic considerations to be made, such as the one country out there that outsources payroll and does not have a field to differentiate a PT versus an FT person. Or the country that has a mess of contractors on payroll and can’t sort them out.

Then you put on the analytics hat, and realize that when you integrated everything into your hypothetical data warehouse, the definitions for other fields had not been standardized around the organization, and you can’t get good headcounts of specific populations like managers, executives, and diversity. I mean, is someone in management a director and above? Or is she just a people manager? How many people does she have to manage to be in management? Are we diverse as an organization simply because we have a headcount that says we are more than 50% people of color, even though 2,000 of those people are in Japan, where the population is so homogenous that any talk of non-Japanese minorities is simply silly?

Then you put on your math hat and some statistician in the organization tells you that you can’t average an average, or some nonsense like that.
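For what it’s worth, the statistician is onto something that bites headcount and pay reporting directly: averaging per-division averages ignores how big each division is. A quick sketch with invented numbers:

```python
# Two hypothetical divisions of very different sizes.
divisions = {
    "manufacturing": [40_000] * 900,  # 900 employees around $40k
    "corporate":     [90_000] * 100,  # 100 employees around $90k
}

# Naive: average the per-division averages.
per_division = [sum(pays) / len(pays) for pays in divisions.values()]
avg_of_avgs = sum(per_division) / len(per_division)  # (40k + 90k) / 2

# Correct: pool everyone, i.e. weight each division by its headcount.
everyone = [pay for pays in divisions.values() for pay in pays]
true_avg = sum(everyone) / len(everyone)

print(avg_of_avgs)  # 65000.0 -- overstates the org-wide average by $20k
print(true_avg)     # 45000.0
```

The same trap applies to averaging country-level turnover rates, adoption percentages, or anything else where group sizes differ.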

So the Board of Directors comes to HR and asks what the headcount of the organization is. You tell them that you have 100,000 employees, plus or minus 10%. Yep, that’s going to go over really well.

I’m not saying it’s an easy discussion, but all it really takes is getting everyone into the same room once (OK, maybe over the course of a couple of weeks) to get this figured out. I’ve rarely seen an organization that is so vested in its own headcount method that it can’t see the benefits of a standardized calculation. In fact, most of the segments within are usually clamoring for this, and we just have not gotten around to it yet, or we think they are resistant. In the end, it’s really not so hard, and we should just get to it.

Asodeska?

Decision Effectiveness

A few years ago, I had a custom set of wheels made for my bike. I had the rims specifically weighed and picked out of a set of about 10 rims. I had the spokes weighed and balanced to make sure they were the lightest ones. The spoke nipples (the threaded parts, basically the nuts that attach the spokes to the rims) were color matched to the paint on my bike. All said, the wheels weighed about 1,435 grams. Not crazy light, but pretty darn light. And they were fairly aerodynamic, having decently deep rims and bladed spokes to cut through wind. Being aerodynamic, they cut through wind pretty well, and being light, they accelerated and climbed well. But custom wheels cannot be laced as tight as some of the manufactured wheels out there. The one thing I lacked was the stability that comes from an incredibly tightly laced wheel.

I decided to give up my beloved wheel set and get a mass produced one (ok, so I have not yet seen another set of my wheels on the road, but still, they are not custom wheels). They happen to be just as light, almost as aerodynamic, and insanely sturdy. There is so little flex in my wheels that on hard corners going downhill at 45 mph, I have absolute confidence in them and I know exactly what they are going to do. Nonetheless, it was a hard decision to make, to replace my perfectly good older wheels.

I’ll admit it. Even I talk too much about governance and the structure and network it takes to have a good governance model. But regardless of the model, it is not about your governance model, it’s about the effectiveness of your decisions. Do you make the right decisions? How fast do you make a decision? How often do you execute your decisions as planned?

You can have a great governance model. You can be totally well informed about what goes on in the organization based on working groups that inform you about the state of HR. You can be well networked and statused. And with all of that, you can still make the wrong decisions or avoid making decisions.

I have seen organizations where the governance model was to include so many people in the decision that at the end of the day, nobody wanted to be accountable for the final decision. The group would reach a point of consensus so that if anything went wrong, nobody had to take accountability; they were all both blameless and at fault. It was also an environment where, when the group was close to consensus, if someone saw something was clearly wrong, nobody would stand up for fear of being the one accountable for a different decision.  It was a governance model. It was inclusive and well networked, but it turned out it was a bad model. Either nothing got done, or often, the wrong things got done. When it comes right down to it, you need to be inclusive and networked in the governance model, but you also need to be able to react quickly and authoritatively when the circumstances call for it. You need to have accountability for the decision that is separate from accountability for the execution and implementation of that decision. And you need to have the ability and the willingness to switch gears in the middle when you realize that something is either wrong, or just that something could be better.

I had a perfectly good bike, with perfectly good wheels. I’m continuously amazed at the quality of my new ride. It feels smoother when I ride over bumps, more solid when I ride down a hill, but just as light and aerodynamic. It was the right decision to make, even though it pained me to make it.

Perspective and Benchmarking

In my eyes, there are three components of being from California. First, you must always be on a diet. Second, you must always be too fat. Third, you must have a therapist (shrink for those of you in New York) who tells you that you have an eating disorder (if you are from Orange County, there is a fourth: you must have had plastic surgery). I, unhappily, qualify for all three of those conditions, and am therefore a perfect Californian. After all, it is the land of Hollywood, where image is everything. So yes, I am on a diet. I have a food diary that I enter my food and calories into on a daily basis. Those of you who know me know that I love to eat, and some of you may even frequent my food blog. But nevertheless, I am always watching my weight and my caloric intake (I do weigh myself 4 times a day). I am also considerably overweight, by California standards that is. I’m six feet tall, and at the writing of this post, I’m 157 lbs. Most people would consider me stick thin, and perhaps I’m a bit on the lean/lanky side for the general population. But in California, and especially since I’m a cyclist who needs to be light in order to go up any hills in my way, I really need to be 10 pounds lighter. My race weight in college was a scrawny 142. And back to that comment about weighing myself four times a day: you can see the point about the therapist.

While all of the above is unfortunately a true commentary about the mental state of “Dubs,” it goes right to my point about perspective. The truth of the matter is that as a once competitive cyclist (and I was pretty darn good in my prime), every pound actually counts. I don’t try to lose weight for the sake of losing weight, but to beat my local riding buddies up Mt. Tam on the weekends. Owning a ridiculously nice bike, there isn’t much more I can pay to drop a pound or two there, so it’s up to me to be a tiny bit leaner. I could compare myself to the population at large and realize that I am borderline underweight, but indeed, I don’t care to compare myself to the general population. I care about what other competitive athletes look like, and when I look there, yep, I’m fat.

I often talk to clients who want me to help benchmark them against the market. I totally agree that it’s good to know where you stand, just like I know I’m borderline underweight against the general population. But when it comes down to it, that doesn’t really tell them much. For example, if an agricultural or a manufacturing company ran at less than 50% of market for employee self service adoption, they might react negatively to it. But the truth would be that they are probably expected to lag the market in adoption a bit. On the other hand, if you were a services firm or a large consultancy at only 75%, I might tell you that you are lagging, even though you outperform a large majority of organizations.

The truth is that while it’s good to know, that is all it is: good to know. You don’t really know where you stand unless you benchmark yourself against other organizations that matter to you. In some industries, this data is notoriously easy to get. There are existing surveys, or you can ask a consultancy to run a specialized survey for you, something I have done for clients many times. In other industries (hotels and hospitality, for example), organizations are so secretive about their data that it’s all but impossible to get real data from competitors. The best you can do is hope to find a consultant that has worked with enough industry organizations to give you anecdotal data (beware of the consultant that might actually share data in a highly private and secretive industry – they will do the same with your data).

I get enough requests to benchmark against standard industry measures that I think it deserves saying: you need to be really careful when you use certain sources that will go unnamed. Just like it’s absurd to say that I’m fat unless you compare me against a very specific population, it’s also absurd for many organizations to look at themselves against every other organization in a survey rather than a smaller subset.
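To make the point concrete, here is a sketch of the difference between ranking yourself against a whole survey versus the peer subset that matters, with entirely invented survey responses:

```python
# Hypothetical survey responses: (industry, employee self-service adoption %)
survey = [
    ("manufacturing", 30), ("manufacturing", 35), ("manufacturing", 40),
    ("manufacturing", 45), ("manufacturing", 50),
    ("agriculture", 20), ("agriculture", 25), ("agriculture", 30),
    ("agriculture", 35), ("agriculture", 40),
    ("services", 80), ("services", 85), ("services", 90), ("services", 95),
]

def percentile_rank(value, population):
    """Percentage of the population strictly below `value`."""
    return 100 * sum(1 for v in population if v < value) / len(population)

my_adoption = 75  # a services firm at 75% ESS adoption

overall = percentile_rank(my_adoption, [v for _, v in survey])
peers = percentile_rank(my_adoption, [v for ind, v in survey if ind == "services"])

# Against the full survey this firm looks comfortably ahead of the pack;
# against its services peers it is dead last.
print(round(overall, 1), round(peers, 1))  # 71.4 0.0
```

Same number, two completely different stories, which is exactly why the comparison population has to be chosen before the percentile means anything.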

Back to Basics

It seems to me that there has been a renewed focus on core HR.  Now I need to tell the truth, I really thought core HR was dead.  I mean, it will always be around, but with the whole industry moving on to cooler things like analytics and talent management, who cares about core HR anyway?  Seriously, core HR is core HR, how many ways can you present an employee transfer or termination process?  How many different vendors can effectively pitch OSHA functionality and expect to win?  Within reason, all the core HR vendors are pretty much on an equal playing field, for one reason or another that I won’t bother going into.

So the industry is focused on talent and analytics.  The problem is that nothing seems to work if you messed up core HR.  People deployed a nice, automated performance process 3 years ago, and they got themselves away from paper.  But at the same time, the industry told them that they didn’t get where they needed to go, and a number of reasons went into this.  First, deploying a talent process just wasn’t enough, and the vendors are madly working on the next level of functionality that will actually help us manage talent as opposed to automating a process.  Second, however, talent management did not work the first time because we implemented it assuming our core HR systems were already healthy, and they were not.

First, job.  There seems to be a renewed effort around job these days.  I think we’ve realized that even with all the focus on competencies as a foundational building block of talent, job is still the foundational building block of HR.  Without job, nothing else seems to work.  The problem is that we’ve been neglecting job for eons.  OK, so that’s a stretch, but it’s certainly not uncommon for many of our organizations to have 5 times as many jobs as we need.  They are not standardized, they are redundant, and they have mismatched naming conventions.  They appear differently from country to country and business unit to business unit.  At the end of the day, a corporate organization can’t make any sense out of our jobs.  So we’ve gone back over the last few years and tried to start tightening up our job tables, which will in turn enable tighter competencies, performance, recruiting, succession, etc.  Not to mention that your analytics are worthless if you can’t use job as one of your core dimensions in your datamarts.
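As an illustration of how mechanical the first pass at a bloated job table can be, here is a rough sketch using only normalization and stdlib fuzzy matching; the titles are hypothetical, and real cleanup rules would need to be far richer:

```python
import difflib

# A hypothetical slice of a messy job table.
jobs = [
    "Sr. Software Engineer",
    "Senior Software Engineer",
    "SENIOR SOFTWARE ENGINEER ",
    "Software Engineer Senior",
    "Payroll Analyst",
]

def normalize(title):
    """Lowercase, expand one common abbreviation, and sort the tokens so
    reordered variants ("Software Engineer Senior") collapse together."""
    tokens = title.lower().strip().replace("sr.", "senior").split()
    return " ".join(sorted(tokens))

# Bucket titles that are identical once normalized.
buckets = {}
for job in jobs:
    buckets.setdefault(normalize(job), []).append(job)

dupes = {k: v for k, v in buckets.items() if len(v) > 1}
print(dupes)  # the four "senior software engineer" variants land in one bucket

# Fuzzy matching catches near-misses (typos) that normalization won't merge.
near = difflib.get_close_matches(normalize("Payroll Analist"), list(buckets), cutoff=0.8)
print(near)
```

Even a crude pass like this tells you which of your 5x-too-many jobs are candidates for collapsing before anyone starts arguing about competency models.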

Second, organization.  We don’t often think about how we think about org.  Is it a financial hierarchy? Operational? HR? People manager?  When we first implemented organization in core HR years ago, we may have tied it tightly to the payroll engine, and cost centers were the priority at the sacrifice of supervisor chains.  Or we decided that an operational structure made better sense, and we sacrificed how HR generalists needed to interact with employees in the structure.  Whatever the tradeoff we made, we didn’t realize that a couple years later, this thing called talent management was going to assume that core HR could provide a clean structure to talent.  I’ve been to organizations where the performance, compensation, succession, and hiring managers were all different.  Who the hell was going to think of that 5 years ago?  The point is that we’ve needed to go back to core HR and make some sense of our initial implementations.  And oh yeah, if your org is not clean enough to be a dimension in your datamarts – worthless again.

So the moral of the story is this:  if you’re late to the game and just getting to talent now, learn from those before you – fix core HR.  If you are not late to the game but have not fixed core HR, go do it now.

Serendipity versus Decision Support

Would I be where I am today if I had all the facts every time?  I’m actually confident that if it were up to me, I would be digging ditches for a living somewhere (not to demean ditch diggers).  Let’s face it, I started off all on the wrong foot.  Being an Asian American kid with a prodigy brother, I was definitely the stupid one (I’ll assume you’ve all read about the Asian “Tiger Mom” thing lately).  I was the kid who, at the age of 6, was told by my piano teacher to quit.  I was the kid who was told by my 5th grade teacher, “too bad you’re not your brother.”  I was the Asian kid who graduated high school with only a 4.2 GPA.  (All of that is true btw.)  I was also the kid who by some miraculous stroke of good fortune managed to get accepted to my first choice college.  Being of relatively low income, my parents were quite pleased when I got a significant financial aid package (nothing compared to the brother, who incidentally got into every single Ivy League school – also true).  At some point in the summer, I was sent a letter from my college of first choice and informed that they would no longer be able to offer me the amount of aid that I required.  With quite a large amount of desperation, I called around to various colleges, and was re-admitted to my (I think) 4th choice school with the financial aid that I needed.

It was at this school (one of the Claremont Colleges in S. California) where, rather than hordes of students in large auditoriums being lectured to (a system that had clearly failed me so miserably to this point), I was instead surrounded by classes of maybe 15.  OK, maybe 20 max.  Rather than being lectured to, we sat around a table and talked about the book we had read the prior week.  I sat on committees where I literally had a vote, as a student, in deciding whether professors got tenure.  Rather than simply learning, I began understanding.  I really do consider this to be the first of several unplanned turning points.  Listen, I’m serious when I say that I was not a good student.  But learning for me happened a different way than for most.

We often talk about analytics and how it changes how we operate in HR.  High quality data leads to high quality choices – and often that is true.  But it is also true that we don’t always have all of the data that we need at any specific point in time – if we had everything we needed to know, we might make vastly different choices.

I’ll take succession planning as an example.  We know who the top 10 succession candidates are for top positions (hopefully).  We know when they will be ready, what their relative skills and competencies are, and how their strengths compare to one another.  But we don’t know which of them are going to jump ship and go to another company before the position becomes vacant.  We don’t know which of them are going to stop growing, regardless of our best efforts to continue developing them.  The best that we can do is to invest in a pool of candidates, and hope that one of them, the right one, is ready when the time comes.
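The pool-investment logic can be sketched as a toy probability model.  To be clear, the per-candidate readiness numbers below are invented for illustration, and the model assumes candidates succeed or fail independently, which real succession pools rarely do:

```python
# Illustrative only: per-candidate probabilities are made-up numbers,
# and independence between candidates is an assumption.
def pool_success_probability(p_ready):
    """Probability that at least one succession candidate in the pool
    is still on board and ready when the position opens up."""
    p_none = 1.0
    for p in p_ready:
        p_none *= (1.0 - p)  # every candidate independently misses
    return 1.0 - p_none

# A pool of five candidates, each with a 30% chance of being ready
# and still around when needed, gives roughly an 83% chance that
# at least one works out:
print(round(pool_success_probability([0.3] * 5), 3))  # → 0.832
```

This is the sense in which a pool lets us “make our own luck”: no single bet is reliable, but the odds across the pool can be made quite good.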

We use decision support and analytics to crunch the numbers for us, but at the end of the day, it’s still serendipity – it’s still luck.  The hope here, is that while analytics and decision support can’t be a perfect predictor, we can in fact “make our own luck.”  We can improve our odds at getting the best outcomes.  At the end of the day, it is not serendipity versus decision support, but a combination of the two that will make our best data work for us.

Trend, Drill, Slice

I love talking about and manipulating data.  One of my favorite moments of the month is when I get together with one of the premier data people in HR and we just talk data for 30 minutes, just for kicks.  It’s fun talking data with someone who gets data, but often, we’re not really doing that.  The number of really high quality data people in HR seems to be rather low.  In fact, it’s really quite problematic that we can’t find enough people to run our analytics engines who have both a great depth in HR functionality and an understanding of how analytics works.  I was working with a Fortune 100 company very recently where their top HR analytics person globally did not have any real idea how a data warehouse worked.  Unfortunately, we seem to have started pulling in the best we have, which means a functionally good person, but not a technically good one.  I have seen some organizations start pulling in great data people from outside of HR, but they are obviously hampered by their lack of functional understanding.

When we talk about analytics and a super-user, it’s nice if we can assume there is a good grounding in HR.  If we also think about how a data warehouse works, I think we can simplify analytics core capabilities into 3 easy concepts that I’ll call trending, drilling and slicing.  These are all based on a data mart, or what is also called a star schema.  I won’t go into the technical details, but it enables the trending, drilling and slicing that sets OLAP analytics apart from operational reporting.  Here’s a quick and simple primer on how the technology can be used in functional terms.

  1. Trending:  Time is pretty much a standard dimension in the fact/dimension schema of a data mart.  Trending is the simplest concept – it simply allows the ability to easily look backwards in time, whether in daily, weekly, monthly, quarterly, or annual chunks.  Theoretically, you should even be able to take a graph report that has an annualized trend and click on any year to see the representative quarters or months within that year.  But that’s part of the next concept, drilling.
  2. Drilling: Drilling can really be done for any dimension represented in the schema.  You can take the example above where we drill into smaller and smaller chunks of time, but the most commonly used form of drilling is through the organizational model.  Let’s say you’re looking at a simple turnover report by division – you should be able to click on any division and see the layer below (say regions), and then click any of those and see the next layer of departments.  You could even drill based on the people manager hierarchy if that works better.  Either way, drilling means the ability to simply reveal a lower (or higher) level of detail.
  3. Slicing: The ability to cut data based on ad hoc parameters is also one of the wonderful things about OLAP analytics.  Let’s say you’re looking at that same turnover report and you find that one particular division has a particularly high turnover rate.  In order to diagnose what is going on, you decide to run several slices of turnover within that division.  The first is turnover by job, the second by average hours of training per employee, the third by compa-ratio, and the last by engagement scores.  Being able to easily view your data by slicing it with alternative data views is what allows us to be smart – it lets us diagnose, but you can also see the applications for decision support.  In a similar vein, let’s say you wanted to know what the contributors to low turnover are – you could simply run the same analysis in reverse.
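The three concepts above can be sketched against a toy fact table.  The rows, divisions, and termination counts below are all fabricated, and a real star schema would live in a database rather than a Python list, but the aggregation pattern is the same:

```python
# A minimal sketch of trend/drill/slice against a tiny, made-up fact table.
from collections import defaultdict

# Fact table: one row per termination bucket, with time and org dimensions.
facts = [
    {"year": 2010, "quarter": "Q1", "division": "Retail",    "terms": 40},
    {"year": 2010, "quarter": "Q3", "division": "Retail",    "terms": 35},
    {"year": 2010, "quarter": "Q2", "division": "Corporate", "terms": 10},
    {"year": 2011, "quarter": "Q1", "division": "Retail",    "terms": 30},
    {"year": 2011, "quarter": "Q4", "division": "Corporate", "terms": 12},
]

def rollup(rows, *dims):
    """Aggregate the fact measure along the requested dimension(s)."""
    out = defaultdict(int)
    for r in rows:
        out[tuple(r[d] for d in dims)] += r["terms"]
    return dict(out)

# Trending: terminations by year.
print(rollup(facts, "year"))  # → {(2010,): 85, (2011,): 42}

# Drilling: "click" 2010 and reveal the quarters beneath it.
print(rollup([r for r in facts if r["year"] == 2010], "quarter"))

# Slicing: the same measure cut by an alternate dimension.
print(rollup(facts, "division"))  # → {('Retail',): 105, ('Corporate',): 22}
```

Trend, drill, and slice all reduce to the same group-by over different dimensions, which is exactly why a star schema makes them cheap.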

You can see the problem with analytics.  The number of HR people who can translate data from a technical standpoint and understand HR functionally is actually pretty small.  However, we are getting to the point where vendors are automatically bundling great analytics tools with their applications, or where our enterprises are getting around to implementing data warehouse solutions for HR (now that they are done with finance and operations).  The problem is that we’ve always been functional, and we have not necessarily invested in creating the capability we will need to be powerful decision support contributors.  If you don’t have the tools now, you’re going to have them very soon – don’t assume the tools will do it all for you – you still need the right people.

How to Read a Newspaper

So, when I get on a plane, I often have a newspaper with me.  Whether you are on a plane, train, or anywhere in close quarters with other people, there is a bit of etiquette involved, and a standard trick that frequent travellers are supposed to know about.  Adherence to this trick is unfortunately minimal, however.  The trick is as follows:  Take the paper as it was delivered, and unfold just the middle crease without opening the paper – you have only page 1 in front of you.  Fold the paper in half lengthwise and backwards, and you should be able to see the left half of page 1.  Using this fold down the middle of the paper, you can read the entire paper without ever bothering the people sitting next to you.

When it comes to data, keeping everything in its place and not dispersing data into unwelcome areas is paramount.  HR data is probably the most sensitive data in the organization.  I’m not saying that other data that may contain trade secrets is not equally important, but HR knows stuff about our employees that they really don’t want released.  While openness about jobs and salaries seems to have increased with the younger generations, there is still a great deal of sensitivity around many issues, and certainly a large amount of data that must be protected from a compliance perspective (such as diversity information and ER claims).  While we have tried to segregate data in such a way that prevents unauthorized access into the database, security and access rights to the systems of record are only the tip of the iceberg when it comes to unraveling this problem.  Like an email, once a report is generated or an interface is created, the owner of the data simply loses control and can’t really ever be sure where that data is going to land.

There really aren’t any good solutions at this time.  You can restrict data so that it does not land in a data warehouse, or prevent integration to other systems, but at some point, there will be a hardcopy report floating on a desk, waiting to be whisked off by the wrong person’s hands.  I’m not really an advocate of putting huge amounts of controls on data.  I think that you appoint a system of record, data owners, access rights, and do your best in a well managed data environment.  I am curious about what others are doing out there to prevent unauthorized or unplanned dissemination of sensitive data other than simple data governance and data management measures.  Is there anything out there that can handle this yet?

Core HR is not a Hub

I came across this quite a few years ago as we started to tinker around with core HR systems needing to integrate with more systems than just payroll and benefits, and as analytics engines started to take off en masse.  Back in the first part of the millennium, as talent applications started to get a serious look from the industry, the idea was that core HR applications could be the system of record for everything.  While some philosophical disputes existed back then (and continue to be pervasive), I think we fairly successfully resolved that core HR should not be the system of record for everything.

The easy stuff is around transactional systems.  We would never assume that core HR would be the system of record for things like employee taxes or benefit deductions.  It’s unlikely (assuming point solutions) that you’d want anything but the talent systems to be the master of performance and scores, since those transactions take place outside of core HR.  We’ve pretty much determined that the system where the transaction occurs is going to be the system of record for its data elements, and I think this is a good thing.

The problem is that in the early days, we had it easy from an integration standpoint.  We really didn’t have that many systems, so you really had core HR data going outbound to payroll and benefits, and you might have had recruiting coming inbound.  There was little doubt that Payroll had the outbound file to the GL and all the other payroll “stuff” like NACHA and taxes.  Clearly, integration has gotten a bit more complex over the years.  Rather than the obvious choices we had a decade ago, I now get to hear little debates about whether all the data from TM should be sent back to core HR so that you can run reports, and so that all data can be interfaced to other areas from core HR.  The answer is a big strong “NO!”

There are a couple things at play here.  Let’s talk about analytics first.  The idea in today’s world is that you’re supposed to have a data warehouse.  Sure, this was aspirational for many of you 3-5 years ago, but if you don’t have one today, you’re flat out lagging the adoption curve.  A data warehouse usually has this thing called an ETL tool, which assumes you are going to bring data into the warehouse environment from many sources.  Bringing data into the core HR system and then into the data warehouse is simply counterintuitive.  You don’t need to do it.  Certainly I have no objection to bringing in small amounts of data that may be meaningful for HR transactions in core HR, but bringing over data in large quantities is really unnecessary.
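One way to picture the direct-to-warehouse point: each system of record feeds the ETL layer on its own, and the join happens in the warehouse, not in core HR.  A minimal sketch, where the source names, record shapes, and values are all hypothetical:

```python
# Hypothetical source extracts: core HR owns the org data, the talent
# system owns performance scores. Each feeds the warehouse directly,
# so neither system has to carry the other's data.
core_hr = [
    {"emp_id": 1, "division": "Retail"},
    {"emp_id": 2, "division": "Corporate"},
]
talent = [
    {"emp_id": 1, "perf_score": 4},
    {"emp_id": 2, "perf_score": 3},
]

def load_fact_rows(core_rows, talent_rows):
    """Join the system-of-record extracts on employee id in the ETL
    layer, producing warehouse fact rows."""
    perf = {r["emp_id"]: r["perf_score"] for r in talent_rows}
    return [
        {"emp_id": r["emp_id"], "division": r["division"],
         "perf_score": perf.get(r["emp_id"])}
        for r in core_rows
    ]

print(load_fact_rows(core_hr, talent))
```

The design point is that each element arrives in the warehouse straight from its owner, so data quality stays a one-system problem.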

Second, let’s talk about integration to other systems.  I’ll be the first to admit that if you have SOA up and running, you are ahead of the curve.  In fact, if you are in HR, you are at least 2 years ahead of the curve, and maybe 3-5 years ahead of mass adoption.  The simple idea remains that integration should continue to grow simpler over time and not require the level of strenuous effort that it took in the past, when we managed dozens of flat files.  However, I have a philosophical problem with trying to manage integration out of core HR.  The fact is simply that you are distributing information which core HR may not own.  The management and quality of the data cannot be guaranteed in a non-system of record, and your data owners for the element cannot be expected to manage data quality in 2 separate systems.

The whole idea of core HR as a data hub keeps popping up, and I see the whole discussion as problematic.  It stinks from a governance perspective, and it stinks from a technology perspective.  Well, now you know where I stand at least.

Recruiting Effectiveness Measurement

Last post I wrote about recruiting efficiency measures.  From the effectiveness side, we’re all used to things like first year turnover rates and performance rates.  Once again, we’ve been using these metrics forever, but they don’t necessarily measure actual effectiveness.  You’d like to think that quality of hire metrics tells us about effectiveness, but I’m not sure it really does.

When we look at the standard quality of hire metrics, they usually have something to do with the turnover rate and performance scores after 90 days or 1 year.  Especially when those two metrics are combined, you wind up with a decent view of short term effectiveness.  The more people that are left, and the higher the average performance score, the better the effectiveness, right?

Not so quick.  While low turnover rates are absolutely desirable, they should also be assumed.  High turnover rates don’t merely indicate a lack of effectiveness – they indicate a completely dysfunctional recruiting operation.  Second of all, the use of performance scores doesn’t seem to indicate anything to me.

Organizations that are using 90 or 180 day performance scores have so much new hire recency bias that those scores are completely irrelevant.  It’s pretty rare that you have a manager review a new hire poorly after just 3 or 6 months.  In most organizations, you expect people to observe and soak in the new company culture before really doing much of anything.  This process usually takes at least 3 months.  And while the average performance score in the organization might be “3,” your 90 and 180 day performance scores are often going to be marginally higher than “3” even though those new hires have not actually done anything yet.  The result is a performance score that compares favorably to the overall organizational score, making you think that your recruiters are heroes.  Instead, all you have is a bunch of bias working on your metrics.

I’m not sure I have any short term metrics for recruiter effectiveness though.  Since we don’t get a grasp of almost any new hire within the first year, short term effectiveness is really pretty hard to measure.  I’m certainly not saying that turnover and performance are the wrong measures.  I’m just saying that you can’t measure effectiveness in the short term.

First of all, we need to correlate the degree of recruiting impact that we have on turnover versus things like manager influence.  If we’re looking at effectiveness over 3 years, we need to be able to isolate what impact recruiting actually has in selecting applicants that will stick around in your organizational culture.  Second, we need to pick the right performance scores.  Are we looking at the actual performance score, goal attainment, competency growth, or career movement over a number of years?  Picking the right metrics is pretty critical, and it’s easy to pick the wrong ones just because they’re what everyone else is using.  Depending on your talent strategy, you might be less interested in performance and more interested in competency growth.  You might want to look at performance for lower level positions while the number of career moves in 5 years is the metric for senior roles.  One size fits all does not work for recruiting effectiveness, because the recruiting strategy changes from organization to organization and even between business units within the same organization.

Overall, recruiter effectiveness is not as simple as it seems, and unfortunately there isn’t a good way to predict effectiveness in the short term.  In fact, short term effectiveness may be one of those oxymorons.

HR’s Correlation to Business

When we talk about the impact of HR activities on our business’s operational production, we don’t usually think that there is a direct correlation.  In fact, some of our activities probably have a higher correlation to business outcomes than we might expect.  In defining correlation, we usually think about it on a –1 to +1 scale, with –1 being negatively correlated, 0 being no correlation, and +1 being positively correlated.  From an HR point of view, if we were able to show that there is a positive correlation between our activities and business outcomes, that would be a pretty big win.

Personally, I don’t have any metrics since I don’t work in your organizations with your data.  However, with modern business intelligence tools and statistical analysis, it’s certainly possible to discover how our HR activities are impacting business outcomes on a day to day basis.

Take a couple examples.  We know that things like high employee engagement lead to increased productivity, but we don’t always have great metrics around it.  Sure, we can go to some industry survey that points to a #% increase for every point that the engagement surveys go up, but that is an industry survey, not our own numbers.  Especially in larger organizations, we should be able to continue this analysis and localize it to our own companies.  Similarly, we should be able to link succession planning efforts to actual mobility to actual results.  Hopefully we’d be showing that our efforts in promoting executives internally are resulting in better business leadership, but if we showed a negative correlation here, that would mean our development activities are lagging the marketplace, and we might be better served getting execs from the external market while we redefine our executive development programs.

I’ll take a more concrete example.  Let’s say we’re trying to measure manager productivity.  We might simplify an equation that looks something like this:

Manager Unit Productivity = High Talent Development Activity / (Low Recruiting Activity + Low Administrative Burden)

If this is true, we should be able to show a correlation between the amount of time a manager spends on development activities with her employees and increased productivity over time.  As the equation also expresses, recruiting activity should be negatively correlated to the manager’s team performance.  If the manager is spending less time recruiting, that means she is keeping employees longer and spending more time developing those employees – therefore any time spent recruiting is bad for productivity.
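The correlation check itself is straightforward to run.  Below is a sketch using a hand-rolled Pearson coefficient on the standard –1 to +1 scale; the development hours and productivity figures are fabricated purely to show the mechanics:

```python
# Sketch of the correlation check described above. The dev_hours and
# productivity figures are invented for illustration only.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient on the -1..+1 scale."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hours per month each manager spends on development activity,
# paired with her team's productivity index.
dev_hours    = [2, 5, 8, 12, 15, 20]
productivity = [91, 95, 99, 104, 110, 118]

r = pearson(dev_hours, productivity)
print(round(r, 2))  # close to +1: strongly positively correlated
```

Running the same function with recruiting hours in place of development hours would test the negative leg of the equation in exactly the same way.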

I’m not saying that any of these things are the right measures or the right equations.  What I am saying is that we now have the tools to prove our impact on business outcomes, and we should not be wasting these analytical resources on the same old metrics and the newfangled dashboards.  Instead, we should be investing in real business intelligence, proving our case and our value, and understanding what we can do better.

Recruiting Efficiency Measurement

If you look through Saratoga, there are all sorts of metrics around measuring our HR operations.  For recruiting, these include all the standard metrics like cost/hire, cost/requisition, time to fill, fills per recruiter, etc.   Unfortunately, I’m not a fan of most of these metrics.  They give us a lot of data, but they don’t tell us how effective or efficient we really are.  You’d like to think that there is going to be a correlation between fills per recruiter to efficiency, and there probably is some correlation, but true efficiency is a bit harder to get a handle on.

When I’m thinking about efficiency, I’m not thinking about how many requisitions a recruiter can get through in any given year or month.  I’m not even sure I care too much about the time to fill.  All of these things are attributes of your particular staffing organization and the crunch you put on your recruiters.  If you have an unexpected ramp-up, your recruiters will be forced to work with higher volumes and perhaps at faster fill rates.  Once again, I’m sure there is a correlation with recruiter efficiency, but it may not be as direct as we think.

Back to the point: when I think about recruiting efficiency, I’m thinking about the actual recruiting process – not how fast you get from step one to step 10, or how many step-1-through-10 cycles you can get through.  Recruiting efficiency is about how many times you touch a candidate between each of those steps.  Efficiency is about optimizing every single contact point between every constituency in the recruiting process – recruiters, sourcers, candidates, and hiring managers.

The idea is that you should be able to provide high quality results without having to interview the candidate 20 times or have the hiring manager review 5 different sets of resumes.  If you present a set of 8 resumes to the hiring manager and none of them are acceptable, you just reduced your recruiting efficiency by not knowing the core attributes of the job well enough and not sourcing/screening well enough.  If you took a candidate through 20 interviews, you just reduced your efficiency by involving too many people who probably don’t all need to participate in the hiring decision and who are all asking the candidate the same questions.  Sure, there is a correlation between the total “touches” in the recruiting process and time to fill, but “touches” is a much better metric.
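If your ATS keeps any kind of event log, a touch count per requisition is easy to compute.  A sketch under stated assumptions: the event names, requisition ids, and log shape below are all hypothetical, and what counts as a “touch” is a choice you would make for your own process:

```python
# Hypothetical event log: (requisition id, event type) pairs.
# TOUCH_EVENTS is an assumed, configurable definition of a "touch."
from collections import Counter

TOUCH_EVENTS = {"resume_review", "phone_screen", "interview", "offer_discussion"}

event_log = [
    ("REQ-1", "resume_review"), ("REQ-1", "resume_review"),
    ("REQ-1", "phone_screen"),  ("REQ-1", "interview"),
    ("REQ-1", "interview"),     ("REQ-1", "offer_discussion"),
    ("REQ-2", "resume_review"), ("REQ-2", "interview"),
]

def touches_per_req(log):
    """Total recruiter/manager/candidate touches per requisition."""
    return dict(Counter(req for req, event in log if event in TOUCH_EVENTS))

print(touches_per_req(event_log))  # → {'REQ-1': 6, 'REQ-2': 2}
```

Trending this count per filled requisition over time is one way to see whether the process is actually getting leaner, independent of hiring volume.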

I know we’ve been using the same efficiency metrics for ages upon ages, and most of us actually agree that we dislike them.  Measuring touches within the recruiting process makes a whole lot more sense to me, as it gets to the actual root of the efficiency measurement.

Misinterpreting Apple and Microsoft

As a global community, we all hate Bill Gates.  Actually we all hate Microsoft.  While we might depend on things like Microsoft Outlook and the MS Office suite of products, most of us probably believe that they have a bit too much of a monopoly and probably have bullied around other companies to ensure that they have a strong foothold in their industry.  Alternatively, we all love Steve Jobs and Apple.  This guy has reinvented Apple and given us some great products with amazing usability.  Instead of the “blue screen of death” from Microsoft, we have the incredibly usable iPhone.  Apple is also all about community, and their technologies tend to have brought us closer together, creating now ubiquitous applications (also on Google Android phones) that help us better connect in real time.

But sometimes image and marketing is everything.  The Bill and Melinda Gates Foundation is one of the largest philanthropic institutions in the world, providing funding for diverse programs in health, global economic development, and education.  Steve Jobs, on the other hand, had a foundation for about 15 months, but it was shut down after never doing much of anything.  In comparing the two business leaders, it appears that Gates is rather selfless in his charitable intentions, while Jobs (in the rare circumstances that he endorses a cause) only mentions a cause when it serves the purposes of Apple and the growth of his personal wealth.

It’s easy to look at something, especially a set of data, and be swayed by our own personal experiences with it.  Events and our relationships with the business often provide us with opinions that may or may not be close to the truth.  In the Gates and Jobs example, we even have clean, quantitative data to “prove” that Jobs is the better guy.  Apple has overtaken Microsoft in market capitalization, and therefore the consumers have voted – not for the big corporate giant providing software to big corporate giants, but for the provider of tools to everyday individual consumers.

I’m not arguing about which set of technologies is more deserving of our approval in terms of market cap, but about how these images inform our ability to interpret data in the absence of external influences.  The reason I’m an advocate of business intelligence analysts who view data and operate in a function untouched by the externalities of the “real world” is that they can touch and feel the data with an objective eye.  Those of us who operate in business and business process can often be blinded by prior results that are not directly related to the data, even though we think they are.  I’ve seen organizations completely disregard employee engagement surveys that identified terrible managers simply because production or sales happened to be “hot.”  We’ve been influenced by circumstantial evidence that very senior managers can’t be “messed” with, or that diversity data is better than it really is.  We’ve completely missed the mark on interpreting trend lines because we don’t have analysts who know how to look at data from a mathematical perspective.

At the end of the day, situational evidence is critical in how we interpret our version of reality.  In no way can we ignore this, but at the same time, we have to be able to see through it and objectify the data before we can reach a conclusion.  We can’t let our judgment around data be clouded by only what our perceived reality already is, because if we do, our role to the business as part of the decision support chain is completely irrelevant.

The Art of Story

Whether you are at a conference watching a videographer recording the event, or witnessing a $100M film getting made, the process of recording to final editing is always the same.  Actually seeing how stories unfold is rather amazing – real life is nowhere near as linear as the resultant film that everyone sees.  Instead, what gets recorded onto the raw film is more of a patchwork of completely random thoughts, statements and images.  If you actually watched all of the film in the order that it was recorded, most of it would make absolutely no sense in the context of what surrounds it.  The editing process is about bringing together the common elements, magnifying the key points, and then putting everything back into an order that is meaningful in the sense of a story.

The problem with HR is that HR executives are not like finance or technology executives.  The art of story is a bit more important than the science of numbers.  Where we can always count on having a detailed TCO or ROI study ready for our CFO’s, sometimes HR looks at the numbers and wants to know “why?”  And the “why” is never about the number, but about the qualitative.

Whether you are a consultant or an HR practitioner creating a business case, the same thing tends to happen.  You pick up random conversations, have random meetings, perform sets of broad interviews, and at the end of collecting data, you have… lots of data.  It’s not until you distill everything you have that the major concepts and key points start to emerge.  You then start analyzing each of these key items, observing where the interactions are and how they relate to each other, interweaving them into a storyline that executives can digest and understand.

The art of story is important in HR because even though we are interested in efficiency and cost savings, we are really about effectiveness.  We enable employees to grow and managers to execute, and as much as we hate it when people say that HR is “touchy feely,” the truth is that we are not a strictly quantitative function.  At the end of the day, we use the science that we have (cost studies, analytics, data) to enable increased effectiveness in process, engagement, and talent.

I obviously love talking about technology, and I’m pretty good at figuring out what data is telling me, but presenting data to executives is never the answer.  Sure a nice graph helps out, but there is always a story behind the data, and that story tells us where we have been and where we should be going.  What the data does not tell us, is what the outcomes are that we need to achieve.  We use data to inform our stories and direct where we need to get to based on HR strategy.

HR is made up of quite a few random pieces of data, from technology-enabled analytics to process outcomes, talent data, and HR transactional data.  HR outcomes and strategies are usually aggregations of each of these areas, as individual data points combine to create overall direction and outcomes – formulating the data in such a way that it can actually give us a sense of place, direction and story is more important in HR than in any other function that I can think of.

Dynamical Systems

Forget for a moment that I’m talking about a mathematical theory.  At the core of this post is how we apply some quantitative reasoning to our ability to look at data and predict the future.  Personally, I don’t think we do enough to extend and create meaning from our data.  If we looked at data with an increasing view of the quantitative sciences, we would have a proportional increase in our ability to apply “art” to our interpretation.

We predict things intuitively every day.  Driving down the road, we watch other cars, measure their velocity, and predict if we are going to collide or not – and based on those predictions we often change our course.  In Human Resources, we should be predicting the direction and velocity that our workforce is travelling in a number of ways.  Not only do we watch the size of the workforce, but productivity, engagement, competency growth, and any number of other factors.

A dynamical system is a mathematical construct that predicts what the state of a particular object will be at a given point in time in the future.  Dynamical systems are useful in a limited way – through them we can predict future state outcomes in a limited number of variables.  But the point is that we can predict vector and velocity – in other words, if the workforce is travelling in the right direction and at the right speed.

First off, I should absolutely admit that HR is not comprised of a bunch of mathematicians.  This should be no surprise to anybody.  However, the teams that make up the analytical arms of the HR organization – those who generate analytics and create meaning out of data – should have some ability to view the data quantitatively and understand trending, directionality, velocity, and the general parameters of the vector.  I hate taking us all back to high school calculus, but remember that the first derivative of a curve is its slope.  This is the general trend of what our workforce metric is doing.  The second derivative of a curve, however, tells us whether the curve’s direction is slowly shifting.  In other words, we might know that our overall turnover rate is dropping, and that is a positive sign.  But do we know if the turnover rate is dropping more slowly than it was last quarter, even if the rate is better than it was before?
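On a quarterly metric series, the discrete stand-ins for the first and second derivatives are just first and second differences.  A minimal sketch, using a made-up quarterly turnover series (percent):

```python
# Made-up quarterly turnover rates (%), falling every quarter.
turnover = [12.0, 10.0, 9.0, 8.5]

def diffs(series):
    """Successive differences: the discrete analog of a derivative."""
    return [b - a for a, b in zip(series, series[1:])]

first = diffs(turnover)   # slope: how fast turnover is falling
second = diffs(first)     # change in slope: is the improvement slowing?

print(first)   # → [-2.0, -1.0, -0.5]  still falling each quarter
print(second)  # → [1.0, 0.5]  positive: the improvement is decelerating
```

This is exactly the situation described above: the rate is still better every quarter (first differences all negative), but the positive second differences say the improvement itself is losing steam.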

While most of HR is not in the mathematics field, and most of us don’t want to be anywhere near it, we should be applying some of these theories to our analytics.  If we look at any of our programs, we can see acceleration of desired outcomes after specific events, and deceleration of outcomes after we have not done any change management or communications for a while.  We should be looking at competencies not just as a growth metric, but as an acceleration curve over time.  Growth is good, don’t get me wrong, but acceleration is better.

I’d like to think that all HR data analysts out there are taking a serious look at the data they are presenting, trying to create and extend meaning beyond what is being requested, but today’s reality is quite mundane, even with all the cool business intelligence and dashboard technologies out there.