Is Big Data An HR Directive?

I have an argument with my wife every few years.  I tend to like cars with a bit more horsepower.  I mean, that one time a year when there is a really stupid driver about to crash into you, a few extra horses come in handy when you really need to speed away.  The problem is that 99.99% of the time, that extra horsepower is a luxury you really don't need.  You'd get from point A to point B just as safely, and probably just as fast.  Sometimes though, that engine really does matter.  (My wife wins 90% of arguments, by the way.)

Everyone in HRIT is talking about big data these days.  Unless I completely don't get it, I thought this is what we've been working towards for years.  I mean, having ALL of our talent data (core HR, learning, recruiting, payroll, benefits, compensation, safety, and so on) in the same place and running analytics against it all was always part of the data warehouse plan.  Come on, what else is ETL for, if not to grab data from all over the place, aggregate it into the ODS, and then figure out how to make sense of it all?  We've built a nice engine that caters to our needs 99.99% of the time.

I'm going to propose something:  Big data does not matter to HR.  It's just a new name for something that does matter.  Business intelligence and truly focused analytics are what make us focus our actions in the right places.  BI, Big Data, I don't care what we call it.  Just do it.  Either way, HR does not have a big data need at this point.  I'd propose that we can use Big Data technology to speed up our analytics outcomes, but that's about all we need for the next few years.

In my simplified definition of Big Data, it comes down to two major attributes: the use of external data sources, and the lack of need to normalize data across sources.  If we look at it from this point of view, the reality is that almost no HR department on the face of the earth is ready to take HR data and compare it with government census data, or employment data.  Let's get really creative and take local population health statistics combined with local census data to get some really interesting indicators on our own employee population's health.  Right, we're just not there yet.

Let me reverse the thinking for a moment though.  What about the other 0.01% of the time that our traditional BI tools just won't help us out?  Going back to benefits examples, how many global organizations can really directly compare benefit costs across the entire world?  How many of those same global organizations have a great handle on every payroll code?  Much of the problem is that the data is often outsourced, and definitely not standardized.  Collecting the information is problematic in the first place, and standardizing annual changes is next to impossible in the second.  The beauty of Big Data is that in these cases, you'd actually be able to gather all of that data and not worry about how to translate it all into equal meanings.  The data might aggregate in a more "directional" way than you'd like, but you'd probably still have an acceptable view of what global benefits or payroll is doing.  It seems to me that this puts us quite a bit further ahead of where we are now.
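To make that "directional" aggregation concrete, here is a minimal sketch of the idea: regional benefit feeds arrive in different shapes, and instead of normalizing them into one schema, we grab whatever field looks like a cost and roll it up with rough currency conversion. Every field name, record, and exchange rate here is invented for illustration.

```python
# Hypothetical regional feeds: each outsourced provider reports costs
# in its own shape. We don't normalize; we just find a cost-like field.
feeds = [
    {"region": "US", "medical_cost": 1200.0, "currency": "USD"},
    {"region": "UK", "benefit_spend": 800.0, "currency": "GBP"},
    {"region": "JP", "total_cost": 150000.0, "currency": "JPY"},
]

# Rough conversion rates: good enough for trend-spotting,
# not for the general ledger.
fx_to_usd = {"USD": 1.0, "GBP": 1.25, "JPY": 0.007}

def extract_cost(record):
    """Grab the first field that looks like a cost, whatever it's named."""
    for key, value in record.items():
        if "cost" in key or "spend" in key:
            return value
    return 0.0

totals = {}
for record in feeds:
    totals[record["region"]] = extract_cost(record) * fx_to_usd[record["currency"]]

global_total = sum(totals.values())
print(totals)
```

The result is approximate by design: the numbers were never translated into equal meanings, but the directional view of global spend is still there.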

Listen, I know that HR has some place in Big Data at some point in the future, but the reality is that the current use cases for Big Data are so few and far between, and we have so many other data projects to work on, that we should continue investing in the current reporting and analytics projects.  Big Data will come back our way in a few years.

As I said, my wife usually wins the arguments.  We end up buying a car that has 175 horses under the hood, and I end up wishing we had more once a year.  But inevitably, automakers seem to up the game every few model years, and come 5 years down the road, that same car model now has 195 horses.  If I just wait long enough, those extra horses in the engine just become standard.


How To Give All The Wrong Answers

As per my last post, at the end of 2012 I was doing a family vacation in Taiwan.  Being with family for 2 weeks is quite an exposé of the mannerisms each of us has.  I was particularly intrigued by my brother's questioning of my mother.  My brother would constantly ask my mother things like "why are we going to [city_name]?" instead of "what are we planning to do when we get there?" and "how much time will I need to prepare the kids to sit in the car?"  Luckily, we had my mother there fueling the ridiculous line of questioning.  90% of the time, her answers had nothing to do with the questions he was asking.

  • “Why are we going to [city_name]?” “Oh, let me tell you, when I was growing up, I used to play with my cousins there.”
  • “Mom, why are we going to [city_name]?” “Oh, did you see that beautiful view over there?”
  • "Mom, can you please just tell me why we are going to [city_name]?" "Don't worry, you will love it.  It's beautiful there."

There are two items I’d like to diagnose.  First, are we actually listening to the question?  Second, did we understand the question?

The first is fascinating to me because I'm not sure we actually are listening.  Many of our reporting organizations are pure intake, create, output engines.  We grab the data that is asked for, create the report and send it out hoping we got it right.  Basically, we are spec takers.  The second question follows right after the first.  Much of the time, we don't know why report requesters want the data at all.  We could be asking ourselves why they want to know, and whether the data we are providing helps them solve a problem.  If we are really cool, we could be asking whether they are even trying to solve the right problem at all.

Here are a few questions you should explore when data requests come your way:

  • How are you going to use the data?
  • What is the core problem you are trying to solve for?
  • Are there other data elements or analysis that we have that can help further?
  • Are there other correlated problems that we should try to answer at the same time?

For all intents and purposes, this post is the exact corollary of the prior one on how to ask the right questions.  The problem with being a non-strategic reporting organization is that if the wrong questions get asked, the output is doomed to be the wrong information as well.  But even worse, sometimes the wrong question gets asked and we still give the requestor the wrong data back.  All this does is create churn: another report request, or bad data going to managers (who in turn trust HR a little less the next time around).

In the case of my brother, he asked the wrong question in the first place.  It would have been much more advantageous had he explained why it was important for him to prepare the children for the outing, have the right clothes, have enough food along, and maybe get them extra sleep.  I'll never know if my mother would have given him the right information in return: "yes, it usually rains on that side of the island, it's 40 minutes away, and we will be in a friend's house so they can't get too wild."  But the crafting of the right answer is a tight collaboration, with both sides creating an understanding of what the objectives are.

 

How To Ask All The Wrong Questions

At the end of 2012, I was doing a family vacation in Taiwan.  When I say family vacation, I mean not just my wife and me, but my brother’s family along with my parents, visiting all of the senior members of the family (an important thing in Asian cultures).  There is an incredible exposure of habits and an interesting (but sometimes undesirable) analysis of where my brother and I got those habits from.  I was particularly intrigued by my brother’s questioning of my mother.  Let’s just say that getting 2 grown sons, their spouses, and our parents together creates a certain amount of strife.

Let's also just say that my brother's hauling around of two young children may have added to the stress – he really needed to understand the daily schedules and what was going to happen when.  Back to the questions: my brother would constantly ask my mother things like "why are we going to [city_name]?" instead of "what are we planning to do when we get there?" and "how much time will I need to prepare the kids to sit in the car?"  (more on my mom's response in the next post)

The problem in the questions was not the question itself, but the thought process behind it.  All too often, we ask questions about what we think we are supposed to know.  We want to know about turnover, headcount, spending per employee.  This is information that is useful, but it does not actually inform us about what our next actions are.  Being "strategic" to me means that we have a plan, and we are actively managing our programs towards that plan.  If we're using data that just skims the surface, we have no ability to adjust course and keep going in the right direction.

I've often heard stories about HR executives who go into the CEO's office for a meeting to present data, and all they get are questions back that cannot be answered.  Some HR teams go into those meetings with huge binders (sometimes binders that I've sent with them), and those teams come out still not having answered the questions.  The problem is not with the data.  The problem is that the team has not figured out what the actionable metric is, and what the possible actions are.  No CEO cares about the data – they want action that ties back to the strategic objective.  In other words, why do they care?

Here are a couple things you can do to craft better questions:

  • Always think about the root of the question:  HR tends to analyze at the surface more than some other functions.  We have finance doing complex correlations and marketing doing audience analysis.  We’re reporting headcount and turnover to executives.  What kind of crap is that?
  • Be a child:  Ask why/what/how up to 3 times.  Why 1: “Why are we going to [city_name]?”  Why 2: “Why do I want to know what we are going to do there?” What 3: “What do the kids need to be prepared with?”
  • Take action:  If you ask a question that can be answered in such a way that you can’t take action, you asked the wrong question.
  • Create an intake form that customers can request through: make sure you ask the right questions here to ensure they think through the process and understand what they need.

Many of the organizations I consult with have some pretty robust analytics organizations.  When I dig under the covers, they are reacting to create ad hoc reports for managers and HR business partners.  Once a quarter they scramble to create a CEO report card to depict the state of HR programs.  This state is sad to me.  We should be doing deeper analysis and diagnosis on a daily basis.  If we asked the HRBPs what and why they wanted the data for, we'd probably find there is a huge amount of quality analysis being performed in silos that could be leveraged organizationally.

 

Using HR Analytics to WIN

So I have to admit it's hard to come off the election and not write about this.  This election was defined by some pretty deep population analysis, incredible forecasting and pretty significant actions to try to address the most important populations.  All of this went right along with a prioritization model that was pretty strong.  If we look at Obama/Romney, they each had target demographics: Obama needed the youth vote, managed to capture more than his fair share of the female vote (because the GOP was being stupid, with a couple of contributions by Romney as well), and he also predictably got the "minority" vote.  As the election neared, both candidates narrowed down their activities to a few key states (Ohio, Florida, Virginia, etc.).  They had incredible models about what percentage of each population's vote they would get, and if they followed those models, all they had to do was figure out how to get those people to the polls.  It turns out that in the end, the Obama machine was far better, and the Romney machine actually broke down (the phone app they were going to use literally went down on election day, and thousands of faithful Romney campaign volunteers had no idea which houses to go to to make sure people actually voted).

At the end of the day, the story is the same as the one we have in HR.  It's about winning.  The difference is that we all want to win at different things.  Some of us want to win at engaging our employees.  Others want to win by having the best IP.  Still others want to win by producing the best products in our category in the world.  What is great is that if we're good at what we do, we're not focused on running analysis about turnover and headcount (although we'll do that anyway), but instead we're focused on understanding exactly what 5 things increase engagement in our populations.  Taking that a step further, we'd know exactly which populations are the most impactful on the entire workforce and target those people.  A 1% increase in engagement in the 30-something sales guys might yield an advantageous network effect, while it takes a 3% increase in another population for the same to happen.

Similarly, if I want the best IP, I need to understand the roots of this.  Chances are it's not just the standard talent management equations we need to figure out.  I mean, performance and succession are only going to take us so far.  Instead, suppose we figure out the profile of the employees who filed the most patent applications, the people with the highest levels of trust in their subject matter, and specifically the attributes that made them successful.  Then we can start recruiting for those people, aligning our performance reviews not to achievements but to the competencies we know work, and running talent reviews that direct us to the right employee profiles rather than to whoever we think is ready from a job perspective.

My assumption is that if you have created your HR strategy in the right way, and you have aligned that strategy with the corporate strategy, then HR is designed to make the organization WIN.  Therefore every analytic you are running should be directing your organization to that win.  Don’t forget about the mundane operational reports, but understand that focusing on that isn’t really helping you.

At the end of the day, Romney actually did almost everything right, and he really thought he was going to win.  The problem was that he made a few wrong assumptions.  The GOP really thought that all the pollsters were wrong – that they were over-emphasizing the Democratic vote that would turn out on election day – that Democrats, the youth, and Hispanics were really not as excited about Obama as they were 4 years ago.  Obama, on the other hand, had callers identifying who the voters were, whether they were voting for Obama, and then telling each voter where their polling place was, good times to go, and what the plan was to get them there.  At the end of the day, forget about the other guy.  Just go run your analytics, make sure you are focused on what matters, and go out there and WIN!

Note:  Please assume no political commentary here; this is simply an example of how analytics can be put into action.


Serendipity versus Decision Support

Would I be where I am today if I had all the facts every time?  I'm actually confident that if it were up to me, I would be digging ditches for a living somewhere (not to demean ditch diggers).  Let's face it, I started off all on the wrong foot.  Being an Asian American kid with a prodigy brother, I was definitely the stupid one (I'll assume you've all read about the Asian "Tiger Mom" thing lately).  I was the kid who, at the age of 6, was told by my piano teacher to quit.  I was the kid who was told by my 5th grade teacher, "too bad you're not your brother."  I was the Asian kid who graduated high school with only a 4.2 GPA.  (All of that is true, btw.)  I was also the kid who, by some miraculous stroke of good fortune, managed to get accepted to my first choice college.  Being of relatively low income, my parents were quite pleased when I got a significant financial aid package (nothing compared to the brother, who incidentally got into every single Ivy League school – also true).  At some point in the summer, I was sent a letter from my college of first choice informing me that they would no longer be able to offer me the amount of aid that I required.  With quite a large amount of desperation, I called around to various colleges, and was re-admitted to my (I think) 4th choice school with the financial aid that I needed.

It was at this school (one of the Claremont Colleges in S. California) where, rather than hordes of students in large auditoriums being lectured to (a system that had clearly failed me so miserably to this point), I was instead surrounded by classes of maybe 15.  OK, maybe 20 max.  Rather than being lectured to, we sat around a table and talked about the book we read in the prior week.  I sat on committees where, as a student, I literally had a vote on whether professors got tenure.  Rather than simple learning, I began understanding.  I really do consider this to be the first of several unplanned turning points.  Listen, I'm serious when I say that I was not a good student.  But learning for me happened a different way than for most.

We often talk about analytics and how it changes how we operate in HR.  High-quality data leads to high-quality choices – and oftentimes that is true.  But it is also true that we don't always have all of the data that we need at any specific point in time – if we had everything we needed to know, we might make vastly different choices.

I'll take succession planning as an example.  We know who the top 10 succession candidates are for top positions (hopefully).  We know when they will be ready, what their relative skills and competencies are, and how their strengths compare to one another.  But we don't know which of them are going to jump ship and go to another company before the position becomes vacant.  We don't know which of them are going to stop growing, regardless of our best efforts to continue developing them.  The best that we can do is to invest in a pool of candidates, and hope that one of them, the right one, is ready when the time comes.

We use decision support and analytics to crunch the numbers for us, but at the end of the day, it’s still serendipity – it’s still luck.  The hope here, is that while analytics and decision support can’t be a perfect predictor, we can in fact “make our own luck.”  We can improve our odds at getting the best outcomes.  At the end of the day, it is not serendipity versus decision support, but a combination of the two that will make our best data work for us.

Recruiting Effectiveness Measurement

Last post I wrote about recruiting efficiency measures.  From the effectiveness side, we’re all used to things like first year turnover rates and performance rates.  Once again, we’ve been using these metrics forever, but they don’t necessarily measure actual effectiveness.  You’d like to think that quality of hire metrics tells us about effectiveness, but I’m not sure it really does.

When we look at the standard quality of hire metrics, they usually have something to do with the turnover rate and performance scores after 90 days or 1 year.  Especially when those two metrics are combined, you wind up with a decent view of short term effectiveness.  The more people who remain, and the higher the average performance score, the better the effectiveness, right?
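That standard view is easy to sketch. Here is a minimal version of the combined metric: first-year retention blended with average review score onto a single scale. The cohort data, the assumed 1-5 review scale, and the equal blend weights are all invented for illustration.

```python
# Hypothetical new-hire cohort after one year.
new_hires = [
    {"still_employed": True,  "review_score": 3.4},
    {"still_employed": True,  "review_score": 3.1},
    {"still_employed": False, "review_score": 2.8},
    {"still_employed": True,  "review_score": 3.6},
]

# Retention rate: share of the cohort still on board.
retention_rate = sum(h["still_employed"] for h in new_hires) / len(new_hires)

# Average review score across the cohort.
avg_score = sum(h["review_score"] for h in new_hires) / len(new_hires)

# Blend onto a 0-1 scale, assuming a 1-5 review scale and equal weights.
quality_of_hire = 0.5 * retention_rate + 0.5 * (avg_score / 5)
print(retention_rate, avg_score, quality_of_hire)
```

It produces a tidy number, which is exactly the trap the next paragraphs get into: the inputs themselves are where the problems live.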

Not so fast.  While low turnover rates are absolutely desirable, they should also be assumed.  High turnover rates don't merely indicate a lack of effectiveness; they indicate a completely dysfunctional recruiting operation.  Second of all, the use of performance scores doesn't seem to indicate anything to me.

Organizations that are using 90- or 180-day performance scores have so much new-hire recency bias that the scores are completely irrelevant.  It's pretty rare that a manager reviews a new hire poorly after just 3 or 6 months.  In most organizations, you expect people to observe and soak in the new company culture before really doing much of anything, and that process usually takes at least 3 months.  So while the average performance score in the organization might be "3", your 90- and 180-day performance scores are often going to be marginally higher than "3", even though those new hires have not actually done anything yet.  You'll have a performance score that looks favorable against the overall organizational score, making you think that your recruiters are heroes.  Instead, all you have is a bunch of bias working on your metrics.

I’m not sure I have any short term metrics for recruiter effectiveness though.  Since we don’t get a grasp of almost any new hire within the first year, short term effectiveness is really pretty hard to measure.  I’m certainly not saying that turnover and performance are the wrong measures.  I’m just saying that you can’t measure effectiveness in the short term.

First of all, we need to correlate the degree of recruiting impact that we have on turnover versus things like manager influence.  If we're looking at effectiveness over 3 years, we need to be able to isolate what impact recruiting actually has in selecting applicants who will stick around in your organizational culture.  Second, we need to pick the right performance scores.  Are we looking at the actual performance score, goal attainment, competency growth, or career movement over a number of years?  Picking the right metrics is pretty critical, and it's easy to pick the wrong ones just because it's what everyone else is using.  Depending on your talent strategy, you might be less interested in performance and more interested in competency growth.  You might want to look at performance for lower level positions, while the number of career moves in 5 years is the metric for senior roles.  One size fits all does not work for recruiting effectiveness, because the recruiting strategy changes from organization to organization and even between business units within the same organization.

Overall, recruiter effectiveness is not as simple as it seems, and unfortunately there isn’t a good way to predict effectiveness in the short term.  In fact, short term effectiveness may be one of those oxymorons.

HR's Correlation to Business

When we talk about the impact of HR activities on our business's operational production, we don't usually think that there is a direct correlation.  In fact, some of our activities probably do have a relatively high correlation with business outcomes, which might surprise us.  In defining correlation, we usually think about it on a –1 to +1 scale, with –1 being negatively correlated, 0 being no correlation, and +1 being positively correlated.  From an HR point of view, if we were able to show that there is a positive correlation from our activities to business outcomes, that would be a pretty big win.
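For anyone who wants to see that scale in action, here is a small sketch of Pearson correlation implemented by hand: two series in, a number between –1 and +1 out. The engagement and output figures are invented for illustration.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

engagement = [62, 68, 71, 75, 80]       # hypothetical survey scores by unit
output = [100, 104, 109, 113, 121]      # hypothetical units produced per head

r = pearson(engagement, output)
print(r)  # close to +1: output rises with engagement in this made-up data
```

A value near +1 is the "big win" scenario; a value near 0 would say the activity and the outcome are moving independently.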

Personally, I don’t have any metrics since I don’t work in your organizations with your data.  However, with modern business intelligence tools and statistical analysis, it’s certainly possible to discover how our HR activities are impacting business outcomes on a day to day basis.

Take a couple of examples.  We know that things like high employee engagement lead to increased productivity, but we don't always have great metrics around it.  Sure, we can go to some industry survey that points to a #% increase for every point that the engagement surveys go up, but that is an industry survey, not our own numbers.  Especially in larger organizations, we should be able to continue this analysis and localize it to our own companies.  Similarly, we should be able to link succession planning efforts to actual mobility to actual results.  Hopefully we'd be showing that our efforts in promoting executives internally are resulting in better business leadership, but if we showed a negative correlation here, that would mean our development activities are lagging the marketplace and we might be better served getting execs from the external market while we redefine our executive development programs.

I'll take a more concrete example.  Let's say we're trying to measure manager productivity.  We might simplify an equation that looks something like this:

Manager Unit Productivity = High Talent Development Activity / (Low Recruiting Activity + Low Administrative Burden)

If this is true, we should be able to show a correlation between the amount of time a manager spends on development activities with her employees to increased productivity over time.  Also expressed in the equation, recruiting activity should also be negatively correlated to the manager’s team performance.  If the manager is spending less time recruiting, that means she is keeping employees longer, and spending more time developing those employees – therefore any time spent recruiting is bad for productivity.
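As a minimal sketch of that equation in practice, here is the ratio computed per manager, using hours per week as a stand-in for "activity."  All names and numbers are invented for illustration.

```python
# Hypothetical time-allocation data per manager.
managers = {
    "Ana": {"development_hrs": 8, "recruiting_hrs": 2, "admin_hrs": 4},
    "Ben": {"development_hrs": 3, "recruiting_hrs": 6, "admin_hrs": 7},
}

def unit_productivity(m):
    # Development activity in the numerator; recruiting and admin
    # drag in the denominator, per the equation above.
    return m["development_hrs"] / (m["recruiting_hrs"] + m["admin_hrs"])

scores = {name: round(unit_productivity(m), 2) for name, m in managers.items()}
print(scores)
```

With scores like these in hand, the next step would be correlating them against observed team output over time, which is where the equation gets proven or disproven.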

I’m not saying that any of these things are the right measures or the right equations.  What I am saying is that we now have the tools to prove our impact on business outcomes, and we should not be wasting these analytical resources on the same old metrics and the newfangled dashboards.  Instead, we should be investing in real business intelligence, proving our case and our value, and understanding what we can do better.

Recruiting Efficiency Measurement

If you look through Saratoga, there are all sorts of metrics for measuring our HR operations.  For recruiting, these include all the standard metrics like cost/hire, cost/requisition, time to fill, fills per recruiter, etc.  Unfortunately, I'm not a fan of most of these metrics.  They give us a lot of data, but they don't tell us how effective or efficient we really are.  You'd like to think that there is going to be a correlation between fills per recruiter and efficiency, and there probably is some, but true efficiency is a bit harder to get a handle on.

When I’m thinking about efficiency, I’m not thinking about how many requisitions a recruiter can get through in any given year or month.  I’m not even sure I care too much about the time to fill.  All of these things are attributes of your particular staffing organization and the crunch you put on your recruiters.  If you have an unexpected ramp-up, your recruiters will be forced to work with higher volumes and perhaps at faster fill rates.  Once again, I’m sure there is a correlation with recruiter efficiency, but it may not be as direct as we think.

Back to the point: when I think about recruiting efficiency, I'm thinking about the actual recruiting process, not how fast you get from step one to step 10, or how many step-one-to-step-10 cycles you can get through.  Recruiting efficiency is about how many times you touch a candidate between each of those steps.  Efficiency is about optimizing every single contact point between every constituency in the recruiting process – recruiters, sourcers, candidates, and hiring managers.

The idea is that you should be able to provide high quality results without having to interview the candidate 20 times or have the hiring manager review 5 different sets of resumes.  If you present a set of 8 resumes to the hiring manager and none of them are acceptable, you just reduced your recruiting efficiency by not knowing the core attributes of the job well enough and not sourcing/screening well enough.  If you took a candidate through 20 interviews, you just reduced your efficiency by involving too many people who probably don't all need to participate in the hiring decision and who are all asking the candidate the same questions.  Sure, there is a correlation between the total "touches" in the recruiting process and time to fill, but "touches" is a much better metric.
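If you log contact events against each requisition, the touches metric falls out of a simple count. Here is a hypothetical sketch; the event names and the log itself are invented for illustration.

```python
from collections import Counter

# Hypothetical event log: every contact point recorded against a requisition.
event_log = [
    ("REQ-101", "recruiter_screen"),
    ("REQ-101", "hiring_manager_review"),
    ("REQ-101", "interview"),
    ("REQ-101", "interview"),
    ("REQ-102", "recruiter_screen"),
    ("REQ-102", "hiring_manager_review"),
    ("REQ-102", "interview"),
    ("REQ-102", "interview"),
    ("REQ-102", "interview"),
    ("REQ-102", "interview"),
    ("REQ-102", "interview"),
]

# Total touches per requisition: the efficiency signal.
touches_per_req = Counter(req for req, _ in event_log)
print(dict(touches_per_req))
```

In this made-up log, the second requisition took nearly twice the touches of the first to reach the same outcome, which is exactly the kind of signal time-to-fill alone would hide.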

I know we've been using the same efficiency metrics for ages upon ages, and most of us actually agree that we dislike them.  Touches within the recruiting process make a whole lot more sense to me, because they get to the actual root of the efficiency measurement.

Yelp and the Future of BI Search

I'm sitting (almost) next to a guy who works for Yelp.  He just gave a pretty interesting tutorial to the person in the middle seat (great seat buddies for once – how often does that happen?) about some interesting features in Yelp.  I at least didn't know about some of these, but the most interesting is a little click box above the Google map that displays the location of the search results.  It allows you to define your own search area.  You click on the box and you can then drag an area within the Google map.  Yelp then returns only results within that box.

I have no idea how they have constructed their search engine (or results engine) to be able to narrow down searches by just a drag box in the Google map, but it’s incredibly cool.  In contrast, we in HR think it’s the sexiest thing that we can click on a slice of the pie chart and drill down into more detail for that slice.  All I have to say is that our current BI tools are nowhere close to Yelp’s results capabilities.

Think about it this way: let's say I had a competency map (or some form of a tag cloud, reinvented for HR purposes) and could do simple selections and queries off of that.  How could we engineer searches and results for managers and people trying to manage internal mobility?
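The Yelp drag box is just a bounding-box filter, and the same trick would work on a competency map: plot employees on two competency axes and return whoever falls inside the rectangle a manager draws. This is a hypothetical sketch; the names, axes, and scores are all invented for illustration.

```python
# Hypothetical employees plotted at (leadership, technical) scores on 0-1 axes.
employees = {
    "Kim":   (0.9, 0.4),
    "Raj":   (0.6, 0.8),
    "Elena": (0.3, 0.9),
    "Tom":   (0.7, 0.7),
}

def in_box(point, x_min, x_max, y_min, y_max):
    """True when the point falls inside the user-drawn rectangle."""
    x, y = point
    return x_min <= x <= x_max and y_min <= y <= y_max

# The "drag box": anyone reasonably strong on both axes.
selected = sorted(
    name for name, point in employees.items()
    if in_box(point, 0.5, 1.0, 0.5, 1.0)
)
print(selected)
```

The interesting part is not the filter itself but that the user defines it on the fly, with no predefined query behind it, which is exactly what our drill-down dashboards don't let managers do.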

Right now, if we want new data, we have to ask for it, and someone creates a query.  Even then, the data is static and while we can drill through, we only get to drill through on established parameters.  We don’t get to create, redefine and reconstruct on the fly.

How is it possible that I can define my search results with a simple drag box when I’m looking for a place to eat, but even after spending a few $MM to implement BI, I can’t?

Now go and check it out on Yelp…

Pretty is not Useful

Not all of us have fancy business intelligence tools and cool dashboards with great graphs prepared for us in our business environments yet.  If we’re lucky, we belong to organizations that have spent millions of dollars figuring out how to get us analytics when we log into self service environments.  Or perhaps we have bought into a cool talent application tool that already had the functionality.  We’ve been prophesying that business intelligence would go directly to the manager/end user rather than having requests for ad hoc reports that would have to be processed for days before the end user ever got any data.  Even if we don’t have all the cool dashboards and business intelligence tools at our disposal, I’m hoping that many of us have used the cool image and graphing functionality in MS Office 2007.  Indeed, the software now presents all sorts of data in all sorts of new, flashy and easy to build ways.  From a consulting perspective, beautiful and sexy is a wonderful thing.

The problem is that…. well, there’s a problem.

Have you noticed that many of the charts you are getting are 3 dimensional?  From my perspective, they are beautiful, and the formatting is impeccable, but I can't really tell where the top of the bar lines up with the axis points anymore.  Sure, I can tell that the top of any bar in a bar chart is…. oh, between $20-25M?  Yeah, that's problematic.  I have a $5M error rate.

I'm not sure what to do with the dashboards.  Managers want quick access and a visual of where they are and how their organization is performing.  But at the end of the day, don't they want a data dump into Excel anyway?  (Warning: data privacy problem, but that's a different topic.)

Hopefully when we built the BI environment, we were smart.  After building our flashy but not very useful graph, we spent another $100K and built the drill-through charts.  You know, the ones where you click on the indecipherable bar to get a chart that presents the real data?  What we really did was create something that was pretty to look at, and then force the manager to click to get at what she really wants.  And we paid for it.  Sounds like good, smart design based on great user experience principles to me.

Ok, I know that there will be a revolt if I actually suggest that everyone goes back to 2D charts.  We all love our flashy reports and dashboards.  But can we just make sure they are actually helpful before we publish the stuff?

Business Intelligence and Data Dispersion

Data security with business intelligence and reports has always been a problem.  Users are constantly requesting reports, and once data is in someone's hands, it's almost impossible to control data dissemination and what I'll call data diaspora.  One must admit, especially in large organizations, that trying to put controls in place from a procedural perspective is not particularly realistic.  With hundreds or thousands of managers out there, controlling the actions of each person is particularly difficult.

In the good old days of ad hoc report files, Excel spreadsheets, and PowerPoints, any person who got their hands on data could easily forward it to someone else.  The fact is that the technology was sufficiently difficult to use that most organizations, even the very large ones, have used Excel as the easiest way to aggregate and analyze data from multiple sources.  Even for single source reports, Excel has long been the easiest way to communicate a data set.  Managers didn't really have robust capabilities to tap into reports on their own, and even then, one of the selling points from software vendors has been the ability to export data into Excel where managers or practitioners could continue the analysis.

HR technologists have been talking about dashboards and business intelligence for years, but it does seem that the recently emerged technologies are finding some adoption in larger organizations.  Perhaps this is just maturity of the technology, perhaps the prices have started coming down from fully customized ERP BI software to more vanilla, off the shelf analytics tools, or perhaps it's possible that spending was just down so far in the last 2 years that nobody was buying the stuff.  Whatever the reasons, the technology and the market seem to be ready now.

Certainly, increased controls are now much more prevalent, with each manager going to their own dashboard to view data, and with the large number of analytics available in the HR and talent realms, ad hoc requests are hopefully going down.  All this just means that if you can deliver a set of analytics to the manager desktop as opposed to fielding frequent ad hoc requests, your data is controlled by the application security layer upon delivery.  Since you have never sent an email with an Excel spreadsheet, there is no data to be forwarded.

You’ll argue with me that this technology has been around for years upon years – at least a decade.  I’ll absolutely agree that this is true, but I’m pretty sure that every single vendor out there (whether publicly or not) will agree with me that until recently the delivered reports were not sufficiently robust or comprehensive.  ERP vendors are now also delivering robust prebuilt analytics with sufficient drill downs and drill throughs.  The goal of the whole thing is to have enough data presented in a simple but detailed enough manner to eliminate most ad hoc needs.  If you can create an environment that does this, you utilize your application’s security as opposed to releasing your data to the winds of fate.