Guest Author: Stephen B. Jeong, Ph.D.
As a child, whenever I would screw up, my mother always said, “Why can’t you be more like Billy?” Billy was a straight-A student who excelled in every sport he played – an all-around “wunderkind” who could do no wrong. Needless to say, I didn’t like being compared to Billy all that much.
Those of you who have had experience with employee surveys – satisfaction or engagement – may be familiar with the concept of “benchmarking.” Benchmarking involves comparing a company’s survey scores – on a range of topics such as communication, supervision, engagement, and efficiency – with scores from a group of companies on comparable survey questions. The most commonly used benchmarks are scores from companies in the same or a similar industry sector, or from companies deemed “high performers.” This idea of gauging our company’s performance against other companies can be tremendously appealing – it’s simple, intuitive, and sexy.
Notwithstanding the above, I find it hard to justify – from a scientific point of view – the current level of enthusiasm for these benchmarks. There is no doubt that, when carefully selected, normative benchmarks can provide useful insight into an organization’s standing. Before pulling out your pen and checkbook, however, I’d like to point out a few things for your consideration.
- Differences in company strategy – While most companies share the goal of increasing revenue, there are key strategic differences among companies that diminish the actual (as opposed to perceived) value that benchmarks bring to the interpretation of survey data. Depending on its strategic goals, one company might emphasize “innovation” while another emphasizes “efficiency.” One might focus on “training,” another on “R&D.” In other words, companies vary in how much value they place on different aspects of their operations or culture. This is true even of companies within the same industry. Take the “microchip” industry: some firms are now focused on producing cheaper solar panels, while others continue to pour money into improving wafer machines. These varying strategies can and do affect survey scores. So, what am I saying? The point is that an overall survey score of 89% (satisfied employees) on “innovation” may be fabulous for one company but unacceptable for another. Drawing conclusions from the difference between one company’s score on “innovation” or “customer service” and the scores of a group of other companies (even those in the same industry) glosses over the meaningful differences that exist among these companies.
- Timing of data collection (historical effects) – An employee survey is a collection of attitudes, and attitudes are susceptible to constant fluctuations in one’s emotional state. Imagine that your company just announced a second round of layoffs and reported that revenues fell short of expectations for the past quarter. We all know that conditions – both internal and external – can affect our responses to survey questions. This is part of what statisticians call “error.” Any temporary condition – like layoffs – that inflates or deflates survey scores increases the size of this error. Now consider the timing of the surveys from the different companies that make up a given benchmark. It is highly unlikely that the data were all collected within the same month, or even the same year. We’ve been through a fairly significant roller-coaster ride in the past 12 months. Can you really draw firm conclusions by comparing your Q2 2009 survey scores against benchmark data collected anywhere between Q2 2007 and Q2 2009?
- Importance of past performance (historical trending) – Benjamin Franklin emphasized the importance of gauging current performance against past data. To stop cursing, for example, he carried a notepad to keep track of the number of times he cursed each day; after several weeks, he would draw a simple chart to check his progress. Similarly, one of the most important diagnostic tools available to organizations is historical survey data. Historical trend data provide information that, I would argue, is substantially more important than comparisons to external benchmarks. This is primarily because a company’s culture – like a person’s personality – tends to remain fairly stable over time. Any significant upward or downward shift (measured, for instance, against the historical mean and standard deviation) therefore says a lot about what is happening to different aspects of that company’s culture. Moreover, because you are likely to be aware of the changes your company has undergone in the past 12 months, you can factor them into your interpretation of the results more reliably. From this perspective, historical shifts deserve far more attention than any discrepancy between your survey results and some external benchmark.
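The trend-monitoring idea above can be sketched in a few lines of code. This is purely my illustration, not a methodology from the article: the scores are invented, and the two-standard-deviation threshold is an arbitrary rule of thumb for separating real cultural shifts from normal fluctuation.

```python
# A minimal sketch of flagging a meaningful shift in an internal survey trend.
# All numbers below are hypothetical, for illustration only.
from statistics import mean, stdev

def flag_shift(history, current, threshold=2.0):
    """Return True if `current` deviates from the historical mean
    by more than `threshold` standard deviations."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# Hypothetical "innovation" satisfaction scores (%) from five prior surveys:
history = [82, 84, 83, 85, 84]
print(flag_shift(history, 76))  # a large drop -> True (worth investigating)
print(flag_shift(history, 83))  # within normal fluctuation -> False
```

The point of the sketch is that the comparison is internal: the company’s own history defines what “normal” looks like, so no external benchmark is needed to spot a shift worth investigating.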
To summarize, there’s quite a bit of “hype” tied to the use of benchmarking data – more than can be justified. While benchmarks can provide useful information when selected and applied appropriately, differences in company strategy, culture, and historical effects all work to make external benchmark data, in general, less useful than they appear on the surface. In the worst cases, benchmarks can lead to grossly misleading conclusions and to what I would call the “Why can’t you be more like Billy” syndrome. Well-functioning companies are like Olympic athletes: you don’t need to be good at everything to win the gold, just your event. By the way, Billy is now a history teacher, and although I watch the History Channel now and then, I wouldn’t think for a moment about trading professions.
Stephen B. Jeong is currently the Managing Director of Waypoint People Solutions – www.waypointps.com – a human capital consulting firm that focuses on high-precision employee diagnostic surveys using cutting-edge measurement technology and methodologies. He holds a Ph.D. in Industrial-Organizational Psychology from the Ohio State University and has been advising private, public, and government organizations since 2000. He can be reached at firstname.lastname@example.org.