

The Growing Emphasis on Big Data for Contingent Workforce Programs

Part 2 of 2 (Part 1: Big Data Technology for Contingent Workforce Staffing Curation)

In the first part of this series, we looked at the trends in HR technology that will shape the staffing industry landscape for 2015, particularly as firms of all sizes begin investing in performance management software to draw from the wellspring that is Big Data. However, without proper interpretation, curation and a human decision-making element, all that information risks becoming noise rather than insight. In this article, we’re going to look at how MSPs and their staffing partners can help clients find meaning in a sea of statistics.

The secret Netflix algorithm: data must be curated by humans

Big data and metrics are going to matter more this year. Understanding the importance of business intelligence and analytics remains a challenge for the talent acquisition function. With an array of metrics available, organizations are still trying to decide what to measure, as well as determine if they have the technology in place to support the need. The reality for staffing professionals, however, is that future business success may depend on moving beyond historical reporting toward predictive analytics.

Marrying analytics with content layout on one’s employment site, for example, is a proven way to maximize the candidate experience, showcase facets of an employment brand and draw talent toward those aspects they would find most alluring.

Data also drive the sourcing and vetting processes. Algorithms in candidate ranking systems can figure out how quickly candidates have progressed in their careers and determine whether an individual is a laggard or a mover who outpaces peers. However, numbers alone can’t always paint an accurate picture or reliably predict future needs and their ideal solutions.
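As a toy illustration of the kind of ranking such systems perform, one could score candidates by career levels advanced per year. The scoring rule and names below are invented for the example; no vendor’s actual algorithm is this simple:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    levels_advanced: int     # e.g., promotions on record
    years_experience: float

def progression_rate(c: Candidate) -> float:
    """Career levels advanced per year -- a crude 'mover vs. laggard' proxy."""
    return c.levels_advanced / max(c.years_experience, 1.0)

candidates = [
    Candidate("A. Rivera", levels_advanced=4, years_experience=6),
    Candidate("B. Chen",   levels_advanced=2, years_experience=10),
    Candidate("C. Okafor", levels_advanced=3, years_experience=4),
]

# Rank fastest movers first
for c in sorted(candidates, key=progression_rate, reverse=True):
    print(f"{c.name}: {progression_rate(c):.2f} levels/year")
```

Exactly as the paragraph cautions, this number measures how fast someone moved, not why, and not whether they would succeed in a given role.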

“Deflategate” through the lens of Big Data: a lot of whats, not whys

Consider the “Deflategate” scandal that plagued the New England Patriots prior to their Super Bowl appearance. Writing for Harvard Business Review, Kaiser Fung offered an excellent example of what happens with uncurated data.

Officials discovered that the Patriots’ footballs were under-inflated, which could have given them a dishonest advantage. Data analyst Warren Sharp used Big Data to provide a statistical analysis that seemed to correlate the team’s superior fumbling rate to the under-inflated balls.

  • On plays per fumble, New England posted nearly twice the NFL average, outperforming other teams and making it an outlier.
  • Visualizing and re-formulating the metric produced the same result.
  • New England, which plays at home in an outdoor stadium, also outperformed competitors who play under a dome. In a covered stadium, most teams suffer 10 fewer fumbles. New England again became an outlier.
  • Using a bell curve, New England’s odds of attaining such unparalleled levels of success were deemed “extremely remote.”
  • The analysis alone concluded that it is “nearly impossible” for any team to possess such an ability without cheating.

Armed with uncurated Big Data and reasoning backward from results, it would be easy to accuse the New England Patriots of cheating. Maybe they did. The point is that the data do not actually prove anything, as even Sharp conceded in his article. The analysis establishes only that New England excels at preventing fumbles, not why. There could be entirely legitimate reasons: the players are better trained, the coaching emphasizes combating fumbles, the team has perfected ball security and handling techniques, and so on. It’s also hard to call the Patriots an outlier when they still trail the Atlanta Falcons in fumble performance; the Falcons had been removed from the analysis because they play in an enclosed stadium.

“Big data,” Fung observed, “is exposing all kinds of outliers and trends we hadn’t seen before and we’re assigning causes somewhat recklessly, because it makes a good story, or helps confirm our biases.”

And this is where we can learn a valuable lesson from Netflix. Prior to Netflix’s success in digital content streaming, television studios relied on limited data to guide their programming decisions: the intuition of a rather narrow and homogeneous pool of executives, gross sales figures and Nielsen ratings, which like the group of studio executives also lacked diversity. The results weren’t always stellar or even close to hitting the mark.

Then came Netflix and Ted Sarandos, its chief content officer. The company invested heavily in data-driven programming, creating advanced algorithms that could more accurately predict the desires and behaviors of viewers. The success of Netflix’s original programming attests to the value of its data systems.

For example, when Netflix approached actor Kevin Spacey to star in the now award-winning and critically acclaimed “House of Cards,” it told him: “We believe in you. We’ve run our data and it tells us that our audience would watch this series. We don’t need you to do a pilot.” The rest is history. Still, though, Big Data alone did not arrive at this conclusion in an AI-fueled vacuum. Sarandos revealed the real secret behind these algorithms: humans.

“It is important to know which data to ignore,” he confided to journalist Tim Wu of The New Yorker. “In practice, it’s probably a seventy-thirty mix. Seventy is the data, and thirty is judgment… But the thirty needs to be on top, if that makes sense.”

How MSPs and staffing professionals can become masters of data curation

Using the famous research model pioneered in 2006 by Philip Tetlock of the University of Pennsylvania, which dealt with predictive analytics in the political sphere, we can reconfigure the principles so that they apply to MSPs and staffing curators. In times past, social scientists detailed countless forecasting failures, concluding that the art of prediction was just that: an art, or even dumb luck. Research shows that such is not the case. Prediction is a learned skill, one that develops with practice. And it has merit. To invoke the recent Super Bowl once again: if you picked a winning team at random, you’d be wrong 50 percent of the time. Elite forecasters, however, consistently reduced their error rates by more than half. Here are insights on how MSPs and staffing professionals can improve their ability to interpret Big Data and yield better predictive analytics.

Teams consistently trump individuals. Research demonstrated time and again that groups outperformed individual analysts. Of course, the team needs to collaborate effectively and communicate well; for teams that did, predictions and analyses were far more meaningful than those produced by individuals working alone. With the growing emphasis on Big Data in contingent labor programs, one of the most important things you can do is dedicate a team to processing, reviewing, tracking and reporting on that information.

Human intelligence helps interpret business intelligence. Data analysis is a mathematical and statistical process, best placed in the hands of those with training and aptitude. In Tetlock’s studies, subjects who scored higher on intelligence tests read data more accurately, especially when entering new domains. MSPs and staffing professionals don’t need to run their people through a gauntlet of IQ tests; however, they should look for workers within their organizations with backgrounds, academic training and demonstrable skills in math, formal data analysis or statistics. Once this core team of analysts takes off, its members can train others, and at that point the need to find workers with the same heightened qualifications diminishes.

Domain expertise matters. A string of letters behind one’s name and academic pedigrees aren’t the be-all and end-all here. Genuine domain expertise can greatly augment the abilities of the team. When assembling your group, look for workers with experience managing programs for specific industries, job categories, skill sets and suppliers. A handle on these niche needs will accelerate the analysis and lend more depth and insight to the conclusions being drawn.

Practice, practice, practice. The best forecasters and analysts honed their skills over time. The accuracy of their assessments, while strong, required development, exposure, experience and practical application. If you’re just beginning to develop a team, don’t expect superior results immediately. And don’t discourage the team from continuing to try, even if the first round of analytics bears little fruit. Committing to the evolution of the team will yield the successes you and your clients are seeking.

Open-minded individuals reach better conclusions. Biases and foregone conclusions can only taint an analysis of data. Let’s revisit the New England Patriots example. For those rooting against New England, it was easy to read the statistical data as solid evidence of cheating, even though the data could not establish that on their own. This is especially tricky when reverse causation comes into play, where the results are known and researchers must work backward to determine causes. To take effective action on the information contained in Big Data, the team members analyzing it must approach findings without preconceived notions, influence from above (such as an executive mentioning that a client wants to reduce its headcount next month) or any preemptive guidance implying the analysis should fit a predetermined outcome.

Analysis takes time. In studies, those analysts who spent time thoroughly vetting data and deliberating on their final assessments performed better than those who rushed to meet a deadline. In group settings, this was even more apparent. When accepting a request from clients to provide ad hoc analysis, make sure you’re consulting with your analysts first; you want to come up with a reasonable timetable that ultimately benefits your team and the integrity of the results they’ll present to the client. For regularly scheduled or recurring reports, such as quarterly business reviews, start the teams well in advance to ensure a proper interpretation of the data.

Revision produces better results. There’s an axiom among authors that all writing is rewriting. The same holds true for data analysis. When forecasters and analysts were given the opportunity to revise their initial findings based on the introduction of new information, their final conclusions soared beyond the interpretations of those who did not. It’s important to ensure that your teams have a decent amount of time to perform their revisions, as well as access to all incoming data and the support of leadership to produce the best analysis possible.
