Are You Practicing “Bad Data Science” with Your Pre-Hire Talent Assessments?
By Greta Roberts, Jul 27, 2017
Talent Analytics uses data gathered from our own proprietary talent assessments as an input variable to predict hiring success – pre-hire. We treat this dataset just like any other dataset in our predictive work. We are careful to analyze it for a strong (or weak) correlation to actual job performance. Our theory? If there is no correlation between the data gathered this way and actual job performance, our clients should stop using it. Continuing without proof of success would be a little like a doctor who “knows” a certain medication doesn’t work but keeps encouraging patients to take it. Malpractice, at the very least.
Like any strong predictive solution, ours uses the most current predictive analytics methods a top data scientist would apply to any dataset, looking for strong patterns in human attributes that predict either lasting in a role or achieving a KPI such as sales performance, calls per hour, balanced cash drawers, customer satisfaction scores, or error rates.
Our methodology includes training datasets, validation datasets, and extensive cross-validation, all of which lead to the highest level of rigor: Criterion Validation of our talent assessment. Criterion Validation establishes the correlation between specific assessment characteristics and specific performance in the role.
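The idea of holding data out for validation can be sketched in a few lines of Python. This is a minimal illustration with invented numbers, not our actual pipeline: a simple one-variable model is fit on most of the employees and scored on the held-out fold, repeated k times, so the error estimate reflects people the model never saw.

```python
import random
import statistics

# Hypothetical data: 40 employees, each with an assessment score and a
# post-hire KPI (e.g., monthly sales). All numbers here are invented.
random.seed(0)
scores = [random.uniform(0, 100) for _ in range(40)]
kpis = [0.5 * s + random.gauss(0, 5) for s in scores]  # score weakly drives KPI

def fit_line(xs, ys):
    """Ordinary least squares fit for y = a*x + b."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return a, my - a * mx

def cross_validate(xs, ys, k=5):
    """k-fold cross-validation: hold out each fold in turn, fit on the
    rest, and report mean absolute error on the held-out employees."""
    idx = list(range(len(xs)))
    random.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    errors = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        a, b = fit_line([xs[i] for i in train], [ys[i] for i in train])
        errors.extend(abs(ys[i] - (a * xs[i] + b)) for i in fold)
    return statistics.fmean(errors)

print(f"cross-validated MAE: {cross_validate(scores, kpis):.1f}")
```

If the cross-validated error is no better than guessing the average KPI, the assessment carries no usable signal for that role.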
If your business can show that your talent assessments accurately increase your hiring success, then it clearly makes sense to continue using them. If you can’t – what’s the point?
I am stunned by how few businesses (or assessment vendors) take the time to analyze their talent assessment dataset to see if it provides any positive or negative value.
We recently evaluated another vendor’s solution to see if it accurately predicted customer service scores (pre-hire) among bank tellers. It was predictive – but negatively so: when their assessment said someone would deliver great customer service and flagged them as a “hire,” the new hire actually ended up with low customer service scores. We analyzed the predictions, the individual assessment scores, and the actual customer service scores new hires received after they were hired.
How do you know if you’re practicing good data science with your talent assessments?
1. Ask your talent assessment vendor for access to the raw assessment scores so you can analyze how those scores compare to the actual performance they claim to “predict.”
2. Ask your vendor whether their assessments are Criterion Validated. If so, how often? If not, ask how they know the assessments work.
3. Once you have the raw talent assessment scores, ask your workforce analytics team to check for a correlation between any of the scores and either length of time in a role or performance KPIs.
4. If the analysis shows that nothing positive is being predicted, stop using the assessments immediately.
5. If you’d like assistance with pre-hire testing, look for a solution that is Criterion Validated and uses modern data science to prove its usefulness.
Pre-hire assessments can be a powerful dataset for learning more about your job candidates. Used as part of a responsible data science initiative, they can often predict the probability of someone lasting in a role, or performing against very specific KPIs. Used irresponsibly, they introduce bias, waste time, and – worst of all – impose a significant cost on your organization, both in the fees you pay to use them and in the bottom-performing employees they help you hire.
About the author
Greta Roberts is an influential pioneer of the emerging field of predictive workforce analytics where she continues to help bridge the gap and generate dialogue between the predictive analytics and workforce communities.
Since co-founding Talent Analytics in 2001, CEO Greta has established the firm as the recognized leader in employee predictions, both pre- and post-hire, on the strength of its powerful predictive analytics approach and its innovative Advisor™ software platform, designed to solve complex employee attrition and performance challenges. Greta has a penchant for identifying strategic opportunities to innovate and stay ahead of the curve, as evidenced by the firm’s early direction to use predictive analytics to solve “line of business” challenges instead of “HR” challenges, and to model business outcomes instead of HR outcomes.
In addition to being a contributing author to numerous predictive analytics books, she is regularly invited to comment in the media and to speak at leading predictive analytics and business events around the world.
Follow Greta on Twitter @GretaRoberts.