Data Models seem to be all the rage right now. Everything has a model. Can’t decide which home to build? Go look at some builder models. Worried about your retirement? A financial advisor can personalize a model for you.
According to Princeton University, a data model organizes data elements and standardizes how those elements relate to one another. Since data elements document real-world things and the events that connect them, the data model should represent reality.
Why are data models so popular? Likely because a data model provides a visualization of what something looks like when it’s right—the optimal and ideal situation. Everything would be alright if we could only make our tricky, bedeviling situation look like that model.
But what makes a data model "right"? A model is "right" when we can use it to repeat a process over and over and get a predictable, satisfying outcome. The home model with the open concept and the first-floor master is exactly what many home buyers are looking for.
We can build hundreds of these homes at a profit and have a lot of satisfied customers. Repeatable, scalable, and reliable output is what makes a data model work.
That’s great for tangible things like homes and intangibles like financial plans, but what about people? What about employees? Each human being is unique and unpredictable. That’s true, but when it comes to the hiring process, we use models anyway.
We may not call it a model, and it may not be rigorously structured, but we look for certain things to be present when we hire someone. It might be the amount of education or work experience. It might be personality or disposition. It might even be the availability of the candidate. The point is, we’re looking for the presence or absence of things that fit our personal model for a good hire.
But there are potential problems with this model. It’s not standardized. Every hiring manager will have a different model. Some will be very similar, but some will be drastically different. It doesn’t accomplish our objective of repeatable, scalable, and reliable output.
One approach tried by many employers is to specify fairly narrow requirements for a particular job. Maybe we require five years of comparable experience, a defined set of skills, a certain level of education, and so on. A more sophisticated approach might analyze the words and linguistic patterns in the resume or application, on the theory that the presence of certain words and phrases signals a temperament that would be just perfect for our job.
We build a talent acquisition data model to find these specifications in new candidates. As new applications and resumes come in, we automatically analyze them to find which ones most closely match our job specifications. The strongest matches bubble up to the top of the stack, and our hiring managers will make a great hire if they just select one of the closest matches.
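That matching step can be sketched in a few lines. Everything below is hypothetical (the job keywords, the resume snippets, and the crude substring scoring); real applicant-tracking systems use far richer parsing, but the ranking idea is the same:

```python
# Minimal sketch of keyword-based resume ranking. All data is invented.

def score_resume(resume_text, keywords):
    """Count how many required keywords appear in the resume text."""
    text = resume_text.lower()
    return sum(1 for kw in keywords if kw.lower() in text)

def rank_candidates(resumes, keywords):
    """Return candidate names sorted by keyword score, best match first."""
    scored = {name: score_resume(text, keywords) for name, text in resumes.items()}
    return sorted(scored, key=scored.get, reverse=True)

job_keywords = ["Python", "SQL", "retail", "scheduling"]
resumes = {
    "Avery": "Five years of retail scheduling experience; knows SQL.",
    "Blake": "Warehouse associate, forklift certified.",
    "Casey": "Python and SQL developer with a retail scheduling tools background.",
}

print(rank_candidates(resumes, job_keywords))  # ['Casey', 'Avery', 'Blake']
```

The strongest matches "bubble up" simply because they score highest, which is exactly why the technique is so easy to game, as discussed below.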
A good job done by all, and now we're getting somewhere. Right?
Probably not. How did we develop these selection criteria?
We worked with our recruitment team and hiring managers to identify the primary traits and characteristics of the people we hired previously. The automatic analysis means that we don’t have to spend valuable time screening candidates.
That’s a big win, isn’t it? We’re making it easier and faster to hire the way we’ve always hired.
But we said we didn’t like the way we were hiring. We wanted to do better. Our employee turnover is way too high. Our new automated system is allowing us to make bad hires faster.
There’s another problem. Candidates are pretty smart, and whether we like it or not, they can often figure out our process for ranking candidates. Just as they figured out video interviewing, they can figure out your keyword system. One popular trick is to copy your job description onto their resume.
But wouldn’t that be too easy to spot? Not if they paste the text in white font. You can’t see it, but the computer sure can. With that technique, voilà, the candidate becomes your perfect match.
OK, let’s stop bashing talent acquisition data models. They are very valuable, and there is absolutely a right way to construct and use your model to get that repeatable, scalable, and reliable output. But you have to set your sights on something different. You have to know and understand the traits and characteristics of your best-performing employees in each job type. Those are your quality hires. That’s the reliable and satisfying output you’re looking for.
Improving the quality of hire will have a dramatic positive impact on your business. By reducing turnover and increasing productivity, your company’s costs will decrease, and revenue will increase. A more experienced workforce translates to increased revenue due to a better customer experience. Fewer mistakes and lower recruiting costs lead to reduced operating expenses.
So how do we get this different talent acquisition data model? Leverage your historical data for both candidates and employees. Your best-performing employees were candidates at one time. What did their qualifications look like when they were a candidate? How many of your current candidates have the same profile for that position? Those are the candidates with the highest potential to stay longer and perform better.
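One way to picture this is to treat the at-hire attributes of your proven high performers as a benchmark and measure how closely each new candidate matches it. The sketch below is purely illustrative; the field names, values, and the simple distance metric are all invented assumptions:

```python
# Hypothetical benchmark: median at-hire attributes of top performers in a role.
HIGH_PERFORMER_PROFILE = {
    "years_experience": 4,
    "certifications": 2,
    "distance_miles": 10,
}

def profile_distance(candidate, benchmark):
    """Sum of absolute differences across attributes; smaller = closer match."""
    return sum(abs(candidate[k] - benchmark[k]) for k in benchmark)

candidates = {
    "A": {"years_experience": 5, "certifications": 2, "distance_miles": 8},
    "B": {"years_experience": 1, "certifications": 0, "distance_miles": 40},
}

best = min(candidates, key=lambda n: profile_distance(candidates[n], HIGH_PERFORMER_PROFILE))
print(best)  # "A" — the candidate who most resembles past quality hires
```

In practice the attributes would be weighted and learned from data rather than compared with a flat distance, but the intuition carries over: rank candidates by resemblance to the people who actually stayed and performed.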
Machine learning is highly effective and critical to the success of an employee data model. You have a lot of data on every candidate. In addition, there is the potential to get a lot more derived data. For example, even though the candidate doesn’t specify the distance between their residence and the job location, that data element can be easily derived by comparing zip codes.
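The zip-code example can be made concrete with a small sketch. The centroid coordinates below are approximate and hypothetical; a production system would use a full geocoding dataset, but the derivation step looks roughly like this:

```python
import math

# Approximate (lat, lon) centroids for two example zip codes (hypothetical table).
ZIP_CENTROIDS = {
    "27601": (35.7740, -78.6340),  # Raleigh, NC area
    "27513": (35.8032, -78.8009),  # Cary, NC area
}

def haversine_miles(a, b):
    """Great-circle distance in miles between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 3959 * 2 * math.asin(math.sqrt(h))

def commute_distance(candidate_zip, job_zip):
    """Derive a commute-distance feature the candidate never had to supply."""
    return haversine_miles(ZIP_CENTROIDS[candidate_zip], ZIP_CENTROIDS[job_zip])

print(round(commute_distance("27513", "27601"), 1))  # roughly 9-10 miles
```

The candidate supplied only a zip code, yet the model now has a numeric commute-distance feature to learn from.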
You have tons of data, but don’t worry about which data elements are most important. The machine learning algorithms do all of the heavy lifting for you, not just some of it. Machine learning analyzes every bit of data to calculate which candidate data elements correlate to high-performing employees. The correlation may differ from job to job, but again, the algorithms figure that out automatically.
The champion algorithm will evaluate every candidate in seconds, instantly informing your recruiters and hiring managers which ones are most likely to become quality hires.
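At its simplest, "which data elements correlate to high performers" is a statistical question. The toy example below uses invented data and a plain Pearson correlation rather than a full machine-learning pipeline, but it shows the kind of signal such a system extracts:

```python
# Illustrative only: measure how candidate attributes correlate with later
# performance. Data is fabricated; real systems learn this from hiring history.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One entry per past hire: at-hire attributes and whether they became a top performer.
years_experience = [1, 4, 5, 2, 6, 3]
distance_miles   = [30, 8, 5, 25, 6, 20]
top_performer    = [0, 1, 1, 0, 1, 0]

print(round(pearson(years_experience, top_performer), 2))  # strong positive
print(round(pearson(distance_miles, top_performer), 2))    # strong negative
```

In this made-up dataset, experience correlates positively with performance and commute distance negatively; a real model would weigh hundreds of such signals per job type, automatically.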
There’s more good news. These algorithms keep learning and adapting to environmental changes. As time progresses, the machine learning system will learn more and more to make the model even more accurate and reliable.
We said earlier that a model is “right” when we can use it to repeat a process over and over and get a predictable, satisfying outcome. There’s nothing more satisfying than making a positive, tangible impact on the success of your business. Get your employee data model “right” today and start reducing your employee turnover while increasing your company’s revenue.
To learn about precision talent acquisition data modeling, explore Decision Point—the new AI hiring tool from Cadient Talent. Or watch this short demo of the Decision Point Dashboard to see how it works.