How Do Employers Qualitatively Keep Pace in a Sea of Embellished Resumes?
A recent Financial Times article estimates that around half of job applicants now routinely use AI tools, such as ChatGPT and Gemini, to apply for roles. The statistic is staggering, and it points to a growing threat to the integrity of hiring processes, particularly for technology roles.
Candidates are increasingly using AI tools to write cover letters and resumes, and even to answer assessment questions, in hopes of differentiating themselves from other applicants. These AI-powered resumes highlight impressive-sounding achievements or inflated certifications, and the cover letters are written in polished language, making it increasingly difficult for recruiters to distinguish genuinely qualified candidates from those relying on AI to embellish their qualifications and abilities.
Dan (real name withheld) shared his experience using one of these AI-powered application services, LazyApply, to submit applications for about 200 managerial quality assurance roles. While the app saved him considerable time, allowing him to apply for hundreds of positions with little effort on his end, he admitted that the response rate from employers was comparable to when he applied manually through an applicant tracking system (ATS). This highlights a key issue with the growing popularity of these AI tools: applicants flood the system with large volumes of applications without significantly improving their chances of actually being interviewed or hired. The one guaranteed outcome, though, is a contribution to the deluge of low-quality submissions that recruiters must sift through, making the hiring process more challenging and time-consuming for the talent acquisition team.
In roles such as Scrum Master, Business Analyst, and Project Manager, the use of AI to craft CVs, often complete with fabricated work experience and certifications, has become increasingly common.
This influx of unqualified candidates inevitably wastes time and resources, as HR professionals struggle to filter out misleading applications and identify authentic talent in the flood of submissions.
This artificial competition dilutes the talent pool, forcing skilled job seekers to compete with individuals whose credentials may not reflect true expertise.
When an unqualified individual using an AI-generated application goes undetected and is hired, the consequences are severe. If an underqualified Project Manager or Business Analyst is hired, for example, their lack of genuine experience may lead to missed deadlines, inefficient workflows, or poorly managed teams. Projects can suffer from mismanagement, delays, and poor execution, directly impacting company performance. The downstream effect of these mistakes can be costly, both financially and operationally. Organizations that mistakenly hire unqualified individuals may also face reputational damage, particularly in industries where trust and expertise are critical. Arleigh Lane, a biopharma recruiting consultant who has worked with prominent companies like Moderna and the Gates Medical Research Institute, agrees:
“The flood of unqualified applications is definitely creating a lot of extra work. AI has made formatting resumes and applying to jobs almost effortless, leading to a massive increase in unqualified applications for companies to manage.”
Within the job market as a whole, the increasing popularity of AI-generated applications has the potential to drastically shift the way candidates are assessed, as previously valued markers of competence become more difficult to trust. This may also lead to a greater disparity between socio-economic classes and greater hurdles for certain demographic groups, as barriers to accessing these AI tools vary and those without access may fall behind.
Financial and Operational Impact on Employers
The financial impact of hiring these unqualified candidates goes beyond the initial onboarding costs. Organizations often invest substantial resources in training and development for new hires, and when it becomes clear that a candidate lacks the expertise they claimed, the cost of rehiring and retraining compounds the burden.
According to some studies, a bad hire can cost up to 30% of the employee's first-year salary; for a role paying $100,000, that is as much as $30,000 before lost productivity is even counted. The figure can be especially significant in fast-moving, high-stakes industries like technology.
Operationally, the risks are even more pronounced. Projects dependent on skilled professionals can suffer significant delays, resulting in missed market opportunities and further lost revenue. In industries built on client relationships, poor performance from under-qualified employees can erode those connections, leading to further financial losses and dissolving trust in the organization's capabilities. As a result, employers may find themselves spending significant amounts of time managing the fallout from poor hires rather than focusing on growth and innovation.
Ethical and Long-Term Consequences
Beyond the immediate challenges, the widespread use of AI in job applications raises broader ethical concerns. For job seekers, relying on AI to embellish their qualifications can erode accountability and foster a culture where cutting corners is accepted. While there have always been those who embellish a job application, these AI tools have made it much easier to do so convincingly, so falsified applications often pass under the radar. This not only undermines the value of genuine experience and skill, but also sets a competitive standard for future job seekers, who may feel pressured to do the same. The widespread adoption of these tools, combined with the ease of automated applications and the resulting flood of submissions, has inundated many employers' applicant tracking systems to the point where they are no longer useful.
For employers, the ethical implications of rejecting candidates who use AI in any form must also be considered.
Where should the line be drawn between using these tools for assistance in minor tweaks like grammar correction or formatting, and outright deception powered by AI?
It’s essential to strike a balance between encouraging innovation and maintaining the integrity of the hiring process, and this line may be difficult to pin down as these AI tools continue to evolve.
Long-term, if the trend of AI-generated applications continues unchecked, it could lead to a shift in how employers assess qualifications. With many candidates potentially inflating their resumes using AI, traditional markers of competence, such as degrees and certifications, may hold less weight. This could result in an increased reliance on real-world assessments and practical demonstrations of skill during the hiring process. It may also exacerbate socio-economic disparities as financial and technological access to AI tools varies among different demographic groups.
Addressing the Challenges: Recommendations for the Future
To counter the flood of AI-generated applications and the risks posed by unqualified candidates, companies must adapt their hiring processes to include more rigorous screening.
- Skill-based assessments are essential.
These assessments allow companies to test candidates on real-world tasks that they'll be expected to perform in their roles, providing an immediate indicator of whether they possess the expertise they claim. For technical roles, this might include coding challenges, case studies, or project simulations.
- Behavioral interviews can help employers assess problem-solving abilities, adaptability, and critical thinking skills.
These interviews are particularly useful in determining whether a candidate has the practical experience to back up the qualifications listed on their resume. Asking candidates to walk through specific challenges they've faced in previous roles can quickly reveal gaps in knowledge that may have been obscured by polished, AI-crafted CVs. For a further exploration of behavioral interviews, check out another article from Issue 4: “In Person Interviewing: A Relic of the Past?”
- Consider leveraging AI detection tools to help identify fraudulent CVs or work experience.
If you can’t beat them, join them. AI detection tools can help flag inconsistencies in resumes, or detect patterns of language and formatting that are commonly used in AI-generated applications. They can also assist in filtering out generic resumes or those that rely heavily on buzzwords without substantiating real expertise; a simple sketch of what that kind of screening might look like follows this list.
- Thorough reference checks are another key safeguard.
Contacting previous employers to verify job titles, responsibilities, and duration of employment can help confirm whether a candidate truly has the experience they claim. Similarly, credential verification should be a consistent part of the hiring process, particularly for candidates who list certifications or advanced degrees. This step is crucial for technology roles in particular, where specific qualifications, like Scrum Master certifications, are often required.
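To make the screening idea above concrete, here is a minimal, hypothetical sketch in Python of the kind of heuristic a detection tool might apply: flagging resumes that lean heavily on stock buzzwords while offering little concrete, verifiable detail. The buzzword list, thresholds, and function names are illustrative assumptions rather than any vendor's actual product; a production tool would rely on far richer signals or a trained model.

```python
import re

# Illustrative stock phrases; a real tool would use a larger, data-driven list.
BUZZWORDS = {
    "results-driven", "dynamic", "passionate", "synergy", "leverage",
    "cutting-edge", "thought leader", "proven track record", "detail-oriented",
}

def screen_resume(text: str) -> dict:
    """Return simple signals suggesting a resume may be generic or AI-padded."""
    lowered = text.lower()
    words = re.findall(r"[a-z']+", lowered)
    word_count = max(len(words), 1)

    # Buzzword density: how much of the text is boilerplate phrasing.
    buzzword_hits = sum(lowered.count(phrase) for phrase in BUZZWORDS)
    buzzword_density = buzzword_hits / word_count

    # Evidence signals: numbers and percentages tend to indicate specific,
    # verifiable accomplishments rather than generic claims.
    evidence_hits = len(re.findall(r"\d+(?:\.\d+)?%?", text))

    return {
        "buzzword_density": round(buzzword_density, 4),
        "evidence_hits": evidence_hits,
        "flag_for_review": buzzword_density > 0.02 and evidence_hits < 3,
    }

# Example with a deliberately generic, metric-free snippet.
sample = (
    "Results-driven, dynamic professional with a proven track record of "
    "leveraging synergy to deliver cutting-edge solutions."
)
print(screen_resume(sample))
# Output resembles:
# {'buzzword_density': 0.3529, 'evidence_hits': 0, 'flag_for_review': True}
```

Even a crude signal like this should only be a triage aid: it can route applications for human review, but it should never auto-reject them, since legitimate candidates can also write in a polished, generic style.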
While AI has revolutionized many aspects of the job application process, its misuse is also creating significant challenges for both candidates and employers. As the presence of AI continues to grow and evolve, organizations will need to adopt more sophisticated screening methods, promote transparency, and engage in a broader conversation about the ethical use of technology in recruitment to protect the integrity of the hiring process. Without these measures, the long-term consequences of AI-driven applications could diminish trust in the job market, and make it increasingly difficult for companies to identify and hire the talent they need to succeed.