
From the Kitchen to Prompts – Why Recruitment Can’t See AI Talent

A recruiter once told me something that stuck with me: you can’t really tell from a CV and a job interview whether someone will be good at a given position. Intuition, or whatever you choose to call it, plays a large role. She was talking about ordinary, well-defined roles: traditional positions with lists of responsibilities that have existed for years.

Now imagine that same recruiter receives an application from someone who works kitchen shifts at a restaurant. In their free time, they build natural language processing systems, run a blog about human-AI collaboration, and can go from a raw idea to a working application prototype in a single session with a language model. This person has no computer science degree. No experience at a tech corporation. Their CV reads: “food service worker.”

The question is: will that recruiter even open the CV?

And the second question, perhaps more important: does that person even know they should send it?


Recruitment Is Blind — and That’s Not a Metaphor

Before we even get to the topic of AI, it’s worth understanding the scale of the problem with recruitment itself. A 2025 TestGorilla study, conducted with one thousand HR and recruitment leaders in the US, revealed a picture that’s hard to call encouraging: 58% of recruiters admit they struggle to verify the skills candidates claim on their CVs. 47% cannot effectively assess a candidate’s cultural fit. And 46% consider their current sourcing tools simply ineffective.

But that’s just the beginning. A 2025 SHRM report shows that both the cost of recruitment and the time needed to fill a position have increased over the past three years — precisely the period when companies were rolling out AI tools for HR processes en masse. Nichol Bradford of SHRM put it bluntly: “The AI arms race isn’t benefiting either side.” Candidates use AI to generate perfect CVs and cover letters. Companies use AI to filter those CVs. The result? Both sides are playing a game in which genuine competence gets lost somewhere in the middle.

A Gartner study from March 2025 adds another dimension: only 26% of candidates (in a 1Q25 survey of 2,918 respondents) trust that artificial intelligence will assess them fairly in the recruitment process. At the same time, more than 39% (in a separate 4Q24 survey of 3,290 respondents) admit to using AI themselves when applying — to rewrite CVs, generate answers to recruitment questions, create portfolios. We have a situation in which no one trusts anyone, and everyone is using the same tools to “game” each other.

And it is precisely in this context that the question of AI competence arises — the real, deep, hard-to-measure kind.


Three Lives of Prompt Engineering

To understand where we are, we need to go back to 2023. ChatGPT had just exploded onto the scene. The tech media announced the birth of a new profession: prompt engineer — the “AI whisperer.” Anthropic was posting job listings with salaries reaching $375,000 a year. No computer science degree required (though basic programming skills were needed). Put simply, you just had to know how to talk to a language model. It sounded like a dream.

Searches for “prompt engineering jobs” on Indeed peaked in April 2023. And then they started falling. Consistently.

In a 2025 Microsoft survey of 31,000 employees across 31 countries, the prompt engineer role came in second to last among positions companies plan to create in the next 12–18 months. Jared Spataro, Microsoft’s Chief Marketing Officer of AI at Work, put it plainly: “Two years ago, everyone said prompt engineer was going to be a hot job. But you don’t need to have the perfect prompt anymore.”

Sam Altman, CEO of OpenAI, predicted this even earlier — back in 2022 he said that in five years no one would be doing prompt engineering. Models would become good enough that you could simply say what you want.

And in a sense, he was right. But only in a sense.

Because here is what actually happened: the prompt engineer job title essentially evaporated. But the ability to work effectively with language models didn’t just survive — it became one of the most sought-after competencies on the market. Except no one knows how to measure it, where to look for it, or how much to pay for it.

Allison Shrivastava, an economist at Indeed, put it well: “Prompt engineering as a skill is still definitely valuable. But it’s not a full-time job.” And one commentator at Fast Company compared prompt engineering to being an “Excel expert” or a “PowerPoint guru” — a valuable competency, but not one companies build dedicated roles around. It gets absorbed into other positions.

The problem is that when a competency gets “absorbed” — it becomes invisible. It doesn’t appear in job titles. It doesn’t show up in ATS system filters. It doesn’t make it onto requirements lists, because no one knows how to describe it. And suddenly we have a situation where a company desperately needs someone who can collaborate effectively with AI, but the job posting says “familiarity with AI tools a plus” — which means about as much as nothing.


Paper Ceiling — A Glass Ceiling Made of Paper

In the world of recruitment, there is a term that perfectly describes this problem: the “paper ceiling.” Not glass, not concrete — paper. Because it’s degrees, certificates, and lines on a CV that determine who gets noticed at all.

The data is unforgiving: 87% of employers in the technology sector report difficulty filling positions due to skills gaps. At the same time, formal education requirements continue to narrow the talent pool, blocking access for people who acquired their skills independently or through informal learning paths. LinkedIn notes that job postings dropping degree requirements rose by 36% between 2019 and 2022 — but this is still a minority.

As analysts wrote in the 2025 article “Beyond the CV”: a self-taught programmer from India or a marketer from Nigeria may be passed over simply because their CV lacks prestige — no well-known university, no recognizable company, no familiar career trajectory.

But there is one particularly interesting case: the New York-based non-profit Pursuit. Pursuit prepares low-income workers — people without college degrees, often with no technology experience whatsoever — for careers in IT. Historically, their graduates achieve an average salary increase of 400% — from around $18,000 to $90,000 a year. In 2025, Pursuit launched a new AI-focused training program aimed at the same participant profile.

A four-hundred-percent increase. From manual labor to tech. No MIT degree, no Google internship.

But Pursuit is an exception — an organization that actively builds a bridge between two worlds. Most people who could make that same journey don’t even know the bridge exists. Because how would they?

Someone who has spent their entire career in manual work, who knows that advancement means putting in hard hours on a shift, who has never had any contact with the IT world — how is that person supposed to find out that they can leap from flipping burgers to building systems with prompts? This isn’t a matter of lacking talent. It’s a matter of lacking the information that this path exists and is accessible at all.

The barrier to entering the AI world is not technical — it is informational and social. Nobody tells people working in food service, logistics, or manufacturing: “Hey, what you’re doing with ChatGPT after hours — that is a real, market-valued competency.” And so these people don’t apply. And recruiters don’t know they should be looking for them.


Vibe Coding — New Equalizer or New Illusion?

In February 2025, Andrej Karpathy — a founding member of OpenAI and former head of AI at Tesla — published a post that changed the way people think about programming. He called it “vibe coding”: an approach in which a developer describes a project in natural language, and a language model generates the code. No writing every line by hand. No laboring over syntax. Instead — conversation, iteration, “vibing” with AI.
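
To make the mechanic concrete, below is a minimal sketch of the loop hiding behind that conversation: a natural-language spec goes in, generated code comes out, and the human reviews and re-prompts. It assumes the OpenAI Python client and an illustrative model name; in practice, tools like Cursor or Copilot wrap a loop like this for you.

```python
# A minimal, illustrative vibe-coding loop. The spec, file names, and model
# name are assumptions for this example, not a prescription.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

spec = (
    "Write a complete Python script that reads applicants.csv and prints "
    "the five most common entries in its 'skills' column."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any capable code model works
    messages=[
        {"role": "system", "content": "Return only a runnable Python script."},
        {"role": "user", "content": spec},
    ],
)

# Save whatever the model produced as a first draft.
generated = response.choices[0].message.content
with open("applicant_skills.py", "w") as f:
    f.write(generated)

# In a real session you would now run the script, paste any traceback back
# into the conversation, and ask the model to fix it; that iteration is
# the actual skill.
```

The point is not the few lines of glue code. The point is that the person steering the loop decides what to ask for, how to check the output, and when to push back.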

The term made Merriam-Webster’s trends list. Collins Dictionary named it their Word of the Year for 2025. Searches for “vibe coding” shot up 6,700% in the spring of 2025. Y Combinator revealed that 25% of startups from their Winter 2025 cohort had codebases more than 95% generated by AI.

But what is truly revolutionary about vibe coding has nothing to do with developers. It concerns people who are not developers.

The LIT.AI website described the case of an experienced HR recruiter who used vibe coding to build a professional candidate assessment application for job interviews — in a matter of hours. This person brought their domain knowledge: they understood which questions reveal a candidate’s quality, how to structure an evaluation, which nuances matter. AI handled the code, the interface, the data organization. The result: a production-grade tool that traditionally would have required months of work from a developer team.

This is the moment where the paths intersect. A cook who builds NLP tools after hours. An HR recruiter who creates a candidate assessment application. A logistics worker who automates reporting at their company through a conversation with Claude. These people don’t fit any recruitment template — because the templates don’t exist yet.

Vibe coding promises to democratize software creation. But it also has its darker sides, worth discussing honestly. The METR study from July 2025 — a randomized controlled trial — found that experienced open-source developers were 19% slower when using AI coding tools, despite estimating themselves to be 24% faster. A December 2025 CodeRabbit analysis covering 470 pull requests on GitHub found that AI-co-authored code contained 1.7 times more serious bugs than manually written code — including 2.74 times more security vulnerabilities.

Fast Company, writing in September 2025, described a “vibe coding hangover,” quoting senior engineers who described “development hell” when working with AI-generated code.

So vibe coding is not a magic gateway. But it is a different gateway — one that for the first time in history allows people without a formal computer science education to build real tools. And here the fundamental question arises: is the job market ready for people who walk through that gate?


Where Is the Space for Change?

An experiment conducted by Stanford researchers in collaboration with the University of Southern California sheds light on the direction in which recruitment could evolve. The experiment compared two methods: traditional CV screening versus AI-conducted interviews. Candidates who went through AI interviews achieved a success rate of 53.12% in subsequent human interviews — compared to 28.57% in the traditional group.

That is not a small effect. That is nearly double the effectiveness. And what matters is understanding why this method works: because it evaluates actual competencies rather than declarations on paper. As the authors noted, AI-conducted interviews “minimize the risk of favoring particular backgrounds and level the playing field for non-traditional candidates, career changers, and underrepresented groups.”

SSIR (Stanford Social Innovation Review) proposes an even more radical approach: replacing the CV’s “proxy indicators” with tasks based on real work — samples, simulations, supervised trial projects with standardized evaluation criteria. “A candidate for a data analytics role should demonstrate they can clean messy data, build a basic model, verify the credibility of results, and explain the trade-offs to a non-technical person.”
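
To make that proposal concrete, here is a minimal sketch of what such a work sample might look like in practice, assuming a hypothetical churn dataset; the file name, column names, and choice of logistic regression are illustrative assumptions, not part of the SSIR text.

```python
# Illustrative work-sample task: clean messy data, build a basic model,
# check it on held-out data, and explain the trade-off in plain language.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# 1. Clean messy data: drop duplicates, coerce types, handle missing values.
df = pd.read_csv("customers.csv")  # hypothetical dataset
df = df.drop_duplicates()
df["tenure_months"] = pd.to_numeric(df["tenure_months"], errors="coerce")
df = df.dropna(subset=["tenure_months", "monthly_spend", "churned"])

# 2. Build a basic model.
X = df[["tenure_months", "monthly_spend"]]
y = df["churned"].astype(int)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
model = LogisticRegression().fit(X_train, y_train)

# 3. Verify the credibility of the results on data the model has not seen.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out AUC: {auc:.2f}")

# 4. Explain the trade-off to a non-technical stakeholder.
print(
    "A simple two-feature model is easy to explain but misses interactions; "
    "a more complex model might score higher but is harder to justify."
)
```

The code itself is the least interesting part. What the assessment actually measures is whether the candidate cleans before modeling, validates on unseen data, and can say out loud what the simple approach sacrifices.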

A 2025 Ipsos study for Google reports that 49% of hiring managers are beginning to use formal skills assessments — technical tests, work simulations, fit evaluations. That’s already almost half. But “beginning to use” does not mean “using effectively.” Between intention and practice lies a gap.

And within that gap lie opportunities.

One of them is the portfolio as an alternative to the CV. Not a traditional project portfolio — but a portfolio of the thinking process. A blog documenting collaboration with AI. GitHub repositories showing not just the final result, but the iteration, the debugging, the communication with the model. A record of a vibe coding session where you can see how someone guides AI from an idea to a working prototype.

A second is “live vibe coding sessions” as a new form of job interview. The company Netclues already does this: a candidate is given a problem to solve, uses their AI tool of choice, and solves it live while the recruiter evaluates not just the outcome — but the thinking process, the quality of prompts, the ability to debug, the response to AI errors.

A third — and perhaps most important — is a shift in the narrative about who can work with AI. The prevailing assumption is that AI competency belongs to the domain of programmers and data engineers. But research suggests something different: the defining traits of people who work most effectively with LLMs are problem-solving ability, adaptability, and willingness to learn — not a specific set of technical skills. Universum’s “2025 Talent Outlook” report emphasizes that these “soft” competencies are just as critical as hard technical skills in an AI-augmented environment.

There is also something analysts at SSIR call “latent expertise.” AI can help a student analyze data, a retail worker generate code snippets, and a high school graduate produce professional-grade marketing materials. AI reveals competencies that traditional recruitment systems cannot see.

The question, then, is not whether these opportunities exist. The question is who will be the first to seize them — companies, candidates, or perhaps entirely new market players we don’t yet know about.

Closing

Somewhere in a restaurant kitchen, someone is just finishing their shift, taking off their apron, sitting down at a computer, and starting another session with a language model. They are building a tool no one expected of them. Solving a problem no recruiter has written a job posting for. Creating something no ATS system can identify.

This person has the competencies companies are desperately searching for. But they don’t know they have them — or they don’t know that anyone would pay for them. And companies don’t know they should be looking for exactly these kinds of people.

This is the gap. Not technological, not educational — informational and imaginative. And until someone fills it, people with AI talent will keep vanishing in the dead zone between the world they come from and the world that could value them.


Sources:

  • TestGorilla, 2025 – survey of 1,000 HR and recruitment leaders in the US (betanews.com)
  • SHRM, 2025 – “Recruitment Is Broken. Automation and Algorithms Can’t Fix It.” (shrm.org)
  • Gartner, March 2025 – candidate trust in AI recruitment (unleash.ai)
  • Microsoft, 2025 – survey of 31,000 workers on the future of AI roles (salesforceben.com)
  • Fortune / Indeed, May 2025 – “This six-figure role was predicted to be the next big thing—it’s already obsolete” (fortune.com)
  • Fast Company, May 2025 – “AI is already eating its own: Prompt engineering is quickly going extinct” (fastcompany.com)
  • Robert Half, 2025 – “Building Future-Forward Tech Teams” (roberthalf.com)
  • LinkedIn, 2022 – growth in job postings without degree requirements, 2019–2022 (linkedin.com)
  • Stanford / USC – experiment comparing AI vs. traditional recruitment (weforum.org)
  • Dice, 2025 – “The Rise of Nontraditional Tech Career Paths” (dice.com)
  • Pursuit / Stand Together, 2025 – AI program for workers without degrees (standtogether.org)
  • “Beyond the CV” – AI-powered skills graphs in recruitment (virtualemployee.com)
  • Ipsos / Google, 2025 – “Future-proofing careers in the age of AI” (ipsos.com)
  • SSIR – “A New AI Career Ladder” (ssir.org)
  • Universum, 2025 – “2025 Talent Outlook” – soft competencies in an AI environment (via: skywalkgroup.com)
  • Wikipedia – Vibe coding (wikipedia.org)
  • LIT.AI – Vibe Coding: A Human-AI Development Methodology (lit.ai)
  • METR, July 2025 – randomized controlled trial of developer productivity with AI (wikipedia.org/Vibe_coding)
  • CodeRabbit, December 2025 – analysis of AI vs. human code quality (wikipedia.org/Vibe_coding)
  • Netclues – live vibe coding sessions in recruitment (netclues.com)
  • Anthropic, 2023 – job posting “prompt engineer and librarian,” $280K–$375K (businessinsider.com)
  • Sam Altman, October 2022 – statement on the future of prompt engineering
