Behind Every Hire

Metaview Sourcing Review: A Recruiter’s Honest Playbook

Tarek Mahdy

26 Mar 2026

One day, while I was checking a recorded intake call, a notification popped up.

My eyes instantly glazed over.

Yes, another notification.

A prompt to try Metaview’s new sourcing agent.

Not much was needed from me. One click. And that recently recorded intake call became the basis of the search. Metaview’s agent created the criteria, searched, filtered, and refined, then handed me 50 candidates to start with.

And it did not stop there.

It asked me to start giving feedback right away. If a candidate looked good, I could say why. In simple English.

This is where this kind of tech gains an edge. It understands nuance. It understands plain language. It understands “simple words”. It turns sourcing into a breeze.

It’s these little moments when sourcing becomes fun again.

Metaview brought back the fire.

So I wanted to write this piece, but I also wanted to go beyond my own experience. I went hunting for people talking about it outside of Metaview. My rule was simple: this needs to be recruiter-friendly, and it needs to be a playbook you can use without fooling yourself.

Sourcing fails in the same few places.

Not because recruiters are bad.

Because the work gets messy fast.

You see it daily. Intake calls feel like a conversation, not a spec. The brief shifts mid-week. Hiring managers reject profiles without giving a useful signal. Recruiters build lists… and then those lists get neglected (pain point, yeah?). “Good candidate” means something different to each interviewer. The ATS becomes a graveyard of notes nobody reads.

If an AI sourcing tool helps, it needs to improve one thing.

Your speed to clarity.

Not your speed to a bigger list.

Because a bigger list can still be a wrong list. And wrong lists burn everyone out. Recruiters. Hiring managers. Candidates.

The part that surprised me was not “wow it found candidates”.

Every tool finds candidates.

The surprise was this: it started with an intake call.

Not a polished JD. Not a recruiter guessing game. Not me doing the usual dance of “let me translate this vague brief into a Boolean that sort of works”. It took what was said, then turned it into something usable. Refined search criteria that actually looked like a recruiter wrote them.

And then it did the thing we all do manually. It created a few different angles, tested them in parallel, adjusted based on what came back, and narrowed down with feedback.

Except it did that without me opening 12 tabs and losing my mind.

That is the shift.

Not automation.

The shift is: sourcing starts to feel like a conversation again.

Metaview positions it as an autonomous sourcing agent. Plain version: you give it context, it drafts an Ideal Candidate Profile, you refine it, it searches and returns candidates, you rate candidates yes, maybe, or no, you leave feedback, and it reruns with that feedback.

It also talks about “Deep Research”.

That feels more like market mapping, not just “find me people”. More like: show me how the market looks, show me clusters, show me patterns, help me build a smarter plan before I burn a week sourcing the wrong lane.

So in my head, it’s two workflows.

Find candidates.

Understand the market so you stop guessing.

This was the fun part.

It did not run one search. It ran multiple. Parallel strategies. Different angles. Different assumptions. Then it told me what it was doing and why.

That alone changes the experience. Because sourcing is not only about results. It is also about trust. If I cannot tell how you got the shortlist, I will not trust it. And I will not defend it to a hiring manager.

So seeing the steps matters. It feels like a teammate sitting next to you saying, “I tried these five paths, here’s what each one gave me, and here’s what I will do next.”

That is the mental model.

This is the part I want recruiters to take seriously.

Because most teams say they want feedback loops, then they give feedback like:

“not relevant”

“too senior”

“no”

That is not feedback. That is rejection. And rejection teaches nothing.

What Metaview prompted me to do was different. It nudged me to explain “why”. In plain English. And that is where the leverage is.

Because sourcing success is mostly calibration, not search. Search is easy. Calibration is the hard part.

So when you give feedback, give usable feedback. Stuff that actually moves results:

Same stack, but more product ownership, less platform-only work.

Keep frontend depth, drop design-systems-only profiles.

We need someone who shipped features end to end, not someone who only maintained.

Keep the seniority, but move away from big orgs, we want small-team shipping.

Same domain, but more B2B, less consumer.

Keep React, but add TypeScript depth, not just UI assembly.

This kind of feedback does two things. It helps the tool rerun better, and it forces you to get crisp on what you mean.

And that second one is the real win.

Because sometimes the problem is not sourcing. Sometimes the problem is that nobody can describe “good” consistently.

It removes the worst parts. The repetitive parts. The parts that feel like punishment.

But it does not remove the recruiter job.

If anything, it shifts the recruiter job back to what it should be. Asking better intake questions. Turning vague wants into real constraints. Translating hiring manager language into candidate language. Protecting candidate experience. Building trust inside the hiring team. Spotting patterns across the market.

This tool still needs a recruiter brain.

It just stops wasting that brain on mechanical work.

That is the best case.

If you want to test Metaview Sourcing properly, do not throw ten roles at it randomly. Pick one role where sourcing is painful.

Painful as in: the brief changes a lot, the market is noisy, hiring managers are picky, you need speed and quality, and you do not have time for five days of search experiments.

Then follow this flow.

Start with intake. Do not let it be a casual chat. You want outputs.

Ask for what success looks like in 90 days. What work they will own. What skills are non-negotiable. What can flex. What backgrounds failed before and why. What traits matter on this team. What they mean by “senior”.

If you cannot get these answers, no tool will save you.

It will just rerun forever.

Then write the ICP. Not a JD rewrite. A real ICP. Short. Sharp. Testable.

I write ICPs like this: must have, nice to have, deal breakers, trade-offs we accept, team context, and the one line that defines “why this role exists”.

If the ICP is good, sourcing is easy.

If the ICP is vague, sourcing becomes a slot machine.

When the first 50 come in, do not reject fast. Calibrate fast.

Pick 5 you like. Pick 5 you do not. Then explain why. Not long paragraphs. Just clear signals.

This gives the tool something to work with. It also gives your hiring manager a chance to react to patterns, not random profiles.

Using interview signal is a power move.

When you interview someone great, you finally have real signal: how they think, how they communicate, how they handle trade-offs, what they owned, what kind of work they avoid.

That signal does not live on LinkedIn.

So if the tool can use interview context to find lookalikes, that is where it gets scary useful. Because now your sourcing is grounded in reality. Not in titles. Not in keywords. In actual performance signals.

These are the kinds of instructions that make sourcing smoother.

Role: Senior Product Engineer Frontend.

Must have: React + TypeScript depth in real production, shipped features end to end, worked with product, not only tickets.

Nice to have: experience in fast moving startup environments, some design systems exposure, but not only that.

Red flags: only maintenance work, no ownership, very narrow scope.

Give me 15 candidates. For each candidate: 3 reasons they match, 3 risks.

Update the criteria: drop “big tech preference”, add “shipped in teams under 20”, prioritize “handled unclear requirements”, lower years, keep depth.

Rerun. Keep quality high.

Find similar candidates to the person we just interviewed. Focus on ownership level, product thinking, communication clarity, shipping cadence. Do not overfit on title.

One thing I respect is that Metaview is clear that sourcing profiles come from publicly available professional info via third-party providers.

That means you treat it like any sourcing channel. You should be transparent in outreach.

Simple line. No drama.

“I came across your profile through public professional sources while searching for X. If you want me to remove your details from my search notes, tell me and I will.”

This is not only compliance.

It is respect.

And respect is the only scalable candidate experience strategy.

Metaview Sourcing does not magically make sourcing perfect.

It makes sourcing feel alive again.

Because it turns it back into what it should be: a loop, a conversation, a calibration engine. Not a lonely recruiter wrestling filters at 1 AM.

And if you are reading this thinking “ok but will it actually work for my roles”, I would frame it like this.

If your biggest problem is “I cannot find anyone”, this might help. If your biggest problem is “we cannot agree what good looks like”, this will help even more.

Because it forces clarity.

And clarity is the real sourcing cheat code.
