Diligence FAQ

Why now? What changed that makes this feasible?

Models can now turn messy narrative into structured claims at scale, and conversational interfaces let users pressure-test context asynchronously—so we can create decision-ready representations instead of keyword artifacts.

What’s the wedge—what do you ship first that people use weekly?

Phase 1 is shareable role/profile agents with clarifying loops: they answer real questions, expose missing context, and continuously improve the underlying structured claims.

What’s your initial ICP and first role archetype?

Post-seed startups with urgent hiring needs; start with one high-cost, high-mismatch archetype (e.g., senior technical roles) where screening time and false positives are especially painful.

How do you distribute—what’s the repeatable channel?

Shareable agent URLs embedded in job posts and applications are the primary loop; agent Q&A creates long-tail SEO pages; founder-led outreach seeds the first high-trust inventory.

How do you de-risk distribution if this requires new behavior (people clicking and interrogating agents)?

We treat distribution as a measurable funnel, not an assumption. In Phase 1, agents are embedded directly into existing workflows (job posts, applications, outreach) as a lightweight “ask anything” layer. We instrument link surfaces with optional tags (job post vs application vs outreach vs referral) and track the full loop: external reach (unique non-owner visitors), external engagement (question submission rate), depth (prompts per agent), and responsiveness (follow-ups resolved promptly).

If a surface drives views but not questions, it’s not working; we iterate on the “click reward” and placement until non-owner interrogation becomes repeatable. We only advance to Phase 2 once this loop is stable week-over-week and no longer dependent on manual pushes.
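As a minimal sketch of that instrumentation (the session schema and field names are illustrative assumptions, not our actual event model):

```python
from dataclasses import dataclass

@dataclass
class AgentSession:
    visitor_id: str
    agent_id: str
    is_owner: bool            # did the agent's author open it themselves?
    surface: str | None       # "job_post" | "application" | "outreach" | "referral"
    submitted_question: bool
    prompts: int              # prompts sent during this session
    followups_asked: int      # follow-ups the author owed after this session
    followups_resolved: int   # of those, answered within the SLA window

def funnel_metrics(sessions: list[AgentSession]) -> dict[str, float]:
    """Compute the Phase 1 loop: reach -> engagement -> depth -> responsiveness."""
    external = [s for s in sessions if not s.is_owner]
    reach = len({s.visitor_id for s in external})             # unique non-owner visitors
    engagement = sum(s.submitted_question for s in external) / max(len(external), 1)
    agents = {s.agent_id for s in external} or {"none"}       # avoid divide-by-zero
    depth = sum(s.prompts for s in external) / len(agents)    # prompts per agent
    asked = sum(s.followups_asked for s in external)
    responsiveness = sum(s.followups_resolved for s in external) / max(asked, 1)
    return {"reach": reach, "engagement": engagement,
            "depth": depth, "responsiveness": responsiveness}
```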

How do you solve the marketplace cold start?

Single-player value first: public agents generate inventory, distribution, and structured signal without matching; matching turns on only after liquidity is proven by shortlist reliability.

Why will candidates create profile agents before matching exists?

It’s a reusable “living application” that reduces repeated effort, answers hiring questions asynchronously, and improves evaluation outcomes even outside the marketplace. Shared alongside a traditional application, the agent also differentiates the candidate by giving employers a faster, higher-signal way to evaluate fit before scheduling interviews.

What’s the activation trigger for Phase 2 matching?

Enable matching once a target role can consistently receive N qualified Match Briefs within T days across 3+ role archetypes, week over week for four consecutive weeks. Our Go-to-Market Strategy writeup has all the details.
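Expressed as a predicate, the trigger might look like the sketch below; `n` stands in for the N above, and the per-archetype counts are assumed to already be filtered to qualified Match Briefs delivered within T days:

```python
def matching_ready(weekly_counts: dict[str, list[int]], n: int) -> bool:
    """weekly_counts maps role archetype -> qualified Match Briefs delivered
    within T days, one entry per week (most recent last)."""
    qualifying = [
        archetype for archetype, weeks in weekly_counts.items()
        if len(weeks) >= 4 and all(count >= n for count in weeks[-4:])
    ]
    # Trigger: 3+ archetypes clear the bar for four consecutive weeks.
    return len(qualifying) >= 3
```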

Who pays, and why will they pay early?

The hiring manager/founder pays first because it replaces screening hours and accelerates time-to-qualified-conversation; as usage scales, the buyer shifts to Head of Talent/HR leaders.

What’s the pricing model?

Companies pay per open role (subscription) or via a time-boxed per-role package; risers (paid add-ons) increase parallel match capacity on high-fit Match Briefs only, not raw volume; candidates are free.

Doesn’t throughput pricing incentivize spam?

Risers increase concurrency on qualified Match Briefs and remain constrained by quality thresholds; companies can’t buy their way into flooding pipelines.
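One way to express that constraint in code (a sketch only; the slot model and fit threshold are assumptions):

```python
def allowed_concurrency(base_slots: int, riser_slots: int,
                        brief_fit_scores: list[float],
                        fit_threshold: float = 0.8) -> int:
    """Risers add parallel match slots, but only qualified (high-fit)
    Match Briefs may fill them, so paying more can't flood a pipeline."""
    qualified = sum(1 for score in brief_fit_scores if score >= fit_threshold)
    return min(base_slots + riser_slots, qualified)
```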

How do you avoid becoming “another inbox of leads”?

We constrain throughput and deliver only qualified Match Briefs; the system is designed for shortlist compression and conversation completion, not volume.

How do you know the claims are true and not polished fiction?

Claims are elicited through structured prompts and consistency checks, and pressure-tested through context interrogation; low-confidence or contradicted claims are surfaced and corrected through follow-ups and outcome feedback.
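A minimal sketch of what a structured claim and its consistency state could look like (field names and the confidence cutoff are illustrative assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str                      # e.g. "Led migration of the payments stack"
    confidence: float              # elicitation confidence, 0..1
    supporting_answers: list[str] = field(default_factory=list)
    contradictions: list[str] = field(default_factory=list)

    def needs_followup(self, min_confidence: float = 0.6) -> bool:
        # Low-confidence or contradicted claims are surfaced for clarification.
        return self.confidence < min_confidence or bool(self.contradictions)
```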

What data do you need to win, and how do you get it?

We need structured claims, interrogation transcripts, and downstream outcomes; Phase 1 agents generate this directly from real questions and clarifications, not synthetic labeling. We can also bootstrap and calibrate early models using licensed third-party datasets (e.g., corpora of job descriptions and resumes), while treating real in-product interactions as the primary source of compounding signal.

How do you handle privacy/PII with public agents?

PII is blacklisted by default, sensitive disclosures are gated by the author, and the system enforces safe defaults and abuse monitoring to prevent leakage or misuse.
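As an illustrative sketch of that fail-closed posture (the categories and policy values are assumptions, not the real schema):

```python
DEFAULT_POLICY = {
    "email": "blocked",                  # PII blacklisted by default
    "phone": "blocked",
    "current_employer": "author_gated",  # released only if the author opts in
    "compensation_history": "author_gated",
    "skills_claims": "public",
}

def can_disclose(field_name: str, author_approved: set[str]) -> bool:
    rule = DEFAULT_POLICY.get(field_name, "blocked")  # unknown fields fail closed
    return rule == "public" or (rule == "author_gated" and field_name in author_approved)
```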

How do you address bias and hiring compliance concerns?

Ibby reduces bias by default: candidates remain anonymized until both sides affirm interest, pushing identity-based judgment as far downstream as practical in a real hiring workflow. Structured claims make evaluation criteria explicit and comparable, and the system supports governance via auditability of what signals drove each Match Brief; sensitive attributes are excluded or tightly controlled by design.

What prevents incumbents from copying this?

A chatbot is easy to copy; the compounding advantage is structured inventory, a growing context corpus, workflow distribution via shareable artifacts, and switching costs that accrue with continued use—plus incumbents’ volume incentives resist constrained shortlists.

What are the biggest failure modes?

Distribution and responsiveness: if agents don’t attract real questions or authors don’t answer follow-ups, claims don’t improve—so we optimize for shareability, reminders, and visible freshness incentives.

What does success look like in 6–12 months?

Agents are being shared and interrogated weekly, structured claim completeness is rising, and Ibby can reliably deliver small qualified shortlists for initial role archetypes—enough to turn on matching and convert companies to paid plans.

What is a Match Brief?

A Match Brief is a standardized packet describing a candidate or role, surfacing the most relevant claims and context needed to decide whether to proceed. A candidate Match Brief provided to the company is fully anonymous, with identifying details exposed only after the Ibby Handshake completes.

What about fraud, gaming, and adversarial behavior?

Ibby doesn’t “verify truth” as a credentialing service. We treat claims the way the market treats resumes today: assertions that employers must validate through reference checks, work samples, interviews, and background processes.

What Ibby does change is detectability and incentives.

  • Structured claims are harder to fake at scale. Instead of a polished narrative, candidates (and roles) are represented as explicit, queryable claims. Vague statements don’t survive contact with real questions for long.
  • Interrogation creates an audit trail. Employers can pressure-test a candidate’s representation asynchronously. Contradictions, evasive answers, and low-confidence areas become visible earlier than they do in traditional screening.
  • Outcome feedback tightens the loop. If a candidate is found to be deceitful (or a company misrepresents a role), that outcome becomes a first-party signal that can be used to reduce distribution and trust going forward.
  • Governance is enforcement, not vibes. We apply escalating penalties for adversarial behavior—downranking/throttling, reduced visibility, and removal—mirroring the same posture we use to enforce reliable follow-through in the Ibby Handshake.
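A sketch of that escalation ladder (the stages and their ordering are illustrative, not our published policy):

```python
# Escalating penalties for adversarial behavior, capped at removal.
ESCALATION = ["downrank", "throttle", "reduce_visibility", "remove"]

def next_penalty(prior_strikes: int) -> str:
    # Repeat offenses move down the ladder; the final stage repeats.
    return ESCALATION[min(prior_strikes, len(ESCALATION) - 1)]
```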

Net: Ibby won’t certify reality, but it will make truth cheaper to validate and systematic deception harder to sustain—while keeping the employer’s verification responsibility exactly where it already is today.

What is the "Ibby Handshake"?

The Ibby Handshake is Ibby’s two-sided commitment step that turns a promising match into a real first conversation — without wasting anyone’s time or exposing anyone’s identity too early.

How it works

  • Company reviews an anonymized candidate Match Brief (standardized format, candidate-specific details) and can interrogate the underlying context model.
  • If the company wants to proceed, they affirm interest. In Ibby, that affirmation means: if the candidate also affirms, the company is committing to take a real first conversation/interview (not leave them hanging).
  • Candidate reviews the company/role Match Brief (also standardized, but not anonymized) and can interrogate the same modeled context.
  • If the candidate affirms, the handshake completes: the company is notified and Ibby de-anonymizes the candidate (and shares contact/profile details) so the intro can happen immediately.
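The steps above amount to a small state machine; a minimal sketch (state names are illustrative assumptions):

```python
from enum import Enum, auto

class HandshakeState(Enum):
    BRIEF_SHARED = auto()       # anonymized candidate Match Brief is with the company
    COMPANY_AFFIRMED = auto()   # company has committed to a real first conversation
    COMPLETED = auto()          # both sides affirmed; candidate is de-anonymized

def advance(state: HandshakeState, affirmed: bool) -> HandshakeState:
    """Advance one step when the current reviewer affirms interest."""
    if not affirmed:
        return state                           # no affirmation, no movement
    if state is HandshakeState.BRIEF_SHARED:
        return HandshakeState.COMPANY_AFFIRMED
    if state is HandshakeState.COMPANY_AFFIRMED:
        return HandshakeState.COMPLETED        # identities exchanged only here
    return state
```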

Why we do it

  • Reduces spam and wasted cycles by requiring intent on both sides before exchanging identities.
  • Protects candidates from being “reviewed” endlessly without follow-through.
  • Protects companies by ensuring candidates are opting in to this specific role, not spraying applications everywhere.
  • Makes the first conversation reliable by turning “interest” into an explicit, system-mediated commitment.

The Ibby Handshake is a sharp network-effect nucleus: if it reliably produces actual conversations, it becomes the place both sides prefer to be (classic marketplace network externalities).