Lean Canvas v2.1¶
PROBLEM¶
Hiring fails not because talent is scarce, but because signal is buried under volume.
Employers
- Overwhelming applicant volume
- Poor signal from resumes and job posts
- High screening cost and slow decisions
- False negatives hidden by noise
Candidates
- “Screaming into the void”
- Massive unpaid effort with no feedback
- Automated rejection despite strong fit
- Ghosting and lack of accountability
- Incentivized to optimize for speed rather than fit, which adds more noise for employers
Existing platforms optimize for volume metrics, not decision quality.
The real cost isn’t just bad hires — it’s the people cost of hiring itself: recruiters, coordinators, sourcing, screening, and endless evaluation loops exist primarily because signal is buried. If you surface signal by default, most of that hiring-specific labor shrinks or disappears.
CUSTOMER SEGMENTS¶
Employers (Initial ICP)
- Post-seed B2B startups with hiring urgency and budget
- Economic buyer: Founder / hiring manager (often the same person early)
- Why this segment: highest willingness to adopt a new workflow + most acute pain from screening time.
Employers (Adjacent)
- SMBs with recurring hiring needs and limited recruiting capacity
- Economic buyer: Hiring manager / owner-operator
Employers (Later)
- Mid-market companies open to structural change
- Economic buyer: Head of Talent / HR leader
- Model generalizes to orgs of all sizes that will adopt constraint + commitment.
Candidates
- Senior ICs (Initial ICP)
- Hands-on managers / technical leaders (Adjacent)
Unique Value Proposition (UVP)¶
Headline¶
Ibby makes hiring a shortlist delivery problem — not a sourcing + screening problem.
Subheadline¶
Both sides talk to Ibby. Ibby turns those conversations into explicit, structured claims about the role and the person—signals that can be weighted and matched across dimensions (not keywords). The result: a small, conversation-ready shortlist, with sourcing and screening work largely removed.
What makes it different¶
- Natural-language conversation: dialog produces comparable, high-resolution input from both sides
- Claims, not vibes: outputs are structured statements about requirements, constraints, preferences, and evidence
- Semantic precision: embeddings + structured claims enable matching across multiple signals with controllable weighting and filtering
Outcomes¶
For companies
- High-signal matching that eliminates “needle-in-the-haystack” sifting
- Fewer candidates to review, because the strong matches get identified earlier
- Better downstream conversations, because candidates arrive with role-relevant claims already surfaced
For candidates
- No racing the clock to be the first applicant; outcomes don’t depend on submission timing
- Evaluated against every relevant job, every time, using the same claim framework
- Opportunity isn’t limited to what you found or had energy to chase: the system brings matches to you
One-line positioning¶
“Claim-based fit modeling for hiring: structured signals in, precise matches out.”
Solution¶
Ibby is a claim-based hiring system that converts role and candidate information into structured signals, then uses those signals to deliver a constrained, conversation-ready shortlist—eliminating haystack sifting and timing games.
- Claims in: Discussion-based intake produces comparable, structured claims for both roles and candidates.
- Precision match: Claims are embedded and weighted to match across multiple semantic dimensions, not keywords.
- Small shortlist out: Limited high-fit intros with context to act immediately.
- Handshake enforcement: Mutual affirmation converts intent into action—first conversations are SLA-bound and verified; ghosting triggers throttles, fee pressure, and eventual removal.
- Truth posture: Ibby doesn’t certify credentials; it makes claims interrogable and auditable, and penalizes proven deception (downrank/throttle/remove).
- Hiring effort collapse: When the shortlist is consistently high-signal and intro-ready, companies don’t need volume triage (and the recruiter-heavy process it creates).
- Recruiting becomes optional: High-signal shortlists make most recruiting labor unnecessary except for edge cases.
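The "precision match" step above (claims embedded and weighted across semantic dimensions) can be sketched roughly as follows. This is an illustrative sketch only: the dimension names, weights, threshold, and tiny hand-made vectors are assumptions for demonstration, not Ibby's actual model, and real claim embeddings would come from an embedding model.

```python
# Sketch: weighted multi-dimension claim matching (illustrative only).
# Vectors here are tiny hand-made stand-ins for claim embeddings.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def fit_score(role_claims, candidate_claims, weights):
    """Weighted average of per-dimension claim similarity."""
    total = sum(weights.values())
    return sum(
        weights[dim] * cosine(role_claims[dim], candidate_claims[dim])
        for dim in weights
    ) / total

def shortlist(role_claims, candidates, weights, threshold=0.75, limit=5):
    """Score every candidate, filter by a fit threshold, return top matches."""
    scored = [
        (name, fit_score(role_claims, claims, weights))
        for name, claims in candidates.items()
    ]
    passing = [(n, s) for n, s in scored if s >= threshold]
    return sorted(passing, key=lambda p: p[1], reverse=True)[:limit]

role = {"skills": [1.0, 0.2], "constraints": [0.9, 0.1]}
pool = {
    "cand_a": {"skills": [0.9, 0.3], "constraints": [0.8, 0.2]},
    "cand_b": {"skills": [0.1, 1.0], "constraints": [0.2, 0.9]},
}
weights = {"skills": 0.7, "constraints": 0.3}
# cand_a clears the fit threshold; cand_b does not
print(shortlist(role, pool, weights))
```

The point of the sketch is the shape of the mechanism: per-dimension similarity with controllable weighting and a hard shortlist cap, rather than a single keyword-overlap score.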
KEY INSIGHT¶
Hiring breaks when:
- High-signal fit is forced into low-bandwidth artifacts (resumes, postings, keyword filters)
- Human attention is treated as infinite, so volume becomes the default strategy
Ibby restores signal by:
- Converting narrative into structured claims and Qualified Match Briefs
- Constraining throughput so attention is spent only on high-fit introductions
Constraint is the mechanism—not the limitation. If attention is protected by constraint, the system doesn’t need humans to do volume triage.
CHANNELS (long form)¶
Phase 1 (agents → inventory + distribution)
- Shareable agent URLs: embedded in job posts and applications as an “ask anything” layer; optional link tags (“Copy link for job post/application”) to attribute and optimize distribution
- Founder-led LinkedIn + targeted communities (Slack/Reddit) to seed early usage
- Direct outreach to hiring managers/founders to publish the first company agents
Phase 2 (matching → expansion)
- Candidate referrals (candidates share their profile agent as a “living application”)
- Partnerships/placements where role links live (job boards, newsletters, communities)
- Content/SEO driven by the questions agents answer (long-tail, high-intent pages)
Long-term growth depends on balanced adoption, but Phase 1 creates standalone value and durable inventory before matching. Phase 1 is PLG via shareable agents; Phase 2 becomes founder-led sales into hiring managers.
CHANNELS (short form)¶
- PLG via shareable role/profile agent URLs (embedded in job posts + applications)
- Founder-led LinkedIn + targeted communities (Slack/Reddit)
- Direct outreach to hiring managers/founders to seed company agents
- Content/SEO from agent Q&A (long-tail acquisition)
- Referrals, partnerships, traditional demand gen once matching (Phase 2) is live
Balanced adoption matters, but Phase 1 generates inventory and distribution before matching.
REVENUE STREAMS (Phase 2)¶
Base subscription¶
- $750 / role / month
- Includes 20 Qualified Match Briefs / month
- Risers: +$200 / month per +10 additional briefs
- Volume discounts:
  - 5–9 roles: $650/role/mo
  - 10+ roles: $550/role/mo
- All roles are subject to the same SLA
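The pricing above reduces to straightforward arithmetic. A minimal sketch using the stated numbers; how partial riser blocks are billed isn't specified in the canvas, so rounding up to the next +10 block is an assumption:

```python
# Sketch of the Phase 2 pricing math as stated: $750/role/mo base with
# 20 briefs included, +$200/mo per +10 extra briefs, and volume
# discounts at 5-9 and 10+ roles.
from math import ceil

def base_rate(num_roles):
    """Per-role monthly base, with volume discounts applied."""
    if num_roles >= 10:
        return 550
    if num_roles >= 5:
        return 650
    return 750

def monthly_cost(num_roles, briefs_per_role=20):
    """Total monthly cost; extra briefs billed in +10 riser blocks."""
    extra = max(0, briefs_per_role - 20)
    risers = ceil(extra / 10) * 200  # assumption: partial blocks round up
    return num_roles * (base_rate(num_roles) + risers)

print(monthly_cost(1))        # 1 role, included allowance: 750
print(monthly_cost(3, 40))    # 3 roles, 40 briefs each: 3 * (750 + 400) = 3450
print(monthly_cost(10))       # 10 roles at the discounted rate: 5500
```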
Feedback Credits (quality-incentive, not a loophole)¶
To encourage high-quality rejection signal (and continuously improve matching), Ibby offers limited “feedback credits.”
- Up to 5 credits / role / month
- Eligibility: applies to pre-handshake rejections where the employer interrogated the candidate agent
- How it works: the employer submits free-text rejection feedback. Ibby evaluates whether that feedback produces a meaningful update to the role and/or candidate model used for future matching.
- If it changes the model (e.g., clarifies a requirement, adjusts weighting, adds a constraint, clarifies a claim), the employer earns 1 additional QMB credit (usable that month).
- If it doesn’t change the model (duplicative, too vague, already accounted for), no credit is issued—though the feedback is still recorded.
System responses (example):
- ✅ Credit earned: “Updated role model: increased emphasis on X; clarified constraint Y.”
- ❌ No credit: “Recorded feedback; no model update required (already captured for this role).”
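The eligibility and cap rules above can be sketched as a simple gate. The model-update evaluation itself is a judgment Ibby makes (not a rule), so it appears here only as a boolean input; the field names are hypothetical:

```python
# Sketch of the feedback-credit gate as described: credits apply only to
# pre-handshake rejections where the candidate agent was interrogated,
# are capped at 5 per role per month, and are issued only if the
# feedback actually changed the role/candidate model.

CREDIT_CAP = 5  # per role, per month

def award_credit(rejection, credits_used_this_month, feedback_changed_model):
    """Return (credit_awarded, reason). Feedback is recorded either way."""
    if not rejection["pre_handshake"]:
        return False, "post-handshake rejection: not eligible"
    if not rejection["agent_interrogated"]:
        return False, "candidate agent was not interrogated"
    if credits_used_this_month >= CREDIT_CAP:
        return False, "monthly credit cap reached"
    if not feedback_changed_model:
        return False, "recorded feedback; no model update required"
    return True, "updated role model; 1 QMB credit earned"

rejection = {"pre_handshake": True, "agent_interrogated": True}
print(award_credit(rejection, credits_used_this_month=2,
                   feedback_changed_model=True))
```

Note that the cap check comes before the model-update check, so a company at the cap never gains anything from extra feedback submissions, which is what keeps the credit "not worth gaming."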
Why this model?¶
TL;DR: Ibby is priced to reward focus, clarity, and follow-through—the opposite of the applicant-volume arms race.
In a world where hiring systems compete on volume (“more applicants,” “more outreach,” “more leads”), Ibby is deliberately priced around throughput of high-signal decisions, not applicant flood.
1) Brief throughput maps directly to hiring velocity
- A Qualified Match Brief is the atomic unit of progress: it’s the moment a company can confidently say yes/no and move forward.
- In practice, hiring speed is constrained by how many candidates a team can evaluate with real attention—not how many resumes they can collect.
- By pricing on briefs per month, Ibby aligns cost to the real bottleneck: how fast you can make high-quality decisions and start conversations.
We charge for brief throughput because it directly maps to hiring velocity.
2) Hard limits prevent “volume drift” and preserve signal
- Unlimited “candidates” pushes every system toward spam, shallow screening, and noisy funnels.
- A fixed brief allowance keeps the product honest: Ibby is not selling access to a haystack; it’s selling a constrained set of conversation-ready candidates.
- This protects both sides of the market:
  - Companies don’t drown in review work.
  - Candidates aren’t subjected to mass rejection or silent churn.
3) Risers monetize urgency without incentivizing spam
- When hiring urgency increases, companies should be able to pay for speed, not volume.
- The risers expand decision throughput (more briefs per month) rather than encouraging broader, lower-quality intake.
4) Feedback credits buy signal quality, not extra work
- Rejection feedback is one of the most valuable inputs for improving matching and calibrating roles.
- Ibby converts that into a direct incentive: if feedback creates a meaningful update to the role/candidate model, the company earns a limited credit.
- This is intentionally capped to keep it a “nice to have,” not something worth gaming.
Feedback credits trade budget for signal quality.
5) Predictable budgets, clean procurement
- Employers can budget Ibby like a tool: a clear monthly cost per role and an explicit throughput allowance.
- If they need more velocity, they can add risers. If they don’t, the plan doesn’t quietly balloon in cost.
COST STRUCTURE (Phase 1 Green)¶
Fixed (monthly run-rate)
- Founders: ~$33k/mo (2 × $200k base)
- Core infra: ~$1k/mo
- Vendors (ops + support): ~$500/mo
- Trust & safety tooling: ~$200/mo
- Paid acquisition: ~$2.5k/mo (Reddit $2k for candidates; LinkedIn $500 for ~5 employer leads/mo)
Variable (usage-driven)
- LLM inference: ~$0.02 per prompt (~$494/mo at Phase 1 Green traffic)
- Embeddings: negligible (~$0.15/mo; ~$0.47 one-time to seed)
One-time (conservative)
- Design: $50k
- Legal: $30k
- Unknown unknowns: $20k
Burn
- Early Phase 1 run-rate: ~$38k/mo (all-in, incl. ads + compute)
- Late Phase 1 (adds a 6-month ops contractor): ~$44k/mo
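The ~$38k/mo early run-rate can be sanity-checked from the line items above. The implied prompt volume (~24,700/mo) is my inference from $494 ÷ $0.02; it is not stated in the canvas:

```python
# Sanity check of the early Phase 1 run-rate from the stated line items.
fixed = {
    "founders": 33_000,        # 2 x $200k base, per month
    "core_infra": 1_000,
    "vendors": 500,
    "trust_safety": 200,
    "paid_acquisition": 2_500, # Reddit $2k + LinkedIn $500
}
cost_per_prompt = 0.02
prompts_per_month = 494 / cost_per_prompt  # ~24,700 prompts at Green traffic

llm_monthly = prompts_per_month * cost_per_prompt  # ~$494
burn = sum(fixed.values()) + llm_monthly
print(round(burn))  # 37694, consistent with the ~$38k/mo figure
```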
KEY METRICS¶
Phase 1 - Single-player (Agents → prove distribution + interrogation loop)¶
- External reach: unique non-owner visitors per agent (7-day)
- External engagement: % of non-owner visitors who submit ≥ 1 Q&A prompt (7-day)
- Engagement depth: non-owner Q&A prompts per agent per week
- Guardrail (loop health): % of follow-up questions resolved by agent within 72 hours (response or claim update)
- Inventory (critical mass): # of match-ready profiles (rolling 30 days), by archetype
Success criteria (Green):
- External reach ≥ 3 unique non-owner visitors per agent (7-day)
- External engagement ≥ 25% submit ≥ 1 question (7-day)
- Loop health ≥ 80% of follow-ups resolved within 72 hours
- Match-ready inventory (Archetype 1): 2,000–3,000 match-ready profiles (minimum viable pilot: ~1,000)
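The Green gate above can be expressed as a single all-thresholds-met check. A minimal sketch; the metric field names are hypothetical, and the thresholds are taken directly from the criteria:

```python
# Sketch: the Phase 1 "Green" gate as one check over weekly metrics.
GREEN = {
    "unique_visitors_7d": 3,        # external reach, per agent
    "engagement_rate_7d": 0.25,     # share submitting >= 1 question
    "loop_health_72h": 0.80,        # follow-ups resolved within 72h
    "match_ready_profiles": 2_000,  # Archetype 1 inventory (low end)
}

def is_green(metrics):
    """True when every Phase 1 metric meets or beats its Green threshold."""
    return all(metrics[k] >= v for k, v in GREEN.items())

week = {
    "unique_visitors_7d": 4,
    "engagement_rate_7d": 0.31,
    "loop_health_72h": 0.86,
    "match_ready_profiles": 2_400,
}
print(is_green(week))  # True
```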
Phase 1 → Phase 2 transition: Advance when both (a) non-owner interrogation and follow-up resolution are stable week-over-week and (b) match-ready inventory reaches critical mass in at least one archetype.
Phase 2 (Matching → prove outcomes + liquidity)¶
- Shortlist reliability: # of Qualified Match Briefs delivered within 24 hours of role intake
- Handshake conversion: % of Match Briefs that reach mutual “affirm interest” within 7 days
- Intro completion: % of mutually affirmed matches that complete a first conversation within 10 days
- Guardrail (trust): employer follow-through rate (no-ghost) after mutual affirm
- Liquidity (critical mass): median match-ready candidates available per role intake (coverage ratio), plus p25
Success criteria (Green):
- Shortlist reliability ≥ 3–5 Qualified Match Briefs within 7 days
- Handshake conversion ≥ 20% within 7 days
- Intro completion ≥ 75% within 10 days
- Employer follow-through ≥ 85% after mutual affirm
- Coverage ratio (Archetype 1): median 60–100 match-ready candidates per role (minimum viable: ~30; comfortable: 150+); track p25 to avoid “average-only” success
Phase 2 → scale transition: When shortlist reliability and intro completion are stable week-over-week across multiple role archetypes (and p25 coverage remains healthy), Ibby can scale acquisition because outcomes are no longer dependent on manual curation or one-off supply.
Secondary / Diagnostic metrics¶
- Qualified match rate (above fit threshold): The share of generated matches that exceed a defined fit score based on claim alignment across key dimensions.
- Shortlist compression (median candidates shown per role): How small we can make the candidate set per role while still maintaining high match quality and forward progress.
- Timing neutrality index (outcome not tied to “who applied first”): A measure of whether match visibility and outcomes are independent of when a candidate joined or a role was created.
- Reuse events (canonical profile reuse): The frequency with which a candidate or role profile can be reused across new matches without re-entering or re-explaining the same information.
UNFAIR ADVANTAGE¶
(Why LinkedIn can’t just copy Ibby)
Summary¶
Ibby’s defensibility isn’t a single feature—it’s a system of incentives, enforcement, and compounding data. LinkedIn could replicate pieces, but copying the whole would require changes that conflict with their current economics, brand posture, and org structure.
- Business-model conflict (self-cannibalization): Ibby optimizes for constrained flow + high-certainty matches + enforced follow-through. LinkedIn’s hiring surfaces are fundamentally designed around volume, broad reach, and engagement. Mimicking Ibby’s approach would undercut the mechanics that make their current model work.
- System, not feature: Our advantage comes from an interlocking bundle—canonical profiles + claim/evidence structure + interrogation + mutual commitment + enforcement. LinkedIn can ship “AI matching,” but “AI matching” without enforced reciprocity and structured truth-building collapses back into the incumbent experience.
- Proprietary compounding data moat: Ibby generates hard-to-recreate assets: structured claims, clarifications, interrogation transcripts, and outcomes that improve matching and briefing quality over time. LinkedIn has data, but not this process-shaped, fit-explanatory, outcome-linked corpus.
- Network effects around reliability: The Ibby Handshake creates a different kind of marketplace gravity: if “expressed interest reliably becomes a real first conversation,” both sides preferentially route through Ibby. Reliability becomes the network effect, not reach.
- Organizational friction (politics + ops): Replicating Ibby would be a cross-org revamp: enforcing follow-through, penalizing ghosting, redefining success metrics away from volume, building new workflows. Even if they agree it’s good, it’s slow, contentious, and easy to deprioritize.
- Brand-image conflict: Ibby’s identity is neutral ground + anti-volume + enforceable reciprocity. For LinkedIn to adopt that posture credibly would create a public contradiction with how users experience the platform today—and invite scrutiny of their current dynamics.
- Cost advantage from focus: Ibby is purpose-built to do one thing extremely well: structured fit + enforceable introductions. LinkedIn carries massive product scope and incentive complexity; matching Ibby’s depth and enforcement would mean sustained investment in a direction that doesn’t naturally align with their broader optimization goals.