# Strategy Canvas

## Defining canvas attributes

Not every axis on the canvas is an ERRC lever; some are comparison baselines.

- ERRC defines what you change.
- The strategy canvas must also reflect what buyers already care about, even if you didn’t innovate it.
- Table-stakes buyer criteria: things buyers expect to compare no matter what.
- Context-setting anchors: axes that help interpret your differentiation.
- Economic reality checks: axes that prevent mis-bucketing (“cheap job board” vs “expensive recruiter”).

The inclusion test for any axis: can a buyer meaningfully compare platforms on this dimension?
| Axis | Ibby | Job Boards | Recruiters / Search Firms |
|---|---|---|---|
| Price | 5.5 | 2 | 9 |
| Operational Load | 4.2 | 8.6 | 3.2 |
| Volume Pressure | 5.1 | 9 | 3 |
| First-Come Timing Bias | 1.3 | 9 | 4.5 |
| Rework | 1 | 8 | 3.5 |
| Pre-Call Signal | 7.5 | 1 | 5 |
| Claim-based Fit Modeling | 9 | | |
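The canvas scores above are just data, so they can be sanity-checked programmatically, for example to find the axis where Ibby’s curve diverges most from each alternative. A minimal sketch (scores copied from the table; the `divergence` helper is an illustrative name, not part of any product):

```python
# Strategy canvas scores copied from the table above (0-10 scale).
# Claim-based Fit Modeling is omitted: the alternatives have no score there.
CANVAS = {
    "Price":                  {"Ibby": 5.5, "Job Boards": 2.0, "Recruiters": 9.0},
    "Operational Load":       {"Ibby": 4.2, "Job Boards": 8.6, "Recruiters": 3.2},
    "Volume Pressure":        {"Ibby": 5.1, "Job Boards": 9.0, "Recruiters": 3.0},
    "First-Come Timing Bias": {"Ibby": 1.3, "Job Boards": 9.0, "Recruiters": 4.5},
    "Rework":                 {"Ibby": 1.0, "Job Boards": 8.0, "Recruiters": 3.5},
    "Pre-Call Signal":        {"Ibby": 7.5, "Job Boards": 1.0, "Recruiters": 5.0},
}

def divergence(canvas, us, them):
    """Per-axis gap between our curve and a competitor's, largest gap first."""
    gaps = {axis: scores[us] - scores[them] for axis, scores in canvas.items()}
    return sorted(gaps.items(), key=lambda kv: abs(kv[1]), reverse=True)

# Where does the Ibby curve separate hardest from each alternative?
print(divergence(CANVAS, "Ibby", "Job Boards")[0][0])   # biggest gap vs boards
print(divergence(CANVAS, "Ibby", "Recruiters")[0][0])   # biggest gap vs recruiters
```

A distinct value curve should show large gaps on the created/raised axes and deliberate gaps on the reduced ones; this kind of check makes that visible at a glance.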
## 🏭 Price
(Baseline anchor — not ERRC) The total financial and operational cost required to identify, qualify, and engage a truly good-fit candidate.
### What it actually measures
- Direct spend required to access candidates (fees, subscriptions, success fees)
- Hidden downstream labor cost (screening time, coordination, rework)
- Cost of low-signal inflow (noise tax paid in reviewer hours)
- Cost predictability vs surprise costs (e.g., “cheap applicants” but expensive triage)
### Competitive variation
- Job boards (Low) → cheap access, expensive downstream effort, high waste
- Recruiters (Very high) → contingent or retained fees tied to salary
- Ibby (Moderate) → higher than boards, lower than recruiters; pays for signal and process discipline, which reduces downstream waste
## 🏭 Operational Load
(Baseline anchor — not ERRC) The ongoing time and effort burden placed on the company to run hiring through the channel.
### What it actually measures
- Amount of human time spent triaging, coordinating, and shepherding candidates
- Process overhead (status chasing, scheduling, stage management)
- Cognitive load and decision fatigue created by the channel
- Work shifted onto hiring managers vs absorbed by the intermediary/system
### Competitive variation
- Job boards (Very high) → employers do the screening and herding
- Recruiters (Low) → recruiter absorbs sourcing + initial coordination
- Ibby (Low–moderate) → constrained flow reduces triage, but includes a pre-ATS step by design
## 🏭 Volume Pressure
(Baseline anchor — not ERRC) The degree to which the channel overwhelms participants with too many “possible” candidates, forcing shallow filtering.
### What it actually measures
- The size of the candidate inflow relative to review capacity
- The extent to which evaluation becomes “fast elimination” instead of careful selection
- How often good candidates are missed due to attention scarcity
- Whether the system optimizes for throughput over fit
### Competitive variation
- Job boards (Very high) → volume is the product; overload is the norm
- Recruiters (Low) → curated slates throttle volume
- Ibby (Moderate) → intentionally bounded flow that still preserves optionality; an adjustable “goldilocks” level of inflow
## 🧩 First-Come Timing Bias
(Reduce: Queue-Based Candidate Prioritization) How much being early in the queue determines whether someone is meaningfully considered, regardless of actual fit.
### What it actually measures
- Whether candidates must optimize for speed over fit to be seen
- The extent to which submission order determines visibility and consideration
- Whether hiring evaluation is constrained by queue exhaustion rather than merit
- How sensitive outcomes are to timing rather than match quality
### Competitive variation
- Job boards (Very high) → apply early or be buried
- Recruiters (Moderate) → human-managed, but still time-sensitive
- Ibby (Low) → fit can be evaluated whenever matching occurs, not when the req opens
## 🧩 Rework (Data Re-entry)
(Raise: Canonical Profile Construction) The amount of repeated rework required to apply across opportunities (and for employers, to create/maintain postings and intake).
### What it actually measures
- How often candidates must re-enter the same information across systems
- How fragmented the hiring “profile” becomes across platforms and applications
- How much employer effort is repeated across req creation and maintenance
- The degree to which the system supports reusable, normalized representations of fit
### Competitive variation
- Job boards (High) → duplicative forms and fragmented application flows
- Recruiters (Moderate) → reuse candidate packets, but repackage per client/role
- Ibby (Low) → normalized dossier is reused; minimal re-entry by design
## 🧩 Pre-Call Signal
(Create: Conversational Context Exploration) How much meaningful, interrogable context exists to evaluate fit before the first live conversation.
### What it actually measures
- Availability of structured, comparable information prior to the first call
- Whether context is specific and testable vs generic narrative
- How well a reviewer can form a confident “why/why not” without meeting
- The degree to which pre-call review reduces wasted interviews
### Competitive variation
- Job boards (Very low) → resumes + sparse profiles; weak structure and comparability
- Recruiters (Moderate) → recruiter narrative adds context, but quality varies and is not standardized
- Ibby (High) → normalized, structured context designed for pre-call inspection
## 🌟 Claim-based Fit Modeling
(Create: Claim-Based Fit Modeling) An Ibby-exclusive capability that turns fit into structured claims tied to evidence, producing a defensible “why this match” dossier.
### What it actually measures
- Extraction of capability claims from available source material into a structured model
- Mapping of claims to role requirements with explicit rationale
- Evidence binding: each key claim is linked to its supporting source(s)
- Interrogability: reviewers can inspect, challenge, and validate claims without a live call
- Proof strength: distinguishing strong vs weak evidence and surfacing gaps
### Competitive variation
- Job boards (Absent / very low) → keyword matching + self-asserted profiles; little proofing or auditability
- Recruiters (Secondary / moderate) → credibility comes from human vetting and summaries, not systematic claim/evidence binding at scale
- Ibby (Very high) → fit is computed from structured claims and “proofed” through evidence-bound, interrogable dossiers
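The claim/evidence structure described above can be pictured as a small data model. This is a hypothetical sketch only (class and field names are invented for illustration, not Ibby’s actual schema): each capability claim maps to a role requirement with an explicit rationale, binds to its supporting sources, and carries a proof-strength label so weak spots surface for reviewer challenge.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source: str    # where the claim is grounded (resume line, work sample, etc.)
    strength: str  # "strong" or "weak" -- surfaced so gaps stay visible

@dataclass
class Claim:
    capability: str   # extracted capability claim
    requirement: str  # role requirement it maps to
    rationale: str    # explicit "why this matches"
    evidence: list[Evidence] = field(default_factory=list)

@dataclass
class Dossier:
    candidate: str
    claims: list[Claim]

    def gaps(self):
        """Claims lacking any strong evidence -- flagged for reviewer challenge."""
        return [c for c in self.claims
                if not any(e.strength == "strong" for e in c.evidence)]

# Hypothetical example: one well-proofed claim, one that needs challenge.
d = Dossier("Jane Doe", [
    Claim("Led data-pipeline migration", "ETL ownership", "Direct prior scope",
          [Evidence("resume: 2021-2023 role", "strong")]),
    Claim("Team leadership", "Manages 3 engineers", "Self-asserted only",
          [Evidence("cover letter", "weak")]),
])
print([c.capability for c in d.gaps()])  # → ['Team leadership']
```

The point of the structure is interrogability: a reviewer can challenge any claim by walking back to its bound evidence, without needing a live call.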