You have two, three, maybe five offers on the table. The compensation spreadsheet says one thing. Your gut says another. Neither is using the most important signal you have: a calibrated dimensional profile of what you're actually good at, and what each role actually demands. This outcome maps every offer's real demands against your measured capabilities — not by title, not by salary, but by fit, growth trajectory, and the trade-offs each choice forces.
Each offer's role demands are mapped against your calibrated Profile. You get a fit score per offer with 80% confidence intervals, a growth-trajectory score showing which offer stretches you most, per-dimension breakdowns, and an honest trade-off analysis. Below is what an actual result looks like for three real offers — ranked by fit, with the growth signal and trade-offs named.
The 60-90 minutes is split across four steps. You can pause and resume between any of them — your work is signed and saved. Each step's time investment is named up front so you can plan around your decision timeline.
If you already have a calibrated QLM Profile, this step is instant. If not, you take a 15-minute adaptive diagnostic that measures your dimensional shape across eight capabilities. The Profile is yours, portable, and reusable across every QLM outcome. One diagnostic powers every comparison you'll ever run.
For each offer (2-5), you enter the role description, compensation structure, organizational context, and growth trajectory signals. The engine parses role descriptions to extract dimensional demand profiles — you don't need to know what dimensions matter. Paste the job listing and add what you know about the team and scope.
Each role's extracted demand profile is compared against your measured capabilities. The engine calculates fit scores with confidence intervals and growth-trajectory projections, and identifies which dimensions drive the largest gaps between your profile and each role's demands.
You receive a ranked comparison with fit scores, growth trajectories, per-dimension breakdowns, and a trade-off analysis that names what each choice costs you. The report is signed and timestamped — yours permanently. Use it to negotiate, to decide, or to revisit when the next set of offers arrives.
Offer comparison only works if your profile is measured well and the role's demands are extracted accurately. QLM publishes per-claim validation status across the product line — every assertion below is tagged validated, calibrated, or in progress. The current release ships with VQC profile estimation validated and role-demand extraction calibrated against practitioner consensus.
Your dimensional profile is estimated using variational quantum circuit methods with classical shadow tomography. Test-retest reliability > 0.91 across eight dimensions. The profile converges after approximately 45 adaptive items; the 15-minute diagnostic is the minimum viable measurement for comparison-grade confidence.
Role descriptions are parsed to extract dimensional demand profiles using a calibrated extraction model. Calibrated against practitioner consensus panels (~10 senior + ~10 mid-level per role category). Demand profiles are supplemented by any contextual information you provide about scope, team, and growth expectations.
Fit is computed as a weighted cosine similarity between your profile vector and each role's demand vector, with confidence intervals derived from posterior profile uncertainty. The scoring function, the CI methodology, and the fairness audit are infrastructure-validated — the same engine that powers every QLM comparison.
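To make the scoring concrete, here is a minimal sketch of weighted cosine similarity with a credible interval drawn from posterior profile samples. The function name, the dimension count, and the sample values are illustrative assumptions, not QLM's actual API or data.

```python
import numpy as np

def fit_score(profile_samples, demand, weights):
    # Hypothetical sketch: weighted cosine similarity between each posterior
    # draw of the 8-dimensional profile and a role's demand vector, with an
    # 80% credible interval taken from the resulting score distribution.
    w = np.sqrt(weights)                      # weights applied inside the dot product
    d = demand * w
    d = d / np.linalg.norm(d)
    scored = profile_samples * w
    norms = np.linalg.norm(scored, axis=1)
    scores = (scored @ d) / norms             # one cosine score per posterior draw
    lo, hi = np.percentile(scores, [10, 90])  # 80% credible interval
    return float(scores.mean()), (float(lo), float(hi))

# Illustrative data: 2000 posterior draws around a made-up mean profile.
rng = np.random.default_rng(0)
mean_profile = np.array([0.7, 0.4, 0.8, 0.5, 0.6, 0.3, 0.9, 0.5])
samples = rng.normal(mean_profile, 0.05, size=(2000, 8)).clip(0.0, 1.0)
demand = np.array([0.8, 0.3, 0.7, 0.6, 0.5, 0.4, 0.8, 0.6])
score, (lo, hi) = fit_score(samples, demand, np.ones(8))
```

A narrow (lo, hi) interval here corresponds to a confidently ranked offer; the width comes entirely from the posterior uncertainty in the profile, not from the demand vector.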
Growth trajectory estimates which offer stretches your weakest dimensions most. v1 uses the gap between your profile and role demand as a proxy for growth pressure. Longitudinal validation — whether growth-pressure-predicted development actually occurs — requires 12-18 months of cohort outcome data. Customers get early access to validated growth coefficients as data accrues.
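The v1 proxy can be sketched in a few lines: only dimensions where the role demands more than you currently measure contribute growth pressure. The function name and numbers below are illustrative assumptions.

```python
import numpy as np

def growth_pressure(profile, demand):
    # v1 proxy sketch: positive gaps between role demand and measured
    # capability. Dimensions where you already exceed demand contribute zero.
    gaps = np.clip(demand - profile, 0.0, None)
    return float(gaps.sum()), gaps

profile = np.array([0.7, 0.4, 0.8])
demand = np.array([0.6, 0.7, 0.9])
total, gaps = growth_pressure(profile, demand)
# gaps ≈ [0.0, 0.3, 0.1]; total ≈ 0.4 — the stretch is in the second dimension
```

A higher total means the offer pushes harder on dimensions where you are weakest — the quantity that, per the roadmap above, still awaits longitudinal validation as a predictor of actual development.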
Every fit score carries an 80% credible interval derived from the posterior uncertainty in your profile estimation. Narrow CIs mean the engine is confident in the ranking; wide CIs mean the ranking could shift with more data. If two offers' CIs overlap, the comparison tells you honestly that the engine cannot distinguish them.
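The overlap rule is simple enough to state as code: two offers are distinguishable only when their credible intervals do not overlap. This is a sketch of the decision rule, not QLM's implementation; the function name and interval values are made up for illustration.

```python
def distinguishable(ci_a, ci_b):
    # Two offers can be honestly ranked only if their 80% credible
    # intervals are disjoint; any overlap means "cannot distinguish".
    lo_a, hi_a = ci_a
    lo_b, hi_b = ci_b
    return hi_a < lo_b or hi_b < lo_a

clear = distinguishable((0.62, 0.71), (0.74, 0.82))    # disjoint intervals
unclear = distinguishable((0.62, 0.75), (0.70, 0.82))  # overlapping intervals
```

In the first case the ranking is reliable; in the second, the comparison reports that more data (a longer diagnostic, or richer role context) would be needed to separate the two offers.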
The empirical claim — that choosing the higher-fit offer correlates with job satisfaction and performance at 6, 12, and 18 months — requires longitudinal cohort data. The current release includes the comparison; v2 ships measured correlation coefficients. Participating in the outcome-validation cohort gives you early access to your own longitudinal fit data.
A 60-90 minute offer comparison answers a specific question well: which offer's demands best match your measured capabilities, and which stretches you most. It does not answer the adjacent questions that look similar but require different evidence.
Compare up to two offers per month at no cost. Most people comparing offers need the free tier exactly once — when it matters most. Pro unlocks unlimited comparisons and historical tracking for professionals who change roles frequently or want to track how their profile-to-market fit evolves.
Up to 2 offer comparisons per month, each with up to 5 offers compared. Full dimensional breakdown, confidence intervals, growth trajectory, and trade-off analysis. Audit trail and signed report included. No credit card required.
Unlimited offer comparisons with full historical tracking. See how your profile-to-market fit changes over time. Re-run past comparisons against your updated profile after new diagnostics. Includes saved role-demand templates, comparison history, and export to PDF.
Sixty to ninety minutes returns a calibrated comparison with fit scores, growth trajectories, confidence intervals, and the trade-offs each choice forces. Free for your first two comparisons each month.