Methodology
Structured, reviewable intelligence—not a black-box score.
Enterprise buyers, standards bodies, and investors should ask the same question: can we defend how conclusions were reached? Follicle Intelligence is built so evidence, weighting, confidence, benchmark context, and adjudication are explicit—methodology is the difference between marketing claims and reviewable quality.
Method framework
Repeatability, traceability, institutional credibility.
The pipeline below is stable across deployments, running from capture through closed-loop quality. The sections that follow expand each stage: how weighting, confidence, benchmarks, and review layers behave in practice.
Suggested diagram
End-to-end pipeline: capture → structure → score → confidence → insight → re-audit, with side branches for review and benchmark context.
Evidence
Evidence capture and weighting.
Scores aggregate evidence; they must not pretend uniform strength where the record is uneven. Weighting logic is designed to make that unevenness legible to reviewers, not to optimize for a prettier headline number.
What “weighting” means here
Not a hidden preference for outcomes. Weighting refers to how strongly each piece of evidence contributes to a domain score given completeness, imaging quality, documentation depth, and follow-up availability. Sparse evidence does not silently equal strong evidence—gaps are visible in the output posture, not erased in the average.
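As a minimal sketch of that logic, assuming quality factors normalized to [0, 1]: each item's weight derives from the four factors named above, and the total observed weight is returned alongside the score so thin support remains visible. The field names here are illustrative, not FI's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    """One piece of evidence for a domain. All quality factors assumed in [0, 1]."""
    value: float            # the item's contribution on the domain scale
    completeness: float     # how complete the underlying artefact is
    imaging_quality: float  # quality of supporting imaging
    documentation: float    # depth of supporting documentation
    follow_up: float        # availability of follow-up data

def domain_score(items: list[Evidence]) -> tuple[float, float]:
    """Weighted mean of evidence values plus the total weight observed,
    so a reviewer sees how much support actually sits behind the score."""
    weights = [e.completeness * e.imaging_quality * e.documentation * e.follow_up
               for e in items]
    total = sum(weights)
    if total == 0:
        raise ValueError("no usable evidence: report the gap, do not average it away")
    score = sum(w * e.value for w, e in zip(weights, items)) / total
    return score, total
```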
Explicit handling of missing or weak inputs
When required artefacts are absent or low quality, the system does not substitute guesswork for data. Domain-level conclusions reflect reduced support; confidence and completeness cues communicate that reduction to reviewers and downstream reporting.
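One way the no-substitution rule might be expressed in code; the required artefact names and confidence labels below are hypothetical placeholders, not FI's taxonomy.

```python
REQUIRED_ARTEFACTS = {"intake_photos", "graft_count_log", "follow_up_12m"}  # hypothetical

def support_posture(present: set[str]) -> dict:
    """Absent artefacts are reported, never imputed; missing required inputs
    cap the confidence label instead of nudging the score."""
    missing = REQUIRED_ARTEFACTS - present
    completeness = 1 - len(missing) / len(REQUIRED_ARTEFACTS)
    cap = "high" if not missing else "moderate" if completeness >= 0.67 else "low"
    return {"missing": sorted(missing),
            "completeness": round(completeness, 2),
            "confidence_cap": cap}
```

Calling `support_posture({"intake_photos"})` yields completeness 0.33 and a "low" cap: the reduction is communicated to reviewers, not hidden in an average.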
Signal quality
Confidence and integrity markers.
Outputs carry more than a scalar. Confidence and integrity dimensions exist so leaders and committees can see whether a conclusion is well-supported or thinly documented—before disclosure or training decisions harden.
Evidence density
Signals how much structured input supported a conclusion relative to what the methodology expects for that domain—useful for prioritizing review, not for claiming precision that the record does not support.
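Evidence density as described reduces to a bounded ratio, sketched below with assumed per-domain expectations; the domain names and counts are illustrative.

```python
EXPECTED_INPUTS = {"graft_survival": 6, "donor_management": 4}  # assumed expectations

def evidence_density(domain: str, structured_inputs: int) -> float:
    """Observed structured inputs relative to methodology expectations,
    capped at 1.0 so density never overstates support."""
    return min(structured_inputs / EXPECTED_INPUTS[domain], 1.0)

# Review queues can then be ordered lowest-density first, directing
# attention to conclusions with the thinnest support.
```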
Integrity and consistency checks
Basic consistency rules (e.g., incompatible inputs, out-of-range values) flag items for human review where automation should not infer intent. These are guardrails, not a substitute for clinical judgment.
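A sketch of what such guardrail rules could look like; the field names and rules are assumptions for illustration, not FI's actual rule set.

```python
def integrity_flags(record: dict) -> list[str]:
    """Each hit routes the case to human review; automation flags,
    it does not infer intent."""
    flags = []
    if record.get("grafts_placed", 0) > record.get("grafts_extracted", 0):
        flags.append("placed exceeds extracted")      # incompatible inputs
    rate = record.get("survival_rate")
    if rate is not None and not 0 <= rate <= 1:
        flags.append("survival rate out of range")    # out-of-range value
    follow_up = record.get("follow_up_date")
    procedure = record.get("procedure_date")
    if follow_up and procedure and follow_up < procedure:
        flags.append("follow-up precedes procedure")  # chronology violation
    return flags
```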
Benchmarks
Benchmark context.
A number without context is not a benchmark—it is a label. Methodology requires explicit cohort membership, baselines, and versioning so standing can be interpreted and audited.
Cohort definition
Standing is always relative to a defined cohort or baseline: peer sets, historical internal bands, or policy targets. Rule sets are versioned when methodology changes, so a shift in "top quartile" standing is traceable to a policy or membership change rather than silent drift.
Denominators and eligibility
Benchmarks require clear inclusion logic. FI’s methodology treats denominator and eligibility as explicit inputs to interpretation: a score without a cohort label is an incomplete administrative object, not a public ranking.
Suggested diagram
Cohort ladder or table: score + cohort ID + rule version + effective date—ideal for trust and security reviews.
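A minimal sketch of the record such a ladder or table would carry, with field names assumed for illustration: the score travels with its cohort, rule version, and effective date, or it stays internal.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class BenchmarkStanding:
    """A score without this context is an incomplete administrative object."""
    score: float
    cohort_id: str        # compared to whom
    rule_version: str     # under what eligibility and denominator rules
    effective_date: date  # as of when
    percentile: float     # standing within the named cohort, under those rules
```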
Governance
Review layers and adjudication.
Scoring produces candidates for judgment; governance produces decisions. The methodology separates those concerns so institutions can map roles, escalation, and disclosure to policy.
Separation of roles
Scoring produces structured outputs; governance assigns review ownership (clinical lead, quality office, committee) per tenant policy. The methodology supports separation between who generates signal and who adjudicates exceptions—aligned with serious quality programs.
States, not vibes
Cases and reports move through defined states (e.g., draft, under review, cleared for limited disclosure). Adjudication is recorded; that record is what makes external claims defensible under scrutiny.
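As a sketch, using the state names from the example above and an assumed transition map: transitions are enumerated, and every adjudication appends a record.

```python
# Defined states and permitted transitions (assumed for illustration).
TRANSITIONS = {
    "draft": {"under_review"},
    "under_review": {"cleared_limited_disclosure", "escalated", "draft"},
    "escalated": {"under_review", "cleared_limited_disclosure"},
    "cleared_limited_disclosure": set(),  # terminal in this sketch
}

def transition(case: dict, new_state: str, reviewer: str, note: str) -> None:
    """Reject undefined moves and record the adjudication that makes
    later external claims defensible."""
    if new_state not in TRANSITIONS[case["state"]]:
        raise ValueError(f"{case['state']} -> {new_state} is not a defined transition")
    case.setdefault("audit", []).append(
        {"from": case["state"], "to": new_state, "by": reviewer, "note": note})
    case["state"] = new_state
```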
Defensibility
Why this is reviewable.
Reviewability is the bar for institutional adoption: a third party can ask how a conclusion was produced and receive a coherent answer from structure, not from internal lore.
- Inputs are structured and attributable: what was observed is distinguishable from what was inferred.
- Scores are decomposable into domains—reviewers can interrogate components rather than fight a single opaque number.
- Confidence and completeness are surfaced so “high score / low support” situations are visible before disclosure.
- Benchmark labels and cohort rules are explicit enough for a third party to ask: compared to whom, under what rules, as of when.
- Governance events (review, escalation, clearance) leave a trace suitable for institutional oversight—not merely an activity log.
Interpretation
How confidence should be interpreted.
Misread confidence and you either over-trust thin evidence or under-use a strong signal. The following guardrails are methodological, not promotional.
Higher confidence
Generally indicates more complete, consistent evidence against methodology expectations for that domain. It does not imply clinical certainty or legal proof—it means the structured record supports the conclusion more strongly.
Lower confidence
Should trigger proportionate caution: broader review, additional documentation requests, or withholding external-facing summaries until governance clears—not automatic dismissal of the case.
Not a substitute for judgment
Confidence markers inform allocation of human attention. They do not replace professional review where policy, regulation, or ethics require it.
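An illustrative sketch of proportionate routing, not FI policy: confidence bands map to actions, and the lowest band widens review and withholds disclosure rather than dismissing the case. Band names and actions are assumptions.

```python
ACTIONS = {  # band names and actions are assumptions for illustration
    "high": ["standard review"],
    "moderate": ["standard review", "request additional documentation"],
    "low": ["broader review", "request additional documentation",
            "withhold external-facing summary until governance clears"],
}

def route_for_review(confidence_band: str) -> list[str]:
    """Allocate human attention proportionately; no band auto-dismisses."""
    return ACTIONS[confidence_band]
```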
Standards
Alignment with IIOHR and institutional practice.
FI does not claim to be a professional regulator. Methodology alignment means operable frameworks, review pathways, and traceability that institutes can adopt alongside IIOHR-led training and governance—not a substitute for professional judgment or statutory requirements.
- Methodology documentation and review pathways are designed to sit alongside IIOHR-aligned training and standards work—not to replace professional bodies, but to make institutional programs operable.
- Versioning and auditability expectations match what associations and institutes need when they adopt third-party infrastructure: explicit rules, traceable changes, and exportable review records where policy allows.
- Advisory alignment with IIOHR is organizational and methodological; specific certifications or endorsements are stated only where contractually and factually accurate.
Ecosystem
The same framework underpins HairAudit scoring and supports Hair Longevity Institute longitudinal pathways—so surgery, biology, and professional standards reinforce one another in one architecture.
For extended discussion of evaluation in hair restoration, see our clinical evaluation pillar.