ICE scoring is how Kowalah turns “this feels promising” into a defensible call about what to build, when to build it, and how it landed. Every Opportunity, Deliverable, and Expert Request carries an ICE score — three numbers and a short written rationale — that everyone in your organization and your Kowalah team can see.
## The three dimensions
Every score is made up of three numbers, each on a 1 to 10 scale. The composite is simply the sum, so it ranges from 3 to 30.

| Dimension | What it measures |
|---|---|
| Impact | How much value this would create if it works |
| Confidence | How likely it is to actually work |
| Ease | How quickly and cheaply it can be delivered |
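The scale and composite above can be sketched as a small data structure. This is an illustrative model only — the class and field names are assumptions, not Kowalah's actual API:

```python
from dataclasses import dataclass

@dataclass
class IceScore:
    """Hypothetical ICE score record; names are illustrative, not Kowalah's schema."""
    impact: int      # 1-10: how much value this creates if it works
    confidence: int  # 1-10: how likely it is to actually work
    ease: int        # 1-10: how quickly and cheaply it can be delivered

    def __post_init__(self):
        # Each dimension is scored on a 1-10 scale.
        for name in ("impact", "confidence", "ease"):
            value = getattr(self, name)
            if not 1 <= value <= 10:
                raise ValueError(f"{name} must be 1-10, got {value}")

    @property
    def composite(self) -> int:
        # The composite is simply the sum, so it ranges from 3 to 30.
        return self.impact + self.confidence + self.ease

print(IceScore(impact=9, confidence=6, ease=4).composite)  # 19
```

Because the composite is an unweighted sum, a single strong dimension can carry an otherwise middling idea — which is why the written rationale matters as much as the numbers.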
### Impact has three flavours
Impact is the dimension that drives the most discussion, so it’s broken into three named rationale slots. You don’t have to fill in all three — the ones you fill in tell everyone else what kind of impact you’re claiming.

| Slot | What goes here | Example |
|---|---|---|
| Business | Revenue uplift or cost reduction, in money where possible | “Reduce supplier costs by ~£2.5M annually” |
| Productivity | Hours saved, headcount freed up, throughput increased | “200 hours/week saved across the support team” |
| Adoption | Users active, login rate, behaviour change | “Drive Claude weekly active rate from 15% to 60%” |
You can claim a high Impact score (say, 9) on the strength of a single slot — for example a productivity-only opportunity that’s genuinely transformational. The slots force you to say what kind of impact you mean, not to fill them all in.
## Where scoring shows up
ICE scores attach to three entities in the platform:

- **Opportunities**: a single live score that captures current triage state. It refines as you learn more.
- **Deliverables**: an estimated score at planning, then a final score at closeout — the prediction vs. the result.
- **Expert Requests**: same pattern as Deliverables — estimated at submission, final when the work is delivered.
### Why opportunities have one score and the rest have two
At the opportunity stage, work hasn’t started — the score is just the current best read on whether this idea is worth promoting. AI may take a first pass, then a human refines it. There’s no “delivered” yet, so there’s no estimated-vs-final story. Once an opportunity is promoted into a Deliverable or Expert Request, the question changes: did we deliver what we said we would? That’s why Deliverables and Expert Requests carry both an estimated score (set when planning the work) and a final score (set at closeout, alongside any Outcomes that have actually landed).

## How scores change over time
Every change to a score is recorded — who made it, when, what changed, and why.

- Estimated score is editable while the work is in flight.
- Final score locks shortly after the work closes, to preserve the prediction-vs-result narrative for reporting. If something material changes after that, capture it as an Outcome instead.
- Score history is visible on every entity. You can see whether the AI took the first pass and a human refined it, or whether confidence climbed as the team got further into delivery.
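A score-history entry captures the who/when/what/why described above. The sketch below is a plausible shape for such a record, assuming made-up field names — Kowalah’s actual schema may differ:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ScoreChange:
    """One entry in a score's audit trail: who made it, what changed, and why.
    Illustrative only; field names are assumptions, not Kowalah's API."""
    author: str       # e.g. "Kowalah AI" for the automated first pass
    impact: int
    confidence: int
    ease: int
    rationale: str    # every change records a written rationale
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# The first row is usually the AI's initial take; later rows show human refinement.
history = [
    ScoreChange("Kowalah AI", 7, 4, 6, "First pass from the Slack thread"),
    ScoreChange("Reviewer", 8, 6, 6, "Confidence raised after pilot results"),
]
```

Reading the trail top to bottom shows how human judgement adjusted the AI’s first pass, and whether confidence climbed as delivery progressed.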
## Who scores

Scoring is a shared activity:

- Kowalah AI often takes a first pass on Opportunities the moment they arrive — particularly opportunities raised through Slack, Teams, Google Chat, or email — so the triage queue is never empty.
- Your Kowalah team refines scores during triage and sets the estimated score when work is committed.
- Anyone with write access to the entity can update a score, including customer stakeholders linked to a Deliverable or Expert Request. Each change records a rationale.
The first row in the score history is usually Kowalah AI’s initial take. That’s intentional — it lets you see how human judgement adjusted the AI’s first pass, and over time it lets the AI calibrate against human refinements.
## Using scores to make decisions
| You’re trying to | Look at |
|---|---|
| Decide which Opportunities to promote first | Composite score on Opportunities, sorted descending |
| See whether a Deliverable is still worth the effort | Estimated score vs. how the work is progressing |
| Report to the leadership team on what’s been delivered | Final scores on completed Deliverables + linked Outcomes |
| Calibrate how your team scopes future AI work | Estimated vs. final gaps across recent Deliverables |
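Two of the queries in the table above — promotion order and scoping calibration — can be sketched over estimated and final composites. The data and names here are invented for illustration:

```python
# (name, estimated composite, final composite) — made-up example deliverables.
deliverables = [
    ("Invoice triage bot", 24, 19),
    ("Supplier chat assistant", 18, 21),
    ("Onboarding FAQ agent", 22, 22),
]

# Decide what to tackle first: composite score, sorted descending.
by_priority = sorted(deliverables, key=lambda d: d[1], reverse=True)

# Calibrate scoping: estimated-vs-final gap (positive = over-estimated).
gaps = {name: estimated - final for name, estimated, final in deliverables}
print(gaps)  # {'Invoice triage bot': 5, 'Supplier chat assistant': -3, 'Onboarding FAQ agent': 0}
```

A consistently positive gap across recent Deliverables suggests the team is over-promising at planning time; a near-zero gap suggests scoping is well calibrated.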
Related pages:

- **Outcomes**: realised, often measurable results that back up the final score on a Deliverable or Expert Request.
- **Opportunities**: where ICE scoring starts — capture and triage AI ideas before promoting them.