
Documentation Index

Fetch the complete documentation index at: https://docs.kowalah.com/llms.txt

Use this file to discover all available pages before exploring further.

ICE scoring is how Kowalah turns “this feels promising” into a defensible call about what to build, when to build it, and how it landed. Every Opportunity, Deliverable, and Expert Request carries an ICE score — three numbers and a short written rationale — that everyone in your organization and your Kowalah team can see.

The three dimensions

Every score is made up of three numbers, each on a 1 to 10 scale. The composite is simply the sum, so it ranges from 3 to 30.
Dimension    What it measures
Impact       How much value this would create if it works
Confidence   How likely it is to actually work
Ease         How quickly and cheaply it can be delivered
Higher is better on all three. A score of 9/10/8 (composite 27) is a clearer “build this next” signal than 5/4/3.
ICE works because it’s small enough to reason about out loud. If you can’t put three numbers on a piece of work in a meeting, you probably haven’t thought about it clearly enough yet.
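The arithmetic above is small enough to sketch directly. This is a minimal illustration, not Kowalah’s implementation; the function name is ours:

```python
def composite_ice(impact: int, confidence: int, ease: int) -> int:
    """Sum the three ICE dimensions, each scored on a 1-10 scale.

    The composite is the plain sum, so it ranges from 3 to 30.
    """
    for name, value in (("impact", impact), ("confidence", confidence), ("ease", ease)):
        if not 1 <= value <= 10:
            raise ValueError(f"{name} must be between 1 and 10, got {value}")
    return impact + confidence + ease
```

A 9/10/8 score gives `composite_ice(9, 10, 8) == 27`, while 5/4/3 gives 12 — the clear-signal gap described above.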

Impact has three flavours

Impact is the dimension that drives the most discussion, so it’s broken into three named rationale slots. You don’t have to fill in all three — the ones you fill in tell everyone else what kind of impact you’re claiming.
Slot           What goes here                                               Example
Business       Revenue uplift or cost reduction, in money where possible    “Reduce supplier costs by ~£2.5M annually”
Productivity   Hours saved, headcount freed up, throughput increased        “200 hours/week saved across the support team”
Adoption       Users active, login rate, behaviour change                   “Drive Claude weekly active rate from 15% to 60%”
You can claim a high Impact score (say, 9) on the strength of a single slot — for example a productivity-only opportunity that’s genuinely transformational. The slots force you to say what kind of impact you mean, not to fill them all in.
Confidence and Ease each have a single rationale field, because they’re unitary judgements. A short sentence explaining why you scored what you did is enough.
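One way to picture the shape of a full score record — three Impact rationale slots, a single rationale each for Confidence and Ease. Class and field names here are our assumptions, not the platform’s schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ImpactRationale:
    """The three named Impact slots; fill in only the kinds you're claiming."""
    business: Optional[str] = None      # money terms where possible
    productivity: Optional[str] = None  # hours saved, throughput increased
    adoption: Optional[str] = None      # usage and behaviour change

@dataclass
class IceScore:
    impact: int
    confidence: int
    ease: int
    impact_rationale: ImpactRationale = field(default_factory=ImpactRationale)
    confidence_rationale: str = ""      # one short sentence is enough
    ease_rationale: str = ""

    @property
    def composite(self) -> int:
        return self.impact + self.confidence + self.ease
```

A productivity-only claim would populate just that slot, e.g. `IceScore(9, 6, 7, ImpactRationale(productivity="200 hours/week saved"))`.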

Where scoring shows up

ICE scores attach to three entities in the platform:

Opportunities

A single live score that captures the current triage state, refined as you learn more.

Deliverables

An estimated score at planning, then a final score at closeout — the prediction vs. result.

Expert Requests

Same pattern as Deliverables: estimated at submission, final when the work is delivered.

Why opportunities have one score and the rest have two

At the opportunity stage, work hasn’t started — the score is just the current best read on whether this idea is worth promoting. AI may take a first pass, then a human refines it. There’s no “delivered” yet, so there’s no estimated-vs-final story. Once an opportunity is promoted into a Deliverable or Expert Request, the question changes: did we deliver what we said we would? That’s why Deliverables and Expert Requests carry both an estimated score (set when planning the work) and a final score (set at closeout, alongside any Outcomes that have actually landed).
If the final score is dramatically different from the estimated score — say, you estimated Impact 9 and delivered Impact 4 — that’s a useful signal, not a failure. The gap is a learning prompt for how the team scopes future work.
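The estimated-vs-final comparison can be sketched as a per-dimension delta. This is illustrative only; the function and field names are assumed:

```python
def score_gaps(estimated: dict, final: dict) -> dict:
    """Per-dimension delta, final minus estimated; negative means over-estimated."""
    return {dim: final[dim] - estimated[dim] for dim in ("impact", "confidence", "ease")}

gaps = score_gaps(
    {"impact": 9, "confidence": 7, "ease": 6},  # set when planning the work
    {"impact": 4, "confidence": 8, "ease": 6},  # set at closeout
)
# The impact gap of -5 is the learning prompt described above.
```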

How scores change over time

Every change to a score is recorded — who made it, when, what changed, and why.
  • Estimated score is editable while the work is in flight.
  • Final score locks shortly after the work closes, to preserve the prediction-vs-result narrative for reporting. If something material changes after that, capture it as an Outcome instead.
  • Score history is visible on every entity. You can see whether the AI took the first pass and a human refined it, or whether confidence climbed as the team got further into delivery.
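The audit trail above — who, when, what changed, and why — could be modelled as an append-only log. A sketch under assumed names, not the platform’s data model:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class ScoreChange:
    actor: str            # "Kowalah AI" or a named person
    at: datetime
    dimension: str        # "impact", "confidence", or "ease"
    old: Optional[int]    # None for the first pass
    new: int
    rationale: str

history: list = []

def record(actor: str, dimension: str, new: int, rationale: str) -> None:
    """Append a change, capturing the previous value for that dimension."""
    old = next((c.new for c in reversed(history) if c.dimension == dimension), None)
    history.append(ScoreChange(actor, datetime.now(timezone.utc), dimension, old, new, rationale))

# AI first pass, then a human refinement — the pattern described above.
record("Kowalah AI", "confidence", 5, "First pass from the opportunity description")
record("J. Smith", "confidence", 8, "Pilot data from the support team")
```

Reading the log back shows exactly how human judgement adjusted the AI’s initial take.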

Who scores

Scoring is a shared activity:
  • Kowalah AI often takes a first pass on Opportunities the moment they arrive — particularly opportunities raised through Slack, Teams, Google Chat, or email — so the triage queue is never empty.
  • Your Kowalah team refines scores during triage and sets the estimated score when work is committed.
  • Anyone with write access to the entity can update a score, including customer stakeholders linked to a Deliverable or Expert Request. Each change records a rationale.
The first row in the score history is usually Kowalah AI’s initial take. That’s intentional — it lets you see how human judgement adjusted the AI’s first pass, and over time it lets the AI calibrate against human refinements.


Using scores to make decisions

You’re trying to                                           Look at
Decide which Opportunities to promote first                Composite score on Opportunities, sorted descending
See whether a Deliverable is still worth the effort        Estimated score vs. how the work is progressing
Report to the leadership team on what’s been delivered     Final scores on completed Deliverables + linked Outcomes
Calibrate how your team scopes future AI work              Estimated vs. final gaps across recent Deliverables
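The first row of the table — triage by composite score, highest first — looks like this in practice. The opportunity data and field names are invented for illustration:

```python
# Hypothetical triage queue; each entry carries its three ICE dimensions.
opportunities = [
    {"title": "Supplier cost reduction", "impact": 9, "confidence": 10, "ease": 8},
    {"title": "Meeting notes summariser", "impact": 5, "confidence": 4, "ease": 3},
    {"title": "Support deflection", "impact": 8, "confidence": 6, "ease": 7},
]

# Sort on the composite (the sum of the three dimensions), descending.
triage_order = sorted(
    opportunities,
    key=lambda o: o["impact"] + o["confidence"] + o["ease"],
    reverse=True,
)
# "Supplier cost reduction" (27) comes first; "Meeting notes summariser" (12) last.
```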

Outcomes

Realised, often measurable results that back up the final score on a Deliverable or Expert Request.

Opportunities

Where ICE scoring starts — capture and triage AI ideas before promoting them.