Why do my Pivot Table numbers look different from my Workbench?

Understand why pivot tables can show different numbers compared with the Workbench (aka Dashboard)

Written by Christina Petrakos
Updated yesterday

It’s normal to see differences between Pivot Tables and Workbenches in Kapiche.

Both are correct; they’re just answering slightly different questions.

This article explains:

  1. Why numbers differ (the mechanics behind it)

  2. Which view to use in different scenarios

  3. How to reconcile your numbers

  4. FAQ + Cheat sheet for quick reference


1. Common reasons for differences

1.1 Baseline differences

  • Workbench: When you filter your data (e.g. by brand, segment, or product line), the Workbench recalculates NPS Impact using that filtered set as the baseline.

  • Pivot table: The pivot table shows the impact of the subset you’ve selected (e.g. Brand A + “Customer Service”) against the overall dataset.

💡 Example:

On the Workbench, Brand A’s Customer Service looks like a strong driver of NPS because the baseline has adjusted to Brand A only.

In the pivot, the same theme looks weaker because it’s being compared against all brands combined.
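The baseline effect above can be sketched in a few lines of Python. This is an illustrative toy model, not Kapiche’s exact internal formula: here “Impact” is assumed to mean the baseline’s NPS minus the NPS of the same baseline with the theme’s responses removed, and the brands, themes, and scores are made up.

```python
# Toy model of baseline-dependent Impact (assumed formula, not Kapiche's):
# Impact = NPS(baseline) - NPS(baseline excluding the theme's responses).
def nps(scores):
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return (promoters - detractors) / len(scores) * 100

def impact(records, theme):
    baseline = [s for _, t, s in records]
    without = [s for _, t, s in records if t != theme]
    return nps(baseline) - nps(without)

# Hypothetical records: (brand, theme, score)
records = [
    ("A", "Customer Service", 3), ("A", "Customer Service", 4),
    ("A", "Delivery", 9), ("A", "Delivery", 10),
    ("B", "Delivery", 9), ("B", "Delivery", 10),
]
brand_a = [r for r in records if r[0] == "A"]

print(impact(brand_a, "Customer Service"))  # baseline = Brand A slice only
print(impact(records, "Customer Service"))  # baseline = whole dataset
```

The same theme produces two different Impact values purely because the baseline changed, which is exactly the Workbench-vs-pivot difference described above.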


1.2 Filters and segmentation

  • Workbench: Often has filters applied by default (like dates, sentiment, or customer segments).

  • Pivot table: Shows all records unless you add the same filters manually.

💡 Example:

Your Workbench is filtered to “Last 30 days” and “Promoters only.”

The pivot table shows every record from the whole year.

The numbers don’t line up until you apply the same filters in both places.
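As a sketch of that mismatch, here is a small Python example with hypothetical records and a stand-in cutoff date for “Last 30 days.” The counts disagree until the same filters are applied to both views:

```python
from datetime import date

# Hypothetical records: (record_date, nps_group)
records = [
    (date(2024, 1, 5), "Promoter"),
    (date(2024, 6, 1), "Detractor"),
    (date(2024, 6, 10), "Promoter"),
]

cutoff = date(2024, 5, 15)  # stand-in for a "Last 30 days" filter

# Workbench view: date + "Promoters only" filters applied by default.
workbench_view = [r for r in records if r[0] >= cutoff and r[1] == "Promoter"]

# Pivot view: all records, no filters applied yet.
pivot_view = records

print(len(workbench_view), len(pivot_view))  # 1 vs 3: they don't line up

# Reapply the Workbench's filters to the pivot and the counts match.
pivot_filtered = [r for r in records if r[0] >= cutoff and r[1] == "Promoter"]
print(len(pivot_filtered) == len(workbench_view))  # True
```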


1.3 Aggregation logic

  • Workbench: May calculate metrics per customer or per record, depending on the widget. This smooths out duplicates and blanks.

  • Pivot table: Usually counts every record at the raw, unaggregated level.

💡 Example:

One customer leaves three comments about Delivery.

The Workbench counts them once (because it’s aggregating at customer level).

The pivot table counts all three comments.
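The counting difference is easy to see in code. This sketch uses invented customer IDs and assumes the Workbench is aggregating at customer level for this widget (as the example above describes):

```python
# Hypothetical comments: (customer_id, theme)
comments = [
    ("cust_1", "Delivery"), ("cust_1", "Delivery"), ("cust_1", "Delivery"),
    ("cust_2", "Delivery"),
]

# Record-level count (pivot-style): every comment is counted.
record_level = sum(t == "Delivery" for _, t in comments)

# Customer-level count (Workbench-style): each customer counted once.
customer_level = len({c for c, t in comments if t == "Delivery"})

print(record_level, customer_level)  # 4 vs 2
```

Same data, two legitimate answers: the right one depends on whether your question is about comments or about customers.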


1.4 Derived vs raw fields

  • Workbench: Uses calculated fields (like sentiment, theme groups, or top-2 box scores).

  • Pivot table: Shows the raw underlying data, including blanks or uncategorised values.

💡 Example:

The Workbench shows a neat “Top-2 Box Satisfaction” score of 72%.

The pivot table shows the original 1–10 scale responses, so the average might look different.
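To make the derived-vs-raw gap concrete, here is a sketch with invented responses, assuming “Top-2 Box” means the share of responses scoring 9 or 10 on the 1–10 scale:

```python
# Hypothetical 1-10 satisfaction responses (blanks already dropped).
scores = [10, 9, 9, 8, 7, 6, 5, 10, 9, 2]

# Derived metric: share of responses in the top two options (9 or 10).
top2_box = sum(s >= 9 for s in scores) / len(scores) * 100

# Raw view: plain average of the same responses, as a pivot might show.
average = sum(scores) / len(scores)

print(f"Top-2 Box: {top2_box:.0f}%  Average: {average:.1f}")  # 50% vs 7.5
```

A 50% Top-2 Box and a 7.5 average describe the same responses; they just summarise them differently, which is why the two views can look inconsistent at a glance.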


2. Which should I use? (By scenario)

Here’s a guide to picking the right tool depending on what you want to do.


2.1 Comparing overall scores (NPS, CSAT, CES) across segments

Best to use: Standard score (e.g. NPS = %Promoters − %Detractors)

  • Workbench: Good for high-level reporting and side-by-side comparisons.

  • Pivot tables: Useful if you want to break scores down further by another field.

Avoid Impact on NPS here; it’s not designed for head-to-head comparisons.
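The standard score itself is simple arithmetic. A minimal sketch, using invented segment data and the usual NPS bands (9–10 promoter, 0–6 detractor):

```python
def nps(scores):
    """Standard NPS: %Promoters - %Detractors, as a whole number."""
    promoters = sum(s >= 9 for s in scores) / len(scores)
    detractors = sum(s <= 6 for s in scores) / len(scores)
    return round((promoters - detractors) * 100)

# Hypothetical segments for a side-by-side comparison.
segment_a = [10, 9, 8, 3]  # 2 promoters, 1 passive, 1 detractor
segment_b = [9, 7, 6, 5]   # 1 promoter, 1 passive, 2 detractors

print(nps(segment_a), nps(segment_b))  # 25 vs -25
```

Because each segment’s score is computed on its own responses, these numbers are directly comparable head-to-head, which is what makes the standard score the right choice for this scenario.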


2.2 Understanding what’s driving NPS (or CSAT/CES) inside a segment

Best to use: Impact on NPS (or Impact on CSAT/CES)

  • Workbench: Shows which themes are driving the score within the slice you’ve filtered (e.g. Brand A customers). Baseline = that slice.

  • Pivot tables: Show how the same theme+slice compares against the overall dataset (e.g. Brand A’s Customer Service impact vs all brands combined).

Why: Workbenches answer “what matters most for this slice?” while pivots answer “how does this slice compare to the global picture?”


2.3 Tracking sentiment across segments

Best to use: Sentiment proportions (Positive/Negative/Neutral)

  • Workbench: Best for comparing sentiment patterns quickly (e.g. channels, regions).

  • Pivot tables: Allow you to cross-tab sentiment with other fields (e.g. Negative sentiment about Pricing in Region X).


2.4 Checking frequencies (how often something is mentioned)

Best to use: Mentions/frequencies

  • Workbench: Great for showing the “top themes” at a glance.

  • Pivot tables: Provide exact counts and let you break down mentions by multiple dimensions (e.g. How many times was “Ease of Use” mentioned by Detractors in Q2?).


3. How to reconcile your numbers

If you want your pivot to match your Workbench more closely:

  1. Check what filters are active on your Workbench.

  2. Apply the same filters in your pivot table (date, sentiment, theme, etc.).

  3. Compare like-for-like metrics (e.g. customer-level counts vs record-level counts).

✨ Tip: If the numbers still don’t match, check whether you’re looking at a filtered baseline (Workbench) vs the overall dataset (pivot).


4. FAQ

Which number is “correct”?

Both. They’re just calculated differently. Workbenches are better for reporting trends. Pivots are better for exploring the raw details.

Which should I use in reports to stakeholders?

Use the Workbench view for consistency and clarity. Use Pivots when you need to drill into detail or answer “why is this number like that?”


5. Cheat sheet summary

  • Comparing scores across segments? → Use standard NPS/CSAT/CES

  • Understanding drivers inside a segment? → Use Impact (Workbench = within slice, Pivot = vs overall)

  • Tracking sentiment? → Use Sentiment proportions (Workbench for patterns, Pivot for cross-tabs)

  • Counting mentions? → Use Frequencies (Workbench for quick overview, Pivot for detail)

👉 Rule of thumb:

  • Reporting = Workbenches

  • Comparing = Standard NPS/CSAT/CES

  • Drivers = Impact

  • Investigation = Pivots
