TL;DR
- Normalized UTM fields before save; standardized sources and mediums.
- Exception subflow surfaces malformed UTMs and prompts remediation without blocking reps.
- Cleaner attribution improved marketing ops confidence and MQL acceptance.
Context
Multi-channel SaaS marketing with self-serve and SDR-assisted funnels. Disparate sources (ads, webinars, partner) created noisy UTMs.
Problem
Inconsistent UTMs and missing parameters led to 'unknown source' buckets, rework, and rejected MQLs.
Intervention
• Normalization — Before-save Flow trims/lowercases UTMs, maps synonyms (e.g., 'ppc'→'paid'), and validates required params.
• Exceptions — Malformed/missing values raise a soft exception with a remediation task; partner UTMs auto-corrected via mapping.
• Attribution hooks — Subflow stamps primary touch and last touch fields for dashboards; human overrides logged with reason.
• Ops playbook — Weekly review of exceptions; trend dashboard by channel/partner.
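The normalization and exception steps above can be sketched in ordinary code. This is a minimal Python illustration only; the synonym table, required-parameter set, and function names are hypothetical, and the real logic lives in a Before-Save Flow driven by custom metadata mappings.

```python
# Illustrative sketch of the normalization step; names are hypothetical.
SYNONYM_MAP = {          # synonym table, e.g. 'ppc' -> 'paid'
    "ppc": "paid",
    "cpc": "paid",
    "e-mail": "email",
}
REQUIRED_PARAMS = {"utm_source", "utm_medium", "utm_campaign"}

def normalize_utms(raw: dict) -> tuple[dict, list[str]]:
    """Trim/lowercase UTM values, map synonyms, report missing params."""
    normalized = {}
    for key, value in raw.items():
        cleaned = (value or "").strip().lower()
        normalized[key] = SYNONYM_MAP.get(cleaned, cleaned)
    # Any required param that is absent or empty raises a soft exception
    # downstream (remediation task) rather than blocking the save.
    missing = sorted(REQUIRED_PARAMS - {k for k, v in normalized.items() if v})
    return normalized, missing

clean, missing = normalize_utms({"utm_source": "  Google ", "utm_medium": "PPC"})
# clean == {"utm_source": "google", "utm_medium": "paid"}; missing == ["utm_campaign"]
```

The key design point mirrors the Flow: values are cleaned and mapped in place before save, while missing parameters are reported out for the exception subflow instead of rejecting the record.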
Outcomes
| Dimension | Detail |
| --- | --- |
| Window | 60 days pre vs 60 days post go-live |
| Industry | SaaS |
| Clouds | Sales Cloud |
| Flow Types | Before-Save, Record-Triggered, Subflow |
- Unknown source: missing/invalid UTMs remaining after normalization.
- Acceptance: MQLs approved by Sales within SLA.
- Confidence: scored via the ops rubric.
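The two rate metrics defined above reduce to simple proportions. A hypothetical Python sketch, assuming field names like `utm_source_normalized` and `accepted_within_sla` that are not from the source:

```python
# Hypothetical sketch of the two reporting metrics; field names assumed.
def unknown_source_rate(leads: list[dict]) -> float:
    """Share of leads still missing a valid normalized source."""
    if not leads:
        return 0.0
    unknown = sum(1 for lead in leads if not lead.get("utm_source_normalized"))
    return unknown / len(leads)

def mql_acceptance_rate(mqls: list[dict]) -> float:
    """Share of MQLs approved by Sales within SLA."""
    if not mqls:
        return 0.0
    accepted = sum(1 for mql in mqls if mql.get("accepted_within_sla"))
    return accepted / len(mqls)
```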
Timeline
1 sprint build + 1 sprint reporting + 2-week bedding-in.
Stack
Sales Cloud (Leads, Campaign Members), Before-Save Flow, subflows, custom metadata mapping tables.
Artifacts
- Normalization mapping table
- Exception surfacing flow diagram
- Attribution dashboard slice (primary vs last touch)
FAQ
Will normalization overwrite true source data?
No; raw UTMs are preserved in parallel fields for audit. Normalized fields power reporting and routing.
How are partner anomalies handled?
Known partner patterns auto-map; otherwise, an exception record is created with suggested values for quick fix.
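The partner branch above (auto-map known patterns, otherwise raise an exception with suggested values) can be sketched as follows. The partner map and fuzzy-suggestion approach are assumptions for illustration, not the actual Flow configuration:

```python
# Hypothetical sketch of the partner-anomaly branch; mappings assumed.
import difflib

PARTNER_MAP = {"acme-ref": "partner_acme", "globex_aff": "partner_globex"}

def resolve_partner(utm_source: str) -> dict:
    """Auto-map known partner patterns; otherwise suggest close matches."""
    key = utm_source.strip().lower()
    if key in PARTNER_MAP:
        return {"status": "auto_mapped", "value": PARTNER_MAP[key]}
    # Unknown pattern: create an exception record with suggested values
    # so the rep can apply a quick fix rather than guess.
    matches = difflib.get_close_matches(key, PARTNER_MAP.keys(), n=3)
    return {"status": "exception", "suggested": [PARTNER_MAP[m] for m in matches]}

resolve_partner(" Acme-Ref ")   # -> {"status": "auto_mapped", "value": "partner_acme"}
resolve_partner("acme-reff")    # -> exception with "partner_acme" suggested
```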