Last week we wrote about the invisible work that happens between business platforms — the moving, reconciling, summarising and routing that nobody owns and everybody depends on. This is what we built to absorb it inside our own marketing stack.
It is not glamorous work. It is, however, the part of the system that changes the unit economics. Until the cognitive layer is in place, most of the value of the surrounding tools is theoretical.
The constraint
Pattern is a small studio. There is no marketing operations team. There is no ops engineer hired to sit between Customer.io and Dynamics 365 reconciling fields. Whatever cognitive layer we built had to:
- Run unattended — once shipped, it has to operate without daily human babysitting
- Use the tools we already paid for — no new vendor contract for "automation suite of the year"
- Cost less than a junior salary in tooling — measured in hundreds of dollars a month, not tens of thousands a year
- Be readable by one engineer — when something breaks, the person on call has to be able to reason about it
Most of the commercial answers to this problem (iPaaS platforms, dedicated marketing-ops vendors) violate at least three of those constraints. They are priced for businesses with a marketing ops team, on the assumption that platform fees will be smaller than the salaries they replace.
For a lean organisation, the maths is different.
The problem
A modern marketing stack is around nine tools. Ours runs:
- Capture — Customer.io (web tracking + journeys), RB2B (account de-anonymisation), native React forms posting to an internal Azure Function
- Intelligence — Apollo.io (enrichment), PostHog (analytics, replay)
- AI — Claude API (scoring, personalisation, summarisation)
- CRM — Dynamics 365 Sales (system of record)
- Paid media — Google Ads, LinkedIn Campaign Manager
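For the native-form leg of capture, the Azure Function's job is to validate and normalise before anything downstream fires. A minimal sketch of that handler logic — the payload shape and field names here are illustrative, not our production schema:

```typescript
// Hypothetical payload from the native React form (field names illustrative).
interface FormPayload {
  email?: string;
  name?: string;
  source?: string;
}

interface LeadRecord {
  email: string;
  name: string;
  source: string;
  capturedAt: string;
}

// Normalise and validate a submission before it is handed to the downstream
// sync workflows; returns null when the payload is unusable.
function normaliseSubmission(p: FormPayload, now: Date = new Date()): LeadRecord | null {
  const email = p.email?.trim().toLowerCase() ?? "";
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email)) return null; // reject malformed emails
  return {
    email,
    name: p.name?.trim() ?? "",
    source: p.source ?? "web-form", // default source label (assumption)
    capturedAt: now.toISOString(),
  };
}
```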
Each tool is good at its own job. None of them owns the cognitive work between the jobs. When a contact downloads our AI Basecamp guide, the chain of work that follows looks like this:
- Customer.io captures the form fill and identifies the visitor
- The contact has to land in D365 as a Lead with the right source, owner, and metadata
- Apollo has to enrich the company and contact records
- Claude has to score the result against current rubric and behavioural signals
- The score has to write back to both D365 and Customer.io
- Customer.io has to route the contact into the correct journey based on the new score
- If the score crosses an opportunity threshold, D365 has to notify the right person
That is seven hand-offs across five platforms for a single form fill. Without a deliberate layer in the middle, six of those steps become someone's Tuesday afternoon.
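The chain above can be sketched as seven sequential steps, each owning one platform interaction. Step names, payloads and the score value are illustrative placeholders, not our production workflow:

```typescript
// Each hand-off is one step: a named unit of work against one platform.
type Platform = "customerio" | "d365" | "apollo" | "claude";

interface Step {
  name: string;
  platform: Platform;
  run: (ctx: Record<string, any>) => Record<string, any>;
}

// Deterministic plumbing everywhere except "score", the only Claude call.
const chain: Step[] = [
  { name: "capture-form-fill", platform: "customerio", run: (c) => ({ ...c, identified: true }) },
  { name: "create-lead", platform: "d365", run: (c) => ({ ...c, leadId: "L-1" }) },
  { name: "enrich", platform: "apollo", run: (c) => ({ ...c, company: "Acme" }) },
  { name: "score", platform: "claude", run: (c) => ({ ...c, score: 72 }) },
  { name: "writeback-score", platform: "d365", run: (c) => c },
  { name: "route-journey", platform: "customerio", run: (c) => ({ ...c, journey: "nurture-high" }) },
  { name: "notify-owner", platform: "d365", run: (c) => c },
];

// Run the chain: each step receives the accumulated context of the previous ones.
function runChain(steps: Step[], seed: Record<string, any>): Record<string, any> {
  return steps.reduce((ctx, step) => step.run(ctx), seed);
}
```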
The solution structure
The cognitive layer is n8n, running on an Azure B2s VM with Docker, docker-compose, and Traefik for TLS — twenty-two workflows across six categories:
| Category | Count | What it owns |
| --- | --- | --- |
| Sync | 3 | Bi-directional D365 ↔ Customer.io contact sync, unsubscribe propagation |
| Scoring | 2 | Real-time lead scoring, daily re-score |
| Enrichment | 3 | Apollo (single + bulk), RB2B account intelligence |
| Content | 8 | Personalisation, generation, quality gate, style memory, edit-learning |
| Publishing | 3 | Blog → Sanity, newsletter → Customer.io, social → Buffer |
| ABM | 3 | Account summary, opportunity creation, pre-generation |
Each workflow is a JSON file in version control. Each one starts from a clear trigger — webhook, schedule, or event — and ends with a writeback to one of the systems of record. Claude calls only happen where judgement is needed (scoring, summarising, personalising) and never where deterministic plumbing will do.
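For illustration, an abridged workflow file of that trigger-to-writeback shape — node parameters are heavily trimmed and are not our production config:

```json
{
  "name": "lead-score-writeback",
  "nodes": [
    {
      "name": "Webhook",
      "type": "n8n-nodes-base.webhook",
      "typeVersion": 1,
      "position": [0, 0],
      "parameters": { "path": "lead-scored", "httpMethod": "POST" }
    },
    {
      "name": "Write back to CRM",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 1,
      "position": [200, 0],
      "parameters": { "method": "PATCH", "url": "={{$json[\"crmUrl\"]}}" }
    }
  ],
  "connections": {
    "Webhook": { "main": [[{ "node": "Write back to CRM", "type": "main", "index": 0 }]] }
  }
}
```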
End-to-end tooling cost runs around $110 to $280 a month depending on Customer.io tier and Apollo usage. The Claude API sits inside that envelope at $10–50/month at our volume. Everything else is free or already paid for.
A note as we publish this: we are mid-migration from Dynamics 365 to Attio. The cognitive-layer architecture above is independent of which CRM sits at the back — the same n8n workflows route equivalently to either system. We will publish a teardown of the swap itself once it ships.
Where the friction lives
Three failure modes worth naming.
Field ownership across systems. Bi-directional sync between two systems of record sounds clean on a whiteboard. In production, two updates land within seconds of each other and one risks overwriting the more recent state. The mitigation is structural — every field in the contact model has a documented owner-of-truth (Dynamics owns identity fields, Customer.io owns engagement fields, lifecycle stage is bi-directional with the most recent score-driven change winning) and sync runs in the direction of the owner where one is named. Runtime race-condition handling — last-write-wins via timestamp comparison — is on the build list. The field-direction discipline keeps it manageable today.
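The owner-of-truth rule reduces to a small resolution function. Field names and the exact owner table here are illustrative; the last-write-wins branch is the runtime piece still on the build list:

```typescript
type Source = "d365" | "customerio";
type Owner = Source | "bidirectional";

// Documented owner-of-truth per field (abbreviated; field names illustrative).
const fieldOwners: Record<string, Owner> = {
  email: "d365",                    // identity fields: Dynamics owns
  engagement_score: "customerio",   // engagement fields: Customer.io owns
  lifecycle_stage: "bidirectional", // most recent change wins
};

interface FieldUpdate {
  source: Source;
  value: string;
  updatedAt: number; // epoch ms
}

// Resolve two near-simultaneous updates to the same field: the named owner
// always wins; bi-directional fields fall back to last-write-wins.
function resolve(field: string, a: FieldUpdate, b: FieldUpdate): FieldUpdate {
  const owner = fieldOwners[field];
  if (owner === "d365" || owner === "customerio") {
    return a.source === owner ? a : b;
  }
  return a.updatedAt >= b.updatedAt ? a : b; // timestamp comparison
}
```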
Schema drift. D365 custom fields get renamed. Customer.io attributes get retired. The cost of not catching that early is a silent data gap that nobody notices for a fortnight. We do not have automated schema validation yet — divergences are caught by the next workflow run that depends on the missing field, which is too slow. A nightly schema-validation workflow with alerting is on the build list.
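The planned nightly check is conceptually a set difference between the expected contact model (checked into version control) and what each platform actually reports. A sketch, with illustrative field names:

```typescript
// Expected contact-model fields per platform, versioned alongside the
// workflows (field names illustrative, not our production schema).
const expectedFields: Record<string, string[]> = {
  d365: ["emailaddress1", "pat_leadscore"],
  customerio: ["email", "lead_score", "lifecycle_stage"],
};

// Diff the expected schema against what the platform reports. A non-empty
// result is the alert payload for the nightly validation run.
function schemaDrift(platform: string, liveFields: string[]): string[] {
  const live = new Set(liveFields);
  return (expectedFields[platform] ?? []).filter((f) => !live.has(f));
}
```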
Observability of an invisible layer. A cognitive layer that runs unattended also fails unattended. n8n logs every workflow execution by default with full input/output payloads at each node, and the scoring workflow additionally logs Claude token usage and estimated cost per call. Latency logging and before-and-after writeback logging are on the build list. When something breaks today, the answer is usually in the n8n execution history; we want it surfaced in a single dashboard rather than dug out of execution logs by hand.
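The per-call cost estimate is a multiply-and-sum over the usage counts the API returns. The rates below are placeholders, not current Claude pricing — swap in the real per-token rates for the model in use:

```typescript
// Per-call usage record as captured by the scoring workflow (shape is an
// assumption; rates are illustrative placeholders, not current Claude rates).
interface ClaudeUsage {
  inputTokens: number;
  outputTokens: number;
}

const USD_PER_INPUT_TOKEN = 3 / 1_000_000;   // placeholder rate
const USD_PER_OUTPUT_TOKEN = 15 / 1_000_000; // placeholder rate

// Estimated cost of one call: the number logged alongside the payload.
function estimateCostUsd(u: ClaudeUsage): number {
  return u.inputTokens * USD_PER_INPUT_TOKEN + u.outputTokens * USD_PER_OUTPUT_TOKEN;
}
```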
Practical implications
Last week's piece argued that the cognitive work between platforms is the real bottleneck for SME teams. The payoff to that argument is this: the cognitive layer is now buildable for a couple of hundred dollars a month and a few weeks of engineering, where it used to require a six-figure marketing-ops hire or an enterprise iPaaS contract.
For a small studio, that changes which problems are tractable. The same shift will apply to any SME that runs a comparable tool count.
We will publish four more pieces in this series — the AI Basecamp landing-page teardown, the blog publishing pipeline, the lead-to-CRM sync workflow, and the governance failure modes — each one a single workflow inside the layer described above.
On the build list
To keep this honest: the cognitive layer described above is what we run today. Not every guardrail is in place yet. Three things on the build list, in priority order:
- Re-score hysteresis — stop lifecycle-stage flicker when a contact's score sits near a threshold
- Nightly schema-validation workflow with alerting — catch field drift before it shows up as missing data
- Latency and before-and-after writeback logging — observability the layer needs to scale beyond a small studio
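The hysteresis item is the simplest of the three to sketch: promote at one threshold, demote only below a lower one, and hold inside the band. Thresholds and stage names here are illustrative, not our production rubric:

```typescript
// Promote at 70, demote only below 60: the gap stops lifecycle-stage
// flicker when a score oscillates around a single threshold.
const PROMOTE_AT = 70;
const DEMOTE_BELOW = 60;

type Stage = "lead" | "mql";

function nextStage(current: Stage, score: number): Stage {
  if (current === "lead" && score >= PROMOTE_AT) return "mql";
  if (current === "mql" && score < DEMOTE_BELOW) return "lead";
  return current; // inside the band: hold the current stage
}
```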
We will publish each one as a teardown when it ships.
The architecture poster is yours to take. A one-page visual of the stack, the twenty-two workflows by category, and the field-level data model — bundled below. We share it because the easiest way to evaluate whether a studio has done the work is to read what they shipped.
If you are looking at the cognitive layer in your own stack — which workflows to run, which to retire, where Claude earns its keep and where it should not be near — we are happy to talk it through.