The AI Basecamp landing page is 60 words. The automated chain behind it is considerably more involved.
This is a walkthrough of what actually runs when the form submits — from browser event to qualified contact in the CRM — and where the gaps are.
The constraint
Running a considered B2B inbound flow without a dedicated sales development function means the work that would normally be handled by a sales development rep — identifying who came in, enriching their company data, scoring their fit, routing them to the right next step — has to happen automatically or not at all.
A human checking every download at the end of each day is not a process. It is a task that gets skipped.
The chain
Step 1: the browser fires the events
When a contact fills the download gate, the React component fires three things in sequence: a Customer.io identify call (which resolves the anonymous visitor to a known contact), a Customer.io track event with the asset slug, and conversion events to Google Ads and LinkedIn.
The download itself is delivered immediately from the browser — the contact gets the file before any of the downstream automation runs.
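The submit path can be sketched as follows. This is a minimal sketch, not the actual component: the names (`handleGateSubmit`, `EventSink`, `startDownload`) are illustrative, and the real Customer.io and ad-pixel calls are abstracted behind a small interface so the ordering is visible.

```typescript
// Illustrative sink over the real tracking calls (names assumed).
type EventSink = {
  identify: (traits: { email: string }) => void;                 // Customer.io identify
  track: (event: string, data: Record<string, string>) => void;  // Customer.io track
  adConversion: (network: "google" | "linkedin") => void;        // ad pixels
};

function handleGateSubmit(
  email: string,
  assetSlug: string,
  fileUrl: string,
  sinks: EventSink,
  startDownload: (url: string) => void
): void {
  // 1. Resolve the anonymous visitor to a known contact.
  sinks.identify({ email });
  // 2. Record which asset was requested.
  sinks.track("asset_downloaded", { asset: assetSlug });
  // 3. Fire the ad-network conversion events.
  sinks.adConversion("google");
  sinks.adConversion("linkedin");
  // The contact gets the file immediately, before any downstream automation runs.
  startDownload(fileUrl);
}
```

The point of the ordering is that delivery never waits on the chain: the download starts regardless of whether any downstream webhook responds.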
Step 2: Customer.io routes the contact
Customer.io receives the identify and track events and does three things: it sends the delivery confirmation email with the download link as a fallback, it notifies us that someone came in, and it fires a webhook to n8n to trigger the scoring pipeline.
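The webhook handoff can be sketched as a small parse step on the n8n side. The payload shape here is entirely assumed for illustration; the real field names depend on how the Customer.io webhook action is configured.

```typescript
// Assumed shape of the payload Customer.io posts to the n8n webhook
// (field names illustrative, not the actual configuration).
type ScoringTrigger = {
  email: string;     // identity resolved by the identify call
  event: string;     // e.g. "asset_downloaded"
  asset: string;     // asset slug from the track event
  timestamp: number; // unix seconds
};

// Validate the incoming body before the workflow does any work;
// a malformed payload returns null rather than scoring garbage.
function parseScoringTrigger(body: unknown): ScoringTrigger | null {
  const b = body as Partial<ScoringTrigger>;
  if (typeof b?.email !== "string" || typeof b?.asset !== "string") return null;
  return {
    email: b.email,
    event: b.event ?? "asset_downloaded",
    asset: b.asset,
    timestamp: b.timestamp ?? Math.floor(Date.now() / 1000),
  };
}
```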
Step 3: n8n scores the contact
The lead scoring workflow (described in the previous teardown) wakes on the webhook. It fetches the contact's CRM record and Customer.io engagement data, assembles a context payload, calls Claude, and parses a score from 0 to 100 based on firmographic fit, behavioural signals, and account intent.
The score and lifecycle stage write back to both the CRM and Customer.io in the same workflow run.
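The scoring step reduces to: assemble context, call the model, parse a bounded integer out of the reply. A sketch under assumed names (the real logic lives in the n8n workflow JSON in the blueprint below); the model call is injected so the parsing is visible on its own.

```typescript
// Context assembled from the CRM record and Customer.io engagement data.
type ScoreContext = {
  firmographics: Record<string, unknown>; // CRM record
  engagement: Record<string, unknown>;    // Customer.io activity
};

// Extract a 0-100 integer from the model's reply; reject anything
// out of range rather than silently clamping it.
function parseScore(reply: string): number | null {
  const m = reply.match(/\b(\d{1,3})\b/);
  if (!m) return null;
  const n = parseInt(m[1], 10);
  return n >= 0 && n <= 100 ? n : null;
}

async function scoreContact(
  ctx: ScoreContext,
  callModel: (prompt: string) => Promise<string>
): Promise<number | null> {
  const prompt =
    "Score this contact 0-100 on firmographic fit, behavioural " +
    "signals, and account intent:\n" + JSON.stringify(ctx);
  return parseScore(await callModel(prompt));
}
```

Rejecting unparseable or out-of-range replies (rather than defaulting to 0) matters here, because a silent 0 would quietly route a real contact into the nurture sequence.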
Step 4: enrichment runs in parallel
Apollo receives the contact's email via a separate n8n webhook and enriches the company record — employee count, industry, revenue range, tech stack. That data feeds into the next time the scoring workflow fires, which happens on a daily re-score for any contact enriched in the last seven days.
The first score (from step 3) runs on limited data. The enriched score the next day is more reliable.
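The daily re-score selection is a simple window filter. A sketch with assumed field names: any contact whose enrichment landed in the last seven days is picked up again.

```typescript
// Contact record as the re-score job sees it (field names assumed).
type Contact = { email: string; enrichedAt: number | null }; // unix seconds

const SEVEN_DAYS = 7 * 24 * 60 * 60;

// Select contacts enriched within the trailing seven-day window;
// contacts never enriched are left for the enrichment step itself.
function dueForRescore(contacts: Contact[], now: number): Contact[] {
  return contacts.filter(
    (c) => c.enrichedAt !== null && now - c.enrichedAt <= SEVEN_DAYS
  );
}
```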
Step 5: above threshold, a deal is created
If the score crosses 70, the ABM workflow fires. We are currently mid-migration from Dynamics 365 to Attio; in either case the workflow creates a CRM record in the pipeline at Stage: Qualified, assigns ownership, and queues an account summary.
Contacts scoring below 70 enter the follow-up sequence — a four-touch email journey over 14 days, described in the first teardown in this series.
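The routing decision itself is a single branch. The threshold, stage name, and nurture cadence come from the text above; the function and type names are illustrative.

```typescript
// Two possible routes out of scoring: a CRM deal for the ABM
// workflow, or the four-touch nurture sequence.
type Route =
  | { kind: "deal"; stage: "Qualified" }
  | { kind: "nurture"; touches: number; days: number };

const ABM_THRESHOLD = 70;

// Scores strictly above the threshold create a deal; everything
// else enters the follow-up journey.
function routeContact(score: number): Route {
  return score > ABM_THRESHOLD
    ? { kind: "deal", stage: "Qualified" }
    : { kind: "nurture", touches: 4, days: 14 };
}
```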
Where it breaks
Enrichment lag. Apollo enrichment typically completes within a few minutes but occasionally takes longer. When it lags, the first score fires on unenriched data — which tends to underweight firmographic fit and overweight behavioural signals. Contacts with strong engagement but weak firmographic data can score high on the first run and then score lower once enrichment lands. The daily re-score corrects this, but the first classification can be wrong.
Threshold calibration. Score > 70 as the ABM trigger was set empirically rather than from data. We have had a small number of contacts enter the deal pipeline who were clearly not ready — repeat visitors who scored high on intent signals but turned out to be researching rather than buying. We have not changed the threshold yet because we would rather err toward over-qualification than under, but the threshold will need revisiting once we have more conversion data.
The scoring model has no feedback loop yet. We log every score and outcome, but we have not yet built the loop that uses won and lost deals to recalibrate the rubric. That work is on the roadmap.
What this replaces
Before this chain existed, the process was: check downloads periodically, look up the contact manually, decide whether to follow up, send an email. At low volume that is fine. At higher volume it breaks.
The automated chain described above handles the same decisions — identify, enrich, score, route — without the volume constraint. The human involvement shifts from doing the work to reviewing the outputs and acting on the cases above threshold.
The inbound automation blueprint below includes the n8n workflow JSON for the scoring pipeline, the DownloadGateModal component, the Claude scoring prompt, and deployment notes covering the gaps described above.
If you are building something similar — or looking at how to make an inbound flow work without a dedicated sales development function — we are happy to talk through how this is set up and what we would do differently.