360Brew Explained: How LinkedIn's AI Decides Who Sees Your Outreach

If your connection acceptance rate dropped 30-50% in the last six months, you're not imagining it. And it's probably not your copy.
In late 2025, LinkedIn quietly began rolling out 360Brew — a 150-billion-parameter decoder-only foundation model built by the FAIT (Foundation AI Technologies) team to replace the fragmented stack of ranking systems that previously scored feeds, search, jobs, and outreach independently. By Q1 2026, it became the unified scoring layer across nearly every surface where one LinkedIn member encounters another.
Most coverage of 360Brew has focused on creators — Depth Score, dwell time, the death of broetry. But there's a much quieter story that founders and sales leaders need to understand: 360Brew also decides whether your outreach lands in someone's primary inbox, gets buried in "Other," or never surfaces at all. This post breaks down the mechanics.
What 360Brew actually is (and why it matters for outbound)
The original 360Brew arXiv paper published by LinkedIn's FAIT team is unambiguous on one point: the model is designed to rank across surfaces — feed, search, jobs, and People You May Know. It is not a feed-only system.
The legacy LinkedIn stack used dozens of narrow models, each trained on a specific surface with surface-specific signals. A connection request was scored by one system. Feed posts by another. InMail deliverability by a third. None of them shared context.
360Brew unifies this. It uses In-Context Learning (ICL) over a structured prompt that includes the sender's profile, recent activity, network graph position, and the message itself — then predicts the probability that the recipient will engage positively. That single probability score determines whether your outreach gets surfaced, throttled, or silently demoted.
The practical implication: your profile is now part of every message you send, whether you reference it or not.
The "Depth Score" equivalent for outreach
For feed content, Depth Score combines dwell time, comment quality, niche coherence, and creator authority. For outreach, 360Brew evaluates a similar bundle of signals — but optimized for one-to-one prediction rather than one-to-many distribution.
Based on the paper's described feature set and observable behavior since the rollout, the outreach scoring function appears to weigh:
- Sender profile coherence — do your headline, experience, and recent activity form a consistent topic identity?
- Semantic relevance to recipient — does the message body genuinely connect to the recipient's stated work, not just their job title?
- Network proximity and overlap — shared connections, shared groups, shared content engagement history
- Message pattern entropy — how similar is this message to other messages you've sent in the last 30 days?
- Recipient response history — does the recipient typically engage with cold outreach from people in your topic cluster?
A message that scores well on all five lands in the primary inbox with a notification. A message that fails on coherence or pattern entropy gets routed to "Other" or held entirely. There is no error message. You just see lower acceptance.
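None of these weights are published. As a rough intuition pump only, the five signals can be imagined as inputs to a simple logistic scorer. This is a toy sketch: 360Brew is a learned 150B-parameter model, not a hand-weighted formula, and every coefficient and threshold below is invented for illustration.

```python
import math

# Invented, illustrative weights. LinkedIn publishes no such coefficients,
# and the real model is learned end to end, not a linear formula.
WEIGHTS = {
    "profile_coherence": 1.4,
    "semantic_relevance": 1.6,
    "network_overlap": 1.1,
    "pattern_entropy": -1.8,   # similarity to your recent sends hurts the score
    "recipient_receptivity": 0.9,
}
BIAS = -2.0  # cold outreach starts below the surfacing threshold

def outreach_score(signals: dict) -> float:
    """Map signal values in [0, 1] to a toy engagement probability."""
    z = BIAS + sum(WEIGHTS[k] * signals[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # logistic squash to (0, 1)

# A coherent sender with a fresh, specific message...
strong = outreach_score({
    "profile_coherence": 0.9, "semantic_relevance": 0.9,
    "network_overlap": 0.6, "pattern_entropy": 0.1,
    "recipient_receptivity": 0.7,
})
# ...versus a generalist profile sending a well-worn template.
templated = outreach_score({
    "profile_coherence": 0.5, "semantic_relevance": 0.4,
    "network_overlap": 0.1, "pattern_entropy": 0.9,
    "recipient_receptivity": 0.5,
})
print(strong > templated)  # the coherent, low-entropy sender scores higher
```

The point of the sketch is the shape, not the numbers: identical copy produces very different scores depending on who sends it and how often they've sent something like it.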
Why your profile is now a credibility gate
This is the part most outbound playbooks haven't caught up to. Under the legacy system, a thin profile didn't really hurt your outreach — the message either got delivered or it didn't, and the recipient decided.
Under 360Brew, your profile is an input to the model before delivery. The system reads your headline, your last 90 days of posts and comments, your skills, your endorsements, and the topical consistency of your network — and uses that representation to decide whether you're a credible sender to this specific recipient on this specific topic.
This is what practitioners are calling topic DNA. If your headline says "Helping B2B teams scale revenue" and your recent activity is empty, your topic DNA is undefined. The model has nothing to anchor your credibility to. Your eligible recipient graph shrinks accordingly.
A founder who posts twice a week about supply chain software, comments on supply chain content, and connects with supply chain operators has a sharp topic DNA. Their outreach to a supply chain VP is scored as relevant by default. The same outreach from a generalist profile gets throttled.
This dynamic is also driving the March 2026 Authenticity Update restriction triggers — accounts with weak topic DNA but high outbound volume are flagged disproportionately.
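As a toy illustration of the overlap idea, a Jaccard score between headline terms and recent-activity terms captures the sharp-versus-undefined distinction. The real system presumably operates in embedding space; the function and term sets below are invented for illustration.

```python
def topic_coherence(headline_terms: set, activity_terms: set) -> float:
    """Jaccard overlap as a crude proxy for 'topic DNA' sharpness.
    Toy stand-in: the real model compares learned representations,
    not raw keyword sets."""
    if not headline_terms or not activity_terms:
        return 0.0  # empty recent activity -> undefined topic DNA
    shared = headline_terms & activity_terms
    return len(shared) / len(headline_terms | activity_terms)

# Founder who posts and comments in the same niche their headline claims:
focused = topic_coherence(
    {"supply", "chain", "software"},
    {"supply", "chain", "logistics", "software"},
)
# "Helping B2B teams scale revenue" headline with no recent activity:
generalist = topic_coherence(
    {"b2b", "revenue", "scale"},
    set(),
)
print(focused, generalist)  # the focused profile scores high, the empty one zero
```

In this framing, "generalist positioning shrinks your eligible recipient graph" just means the coherence term contributes nothing, so fewer recipient matches clear the surfacing threshold.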
Why generic AI-personalized DMs are getting deprioritized
The pattern entropy signal is worth dwelling on, because it explains the reach cliff that hit AI-assisted outbound tools in early 2026.
360Brew sees every message you send. It also sees every message every other sender sends. With 150B parameters and ICL across the full conversation graph, it's trivial for the model to identify structural templates — even when the surface words differ.
A message that opens with "Saw your post about [topic] — really resonated with [vague reason]" followed by a pivot to a pitch is structurally identical to ten thousand other messages, regardless of which specific post is referenced. The model learns this template signature. Senders who lean on it heavily see their outreach scores collapse.
This matches what we documented in our Q1 2026 A/B data on AI opener deprioritization: formulaic personalization performs worse than no personalization at all, because the model treats it as a fingerprint of low-effort outbound.
Richard van der Blom's Algorithm Insights team flagged this in their February 2026 report as well — reply rates on "templated personalization" patterns dropped roughly 40-60% between Q3 2025 and Q1 2026, while genuinely contextual messages held steady.
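To see why swapping in different variables doesn't change the structural fingerprint, here is a crude sketch: mask the variable slots of two "personalized" messages and compare what remains. The regex masking is an invented, simplistic stand-in for pattern recognition a large model does implicitly across millions of messages.

```python
import re

def template_signature(message: str) -> str:
    """Collapse the variable slots of a message to expose its skeleton.
    Invented heuristic: mask capitalized tokens (names, companies) and
    quoted fragments (post titles), then lowercase."""
    s = re.sub(r"\b[A-Z][a-z]+\b", "<VAR>", message)
    s = re.sub(r'"[^"]*"', "<VAR>", s)
    return s.lower()

a = 'Saw your post about "machine learning" and it really resonated with Maria. Quick pitch:'
b = 'Saw your post about "supply chains" and it really resonated with Dev. Quick pitch:'
print(template_signature(a) == template_signature(b))  # True: same skeleton
```

Two messages that look personalized to a human reader reduce to the exact same skeleton, which is the "fingerprint of low-effort outbound" the A/B data points at.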
How network overlap quietly throttles cold outreach
The LinkedIn Economic Graph has always tracked relationships between members, companies, schools, and skills. 360Brew now uses that graph as a primary input to outreach scoring.
A cold message from someone with zero shared connections, zero shared group memberships, and no overlapping engagement history starts with a heavy probability discount. The model has no positive signal to counterweight the cold-start risk.
This is why warm-up sequences (engaging with a prospect's content for 1-2 weeks before reaching out) now have outsized impact. They aren't just psychological — they create graph-level signals the model can read. A like, a thoughtful comment, and a profile view all increase the prior probability that your subsequent message will be welcome.
The practical math: industry studies show acceptance rates on fully cold connection requests have dropped from roughly 35-40% in 2023 to 18-22% in early 2026. Acceptance rates on requests preceded by 7+ days of light engagement have held at 38-45%.
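Using the midpoints of those ranges, the back-of-the-envelope math per 100 requests looks like this (illustrative arithmetic only; real rates vary by niche and sender history):

```python
# Midpoints of the acceptance ranges quoted above.
cold_rate = 0.20     # midpoint of 18-22% for fully cold requests (early 2026)
warmed_rate = 0.415  # midpoint of 38-45% after 7+ days of light engagement

requests = 100
print(round(requests * cold_rate))    # ~20 accepted connections cold
print(round(requests * warmed_rate))  # ~42 accepted after warm-up
print(warmed_rate / cold_rate)        # roughly a 2x lift per request sent
```

Put differently: a week of light engagement roughly doubles the yield of the same hundred sends, before you change a word of copy.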
What "profile authority" looks like in practice
LinkedIn Profile Authority isn't a published metric. But based on observable scoring behavior, it appears to be a composite of:
- Topic consistency across headline, about section, experience, and last 90 days of activity
- Engagement quality on your own content — comments-to-impressions ratio, reply-to-comment ratio
- Reciprocity — do your past outreach recipients accept, reply, and stay connected, or do they ignore and dismiss?
- Endorsement and recommendation density in your declared topic areas
- Posting cadence — not volume, but consistency over rolling 60-day windows
Profiles that score high on these dimensions effectively get a pre-approved badge from the model. Their outreach surfaces in primary inboxes with notifications. Profiles that score low — even with strong copy — get routed to "Other" where open rates collapse to single digits.
The RAIN Group's 2024 buyer research found that 82% of B2B buyers will accept a meeting with a seller who reaches out — but only when the seller is perceived as credible. 360Brew is now operationalizing that credibility judgment at the model layer, before the buyer ever sees the message.
Concrete fixes for the outreach reach cliff
If your numbers are down, work the inputs the model actually reads. In rough order of impact:
- Tighten your topic DNA. Your headline, about section, and recent posts should reinforce one specific problem you solve for one specific buyer. Generalist positioning shrinks your eligible recipient graph.
- Post or comment substantively at least 2-3 times per week in your declared topic area. The model reads this as signal that you're a credible voice on the topic, not just a sender.
- Warm before you message. A profile view, a thoughtful comment, and a content engagement before sending a connection request measurably lifts acceptance — because the graph signal is now part of the score.
- Vary your message structures, not just your variables. If every outreach has the same opening pattern, swapping in different first names and company names won't help. Rewrite the structural template every 50-100 sends.
- Match message specificity to the recipient's actual recent activity — not their job title. A reference to something they posted last week scores fundamentally differently than a reference to their company's industry.
- Audit your reciprocity signals. If you're sending high volume with low acceptance, the model is learning that your outreach is unwelcome. Slow down, raise quality, let the historical signal recover before scaling again.
For message-level improvements, our breakdown of 10 opener templates hitting 30%+ reply rates walks through specific structural patterns that have held up under the new scoring regime.
Key takeaways
- 360Brew is a unified ranking model across feed, search, jobs, and outreach — not a feed-only system. Every connection request and InMail is scored before delivery.
- Your profile is now an input to every message you send. Weak topic DNA shrinks your eligible recipient graph, regardless of message quality.
- Pattern entropy is the killer signal for AI-assisted outbound. Templated personalization is identifiable at the structural level and gets deprioritized accordingly.
- Graph-level warm-up creates measurable lift because the model reads engagement history as a prior. Cold acceptance rates have roughly halved since 2023; warmed acceptance has held steady.
- Fix the inputs the model reads: sharpen your topic DNA, post consistently in your domain, warm before messaging, and vary message structures — not just variables.