What Makes a Press Release Get Cited by ChatGPT in 2026?
By Daniel Grainger, founder of Ranking Atlas
Press releases now compete for AI citation, not just journalist attention.
According to Muck Rack's December 2025 "What Is AI Reading?" report, which analysed more than one million AI citations across ChatGPT, Claude, Gemini, and Perplexity, press release citations grew fivefold between July and December 2025.
The releases that won those citations share a structure most companies don't write to.
This piece walks through what changed, where press releases actually sit in the wider citation hierarchy, and what separates a cited release from one that quietly gets ignored.
Key takeaways:
- Press release citations in AI tools grew 5x in the second half of 2025 (Muck Rack)
- Earned media drives 82% of all AI citations. Press releases are a sub-component, not a substitute.
- Cited releases contain 2x more statistics and 2.5x more bullet points than non-cited ones (Muck Rack)
- Half of all AI citations are from content published within the last 11 months. Recency compounds.
- Press releases earn citation durability. Editorial coverage earns citation gravity. Both matter, in different ways.
Why Are Press Releases Suddenly Getting Cited By ChatGPT?
Press releases used to be a backlink tactic. You distributed via wire, picked up syndication on Yahoo Finance and MarketWatch, collected a few referring domains, called it earned media. SEO gravity, not editorial gravity.
That changed in the second half of 2025. According to Muck Rack's analysis of more than one million AI citations, press release citations across ChatGPT, Claude, Gemini, and Perplexity grew fivefold between July and December 2025. The growth was concentrated in ChatGPT and Gemini specifically. Wire-distributed releases rose from 0.2% to 1% of all AI citations in that window. Including syndicated placements on Yahoo Finance, news aggregators, and financial portals, press release citations grew from roughly 1.2% to 6%.
But the headline number obscures the actual story. A separate analysis from almcorp.com, looking at over four million AI citations, found that syndicated press releases on platforms like Yahoo Finance and MSN still account for just 0.04% of all citations. PRNewswire-direct citations sit at 0.21%. The 5x growth is real. The base rate it grew from is small.
What this means in practice: press releases are now a citable category, but they aren't the citable category. Earned media, journalist-written coverage, accounts for 82% of all AI citations. Press releases sit inside that 82% as one input. They don't replace it.
Where Do Press Releases Sit In The Citation Hierarchy?
This is the framework most "how to write a press release for AI" guides skip, and it's the one that determines whether the time you spend writing one is worth it.
At Ranking Atlas we classify AI citations into three layers: syndication, editorial, and entity. They are not interchangeable. They compound differently. They decay differently. Every press release operates at one or more of them, and the layer you reach is the variable that determines whether the work pays back.
Layer 1: syndication citations.
Your release runs on the wire and gets indexed across syndicating outlets. AI can cite the syndicated version (Yahoo Finance /news/, MarketWatch, Benzinga). Citation strength is moderate, citation durability is weeks to months. The release fades from AI memory as fresher content displaces it. Half of all AI citations come from content less than 11 months old.
Layer 2: editorial citations.
A journalist reads the release, finds it useful, writes their own original story citing the announcement. The journalist's article, not your release, becomes the cited source. Citation strength is high, citation durability is years. AI tools weight independent journalism more heavily than wire copy because the editorial layer signals validation.
Layer 3: entity citations.
Sustained coverage builds a recognised brand entity in AI training data. Once you're an entity AI knows about, you get cited even without a specific recent placement triggering it. Citation strength is highest, citation durability is the longest. This is the layer most companies haven't earned yet.
A well-distributed press release operates at Layer 1. A press release that earns editorial pickup operates at Layers 1 and 2. A company that has been earning editorial coverage for 18-24 months operates at all three.
| Citation Layer | Strength | Durability | What Drives It |
|---|---|---|---|
| Syndication | Moderate | Weeks to months | Wire distribution, structure, recency |
| Editorial | High | Years | Journalist pickup, story angle, source authority |
| Entity | Highest | Compounds indefinitely | Sustained earned coverage at Layers 1 and 2 |
A press release on its own is a Layer 1 asset with a six-month decay curve. A release converted into editorial coverage is a Layer 2 asset with a multi-year compound. The brands winning AI citation in 2026 are using press releases as the trigger for editorial work, not the substitute for it.
The mistake most companies make is writing every press release as a Layer 1 announcement and never doing the work that converts it to Layer 2. Layer 1 alone produces a small, decaying citation footprint. Pair it with targeted journalist outreach and the same release can earn editorial coverage that compounds for years.
What Does A Press Release That Gets Cited Actually Look Like?
Muck Rack's research on the structural traits of cited releases is the most useful publicly available data on this. Cited releases contain 2x more statistics than non-cited releases. They contain 2.5x more bullet points. They use a 30% higher rate of objective sentences and a measurably higher frequency of action verbs and data-rich claims.
The pattern across the cited set:
- A specific number in the headline or first paragraph
- A dateline that's accurate and current. Recency is one of the strongest signals.
- Bullet-formatted data points rather than narrative paragraphs
- Quotes attributed to a named executive with title, offering perspective rather than rephrasing the headline
- A boilerplate that's three to four sentences of factual company description, not marketing copy
- Hyperlinks to primary sources where claims are made
The pattern across releases that get retrieved but not cited:
- Headlines led with company name rather than the specific news
- Adjective-heavy lead paragraphs ("excited," "thrilled," "innovative," "leading")
- Generic boilerplate that doesn't differentiate the company
- Quotes that read as marketing-approved messaging rather than human commentary
- No specific numbers or comparison points
- Buried claims that AI can't cleanly extract as standalone facts
The structural difference matters because AI extracts at the paragraph level. A release written as a 600-word narrative gives AI nothing to pull cleanly. A release written as discrete factual claims with bullet-formatted data points gives AI extractable chunks.
"Acme Inc., a leading provider of innovative cloud security solutions, is excited to announce the launch of its transformative new platform that empowers enterprise teams to defend against next-generation threats."
Six adjectives. Zero numbers. No extractable claim. Retrieved, not cited.
"Acme Inc. today released a cloud security platform that reduced average incident response time from 47 minutes to 9 minutes across 12 enterprise pilot deployments between June and November 2025."
Specific number. Comparison. Time window. Sample size. Extractable as a standalone fact.
Both examples are illustrative. The second version answers the prompt "how do enterprises reduce incident response time" as a standalone fact, with no further context required.
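The before/after contrast can be approximated mechanically. A minimal sketch, assuming a hand-picked list of promotional words; the list and scoring are illustrative, not Muck Rack's actual feature set:

```python
import re

# Words treated as promotional filler -- an illustrative, hand-picked list,
# not taken from Muck Rack's study.
PROMO_WORDS = {"leading", "innovative", "transformative", "excited",
               "thrilled", "best-in-class", "empowers"}

def extractability_score(sentence: str) -> int:
    """Crude count of 'extractable claim' signals: numbers minus promo words."""
    numbers = len(re.findall(r"\d+", sentence))
    promo = sum(1 for w in re.findall(r"[a-z-]+", sentence.lower())
                if w in PROMO_WORDS)
    return numbers - promo

before = ("Acme Inc., a leading provider of innovative cloud security "
          "solutions, is excited to announce the launch of its "
          "transformative new platform.")
after = ("Acme Inc. today released a cloud security platform that reduced "
         "average incident response time from 47 minutes to 9 minutes "
         "across 12 enterprise pilot deployments.")

print(extractability_score(before))  # -4: four promo words, no numbers
print(extractability_score(after))   # 3: three numbers, no filler
```

A negative score is a quick flag that a draft leans on framing rather than facts; it's a pre-flight check, not a predictor of citation.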
Which Press Release Types Actually Drive AI Citation In B2B SaaS?
Not all release types perform equally. Across the eight categories most agencies template (event, product launch, funding, executive hire, partnership, award, earnings, rebrand), the citation distribution is uneven.
Highest citation likelihood for B2B SaaS:
- Funding rounds with named investors and specific use of capital. AI cites these as evidence of company traction in category overviews.
- Product launches anchored to a measurable customer outcome. "Reduces X process from Y weeks to Z days" gets cited; "introduces new platform features" doesn't.
- Earnings or operating milestones with specific year-over-year comparison data. Public companies have an advantage here, but private SaaS companies that release ARR milestones with actual numbers earn citations too.
Moderate citation likelihood:
- Partnership announcements, only when the partnership enables a specific customer outcome rather than just announcing the relationship.
- Executive hires, citable when the hire signals strategic direction (a CRO arriving from a company known for scaling, for example) rather than just filling a title.
Low citation likelihood:
- Award and recognition releases unless the granting body is itself a recognised authority and the criteria are specific.
- Rebranding announcements unless paired with substantive strategic change. Logo refreshes don't get cited.
- Event announcements decay quickly, citable in the week before the event, then dormant.
The release types that compound for B2B SaaS in fintech, cybersecurity, data infrastructure, and privacy are the ones that contribute factual claims AI can use across multiple future queries. A funding round release becomes citation material whenever someone asks "who are the leading [category] companies." A product launch release with specific outcome data becomes citation material whenever someone asks "how do companies solve [problem]." Awards and rebrands rarely get a second citation life.
When Should You Send A Press Release For Maximum Pickup?
Timing matters less than people claim, but it isn't zero. Prowly's analysis of 55,470 press releases gives the cleanest available data.
- Thursday: ~27% open rate, highest of the week
- Tuesday: ~19% open rate, second highest
- Wednesday: ~15% open rate
- Friday: ~15% open rate on paper, but a near-dead zone for pickup; coverage rarely runs before the weekend
- Weekend: ~2% open rate, don't bother
- Monday: low open rate, and journalists return to inboxes stacked with 1,000+ unread emails; your release is buried before you start
The 10am-2pm window captures roughly 33% of all email opens. Before 10am you're competing with the overnight backlog. After 2pm, content calendars are set and journalists are filing rather than scanning.
One operational adjustment most distribution tools miss: schedule for an odd time. Most automated tools fire at 9:00, 9:30, 10:00. Your release lands in a cluster of others. Sending at 10:17 or 10:43 ET breaks you out of the cluster and into a quieter inbox window.
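The odd-time scheduling idea can be sketched in a few lines. The 10am-2pm ET window, the Tue-Thu filter, and the round-minute cluster come from the figures in this section; the function name and the random selection are assumptions for illustration:

```python
import random
from datetime import datetime, time, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+

NY = ZoneInfo("America/New_York")
ROUND_MINUTES = {0, 15, 30, 45}  # the cluster most automated tools fire on

def next_send_time(now: datetime) -> datetime:
    """Pick a random off-cluster minute inside the 10:00-14:00 ET window,
    on the next Tuesday, Wednesday, or Thursday."""
    day = now.astimezone(NY)
    if day.hour >= 13:                     # too late today, roll to tomorrow
        day += timedelta(days=1)
    while day.weekday() not in {1, 2, 3}:  # Tue/Wed/Thu only, per open-rate data
        day += timedelta(days=1)
    hour = random.randint(10, 13)
    minute = random.choice([m for m in range(60) if m not in ROUND_MINUTES])
    return day.replace(hour=hour, minute=minute, second=0, microsecond=0)
```

A sketch only: it doesn't guard against picking an hour already past on the current day, and real scheduling belongs in your distribution tool, not a script.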
For US B2B SaaS audiences specifically, calibrate everything to US Eastern Time. New York, DC, and Boston cover the largest concentration of relevant journalists, even for tech companies based on the West Coast.
The honest caveat: a perfectly timed mediocre release still gets ignored. Timing is the marginal advantage that breaks ties between two equally good releases, not a substitute for newsworthiness or structure.
What's The Most Common Reason Good Press Releases Don't Get Cited?
Promotional language.
Across the cited and non-cited release sets in Muck Rack's research, the strongest single differentiator wasn't word count, distribution channel, or even data density taken alone. It was the rate of objective versus promotional sentences. Cited releases use a 30% higher rate of objective claims. Non-cited releases lean on adjectives, brand-led framing, and self-evaluative language ("the leading provider," "innovative," "best-in-class," "transforms the industry").
AI tools appear to be detecting promotional intent and deprioritising it for citation. The release can be technically well-structured, well-distributed, and well-timed, and still get retrieved without being cited because the language reads as marketing rather than reportage.
The fix is easier to describe than to execute. Lead every claim with the fact, not the framing. Replace adjectives with numbers. Replace self-evaluative language with attributable third-party recognition. Replace "we are excited to announce" openers with the actual news. The release that reads as if a journalist could file it without changing a sentence is the release AI tools cite.
Pro tip
Run a draft release through a "delete every adjective" pass before distribution. If the news still stands without "leading," "innovative," "transformative," or "best-in-class," you have a release with a chance of citation. If the news collapses without them, you don't have news.
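The adjective pass is easy to automate as a first filter. A sketch, assuming an illustrative filler-word list you would extend for your own drafts:

```python
import re

# Hand-picked filler adjectives -- an assumption for illustration, extend as needed.
FILLER = ["leading", "innovative", "transformative", "best-in-class",
          "cutting-edge", "world-class"]

def strip_filler(text: str) -> str:
    """Delete each filler adjective, then tidy the leftover spacing."""
    pattern = r"\b(" + "|".join(re.escape(w) for w in FILLER) + r")\b"
    stripped = re.sub(pattern, "", text, flags=re.IGNORECASE)
    return re.sub(r"\s{2,}", " ", stripped).strip()

draft = "Acme, a leading provider of innovative security tools, raised $20M."
print(strip_filler(draft))
# Acme, a provider of security tools, raised $20M.
```

If the stripped version still carries the news, the release has a chance; if it collapses into nothing, the draft had framing, not facts.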
How Do You Build A Press Release Strategy Around Citation Durability?
The companies winning AI citations in 2026 aren't running press release programmes. They're running citation programmes where press releases are one tactic.
The pattern across the brands earning durable AI visibility:
A baseline cadence of well-structured press releases, distributed through wire services that get indexed by the right outlets. This is Layer 1, moderate strength, decaying durability. It maintains a citation footprint but doesn't grow one.
On top of that, deliberate journalist outreach for the announcements that warrant it. Funding rounds, product launches with substantive customer outcomes, and operating milestones get a personalised pitch to 5-15 journalists covering the category beat. The goal isn't wire pickup. The goal is converting the release into editorial coverage that operates at Layer 2.
On top of that, sustained editorial presence between announcements, bylined contributions, expert commentary in others' coverage, podcast and analyst interviews. This is what builds the entity recognition AI uses for Layer 3 citations.
Sustain all three motions for two years and you build the Layer 3 entity that gets cited even when there's no recent news to anchor on. The common failure is stopping at the first motion: wire distribution alone produces a small citation footprint that decays as fresher content displaces it. The layers above it are what turn a release from a six-month asset into a compounding one.
FAQ
How Long Should A Press Release Be In 2026?
400-600 words for most release types. AI extracts at the paragraph level, so additional length doesn't increase citation likelihood and often dilutes the extractable claims. Earnings and regulatory filings are exceptions.
Do AI Tools Cite Press Releases Differently From Regular Articles?
Yes. Press releases are labelled as company-provided content by Perplexity. ChatGPT and Gemini cite them more selectively, with confirmed citation behaviour primarily for high-authority distribution paths like Yahoo Finance /news/. The same content distributed across multiple paths can get cited by some platforms and ignored by others.
Which Press Release Distribution Wires Get Cited Most?
Wire-distributed releases account for roughly 1% of all AI citations after the second half of 2025 growth (Muck Rack). PRNewswire and GlobeNewswire have the strongest citation footprint among major wires, primarily because their syndication paths reach the financial portals AI tools weight heavily.
Can A Press Release Alone Build AI Citation Visibility?
No. Earned media drives 82% of AI citations. Press releases sit inside that 82% as one signal. They contribute to citation footprint but don't, by themselves, build the durable visibility that compounds over years. The brands earning sustained AI citation use press releases as the trigger for editorial coverage, not the replacement for it.
How Quickly Do AI Tools Cite A New Press Release?
The highest citation rate occurs within the first seven days of publication (Muck Rack). Half of all citations come from content less than 11 months old. After 12 months, citation rate drops sharply. Recency compounds, and older releases without follow-up coverage fade out of AI memory.
Turn announcements into editorial coverage AI cites.
Fixed-price campaigns. Guaranteed placements on authoritative publications. Press releases are the trigger. Editorial coverage is the asset.
Start a Campaign – $3.5K