This is the third dispatch in a series. The first piece introduced the 60-tag, 14-category audit modern sites need in 2026 and named E-E-A-T as one of the most under-implemented categories. The second piece got tactical about FAQ, HowTo, and Speakable schema — the markup that makes your content extractable. This piece covers the missing third leg: the trust signals that decide whether an answer engine, having extracted your content, actually trusts it enough to cite.

Schema gets you to the door. E-E-A-T gets you through it.

Why E-E-A-T Stopped Being a Google-Only Signal

E-E-A-T started life inside Google's Search Quality Rater Guidelines as a heuristic for human evaluators — Experience, Expertise, Authoritativeness, Trustworthiness. The concept was originally meant to help raters spot low-quality content that ranking signals had missed. For most of its life it was treated as a Google-specific thing, only loosely connected to anything machine-readable.

That is no longer true. In 2026, every major answer engine — ChatGPT, Claude, Perplexity, Google AI Overviews — independently converged on the same four trust signals when deciding which page to cite. They didn't coordinate. They didn't have to. The reason is mechanical, not ideological: every wrong citation is reputationally expensive for the answer engine itself. A model that cites a confidently stated lie loses user trust faster than one that cites cautiously. So they all over-weight provenance.

The result is that the same E-E-A-T signals Google's quality raters were trained to look for — clear authorship, verifiable credentials, organizational authority, freshness, source attribution — are now machine-extractable signals every answer engine reads off your page. The good news: you can mark them up explicitly, and when you do, citations follow.

The Provenance Premium

Two pages with identical answers. One has a clear author byline, organizational publisher, recent dateModified, and outbound citations. The other is anonymous, undated, and unlinked. The answer engine will cite the first one almost every time. The content didn't win — the provenance did.

The Four Pillars, Reframed for AEO

Each letter of E-E-A-T maps to a different class of signal, and each class needs different markup. As orientation before we get into code: Experience is first-hand evidence of having done the thing, Expertise is verifiable credentials, Authoritativeness is the cross-platform entity graph, and Trustworthiness is freshness, sources, and contactability.

The trap is treating these as an undifferentiated blob of "quality." They aren't. Each pillar is independently scored. A page can have brilliant Expertise markup and zero Authoritativeness signals and still get skipped. Cover all four, or you have a hole.

Experience — Proving You Actually Did the Thing

Experience is the newest of the four pillars and the one most sites get wrong. It's not credentials. Credentials live under Expertise. Experience is the first-hand, "I was there, I did it, here's what happened" signal that separates a practitioner from a content marketer summarizing other people's work.

Most of the signals an answer engine looks for here are content patterns, not markup: first-person accounts, original data, specific and verifiable outcomes. But two of them — author experience claims and reviewedBy attribution — have direct JSON-LD support:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "How We Cut Build Times by 73% on a Monorepo",
  "author": {
    "@type": "Person",
    "@id": "https://example.com/team/jane-smith#person",
    "name": "Jane Smith",
    "jobTitle": "Principal Engineer",
    "knowsAbout": [
      "Continuous Integration",
      "Bazel",
      "Monorepo tooling"
    ],
    "description": "Jane has shipped CI infrastructure at three production-scale monorepos since 2019."
  },
  "reviewedBy": {
    "@type": "Person",
    "name": "Marcus Chen",
    "jobTitle": "Staff DevOps Engineer",
    "knowsAbout": ["Build systems", "Distributed caching"]
  }
}
</script>

Two things are doing real work here. knowsAbout on the author Person tells the answer engine which topics this byline carries authority on. reviewedBy introduces a second qualified human into the trust graph, which materially raises the citation probability for technical content.

Expertise — Credentials AI Models Recognize

Expertise is where author Person schema earns its keep. Every byline on your site should resolve to a Person entity with structured credentials. Not a string in a byline field. A real Person with a stable @id, an about page, education, occupation, and credential markup.

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Person",
  "@id": "https://example.com/team/jane-smith#person",
  "name": "Jane Smith",
  "url": "https://example.com/team/jane-smith",
  "image": "https://example.com/images/team/jane-smith.jpg",
  "jobTitle": "Principal Engineer",
  "worksFor": {
    "@type": "Organization",
    "@id": "https://example.com/#organization"
  },
  "alumniOf": {
    "@type": "EducationalOrganization",
    "name": "Carnegie Mellon University"
  },
  "hasCredential": {
    "@type": "EducationalOccupationalCredential",
    "credentialCategory": "degree",
    "educationalLevel": "Master's degree",
    "recognizedBy": {
      "@type": "EducationalOrganization",
      "name": "Carnegie Mellon University"
    }
  },
  "knowsAbout": [
    "Continuous Integration",
    "Bazel",
    "Monorepo tooling",
    "Distributed builds"
  ],
  "sameAs": [
    "https://www.linkedin.com/in/janesmith/",
    "https://github.com/janesmith",
    "https://orcid.org/0000-0002-1825-0097"
  ]
}
</script>

The structure looks heavy because it is — but it's also one-time work per author. Once a Person entity exists, every article that author writes references it via @id, and the entity accumulates authority across the corpus. A new article by Jane Smith inherits all of Jane Smith's prior authority because the answer engine can resolve the @id to the same entity it already trusts.

The non-obvious requirement: every author Person entity should have a real, browsable about page at the URL it claims. Answer engines deduplicate Person entities by @id, but they verify by fetching the URL. An @id that 404s is worse than no @id at all.
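That verification step can be sketched in a few lines: extract every JSON-LD block from the fetched about page and check that one of them is a Person carrying the expected @id. The helper names are ours and the inline HTML stands in for a fetched page; this is a sketch of the check, not how any particular engine implements it.

```python
import json
import re

def extract_jsonld(html: str) -> list[dict]:
    """Pull every application/ld+json block out of an HTML page."""
    pattern = r'<script type="application/ld\+json">(.*?)</script>'
    return [json.loads(m) for m in re.findall(pattern, html, re.DOTALL)]

def person_id_resolves(html: str, person_id: str) -> bool:
    """True if the page carries a Person entity with the expected @id."""
    return any(
        block.get("@type") == "Person" and block.get("@id") == person_id
        for block in extract_jsonld(html)
    )

# Stand-in for the fetched about page; in practice you would GET the
# @id's base URL with any HTTP client before running the check.
about_page = '''<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Person",
 "@id": "https://example.com/team/jane-smith#person", "name": "Jane Smith"}
</script>
</head><body>About Jane</body></html>'''

print(person_id_resolves(about_page, "https://example.com/team/jane-smith#person"))
```

Running the same check against every @id your articles reference is a cheap way to catch orphan entities before an answer engine does.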

Authoritativeness — Building the Entity Graph

Authoritativeness is about the network. An anonymous page is hard to trust. A page from an organization with a verifiable presence across the open web is much easier. The mechanism that lets answer engines verify "is this organization who they claim to be" is sameAs.

sameAs is the single highest-leverage authoritativeness tag. Every Organization and every Person on your site should have a sameAs array that links out to every place that entity exists on the open web — LinkedIn, GitHub, Wikidata, ORCID, X, Mastodon, the company's own About page. The more independent confirmations, the stronger the entity claim.

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "@id": "https://example.com/#organization",
  "name": "Example Engineering",
  "url": "https://example.com",
  "logo": {
    "@type": "ImageObject",
    "url": "https://example.com/logo.png",
    "width": 512,
    "height": 512
  },
  "foundingDate": "2019-03-12",
  "founder": {
    "@type": "Person",
    "@id": "https://example.com/team/jane-smith#person"
  },
  "sameAs": [
    "https://www.linkedin.com/company/example-engineering/",
    "https://github.com/example-engineering",
    "https://www.wikidata.org/wiki/Q1234567",
    "https://en.wikipedia.org/wiki/Example_Engineering",
    "https://x.com/exampleeng",
    "https://mastodon.social/@example"
  ],
  "contactPoint": {
    "@type": "ContactPoint",
    "contactType": "customer service",
    "email": "hello@example.com",
    "areaServed": "Worldwide"
  }
}
</script>

Three properties carry the real weight. @id gives the organization a stable URI other schema blocks can reference. sameAs creates the cross-platform identity graph. contactPoint proves there's a real organization behind the markup that can be reached.

Wikidata and Wikipedia entries are the highest-confidence sameAs targets when you can earn them, because they're curated by independent editors. If your organization has a Wikidata entry — even a stub — link to it. If it doesn't, neither does your competitor; that's an opportunity, not a problem.

Trustworthiness — Freshness, Sources, and the Contact Layer

Trustworthiness is the most measurable of the four pillars. Three signals do most of the work:

Freshness. A current dateModified is the single biggest trust signal in 2026. Undated content is treated as stale by default, and stale content is deprioritized for citation. Answer engines have learned the hard way that citing outdated information makes them look bad, and they err toward recently modified sources.

Source attribution. The citation array on Article schema lets you explicitly enumerate the sources backing your claims. This is one of the most under-used Article properties, and it's heavily weighted by Claude and Perplexity in particular.

Contact transparency. A real contactPoint, a real About page, a real privacy policy. Answer engines verify these exist before fully trusting an entity.

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Distributed Build Caching: Lessons from 2 Years in Production",
  "datePublished": "2025-11-04",
  "dateModified": "2026-04-22T09:30:00.000Z",
  "author": { "@id": "https://example.com/team/jane-smith#person" },
  "publisher": { "@id": "https://example.com/#organization" },
  "citation": [
    {
      "@type": "ScholarlyArticle",
      "name": "A Large-Scale Study of Build System Performance",
      "url": "https://dl.acm.org/doi/10.1145/3377816.3377823",
      "datePublished": "2020-05-23"
    },
    {
      "@type": "WebPage",
      "name": "Bazel Remote Cache Architecture",
      "url": "https://bazel.build/remote/caching"
    }
  ]
}
</script>

The citation array is doing double duty. It's a trust signal for the answer engine — "this author cites their sources" — and it's a recommendation for the answer engine to follow the citations and verify your claims. Pages with citation arrays measurably outperform pages without them in citation rate, because the answer engine can confirm the underlying claims before quoting you.
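Schema.org's citation property accepts either bare URL strings or typed objects such as ScholarlyArticle and WebPage, and both forms appear in this piece. A small normalizer (a hypothetical helper; the name is ours) that upgrades bare URLs to typed entries so every citation carries at least a @type and url:

```python
import json

def normalize_citations(sources):
    """Upgrade bare URL strings to typed WebPage citations; pass
    already-typed dicts (ScholarlyArticle, WebPage, ...) through."""
    return [
        {"@type": "WebPage", "url": src} if isinstance(src, str) else src
        for src in sources
    ]

citation = normalize_citations([
    "https://bazel.build/remote/caching",
    {"@type": "ScholarlyArticle",
     "name": "A Large-Scale Study of Build System Performance",
     "url": "https://dl.acm.org/doi/10.1145/3377816.3377823"},
])
print(json.dumps(citation, indent=2))
```

Bare URLs are valid, but typed entries give the answer engine a name and a type to display alongside the link, so normalizing at publish time costs nothing.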

The Author + Publisher + ReviewedBy Triangle

The single highest-leverage E-E-A-T pattern is what we call the trust triangle: an Article that explicitly identifies its author Person, its publisher Organization, and a third-party reviewer Person — all three with stable @id values that resolve to real, browsable entity pages.

Here is the full pattern, suitable for any technical or editorial article:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "@id": "https://example.com/blog/build-caching#article",
  "headline": "Distributed Build Caching: Lessons from 2 Years in Production",
  "description": "What we learned scaling Bazel's remote cache to a 400-engineer monorepo.",
  "image": "https://example.com/images/blog/build-caching-cover.png",
  "datePublished": "2025-11-04",
  "dateModified": "2026-04-22T09:30:00.000Z",
  "wordCount": 3200,
  "inLanguage": "en-US",
  "author": {
    "@type": "Person",
    "@id": "https://example.com/team/jane-smith#person",
    "name": "Jane Smith",
    "url": "https://example.com/team/jane-smith"
  },
  "reviewedBy": {
    "@type": "Person",
    "@id": "https://example.com/team/marcus-chen#person",
    "name": "Marcus Chen",
    "url": "https://example.com/team/marcus-chen"
  },
  "publisher": {
    "@type": "Organization",
    "@id": "https://example.com/#organization",
    "name": "Example Engineering",
    "url": "https://example.com",
    "logo": {
      "@type": "ImageObject",
      "url": "https://example.com/logo.png"
    }
  },
  "mainEntityOfPage": {
    "@type": "WebPage",
    "@id": "https://example.com/blog/build-caching"
  },
  "citation": [
    "https://bazel.build/remote/caching",
    "https://dl.acm.org/doi/10.1145/3377816.3377823"
  ]
}
</script>

This is the markup that maxes out E-E-A-T scoring across all four major answer engines. Three named, verifiable humans/organizations are linked via stable @id values. Each one's full Person or Organization schema lives at its own URL with full credentials and sameAs graphs. The Article itself has freshness signals and outbound citations.

An answer engine looking at this page can answer four questions in a single pass: who wrote it, who reviewed it, who published it, and what sources back it. That's the entire E-E-A-T checklist, mechanically verifiable.
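If a CMS templates the trust triangle, the Article block can reference all three parties by @id alone, leaving the full Person and Organization schema on their own pages. A minimal sketch (the function name and defaults are ours, not a canonical API):

```python
import json
from datetime import datetime, timezone

def trust_triangle(headline, page_url, author_id, reviewer_id,
                   publisher_id, citations=()):
    """Assemble an Article block that references author, reviewer,
    and publisher by @id, leaving their full schema on their own pages."""
    return {
        "@context": "https://schema.org",
        "@type": "Article",
        "@id": f"{page_url}#article",
        "headline": headline,
        "dateModified": datetime.now(timezone.utc).isoformat(),
        "author": {"@id": author_id},
        "reviewedBy": {"@id": reviewer_id},
        "publisher": {"@id": publisher_id},
        "mainEntityOfPage": {"@type": "WebPage", "@id": page_url},
        "citation": list(citations),
    }

article = trust_triangle(
    "Distributed Build Caching: Lessons from 2 Years in Production",
    "https://example.com/blog/build-caching",
    "https://example.com/team/jane-smith#person",
    "https://example.com/team/marcus-chen#person",
    "https://example.com/#organization",
    ["https://bazel.build/remote/caching"],
)
print(json.dumps(article, indent=2))
```

Referencing by bare @id keeps per-article markup small, though the full pattern shown in this section also repeats @type, name, and url inline, which is friendlier to parsers that don't dereference @id.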

▸ The One Pattern To Implement First

If you can only ship one E-E-A-T improvement this week, ship the trust triangle on your single highest-traffic article. Author Person, publisher Organization, and one reviewedBy Person — all three with stable @id values pointing to real about pages. Add dateModified. Add a citation array of two or three sources. That single article will start outperforming its peers in answer-engine citations within days.

What Each Pillar Looks Like in Markup

The audit table below maps each pillar to the specific properties that signal it. This is what an E-E-A-T audit actually checks for, page by page:

PILLAR | PRIMARY SIGNAL | JSON-LD PROPERTIES
Experience | First-hand evidence of doing | Person.knowsAbout, Person.description, Article.reviewedBy
Expertise | Verifiable qualifications | Person.hasCredential, Person.alumniOf, Person.jobTitle, Person.worksFor
Authoritativeness | Cross-platform identity graph | @id, sameAs, Organization.founder, Organization.foundingDate
Trustworthiness | Freshness, sources, contactability | dateModified, citation, ContactPoint, Article.publisher

An audit that catches all four columns is a complete E-E-A-T audit. An audit that only catches schema validity (which most free tools do) misses three of the four pillars entirely.

What the Four Major Answer Engines Actually Weight

The same observed-pattern caveats from the AEO Playbook apply here: these aren't documented preferences, they're empirical behavior across many test queries. But the patterns are consistent enough to be useful.

Six E-E-A-T Failures That Tank Citations

The recurring mistakes we see, ranked by frequency:

1. Anonymous bylines. Articles with no author, or with a string-only author field. The single biggest E-E-A-T failure. Without a Person entity, three of the four pillars have nowhere to attach.

2. Author Person without sameAs. A Person entity that exists only on your site is a closed loop. The answer engine has no way to verify the person is real. sameAs to LinkedIn at minimum, ORCID for academic content, GitHub for engineering content.

3. Article without a publisher Organization. The Article references an author but no publisher, leaving the organizational authority pillar empty. Always include both.

4. Stale or missing dateModified. Either no date at all, or a date set when the page was first published and never updated. Both signal stale content. Update dateModified whenever you make a substantive edit, and don't update it for cosmetic changes — answer engines notice the pattern.

5. No outbound citations. Authoritative content cites its sources. Pages that make claims without sourcing them are increasingly treated as opinion regardless of how well-written they are. Even two or three citations in the citation array materially help.

6. Orphan entities. A Person or Organization with an @id that doesn't resolve, or that resolves to a page without matching schema. The whole point of @id is verification — if the URL is dead or the page contradicts the schema, the entity claim collapses.
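The first five failures can be checked mechanically from the Article JSON-LD alone. A checklist sketch (heuristics and names are ours; failure 6, orphan @id values, additionally requires fetching each @id URL, so it's omitted here):

```python
def eeat_issues(article: dict) -> list[str]:
    """Flag the mechanically checkable E-E-A-T failures in an
    Article JSON-LD dict."""
    issues = []
    author = article.get("author")
    if not isinstance(author, dict):
        issues.append("anonymous or string-only author")          # failure 1
    elif not author.get("sameAs") and not author.get("@id"):
        issues.append("author Person has no sameAs or @id")       # failure 2
    if "publisher" not in article:
        issues.append("no publisher Organization")                # failure 3
    if "dateModified" not in article:
        issues.append("missing dateModified")                     # failure 4
    if not article.get("citation"):
        issues.append("no outbound citations")                    # failure 5
    return issues

bad = {"@type": "Article", "headline": "Untitled", "author": "Staff Writer"}
print(eeat_issues(bad))
```

Run it over every Article block on the site and the pages that need the trust triangle first fall out of the report.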

▲ Overt Ops

Auditing every article on a real site for the trust triangle, the sameAs graph, the citation array, and the freshness signals — across every author, every publisher, every reviewer entity — is exactly the kind of work that doesn't happen because nobody has the time. Overt Ops scores the full E-E-A-T layer automatically and generates the missing JSON-LD in your voice, ready to paste into your <head>. Coming soon from Area 51 Software.

Your Four-Week E-E-A-T Rollout

The same staged-rollout approach from the AEO Playbook applies. Don't try to retrofit the whole site at once. Stack the wins:

Week One — Author Identity

Week Two — Organizational Authority

Week Three — The Trust Triangle

Week Four — Verify and Expand

Take Action

The 60-tag audit told you what was missing. The AEO Playbook gave you the schema patterns that get you extracted. This piece gave you the trust signals that get you cited. Together they're the playbook for being one of the pages answer engines actually quote in 2026.

The single most useful thing you can do today: pick one article. Add the trust triangle. Author Person with sameAs. Publisher Organization with sameAs. One reviewedBy Person. A current dateModified. A citation array with two real sources. Validate the JSON-LD. Then paste the URL into Perplexity and Claude and ask each one a question your article answers.

If both cite you, the trust triangle is working — repeat the pattern on the next nine articles. If only one does, the failure is almost always in the orphan-entity list above: an @id that doesn't resolve, a sameAs that points to a closed account, a dateModified that's two years old. Fix that first.

For full-site E-E-A-T auditing with generated Person, Organization, and Article schema you can deploy immediately, keep an eye on Overt Ops.

OVERT OPS — COMING SOON

AI-powered SEO + AEO auditing. Full E-E-A-T scoring across Experience, Expertise, Authoritativeness, and Trustworthiness. Person, Organization, and Article schema generated automatically. Citation-readiness scoring across the four major answer engines.
