Tags: html, facebook-opengraph, twitter-card

Twitter website doesn't have open graph tags?


I'm trying to get URL previews (for websites that support them) to show up in a project I'm working on. I recently noticed that Twitter URLs don't have Open Graph meta tags anymore. I was expecting og:title, og:description, and so on, which, if I remember correctly, used to exist for all Twitter links.

E.g. if I view the page source for this link: twitter.com/DalaiLama/status/1274998376338124800

I don't see any og metadata apart from og:site_name. I also don't see any twitter:title or the corresponding content. What am I missing?

Update: it turns out "View Source" doesn't show og:title, but I do see it under Chrome's "Inspect" panel. Does that mean the JavaScript adds it but the raw HTML doesn't have it (also, it only shows og:title and not the other fields)? Is that expected?


Solution

  • Twitter uses client-side rendering (CSR) to generate HTML in the browser

    Viewing the source directly will not show any of the relevant <meta> tags or the actual page HTML content, because it is all dynamically generated in the browser by React using JavaScript (i.e. CSR: client-side rendering). In fact, the HTML source contains a stub with the message "We've detected that JavaScript is disabled in your browser. Would you like to proceed to legacy Twitter?". This can be verified by opening developer tools and watching the "Elements" tab during page load/render, or by downloading the page without JavaScript emulation.

    However, to improve Search Engine Optimization (SEO) for various prominent web crawlers, Twitter instead returns server-side-rendered (SSR) HTML content (which does contain the <meta> tags). This spares crawlers from having to emulate JavaScript to view the page; they can crawl the raw HTML content directly. Twitter recognizes crawlers by the supplied User-Agent HTTP header. Server-side rendering is generally a more expensive operation than offloading the HTML rendering onto the client, which may be why Twitter opts for client-side rendering as the default behavior.

    Bypassing the User-Agent whitelist to receive server-side-rendered (SSR) HTML

    Various prominent web crawlers are whitelisted by Twitter to receive server-side-rendered HTML. By spoofing the User-Agent HTTP header in your own request, you can bypass the whitelist and receive server-side-rendered HTML containing the relevant <meta> tags (whether or not this is recommended is a totally different subject matter). For programmatic HTTP requests, check whether your HTTP library supports setting the User-Agent header; most non-trivial libraries do.
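    As a rough sketch of the above using only Python's standard library: send the request with a crawler-style User-Agent, then pull the og:/twitter: <meta> tags out of the returned HTML. The Googlebot User-Agent string below is an assumption for illustration; substitute any crawler UA that Twitter actually whitelists.

```python
from urllib.request import Request, urlopen
from html.parser import HTMLParser

# Hypothetical crawler User-Agent for illustration; swap in any whitelisted one.
CRAWLER_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

class OgMetaParser(HTMLParser):
    """Collects og:* and twitter:* <meta> tags from an HTML document."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        key = attrs.get("property") or attrs.get("name")
        if key and (key.startswith("og:") or key.startswith("twitter:")):
            self.meta[key] = attrs.get("content", "")

def fetch_og_tags(url, user_agent=CRAWLER_UA):
    """Request the page with a spoofed User-Agent and return its og/twitter meta tags."""
    req = Request(url, headers={"User-Agent": user_agent})
    with urlopen(req, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = OgMetaParser()
    parser.feed(html)
    return parser.meta
```

    With a plain browser User-Agent instead, the same call should come back with little more than og:site_name, since the CSR stub is returned.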

    whatismybrowser.com has a list of well-known web-crawler User-Agent headers; some of these web crawlers are whitelisted (but not necessarily all). At the time of writing, here are some working user agents: