Recently I came across an article or tutorial that showed the current technical state of the web, based on crawled website data. Is there any service or dataset available to query CSS feature usage, e.g. `@media` or `@supports`, across as many crawled pages as possible? Similar to The average web page, but able to query for my own properties/content. Does Google BigQuery or any other service offer that? It should be freely available, as it's going to be used for open scientific research.
After a while I stumbled across it again: I was looking for HTTP Archive and its collaboration with Google BigQuery, where you can write your own queries for such web statistics.
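For example, a query like the following sketch could count pages whose crawled response bodies mention `@supports`. Note that the exact table name is an assumption here: HTTP Archive publishes dated snapshot tables and has reorganized its dataset layout over time, so check the current schema on httparchive.org before running this, and be aware that scanning response bodies processes a lot of data (which counts against the BigQuery free tier).

```sql
-- Sketch of a BigQuery Standard SQL query against the HTTP Archive dataset.
-- The snapshot table below is a placeholder; substitute a real, current one.
SELECT
  COUNT(DISTINCT page) AS pages_using_supports
FROM
  `httparchive.response_bodies.2022_06_01_desktop`  -- hypothetical snapshot table
WHERE
  body LIKE '%@supports%'  -- crude substring match; a regex can reduce false positives
```

A plain `LIKE` match will also count `@supports` appearing in comments or inline scripts, so for serious research you would want to restrict the match to CSS responses or parse the stylesheets properly.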