- SPARQL is _a lot better_ than the many divergent dialects of SQL: it is a single W3C standard that works the same against any conforming triple store.
- Adding some JSON-LD can be done with a simple snippet of JSON metadata, something people using WordPress are already able to do. All of this will become more and more automated.
- The benefit is ontological cohesion across the whole web: simple integration with many different stores of information in a semantically precise way. Take a look at the https://conze.pt project to see what this can bring you. (A minimal sketch of the pipeline follows below.)
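To make the bullets concrete, here is a minimal sketch of the JSON-LD → RDF → SPARQL pipeline in Python with the rdflib library (`pip install rdflib`; the JSON-LD parser is built in since rdflib 6.0). The product data is made up for the example:

```python
from rdflib import Graph

# The kind of JSON-LD snippet a WordPress page might embed. An inline
# @vocab is used so no remote context has to be fetched.
jsonld = """
{
  "@context": {"@vocab": "https://schema.org/"},
  "@type": "Product",
  "name": "Example Widget",
  "offers": {"@type": "Offer", "price": "9.99", "priceCurrency": "EUR"}
}
"""

g = Graph()
g.parse(data=jsonld, format="json-ld")  # the snippet becomes RDF triples

# One standard query language, no matter which store holds the triples.
results = g.query("""
    PREFIX schema: <https://schema.org/>
    SELECT ?name ?price WHERE {
      ?product a schema:Product ;
               schema:name ?name ;
               schema:offers [ schema:price ?price ] .
    }
""")
for name, price in results:
    print(name, price)  # -> Example Widget 9.99
```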
2) AI/NLP is never completely precise and requires huge resources (which in turn push towards centralization). The basics of the semantic web will remain RDF (whether produced by some AI or not), SPARQL and ontologies, extended and improved by AI/NLP. It's this combination of the two that is already being used for Wikipedia and Wikidata search results, as sketched below.
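For a taste of the "already in use" half of that claim, this is roughly how the live public Wikidata SPARQL endpoint is queried from Python with the SPARQLWrapper library (`pip install sparqlwrapper`). The endpoint URL and the `wd:`/`wdt:` prefixes are real Wikidata conventions; the particular query is only an illustration:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://query.wikidata.org/sparql",
                       agent="semantic-web-example/0.1")
sparql.setReturnFormat(JSON)

# Five chemical elements (instance of Q11344) with their symbols (P246).
sparql.setQuery("""
    SELECT ?elementLabel ?symbol WHERE {
      ?element wdt:P31 wd:Q11344 ;
               wdt:P246 ?symbol .
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
    LIMIT 5
""")

for row in sparql.queryAndConvert()["results"]["bindings"]:
    print(row["elementLabel"]["value"], row["symbol"]["value"])
```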
> The benefit is ontological cohesion across the whole web
This has no benefit for the person who has to pay to do the work. Why would I pay someone to mark up all my data, just for the greater good? When humans are looking at or using my products, none of this is visible. It's not built into any tools, it doesn't get me more SEO, and it doesn't get me any more sales.
Why are people editing Wikipedia and Wikidata? What would it bring you if your products were globally linked to that knowledge graph and Google's machines could understand the metadata in the tiny JSON-LD snippet on each page? The tools are already here, the tech is still evolving, but the knowledge graph concept is going to affect web shop owners soon enough, too.
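Purely as illustration, such a snippet for a shop product could look something like this; everything except the schema.org vocabulary (the product name, SKU, and the Wikidata ID in `sameAs`) is invented for the example:

```python
import json

product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "USB-C Cable, 2 m",     # hypothetical product
    "sku": "CAB-USBC-200",          # hypothetical SKU
    "offers": {
        "@type": "Offer",
        "price": "7.50",
        "priceCurrency": "EUR",
    },
    # The link that ties the product into the global knowledge graph:
    "sameAs": "https://www.wikidata.org/wiki/Q00000000",  # hypothetical entity ID
}

# This is what would go into the page's <head> inside
# <script type="application/ld+json"> ... </script>
print(json.dumps(product, indent=2))
```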
It's unclear to me at this point why people are contributing to Wikipedia, and certainly to Wikidata, but they're getting something out of it (perhaps notoriety), and a lot of it probably has to do with contributing to the greater good. It's all non-profit. The rest of the web is unlike these standout projects.
Meanwhile, why would, say, Mouser or Airbnb pay someone to mark up their docs? WebMD? Clearly nothing has been compelling them to do so thus far, and when you're talking about harvesting data and using it elsewhere, it's a difficult argument to make. Google already gets them plenty of traffic without these efforts.
They do it because it benefits them too. OpenStreetMap links with Wikidata, GLAMs link with it, journals/ORCIDs link with it, all sorts of other data archives link with it. Whoever is not linking may still see a crawler pass by to collect their license-free facts.
Also, I just checked: WebMD is using a ton of embedded RDF on each page. They understand SEO well, as you said :)
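For anyone who wants to repeat that check, one way to do it is with the requests and extruct libraries (`pip install requests extruct`); the URL is just a placeholder for whichever page you want to inspect:

```python
import requests
import extruct

url = "https://www.webmd.com/"  # any article page works the same way
html = requests.get(url, headers={"User-Agent": "metadata-check/0.1"}).text

# Pull out whatever embedded structured data the page carries.
data = extruct.extract(html, base_url=url,
                       syntaxes=["json-ld", "microdata", "rdfa"])

for syntax, items in data.items():
    print(f"{syntax}: {len(items)} item(s) found")
```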