Danny Ayers on the state of RDF and Web2.0 and SPARQL etc.
Mind you, compare Danny:
It’s of secondary importance whether the future of the Web looks like the kind of thing TimBL had in mind way back, or whether it is based around 100GB of Microsoft Office applications at every node, or the Google Singularity (beta) or whether it’ll be something where everything is bits on the wire (with no endpoints), or maybe everyone will connect through a DRM-powered iAnalProbe, or a wild heterogeneous mix of these. The implementation details, even at a high level, aren’t so important. The proposition suggests there will be something, and the trend so far has been towards net improvement. It’s been a rare old mix of small increments, steps backwards, refactoring, tech resurrection, evolution and intelligent design, and not least paradigm shifting.
Much of the proposed value of the Semantic Web is coming, but it is not coming because of the Semantic Web. The amount of meta-data we generate is increasing dramatically, and it is being exposed for consumption by machines as well as, or instead of, people. But it is being designed a bit at a time, out of self-interest and without regard for global ontology. It is also being adopted piecemeal, and it will bring with it all the incompatibilities and complexities that implies. There are significant disadvantages to this process relative to the shining vision of the Semantic Web, but the big advantage of this bottom-up design and adoption is that it is actually working now.
How much difference is there between these two? What is Danny committed to that Clay isn't?