Even the “blogosphere” of the early 21st century, in which independently run blog sites posted items on news and responded both to Big Media stories and to each other, was more like traditional media in some respects than like Usenet or social media. To read content on blogs, readers had to go there. To interact, bloggers had to read each other’s sites and decide to post a response, generally with a link back to the post they were replying to. If you didn’t like a blog you could just ignore it. A story that spread like wildfire through the blogosphere still did so over the better part of a day, not over minutes, and it was typically pretty easy to find the original item and get context, something the culture of blogging encouraged…

In addition, a story’s spreading required at least a modicum of actual thought and consideration on the part of bloggers, who were also constrained, to a greater or lesser degree, by considerations of reputation. Some blogs served as trusted nodes on the blogosphere, and many other bloggers would be reluctant to run with a story that the trusted nodes didn’t believe. In engineering parlance, the early blogosphere was a “loosely coupled” system, one where changes in one part were not immediately or directly transmitted to others. Loosely coupled systems tend to be resilient, and not very subject to systemic failures, because what happens in one part of the system affects other parts only weakly and slowly. Tightly coupled systems, on the other hand, where changes affecting one node swiftly affect others, are prone to cascading failures…

[On Twitter,] little to no thought is required, and in practice very few people even follow the link (if there is one) to “read the whole thing”.
Glenn “Instapundit” Reynolds, The Social Media Upheaval, 2019. (link goes to Althouse, who posted this quote based on the Kindle edition)
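Reynolds’ loose/tight distinction is standard systems-engineering vocabulary, and his cascade claim can be made concrete with a toy percolation model. The sketch below is purely illustrative (mine, not anything from the book): each failing node gets one chance to topple a few random neighbors, and raising the per-link transmission probability, the “coupling”, flips the system from local hiccups to near-total cascades.

    import random

    def cascade_size(n_nodes: int, coupling: float, trials: int = 1000) -> float:
        """Average fraction of nodes a single initial failure eventually reaches.

        A mean-field toy: each failing node gets one chance to topple five
        randomly drawn neighbors, each with probability `coupling`.
        """
        total_failed = 0
        for _ in range(trials):
            failed = {0}          # the failure starts at node 0
            frontier = [0]
            while frontier:
                frontier.pop()    # process one failing node; its neighbors are drawn below
                for neighbor in random.sample(range(n_nodes), k=5):
                    if neighbor not in failed and random.random() < coupling:
                        failed.add(neighbor)
                        frontier.append(neighbor)
            total_failed += len(failed)
        return total_failed / (trials * n_nodes)

    print("loosely coupled (p=0.05):", cascade_size(200, coupling=0.05))
    print("tightly coupled (p=0.60):", cascade_size(200, coupling=0.60))

With five neighbors per node the expected branching factor is 5 × coupling: 0.25 in the loose case, so failures die out locally, and 3.0 in the tight case, so nearly the whole system goes down. That threshold behavior is the cascade-prone dynamic the quote attributes to tightly coupled networks.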
June 29, 2019
July 20, 2017
ESR on the early history of distributed software
Eric S. Raymond is asking for additional input on his current historical outline of the development of distributed software collaboration:
Nowadays we take for granted a public infrastructure of distributed version control and a lot of practices for distributed teamwork that go with it – including development teams that never physically have to meet. But these tools, and awareness of how to use them, were a long time developing. They replace whole layers of earlier practices that were once general but are now half- or entirely forgotten.
The earliest practice I can identify that was directly ancestral was the DECUS tapes. DECUS was the Digital Equipment Corporation User Group, chartered in 1961. One of its principal activities was circulating magnetic tapes of public-domain software shared by DEC users. The early history of these tapes is not well-documented, but the habit was well in place by 1976.
One trace of the DECUS tapes seems to be the README convention. While it entered the Unix world through USENET in the early 1980s, it seems to have spread there from DECUS tapes. The DECUS tapes begat the USENET source-code groups, which were the incubator of the practices that later became “open source”. Unix hackers used to watch for interesting new stuff on comp.sources.unix as automatically as they drank their morning coffee.
The DECUS tapes and the USENET sources groups were more of a publishing channel than a collaboration medium, though. Three pieces were missing to fully support that: version control, patching, and forges.
Version control was born in 1972, though SCCS (Source Code Control System) didn’t escape Bell Labs until 1977. The proprietary licensing of SCCS slowed its uptake; one response was the freely reusable RCS (Revision Control System) in 1982.
[…]
The first dedicated software forge was not spun up until 1999. That was SourceForge, still extant today. At first it supported only CVS, but it sped up the adoption of the (greatly superior) Subversion, launched in 2000 by a group of former CVS developers.
Between 2000 and 2005 Subversion became ubiquitous common knowledge. But in 2005 Linus Torvalds invented git, which would fairly rapidly obsolesce all previous version-control systems and is a thing every hacker now knows.
Questions for reviewers:
(1) Can anyone identify a conscious attempt to organize a distributed development team before nethack (1987)?
(2) Can anyone tell me more about the early history of the DECUS tapes?
(3) What other questions should I be asking?
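One concrete note on the “patching” piece Raymond lists: in the pre-forge era, changes typically circulated as diffs posted to newsgroups or mailed between developers, and the unified-diff format is simple enough that Python’s standard difflib reproduces it. A minimal sketch (file names and contents invented for illustration); the output is the same ---/+++/@@ hunk format that patch(1) applies:

    import difflib

    old = ["def greet():\n", "    print('hello')\n"]
    new = ["def greet(name):\n", "    print('hello,', name)\n"]

    # unified_diff yields header lines plus @@-delimited hunks, line by line
    patch = difflib.unified_diff(old, new, fromfile="a/greet.py", tofile="b/greet.py")
    print("".join(patch))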
March 15, 2013
Will the death of Google Reader also be the death of RSS?
Felix Salmon on the knock-on effects of Google’s announcement that it is killing Google Reader:
But whether or not Reader was ever going to be a good business for Google, it was from day one a fantastic public service for its users. Google started as a public service — a way to find what you were looking for on the internet — and didn’t stop there. Google would also do things like buy the entire Usenet archives, or scan millions of out-of-print books, or put thousands of people to work making maps, all in order to be able to get all sorts of information to anybody who wants it. [. . .]
The problem with the death of Reader is that it was the architecture underpinning lots of other services — the connective tissue of just about all RSS readers and services, from Summify to Reeder to Flipboard. You didn’t even need to use Google Reader; it was just the master central repository of your master OPML list, all the different feeds that you were subscribed to. Google spent real money to provide that public service, and it’s going to be sorely missed. As Marco Arment says, “every major iOS RSS client is still dependent on Google Reader for feed crawling and sync.”
Arment sees a silver lining in the cloud, saying that with Google gone, “we’re finally likely to see substantial innovation and competition in RSS desktop apps and sync platforms for the first time in almost a decade.” I’m less sanguine. Building an RSS sync platform is a hard and pretty thankless task, it costs real money, and it might not work at all — especially in a world where less and less content is actually available in RSS format. (You can subscribe to my Tumblr feed in RSS format, but there’s no such feed for my posts on Twitter or Facebook or Instagram or Path or even Google+.)
RSS has been dying for years — that’s why Google killed Reader. It was a lovely open format; it has sadly been replaced with proprietary feeds like the ones we get from Twitter and Facebook. That’s not an improvement, but it is reality. Google, with Reader, was really providing the life-support mechanism for RSS. Once Reader is gone, I fear that RSS won’t last much longer.
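A technical footnote on the “master OPML list” Salmon mentions: OPML is plain XML, which is why it could serve as everyone’s portable subscription record. Pulling the feed URLs out of a Reader-style export takes only the standard library; a hedged sketch, assuming the common convention in which each feed is an outline element carrying an xmlUrl attribute (the filename is hypothetical):

    import xml.etree.ElementTree as ET

    tree = ET.parse("subscriptions.opml")   # hypothetical export file
    feed_urls = [
        node.attrib["xmlUrl"]
        for node in tree.iter("outline")
        if "xmlUrl" in node.attrib          # folder outlines carry no feed URL
    ]
    print(len(feed_urls), "feeds found")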
October 8, 2009
Google’s neglect of its USENET archive
Maybe giving Google a monopoly over all those millions of out-of-print books won’t work out as well as we might hope. They’ve (kinda) been here before, and the results weren’t what you would expect. For those who remember the “good old days” when USENET was the place to be (before the Web, it was the best thing online), Kevin Poulsen looks at what happened after Google took over effective ownership of the archive:
Salon hailed the accomplishment in an article headlined “The geeks who saved Usenet.” “Google gets the credit for making these relics of the early net accessible to anyone on the web, bringing the early history of Usenet to all.”
Flash forward nearly eight years, and visiting Google Groups is like touring ancient ruins.
On the surface, it looks as clean and shiny as every other Google service, which makes its rotting interior all the more jarring — like visiting Disneyland and finding broken windows and graffiti on Main Street USA.
Searching within a newsgroup, even one with thousands of posts, produces no results at all. Confining a search to a range of dates also fails silently, bulldozing the most obvious path to exploring an archive.
Want to find Marc Andreessen’s historic March 14, 1993 announcement in alt.hypertext of the Mosaic web browser? “Your search — mosaic — did not match any documents.”