Quotulatiousness

April 14, 2014

SpaceX to test hover capability on next launch

Filed under: Space, Technology — Nicholas Russon @ 08:25

In The Register, Brid-Aine Parnell explains what will be different about the next SpaceX launch to resupply the ISS:

NASA has said that SpaceX’s latest cargoship launch to the International Space Station will go ahead, despite a critical computer outage on the station, allowing the firm to test the craft’s hovering abilities.

[...]

The booster rocket that’s blasting the Dragon supply capsule into space is going to attempt to make a hovering soft landing after it’s disengaged and dropped back to Earth.

The spruced-up Falcon 9 has its own landing legs, which Elon Musk’s space tech company hopes will eventually make for precise set-downs on the surface of alien worlds. For this test though, the rocket will still be coming down over the ocean, just in case.

The launch is already a month late with its nearly 5,000 pounds of supplies and payloads, including VEGGIE, a new unit capable of growing salad vegetables for the ‘nauts to munch on. The ship was delayed from March after a ground-based radar system at Cape Canaveral was damaged.

April 13, 2014

Ephemeral apps and the NSA

Filed under: Business, Media, Technology — Nicholas Russon @ 11:21

Bruce Schneier on the rising popularity of apps that leave your content visible only briefly and then automatically remove it:

Ephemeral messaging apps such as Snapchat, Wickr and Frankly, all of which advertise that your photo, message or update will only be accessible for a short period, are on the rise. Snapchat and Frankly, for example, claim they permanently delete messages, photos and videos after 10 seconds. After that, there’s no record.

This notion is especially popular with young people, and these apps are an antidote to sites such as Facebook where everything you post lasts forever unless you take it down—and taking it down is no guarantee that it isn’t still available.

These ephemeral apps are the first concerted push against the permanence of Internet conversation. We started losing ephemeral conversation when computers began to mediate our communications. Computers naturally produce conversation records, and that data was often saved and archived.

[...]

At best, the data is recorded, used, saved and then deliberately deleted. At worst, the ephemeral nature is faked. While the apps make the posts, texts or messages unavailable to users quickly, they probably don’t erase them off their systems immediately. They certainly don’t erase them from their backup tapes, if they end up there.

The companies offering these apps might very well analyze their content and make that information available to advertisers. We don’t know how much metadata is saved. In SnapChat, users can see the metadata even though they can’t see the content, and we don’t know what it’s used for. And if the government demanded copies of those conversations — either through a secret NSA demand or a more normal legal process involving an employer or school — the companies would have no choice but to hand them over.

Even worse, if the FBI or NSA demanded that American companies secretly store those conversations and not tell their users, breaking their promise of deletion, the companies would have no choice but to comply.

That last bit isn’t just paranoia.
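To make the deletion-and-metadata point concrete, here’s a toy sketch in Python (hypothetical, not based on any real app’s code) of what “ephemeral” deletion can look like on the server side: the content vanishes from the user-facing record while the metadata row and any backup snapshots survive.

    import copy
    import time

    class EphemeralMessageStore:
        """Toy model of an "ephemeral" messaging backend (illustrative only)."""

        def __init__(self):
            self.messages = {}   # live store, what the apps can query
            self.backups = []    # periodic snapshots; nothing here purges them

        def post(self, msg_id, sender, recipient, content):
            self.messages[msg_id] = {
                "sender": sender,        # metadata
                "recipient": recipient,  # metadata
                "sent_at": time.time(),  # metadata
                "content": content,      # the part the user sees
            }

        def snapshot_to_backup(self):
            # Backups capture whatever is live at the time, content included.
            self.backups.append(copy.deepcopy(self.messages))

        def expire(self, msg_id):
            # "Deletion": content disappears from the user-facing record,
            # but the metadata survives, and so does any earlier backup.
            self.messages[msg_id]["content"] = None

    store = EphemeralMessageStore()
    store.post("m1", "alice", "bob", "this vanishes in 10 seconds")
    store.snapshot_to_backup()
    store.expire("m1")
    print(store.messages["m1"])    # content gone, metadata intact
    print(store.backups[0]["m1"])  # the full message still sits in the backup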

April 12, 2014

The Baaaa-studs 2009 – Extreme LED Sheep Art

Filed under: Britain, Humour, Technology — Nicholas Russon @ 15:38

April 11, 2014

Open source software and the Heartbleed bug

Filed under: Technology — Nicholas Russon @ 07:03

Some people are claiming that the Heartbleed bug proves that open source software is a failure. ESR quickly addresses that idiotic claim:

I actually chuckled when I read the rumor that the few anti-open-source advocates still standing were crowing about the Heartbleed bug, because I’ve seen this movie before, after every serious security flap in an open-source tool. The script, which includes a bunch of people indignantly exclaiming that many-eyeballs is useless because bug X lurked in a dusty corner for Y months, is so predictable that I can anticipate a lot of the lines.

The mistake being made here is a classic example of Frederic Bastiat’s “things seen versus things unseen”. Critics of Linus’s Law overweight the bug they can see and underweight the high probability that equivalently positioned closed-source security flaws they can’t see are actually far worse, just so far undiscovered.

That’s how it seems to go whenever we get a hint of the defect rate inside closed-source blobs, anyway. As a very pertinent example, in the last couple months I’ve learned some things about the security-defect density in proprietary firmware on residential and small business Internet routers that would absolutely curl your hair. It’s far, far worse than most people understand out there.

[...]

Ironically enough this will happen precisely because the open-source process is working … while, elsewhere, bugs that are far worse lurk in closed-source router firmware. Things seen vs. things unseen…

Returning to Heartbleed, one thing conspicuously missing from the downshouting against OpenSSL is any pointer to an implementation that is known to have a lower defect rate over time. This is for the very good reason that no such empirically-better implementation exists. What is the defect history on proprietary SSL/TLS blobs out there? We don’t know; the vendors aren’t saying. And we can’t even estimate the quality of their code, because we can’t audit it.

The response to the Heartbleed bug illustrates another huge advantage of open source: how rapidly we can push fixes. The repair for my Linux systems was a push-one-button fix less than two days after the bug hit the news. Proprietary-software customers will be lucky to see a fix within two months, and all too many of them will never see a fix patch.

Update: There are lots of sites offering tools to test whether a given site is vulnerable to the Heartbeat bug, but you need to step carefully there, as there’s a thin line between what’s legal in some countries and what counts as an illegal break-in attempt:

Websites and tools that have sprung up to check whether servers are vulnerable to OpenSSL’s mega-vulnerability Heartbleed have thrown up anomalies in computer crime law on both sides of the Atlantic.

Both the US Computer Fraud and Abuse Act and its UK equivalent the Computer Misuse Act make it an offence to test the security of third-party websites without permission.

Testing to see what version of OpenSSL a site is running, and whether it is also supports the vulnerable Heartbeat protocol, would be legal. But doing anything more active — without permission from website owners — would take security researchers onto the wrong side of the law.
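As a rough illustration of the passive end of that spectrum, the Python sketch below (an assumption about one workable approach, not a vetted audit tool) just reads a site’s HTTP Server header, which some servers configure to advertise their OpenSSL build; it sends nothing malformed and goes no further than an ordinary request.

    import http.client

    def server_banner(host):
        """Fetch the Server header via an ordinary HTTPS request (passive)."""
        conn = http.client.HTTPSConnection(host, timeout=10)
        try:
            conn.request("HEAD", "/")
            # Some servers advertise a banner such as
            # "Apache/2.2.22 (Ubuntu) OpenSSL/1.0.1" -- many disclose nothing.
            return conn.getresponse().getheader("Server", "(not disclosed)")
        finally:
            conn.close()

    print(server_banner("www.example.com"))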

And you shouldn’t just rush out and change all your passwords right now (you’ll probably need to do it, but the timing matters):

Heartbleed is a catastrophic bug in widely used OpenSSL that creates a means for attackers to lift passwords, crypto-keys and other sensitive data from the memory of secure server software, 64KB at a time. The mega-vulnerability was patched earlier this week, and software should be updated to use the new version, 1.0.1g. But to fully clean up the problem, admins of at-risk servers should generate new public-private key pairs, destroy their session cookies, and update their SSL certificates before telling users to change every potentially compromised password on the vulnerable systems.

April 9, 2014

XKCD on the impact of “Heartbleed”

Filed under: Technology — Nicholas Russon @ 11:00

Update: In case you’re not concerned about the seriousness of this issue, The Register‘s John Leyden would like you to think again.

The catastrophic crypto key password vulnerability in OpenSSL affects far more than web servers, with everything from routers to smartphones also affected.

The so-called “Heartbleed” vulnerability (CVE-2014-0160) can be exploited to extract information from servers running vulnerable versions of OpenSSL, and this includes email servers and Android smartphones as well as routers.

Hackers could potentially gain access to private encryption keys before using this information to decipher the encrypted traffic to and from vulnerable websites.

Web sites including Yahoo!, Flickr and OpenSSL were among the many left vulnerable to the megabug that exposed encryption keys, passwords and other sensitive information.

Preliminary tests suggested 47 of the 1,000 largest sites were vulnerable to Heartbleed, and that’s only among the fewer than half that support SSL or HTTPS at all. Many of the affected sites – including Yahoo! – have since patched the vulnerability. Even so, security experts – such as Graham Cluley – remain concerned.

OpenSSL is a widely used encryption library that is a key component of technology that enables secure (https) website connections.

The bug exists in the OpenSSL 1.0.1 source code and stems from coding flaws in a fairly new feature known as the TLS Heartbeat Extension. “TLS heartbeats are used as ‘keep alive’ packets so that the ends of an encrypted connection can agree to keep the session open even when they don’t have any official data to exchange,” explains security veteran Paul Ducklin in a post on Sophos’ Naked Security blog.

The Heartbleed vulnerability in the OpenSSL cryptographic library might be exploited to reveal contents of secured communication exchanges. The same flaw might also be used to lift SSL keys.

This means that sites could still be vulnerable to attacks after installing the patches in cases where a private key has been stolen. Sites therefore need to revoke exposed keys, reissue new keys, and invalidate all session keys and session cookies.
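The bug itself lives in OpenSSL’s C code, but the logic error is small enough to model. Here is a toy Python simulation (a sketch of the flaw’s shape, not the real implementation): the handler echoes back however many bytes the request claims its payload contains, without checking that claim against what was actually sent.

    # Toy simulation of the Heartbleed logic error, not the real OpenSSL code.
    # Server "memory": the heartbeat payload sits next to unrelated secrets.
    memory = b"PING" + b" session_key=deadbeef user=alice password=hunter2"

    def heartbeat_response(claimed_length):
        # BUG: trust the length field in the request instead of counting the
        # bytes the client actually sent. The fix was to validate the claim.
        return memory[:claimed_length]

    print(heartbeat_response(4))      # honest client: b'PING'
    print(heartbeat_response(65535))  # attacker: gets the adjacent secrets too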

Bruce Schneier:

“Catastrophic” is the right word. On the scale of 1 to 10, this is an 11.

Half a million sites are vulnerable, including my own. Test your vulnerability here.

The bug has been patched. After you patch your systems, you have to get a new public/private key pair, update your SSL certificate, and then change every password that could potentially be affected.

At this point, the probability is close to one that every target has had its private keys extracted by multiple intelligence agencies. The real question is whether or not someone deliberately inserted this bug into OpenSSL, and has had two years of unfettered access to everything. My guess is accident, but I have no proof.

April 7, 2014

US government data security failures

Filed under: Bureaucracy, Government, Technology — Nicholas Russon @ 09:02

David Gewirtz says that the press has totally mis-reported the scale of government security breaches:

Summary: This is one of those articles that spoils your faith in mankind. Not only are government security incidents fully into holy-cow territory, the press is reporting numbers three orders of magnitude too low because someone misread a chart and everyone else copied that report.

You might think this was an April Fool’s gag, except it was published on April 2nd, not April 1st.

According to testimony given by Gregory C. Wilshusen [PDF], Director of Information Security Issues for the Government Accountability Office, to the United States Senate Committee on Homeland Security and Governmental Affairs, and I quote, “most major federal agencies had weaknesses in major categories of information security controls.”

In other words, some government agency data security functions more like a sieve than a lockbox.

Some of the data the GAO presented was deeply disturbing. For example, the number of successful breaches doubled since 2009. Doubled. There’s also a story inside this story, which I’ll discuss later in the article. Almost all of the press reporting on this testimony got the magnitude of the breach wrong. Most reported that government security incidents numbered in the thousands, when, in fact, they numbered in the millions.

Emphasis mine. Here are the actual numbers:

Incidents involving personal identifying information grew from about 10.5 million in 2009 to over 25 million last year. By the way, some press reports on this misread the GAO’s charts. For example, the Washington Free Beacon wrote about this, claiming “25,566 incidents of lost taxpayer data, Social Security numbers, patient health information.” What they missed was the little notation on the chart that says “in thousands,” so when they reported 25,566 incidents, what that really reads as is 25,566 × 1,000 incidents, or more than 25.5 million.

[Chart: 2014 GAO analysis of security breaches]

This is an example of how the Internet echo chamber can get information very, very wrong. The Chicago Tribune, via Reuters reported the same incorrect statistic. So did InformationWeek. So did FierceHealthIT. Business Insider picked up the Reuters report and happily repeated the same statistic — which was three orders of magnitude incorrect.

This is why I always try to go to the original source material [PDF] and not just repeat the crap other writers are parroting. It’s more work, but it means the difference between reporting 25 thousand government breaches and 25 million government breaches. 25 thousand is disturbing. 25 million is horrifying.

Big data’s promises and limitations

Filed under: Economics, Science, Technology — Nicholas Russon @ 07:06

In the New York Times, Gary Marcus and Ernest Davis examine the big claims being made for the big data revolution:

Is big data really all it’s cracked up to be? There is no doubt that big data is a valuable tool that has already had a critical impact in certain areas. For instance, almost every successful artificial intelligence computer program in the last 20 years, from Google’s search engine to the I.B.M. Jeopardy! champion Watson, has involved the substantial crunching of large bodies of data. But precisely because of its newfound popularity and growing use, we need to be levelheaded about what big data can — and can’t — do.

The first thing to note is that although big data is very good at detecting correlations, especially subtle correlations that an analysis of smaller data sets might miss, it never tells us which correlations are meaningful. A big data analysis might reveal, for instance, that from 2006 to 2011 the United States murder rate was well correlated with the market share of Internet Explorer: Both went down sharply. But it’s hard to imagine there is any causal relationship between the two. Likewise, from 1998 to 2007 the number of new cases of autism diagnosed was extremely well correlated with sales of organic food (both went up sharply), but identifying the correlation won’t by itself tell us whether diet has anything to do with autism.

Second, big data can work well as an adjunct to scientific inquiry but rarely succeeds as a wholesale replacement. Molecular biologists, for example, would very much like to be able to infer the three-dimensional structure of proteins from their underlying DNA sequence, and scientists working on the problem use big data as one tool among many. But no scientist thinks you can solve this problem by crunching data alone, no matter how powerful the statistical analysis; you will always need to start with an analysis that relies on an understanding of physics and biochemistry.
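That first point about correlations is easy to reproduce. The sketch below computes a Pearson coefficient for two invented series that merely trend in the same direction (illustrative numbers, not the real murder-rate or browser-share data); any two strongly trending series will score near ±1, causation or not.

    from statistics import mean

    def pearson(xs, ys):
        """Pearson correlation coefficient of two equal-length series."""
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var_x = sum((x - mx) ** 2 for x in xs)
        var_y = sum((y - my) ** 2 for y in ys)
        return cov / (var_x ** 0.5 * var_y ** 0.5)

    # Invented series for 2006-2011: both decline steadily.
    murder_rate = [5.8, 5.7, 5.4, 5.0, 4.8, 4.7]
    ie_share = [80, 75, 68, 60, 52, 45]

    print(round(pearson(murder_rate, ie_share), 3))  # ~0.99, yet no causation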

April 3, 2014

ESR reviews Jeremy Rifkin’s latest book

Filed under: Economics, Media, Technology — Nicholas Russon @ 10:46

The publisher sent a copy of The Zero Marginal Cost Society along with a note that Rifkin himself wanted ESR to receive a copy (because Rifkin thinks ESR is a good representative of some of the concepts in the book). ESR isn’t impressed:

In this book, Rifkin is fascinated by the phenomenon of goods for which the marginal cost of production is zero, or so close to zero that it can be ignored. All of the present-day examples of these he points at are information goods — software, music, visual art, novels. He joins this to the overarching obsession of all his books, which are variations on a theme of “Let us write an epitaph for capitalism”.

In doing so, Rifkin effectively ignores what capitalists do and what capitalism actually is. “Capital” is wealth paying for setup costs. Even for pure information goods those costs can be quite high. Music is a good example; it has zero marginal cost to reproduce, but the first copy is expensive. Musicians must own expensive instruments, be paid to perform, and require other capital goods such as recording studios. If those setup costs are not reliably priced into the final good, production of music will not remain economically viable.

[...]

Rifkin cites me in his book, but it is evident that he almost completely misunderstood my arguments in two different ways, both of which bear on the premises of his book.

First, software has a marginal cost of production that is effectively zero, but that’s true of all software rather than just open source. What makes open source economically viable is the strength of secondary markets in support and related services. Most other kinds of information goods don’t have these. Thus, the economics favoring open source in software are not universal even in pure information goods.

Second, even in software — with those strong secondary markets — open-source development relies on the capital goods of software production being cheap. When computers were expensive, the economics of mass industrialization and its centralized management structures ruled them. Rifkin acknowledges that this is true of a wide variety of goods, but never actually grapples with the question of how to pull capital costs of those other goods down to the point where they no longer dominate marginal costs.

There are two other, much larger, holes below the waterline of Rifkin’s thesis. One is that atoms are heavy. The other is that human attention doesn’t get cheaper as you buy more of it. In fact, the opposite tends to be true — which is exactly why capitalists can make a lot of money by substituting capital goods for labor.

These are very stubborn cost drivers. They’re the reason Rifkin’s breathless hopes for 3-D printing will not be fulfilled. Because 3-D printers require feedstock, the marginal cost of producing goods with them has a floor well above zero. That ABS plastic, or whatever, has to be produced. Then it has to be moved to where the printer is. Then somebody has to operate the printer. Then the finished good has to be moved to the point of use. None of these operations has a cost that is driven to zero, or near zero at scale. 3-D printing can increase efficiency by outcompeting some kinds of mass production, but it can’t make production costs go away.
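Both objections reduce to simple arithmetic: average cost per unit is the setup cost spread over volume plus the per-unit marginal cost, so it falls toward the marginal cost but never below it. A minimal sketch with invented numbers:

    def average_cost(setup, marginal, units):
        """Average cost per unit: amortized setup plus marginal cost."""
        return setup / units + marginal

    # Invented numbers: $50,000 of studio time, near-zero cost per copy...
    for units in (1, 100, 10_000, 1_000_000):
        print(f"{units:>9} copies -> ${average_cost(50_000, 0.0, units):,.2f} each")

    # ...versus a 3-D printed part, where feedstock and handling set a floor.
    for units in (1, 100, 10_000, 1_000_000):
        print(f"{units:>9} parts  -> ${average_cost(2_000, 4.50, units):,.2f} each")

The first series falls toward zero; the second flattens out at the feedstock-and-labour floor, which is the cost Rifkin’s thesis leaves out.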

April 2, 2014

Enigma’s 21st century open sourced descendent

Filed under: History, Military, Technology — Nicholas Russon @ 09:51

The Enigma device was used by the German military in World War 2 to encrypt and decrypt communication between units and headquarters on land and at sea. Original Enigma units — the few that are on the market at any time — sell for tens of thousands of dollars. You may not be able to afford an original, but you might be interested in a modern implementation of Enigma using Arduino-based open-source hardware and software:

Actual hand-crafted final design

Enigma machines have captivated everyone from legendary code breaker Alan Turing and the dedicated cryptographers from England’s Bletchley Park to historians and collectors the world over.

But while many history buffs would surely love to get their hands on an authentic Enigma machine used during WWII, the devices aren’t exactly affordable (last year, a 1944 German Enigma machine was available for auction at Bonhams with an estimated worth of up to $82,000). Enter the Open Enigma Project, a kit for building one from scratch.

The idea came to Marc Tessier and James Sanderson from S&T Geotronics by accident.

“We were working on designing and building intelligent Arduino-based open-source geocaching devices to produce a unique interactive challenge at an upcoming Geocaching Mega Event,” Tessier told Crave. “A friend of ours suggested we use an Enigma type encrypting/decrypting machine as the ultimate stage of the challenge and pointed us to an Instructables tutorial that used a kid’s toy to provide some Enigma encoding. We looked all over to buy a real Enigma machine even if we had to assemble it ourselves and realized that there was nothing available at the moment. So we decided to build our own.”

[...]

“Our version is an electronic microprocessor-based machine that is running software which is a mathematical expression of how the historical mechanical machine behaved,” Sanderson told Crave. “Having never touched a real Enigma M4, we built our open version based on what we read online. From what we understand, the real electro-mechanical devices are much heavier and a little bigger.”

They took some design liberties — replacing the physical rotors with LED units and replacing the light bulbs with white LEDs. The replica can be modified by changing the Arduino code and can communicate to any computer via USB. Future versions may include Wi-Fi and/or Bluetooth.
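For the curious, the core of the machine is small enough to sketch. Below is a single-rotor toy in Python (the replica itself runs Arduino code, and a real M4 has four rotors plus a plugboard, so this is a simplification) using the historical Rotor I and Reflector B wirings; because the reflector pairs letters symmetrically, running the ciphertext back through with the same starting position recovers the plaintext.

    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    ROTOR_I = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"    # historical Rotor I wiring
    REFLECTOR = "YRUHQSLDPXNGOKMIEBFZCWVJAT"  # historical Reflector B
    ROTOR_INV = [ROTOR_I.index(c) for c in ALPHABET]  # inverse wiring

    def enigma(text, start_pos=0):
        """Single-rotor toy Enigma: rotor -> reflector -> rotor inverse."""
        pos, out = start_pos, []
        for ch in text.upper():
            if ch not in ALPHABET:
                continue
            pos = (pos + 1) % 26  # the rotor steps before each keypress
            c = ALPHABET.index(ch)
            c = (ALPHABET.index(ROTOR_I[(c + pos) % 26]) - pos) % 26  # in
            c = ALPHABET.index(REFLECTOR[c])                          # reflect
            c = (ROTOR_INV[(c + pos) % 26] - pos) % 26                # out
            out.append(ALPHABET[c])
        return "".join(out)

    cipher = enigma("HELLO WORLD")
    print(cipher)          # scrambled text
    print(enigma(cipher))  # HELLOWORLD again (spaces are dropped)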

People are less inclined to shop or bank online after NSA surveillance reports

Filed under: Business, Government, Technology — Nicholas Russon @ 08:46

Among the side-effects of government surveillance revelations, ordinary people are deciding to be a bit less involved in online activities, according to a new Harris Poll:

Online banking and shopping in America are being negatively impacted by ongoing revelations about the National Security Agency’s digital surveillance activities. That is the clear implication of a recent ESET-commissioned Harris poll which asked more than 2,000 U.S. adults ages 18 and older whether or not, given the news about the NSA’s activities, they have changed their approach to online activity.

Almost half of respondents (47%) said that they have changed their online behavior and think more carefully about where they go, what they say, and what they do online.

When it comes to specific Internet activities, such as email or online banking, this change in behavior translates into a worrying trend for the online economy: over one quarter of respondents (26%) said that, based on what they have learned about secret government surveillance, they are now doing less banking online and less online shopping. This shift in behavior is not good news for companies that rely on sustained or increased use of the Internet for their business model.

[...]

Whether or not we have seen the full extent of the public’s reaction to state-sponsored mass surveillance is hard to predict, but based on this survey and the one we did last year, I would say that, if the NSA revelations continue – and I am sure they will – and if government reassurances fail to impress the public, then it is possible that the trends in behavior we are seeing right now will continue. For example, I do not see many people finding reassurance in President Obama’s recently announced plan to transfer the storage of millions of telephone records from the government to private phone companies. As we will document in our next installment of survey findings, data gathering by companies is even more of a privacy concern for some Americans than government surveillance.

And in case anyone is tempted to think that this is a narrow issue of concern only to news junkies and security geeks, let me be clear: according to this latest survey, 85% of adult Americans are now at least somewhat familiar with the news about secret government surveillance of private citizens’ phone calls, emails, online activity, and so on.

March 26, 2014

Minimum-wage jobs becoming more likely to be replaced by robots

Filed under: Economics, Technology — Nicholas Russon @ 07:44

Everyone seems to want to raise the minimum wage right now (well, everyone in the media certainly), but it might backfire spectacularly on the very people it’s supposed to help:

It’s become commonplace for computers to replace American workers — think about those on an assembly line and in toll booths — but two University of Oxford professors have come to a surprising conclusion: Waitresses, fast-food workers and others earning at or near the minimum wage should also be on alert.

President Obama’s proposal to increase the federal minimum wage from $7.25 to $10.10 per hour could make it worthwhile for employers to adopt emerging technologies to do the work of their low-wage workers. But can a robot really do a janitor’s job? Can software fully replace a fast-food worker? Economists have long considered these low-skilled, non-routine jobs as less vulnerable to technological replacement, but until now, quantitative estimates of a job’s vulnerability have been missing from the debate.

Based on a 2013 paper by Carl Benedikt Frey and Michael A. Osborne of Oxford [PDF], occupations in the U.S. that pay at or near the minimum wage — that’s about one of every six workers in the U.S. — are much more susceptible to “computerization,” or as defined by the authors, “job automation by means of computer-controlled equipment.” The researchers considered a time frame of 20 years, and they measured whether such jobs could be computerized, not whether these jobs will be computerized. The latter involves assumptions about economic feasibility and social acceptance that go beyond mere technology.

The minimum-wage occupations that Frey and Osborne think are most vulnerable include, not surprisingly, telemarketers, sales clerks and cashiers. But also included are occupations that employ a large share of the low-wage workforce, such as waiters and waitresses, food-preparation workers and cooks. If the computerization of these low-wage jobs becomes feasible, and if employers find it economical to invest in such labor-saving technology, there will be huge implications for the U.S. labor force.

H/T to Colby Cosh, who said “McDonald’s is going to turn into vending machines. Can’t say this enough. McDonald’s…vending machines.”

Oculus in the news

Filed under: Business, Media, Technology — Nicholas Russon @ 07:33

Raph Koster reflects on the promise of Oculus:

Rendering was never the point.

Oh, it’s hard. But it’s rapidly becoming commodity hardware. That was in fact the basic premise of the Oculus Rift: that the mass market commodity solution for a very old dream was finally approaching a price point where it made sense. The patents were expiring; the panels were cheap and getting better by the month. The rest was plumbing. Hard plumbing, the sort that calls for a Carmack, maybe, but plumbing.

[...]

Look, there are a few big visions for the future of computing doing battle.

There’s a wearable camp, full of glasses and watches. It’s still nascent, but its doom is already waiting in the wings; biocomputing of various sorts (first contacts, then implants, nano, who knows) will unquestionably win out over time, just because glasses and watches are what tech has been removing from us, not getting us to put back on. Google has its bets down here.

There’s a beacon-y camp, one where mesh networks and constant broadcasts label and dissect everything around us, blaring ads and enticing us with sales coupons as we walk through malls. In this world, everything is annotated and shouting at a digital level, passing messages back and forth. It’s an ubicomp environment where everything is “smart.” Apple has its bets down here.

These two things are going to get married. One is the mouth, the other the ears. One is the poke, the other the skin. And then we’re in a cyberpunk dream of ads that float next to us as we walk, getting between us and the other people, our every movement mined for Big Data.

[...]

The virtue of Oculus lies in presence. A startling, unusual sort of presence. Immersion is nice, but presence is something else again. Presence is what makes Facebook feel like a conversation. Presence is what makes you hang out on World of Warcraft. Presence is what makes offices persist in the face of more than enough capability for remote work. Presence is why a video series can out-draw a text-based MOOC and presence is why live concerts can make more money than album sales.

Facebook is laying its bet on people, instead of smart objects. It’s banking on the idea that doing things with one another online — the thing that has fueled it all this time — is going to keep being important. This is a play to own walking through Machu Picchu without leaving home, a play to own every classroom and every museum. This is a play to own what you do with other people.

Update: Apparently some of the folks who backed the original Kickstarter campaign have their panties in a bunch now that there’s big money involved.


[Photo: Attendees wear Oculus Rift HD virtual reality head-mounted displays as they play EVE: Valkyrie, a multiplayer virtual reality dogfighting shooter game, at the Intel booth at the 2014 International CES, January 9, 2014 in Las Vegas, Nevada. Robyn Beck/AFP/Getty Images]

Facebook’s purchase of virtual reality company Oculus for $2bn in stocks and shares is big news for a third company: Kickstarter, which today celebrates the first billion-dollar exit of a company formed through the crowdfunding platform.

Oculus raised $2.4m for its Rift headset in September 2012, exceeding its initial fundraising goal by 10 times. It remains one of the largest ever Kickstarter campaigns.

But as news of the acquisition broke Tuesday night, some of the 9,500 people who backed the project for sums of up to $5,000 apiece (the most popular package, containing an early prototype of the Rift, was backed by 5,600 people for a more reasonable $300) were rethinking their support.

[...]

For Kickstarter itself, the purchase raises awkward questions. The company has always maintained that it should not be viewed as a storefront for pre-ordering products; instead, a backer should be aware that they are giving money to a struggling artist or designer, and view the reward as a thanks rather than a purchase.

“Kickstarter Is Not a Store” is how the New York-based company put it in 2012, shortly after the Oculus Rift campaign closed. Instead, the company explained: “It’s a new way for creators and audiences to work together to make things.”

But if Kickstarter isn’t a store, and if backers also aren’t getting equity in the company which uses their money to build a $2bn business, then what are they actually paying for?

“Structurally I have an issue with it,” explains Buckenham, “in that the backer takes on a great deal of risk for relatively little upside and that the energy towards exciting things is formalised into a necessarily cash-based relationship in a way that enforces and extends capitalism into places where it previously didn’t have total dominion.”

March 25, 2014

Tech culture and ageism

Filed under: Business, Technology, USA — Nicholas Russon @ 07:56

Noam Scheiber examines the fanatic devotion to youth in (some parts of) the high tech culture:

Silicon Valley has become one of the most ageist places in America. Tech luminaries who otherwise pride themselves on their dedication to meritocracy don’t think twice about deriding the not-actually-old. “Young people are just smarter,” Facebook CEO Mark Zuckerberg told an audience at Stanford back in 2007. As I write, the website of ServiceNow, a large Santa Clara–based I.T. services company, features the following advisory in large letters atop its “careers” page: “We Want People Who Have Their Best Work Ahead of Them, Not Behind Them.”

And that’s just what gets said in public. An engineer in his forties recently told me about meeting a tech CEO who was trying to acquire his company. “You must be the token graybeard,” said the CEO, who was in his late twenties or early thirties. “I looked at him and said, ‘No, I’m the token grown-up.’”

Investors have also become addicted to the youth movement:

The economics of the V.C. industry help explain why. Investing in new companies is fantastically risky, and even the best V.C.s fail a large majority of the time. That makes it essential for the returns on successes to be enormous. Whereas a 500 percent return on a $2 million investment (or “5x,” as it’s known) would be considered remarkable in any other line of work, the investments that sustain a large V.C. fund are the “unicorns” and “super-unicorns” that return 100x or 1,000x — the Googles and the Facebooks.

And this is where finance meets what might charitably be called sociology but is really just Silicon Valley mysticism. Finding themselves in the position of chasing 100x or 1,000x returns, V.C.s invariably tell themselves a story about youngsters. “One of the reasons they collectively prefer youth is because youth has the potential for the black swan,” one V.C. told me of his competitors. “It hasn’t been marked down to reality yet. If I was at Google for five years, what’s the chance I would be a black swan? A lot lower than if you never heard of me. That’s the collective mentality.”
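The arithmetic behind that mentality fits in a few lines. A toy model (invented numbers, nothing from the article): in a fund where most equal-sized bets go to zero, the overall multiple is set almost entirely by whether one outlier hits.

    def fund_multiple(outcomes):
        """Overall return multiple for a fund of equal-sized bets."""
        return sum(outcomes) / len(outcomes)

    # 20 bets: most fail outright, a few limp along...
    no_unicorn = [0] * 15 + [1, 1, 2, 3, 5]
    # ...and the same portfolio with one 100x "unicorn" instead of the 5x.
    with_unicorn = [0] * 15 + [1, 1, 2, 3, 100]

    print(fund_multiple(no_unicorn))    # 0.6  -- the fund loses money
    print(fund_multiple(with_unicorn))  # 5.35 -- one outlier carries it all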

Some of the corporate cultures sound more like playgroups than workgroups:

Whatever the case, the veneration of youth in Silicon Valley now seems way out of proportion to its usefulness. Take Dropbox, which an MIT alumnus named Drew Houston co-founded in 2007, after he got tired of losing access to his files whenever he forgot a thumb drive. Dropbox quickly caught on among users and began to vacuum up piles of venture capital. But the company has never quite outgrown its dorm-room vibe, even now that it houses hundreds of employees in an 85,000-square-foot space. Dropbox has a full-service jamming studio and observes a weekly ritual known as whiskey Fridays. Job candidates have complained about being interviewed in conference rooms with names like “The Break-up Room” and the “Bromance Chamber.” (A spokesman says the names were recently changed.)

Once a year, Houston, who still wears his chunky MIT class ring, presides over “Hack Week,” during which Dropbox headquarters turns into the world’s best-capitalized rumpus room. Employees ride around on skateboards and scooters, play with Legos at all hours, and generally tool around with whatever happens to interest them, other than work, which they are encouraged to set aside. “I’ve been up for about forty hours working on Dropbox Jeopardy,” one engineer told a documentarian who filmed a recent Hack Week. “It’s close to nearing insanity, but it feels worth it.”

It’s safe to say that the reigning sensibility at Dropbox has conquered more or less every corner of the tech world. The ping-pong playing can be ceaseless. The sexual mores are imported from college—“They’ll say something like, ‘This has been such a long day. I have to go out and meet some girls, hook up tonight,’ ” says one fortysomething consultant to several start-ups. And the vernacular is steroidally bro-ish. Another engineer in his forties who recently worked at a crowdsourcing company would steel himself anytime he reviewed a colleague’s work. “In programming, you need a throw-away variable,” the engineer explained to me. “So you come up with something quick.” With his co-workers “it would always be ‘dong’ this, ‘dick’ that, ‘balls’ this.”

There’s also the blind spot about having too many youth-focussed firms in the same market:

The most common advice V.C.s give entrepreneurs is to solve a problem they encounter in their daily lives. Unfortunately, the problems the average 22-year-old male programmer has experienced are all about being an affluent single guy in Northern California. That’s how we’ve ended up with so many games (Angry Birds, Flappy Bird, Crappy Bird) and all those apps for what one start-up founder described to me as cooler ways to hang out with friends on a Saturday night.

H/T to Kathy Shaidle for the link.

March 23, 2014

The march of technology and the future of work

Filed under: Business, Technology — Nicholas Russon @ 10:28

Matt Ridley on the perpetual fretting that technological change will eliminate jobs and leave many permanently without work:

Bill Gates voiced a thought in a speech last week that is increasingly troubling America’s technical elite — that technology is about to make many, many people redundant. Advances in software, he said, will reduce demand for jobs, substituting robots for drivers, waiters or nurses.

The last time that I was in Silicon Valley I found the tech-heads fretting about this in direct proportion to their optimism about technology. That is to say, the more excited they are that the “singularity” is near — the moment when computers become so clever at making themselves even cleverer that the process accelerates to infinity — the more worried they are that there will be mass unemployment as a result.

This is by no means a new worry:

In the 1700s four in every five workers were employed on a farm. Thanks to tractors and combine harvesters, only one in fifty still works in farming, yet more people are at work than ever before. By 1850 the majority of jobs were in manufacturing. Today fewer than one in seven is. Yet Britain manufactures twice as much stuff by value as it did 60 years ago. In 1900 vast numbers of women worked in domestic service and were about to see their mangles and dusters mechanised. Yet more women have jobs than ever before.

Again and again technology has disrupted old work patterns and produced more, not less, work — usually at higher wages in more pleasant surroundings.

The followers of figures such as Ned Ludd, who smashed weaving looms, and Captain Swing, who smashed threshing machines (and, for that matter, Arthur Scargill) suffered unemployment and hardship in the short term but looked back later, or their children did, with horror at the sort of drudgery from which technology had delivered them.

Why should this next wave of technology be different? It’s partly that it is closer to home for the intelligentsia. (Unkind jibe: there’s a sort of frisson running through the chatterati now that people they actually know might lose their jobs to machines, rather than the working class.) Indeed, the jobs that look safest from robots are probably at the bottom of the educational heap: cooks, gardeners, maids. After many years’ work, Berkeley researchers have built a robot that can fold a towel — it takes 24 minutes.

March 16, 2014

Defining hackers and hacker culture

Filed under: History, Technology — Nicholas Russon @ 09:49

ESR put this together as a backgrounder for a documentary film maker:

In its original and still most correct sense, the word “hacker” describes a member of a tribe of expert and playful programmers with roots in 1960s and 1970s computer-science academia, the early microcomputer experimenters, and several other contributory cultures including science-fiction fandom.

Through a historical process I could explain in as much detail as you like, this hacker culture became the architects of today’s Internet and evolved into the open-source software movement. (I had a significant role in this process as historian and activist, which is why my friends recommended that you talk to me.)

People outside this culture sometimes refer to it as “old-school hackers” or “white-hat hackers” (the latter term also has some more specific shades of meaning). People inside it (including me) insist that we are just “hackers” and using that term for anyone else is misleading and disrespectful.

Within this culture, “hacker” applied to an individual is understood to be a title of honor which it is arrogant to claim for yourself. It has to be conferred by people who are already insiders. You earn it by building things, by a combination of work and cleverness and the right attitude. Nowadays “building things” centers on open-source software and hardware, and on the support services for open-source projects.

There are — seriously — people in the hacker culture who refuse to describe themselves individually as hackers because they think they haven’t earned the title yet — they haven’t built enough stuff. One of the social functions of tribal elders like myself is to be seen to be conferring the title, a certification that is taken quite seriously; it’s like being knighted.

[...]

There is a cluster of geek subcultures within which the term “hacker” has very high prestige. If you think about my earlier description it should be clear why. Building stuff is cool, it’s an achievement.

There is a tendency for members of those other subcultures to try to appropriate hacker status for themselves, and to emulate various hacker behaviors — sometimes superficially, sometimes deeply and genuinely.

Imitative behavior creates a sort of gray zone around the hacker culture proper. Some people in that zone are mere posers. Some are genuinely trying to act out hacker values as they (incompletely) understand them. Some are ‘hacktivists’ with Internet-related political agendas but who don’t write code. Some are outright criminals exploiting journalistic confusion about what “hacker” means. Some are ambiguous mixtures of several of these types.
