Quotulatiousness

November 23, 2014

ESR on how to learn hacking

Filed under: Technology — Nicholas @ 10:59

Eric S. Raymond has been asked to write this document for years, and he’s finally given in to the demand:

What Is Hacking?

The “hacking” we’ll be talking about in this document is exploratory programming in an open-source environment. If you think “hacking” has anything to do with computer crime or security breaking and came here to learn that, you can go away now. There’s nothing for you here.

Hacking is a style of programming, and following the recommendations in this document can be an effective way to acquire general-purpose programming skills. This path is not guaranteed to work for everybody; it appears to work best for those who start with an above-average talent for programming and a fair degree of mental flexibility. People who successfully learn this style tend to become generalists with skills that are not strongly tied to a particular application domain or language.

Note that one can be doing hacking without being a hacker. “Hacking”, broadly speaking, is a description of a method and style; “hacker” implies that you hack, and are also attached to a particular culture or historical tradition that uses this method. Properly, “hacker” is an honorific bestowed by other hackers.

Hacking doesn’t have enough formal apparatus to be a full-fledged methodology in the way the term is used in software engineering, but it does have some characteristics that tend to set it apart from other styles of programming.

  • Hacking is done on open source. Today, hacking skills are the individual micro-level of what is called “open source development” at the social macro-level. A programmer working in the hacking style expects and readily uses peer review of source code by others to supplement and amplify his or her individual ability.
  • Hacking is lightweight and exploratory. Rigid procedures and elaborate a priori specifications have no place in hacking; instead, the tendency is try-it-and-find-out with a rapid release tempo.
  • Hacking places a high value on modularity and reuse. In the hacking style, you try hard never to write a piece of code that can only be used once. You bias towards making general tools or libraries that can be specialized into what you want by freezing some arguments/variables or supplying a context (see the sketch just after this list).
  • Hacking favors scrap-and-rebuild over patch-and-extend. An essential part of hacking is ruthlessly throwing away code that has become overcomplicated or crufty, no matter how much time you have invested in it.
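
To make the “freezing arguments” bullet concrete, here is a minimal Python sketch of specializing a general tool through partial application (the function and its parameters are invented for illustration):

    from functools import partial

    def fetch(url, timeout, retries):
        """A hypothetical general-purpose tool with several knobs."""
        print(f"GET {url} (timeout={timeout}s, retries={retries})")

    # Specialize the general tool by freezing some of its arguments:
    # a context-specific helper falls out of the generic one for free.
    fetch_patiently = partial(fetch, timeout=60, retries=5)

    fetch_patiently("https://example.org/big-tarball")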

The hacking style has been closely associated with the technical tradition of the Unix operating system.

Recently it has become evident that hacking blends well with the “agile programming” style. Agile techniques such as pair programming and feature stories adapt readily to hacking and vice-versa. In part this is because the early thought leaders of agile were influenced by the open source community. But there has since been traffic in the other direction as well, with open-source projects increasingly adopting techniques such as test-driven development.

November 16, 2014

In Civilization games, Gandhi is the most likely leader to use nukes

Filed under: Gaming, India — Nicholas @ 00:03

In Kotaku, Luke Plunkett explains why of all the AI leaders in the game, none are more likely to espouse the philosophy “nuke ’em ’till they glow, then shoot ’em in the dark” than India’s Gandhi:

[Image: Civilization V - Gandhi]

In the original Civilization, it was because of a bug. Each leader in the game had an “aggression” rating, and Gandhi – to best reflect his real-world persona – was given the lowest score possible, a 1, so low that he’d rarely if ever go out of his way to declare war on someone.

Only, there was a problem. When a player adopted democracy in Civilization, their aggression would be automatically reduced by 2. Code being code, if Gandhi went democratic his aggression wouldn’t go to -1; it looped back around to the ludicrously high figure of 255, making him as aggressive as a civilization could possibly be.
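
The arithmetic behind that wraparound is easy to reproduce. If the aggression score lived in an unsigned 8-bit field, subtracting past zero wraps to the top of the range instead of going negative; here is a minimal Python sketch (the game’s actual code is not public, so the field layout and names are assumptions):

    def lower_aggression(aggression, penalty=2):
        """Apply a penalty to a score stored as an unsigned 8-bit
        value: results below zero wrap around to 255."""
        return (aggression - penalty) & 0xFF  # emulate uint8 underflow

    print(lower_aggression(1))  # 255 -- Gandhi after adopting democracy
    print(lower_aggression(5))  # 3   -- an ordinary leader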

In later games this bug was obviously not an issue, but as a tribute/easter egg of sorts, parts of his white-hot rage have been kept around. In Civilization V, for example, while Gandhi’s regular diplomatic approach is more peaceful than other leaders, he’s also the most likely to go dropping a-bombs when pushed, with a nuke “rating” of 12 putting him well ahead of the competition (the next three most likely to go nuclear have a rating of 8, with most leaders around the 4-6 region).

Update, 16 November: Fixed the broken link.

October 11, 2014

QotD: The economies (and dis-economies) of scale

Filed under: Business, Economics — Nicholas @ 00:01

We are the heirs of the Industrial Revolution, and, of course, the Industrial Revolution was all about economies of scale. Its efficiencies and advances were made possible by banding people together in larger and larger amalgamations, and we invented all sorts of institutions — from corporations to municipal governments — to do just that.

This process continues to this day. In its heyday, General Motors employed about 500,000 people; Wal-Mart employs more than twice that now. We continue to urbanize, depopulating the Great Plains and repopulating downtowns. Our most successful industry — technology — is driven by unprecedented economies of scale that allow a handful of programmers to make squintillions selling some software applications to half the world’s population.

This has left us, I think, with a cultural tendency to assume that everything is subject to economies of scale. You find this as much on the left as the right, about everything from government programs to corporations. People just take it as naturally given that making a company or an institution or a program bigger will drive cost efficiencies that allow them to get bigger still.

Of course, this often is the case. Facebook is better off with 2 billion customers than 1 billion, and a program that provides health insurance to everyone over the age of 65 has lower per-user overhead than a program that provides health insurance to 200 homeless drug users in Atlanta. I’m not trying to suggest that economies of scale don’t exist, only that not every successful model enjoys them. In fact, many successful models enjoy diseconomies of scale: After a certain point, the bigger you get, the worse you do.

Megan McArdle, “In-N-Out Doesn’t Want to Be McDonald’s”, Bloomberg View, 2014-10-02.

September 30, 2014

“These bugs were found – and were findable – because of open-source scrutiny”

Filed under: Technology — Nicholas @ 08:13

ESR talks about the visibility problem in software bugs:

The first thing to notice here is that these bugs were found – and were findable – because of open-source scrutiny.

There’s a “things seen versus things unseen” fallacy here that gives bugs like Heartbleed and Shellshock false prominence. We don’t know – and can’t know – how many far worse exploits lurk in proprietary code known only to crackers or the NSA.

What we can project based on other measures of differential defect rates suggests that, however imperfect “many eyeballs” scrutiny is, “few eyeballs” or “no eyeballs” is far worse.

September 20, 2014

Can you trust Apple’s new commitment to your privacy?

Filed under: Business, Technology — Nicholas @ 12:32

David Akin posted a list of questions posed by John Gilmore, challenging Apple’s iOS 8 cryptography promises:

Gilmore considered what Apple said and considered how Apple creates its software — a closed, secret, proprietary method — and what coders like him know about the code that Apple says protects our privacy — pretty much nothing — and then wrote the following for distribution on Dave Farber’s Interesting People listserv. I’m pretty sure neither Farber nor Gilmore will begrudge me reproducing it.

    And why do we believe [Apple]?

    • Because we can read the source code and the protocol descriptions ourselves, and determine just how secure they are?
    • Because they’re a big company and big companies never lie?
    • Because they’ve implemented it in proprietary binary software, and proprietary crypto is always stronger than the company claims it to be?
    • Because they can’t covertly send your device updated software that would change all these promises, for a targeted individual, or on a mass basis?
    • Because you will never agree to upgrade the software on your device, ever, no matter how often they send you updates?
    • Because this first release of their encryption software has no security bugs, so you will never need to upgrade it to retain your privacy?
    • Because if a future update INSERTS privacy or security bugs, we will surely be able to distinguish these updates from future updates that FIX privacy or security bugs?
    • Because if they change their mind and decide to lessen our privacy for their convenience, or by secret government edict, they will be sure to let us know?
    • Because they have worked hard for years to prevent you from upgrading the software that runs on their devices so that YOU can choose it and control it instead of them?
    • Because the US export control bureaucracy would never try to stop Apple from selling secure mass market proprietary encryption products across the border?
    • Because the countries that wouldn’t let Blackberry sell phones that communicate securely with your own corporate servers will of course let Apple sell whatever high security non-tappable devices it wants to?
    • Because we’re Apple fanboys and the company can do no wrong?
    • Because they want to help the terrorists win?
    • Because NSA made them mad once, therefore they are on the side of the public against NSA?
    • Because it’s always better to wiretap people after you convince them that they are perfectly secure, so they’ll spill all their best secrets?

    There must be some other reason, I’m just having trouble thinking of it.

July 11, 2014

Remember the Y2K bug? It just caused draft notices to be sent to thousands of dead men

Filed under: Bureaucracy, Government, USA — Nicholas @ 07:57

This is a story that rightfully should have been published at the beginning of April (except it actually happened):

A year 2000-related bug has caused the US military to send more than 14,000 letters of conscription to men who were all born in the 1800s and died decades ago.

Shocked residents of Pennsylvania began receiving letters ordering their great-grandparents to register for the US military draft on pain of “fine and imprisonment.”

“I said, ‘Geez, what the hell is this about?’” Chuck Huey, 73, of Kingston, Pennsylvania told the Associated Press when he received a letter for his late grandfather Bert Huey, born in 1894 and a first world war veteran who died at the age of 100 in 1995.

“It said he was subject to heavy fines and imprisonment if he didn’t sign up for the draft board,” exclaimed Huey. “We were just totally dumbfounded.”

The US Selective Service System, which sent the letters in error, automatically handles the drafting of US citizens and other US residents who are eligible for conscription. The cause of the error was narrowed down to a Y2K-like bug in the Pennsylvania Department of Transportation (PDT).

A clerk at the PDT failed to select a century during the transfer of 400,000 records to the Selective Service, so men born in the 1890s were recorded as though they had been born in the 1990s.
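
The underlying failure is the classic two-digit-year trap: a year stored as “94” is meaningless until somebody supplies a century, and supplying the wrong default silently shifts every birthdate by a hundred years. A minimal Python sketch of the ambiguity (the record layout is invented for illustration):

    def expand_year(two_digit_year, century=1900):
        """A two-digit year needs an explicit century; defaulting one
        in quietly turns an 1894 birth into a 1994 one."""
        return century + two_digit_year

    born = 94  # e.g. Bert Huey, born in 1894
    print(expand_year(born))                # 1994 -- draft age, per the mailing
    print(expand_year(born, century=1800))  # 1894 -- the correct reading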

June 18, 2014

The liability concern in the future of driverless cars

Filed under: Law, Technology — Nicholas @ 08:04

Tim Worstall asks when it would be appropriate for your driverless car to kill you:

Owen Barder points out a quite delightful problem that we’re all going to have to come up with some collective answer to over the driverless cars coming from Google and others. Just when is it going to be acceptable that the car kills you, the driver, or someone else? This is a difficult public policy question and I’m really not sure who the right people to be trying to solve it are. We could, I guess, given that it is a public policy question, turn it over to the political process. It is, after all, there to decide on such questions for us. But given the power of the tort bar over that process I’m not sure that we’d actually like the answer we got. For it would most likely mean that we never do get driverless cars, at least not in the US.

The basic background here is that driverless cars are likely to be hugely safer than the current human directed versions. For most accidents come about as a result of driver error. So, we expect the number of accidents to fall considerably as the technology rolls out. This is great, we want this to happen. However, we’re not going to end up with a world of no car accidents. Which leaves us with the problem of how we program the cars to behave when an accident is unavoidable.

[…]

So we actually end up with two problems here. The first being the one that Barder has outlined, which is that there’s an ethical question to be answered over how the programming decisions are made. Seriously, under what circumstances should a driverless car, made by Google or anyone else, be allowed to kill you or anyone else? The basic Trolley Problem is easy enough, kill fewer people by preference. But when one is necessary, which one? And then a second problem which is that the people who have done the coding are going to have to take legal liability for that decision they’ve made. And given the ferocity of the plaintiff’s bar at times I’m not sure that anyone will really be willing to make that decision and thus adopt that potential liability.

Clearly, this needs to be sorted out at the political level. Laws need to be made clarifying the situation. And hands up everyone who thinks that the current political gridlock is going to manage that in a timely manner?

Quite.

June 9, 2014

QotD: Economic modelling

Filed under: Economics, Gaming, Quotations — Nicholas @ 07:03

If we think of ourselves as empiricists who judge the value of the theory on the basis of how well it predicts, then we should have ditched economic models years ago. Never have our models managed, with data, to predict the major turning points, ever, in the history of capitalism. So if we were honest, we should simply accept that and rethink our approach.

But actually, I think they’re even worse. We can’t even predict the past very well using our models. Economic models are failing to model the past in a way that can explain the past. So what we end up doing with our economic models is retrofitting the data and our own prejudices about how the economy works.

This is why I’m saying that this profession of mine is not really anywhere near astronomy. It’s much closer to mathematized superstition, organized superstition, which has a priesthood that replicates itself on the basis of how well we learn the rituals.

Yanis Varoufakis, talking to Peter Suderman, “A Multiplayer Game Environment Is Actually a Dream Come True for an Economist”, Reason, 2014-05-30.

June 4, 2014

Sarcasm-detecting software wanted

Filed under: Media, Technology — Nicholas @ 09:02

Charles Stross discusses some of the second-order effects should the US Secret Service actually get the sarcasm-detection software they’re reportedly looking for:

… But then the Internet happened, and it just so happened to coincide with a flowering of highly politicized and canalized news media channels such that at any given time, whoever is POTUS, around 10% of the US population are convinced that they’re a baby-eating lizard-alien in a fleshsuit who is plotting to bring about the downfall of civilization, rather than a middle-aged male politician in a business suit.

Well now, here’s the thing: automating sarcasm detection is easy. It’s so easy they teach it in first year computer science courses; it’s an obvious application of AI. (You just get your Turing-test-passing AI that understands all the shared assumptions and social conventions that human-human conversation relies on to identify those statements that explicitly contradict beliefs that the conversationalist implicitly holds. So if I say “it’s easy to earn a living as a novelist” and the AI knows that most novelists don’t believe this and that I am a member of the set of all novelists, the AI can infer that I am being sarcastic. Or I’m an outlier. Or I’m trying to impress a date. Or I’m secretly plotting to assassinate the POTUS.)
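
Stross’s parenthetical, reduced to a deliberately naive toy: flag a statement as sarcastic when it contradicts a belief held by a group the speaker belongs to. A Python sketch with invented data (the gap between this toy and the Turing-test-passing AI it presupposes is, of course, the joke):

    # Toy knowledge base the hypothetical AI is assumed to possess.
    GROUP_BELIEFS = {
        "novelists": {"it is easy to earn a living as a novelist": False},
    }
    MEMBERSHIPS = {"charlie": {"novelists"}}

    def looks_sarcastic(speaker, claim):
        """Naive rule: the speaker asserts something a group they
        belong to believes to be false. (Or they're an outlier, or
        trying to impress a date, or plotting something...)"""
        return any(
            GROUP_BELIEFS.get(group, {}).get(claim) is False
            for group in MEMBERSHIPS.get(speaker, ())
        )

    print(looks_sarcastic("charlie", "it is easy to earn a living as a novelist"))  # True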

Of course, we in the real world know that shaved apes like us never saw a system we didn’t want to game. So in the event that sarcasm detectors ever get a false positive rate of less than 99% (or a false negative rate of less than 1%) I predict that everybody will start deploying sarcasm as a standard conversational gambit on the internet.

Wait … I thought everyone already did?

Trolling the secret service will become a competitive sport, the goal being to not receive a visit from the SS in response to your totally serious threat to kill the resident of 1600 Pennsylvania Avenue. Al Qaida terrrrst training camps will hold tutorials on metonymy, aggressive irony, cynical detachment, and sarcasm as a camouflage tactic for suicide bombers. Post-modernist pranks will draw down the full might of law enforcement by mistake, while actual death threats go encoded as LOLCat macros. Any attempt to algorithmically detect sarcasm will fail because sarcasm is self-referential and the awareness that a sarcasm detector may be in use will change the intent behind the message.

As the very first commenter points out, a problem with this is that a substantial proportion of software developers (as indicated by their position on the Asperger/Autism spectrum) find it very difficult to detect sarcasm in real life…

Bruce Schneier on the human side of the Heartbleed vulnerability

Filed under: Technology — Nicholas @ 07:24

Reposting at his own site an article he did for The Mark News:

The announcement on April 7 was alarming. A new Internet vulnerability called Heartbleed could allow hackers to steal your logins and passwords. It affected a piece of security software that is used on half a million websites worldwide. Fixing it would be hard: It would strain our security infrastructure and the patience of users everywhere.

It was a software insecurity, but the problem was entirely human.

Software has vulnerabilities because it’s written by people, and people make mistakes — thousands of mistakes. This particular mistake was made in 2011 by a German graduate student who was one of the unpaid volunteers working on a piece of software called OpenSSL. The update was approved by a British consultant.

In retrospect, the mistake should have been obvious, and it’s amazing that no one caught it. But even though thousands of large companies around the world used this critical piece of software for free, no one took the time to review the code after its release.

The mistake was discovered around March 21, 2014, and was reported on April 1 by Neel Mehta of Google’s security team, who quickly realized how potentially devastating it was. Two days later, in an odd coincidence, researchers at a security company called Codenomicon independently discovered it.

When a researcher discovers a major vulnerability in a widely used piece of software, he generally discloses it responsibly. Why? As soon as a vulnerability becomes public, criminals will start using it to hack systems, steal identities, and generally create mayhem, so we have to work together to fix the vulnerability quickly after it’s announced.

May 9, 2014

QotD: Real history and economic modelling

Filed under: Economics, History, Media, Quotations — Nicholas @ 08:23

I am not an economist. I am an economic historian. The economist seeks to simplify the world into mathematical models — in Krugman’s case models erected upon the intellectual foundations laid by John Maynard Keynes. But to the historian, who is trained to study the world “as it actually is”, the economist’s model, with its smooth curves on two axes, looks like an oversimplification. The historian’s world is a complex system, full of non-linear relationships, feedback loops and tipping points. There is more chaos than simple causation. There is more uncertainty than calculable risk. For that reason, there is simply no way that anyone — even Paul Krugman — can consistently make accurate predictions about the future. There is, indeed, no such thing as the future, just plausible futures, to which we can only attach rough probabilities. This is a caveat I would like ideally to attach to all forward-looking conjectural statements that I make. It is the reason I do not expect always to be right. Indeed, I expect often to be wrong. Success is about having the judgment and luck to be right more often than you are wrong.

Niall Ferguson, “Why Paul Krugman should never be taken seriously again”, The Spectator, 2013-10-13

April 11, 2014

Open source software and the Heartbleed bug

Filed under: Technology — Nicholas @ 07:03

Some people are claiming that the Heartbleed bug proves that open source software is a failure. ESR quickly addresses that idiotic claim:

I actually chuckled when I read the rumor that the few anti-open-source advocates still standing were crowing about the Heartbleed bug, because I’ve seen this movie before after every serious security flap in an open-source tool. The script, which includes a bunch of people indignantly exclaiming that many-eyeballs is useless because bug X lurked in a dusty corner for Y months, is so predictable that I can anticipate a lot of the lines.

The mistake being made here is a classic example of Frédéric Bastiat’s “things seen versus things unseen”. Critics of Linus’s Law overweight the bug they can see and underweight the high probability that equivalently positioned closed-source security flaws they can’t see are actually far worse, just so far undiscovered.

That’s how it seems to go whenever we get a hint of the defect rate inside closed-source blobs, anyway. As a very pertinent example, in the last couple months I’ve learned some things about the security-defect density in proprietary firmware on residential and small business Internet routers that would absolutely curl your hair. It’s far, far worse than most people understand out there.

[…]

Ironically enough this will happen precisely because the open-source process is working … while, elsewhere, bugs that are far worse lurk in closed-source router firmware. Things seen vs. things unseen…

Returning to Heartbleed, one thing conspicuously missing from the downshouting against OpenSSL is any pointer to an implementation that is known to have a lower defect rate over time. This is for the very good reason that no such empirically-better implementation exists. What is the defect history on proprietary SSL/TLS blobs out there? We don’t know; the vendors aren’t saying. And we can’t even estimate the quality of their code, because we can’t audit it.

The response to the Heartbleed bug illustrates another huge advantage of open source: how rapidly we can push fixes. The repair for my Linux systems was a push-one-button fix less than two days after the bug hit the news. Proprietary-software customers will be lucky to see a fix within two months, and all too many of them will never see a fix at all.

Update: There are lots of sites offering tools to test whether a given site is vulnerable to the Heartbleed bug, but you need to step carefully there, as there’s a thin line between what’s legal in some countries and what counts as an illegal break-in attempt:

Websites and tools that have sprung up to check whether servers are vulnerable to OpenSSL’s mega-vulnerability Heartbleed have thrown up anomalies in computer crime law on both sides of the Atlantic.

Both the US Computer Fraud and Abuse Act and its UK equivalent, the Computer Misuse Act, make it an offence to test the security of third-party websites without permission.

Testing to see what version of OpenSSL a site is running, and whether it also supports the vulnerable Heartbeat protocol, would be legal. But doing anything more active — without permission from website owners — would take security researchers onto the wrong side of the law.
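
The passive end of that legal spectrum can be as simple as reading what a server already volunteers. A minimal Python sketch (the hostname is a placeholder; some servers advertise their OpenSSL build in the Server header, and many sensibly do not):

    import urllib.request

    # Passive check: read the Server header the site sends with every
    # response. Probing the Heartbeat extension itself is the "more
    # active" territory the article warns about.
    with urllib.request.urlopen("https://example.org/") as resp:
        print(resp.headers.get("Server", "not advertised"))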

And you shouldn’t just rush out and change all your passwords right now (you’ll probably need to do it, but the timing matters):

Heartbleed is a catastrophic bug in the widely used OpenSSL library that creates a means for attackers to lift passwords, crypto-keys and other sensitive data from the memory of secure server software, 64KB at a time. The mega-vulnerability was patched earlier this week, and software should be updated to use the new version, 1.0.1g. But to fully clean up the problem, admins of at-risk servers should generate new public-private key pairs, destroy their session cookies, and update their SSL certificates before telling users to change every potentially compromised password on the vulnerable systems.
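
As a sketch of the first of those clean-up steps, here is one way to generate a replacement key pair in Python with the third-party cryptography package (an assumption for illustration; in practice most admins would reach for the openssl command-line tool):

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # The old private key must be assumed stolen; generate a new one.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Serialize it for the web server; a new CSR and certificate, plus
    # invalidated session cookies, complete the cleanup.
    pem = key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.TraditionalOpenSSL,
        encryption_algorithm=serialization.NoEncryption(),
    )
    print(pem.decode().splitlines()[0])  # -----BEGIN RSA PRIVATE KEY-----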

April 3, 2014

ESR reviews Jeremy Rifkin’s latest book

Filed under: Books, Economics, Media, Technology — Nicholas @ 10:46

The publisher sent a copy of The Zero Marginal Cost Society along with a note that Rifkin himself wanted ESR to receive a copy (because Rifkin thinks ESR is a good representative of some of the concepts in the book). ESR isn’t impressed:

In this book, Rifkin is fascinated by the phenomenon of goods for which the marginal cost of production is zero, or so close to zero that it can be ignored. All of the present-day examples of these he points at are information goods — software, music, visual art, novels. He joins this to the overarching obsession of all his books, which are variations on a theme of “Let us write an epitaph for capitalism”.

In doing so, Rifkin effectively ignores what capitalists do and what capitalism actually is. “Capital” is wealth paying for setup costs. Even for pure information goods those costs can be quite high. Music is a good example; it has zero marginal cost to reproduce, but the first copy is expensive. Musicians must own expensive instruments, be paid to perform, and require other capital goods such as recording studios. If those setup costs are not reliably priced into the final good, production of music will not remain economically viable.

[…]

Rifkin cites me in his book, but it is evident that he almost completely misunderstood my arguments in two different ways, both of which bear on the premises of his book.

First, software has a marginal cost of production that is effectively zero, but that’s true of all software rather than just open source. What makes open source economically viable is the strength of secondary markets in support and related services. Most other kinds of information goods don’t have these. Thus, the economics favoring open source in software are not universal even in pure information goods.

Second, even in software — with those strong secondary markets — open-source development relies on the capital goods of software production being cheap. When computers were expensive, the economics of mass industrialization and its centralized management structures ruled them. Rifkin acknowledges that this is true of a wide variety of goods, but never actually grapples with the question of how to pull capital costs of those other goods down to the point where they no longer dominate marginal costs.

There are two other, much larger, holes below the waterline of Rifkin’s thesis. One is that atoms are heavy. The other is that human attention doesn’t get cheaper as you buy more of it. In fact, the opposite tends to be true — which is exactly why capitalists can make a lot of money by substituting capital goods for labor.

These are very stubborn cost drivers. They’re the reason Rifkin’s breathless hopes for 3-D printing will not be fulfilled. Because 3-D printers require feedstock, the marginal cost of producing goods with them has a floor well above zero. That ABS plastic, or whatever, has to be produced. Then it has to be moved to where the printer is. Then somebody has to operate the printer. Then the finished good has to be moved to the point of use. None of these operations has a cost that is driven to zero, or near zero at scale. 3-D printing can increase efficiency by outcompeting some kinds of mass production, but it can’t make production costs go away.

March 25, 2014

Tech culture and ageism

Filed under: Business, Technology, USA — Nicholas @ 07:56

Noam Scheiber examines the fanatic devotion to youth in (some parts of) the high tech culture:

Silicon Valley has become one of the most ageist places in America. Tech luminaries who otherwise pride themselves on their dedication to meritocracy don’t think twice about deriding the not-actually-old. “Young people are just smarter,” Facebook CEO Mark Zuckerberg told an audience at Stanford back in 2007. As I write, the website of ServiceNow, a large Santa Clara–based I.T. services company, features the following advisory in large letters atop its “careers” page: “We Want People Who Have Their Best Work Ahead of Them, Not Behind Them.”

And that’s just what gets said in public. An engineer in his forties recently told me about meeting a tech CEO who was trying to acquire his company. “You must be the token graybeard,” said the CEO, who was in his late twenties or early thirties. “I looked at him and said, ‘No, I’m the token grown-up.’”

Investors have also become addicted to the youth movement:

The economics of the V.C. industry help explain why. Investing in new companies is fantastically risky, and even the best V.C.s fail a large majority of the time. That makes it essential for the returns on successes to be enormous. Whereas a 500 percent return on a $2 million investment (or “5x,” as it’s known) would be considered remarkable in any other line of work, the investments that sustain a large V.C. fund are the “unicorns” and “super-unicorns” that return 100x or 1,000x — the Googles and the Facebooks.

And this is where finance meets what might charitably be called sociology but is really just Silicon Valley mysticism. Finding themselves in the position of chasing 100x or 1,000x returns, V.C.s invariably tell themselves a story about youngsters. “One of the reasons they collectively prefer youth is because youth has the potential for the black swan,” one V.C. told me of his competitors. “It hasn’t been marked down to reality yet. If I was at Google for five years, what’s the chance I would be a black swan? A lot lower than if you never heard of me. That’s the collective mentality.”

Some of the corporate cultures sound more like playgroups than workgroups:

Whatever the case, the veneration of youth in Silicon Valley now seems way out of proportion to its usefulness. Take Dropbox, which an MIT alumnus named Drew Houston co-founded in 2007, after he got tired of losing access to his files whenever he forgot a thumb drive. Dropbox quickly caught on among users and began to vacuum up piles of venture capital. But the company has never quite outgrown its dorm-room vibe, even now that it houses hundreds of employees in an 85,000-square-foot space. Dropbox has a full-service jamming studio and observes a weekly ritual known as whiskey Fridays. Job candidates have complained about being interviewed in conference rooms with names like “The Break-up Room” and the “Bromance Chamber.” (A spokesman says the names were recently changed.)

Once a year, Houston, who still wears his chunky MIT class ring, presides over “Hack Week,” during which Dropbox headquarters turns into the world’s best-capitalized rumpus room. Employees ride around on skateboards and scooters, play with Legos at all hours, and generally tool around with whatever happens to interest them, other than work, which they are encouraged to set aside. “I’ve been up for about forty hours working on Dropbox Jeopardy,” one engineer told a documentarian who filmed a recent Hack Week. “It’s close to nearing insanity, but it feels worth it.”

It’s safe to say that the reigning sensibility at Dropbox has conquered more or less every corner of the tech world. The ping-pong playing can be ceaseless. The sexual mores are imported from college—“They’ll say something like, ‘This has been such a long day. I have to go out and meet some girls, hook up tonight,’ ” says one fortysomething consultant to several start-ups. And the vernacular is steroidally bro-ish. Another engineer in his forties who recently worked at a crowdsourcing company would steel himself anytime he reviewed a colleague’s work. “In programming, you need a throw-away variable,” the engineer explained to me. “So you come up with something quick.” With his co-workers “it would always be ‘dong’ this, ‘dick’ that, ‘balls’ this.”

There’s also the blind spot about having too many youth-focussed firms in the same market:

The most common advice V.C.s give entrepreneurs is to solve a problem they encounter in their daily lives. Unfortunately, the problems the average 22-year-old male programmer has experienced are all about being an affluent single guy in Northern California. That’s how we’ve ended up with so many games (Angry Birds, Flappy Bird, Crappy Bird) and all those apps for what one start-up founder described to me as cooler ways to hang out with friends on a Saturday night.

H/T to Kathy Shaidle for the link.

December 11, 2013

I’ve heard all of these responses many, many times

Filed under: Humour, Technology — Nicholas @ 11:08

This was posted to Google+ the other day, and it’s pretty accurate:

[Image: Programmer top 20 replies]

