Quotulatiousness

December 5, 2014

Ross Perot (of all people) and one of the earliest real computers

Filed under: History, Technology, USA, WW2 — Nicholas @ 00:02

At Wired, Brendan I. Koerner talks about the odd circumstances which led to H. Ross Perot being instrumental in saving an iconic piece of computer history:

Eccentric billionaires are tough to impress, so their minions must always think big when handed vague assignments. Ross Perot’s staffers did just that in 2006, when their boss declared that he wanted to decorate his Plano, Texas, headquarters with relics from computing history. Aware that a few measly Apple I’s and Altair 8800’s wouldn’t be enough to satisfy a former presidential candidate, Perot’s people decided to acquire a more singular prize: a big chunk of ENIAC, the “Electronic Numerical Integrator And Computer.” The ENIAC was a 27-ton, 1,800-square-foot bundle of vacuum tubes and diodes that was arguably the world’s first true computer. The hardware that Perot’s team diligently unearthed and lovingly refurbished is now accessible to the general public for the first time, back at the same Army base where it almost rotted into oblivion.

ENIAC was conceived in the thick of World War II, as a tool to help artillerymen calculate the trajectories of shells. Though construction began a year before D-Day, the computer wasn’t activated until November 1945, by which time the U.S. Army’s guns had fallen silent. But the military still found plenty of use for ENIAC as the Cold War began — the machine’s 17,468 vacuum tubes were put to work by the developers of the first hydrogen bomb, who needed a way to test the feasibility of their early designs. The scientists at Los Alamos later declared that they could never have achieved success without ENIAC’s awesome computing might: the machine could execute 5,000 instructions per second, a capability that made it a thousand times faster than the electromechanical calculators of the day. (An iPhone 6, by contrast, can zip through 25 billion instructions per second.)

When the Army declared ENIAC obsolete in 1955, however, the historic invention was treated with scant respect: its 40 panels, each of which weighed an average of 858 pounds, were divvied up and strewn about with little care. Some of the hardware landed in the hands of folks who appreciated its significance — the engineer Arthur Burks, for example, donated his panel to the University of Michigan, and the Smithsonian managed to snag a couple of panels for its collection, too. But as Libby Craft, Perot’s director of special projects, found out to her chagrin, much of ENIAC vanished into disorganized warehouses, a bit like the Ark of the Covenant at the end of Raiders of the Lost Ark.

Lost in the bureaucracy

An ENIAC technician changes a tube. (Photo: US Army)

November 25, 2014

When was it exactly that “progress stopped”?

Filed under: Environment, Health, Media, Technology — Nicholas @ 00:05

Scott Alexander wrote this back in July. I think it’s still relevant as a useful perspective-enhancer:

The year 1969 comes up to you and asks what sort of marvels you’ve got all the way in 2014.

You explain that cameras, which 1969 knows as bulky boxes full of film that takes several days to get developed in dark rooms, are now instant affairs of point-click-send-to-friend that are also much higher quality. Also they can take video.

Music used to be big expensive records, and now you can fit 3,000 songs on an iPod and get them all for free if you know how to pirate or scrape the audio off of YouTube.

Television not only has gone HDTV and plasma-screen, but your choices have gone from “whatever’s on now” and “whatever is in theaters” all the way to “nearly every show or movie that has ever been filmed, whenever you want it”.

Computers have gone from structures filling entire rooms with a few Kb memory and a punchcard-based interface, to small enough to carry in one hand with a few Tb memory and a touchscreen-based interface. And they now have peripherals like printers, mice, scanners, and flash drives.

Lasers have gone from only working in special cryogenic chambers to working at room temperature to fitting in your pocket to being ubiquitous in things as basic as supermarket checkout counters.

Telephones have gone from rotary-dial wire-connected phones that still sometimes connected to switchboards, to cell phones that fit in a pocket. But even better is bypassing them entirely and making video calls with anyone anywhere in the world for free.

Robots now vacuum houses, mow lawns, clean office buildings, perform surgery, participate in disaster relief efforts, and drive cars better than humans. Occasionally if you are a bad person a robot will swoop down out of the sky and kill you.

For better or worse, video games now exist.

Medicine has gained CAT scans, PET scans, MRIs, lithotripsy, liposuction, laser surgery, robot surgery, and telesurgery. Vaccines for pneumonia, meningitis, hepatitis, HPV, and chickenpox. Ceftriaxone, furosemide, clozapine, risperidone, fluoxetine, ondansetron, omeprazole, naloxone, suboxone, mefloquine – and for that matter Viagra. Artificial hearts, artificial livers, artificial cochleae, and artificial legs so good that their users can compete in the Olympics. People with artificial eyes can only identify vague shapes at best, but they’re getting better every year.

World population has tripled, in large part due to new agricultural advances. Catastrophic disasters have become much rarer, in large part due to architectural advances and satellites that can watch the weather from space.

We have a box which you can type something into and it will tell you everything anyone has ever written relevant to your query.

We have a place where you can log into from anywhere in the world and get access to approximately all human knowledge, from the scores of every game in the 1956 Roller Hockey World Cup to 85 different side effects of an obsolete antipsychotic medication. It is all searchable instantaneously. Its main problem is that people try to add so much information to it that its (volunteer) staff are constantly busy deleting information that might be extraneous.

We have the ability to translate nearly any major human language to any other major human language instantaneously at no cost with relatively high accuracy.

We have navigation technology that over fifty years has gone from “map and compass” to “you can say the name of your destination and a small box will tell you step by step which way you should be going”.

We have the aforementioned camera, TV, music, videophone, video games, search engine, encyclopedia, universal translator, and navigation system all bundled together into a small black rectangle that fits in your pocket, responds to your spoken natural-language commands, and costs so little that Ethiopian subsistence farmers routinely use them to sell their cows.

But, you tell 1969, we have something more astonishing still. Something even more unimaginable.

“We have,” you say, “people who believe technology has stalled over the past forty-five years.”

1969’s head explodes.

November 21, 2014

Elon Musk’s constant nagging worry

Filed under: Business, Technology — Nicholas @ 07:14

In the Washington Post, Justin Moyer talks about Elon Musk’s concern about runaway artificial intelligence:

Elon Musk — the futurist behind PayPal, Tesla and SpaceX — has been caught criticizing artificial intelligence again.

“The risk of something seriously dangerous happening is in the five year timeframe,” Musk wrote in a comment since deleted from the Web site Edge.org, but confirmed to Re/Code by his representatives. “10 years at most.”

The very future of Earth, Musk said, was at risk.

“The leading AI companies have taken great steps to ensure safety,” he wrote. “They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen.”

Musk seemed to sense that these comments might seem a little weird coming from a Fortune 1000 chief executive officer.

“This is not a case of crying wolf about something I don’t understand,” he wrote. “I am not alone in thinking we should be worried.”

Unfortunately, Musk didn’t explain how humanity might be compromised by “digital superintelligences,” “Terminator”-style.

He never does. Yet Musk has been holding forth on-and-off about the apocalypse artificial intelligence might bring for much of the past year.

November 17, 2014

An online font specially designed to help dyslexics read more accurately

Filed under: Media, Technology — Nicholas @ 00:02

On the LMB mailing list, Marc Wilson shared a link to a free downloadable Dyslexia Font:

Dyslexie font

November 13, 2014

Where’s the rimshot?

Filed under: Humour, Technology — Nicholas @ 09:37

Marc Wilson posted this to the Lois McMaster Bujold mailing list (off-topic, obviously):

Apparently the inventor of predictive text has died.

His funfair will be on Sundial.

October 24, 2014

Google Design open sources some icons

Filed under: Media, Technology — Nicholas @ 07:19

If you have a need for system icons and don’t want to create your own (or, like me, you have no artistic skills), you might want to look at a recent Google Design set that is now open source:

Today, Google Design are open-sourcing 750 glyphs as part of the Material Design system icons pack. The system icons contain icons commonly used across different apps, such as icons used for media playback, communication, content editing, connectivity, and so on. They’re equally useful when building for the web, Android or iOS.

Google Design open source icons

October 21, 2014

A different approach to building your own PC case

Filed under: Technology, Woodworking — Nicholas @ 00:02

Published on 13 Nov 2012

In this video I show the features of my homemade silent wooden PC case, and how I built it.

Most silent PCs compromise on speed for silence, but not this one. Specs:

i7 2600K @ 4.4 GHz
GTX 460
24GB RAM
3TB HDD space + SSD

October 20, 2014

Marc Andreessen still thinks optimism is the right attitude

Filed under: Technology, USA — Nicholas @ 07:14

In NYMag, Kevin Roose talks to Marc Andreessen on a range of topics:

It’s not hard to coax an opinion out of Marc Andreessen. The tall, bald, spring-loaded venture capitalist, who invented the first mainstream internet browser, co-founded Netscape, then made a fortune as an early investor in Twitter and Facebook, has since become Silicon Valley’s resident philosopher-king. He’s ubiquitous on Twitter, where his machine-gun fusillade of bold, wide-ranging proclamations has attracted an army of acolytes (and gotten him in some very big fights). At a controversial moment for the tech industry, Andreessen is the sector’s biggest cheerleader and a forceful advocate for his peculiar brand of futurism.

I love this moment where you’re meeting Mark Zuckerberg for the first time and he says to you something like, “What was Netscape?”

He didn’t know.

He was in middle school when you started Netscape. What’s it like to work in an industry where the turnover is so rapid that ten years can create a whole new collective memory?

I think it’s fantastic. For example, I think there’s sort of two Silicon Valleys right now. There’s the Silicon Valley of the people who were here during the 2000 crash, and there’s the Silicon Valley of the people who weren’t, and the psychology is actually totally different. Those of us who were here in 2000 have, like, scar tissue, because shit went wrong and it sucked.

You came to Silicon Valley in 1994. What was it like?

It was dead. Dead in the water. There had been this PC boom in the ’80s, and it was gigantic—that was Apple and Intel and Microsoft up in Seattle. And then the American economic recession hit—in ’88, ’89—and that was on the heels of the rapid ten-year rise of Japan. Silicon Valley had had this sort of brief shining moment, but Japan was going to take over everything. And that’s when the American economy went straight into a ditch. You’d pick up the newspaper, and it was just endless misery and woe. Technology in the U.S. is dead; economic growth in the U.S. is dead. All of the American kids were Gen-X slackers — no ambition, never going to do anything.

October 4, 2014

The “Herod Clause” to get free Wi-Fi

Filed under: Britain, Business, Humour, Law, Technology — Nicholas @ 10:48

I missed this earlier in the week (and it smells “hoax-y”, but it’s too good to check):

A handful of Londoners in some of the capital’s busiest districts unwittingly agreed to give up their eldest child, during an experiment exploring the dangers of public Wi-Fi use.

The experiment, which was backed by European law enforcement agency Europol, involved a group of security researchers setting up a Wi-Fi hotspot in June.

When people connected to the hotspot, the terms and conditions they were asked to sign up to included a “Herod clause” promising free Wi-Fi but only if “the recipient agreed to assign their first born child to us for the duration of eternity”. Six people signed up.

F-Secure, the security firm that sponsored the experiment, has confirmed that it won’t be enforcing the clause.

“We have yet to enforce our rights under the terms and conditions but, as this is an experiment, we will be returning the children to their parents,” wrote the Finnish company in its report.

“Our legal advisor Mark Deem points out that — while terms and conditions are legally binding — it is contrary to public policy to sell children in return for free services, so the clause would not be enforceable in a court of law.”

Ultimately, the research, organised by the Cyber Security Research Institute, sought to highlight public unawareness of serious security issues concomitant with Wi-Fi usage.

September 19, 2014

QotD: Faster computers, and why we “need” ’em

Filed under: Quotations, Technology — Nicholas @ 00:01

I recall, in the very early days of the personal computer, articles, in magazines like Personal Computer World, which expressed downright opposition to the idea of technological progress in general, and progress in personal computers in particular. There was apparently a market for such notions, in the very magazines that you would think would be most gung-ho about new technology and new computers. Maybe the general atmosphere of gung-ho-ness created a significant enough minority of malcontents that the editors felt they needed to nod regularly towards it. I guess it does make sense that the biggest grumbles about the hectic pace of technological progress would be heard right next to the places where it is happening most visibly.

Whatever the reasons were for such articles being in computer magazines, I distinctly remember their tone. I have recently, finally, got around to reading Virginia Postrel’s The Future and Its Enemies, and she clearly identifies the syndrome. The writers of these articles were scared of the future and wanted that future prevented, perhaps by law but mostly just by a sort of universal popular rejection of it, a universal desire to stop the world and to get off it. “Do we really need” (the words “we” and “need” cropped up in these PCW pieces again and again), faster central processors, more RAM, quicker printers, snazzier and bigger and sharper and more colourful screens, greater “user friendliness”, …? “Do we really need” this or that new programme that had been reported in the previous month’s issue? What significant and “real” (as opposed to frivolous and game-related) problems could there possibly be that demanded such super-powerful, super-fast, super-memorising and of course, at that time, super-expensive machines for their solution? Do we “really need” personal computers to develop, in short, in the way that they have developed, since these grumpy anti-computer-progress articles first started being published in computer progress magazines?

The usual arguments in favour of fast and powerful, and now mercifully far cheaper, computers concern the immensity of the gobs of information that can now be handled, quickly and powerfully, by machines like the ones that we have now, as opposed to what could be handled by the first wave of personal computers, which could manage a small spreadsheet or a short text file or a very primitive computer game, but very little else. And of course that is true. I can now shovel vast quantities of photographs (a particular enthusiasm of mine) hither and thither, processing the ones I feel inclined to process in ways that only Hollywood studios used to be able to do. I can make and view videos (although I mostly stick to viewing). And I can access and even myself add to that mighty cornucopia that is the internet. And so on. All true. I can remember when even the most primitive of photos would only appear on my screen after several minutes of patient or not-so-patient waiting. Videos? Dream on. Now, what a world of wonders we can all inhabit. In another quarter of a century, what wonders will there then be, all magicked in a flash into our brains and onto our desks, if we still have desks. The point is, better computers don’t just mean doing the same old things a bit faster; they mean being able to do entirely new things as well, really well.

Brian Micklethwait, “Why fast and powerful computers are especially good if you are getting old”, Samizdata, 2014-09-17.

July 25, 2014

QotD: The singularity already happened

Filed under: Media, Quotations, Technology — Nicholas @ 00:01

The gulf that separates us from the near past is now so great that we cannot really imagine how one could design a spacecraft, or learn engineering in the first place, or even just look something up, without a computer and a network. Journalists my age will understand how profound and disturbing this break in history is: Do you remember doing your job before Google? It was, obviously, possible, since we actually did it, but how? It is like having a past life as a conquistador or a phrenologist.

Colby Cosh, “Who will be the moonwalkers of tomorrow?”, Maclean’s, 2014-07-24.

July 15, 2014

The attraction (and danger) of computer-based models

Filed under: Environment, Science, Technology — Nicholas @ 00:02

Warren Meyer explains why computer models can be incredibly useful tools, but they are not the same thing as an actual proof:

    Among the objections, including one from Green Party politician Chit Chong, were that Lawson’s views were not supported by evidence from computer modeling.

I see this all the time. A lot of things astound me in the climate debate, but perhaps the most astounding has been to be accused of being “anti-science” by people who have such a poor grasp of the scientific process.

Computer models and their output are not evidence of anything. Computer models are extremely useful when we have hypotheses about complex, multi-variable systems. It may not be immediately obvious how to test these hypotheses, so computer models can take these hypothesized formulas and generate predicted values of measurable variables that can then be used to compare to actual physical observations.
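
To make that workflow concrete, here is a minimal sketch (my own toy example, not anything from Meyer's post): a hypothesized formula generates predicted values, which are then scored against observed data. The model, its "sensitivity" parameter, and the observations are all invented for illustration.

    import math

    def hypothesized_model(x, sensitivity):
        # Hypothesized formula: the response grows logarithmically with the input.
        return sensitivity * math.log(x)

    # Stand-in observations; in real work these come from physical measurement, not code.
    observations = [(1.0, 0.1), (2.0, 1.5), (4.0, 2.9), (8.0, 4.4)]

    def rmse(sensitivity):
        # Root-mean-square error between the model's predictions and the observations.
        errors = [(hypothesized_model(x, sensitivity) - y) ** 2 for x, y in observations]
        return math.sqrt(sum(errors) / len(errors))

    # The comparison step: how well does the hypothesis, at various parameter
    # values, reproduce what was actually measured?
    for s in (1.0, 2.0, 3.0):
        print(f"sensitivity={s}: RMSE = {rmse(s):.2f}")

The point of the exercise is the comparison at the end: the model's output only becomes evidence for or against the hypothesis when it is checked against measurements gathered outside the model.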

[…]

The other problem with computer models, besides the fact that they are not and cannot constitute evidence in and of themselves, is that their results are often sensitive to small changes in tuning or setting of variables, and that these decisions about tuning are often totally opaque to outsiders.

I did computer modelling for years, though of markets and economics rather than climate. But the techniques are substantially the same. And the pitfalls.

Confession time. In my very early days as a consultant, I did something I am not proud of. I was responsible for a complex market model based on a lot of market research and customer service data. Less than a day before the big presentation, and with all the charts and conclusions made, I found a mistake that skewed the results. In later years I would have the moral courage and confidence to cry foul and halt the process, but at the time I ended up tweaking a few key variables to make the model continue to spit out results consistent with our conclusion. It is embarrassing enough I have trouble writing this for public consumption 25 years later.

But it was so easy. A few tweaks to assumptions and I could get the answer I wanted. And no one would ever know. Someone could stare at the model for an hour and not recognize the tuning.
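
Meyer's confession translates directly into code. Here is a toy sketch (entirely invented, not his market model) of how a barely visible tuning constant can flip a model's headline recommendation:

    def projected_market_share(base_share, growth_rate, years, churn_factor):
        # Compound a market-share projection forward; churn_factor is the buried tuning knob.
        share = base_share
        for _ in range(years):
            share = share * (1 + growth_rate) * churn_factor
        return share

    # Identical inputs, two tuning values an outside reviewer would never think to question.
    for churn in (0.97, 0.99):
        result = projected_market_share(base_share=0.10, growth_rate=0.08, years=10, churn_factor=churn)
        verdict = "enter the market" if result > 0.18 else "stay out"
        print(f"churn_factor={churn}: projected share {result:.1%} -> {verdict}")

A two-point nudge to an unremarked constant moves the projected share from about 16% to about 20% and reverses the recommendation, which is exactly why opaque tuning matters.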

June 18, 2014

This is why computer security folks look so frustrated

Filed under: Technology — Nicholas @ 07:41

It’s not that the “security” part of the job is so wearing … it’s that people are morons:

Security white hats, despair: users will run dodgy executables if they are paid as little as one cent.

Even more would allow their computers to become infected by botnet software nasties if the price was increased to five or 10 cents. Offer a whole dollar and you’ll secure a herd of willing internet slaves.

The demoralising findings come from a study led by Nicolas Christin, research professor at Carnegie Mellon University’s CyLab, which baited users with a benign Windows executable offered under the guise of contributing to a (fictitious) study.

It was downloaded 1,714 times and 965 users actually ran the code. The application ran a timer simulating an hour’s computational tasks after which a token for payment would be generated.

The researchers collected information on user machines, discovering that many of the predominantly US and Indian user machines were already infected with malware despite having security systems installed, and that users were happy to click past Windows’ User Account Control warning prompts.

The presence of malware actually increased on machines running the latest patches and infosec tools in what was described as an indication of users’ false sense of security.

April 17, 2014

QotD: User interface design for all ages

Filed under: Humour, Media, Technology — Nicholas @ 07:51

As your body staggers down the winding road to death, user interfaces that require fighter pilot-grade eyesight, the dexterity of a neurosurgeon, and the mental agility of Derren Brown, are going to screw with you at some point.

Don’t kid yourself otherwise — disability, in one form or another, can strike at any moment.

Given that people are proving ever harder to kill off, you can expect to have decades of life ahead of you — during which you’ll be battling to figure out where on the touchscreen that trendy transdimensional two-pixel wide “OK” button is hiding.

Can you believe, people born today will spend their entire lives having to cope with this crap? The only way I can explain the web design of many Google products today is that some wannabe Picasso stole Larry Page’s girl when they were all 13, and is only now exacting his revenge. Nobody makes things that bad by accident, surely?

Dominic Connor, “Is tech the preserve of the young able-bodied? Let’s talk over a fine dinner and claret”, The Register, 2014-04-17

April 16, 2014

QotD: The wizards of the web

Filed under: Business, Quotations, Technology — Nicholas @ 08:28

You would have thought this would have sunk in by now. The fact that it hasn’t shows what an extraordinary machine the internet is — quite different to any technology that has gone before it. When the Lovebug struck, few of us lived our lives online. Back then we banked in branches, shopped in shops, met friends and lovers in the pub and obtained jobs by posting CVs. Tweeting was for the birds. Cyberspace was marginal. Now, for billions, the online world is their lives. But there is a problem. Only a tiny, tiny percentage of the people who use the internet have even the faintest clue about how any of it works. “SSL”, for instance, stands for “Secure Sockets Layer”.

I looked it up and sort of understood it — for about five minutes. While most drivers have at least a notion of how an engine works (something about petrol exploding in cylinders and making pistons go up and down and so forth) the very language of the internet — “domain names” and “DNS codes”, endless “protocols” and so forth — is arcane, exclusive; it is, in fact, the language of magic. For all intents and purposes the internet is run by wizards.

And the trouble with letting wizards run things is that when things go wrong we are at their mercy. The world spends several tens of billions of pounds a year on anti-malware programs, which we are exhorted to buy lest the walls of our digital castles collapse around us. Making security software is a huge industry, and whenever there is a problem — either caused by viruses or by a glitch like Heartbleed — the internet security companies rush to be quoted in the media. And guess what, their message is never “keep calm and carry on”. As Professor Ross Anderson of Cambridge University says: “Almost all the cost of cybercrime is the cost of anticipation.”

Michael Hanlon, “Relax, Mumsnet users: don’t lose sleep over Heartbleed hysteria”, Telegraph, 2014-04-16
