Quotulatiousness

December 18, 2014

Admiral Grace Hopper

Filed under: History, Military, Technology, USA — Nicholas @ 00:04

The US Naval Institute posted an article about the one and only Admiral Grace Hopper earlier this month to mark her birthday:

The typical career arc of a naval officer may run 25 to 30 years. Most, however, don’t start at age 35. Yet when it comes to Rear Adm. Grace Hopper, well, the word “typical” just doesn’t apply.

Feisty. Eccentric. Maverick. Brilliant. Precise. Grace Hopper embodied all of those descriptions and more, but perhaps what defined her as much as anything else was the pride she had in wearing the Navy uniform for 43 years. Ironically, Rear Adm. Grace Hopper — “Amazing Grace” as she was known — had to fight to get into the Navy.

Grace Brewster Murray was born into a well-off family in New York on Dec. 9, 1906. She could have followed the path many of her peers took in those times: attending college for a year or two, getting married, then devoting their lives to their families and volunteer work.

Instead, Grace’s path would be the one less traveled. Encouraged to explore her innate curiosity about how things worked, a 7-year-old Grace dismantled all of the family’s alarm clocks while trying to put them back together again. Rather than being banned from the practice, she was allowed one clock to practice on.

[…]

When she joined the WAVES in December 1943, Lt. j.g. Grace Hopper was 37 years old. Williams noted that after graduating at the top of her class of 800 officer candidates in June 1944, Hopper paid homage to her great-grandfather, Alexander Wilson Russell — an admiral who apparently took a “dim view of women and cats” in the Navy — by laying flowers on his grave to “comfort and reassure him.”

Hopper was sent to the Bureau of Ordnance Computation Project at Harvard University under the guidance of Howard Aiken. The Harvard physics and applied mathematics professor helped create the first Automatic Sequence Controlled Calculator (ASCC), better known as Mark I. He ran a lab where calculations for the design, testing, modification and analysis of weapons were performed, mostly by specially trained women called computers. “So the first ‘computers’ were women who did the calculating on desk calculators,” Williams said. And the time it took for the computers to calculate was called “girl hours.”

What happened next put Hopper on a new path that would define the rest of her life, according to a passage in the book Improbable Warriors: Women Scientists in the U.S. Navy during World War II also by Williams.

On July 2, 1944, Hopper reported to duty and met Aiken.

“That’s a computing engine,” Aiken snapped at Hopper, pointing to the Mark I. “I would be delighted to have the coefficients for the interpolation of the arc tangent by next Thursday.”

Hopper was a mathematician, but what she wasn’t was a computer programmer. Aiken gave her a codebook, and as Hopper put it, a week to learn “how to program the beast and get a program running.”

Hopper overcame her lack of programming skills the same way she always tackled other obstacles: by being persistent and stopping at nothing to solve problems. She eventually became well-versed in how the machine operated — all 750,000 parts, 530 miles of wire and 3 million wire connections crammed into a machine that was 8 feet tall and 50 feet wide.
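Hopper’s first assignment — the coefficients for interpolating the arc tangent — is the sort of computation that is easy to sketch with modern tools. Here is a minimal Newton divided-difference interpolation of atan in Python; the interval and node spacing are my own choices for illustration, not anything the Mark I actually used:

```python
import math

def divided_differences(xs, ys):
    """Newton divided-difference coefficients for the interpolation nodes."""
    coeffs = list(ys)
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - j])
    return coeffs

def newton_eval(coeffs, xs, x):
    """Evaluate the Newton-form interpolating polynomial at x (Horner-style)."""
    result = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[i]) + coeffs[i]
    return result

# Five equally spaced nodes on [0, 1] are enough for a few digits of accuracy.
xs = [i / 4 for i in range(5)]
ys = [math.atan(x) for x in xs]
coeffs = divided_differences(xs, ys)

# Worst-case error of the interpolant against math.atan on a fine grid.
err = max(abs(newton_eval(coeffs, xs, t / 100) - math.atan(t / 100))
          for t in range(101))
print(f"coefficients: {[round(c, 6) for c in coeffs]}")
print(f"max error on [0, 1]: {err:.1e}")
```

The same tabulate-and-interpolate idea, done by hand or on desk calculators, is what kept the “computers” busy for all those girl hours.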

December 10, 2014

QotD: Quality, innovation, and progress

Filed under: Economics, Liberty, Quotations, Technology — Nicholas @ 00:01

Measured by practically any physical metric, from the quality of the food we eat to the health care we receive to the cars we drive and the houses we live in, Americans are not only wildly rich, but radically richer than we were 30 years ago, to say nothing of 50 or 75 years ago. And so is much of the rest of the world. That such progress is largely invisible to us is part of the genius of capitalism — and it is intricately bound up with why, under the system based on selfishness, avarice, and greed, we do such a remarkably good job taking care of one another, while systems based on sharing and common property turn into miserable, hungry prison camps.

We treat the physical results of capitalism as though they were an inevitability. In 1955, no captain of industry, prince, or potentate could buy a car as good as a Toyota Camry, to say nothing of a 2014 Mustang, the quintessential American Everyman’s car. But who notices the marvel that is a Toyota Camry? In the 1980s, no chairman of the board, president, or prime minister could buy a computer as good as the cheapest one for sale today at Best Buy. In the 1950s, American millionaires did not have access to the quality and variety of food consumed by Americans of relatively modest means today, and the average middle-class household spent a much larger share of its income buying far inferior groceries. Between 1973 and 2008, the average size of an American house increased by more than 50 percent, even as the average number of people living in it declined. Things like swimming pools and air conditioning went from being extravagances for tycoons and movie stars to being common or near-universal. In his heyday, Howard Hughes didn’t have as good a television as you do, and the children of millionaires for generations died from diseases that for your children are at most an inconvenience. As the first 199,746 or so years of human history show, there is no force of nature ensuring that radical material progress happens as it has for the past 250 years. Technological progress does not drive capitalism; capitalism drives technological progress — and most other kinds of progress, too.

Kevin D. Williamson, “Welcome to the Paradise of the Real: How to refute progressive fantasies — or, a red-pill economics”, National Review, 2014-04-24

December 5, 2014

Ross Perot (of all people) and one of the earliest real computers

Filed under: History, Technology, USA — Nicholas @ 00:02

At Wired, Brendan I. Koerner talks about the odd circumstances which led to H. Ross Perot being instrumental in saving an iconic piece of computer history:

Eccentric billionaires are tough to impress, so their minions must always think big when handed vague assignments. Ross Perot’s staffers did just that in 2006, when their boss declared that he wanted to decorate his Plano, Texas, headquarters with relics from computing history. Aware that a few measly Apple I’s and Altair 8800s wouldn’t be enough to satisfy a former presidential candidate, Perot’s people decided to acquire a more singular prize: a big chunk of ENIAC, the “Electronic Numerical Integrator And Computer.” The ENIAC was a 27-ton, 1,800-square-foot bundle of vacuum tubes and diodes that was arguably the world’s first true computer. The hardware that Perot’s team diligently unearthed and lovingly refurbished is now accessible to the general public for the first time, back at the same Army base where it almost rotted into oblivion.

ENIAC was conceived in the thick of World War II, as a tool to help artillerymen calculate the trajectories of shells. Though construction began a year before D-Day, the computer wasn’t activated until November 1945, by which time the U.S. Army’s guns had fallen silent. But the military still found plenty of use for ENIAC as the Cold War began — the machine’s 17,468 vacuum tubes were put to work by the developers of the first hydrogen bomb, who needed a way to test the feasibility of their early designs. The scientists at Los Alamos later declared that they could never have achieved success without ENIAC’s awesome computing might: the machine could execute 5,000 instructions per second, a capability that made it a thousand times faster than the electromechanical calculators of the day. (An iPhone 6, by contrast, can zip through 25 billion instructions per second.)

When the Army declared ENIAC obsolete in 1955, however, the historic invention was treated with scant respect: its 40 panels, each of which weighed an average of 858 pounds, were divvied up and strewn about with little care. Some of the hardware landed in the hands of folks who appreciated its significance — the engineer Arthur Burks, for example, donated his panel to the University of Michigan, and the Smithsonian managed to snag a couple of panels for its collection, too. But as Libby Craft, Perot’s director of special projects, found out to her chagrin, much of ENIAC vanished into disorganized warehouses, a bit like the Ark of the Covenant at the end of Raiders of the Lost Ark.
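The speed figures in the excerpt can be sanity-checked with a little arithmetic. Note that the electromechanical-calculator rate below is inferred from the “thousand times faster” claim rather than quoted directly:

```python
# Instruction rates from the article; the electromechanical rate is implied
# by the "thousand times faster" claim, not stated outright.
eniac_ips = 5_000
iphone6_ips = 25_000_000_000
electromechanical_ips = eniac_ips / 1_000

print(f"ENIAC vs. electromechanical calculator: {eniac_ips / electromechanical_ips:,.0f}x")
print(f"iPhone 6 vs. ENIAC: {iphone6_ips / eniac_ips:,.0f}x")
```

The second ratio — five million — is the one worth pausing over: the gap between an iPhone and ENIAC dwarfs the gap between ENIAC and everything that came before it.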

Lost in the bureaucracy

An ENIAC technician changes a tube. (Photo: US Army)

November 25, 2014

When was it exactly that “progress stopped”?

Filed under: Environment, Health, Media, Technology — Nicholas @ 00:05

Scott Alexander wrote this back in July. I think it’s still relevant as a useful perspective-enhancer:

The year 1969 comes up to you and asks what sort of marvels you’ve got all the way in 2014.

You explain that cameras, which 1969 knows as bulky boxes full of film that takes several days to get developed in dark rooms, are now instant affairs of point-click-send-to-friend that are also much higher quality. Also they can take video.

Music used to be big expensive records, and now you can fit 3,000 songs on an iPod and get them all for free if you know how to pirate or scrape the audio off of YouTube.

Television not only has gone HDTV and plasma-screen, but your choices have gone from “whatever’s on now” and “whatever is in theaters” all the way to “nearly every show or movie that has ever been filmed, whenever you want it”.

Computers have gone from structures filling entire rooms with a few Kb memory and a punchcard-based interface, to small enough to carry in one hand with a few Tb memory and a touchscreen-based interface. And they now have peripherals like printers, mice, scanners, and flash drives.

Lasers have gone from only working in special cryogenic chambers to working at room temperature to fitting in your pocket to being ubiquitous in things as basic as supermarket checkout counters.

Telephones have gone from rotary-dial wire-connected phones that still sometimes connected to switchboards, to cell phones that fit in a pocket. But even better is bypassing them entirely and making video calls with anyone anywhere in the world for free.

Robots now vacuum houses, mow lawns, clean office buildings, perform surgery, participate in disaster relief efforts, and drive cars better than humans. Occasionally if you are a bad person a robot will swoop down out of the sky and kill you.

For better or worse, video games now exist.

Medicine has gained CAT scans, PET scans, MRIs, lithotripsy, liposuction, laser surgery, robot surgery, and telesurgery. Vaccines for pneumonia, meningitis, hepatitis, HPV, and chickenpox. Ceftriaxone, furosemide, clozapine, risperidone, fluoxetine, ondansetron, omeprazole, naloxone, suboxone, mefloquine — and for that matter Viagra. Artificial hearts, artificial livers, artificial cochleae, and artificial legs so good that their users can compete in the Olympics. People with artificial eyes can only identify vague shapes at best, but they’re getting better every year.

World population has tripled, in large part due to new agricultural advantages. Catastrophic disasters have become much rarer, in large part due to architectural advances and satellites that can watch the weather from space.

We have a box which you can type something into and it will tell you everything anyone has ever written relevant to your query.

We have a place where you can log into from anywhere in the world and get access to approximately all human knowledge, from the scores of every game in the 1956 Roller Hockey World Cup to 85 different side effects of an obsolete antipsychotic medication. It is all searchable instantaneously. Its main problem is that people try to add so much information to it that its (volunteer) staff are constantly busy deleting information that might be extraneous.

We have the ability to translate nearly any major human language to any other major human language instantaneously at no cost with relatively high accuracy.

We have navigation technology that over fifty years has gone from “map and compass” to “you can say the name of your destination and a small box will tell you step by step which way you should be going”.

We have the aforementioned camera, TV, music, videophone, video games, search engine, encyclopedia, universal translator, and navigation system all bundled together into a small black rectangle that fits in your pocket, responds to your spoken natural-language commands, and costs so little that Ethiopian subsistence farmers routinely use them to sell their cows.

But, you tell 1969, we have something more astonishing still. Something even more unimaginable.

“We have,” you say, “people who believe technology has stalled over the past forty-five years.”

1969’s head explodes.

November 21, 2014

Elon Musk’s constant nagging worry

Filed under: Business, Technology — Nicholas @ 07:14

In the Washington Post, Justin Moyer talks about Elon Musk’s concern about runaway artificial intelligence:

Elon Musk — the futurist behind PayPal, Tesla and SpaceX — has been caught criticizing artificial intelligence again.

“The risk of something seriously dangerous happening is in the five year timeframe,” Musk wrote in a comment since deleted from the Web site Edge.org, but confirmed to Re/Code by his representatives. “10 years at most.”

The very future of Earth, Musk said, was at risk.

“The leading AI companies have taken great steps to ensure safety,” he wrote. “They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen.”

Musk seemed to sense that these comments might seem a little weird coming from a Fortune 1000 chief executive officer.

“This is not a case of crying wolf about something I don’t understand,” he wrote. “I am not alone in thinking we should be worried.”

Unfortunately, Musk didn’t explain how humanity might be compromised by “digital superintelligences,” “Terminator”-style.

He never does. Yet Musk has been holding forth on-and-off about the apocalypse artificial intelligence might bring for much of the past year.

November 17, 2014

An online font specially designed to help dyslexics read more accurately

Filed under: Media, Technology — Nicholas @ 00:02

On the LMB mailing list, Marc Wilson shared a link to a free downloadable Dyslexia Font:

Dyslexie font

November 13, 2014

Where’s the rimshot?

Filed under: Humour, Technology — Nicholas @ 09:37

Marc Wilson posted this to the Lois McMaster Bujold mailing list (off-topic, obviously):

Apparently the inventor of predictive text has died.

His funfair will be on Sundial.

October 24, 2014

Google Design open sources some icons

Filed under: Media, Technology — Nicholas @ 07:19

If you have a need for system icons and don’t want to create your own (or, like me, you have no artistic skills), you might want to look at a recent Google Design set that is now open source:

Today, Google Design are open-sourcing 750 glyphs as part of the Material Design system icons pack. The system icons contain icons commonly used across different apps, such as icons used for media playback, communication, content editing, connectivity, and so on. They’re equally useful when building for the web, Android or iOS.

Google Design open source icons

October 21, 2014

A different approach to building your own PC case

Filed under: Technology — Nicholas @ 00:02

Published on 13 Nov 2012

In this video I show the features of my homemade silent wooden PC case, and how I built it.

Most silent PCs compromise on speed for silence, but not this one. Specs:

i7 2600k @ 4.4 GHz
GTX 460
24GB RAM
3TB HDD space + SSD

October 20, 2014

Marc Andreessen still thinks optimism is the right attitude

Filed under: Technology, USA — Nicholas @ 07:14

In NYMag, Kevin Roose talks to Marc Andreessen on a range of topics:

It’s not hard to coax an opinion out of Marc Andreessen. The tall, bald, spring-loaded venture capitalist, who invented the first mainstream internet browser, co-founded Netscape, then made a fortune as an early investor in Twitter and Facebook, has since become Silicon Valley’s resident philosopher-king. He’s ubiquitous on Twitter, where his machine-gun fusillade of bold, wide-ranging proclamations has attracted an army of acolytes (and gotten him in some very big fights). At a controversial moment for the tech industry, Andreessen is the sector’s biggest cheerleader and a forceful advocate for his peculiar brand of futurism.

I love this moment where you’re meeting Mark Zuckerberg for the first time and he says to you something like, “What was Netscape?”

He didn’t know.

He was in middle school when you started Netscape. What’s it like to work in an industry where the turnover is so rapid that ten years can create a whole new collective memory?

I think it’s fantastic. For example, I think there’s sort of two Silicon Valleys right now. There’s the Silicon Valley of the people who were here during the 2000 crash, and there’s the Silicon Valley of the people who weren’t, and the psychology is actually totally different. Those of us who were here in 2000 have, like, scar tissue, because shit went wrong and it sucked.

You came to Silicon Valley in 1994. What was it like?

It was dead. Dead in the water. There had been this PC boom in the ’80s, and it was gigantic—that was Apple and Intel and Microsoft up in Seattle. And then the American economic recession hit—in ’88, ’89—and that was on the heels of the rapid ten-year rise of Japan. Silicon Valley had had this sort of brief shining moment, but Japan was going to take over everything. And that’s when the American economy went straight into a ditch. You’d pick up the newspaper, and it was just endless misery and woe. Technology in the U.S. is dead; economic growth in the U.S. is dead. All of the American kids were Gen-X slackers — no ambition, never going to do anything.

October 4, 2014

The “Herod Clause” to get free Wi-Fi

Filed under: Britain, Business, Humour, Law, Technology — Nicholas @ 10:48

I missed this earlier in the week (and it smells “hoax-y”, but it’s too good to check):

A handful of Londoners in some of the capital’s busiest districts unwittingly agreed to give up their eldest child, during an experiment exploring the dangers of public Wi-Fi use.

The experiment, which was backed by European law enforcement agency Europol, involved a group of security researchers setting up a Wi-Fi hotspot in June.

When people connected to the hotspot, the terms and conditions they were asked to sign up to included a “Herod clause” promising free Wi-Fi but only if “the recipient agreed to assign their first born child to us for the duration of eternity”. Six people signed up.

F-Secure, the security firm that sponsored the experiment, has confirmed that it won’t be enforcing the clause.

“We have yet to enforce our rights under the terms and conditions but, as this is an experiment, we will be returning the children to their parents,” wrote the Finnish company in its report.

“Our legal advisor Mark Deem points out that — while terms and conditions are legally binding — it is contrary to public policy to sell children in return for free services, so the clause would not be enforceable in a court of law.”

Ultimately, the research, organised by the Cyber Security Research Institute, sought to highlight public unawareness of serious security issues concomitant with Wi-Fi usage.

September 19, 2014

QotD: Faster computers, and why we “need” ‘em

Filed under: Quotations, Technology — Nicholas @ 00:01

I recall, in the very early days of the personal computer, articles, in magazines like Personal Computer World, which expressed downright opposition to the idea of technological progress in general, and progress in personal computers in particular. There was apparently a market for such notions, in the very magazines that you would think would be most gung-ho about new technology and new computers. Maybe the general atmosphere of gung-ho-ness created a significant enough minority of malcontents that the editors felt they needed to nod regularly towards it. I guess it does make sense that the biggest grumbles about the hectic pace of technological progress would be heard right next to the places where it is happening most visibly.

Whatever the reasons were for such articles being in computer magazines, I distinctly remember their tone. I have recently, finally, got around to reading Virginia Postrel’s The Future and Its Enemies, and she clearly identifies the syndrome. The writers of these articles were scared of the future and wanted that future prevented, perhaps by law but mostly just by a sort of universal popular rejection of it, a universal desire to stop the world and to get off it. “Do we really need” (the words “we” and “need” cropped up in these PCW pieces again and again), faster central processors, more RAM, quicker printers, snazzier and bigger and sharper and more colourful screens, greater “user friendliness”, …? “Do we really need” this or that new programme that had been reported in the previous month’s issue? What significant and “real” (as opposed to frivolous and game-related) problems could there possibly be that demanded such super-powerful, super-fast, super-memorising and of course, at that time, super-expensive machines for their solution? Do we “really need” personal computers to develop, in short, in the way that they have developed, since these grumpy anti-computer-progress articles first started being published in computer progress magazines?

The usual arguments in favour of fast and powerful, and now mercifully far cheaper, computers concern the immensity of the gobs of information that can now be handled, quickly and powerfully, by machines like the ones that we have now, as opposed to what could be handled by the first wave of personal computers, which could manage a small spreadsheet or a short text file or a very primitive computer game, but very little else. And of course that is true. I can now shovel vast quantities of photographs (a particular enthusiasm of mine) hither and thither, processing the ones I feel inclined to process in ways that only Hollywood studios used to be able to do. I can make and view videos (although I mostly stick to viewing). And I can access and even myself add to that mighty cornucopia that is the internet. And so on. All true. I can remember when even the most primitive of photos would only appear on my screen after several minutes of patient or not-so-patient waiting. Videos? Dream on. Now, what a world of wonders we can all inhabit. In another quarter of a century, what wonders will there then be, all magicked in a flash into our brains and onto our desks, if we still have desks. The point is, better computers don’t just mean doing the same old things a bit faster; they mean being able to do entirely new things as well, really well.

Brian Micklethwait, “Why fast and powerful computers are especially good if you are getting old”, Samizdata, 2014-09-17.

July 25, 2014

QotD: The singularity already happened

Filed under: Media, Quotations, Technology — Nicholas @ 00:01

The gulf that separates us from the near past is now so great that we cannot really imagine how one could design a spacecraft, or learn engineering in the first place, or even just look something up, without a computer and a network. Journalists my age will understand how profound and disturbing this break in history is: Do you remember doing your job before Google? It was, obviously, possible, since we actually did it, but how? It is like having a past life as a conquistador or a phrenologist.

Colby Cosh, “Who will be the moonwalkers of tomorrow?”, Maclean’s, 2014-07-24.

July 15, 2014

The attraction (and danger) of computer-based models

Filed under: Environment, Science, Technology — Nicholas @ 00:02

Warren Meyer explains why computer models can be incredibly useful tools, but they are not the same thing as an actual proof:

    Among the objections, including one from Green Party politician Chit Chong, were that Lawson’s views were not supported by evidence from computer modeling.

I see this all the time. A lot of things astound me in the climate debate, but perhaps the most astounding has been to be accused of being “anti-science” by people who have such a poor grasp of the scientific process.

Computer models and their output are not evidence of anything. Computer models are extremely useful when we have hypotheses about complex, multi-variable systems. It may not be immediately obvious how to test these hypotheses, so computer models can take these hypothesized formulas and generate predicted values of measurable variables that can then be used to compare to actual physical observations.

[…]

The other problem with computer models, besides the fact that they are not and cannot constitute evidence in and of themselves, is that their results are often sensitive to small changes in tuning or setting of variables, and that these decisions about tuning are often totally opaque to outsiders.

I did computer modelling for years, though of markets and economics rather than climate. But the techniques are substantially the same. And the pitfalls.

Confession time. In my very early days as a consultant, I did something I am not proud of. I was responsible for a complex market model based on a lot of market research and customer service data. Less than a day before the big presentation, and with all the charts and conclusions made, I found a mistake that skewed the results. In later years I would have the moral courage and confidence to cry foul and halt the process, but at the time I ended up tweaking a few key variables to make the model continue to spit out results consistent with our conclusion. It is embarrassing enough I have trouble writing this for public consumption 25 years later.

But it was so easy. A few tweaks to assumptions and I could get the answer I wanted. And no one would ever know. Someone could stare at the model for an hour and not recognize the tuning.
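Meyer’s two points — that model output is not evidence, and that tuning choices are invisible in the results — are easy to demonstrate with a toy model. Every parameter and number below is invented purely for illustration:

```python
# A toy market-share "model": it encodes a hypothesis (compounding growth
# minus churn) and spits out a forecast.  All parameters are invented.
def projected_share(base_share, growth_rate, churn_rate, years=10):
    share = base_share
    for _ in range(years):
        share = share * (1 + growth_rate) * (1 - churn_rate)
    return share

# The output alone proves nothing -- only comparison against measured
# market data could test the hypothesis behind it.
honest = projected_share(0.10, growth_rate=0.15, churn_rate=0.08)

# "A few tweaks to assumptions": nudge two inputs by two points each and
# the headline forecast jumps by nearly half.  Nothing in the output
# betrays the tuning.
tweaked = projected_share(0.10, growth_rate=0.17, churn_rate=0.06)

print(f"honest assumptions:  {honest:.1%}")
print(f"tweaked assumptions: {tweaked:.1%}")
```

Someone staring at the two printed forecasts — or at the model code itself — has no way to tell which set of assumptions was defensible and which was reverse-engineered from the desired conclusion. That is exactly the opacity Meyer is warning about.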

June 18, 2014

This is why computer security folks look so frustrated

Filed under: Technology — Nicholas @ 07:41

It’s not that the “security” part of the job is so wearing … it’s that people are morons:

Security white hats, despair: users will run dodgy executables if they are paid as little as one cent.

Even more would allow their computers to become infected by botnet software nasties if the price was increased to five or 10 cents. Offer a whole dollar and you’ll secure a herd of willing internet slaves.

The demoralising findings come from a study led by Nicolas Christin, a research professor at Carnegie Mellon University’s CyLab, which baited users with a benign Windows executable offered under the guise of contributing to a (fictitious) study.

It was downloaded 1,714 times and 965 users actually ran the code. The application ran a timer simulating an hour’s computational tasks after which a token for payment would be generated.

The researchers collected information on user machines, discovering that many of the predominantly US and Indian machines were already infected with malware despite having security software installed, and that users were happy to click past Windows’ User Account Control warning prompts.

The presence of malware actually increased on machines running the latest patches and infosec tools in what was described as an indication of users’ false sense of security.
