September 14, 2015

“Edge is advertiser friendly, not User friendly”

Filed under: Technology — Nicholas @ 05:00

Jerry Pournelle talks about his differing browser experiences on the Microsoft Surface:

Apple had their announcements today, but I had story conferences so I could not watch them live. I finished my fiction work about lunch time, so I thought to view some reports, and it is time I learned more about the new Windows and got more used to my Surface Pro 3; a fitting machine to view new Apple products, particularly their new iPad Pro, which I expect is their answer to the Surface Pro and Windows 10.

My usual browser is Firefox, which has features I don’t love but by and large I get along with it; but with the Surface it seemed appropriate to make a serious effort to use Edge, the new Microsoft browser. Of course it has Microsoft Bing as the default search engine. It also doesn’t really understand the size of the Pro. It gave me horizontal scrolling, even though I had Edge full screen. I looked up Apple announcements, and Bing gave me a nice list. Right click on the nice bent Microsoft pocket wireless mouse, and open a report in a new screen. Lo, I have to do horizontal scrolling; Edge makes sure there are ads on screen at all times, so you have to scroll the text horizontally to see all of it. Line by line. But I can always see some ads. Edge makes sure I don’t miss ads. It doesn’t care whether I can read the text I was looking for, but it is more careful about the ads. I’m sure that makes the advertisers happy, but I’m not so sure about the users. I thought I went looking for an article, not for ads.

Edge also kept doing things I hadn’t asked it to, and I’d lose the text. Eventually I found that if I closed the window, went back to the Bing screen, and right clicked to open that same window in a new tab, I was able to – carefully – scroll through the text and adjust the screen so all the text was on screen, even though there was still horizontal scrolling possible. This is probably a function of inexperience, but using a touch screen and Edge is a new experience.

Even so it was a rough read. I gave up and went to Firefox on the Surface Pro. Firefox has Google as its default search engine, and the top selections it offered me – all I could see on one screen – were different from the ones I saw with Bing. I had to do a bit of scrolling to find the article I had been trying to read, but eventually I found it. Right click to open it in a new tab. Voila. All my text in the center. I could read it. Much easier. For the record: the same site, adjusted to width in Firefox on the Surface Pro, required horizontal scrolling when the same article was viewed in Edge. Probably my fault, but I don’t know what I did wrong.

Now in Microsoft’s defense, I don’t know Edge very well; but if you are going to use a Surface Pro, you may well find Firefox easier to use than Edge. A lot easier to use.

As to Google vs. Bing, in this one case I found Bing superior; what it offered me had more content. But Edge is advertiser friendly, not User friendly.

August 8, 2015

Tom Kratman on “killer ‘bots”

Filed under: Military, Technology — Nicholas @ 03:00

SF author (and former US Army officer) Tom Kratman answers a few questions about drones, artificial intelligence, and the threat/promise of intelligent, self-directed weapon platforms in the near future:

Ordinarily, in this space, I try to give some answers. I’m going to try again, in an area in which I am, at least at a technological level, admittedly inexpert. Feel free to argue.

Question 1: Are unmanned aerial drones going to take over from manned combat aircraft?

I am assuming here that at some point in time the total situational awareness package of the drone operator will be sufficient for him to compete or even prevail against a manned aircraft in aerial combat. In other words, the drone operator is going to climb into a cockpit far below ground, and the only way he’ll be able to tell he’s not in an aircraft is that he’ll feel no inertia beyond the bare minimum for a touch of realism, to improve his situational awareness, but with no chance of blacking out due to high G maneuvers.

Still, I think the answer to the question is “no,” at least as long as the drones remain under the control of an operator, usually far, far to the rear. Why not? Because to the extent the things are effective they will invite a proportional, or even more than proportional, response to defeat or at least mitigate their effectiveness. That’s just in the nature of war. This is exacerbated by there being at least three or four routes to attack the remote controlled drone. One is by attacking the operator or the base; if the drone is effective enough, it will justify the effort of making those attacks. Yes, he may be bunkered or hidden or both, but he has a signal and a signature, which can probably be found. To the extent the drone is similar in size and support needs to a manned aircraft, that runway and base will be obvious.

The second target of attack is the drone itself. Both of these targets, base/operator and aircraft, are replicated in the vulnerabilities of the manned aircraft, itself and its base. However, the remote controlled drone has an additional vulnerability: the linkage between itself and its operator. Yes, signals can be encrypted. But almost any signal, to include the encryption, can be captured, stored, delayed, amplified, and repeated, while there are practical limits on how frequently the codes can be changed. Almost anything can be jammed. To the extent the drone is dependent on one or another, or all, of the global positioning systems around the world, that signal, too, can be jammed or captured, stored, delayed, amplified and repeated. Moreover, EMP, electro-magnetic pulse, can be generated with devices well short of the nuclear. EMP may not bother people directly, but a purely electronic, remote controlled device will tend to be at least somewhat vulnerable, even if it’s been hardened.
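The capture-and-replay point Kratman makes is a standard one in communications security, and a toy sketch makes it concrete: authentication alone does not guarantee freshness. Everything below (the message format, the HMAC scheme, the nonce check) is my own illustrative assumption, not anything from his piece or from any real drone datalink.

```python
# Toy illustration of capture-and-replay against a remote control link.
# The message format and checks here are hypothetical, purely for illustration.
import hmac, hashlib, os

KEY = os.urandom(32)   # shared secret between operator and drone
seen_nonces = set()    # nonces the receiver has already accepted

def sign(command: bytes, nonce: bytes) -> bytes:
    """Operator side: authenticate a command with an HMAC over command + nonce."""
    return hmac.new(KEY, command + nonce, hashlib.sha256).digest()

def accept(command: bytes, nonce: bytes, tag: bytes, check_freshness: bool) -> bool:
    """Receiver side: verify the tag; optionally reject nonces already seen."""
    expected = hmac.new(KEY, command + nonce, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return False                     # forged or corrupted message
    if check_freshness and nonce in seen_nonces:
        return False                     # a captured-and-replayed message
    seen_nonces.add(nonce)
    return True

# A legitimate command goes out; an eavesdropper records the raw bytes.
cmd, nonce = b"RETURN-TO-BASE", os.urandom(16)
captured = (cmd, nonce, sign(cmd, nonce))

print(accept(*captured, check_freshness=False))  # True  - original delivery
print(accept(*captured, check_freshness=False))  # True  - replay also accepted: the flaw
seen_nonces.clear()
print(accept(*captured, check_freshness=True))   # True  - original delivery
print(accept(*captured, check_freshness=True))   # False - replay rejected
```

Even with freshness checks, of course, the link can still simply be jammed, which is Kratman’s larger point.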

Question 2: Will unmanned aircraft, flown by Artificial Intelligences, take over from manned combat aircraft?

The advantages of the unmanned combat aircraft, however, ranging from immunity to high G forces, to less airframe being required without the need for life support, or, alternatively, for a greater fuel or ordnance load, to expendability, because Unit 278-B356 is no one’s precious little darling, back home, to the same Unit’s invulnerability, so far as I can conceive, to torture-induced propaganda confessions, still argue for the eventual, at least partial, triumph of the self-directing, unmanned, aerial combat aircraft.

Even so, I’m going to go out on a limb and go with my instincts and one reason. The reason is that I have never yet met an AI for a wargame I couldn’t beat the digital snot out of, while even fairly dumb human opponents can present problems. Coupled with that, my instincts tell me that the better arrangement is going to be a mix of manned and unmanned, possibly with the manned retaining control of the unmanned until the last second before action.

This presupposes, of course, that we don’t come up with something – quite powerful lasers and/or renunciation of the ban on blinding lasers – to sweep all aircraft from the sky.

August 2, 2015

Thinking about realistic security in the “internet of things”

Filed under: Technology — Nicholas @ 02:00

The Economist looks at the apparently unstoppable rush to internet-connect everything and why we should worry about security now:

Unfortunately, computer security is about to get trickier. Computers have already spread from people’s desktops into their pockets. Now they are embedding themselves in all sorts of gadgets, from cars and televisions to children’s toys, refrigerators and industrial kit. Cisco, a maker of networking equipment, reckons that there are 15 billion connected devices out there today. By 2020, it thinks, that number could climb to 50 billion. Boosters promise that a world of networked computers and sensors will be a place of unparalleled convenience and efficiency. They call it the “internet of things”.

Computer-security people call it a disaster in the making. They worry that, in their rush to bring cyber-widgets to market, the companies that produce them have not learned the lessons of the early years of the internet. The big computing firms of the 1980s and 1990s treated security as an afterthought. Only once the threats—in the forms of viruses, hacking attacks and so on—became apparent, did Microsoft, Apple and the rest start trying to fix things. But bolting on security after the fact is much harder than building it in from the start.

Of course, governments are desperate to prevent us from hiding our activities from them by way of cryptography or even moderately secure connections, so there’s the risk that any pre-rolled security option offered by a major corporation has already been riddled with convenient holes for government spooks … which makes it even more likely that others can also find and exploit those security holes.

… companies in all industries must heed the lessons that computing firms learned long ago. Writing completely secure code is almost impossible. As a consequence, a culture of openness is the best defence, because it helps spread fixes. When academic researchers contacted a chipmaker working for Volkswagen to tell it that they had found a vulnerability in a remote-car-key system, Volkswagen’s response included a court injunction. Shooting the messenger does not work. Indeed, firms such as Google now offer monetary rewards, or “bug bounties”, to hackers who contact them with details of flaws they have unearthed.

Thirty years ago, computer-makers that failed to take security seriously could claim ignorance as a defence. No longer. The internet of things will bring many benefits. The time to plan for its inevitable flaws is now.

June 23, 2015

Obama needs to convey a sense of urgency over the OPM hack

Filed under: Bureaucracy, Government, Technology, USA — Nicholas @ 04:00

Megan McArdle on what she characterizes as possibly “the worst cyber-breach the U.S. has ever experienced”:

And yet, neither the government nor the public seems to be taking it all that seriously. It’s been getting considerably less play than the Snowden affair did, or the administration’s other massively public IT failure: the meltdown of the Obamacare exchanges. For that matter, Google News returns more hits on a papal encyclical about climate change that will have no obvious impact on anything than it does for a major security breach in the U.S. government. The administration certainly doesn’t seem that concerned. Yesterday, the White House told Reuters that President Obama “continues to have confidence in Office of Personnel Management Director Katherine Archuleta.”

I’m tempted to suggest that the confidence our president expresses in people who preside over these cyber-disasters, and the remarkable string of said cyber-disasters that have occurred under his presidency, might actually be connected. So tempted that I actually am suggesting it. President Obama’s administration has been marked by titanic serial IT disasters, and no one seems to feel any particular urgency about preventing the next one. By now, that’s hardly surprising. Kathleen Sebelius was eased out months after the Department of Health and Human Services botched the one absolutely crucial element of the Obamacare rollout. The NSA director’s offer to resign over the Snowden leak was politely declined. And now, apparently, Obama has full faith and confidence in the folks at OPM. Why shouldn’t he? Voters have never held Obama responsible for his administration’s appalling IT record, so why should he demand accountability from those below him?

Yes, yes, I know. You can’t say this is all Obama’s fault. Government IT is almost doomed to be terrible; the public sector can’t pay salaries that are competitive with the private sector, they’re hampered by government contracting rules, and their bureaucratic procedures make it hard to build good systems. And that’s all true. Yet note this: When the exchanges crashed on their maiden flight, the government managed to build a crudely functioning website in, basically, a month, a task they’d been systematically failing at for the previous three years. What was the difference? Urgency. When Obama understood that his presidency was on the line, he made sure it got done.

Update: It’s now asserted that the OPM hack exposed the personal data of more than four times as many people as the agency had previously admitted.

The personal data of an estimated 18 million current, former and prospective federal employees were affected by a cyber breach at the Office of Personnel Management – more than four times the 4.2 million the agency has publicly acknowledged. The number is expected to grow, according to U.S. officials briefed on the investigation.

FBI Director James Comey gave the 18 million estimate in a closed-door briefing to Senators in recent weeks, using the OPM’s own internal data, according to U.S. officials briefed on the matter. Those affected could include people who applied for government jobs, but never actually ended up working for the government.

The same hackers who accessed OPM’s data are believed to have last year breached an OPM contractor, KeyPoint Government Solutions, U.S. officials said. When the OPM breach was discovered in April, investigators found that KeyPoint security credentials were used to breach the OPM system.

Some investigators believe that after that intrusion last year, OPM officials should have blocked all access from KeyPoint, and that doing so could have prevented more serious damage. But a person briefed on the investigation says OPM officials don’t believe such a move would have made a difference. That’s because the OPM breach is believed to have pre-dated the KeyPoint breach. Hackers are also believed to have built their own backdoor access to the OPM system, armed with high-level system administrator access to the system. One official called it the “keys to the kingdom.” KeyPoint did not respond to CNN’s request for comment.

U.S. investigators believe the Chinese government is behind the cyber intrusion, which is considered the worst ever against the U.S. government.

May 14, 2015

Moore’s Law challenged yet again

Filed under: Business, History, Technology — Nicholas @ 02:00

In Bloomberg View, Virginia Postrel looks at the latest “Moore’s Law is over” notions:

Semiconductors are what economists call a “general purpose technology,” like electrical motors. Their effects spread through the economy, reorganizing industries and boosting productivity. The better and cheaper chips become, the greater the gains rippling through every enterprise that uses computers, from the special-effects houses producing Hollywood magic to the corner dry cleaners keeping track of your clothes.

Moore’s Law, which marked its 50th anniversary on Sunday, posits that computing power increases exponentially, with the number of components on a chip doubling every 18 months to two years. It’s not a law of nature, of course, but a kind of self-fulfilling prophecy, driving innovative efforts and customer expectations. Each generation of chips is far more powerful than the previous, but not more expensive. So the price of actual computing power keeps plummeting.

At least that’s how it seemed to be working until about 2008. According to the producer price index compiled by the Bureau of Labor Statistics, the price of the semiconductors used in personal computers fell 48 percent a year from 2000 to 2004, 29 percent a year from 2004 to 2008, and a measly 8 percent a year from 2008 to 2013.
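To get a feel for what those annual rates mean cumulatively, here is a quick back-of-the-envelope compounding calculation (my own arithmetic, not Postrel’s or the BLS’s):

```python
# Compound the cited annual price declines over each period to see the cumulative effect.
def price_remaining(annual_decline: float, years: int) -> float:
    """Fraction of the starting price left after compounding a yearly decline."""
    return (1 - annual_decline) ** years

for rate, start, end in [(0.48, 2000, 2004), (0.29, 2004, 2008), (0.08, 2008, 2013)]:
    left = price_remaining(rate, end - start)
    print(f"{start}-{end}: {rate:.0%}/yr decline leaves {left:.1%} of the starting price")

# Roughly: 48%/yr over four years leaves about 7% of the original price,
# while 8%/yr over five years still leaves about 66% - a dramatically slower slide.
```

That gap between a price that falls by roughly 93 percent in four years and one that falls by about a third in five is what makes the post-2008 slowdown so striking.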

The sudden slowdown presents a puzzle. It suggests that the semiconductor business isn’t as innovative as it used to be. Yet engineering measures of the chips’ technical capabilities have shown no letup in the rate of improvement. Neither have tests of how the semiconductors perform on various computing tasks.

April 19, 2015

The latest “breakthrough” in helping schizophrenics take their medicine

Filed under: Health, Humour — Nicholas @ 03:00

Scott Alexander recently attended a local psychiatry conference, with some essential themes being emphasized:

This conference consisted of a series of talks about all the most important issues of the day, like ‘The Menace Of Psychologists Being Allowed To Prescribe Medication’, ‘How To Be An Advocate For Important Issues Affecting Your Patients Such As The Possibility That Psychologists Might Be Allowed To Prescribe Them Medication’, and ‘Protecting Members Of Disadvantaged Communities From Psychologists Prescribing Them Medication’.

As somebody who’s noticed that the average waiting list for a desperately ill person to see a psychiatrist is approaching the twelve month mark in some places, I was pretty okay with psychologists prescribing medication. The scare stories about how psychologists might prescribe medications unsafely didn’t have much effect on me, since I continue to believe that putting antidepressants in a vending machine would be a more safety-conscious system than what we have now (a vending machine would at least limit antidepressants to people who have $1.25 in change; the average primary care doctor is nowhere near that selective). Annnnnyway, this made me kind of uncomfortable at the conference and I Struck A Courageous Blow Against The Cartelization Of Medicine by sneaking out without putting my name on their mailing list.

But before I did, I managed to take some notes about what’s going on in the wider psychiatric world, including:

– The newest breakthrough in ensuring schizophrenic people take their medication (a hard problem!) is bundling the pills with an ingestible computer chip that transmits data from the patient’s stomach. It’s a bold plan, somewhat complicated by the fact that one of the most common symptoms of schizophrenia is the paranoid fear that somebody has implanted a chip in your body to monitor you. Can you imagine being a schizophrenic guy who has to explain to your new doctor that your old doctor put computer chips in your pills to monitor you? Yikes. If they go through with this, I hope they publish the results in the form of a sequel to The Three Christs of Ypsilanti.

– The same team is working on a smartphone app to detect schizophrenic relapses. The system uses GPS to monitor location, accelerometer to detect movements, and microphone to check tone of voice and speaking pattern, then throws it into a machine learning system that tries to differentiate psychotic from normal behavior (for example, psychotic people might speak faster, or rock back and forth a lot). Again, interesting idea. But again, one of the most common paranoid schizophrenic delusions is that their electronic devices are monitoring everything they do. If you make every one of a psychotic person’s delusions come true, such that they no longer have any beliefs that do not correspond to reality, does that technically mean you’ve cured them? I don’t know, but I’m glad we have people investigating this important issue.
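For what it’s worth, the pipeline described in that second item is conceptually simple: summarize the sensor streams as a handful of per-day features and hand them to an off-the-shelf classifier. The sketch below is purely my own guess at what such a system might look like; the feature names, the synthetic numbers, and the choice of logistic regression are all assumptions, not the research team’s actual design.

```python
# Hypothetical sketch of a sensors -> features -> classifier relapse-detection pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Per-day feature vector: [hours spent away from home (GPS),
#                          rocking episodes detected (accelerometer),
#                          speech rate in words per minute (microphone)]
stable  = rng.normal([5.0, 2.0, 130.0], [1.5, 1.0, 10.0], size=(200, 3))  # made-up stable days
relapse = rng.normal([1.0, 9.0, 175.0], [1.0, 3.0, 15.0], size=(200, 3))  # made-up prodromal days

X = np.vstack([stable, relapse])
y = np.array([0] * 200 + [1] * 200)   # 1 = flagged as a possible relapse

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new (invented) day of sensor summaries.
today = np.array([[0.5, 8.0, 180.0]])
print(f"relapse probability: {clf.predict_proba(today)[0, 1]:.2f}")
```

Whether or not the real app works anything like this, the delusion-confirming privacy problem Alexander points out remains exactly the same.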

February 23, 2015

The “Internet of Things” (That May Or May Not Let You Do That)

Filed under: Liberty, Technology — Nicholas @ 03:00

Cory Doctorow is worried about some possible developments within the “Internet of Things” that should concern us all:

The digital world has been colonized by a dangerous idea: that we can and should solve problems by preventing computer owners from deciding how their computers should behave. I’m not talking about a computer that’s designed to say, “Are you sure?” when you do something unexpected — not even one that asks, “Are you really, really sure?” when you click “OK.” I’m talking about a computer designed to say, “I CAN’T LET YOU DO THAT DAVE” when you tell it to give you root, to let you modify the OS or the filesystem.

Case in point: the cell-phone “kill switch” laws in California and Minnesota, which require manufacturers to design phones so that carriers or manufacturers can push an over-the-air update that bricks the phone without any user intervention, a measure designed to deter cell-phone thieves. Early data suggests that the law is effective in preventing this kind of crime, but at a high and largely needless (and ill-considered) price.

To understand this price, we need to talk about what “security” is, from the perspective of a mobile device user: it’s a whole basket of risks, including the physical threat of violence from muggers; the financial cost of replacing a lost device; the opportunity cost of setting up a new device; and the threats to your privacy, finances, employment, and physical safety from having your data compromised.

The current kill-switch regime puts a lot of emphasis on the physical risks, and treats risks to your data as unimportant. It’s true that the physical risks associated with phone theft are substantial, but if a catastrophic data compromise doesn’t strike terror into your heart, it’s probably because you haven’t thought hard enough about it — and it’s a sure bet that this risk will only increase in importance over time, as you bind your finances, your access controls (car ignition, house entry), and your personal life more tightly to your mobile devices.

That is to say, phones are only going to get cheaper to replace, while mobile data breaches are only going to get more expensive.

It’s a mistake to design a computer to accept instructions over a public network that its owner can’t see, review, and countermand. When every phone has a back door and can be compromised by hacking, social-engineering, or legal-engineering by a manufacturer or carrier, then your phone’s security is only intact for so long as every customer service rep is bamboozle-proof, every cop is honest, and every carrier’s back end is well designed and fully patched.

January 7, 2015

Cory Doctorow on the dangers of legally restricting technologies

Filed under: Law, Liberty, Media, Technology — Nicholas @ 02:00

In Wired, Cory Doctorow explains why bad legal precedents from more than a decade ago are making us more vulnerable rather than safer:

We live in a world made of computers. Your car is a computer that drives down the freeway at 60 mph with you strapped inside. If you live or work in a modern building, computers regulate its temperature and respiration. And we’re not just putting our bodies inside computers — we’re also putting computers inside our bodies. I recently exchanged words in an airport lounge with a late arrival who wanted to use the sole electrical plug, which I had beat him to, fair and square. “I need to charge my laptop,” I said. “I need to charge my leg,” he said, rolling up his pants to show me his robotic prosthesis. I surrendered the plug.

You and I and everyone who grew up with earbuds? There’s a day in our future when we’ll have hearing aids, and chances are they won’t be retro-hipster beige transistorized analog devices: They’ll be computers in our heads.

And that’s why the current regulatory paradigm for computers, inherited from the 16-year-old stupidity that is the Digital Millennium Copyright Act, needs to change. As things stand, the law requires that computing devices be designed to sometimes disobey their owners, so that their owners won’t do something undesirable. To make this work, we also have to criminalize anything that might help owners change their computers to let the machines do that supposedly undesirable thing.

This approach to controlling digital devices was annoying back in, say, 1995, when we got the DVD player that prevented us from skipping ads or playing an out-of-region disc. But it will be intolerable and deadly dangerous when our 3-D printers, self-driving cars, smart houses, and even parts of our bodies are designed with the same restrictions. Because those restrictions would change the fundamental nature of computers. Speaking in my capacity as a dystopian science fiction writer: This scares the hell out of me.

December 18, 2014

Admiral Grace Hopper

Filed under: History, Military, Technology, USA — Nicholas @ 00:04

The US Naval Institute posted an article about the one and only Admiral Grace Hopper earlier this month to mark her birthday:

The typical career arc of a naval officer may run from 25 to 30 years. Most, however, don’t start at age 35. Yet when it comes to Rear Adm. Grace Hopper, well, the word “typical” just doesn’t apply.

Feisty. Eccentric. Maverick. Brilliant. Precise. Grace Hopper embodied all of those descriptions and more, but perhaps what defined her as much as anything else was the pride she had in wearing the Navy uniform for 43 years. Ironically, Rear Adm. Grace Hopper — “Amazing Grace” as she was known — had to fight to get into the Navy.

Grace Brewster Murray was born into a well-off family in New York on Dec. 9, 1906. She could have followed the path many of her peers took in those times: attending college for a year or two, getting married, then devoting their lives to their families and volunteer work.

Instead, Grace’s path would be one less traveled. Encouraged to explore her innate curiosity about how things worked, a 7-year-old Grace dismantled all of the family’s alarm clocks trying to put them back together again. Rather than being banished from the practice, she was allowed one clock to practice on.


When she joined the WAVES in December 1943, Lt. j.g. Grace Hopper was 37 years old. Williams noted that after graduating at the top of her class of 800 officer candidates in June 1944, Hopper paid homage to her great-grandfather, Alexander Wilson Russell, an admiral who apparently took a “dim view of women and cats” in the Navy, by laying flowers on his grave to “comfort and reassure him.”

Hopper was sent to the Bureau of Ordnance Computation Project at Harvard University under the guidance of Howard Aiken. The Harvard physics and applied mathematics professor helped create the first Automatic Sequence Controlled Calculator (ASCC), better known as the Mark I. He ran a lab where calculations for the design, testing, modification and analysis of weapons were carried out. Most of the staff were specially trained women called computers. “So the first ‘computers’ were women who did the calculating on desk calculators,” Williams said. And the time it took for the computers to calculate was called “girl hours.”

What happened next put Hopper on a new path that would define the rest of her life, according to a passage in the book Improbable Warriors: Women Scientists in the U.S. Navy during World War II also by Williams.

On July 2, 1944, Hopper reported to duty and met Aiken.

“That’s a computing engine,” Aiken snapped at Hopper, pointing to the Mark I. “I would be delighted to have the coefficients for the interpolation of the arc tangent by next Thursday.”

Hopper was a mathematician, but what she wasn’t was a computer programmer. Aiken gave her a codebook, and as Hopper put it, a week to learn “how to program the beast and get a program running.”

Hopper overcame her lack of programming skills the same way she always tackled other obstacles: by being persistent and stopping at nothing to solve problems. She eventually became well-versed in how the machine operated, all 750,000 parts, 530 miles of wire and 3 million wire connections crammed into a machine that was 8 feet tall and 50 feet wide.
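For readers curious what Aiken’s assignment actually involved, “coefficients for the interpolation of the arc tangent” is the kind of job that takes a few lines today: fit a short polynomial to arctan so a machine can evaluate it with a handful of multiplies and adds. The sketch below is only a modern stand-in; the degree, the interval, and the Chebyshev fit are my own choices, not the Mark I’s actual method.

```python
# A modern stand-in for "coefficients for the interpolation of the arc tangent":
# fit a low-degree Chebyshev polynomial to arctan on [0, 1] and check its accuracy.
import numpy as np

x = np.linspace(0.0, 1.0, 200)
coeffs = np.polynomial.chebyshev.chebfit(x, np.arctan(x), deg=7)   # the "coefficients"

approx = np.polynomial.chebyshev.chebval(x, coeffs)
print("coefficients:", np.round(coeffs, 8))
print("max error on [0, 1]:", float(np.max(np.abs(approx - np.arctan(x)))))
```

Hopper had a week to learn the machine and produce that kind of table; on a laptop this runs in a fraction of a second.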

December 10, 2014

QotD: Quality, innovation, and progress

Filed under: Economics, Liberty, Quotations, Technology — Nicholas @ 00:01

Measured by practically any physical metric, from the quality of the food we eat to the health care we receive to the cars we drive and the houses we live in, Americans are not only wildly rich, but radically richer than we were 30 years ago, to say nothing of 50 or 75 years ago. And so is much of the rest of the world. That such progress is largely invisible to us is part of the genius of capitalism — and it is intricately bound up with why, under the system based on selfishness, avarice, and greed, we do such a remarkably good job taking care of one another, while systems based on sharing and common property turn into miserable, hungry prison camps.

We treat the physical results of capitalism as though they were an inevitability. In 1955, no captain of industry, prince, or potentate could buy a car as good as a Toyota Camry, to say nothing of a 2014 Mustang, the quintessential American Everyman’s car. But who notices the marvel that is a Toyota Camry? In the 1980s, no chairman of the board, president, or prime minister could buy a computer as good as the cheapest one for sale today at Best Buy. In the 1950s, American millionaires did not have access to the quality and variety of food consumed by Americans of relatively modest means today, and the average middle-class household spent a much larger share of its income buying far inferior groceries. Between 1973 and 2008, the average size of an American house increased by more than 50 percent, even as the average number of people living in it declined. Things like swimming pools and air conditioning went from being extravagances for tycoons and movie stars to being common or near-universal. In his heyday, Howard Hughes didn’t have as good a television as you do, and the children of millionaires for generations died from diseases that for your children are at most an inconvenience. As the first 199,746 or so years of human history show, there is no force of nature ensuring that radical material progress happens as it has for the past 250 years. Technological progress does not drive capitalism; capitalism drives technological progress — and most other kinds of progress, too.

Kevin D. Williamson, “Welcome to the Paradise of the Real: How to refute progressive fantasies — or, a red-pill economics”, National Review, 2014-04-24

December 5, 2014

Ross Perot (of all people) and one of the earliest real computers

Filed under: History, Technology, USA — Nicholas @ 00:02

At Wired, Brendan I. Koerner talks about the odd circumstances which led to H. Ross Perot being instrumental in saving an iconic piece of computer history:

Eccentric billionaires are tough to impress, so their minions must always think big when handed vague assignments. Ross Perot’s staffers did just that in 2006, when their boss declared that he wanted to decorate his Plano, Texas, headquarters with relics from computing history. Aware that a few measly Apple I’s and Altair 8800s wouldn’t be enough to satisfy a former presidential candidate, Perot’s people decided to acquire a more singular prize: a big chunk of ENIAC, the “Electronic Numerical Integrator And Computer.” The ENIAC was a 27-ton, 1,800-square-foot bundle of vacuum tubes and diodes that was arguably the world’s first true computer. The hardware that Perot’s team diligently unearthed and lovingly refurbished is now accessible to the general public for the first time, back at the same Army base where it almost rotted into oblivion.

ENIAC was conceived in the thick of World War II, as a tool to help artillerymen calculate the trajectories of shells. Though construction began a year before D-Day, the computer wasn’t activated until November 1945, by which time the U.S. Army’s guns had fallen silent. But the military still found plenty of use for ENIAC as the Cold War began — the machine’s 17,468 vacuum tubes were put to work by the developers of the first hydrogen bomb, who needed a way to test the feasibility of their early designs. The scientists at Los Alamos later declared that they could never have achieved success without ENIAC’s awesome computing might: the machine could execute 5,000 instructions per second, a capability that made it a thousand times faster than the electromechanical calculators of the day. (An iPhone 6, by contrast, can zip through 25 billion instructions per second.)

When the Army declared ENIAC obsolete in 1955, however, the historic invention was treated with scant respect: its 40 panels, each of which weighed an average of 858 pounds, were divvied up and strewn about with little care. Some of the hardware landed in the hands of folks who appreciated its significance — the engineer Arthur Burks, for example, donated his panel to the University of Michigan, and the Smithsonian managed to snag a couple of panels for its collection, too. But as Libby Craft, Perot’s director of special projects, found out to her chagrin, much of ENIAC vanished into disorganized warehouses, a bit like the Ark of the Covenant at the end of Raiders of the Lost Ark.

Lost in the bureaucracy

An ENIAC technician changes a tube. (Photo: US Army)

November 25, 2014

When was it exactly that “progress stopped”?

Filed under: Environment, Health, Media, Technology — Nicholas @ 00:05

Scott Alexander wrote this back in July. I think it’s still relevant as a useful perspective-enhancer:

The year 1969 comes up to you and asks what sort of marvels you’ve got all the way in 2014.

You explain that cameras, which 1969 knows as bulky boxes full of film that takes several days to get developed in dark rooms, are now instant affairs of point-click-send-to-friend that are also much higher quality. Also they can take video.

Music used to be big expensive records, and now you can fit 3,000 songs on an iPod and get them all for free if you know how to pirate or scrape the audio off of YouTube.

Television not only has gone HDTV and plasma-screen, but your choices have gone from “whatever’s on now” and “whatever is in theaters” all the way to “nearly every show or movie that has ever been filmed, whenever you want it”.

Computers have gone from structures filling entire rooms with a few Kb memory and a punchcard-based interface, to small enough to carry in one hand with a few Tb memory and a touchscreen-based interface. And they now have peripherals like printers, mice, scanners, and flash drives.

Lasers have gone from only working in special cryogenic chambers to working at room temperature to fitting in your pocket to being ubiquitous in things as basic as supermarket checkout counters.

Telephones have gone from rotary-dial wire-connected phones that still sometimes connected to switchboards, to cell phones that fit in a pocket. But even better is bypassing them entirely and making video calls with anyone anywhere in the world for free.

Robots now vacuum houses, mow lawns, clean office buildings, perform surgery, participate in disaster relief efforts, and drive cars better than humans. Occasionally if you are a bad person a robot will swoop down out of the sky and kill you.

For better or worse, video games now exist.

Medicine has gained CAT scans, PET scans, MRIs, lithotripsy, liposuction, laser surgery, robot surgery, and telesurgery. Vaccines for pneumonia, meningitis, hepatitis, HPV, and chickenpox. Ceftriaxone, furosemide, clozapine, risperidone, fluoxetine, ondansetron, omeprazole, naloxone, suboxone, mefloquine – and for that matter Viagra. Artificial hearts, artificial livers, artificial cochleae, and artificial legs so good that their users can compete in the Olympics. People with artificial eyes can only identify vague shapes at best, but they’re getting better every year.

World population has tripled, in large part due to new agricultural advantages. Catastrophic disasters have become much rarer, in large part due to architectural advances and satellites that can watch the weather from space.

We have a box which you can type something into and it will tell you everything anyone has ever written relevant to your query.

We have a place where you can log into from anywhere in the world and get access to approximately all human knowledge, from the scores of every game in the 1956 Roller Hockey World Cup to 85 different side effects of an obsolete antipsychotic medication. It is all searchable instantaneously. Its main problem is that people try to add so much information to it that its (volunteer) staff are constantly busy deleting information that might be extraneous.

We have the ability to translate nearly any major human language to any other major human language instantaneously at no cost with relatively high accuracy.

We have navigation technology that over fifty years has gone from “map and compass” to “you can say the name of your destination and a small box will tell you step by step which way you should be going”.

We have the aforementioned camera, TV, music, videophone, video games, search engine, encyclopedia, universal translator, and navigation system all bundled together into a small black rectangle that fits in your pockets, responds to your spoken natural-language commands, and costs so little that Ethiopian subsistence farmers routinely use them to sell their cows.

But, you tell 1969, we have something more astonishing still. Something even more unimaginable.

“We have,” you say, “people who believe technology has stalled over the past forty-five years.”

1969’s head explodes.

November 21, 2014

Elon Musk’s constant nagging worry

Filed under: Business, Technology — Nicholas @ 07:14

In the Washington Post, Justin Moyer talks about Elon Musk’s concern about runaway artificial intelligence:

Elon Musk — the futurist behind PayPal, Tesla and SpaceX — has been caught criticizing artificial intelligence again.

“The risk of something seriously dangerous happening is in the five year timeframe,” Musk wrote in a comment since deleted from the Web site Edge.org, but confirmed to Re/Code by his representatives. “10 years at most.”

The very future of Earth, Musk said, was at risk.

“The leading AI companies have taken great steps to ensure safety,” he wrote. “They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen.”

Musk seemed to sense that these comments might seem a little weird coming from a Fortune 1000 chief executive officer.

“This is not a case of crying wolf about something I don’t understand,” he wrote. “I am not alone in thinking we should be worried.”

Unfortunately, Musk didn’t explain how humanity might be compromised by “digital superintelligences,” “Terminator”-style.

He never does. Yet Musk has been holding forth on-and-off about the apocalypse artificial intelligence might bring for much of the past year.

November 17, 2014

An online font specially designed to help dyslexics read more accurately

Filed under: Media, Technology — Nicholas @ 00:02

On the LMB mailing list, Marc Wilson shared a link to a free downloadable Dyslexia Font:

Dyslexie font

November 13, 2014

Where’s the rimshot?

Filed under: Humour, Technology — Nicholas @ 09:37

Marc Wilson posted this to the Lois McMaster Bujold mailing list (off-topic, obviously):

Apparently the inventor of predictive text has died.

His funfair will be on Sundial.
