
  • AUTHOR: CADE METZ, BUSINESS
  • DATE OF PUBLICATION: 02.10.17
  • TIME OF PUBLICATION: 7:00 AM
THE AI THREAT ISN’T SKYNET. IT’S THE END OF THE MIDDLE CLASS





Photo: Then One/WIRED

IN FEBRUARY 1975, a group of geneticists gathered in a tiny town on the central coast of California to decide if their work would bring about the end of the world. These researchers were just beginning to explore the science of genetic engineering, manipulating DNA to create organisms that didn’t exist in nature, and they were unsure how these techniques would affect the health of the planet and its people. So, they descended on a coastal retreat called Asilomar, a name that became synonymous with the guidelines they laid down at this meeting—a strict ethical framework meant to ensure that biotechnology didn’t unleash the apocalypse.

Forty-two years on, another group of scientists gathered at Asilomar to consider a similar problem. But this time, the threat wasn’t biological. It was digital. In January, the world’s top artificial intelligence researchers walked down the same beachside paths as they discussed their rapidly accelerating field and the role it will play in the fate of humanity. It was a private conference—the enormity of the subject deserves some privacy—but in recent days, organizers released several videos from the conference talks, and some participants have been willing to discuss their experience, shedding some light on the way AI researchers view the threat of their own field.


Yes, they discussed the possibility of a superintelligence that could somehow escape human control, and at the end of the month, the conference organizers unveiled a set of guidelines, signed by attendees and other AI luminaries, that aim to prevent this possible dystopia. But the researchers at Asilomar were also concerned with more immediate matters: the effect of AI on the economy.

“One of the reasons I don’t like the discussions about superintelligence is that they’re a distraction from what’s real,” says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, who attended the conference. “As the poet said, have fewer imaginary problems and more real ones.”

At a time when the Trump administration is promising to make America great again by restoring old-school manufacturing jobs, AI researchers aren’t taking him too seriously. They know that these jobs are never coming back, thanks in no small part to their own research, which will eliminate so many other kinds of jobs in the years to come, as well. At Asilomar, they looked at the real US economy, the real reasons for the “hollowing out” of the middle class. The problem isn’t immigration—far from it. The problem isn’t offshoring or taxes or regulation. It’s technology.

Rage Against the Machines

In the US, the number of manufacturing jobs peaked in 1979 and has steadily decreased ever since. At the same time, manufacturing output has steadily increased, with the US now producing more goods than any other country but China. Machines aren’t just taking the place of humans on the assembly line. They’re doing a better job. And all this before the coming wave of AI upends so many other sectors of the economy. “I am less concerned with Terminator scenarios,” MIT economist Andrew McAfee said on the first day at Asilomar. “If current trends continue, people are going to rise up well before the machines do.”

McAfee pointed to newly collected data that shows a sharp decline in middle class job creation since the 1980s. Now, most new jobs are either at the very low end of the pay scale or the very high end. He also argued that these trends are reversible, that improved education and a greater emphasis on entrepreneurship and research can help feed new engines of growth, that economies have overcome the rise of new technologies before. But after his talk, in the hallways at Asilomar, so many of the researchers warned him that the coming revolution in AI would eliminate far more jobs far more quickly than he expected.

Indeed, the rise of driverless cars and trucks is just a start. New AI techniques are poised to reinvent everything from manufacturing to healthcare to Wall Street. In other words, it’s not just blue-collar jobs that AI endangers. “Several of the rock stars in this field came up to me and said: ‘I think you’re low-balling this one. I think you are underestimating the rate of change,'” McAfee says.

That threat has many thinkers entertaining the idea of a universal basic income, a guaranteed living wage paid by the government to anyone left out of the workforce. But McAfee believes this would only make the problem worse, because it would eliminate the incentive for entrepreneurship and other activity that could create new jobs as the old ones fade away. Others question the psychological effects of the idea. “A universal basic income doesn’t give people dignity or protect them from boredom and vice,” Etzioni says.

Also on researchers’ minds was regulation—of AI itself. Some fear that after squeezing immigration—which would put a brake on the kind of entrepreneurship McAfee calls for—the White House will move to bottle up automation and artificial intelligence. That would be bad news for AI researchers, but also for the economy. If the AI transformation slows in the US, many suspect, it will only accelerate in other parts of the world, putting American jobs at even greater risk due to global competition.

In the end, no one left Asilomar with a sure way of preventing economic upheaval. “Anyone making confident predictions about anything having to do with the future of artificial intelligence is either kidding you or kidding themselves,” McAfee says.

That said, these researchers say they are intent on finding the answer. “People work through the concerns in different ways. But I haven’t met an AI researcher who doesn’t care,” Etzioni says. “People are mindful.” But they feel certain that preventing the rise of AI is not the answer. It’s also not really possible—a bit like bringing those old manufacturing jobs back.


 
  • AUTHOR: CADE METZ, BUSINESS
  • DATE OF PUBLICATION: 04.19.17
  • TIME OF PUBLICATION: 7:00 AM
FACEBOOK’S AUGMENTED REALITY ENGINE BRINGS AI RIGHT TO YOUR PHONE
Photo: Stephen Lam/Reuters

WHEN HUSSEIN MEHANNA showed off a new incarnation of Facebook’s Big Blue App back in November, it seemed a tiny improvement—at least on the surface. The app could transform a photo from your cousin’s wedding into a Picasso or a Van Gogh or a Warhol, a bit of extra fun for your social media day. But Mehanna and his team of Facebook engineers were laying the groundwork for an audacious effort to change the future of computing—what Facebook CEO Mark Zuckerberg calls a platform for augmented reality.

Zuckerberg formally unveiled this platform on Tuesday morning during his keynote at F8, Facebook’s annual developer conference. In short, Facebook is transforming the camera on your smartphone into an engine for what is commonly called AR. The company will soon allow outside companies and other developers to build digital effects that you can layer atop what you see through your camera. “This will allow us to create all kinds of things that were only available in the digital world,” Zuckerberg said on stage at the civic center in downtown San Jose, California. “We’re going to interact with them and explore them together.”

Initially, Facebook will offer ways of applying these effects to still images, videos, or even live videos shot with your phone. On stage, Zuckerberg showed how you could add a digital coffee cup to a photo of your kitchen table—or even add a school of digital sharks that swim endlessly around your bowl of cereal. But the company is also working on ways of “pinning” digital objects to specific locations in the real world. You could “attach” a digital note to your refrigerator, and if your spouse views the fridge through her camera, she could see it too, as if the note was really there. In other words, Zuckerberg views his platform as a way of expanding a game like Pokémon Go into a fundamental means of interacting with the world around us.

That’s a bold play, to say the least. And frankly, it’s a very difficult thing to pull off—just in a technical sense, let alone all the logistical questions that surround AR. Facebook will grapple with many of these questions in the months and years to come, most notably among them: Do people really want to view the world through their phones? But the company is already making serious progress on the technical side, as Mehanna’s artist-filter demo made clear back in November.

Making AI Local

In applying Picasso’s style to personal snapshots, that new Facebook app leans on deep neural networks, a form of artificial intelligence that’s rapidly reinventing the tech world. But these neural networks are different. They run on the phone itself, not in a data center on the other side of the internet. This is essential to the kind of augmented reality Zuckerberg so gleefully pitched on Tuesday morning. You can’t do what he wants to do unless these AI techniques run right there on the phone. Going over the internet takes much too long. The effect is lost.
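To make that latency point concrete, here is a minimal sketch of the general on-device pattern the article describes: a pre-converted model ships with the app, and each camera frame is run through it locally, with no round trip to a server. It uses TensorFlow Lite purely as an illustration; the article does not name Facebook's mobile framework, and the model file and function names here are hypothetical.

```python
# Minimal sketch of on-device neural-network inference (illustrative only;
# not Facebook's actual mobile stack). A model bundled with the app is run
# locally on each camera frame, so no network round trip is needed.
import numpy as np
import tensorflow as tf

# Hypothetical pre-converted model file shipped inside the app bundle.
interpreter = tf.lite.Interpreter(model_path="style_transfer.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def stylize_frame(frame: np.ndarray) -> np.ndarray:
    """Run one camera frame through the on-device network and return the
    stylized image. Latency is bounded by the phone's own hardware, which
    is what makes live camera effects feasible."""
    batch = np.expand_dims(frame.astype(np.float32) / 255.0, axis=0)
    interpreter.set_tensor(input_details[0]["index"], batch)
    interpreter.invoke()
    return interpreter.get_tensor(output_details[0]["index"])[0]
```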

“You can think of those early demonstrations as somewhat frivolous,” says Yann LeCun, Facebook’s director of AI research and one of the founding fathers of the deep learning movement. “But the underlying techniques can be used for so much more.”

In order to layer a digital effect atop your smiling face, for instance, Facebook must identify exactly where your smiling face is within a camera’s field of vision, and that requires a neural network. As LeCun explains, the company is also using neural networks to track people’s movements, so that effects can move in tandem with the real world. And according to Facebook chief technology officer Mike Schroepfer, the company is exploring ways of adding effects based not only on what people are doing but what they’re saying. That too requires a neural network. “We’re trying to build a pipeline of the core technologies that will enable all of these common AR effects,” he says.
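As a rough illustration of that pipeline, the sketch below shows the shape of the per-frame loop: detect the face region with a neural network, render the effect anchored to that region, and fall back to the last known position when a detection is missed. The detector here is a stub, and none of the names correspond to Facebook's actual code.

```python
# Hypothetical sketch of the per-frame AR effect loop described above.
# detect_face() stands in for an on-device neural-network detector.
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class Box:
    x: float       # normalized coordinates of the detected face region
    y: float
    width: float
    height: float

def detect_face(frame) -> Optional[Box]:
    """Stub for a neural-network face detector running on the phone."""
    return Box(0.4, 0.3, 0.2, 0.25)  # dummy result for illustration

def render_effect(frame, box: Box) -> None:
    """Draw the digital effect so that it sits on the detected region."""
    pass  # rendering is out of scope for this sketch

def camera_loop(frames: Iterable) -> None:
    last_box: Optional[Box] = None
    for frame in frames:
        box = detect_face(frame) or last_box  # reuse last fix if detection misses
        if box is not None:
            render_effect(frame, box)         # effect moves in tandem with the face
        last_box = box
```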

Some of the effects that Zuckerberg described—most notably the technology that will let you pin stuff in the real world—are still months down the road, if not more. “There’s a lot more that you have to get right to do that work,” Schroepfer says. To attach a digital artifact to a physical location, the Facebook app must build what is really a detailed map of that location and then offer a way of sharing that map with others.

“If I want to leave a note on the table at the bar,” he says, “I am both recording the precise location with GPS and recording the geometry of that scene in such a way that someone else, with a phone that was never there before, shows up, sees the world, and boots up this digital representation of it.”
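A hypothetical sketch of what that pairing might look like as data: a coarse GPS fix stored alongside the captured scene geometry, so another phone can first filter by location and then relocalize against the geometry. The field and function names below are invented for illustration and are not Facebook's data model.

```python
# Hypothetical sketch of a "pinned" AR note: a coarse GPS fix paired with
# captured scene geometry. Field and function names are illustrative only.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PinnedNote:
    latitude: float                                 # coarse position from GPS
    longitude: float
    scene_points: List[Tuple[float, float, float]]  # sparse 3-D geometry of the scene
    text: str                                       # the digital artifact left at the spot

def notes_near(notes: List[PinnedNote], lat: float, lon: float,
               radius_deg: float = 0.0005) -> List[PinnedNote]:
    """First pass: filter pinned notes by rough GPS proximity. A real system
    would then match the live camera view against scene_points to place the
    note precisely, so a phone that was never there can still render it."""
    return [n for n in notes
            if abs(n.latitude - lat) < radius_deg
            and abs(n.longitude - lon) < radius_deg]
```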

What’s more, as these effects get more and more complex, they will run up against the very real hardware limits of our phones. Smartphones offer far less processing power than computer servers packed into data centers, and though Facebook has significantly slimmed down its deep learning tech for mobile devices, more complex models will require more juice. But here too, the groundwork is already being laid.

Intel, Qualcomm, and other chip makers are working to build mobile processors better suited to these kinds of machine learning techniques. According to Schroepfer, these types of hardware enhancements could provide a two- to three-fold boost to the company’s machine learning models. “We’ve seen things go from 10 frames per second to 30 frames per second,” he says. “That’s the difference between it’s-not-really-usable and it’s-kinda-fun.”

Zuckerberg’s grand vision for camera AR is still under development. But the path is in place—at least technically.

 
 
 
  • AUTHOR: KLINT FINLEY, BUSINESS
  • DATE OF PUBLICATION: 04.21.17
  • TIME OF PUBLICATION: 3:05 PM
AD-BLOCKING JUST MIGHT SAVE THE AD INDUSTRY





THE COALITION FOR BETTER ADS, a consortium of ad, publishing, and tech companies, wants to save the advertising industry—by killing it. Or at least parts of it. Companies in the coalition will discuss, among other ideas, pre-installing a selective ad-blocker in web browsers as a means to effectively purge the internet of the most intrusive types of ads, such as those that automatically play sound, take up too much of your screen, or force you to wait a certain amount of time before you can dismiss them.
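As a purely hypothetical sketch of what "selective" blocking means in practice, the snippet below filters out only the formats the article calls out (autoplaying sound, oversized placements, forced wait times) and leaves other ads alone. The thresholds and field names are invented for illustration; they are not the actual Better Ads Standards.

```python
# Hypothetical sketch of selective ad filtering: block only the intrusive
# formats named above, keep everything else. Thresholds and field names are
# invented for illustration and are not the actual Better Ads Standards.
from dataclasses import dataclass
from typing import List

@dataclass
class AdCreative:
    autoplays_sound: bool
    screen_coverage: float       # fraction of the viewport the ad occupies
    forced_wait_seconds: float   # countdown before the ad can be dismissed

def is_intrusive(ad: AdCreative) -> bool:
    """True if the creative matches one of the intrusive patterns above."""
    return (ad.autoplays_sound
            or ad.screen_coverage > 0.30       # assumed "too much of your screen"
            or ad.forced_wait_seconds > 0)     # any forced wait before dismissal

def filter_ads(ads: List[AdCreative]) -> List[AdCreative]:
    """Selective blocking: drop the offenders, serve the rest."""
    return [ad for ad in ads if not is_intrusive(ad)]
```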

The idea was first reported Thursday by The Wall Street Journal, which suggested that ad-blockers would be built into Google’s Chrome web browser and turned on by default.

“We do not comment on rumor or speculation,” a Google spokesperson said in a statement. “We’ve been working closely with the Coalition for Better Ads and industry trades to explore a multitude of ways Google and other members of the Coalition could support the Better Ads Standards.”

Stuart Ingis, counsel for the Coalition for Better Ads, says the group will begin discussing specific ideas in coming weeks, though it would be six months to a year before anything is implemented. “To my knowledge Google has not made any decision,” Ingis says. “But certainly a natural way to solve this problem would be in the browsers, whether it’s Google or Microsoft or Apple or any of them.” Ingis doesn’t like to call this ad-blocking, because ad-blocking is generally associated with indiscriminate blocking of all ads on all sites.

Whatever solution the group arrives at, Ingis says, Google won’t be making decisions for the industry unilaterally. The ad formats that are blocked will be decided by the coalition’s members based on its research on what types of ads consumers find most intrusive. The technology, if the coalition moves forward with it, will likely be eventually supported by other browsers as well. (WIRED publisher Conde Nast is a member of Digital Content Next, a trade group that is part of the Coalition for Better Ads.)

It might sound strange for advertising companies to embrace ad-blocking in any capacity, but there is a clear upside to adopting this practice. About 26 percent of internet users have ad-blockers on their computers, according to a survey conducted by the Interactive Advertising Bureau, and about 10 percent use ad-blockers on their phones. The main reason people use ad-blockers, according to the survey, is that ads make sites slower and harder to navigate. If the advertising industry can keep people from installing stricter ad-blocking tools by blocking the worst-offending advertisements—or get some of those people who already use ad-blockers to turn them off—then perhaps it can save more advertising revenue than it stands to lose by running noisy ads that take over your screen.

Last month, the Coalition for Better Ads published research to determine which specific ad formats and behaviors most bother people. Based on this research, it created the “Better Ads Standards,” which will form the basis of any efforts the group takes to kill off bad ads and promote good ones.

What the group hopes to do is discourage the use of annoying and intrusive advertising practices across the web in an attempt to win back consumer trust. Advertisers and ad-purchasers will play an important role in this by shifting their spending to publishers and ad networks that only run ads that comply with the coalition’s guidelines, says John Montgomery, the executive vice president of brand safety at GroupM, a Coalition member and the largest ad-purchasing agency in the world. Browser-makers like Google and Microsoft, however, could also play a role by not just blocking annoying ads, but blocking all ads on sites that include ads that violate the coalition’s guidelines.


That would be even more controversial than just selectively blocking ads, but it would also likely be the most effective way to pressure even the sketchiest of websites to comply. Google Chrome alone was used by about 53 percent of all web users last month, according to web analytics company StatCounter. Few publishers are likely to risk losing more than half their ad views just to run a few obnoxious ads. Ingis says that if the group does go down this route, it will make sure that decisions about which sites are and aren’t blocked won’t be made by a single company, and that there will be an appeals process for publishers that feel they’ve been treated unfairly.

But the Coalition for Better Ads is still only addressing one part of the problem with digital advertising. The thing is, web ads aren’t just annoying. They can also be dangerous.

Last year several mainstream sites, ranging from the New York Times to nfl.com, accidentally served ads containing code that tried to install malware on users’ computers. It wasn’t an isolated incident. Security researchers have been complaining about the scourge of “malvertising” for years. Meanwhile, adtech companies have a tendency to slurp up as much data about you as possible, likely violating your privacy in the process.

Most people use blockers for the sake of convenience, but many others use them to protect their privacy and reduce their risk of attracting malware. Ingis says that the Coalition for Better Ads isn’t looking at privacy (at least not yet), but other industry groups like the IAB are working on data protection and security standards for the industry. Until the industry cleans up its privacy act, ad-blockers will still be relevant.