Billionaire dreamer to see your thoughts ‘by 2019’
During the second day of its F8 developer conference on Wednesday, representatives of the social media giant provided an update on the company’s ten-year technology roadmap. Last year, CEO Mark Zuckerberg summarized his plan thus: “Give everyone the power to share anything with anyone.”
The Facebook-broadcast killing of a Cleveland man over the weekend suggests the need to modify that aspiration with a few caveats.
In years past, tech companies pitching vaporware drew scorn. But in the selfie era Facebook, like Google, can’t resist sharing schemes to save the world.
Such behavior is understandable as a way to make the company more appealing to the public, to potential employees, and to investors. But it comes without commitment to any specific deliverables or the need to allay concerns about dystopian fallout.
Regina Dugan, who leads Facebook’s Building 8 hardware research group, closed Facebook’s forward-looking keynote with a glimpse of the company’s effort to build “a brain mouse for AR.”
The goal, she said, was to create a system that “can type 100 words per minute straight from your brain.” Facebook aims to reach this milestone in two years.
The current state of the art is about eight words per minute, based on medically oriented work overseen by Krishna Shenoy at Stanford’s Neural Prosthetic Systems Lab. However, Dugan conceded that these systems don’t yet operate in real time and they require surgery. “That simply won’t scale,” she said, evidently unsold on crowdsourcing the scalpel work.
Dugan said the technology to read thoughts doesn’t exist today, but she expects advances in optical imaging will help make non-invasive brain interfaces viable.
“What if we make it possible to hear through your skin?” Dugan then asked, describing a system by which spoken words can be translated into localized vibration patterns. A Facebook employee identified as Francis has managed to learn nine words in this manner.
“She has learned to feel the acoustic shape on her arm,” said Dugan. “She processes these shapes in her brain as words. She’s learning how to use the artificial cochlea [a haptic sleeve] we made for her skin.”
Dugan suggested that such technology could one day allow someone to think in Mandarin and convey meaning to a conversation partner in Spanish – the semantic meaning behind words would be transmitted haptically, independent of language.
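For the curious, the basic idea of assigning words to distinct skin-buzz patterns can be sketched in a few lines of toy Python. Everything here is hypothetical illustration – actuator count, encoding scheme, the lot – and real systems encode acoustic features rather than arbitrary bit patterns:

```python
# Toy sketch (all names and numbers hypothetical): give each word in a
# small vocabulary a distinct on/off pattern across a band of skin
# actuators, loosely in the spirit of the haptic sleeve Dugan described.

ACTUATORS = 6  # hypothetical number of vibration motors on the sleeve

def pattern_for(word, vocabulary):
    """Assign each vocabulary word a distinct on/off actuator pattern."""
    index = vocabulary.index(word) + 1  # 0 would mean "all motors off"
    return [(index >> i) & 1 for i in range(ACTUATORS)]

# Nine words -- the size of the vocabulary Francis has reportedly learned.
vocab = ["yes", "no", "stop", "go", "left", "right", "up", "down", "help"]
print(pattern_for("stop", vocab))  # motors 0 and 1 fire: [1, 1, 0, 0, 0, 0]
```

With six actuators this naive scheme tops out at 63 words; a real system would have to exploit timing, intensity, and location to get anywhere near natural language.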
An even more compelling application would be cheating at cards. A spotter could secretly convey information to a colleague without a visible earpiece or a covert video feed in eyeglasses. If Las Vegas casinos haven’t yet invested in thermal cameras capable of detecting electronics embedded in clothing or skin, they’d be well-advised to consider the possibilities.
But again, as Dugan observed, “These things are still a few years away.”
Those who recall the word “privacy” may be heartened to learn that Facebook isn’t presently interested in reading all your thoughts – despite the obvious value to marketers. It’s only concerned with thoughts you want to share.
“To be clear, we are not talking about decoding your random thoughts,” she said. “That might be more than any of us care to know. And it’s not something any of us should have a right to know.”
Leave that to Palantir.
Michael Abrash, chief scientist for Facebook’s Oculus, extolled the potential of augmented reality, even as he poured cold water on the prospect of seeing AR glasses anytime soon.
“The true breakthrough will come when the real and virtual worlds can mix freely, wherever we are, whatever we’re doing, so that the virtual world simply becomes part of our everyday reality,” said Abrash. “That will require AR glasses and those will be much more technologically challenging than VR headsets. In fact, the set of technologies needed to build them doesn’t yet exist.”
Those who have invested in AR-darling Magic Leap may not want to hear that.
Abrash predicted it will be another 20 or 30 years before we have true AR capabilities in glasses. In the meantime, Facebook’s Camera Effects Platform, introduced at F8 on Tuesday, has been created to dress up boring images of daily life with AR graphics, making them interesting enough to share on Facebook and ensuring a sufficient crop of attentive eyeballs for ad selling.
Joaquin Quiñonero Candela, director of applied machine learning at Facebook, highlighted Facebook’s advances using machine learning to understand the vast amount of video streaming through, and stored on, Facebook’s servers.
He said the company has already made great strides in recognizing objects in images and separating them from backgrounds – something that, a few years ago, would have required meticulous manual labor in Adobe Photoshop. Facebook's decision to open source its Caffe2 machine learning framework should bring further improvements.
While Facebook’s systems aren’t yet smart enough to block a live-streamed murder, that’s an obviously desirable goal, both for Facebook and for censorious regimes around the world.
Yael Maguire, engineering director and head of Facebook’s Connectivity Lab, announced three new wireless data transfer records: 36Gbps over 13km using millimeter wave technology, 80Gbps over the same distance with the company’s optical cross-link technology, and 16Gbps bidirectionally from a Cessna flying over 7km (4.3mi) away.
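For a sense of scale, some back-of-the-envelope arithmetic – treating the quoted link rates as usable throughput, which real-world protocol overhead would reduce:

```python
# Back-of-the-envelope only: assumes the quoted link rate is all usable
# throughput, which real-world overhead would reduce.

def transfer_seconds(gigabytes, gbps):
    """Seconds to move `gigabytes` of data over a `gbps` link."""
    return gigabytes * 8 / gbps  # 8 bits per byte

# A hypothetical 100GB file over the 36Gbps millimeter-wave link:
print(round(transfer_seconds(100, 36), 1))  # ~22.2 seconds
```

At the 80Gbps optical cross-link rate, the same transfer would take roughly ten seconds.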
Maguire also introduced a Tether-tenna, a small helicopter carrying wireless data hardware and tethered to the ground by a wire that supplies power. It's designed to provide instant network infrastructure in emergency situations.
Facebook CTO Michael Schroepfer, who opened the preview of things to come, showed off two 360-degree camera designs, the x24 and x6, for capturing VR imagery. The cameras are designed to work with Facebook's new 360 Capture SDK, which helps developers integrate 360-degree imagery into their apps.
Facebook's hardware and software can partly solve the problem posed by fixed 360-degree cameras in VR applications – the fact that images captured from a specific point in space don't necessarily contain all the information necessary to view the scene from every angle. The system manages this feat by synthesizing, as best it can, the blank areas not captured from a given viewpoint.
“Because of the quality of depth information, the resolution, and the arrangement of the physical hardware of the camera, and the quality of the computer vision code on the backend, we’re able to create views that didn’t exist before, to give you that sense of immersion [when you move around],” said Schroepfer.
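The underlying idea can be illustrated with a toy one-dimensional example – this is not Facebook's pipeline, just the textbook pinhole-camera observation that nearby pixels shift more than distant ones when the viewpoint moves, leaving holes where no source pixel lands:

```python
# Illustrative sketch only, not Facebook's system: reproject a
# depth-annotated 1-D "scanline" of pixels to a camera shifted along x.
# Holes (None) appear where no source pixel lands -- these are the blank
# areas a view-synthesis system must fill in.

FOCAL = 50.0  # hypothetical focal length, in pixel units

def reproject(colors, depths, shift):
    """Shift the viewpoint by `shift` world units; nearer pixels move more."""
    out = [None] * len(colors)
    for x, (color, z) in enumerate(zip(colors, depths)):
        # Pixel displacement (disparity) is inversely proportional to depth.
        new_x = x + round(FOCAL * shift / z)
        if 0 <= new_x < len(out):
            out[new_x] = color  # toy version: no z-buffering on collisions
    return out

# Two far pixels (depth 10) and two near ones (depth 5), viewpoint nudged:
print(reproject(["a", "b", "c", "d"], [10.0, 10.0, 5.0, 5.0], 0.2))
# -> [None, 'a', 'b', None]: the near pixels shifted out of frame,
#    leaving holes at both ends for the backend to invent.
```

The quality of the depth estimates and the in-painting of those holes is exactly what Schroepfer's "computer vision code on the backend" has to get right.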
If and when these technologies mature, Facebook may turn some of them into sources of revenue that didn’t exist before. But why talk about something as grubby as ad selling when you can muse about connecting people? ®