What Meta’s wild headset prototypes really mean

This week a legion of the biggest players in the world of computer graphics descended on the Los Angeles Convention Center for the annual SIGGRAPH convention (the Special Interest Group on Computer Graphics and Interactive Techniques, in case you were wondering).

Possibly the biggest announcement made there this week was from Nvidia, which unveiled its new “Grace Hopper Superchip” meant to power the next wave of AI development (more on that in the item below). But beyond the headlines, conferences like SIGGRAPH are an opportunity for the world’s tech giants (and scrappy start-ups) to show off some of their most impractical, cutting-edge technology — the things you won’t see listed on Amazon anytime soon but without the likes of which we’d all still be playing around with a Commodore 64.

Case in point: Meta demonstrated some far-out, very much not-for-sale VR technology, which the company recently described in a very detailed blog post with some nifty video examples. “Varifocal” and “perspective-correct passthrough” might not mean anything to the non-VR nerd, but they’re intuitive concepts: The former is simply the ability to shift visual focus naturally between objects at different distances, and the latter the ability to seamlessly view the world around you while still immersed in virtual reality.

These are both extremely important problems to solve before any kind of 3D-centric digital future becomes reality. People want to use their eyes like they do in the real world, and they won’t consistently wear a headset that cuts them off from their surroundings.

Those things are also devilishly hard to get right, technologically.

Experimental prototypes shown off by a Reality Labs research team at SIGGRAPH this week purport to accomplish both. (I wasn’t there, but the video clips are pretty compelling.) Douglas Lanman, a top researcher at Reality Labs, wrote in the blog post announcing them that these skunkworks-style projects are meant “to consider what might be one day, rather than what needs to be right now.”

That means that they also raise serious policy questions that might be addressed “one day.” One is simply how your inventions will affect society.

In Meta’s earlier, pre-world-bestriding-conglomerate days, it likely never occurred to Mark Zuckerberg to consult with, say, a team of social science researchers to measure the possible impact of his social media platform on political polarization. Today, tech firms are worrying more in advance about what their devices might do to us.

I spoke with Zvika Krieger, a consultant and Meta’s former director of responsible innovation, about the role that experimental projects like these new headsets play in the tech-world ecosystem. He described a delicate dance where the big tech companies are trying to navigate economic headwinds, the world of corporate competition, and the vagaries of academia all at the same time, with the goal of positioning themselves to break new ground and make a lot of money without incurring the wrath (and bad press) that, for example, Facebook has endured in recent years.

Goggles that can capture reality at the astonishing level of detail shown off by Meta this week also have serious policy implications, with an entire field of research popping up around the ethics of biometric tracking and virtual reality immersion. Krieger told me that Big Tech’s in-house research projects are already paying very close attention to those social risks — especially as privacy-shattering brain-computer interfaces also lurk on the horizon.

“It’s not just Elon Musk and Neuralink. Multiple companies are working on this and I’ll let your imagination run wild in terms of all the ethical issues where these computers are being designed to read your brainwaves,” Krieger said. “Companies get that if this technology is ever going to be mainstream, there’s going to have to be super-intentional work done around consumer protections… I have seen some crazy shit.”

Turning that “crazy shit” into something acceptable to consumers and regulators alike is an ongoing process that involves major contributions from academic-led research labs like the ones churning out Meta’s new prototypes.

And those prototypes might, in fact, ultimately be part of that process just as much as they’re part of pushing the technology itself forward. Meta’s prototypes at SIGGRAPH might be a preview of real technology, coming someday to your consumer devices. But they might actually be more important simply as a way to launch a public conversation — so that when some version of this technology does hit the shelves, the freakout has already been both addressed and priced in.

“A lot of this stuff is all about trial balloons,” Krieger said. “Both in terms of whether the ethics community or the policy community are going to have a freak-out, as well as a trial balloon for the market in terms of whether the stock goes up after we prove these things… a lot of it is about signaling, and not whether this is going to hit the market anytime soon.”

when the chips are down

Is a hardware shortage really slowing down the AI revolution?

It’s a complaint that some of the loudest voices in the tech industry, including Sam Altman and Elon Musk, have lodged, and a recent, extremely in-depth investigation of the subject by Clay Pascal at the blog GPU Utils seems to bear it out.

Most big AI developers get access to top processors via cloud platforms like Amazon Web Services and Microsoft Azure. Now, basically, they can’t, with one anonymous employee telling Pascal the situation is “like it was a university mainframe in the 1970s.”

“I’m told that for companies seeking 100s or 1000s of H100s [graphics cards], Azure and GCP are effectively out of capacity, and [Amazon Web Services] is close to being out,” Pascal writes, citing “conversations with execs at cloud companies and GPU providers.”

To (vastly) simplify the cause and effect at play here, the Taiwanese chip giant TSMC is simply not able to manufacture enough high-end GPUs to meet the massive industry demand created by the past year’s AI boom. AI companies are now even securing their debt with GPUs as collateral.

And in related SIGGRAPH news, don’t blink, or the H100s Pascal writes about might be defunct before you know it: Nvidia announced a more powerful successor chip to the H100 yesterday, to be released next year.

the fed flexes on stablecoins

The Federal Reserve is setting its expectations for the stablecoin legislation coming out of the House of Representatives.

In a letter to banks issued late yesterday afternoon, the Fed wrote that if a bank is “issuing, holding, or transacting in” stablecoins, it should demonstrate that it has guardrails in place to protect against fraud and cybercrime, and “receive a written notification of supervisory nonobjection,” as POLITICO’s Victoria Guida and Zach Warmbrodt reported for Pro subscribers yesterday.

The letter follows closely on the heels of House Republicans’ stablecoin bill advancing through the Financial Services Committee — a bill that some Democrats say leaves the Fed with too little authority to police the stablecoin and crypto world.

It’s an open question whether Tesla’s Cybertruck will catch on with pickup fans.
Microsoft is deepening its ties to the blockchain world as part of its AI effort.
AI is developing wildly effective antibodies.
…And a slew of unintentionally harmful deepfakes, in the guise of “avatars.”
How effective are supposedly “AI-powered” dating apps, really?

Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); and Steve Heuser ([email protected]). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.