Digging into what that means, exactly, is revealing. Through Soli, the ATAP team hopes to enable smart home products to recognize the “social context” of their environment, including head position and user intent, without voice commands or active gestures.
How might Soli make future Google smart home products better?
For context, Soli has already been used in several products. On the Google Pixel 4, Soli allowed for faster face-based unlocking and gesture controls. The Nest Hub uses Soli to help with sleep tracking by monitoring the user’s position, as well as for gestures. And in the Nest Thermostat, Soli wakes the screen when somebody approaches or passes by, making it easier to get information at a glance without physical interaction, while also saving energy.

The new iteration takes all of that to a new level. As noted by Google’s team at ATAP, Soli can be used to better read a room and infer intent in Google smart home products. In one of Google’s examples, Soli enables devices to know whether a user is entering or leaving, and how they’re turning their head, whether that’s turning to look at a smart home product or looking away.

As shown in the video below, that can result in some interesting interactions. For instance, a screen might show only the temperature when nobody’s looking at it, but switch to the full forecast once the user looks over. Or, in another example shared in Google’s video, it might hold off on displaying incoming messages and alerts until the user looks at the screen, instead showing other things such as the music that’s playing or an on-screen hint that a notification is waiting. Walking toward or away from the display entirely provides another example: doing so might pause video playback, or a smart device might automatically answer a call when approached. A rough sketch of that kind of logic follows below.

For now, whether or not AI-powered homes get a boost from this remains to be seen. Google hasn’t said when, or if, the new advancements will be used in consumer products just yet.
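To make the idea a little more concrete, here is a minimal, hypothetical sketch of how gaze- and presence-aware display logic of the sort described above might look in code. None of these names (Presence, RoomContext, choose_display) come from Google or the Soli SDK; the sketch simply assumes the radar stack can report whether someone is nearby and whether their head is turned toward the screen.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Presence(Enum):
    # Hypothetical presence states a radar sensor might report.
    ABSENT = auto()
    APPROACHING = auto()
    PRESENT = auto()
    LEAVING = auto()


@dataclass
class RoomContext:
    presence: Presence
    looking_at_screen: bool       # derived from head orientation
    has_pending_notification: bool


def choose_display(ctx: RoomContext) -> str:
    """Pick what an ambient display should show for a given social context."""
    if ctx.presence is Presence.ABSENT:
        return "screen off"
    if not ctx.looking_at_screen:
        # Someone is nearby but not looking: keep it glanceable and quiet.
        if ctx.has_pending_notification:
            return "temperature + notification hint"
        return "temperature"
    # The user is looking directly at the display: show the richer view.
    if ctx.has_pending_notification:
        return "full forecast + pending messages"
    return "full forecast"


if __name__ == "__main__":
    nearby = RoomContext(Presence.PRESENT, looking_at_screen=False,
                         has_pending_notification=True)
    looking = RoomContext(Presence.PRESENT, looking_at_screen=True,
                          has_pending_notification=True)
    print(choose_display(nearby))   # temperature + notification hint
    print(choose_display(looking))  # full forecast + pending messages
```

Again, this is only an illustration of the behavior Google demonstrates in its video, not how the underlying system is actually implemented.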