Archive for September, 2019

Clever hide-and-seek AIs learn to use tools and break the rules

The latest research from OpenAI put its machine learning agents in a simple game of hide-and-seek, where they pursued an arms race of ingenuity, using objects in unexpected ways to achieve their goal of seeing or being seen. This type of self-taught AI could prove useful in the real world as well.

The study set out to explore, and successfully demonstrated, the possibility of machine learning agents learning sophisticated, real-world-relevant techniques without any interference or suggestions from the researchers.

Tasks like identifying objects in photos or inventing plausible human faces are difficult and useful, but they don’t really reflect actions one might take in the real world. They’re highly intellectual, you might say, and as a consequence can be brought to a high level of effectiveness without ever leaving the computer. By contrast, training an AI to use a robotic arm to grip a cup and put it on a saucer is far more difficult than one might imagine (and has only been accomplished under very specific circumstances); the complexity of the real, physical world makes purely intellectual, computer-bound learning of such tasks pretty much impossible.

At the same time, there are in-between tasks that do not reflect the real world completely but are still relevant to it. A simple one might be how a robot should change its facing when presented with multiple relevant objects or people. You don’t need a thousand physical trials to know it should rotate itself or its camera so it can see both, or switch between them, or whatever.

OpenAI’s hide-and-seek challenge to its baby ML agents was along these lines: a game environment with simple rules (called Polyworld) that nevertheless uses real-world-adjacent physics and inputs. If the AIs can teach themselves to navigate this simplified reality, perhaps they can transfer those skills, with some modification, to full-blown reality.
Such is the thinking behind the experiment, anyway, but it’s entertaining enough on its own.

The game pits two teams against one another in a small 3D arena populated with a few randomly generated walls and objects. Several agents are spawned in it, and the “hiders” are given a few seconds to familiarize themselves with the environment and hide. They can also interact with the objects, pushing them around and locking them in place. Once the “seekers” look up, they have a certain amount of time to spot the hiders.

The machine learning program was given only the basic senses — each agent has a view of the world and a sort of “lidar” to tell it the positions of nearby objects — and the knowledge that objects can be moved around. Beyond that, the agents were simply awarded a point when they succeeded at their job, either seeking or hiding — that’s their motivation.

From these basic beginnings came wonderfully interesting results. At first the teams essentially moved randomly. But over millions of games the hiders found that by moving in a certain way — crudely “running away” — they […]
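To make the setup concrete, here is a minimal sketch of the kind of sparse team reward the article describes: one side scores when no hider is visible, the other when one is spotted. The geometry (a simple 2D view-cone test) and all names are illustrative assumptions, not OpenAI’s actual environment or reward function.

```python
# Toy version of the team reward described above: hiders score when no hider
# is visible to any seeker, seekers score when one is spotted.
# The view-cone geometry is an illustrative assumption, not OpenAI's code.
import math

def can_see(seeker, hider, max_range=10.0, fov_deg=90.0):
    """True if `hider` (x, y) falls inside `seeker`'s view cone.

    A seeker is (x, y, facing_degrees)."""
    dx, dy = hider[0] - seeker[0], hider[1] - seeker[1]
    if math.hypot(dx, dy) > max_range:
        return False
    angle = math.degrees(math.atan2(dy, dx)) - seeker[2]
    angle = (angle + 180) % 360 - 180  # wrap to [-180, 180)
    return abs(angle) <= fov_deg / 2

def team_rewards(seekers, hiders):
    """(+1, -1) to (seekers, hiders) if any hider is visible, else (-1, +1)."""
    spotted = any(can_see(s, h) for s in seekers for h in hiders)
    return (1, -1) if spotted else (-1, 1)

# A hider directly ahead of a seeker facing +x is spotted; one far behind is not.
print(team_rewards([(0, 0, 0)], [(5, 0)]))    # -> (1, -1)
print(team_rewards([(0, 0, 0)], [(-20, 0)]))  # -> (-1, 1)
```

The point of such a sparse, zero-sum signal is that nothing tells the agents *how* to hide or seek — every tool-using strategy has to emerge from trial and error against the opposing team.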

Google starts highlighting key moments from videos in Search

Google today announced an update to how it handles videos in search results. Instead of just listing relevant videos on the search results page, Google will now also highlight the most relevant parts of longer videos, based on timestamps provided by the video creators. That’s especially useful for how-to videos or documentaries.

“Videos aren’t skimmable like text, meaning it can be easy to overlook video content altogether,” Google Search product manager Prashant Baheti writes in today’s announcement. “Now, just like we’ve worked to make other types of information more easily accessible, we’re developing new ways to understand and organize video content in Search to make it more useful for you.”

In the search results, you will be able to see direct links to the different parts of a video, and clicking on one will take you right to that part of the video. To make this work, content creators first have to mark up their videos with bookmarks for the specific segments they want to highlight, no matter what platform they are on. Indeed, it’s worth stressing that this isn’t just a feature for YouTube creators. Google says it’s already working with video publishers like CBS Sports and NDTV, which will soon start marking up their videos.

I’m somewhat surprised that Google isn’t using its machine learning wizardry to mark up videos automatically. For now, the burden is on the video creators, and given how much work simply creating a good video is, it remains to be seen how many of them will do so. On the other hand, it’ll give them a chance to highlight their work more prominently on Google Search, though Google doesn’t say whether the markup will have any influence on a video’s ranking on its search results pages.
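For publishers wondering what “marking up” a video looks like in practice, the pattern Google documents for this kind of feature is schema.org structured data: `Clip` entries nested inside a `VideoObject`, each with a name, start/end offsets and a deep link. The sketch below builds such JSON-LD in Python; the titles and URLs are invented for illustration, and the exact field requirements should be checked against Google’s current structured-data documentation.

```python
# Hedged sketch of schema.org Clip markup inside a VideoObject, the JSON-LD
# pattern Google documents for video key moments. All titles and URLs here
# are illustrative placeholders, not a real publisher's markup.
import json

def clip(name, start, end, url):
    """One labeled segment; offsets are seconds from the start of the video."""
    return {
        "@type": "Clip",
        "name": name,
        "startOffset": start,
        "endOffset": end,
        "url": url,
    }

video_markup = {
    "@context": "https://schema.org",
    "@type": "VideoObject",
    "name": "How to tile a bathroom",              # illustrative title
    "description": "A step-by-step how-to video.",
    "hasPart": [
        clip("Prepare the wall", 0, 120, "https://example.com/video?t=0"),
        clip("Lay the first row", 120, 300, "https://example.com/video?t=120"),
    ],
}

# Serialized JSON-LD, ready to embed in a <script type="application/ld+json"> tag.
print(json.dumps(video_markup, indent=2))
```

Each `Clip` is what would surface in Search as a clickable key moment, with its `url` jumping straight to the matching timestamp.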