Google’s AI bot uses language models to write robot code

Writing working code can be a challenge. Even relatively simple languages like HTML require the coder to understand their specific syntax and available tools. Writing code to control robots is even more involved and often has several steps: there is code to detect objects, code to trigger the actuators that move the robot’s limbs, code to signal when a task is complete, and so on. Something as simple as programming a robot to pick up a yellow block instead of a red one is impossible if you don’t know the language the robot runs on.

But Google robotics researchers are exploring a way to fix that. They have developed a robot that can write its own programming code based on natural language instructions. Instead of having to dive into the bot’s config files to change the block_target_color from #FF0000 to #FFFF00, you can just type “pick up the yellow block” and the bot will do the rest.
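To make that contrast concrete, here is a minimal sketch in Python of the two approaches: hand-editing a hard-coded value versus handing a plain-English instruction to a code-writing model. The function names inside the generated snippet (detect_object, pick_up) are hypothetical stand-ins, not Google’s actual robot API.

```python
# Before: the target color is hard-coded, so changing behavior means editing source.
BLOCK_TARGET_COLOR = "#FFFF00"  # was "#FF0000"; edit by hand and redeploy

# After: a natural-language instruction is handed to a code-writing model,
# which returns a small executable policy (hypothetical API names below).
instruction = "pick up the yellow block"

generated_policy = '''
block = detect_object(color="yellow")  # hypothetical perception call
pick_up(block)                         # hypothetical actuation call
'''

print(f"Instruction: {instruction}")
print(f"Generated policy:{generated_policy}")
```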

Code as Policies (or CaP for short) is a coding-specific language model built on Google’s Pathways Language Model (PaLM) that interprets natural language instructions and turns them into executable code. Google researchers trained the model by giving it examples of instructions (formatted as the code comments developers write to explain what a piece of code does to anyone reading it) paired with the corresponding code. From this, it was able to take new instructions and “autonomously generate new code that recomposes API calls, synthesizes new functions, and expresses feedback loops to assemble new behaviors at runtime,” Google engineers explained in a blog post published this week. In other words, given a comment-like prompt, it can come up with plausible robot code. You can read the preprint of their work here.
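That comment-and-code pairing can be pictured roughly like the sketch below: each instruction appears as a comment followed by its code, and a new instruction is appended as a comment for the model to complete. This is illustrative only; the robot functions inside the example string are hypothetical, and this is not Google’s actual prompt format.

```python
# Illustrative comment-to-code example pairs (hypothetical robot API names).
FEW_SHOT_EXAMPLES = '''
# put the red block in the bowl
red_block = detect_object(color="red")
place(red_block, get_bowl().position)

# move the gripper 10 cm to the left
move_gripper(dx=-0.10, dy=0.0)
'''

def build_prompt(new_instruction: str) -> str:
    """Append the new instruction as a comment and let the model write the code that follows."""
    return f"{FEW_SHOT_EXAMPLES}\n# {new_instruction}\n"

print(build_prompt("pick up the yellow block"))
```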


To get CaP to write new code for specific tasks, the team provided it with “hints,” such as which APIs or tools were available to it, along with several example pairs of instructions and code. From these, it was able to write new code for new instructions. It does this using “hierarchical code generation,” which prompts it to “recursively define new functions, accumulate its own libraries over time, and independently design a dynamic codebase.” This means that given a set of instructions once, it can develop code that it can then reconfigure for similar instructions later.
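One rough way to picture hierarchical code generation: if the generated policy calls a function that does not exist yet, the model is asked to define that function too, and the result is kept in a growing library. The sketch below assumes a placeholder llm_complete helper standing in for whatever model call is used; it is not Google’s implementation.

```python
import builtins
import re

# Accumulated library: function name -> generated source code.
LIBRARY: dict[str, str] = {}

def llm_complete(prompt: str) -> str:
    """Placeholder for a call to the underlying language model."""
    raise NotImplementedError("wire up a real model here")

def _define_missing_functions(code: str) -> None:
    """Ask the model to define any called-but-undefined functions, recursing into them."""
    for name in re.findall(r"\b(\w+)\(", code):
        if name not in LIBRARY and not hasattr(builtins, name):
            LIBRARY[name] = llm_complete(f"# define a function named {name}\ndef {name}(")
            _define_missing_functions(LIBRARY[name])  # helpers may need helpers of their own

def generate_policy(instruction: str) -> str:
    """Generate policy code for an instruction, growing the library as needed."""
    code = llm_complete(f"# {instruction}\n")
    _define_missing_functions(code)
    return code
```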

[Related: Google’s AI has a long way to go before writing the next great novel]

CaP can also use the arithmetic operations and logic of specific languages. For example, a model trained on Python can use the appropriate if/else statements and for/while loops when needed, and draw on third-party libraries for extra functionality. It can also turn ambiguous descriptions like “faster” and “left” into the exact numerical values needed to complete the task. And because CaP is built on top of a general-purpose language model, it has several non-code abilities, like understanding emoji and non-English languages.
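For instance, a generated snippet might ground “faster” and “left” as concrete numbers and use ordinary Python control flow plus a third-party library like NumPy. The values and function names below are illustrative assumptions, not actual output from CaP.

```python
import numpy as np  # the kind of third-party library a Python-trained model can lean on

def adjust_velocity(velocity: np.ndarray, faster: bool) -> np.ndarray:
    """Ground vague words in numbers: 'faster' becomes a 1.5x multiplier, 'left' a -0.1 m x-offset."""
    scale = 1.5 if faster else 1.0
    left_offset = np.array([-0.1, 0.0])
    return velocity * scale + left_offset

# Ordinary loops and conditionals of the sort the model can write when needed.
waypoint_velocities = [np.array([0.2, 0.0]), np.array([0.6, 0.3])]
for v in waypoint_velocities:
    v = adjust_velocity(v, faster=True)
    if np.linalg.norm(v) > 0.5:  # clamp anything above a 0.5 m/s speed limit
        v = 0.5 * v / np.linalg.norm(v)
    print(v)
```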

So far, CaP is still very limited in what it can do. It relies on the underlying language model to provide context for its instructions; if an instruction doesn’t make sense or uses parameters it doesn’t support, it can’t write the code. Likewise, it can apparently only handle a few parameters at a time; more complex sequences of actions that require dozens of parameters are simply not possible. There are also safety concerns: letting a robot write its own code is a bit like Skynet. If the model decides the best way to accomplish a task is to spin very quickly with the robot’s arm outstretched while a person is standing nearby, someone could get hurt.

Still, it’s incredibly exciting research. One of the hardest problems in robotics is generalizing learned behavior: programming a robot to play ping-pong does not make it capable of playing other games, such as baseball or tennis. Although CaP is still miles away from such broad real-world applications, it lets a robot perform a wide range of complex tasks without task-specific training. It’s a big step toward one day being able to teach a robot that can play one game how to play another, without humans having to break everything down into new code.
