Robots, and artificial intelligence (AI) agents in general, lack a foundation of concepts to build on, partly because many advanced AI systems are trained with reinforcement learning (RL), which is essentially self-education through trial and error. Agents trained with RL can execute the job they were trained for very well, in the environment they were trained in. But change the job or the environment, and these systems often fail.
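To make the trial-and-error idea concrete, here is a minimal sketch of tabular Q-learning on a toy five-state corridor. The environment, reward, and hyperparameters are illustrative assumptions, not details from the article; they just show an agent learning purely from outcomes, with no prior concepts.

```python
import random

# Toy corridor: the agent starts at state 0 and must reach the goal state.
N_STATES = 5
ACTIONS = (-1, +1)                     # step left or right
GOAL = N_STATES - 1
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def greedy(s):
    """Pick the best-known action in state s, breaking ties randomly."""
    best = max(q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(s, a)] == best])

for episode in range(500):
    s = 0
    while s != GOAL:
        # Trial and error: mostly exploit current estimates, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else 0.0
        # Update value estimates from observed outcomes only.
        q[(s, a)] += alpha * (
            r + gamma * max(q[(s_next, b)] for b in ACTIONS) - q[(s, a)]
        )
        s = s_next

# The learned table is tied to this exact corridor: move the goal or
# reshape the environment and the policy must be relearned from scratch.
print({s: greedy(s) for s in range(N_STATES)})
```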
To get around this limitation, computer scientists have begun teaching machines important concepts before setting them loose. It is like reading a manual before using new software: you could explore without one, but you will learn far faster with it.
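One common way to hand an agent a "concept" up front is potential-based reward shaping (Ng et al., 1999): prior knowledge is encoded as a potential over states, so every step carries a hint rather than feedback arriving only at the goal, while the optimal policy is provably unchanged. The sketch below reuses the corridor example above; it is an illustrative assumption about how priors can be injected, not the specific approach described in the article.

```python
def potential(s):
    # Prior concept: states closer to the goal are more valuable.
    return s / (N_STATES - 1)

# Reset the table so the agent learns fresh, this time with a prior.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else 0.0
        # Shaped reward: the prior guides exploration toward the goal.
        shaped = r + gamma * potential(s_next) - potential(s)
        q[(s, a)] += alpha * (
            shaped + gamma * max(q[(s_next, b)] for b in ACTIONS) - q[(s, a)]
        )
        s = s_next
```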
From Quanta Magazine
View Full Article