Braitenberg begins this chapter by sitting back and reflecting on the variety of vehicles that now exist in the laboratory, and he provides a very nice description of the complexity and variety of behaviors that would be observed. "On the whole, these vehicles are surprisingly smart, especially considering the limited amount of intelligence that we, their creators, have invested in them."
However, Braitenberg is quite clear in his opinion that these machines do not think; for instance, he appeals to the notion that they do not exhibit any originality in their problem solving.
Nevertheless, this does not imply to Braitenberg that thinking is in principle beyond vehicles of the sort that have been described to this point in his book. A vehicle that has been around in its environment for a long time will have generated a large number of associative concepts. He describes how such a vehicle might, for example, conjoin a large number of pairwise associations into a holistic chain, producing an "image" or "idea".
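To make the chaining idea concrete, here is a minimal sketch of my own (not Braitenberg's circuitry): Hebbian outer-product learning stores pairwise associations between binary feature patterns, and feeding a single cue through the memory repeatedly retrieves the rest of the chain, a crude stand-in for assembling an "image" from parts. The patterns and their names are invented for illustration.

```python
import numpy as np

def hebbian_associate(pairs, dim):
    """Build a weight matrix W so that W @ a points toward b for each stored (a, b)."""
    W = np.zeros((dim, dim))
    for a, b in pairs:
        W += np.outer(b, a)  # Hebb's rule: strengthen connections between co-active units
    return W

def recall(W, cue):
    """Threshold the weighted sum back into a binary pattern."""
    return (W @ cue > 0).astype(int)

dim = 6
# Three hypothetical feature patterns standing in for aspects of one object
red   = np.array([1, 0, 0, 1, 0, 0])
round_ = np.array([0, 1, 0, 0, 1, 0])
sweet = np.array([0, 0, 1, 0, 0, 1])

# Store the pairwise associations red -> round and round -> sweet
W = hebbian_associate([(red, round_), (round_, sweet)], dim)

# Chaining: the cue "red" retrieves "round", which in turn retrieves "sweet"
step1 = recall(W, red)    # recovers round_
step2 = recall(W, step1)  # recovers sweet
```

With orthogonal patterns like these the chain is recovered exactly; the well-known trouble starts when patterns overlap, which is part of why I am skeptical that this scales to rich "images".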
(NB: At this point, I'm pretty skeptical. There are lots and lots of well-known limitations associated with the Hebbian learning that Braitenberg has described. Furthermore, there are lots of concerns about whether purely associative systems have sufficient computational power to be of interest to cognitive psychologists. In short, I would be surprised if the machines described to this point could generate such complex "images". However, it is not beyond imagining that more complex learning schemes might break through the barrier of my skepticism.)
One problem that Braitenberg recognizes with this type of learning is that it might produce overgeneralization (one aspect of Seidenberg and McClelland's "three bears" problem): the vehicle would operate only on class properties, ignoring the properties of individual exemplars of the class. In other words, care must be taken that the forest (concepts) does not obscure the trees (exemplars).
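The overgeneralization worry can be shown in a toy example of my own construction: if an associative system summarizes a class by averaging its exemplars into a prototype, the features shared by the class survive at full strength while the features that distinguish individuals are diluted away. The "bear" patterns below are invented, in a nod to the three bears problem.

```python
import numpy as np

# Hypothetical exemplars of one class: the first three features are shared
# class properties, the last three are unique to each individual
bear_papa = np.array([1, 1, 1, 1, 0, 0], dtype=float)
bear_mama = np.array([1, 1, 1, 0, 1, 0], dtype=float)
bear_baby = np.array([1, 1, 1, 0, 0, 1], dtype=float)

# Averaging the exemplars yields a class prototype
prototype = np.mean([bear_papa, bear_mama, bear_baby], axis=0)

# Shared class features remain at strength 1.0, but each distinguishing
# feature is diluted to 1/3 -- the individuals have been washed out
shared = prototype[:3]
distinct = prototype[3:]
```

A system that responds only to the strong components of the prototype sees the forest but no longer sees any particular tree.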