Friday, April 2, 2010
Grand unified theory of AI: New approach unites two prevailing but often opposed strains in artificial-intelligence research
In the 1950s and '60s, artificial-intelligence researchers saw themselves as trying to uncover the rules of thought. But those rules turned out to be far more complicated than anyone had imagined. Since then, artificial-intelligence (AI) research has come to rely instead on probabilities -- statistical patterns that computers can learn from large sets of training data. The probabilistic approach has been responsible for most of the recent progress in artificial intelligence, such as voice-recognition systems or the system that recommends movies to Netflix subscribers. But MIT research scientist Noah Goodman thinks that AI gave up too much when it gave up rules. By combining the old rule-based systems with insights from the new probabilistic systems, Goodman has found a way to model thought that could have broad implications for both AI and cognitive science... [read the full article]
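To give a rough flavor of the combination the excerpt alludes to -- rule-like structure plus learned probabilities -- here is a minimal Python sketch. It is not Goodman's system (the full article describes his actual approach); the rules, variable names, and numbers below are invented for illustration. The idea is that "rules of thought" are written as an ordinary generative program whose choices are random, and a query is then answered by sampling the program.

    # Toy sketch only: rules encoded as a generative program with
    # probabilistic choices; all priors and rules here are made up.
    import random

    def model():
        # Rule-like structure: a cold probabilistically causes a cough,
        # and allergies can cause one too.
        cold = random.random() < 0.10           # assumed prior on a cold
        allergies = random.random() < 0.20      # assumed prior on allergies
        cough_prob = 0.05                       # baseline cough rate
        if cold:
            cough_prob = max(cough_prob, 0.70)  # rule: colds usually cause coughs
        if allergies:
            cough_prob = max(cough_prob, 0.40)  # rule: allergies sometimes do
        cough = random.random() < cough_prob
        return cold, cough

    def p_cold_given_cough(num_samples=100_000):
        # Rejection sampling: run the "rules" forward, keep only the runs
        # consistent with the observation (cough=True), and read off the
        # fraction in which the hypothesis (cold=True) holds.
        kept = colds = 0
        for _ in range(num_samples):
            cold, cough = model()
            if cough:
                kept += 1
                colds += cold
        return colds / kept if kept else float("nan")

    if __name__ == "__main__":
        print(f"P(cold | cough) is roughly {p_cold_given_cough():.2f}")

Running this prints an estimate of the probability of a cold given an observed cough, computed purely by simulating the rule-like program -- a crude stand-in for the marriage of rules and probabilities the article discusses.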