Written by three economists, Prediction Machines analyzes the potential of AI as a fall in the cost of prediction. The top review on Amazon reduced the book to three propositions:
- AI is mostly about prediction
- The cost and price of prediction are falling
- This will increase demand for complementary skills, like judgement and decision making.
While accurate, the review does not do the book justice. Most interesting is the taxonomy of the different kinds of uncertainty: known knowns, unknown knowns, known unknowns, and unknown unknowns. From a review:
> Building on Donald Rumsfeld’s oft-repeated taxonomy of known knowns, known unknowns, and unknown unknowns, the trio of economists add another category: unknown knowns. For Agrawal, Gans, and Goldfarb, known knowns represent a sweet spot for artificial intelligence—the data are rich and we are confident in the predictions. In contrast, neither known unknowns nor unknown unknowns are suitable for artificial intelligence. In the former, there are insufficient data to generate a prediction—perhaps the event is too rare, as may often be the case for military planning and deliberations. In the latter, the requirement for a prediction isn’t even specified, a situation described by Taleb’s black swan. In the final case of unknown knowns, the data may be plentiful and we may be confident in the prediction, but the answer can be very wrong due to unrecognized gaps in the data set, such as omitted variables and counterfactuals that can contribute to problems of reverse causality.
This last problem is illustrated by the story of an early AI chess program. After analyzing thousands of chess games played by humans, the program began sacrificing its queen after only a couple of moves. In the human games, sacrificing the queen had usually led to checkmate within a few moves. This is a classic mistake: confusing correlation (losing your queen often preceded victory) with causation (humans sacrifice the queen only when doing so leads to checkmate).
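The chess anecdote can be made concrete with a toy simulation (entirely hypothetical, not from the book): if humans sacrifice the queen only when a forced mate is already in sight, a naive frequency-based predictor sees a perfect win rate after sacrifices and "learns" that giving up the queen causes victory.

```python
import random

random.seed(0)

# Hypothetical data-generating process: a forced mate (the hidden cause)
# is rare, and humans sacrifice the queen only when it is available.
games = []
for _ in range(10_000):
    mate_available = random.random() < 0.1        # rare winning tactic
    sacrificed = mate_available                   # sacrifice only with mate in sight
    won = mate_available or random.random() < 0.4  # otherwise ~40% baseline win rate
    games.append((sacrificed, won))

# A naive "prediction machine" just tallies observed win frequencies.
sac_wins = sum(won for sac, won in games if sac)
sac_total = sum(1 for sac, _ in games if sac)
base_wins = sum(won for sac, won in games if not sac)
base_total = sum(1 for sac, _ in games if not sac)

p_win_sac = sac_wins / sac_total        # always 1.0 under this process
p_win_no_sac = base_wins / base_total   # roughly 0.4

print(f"P(win | sacrifice)    = {p_win_sac:.2f}")
print(f"P(win | no sacrifice) = {p_win_no_sac:.2f}")
```

Because the data never contain a sacrifice without the hidden mate-in-sight variable, the estimated conditional probability is a perfect 1.0, and a policy built on it would recommend throwing away the queen: an unknown known.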
Here is a good summary:
> Too often in the public discourse, artificial intelligence is portrayed as magical fairy dust that should be applied liberally to our most challenging problems. Agrawal, Gans, and Goldfarb’s Prediction Machines dismisses this fallacy. Although written for a business audience, its insights are not confined to the boardroom. Prediction Machines provides a compelling, fresh perspective to help us understand what artificial intelligence is and its potential impact on our world. The text is essential reading for those grappling to make sense of the field.

> For Agrawal, Gans, and Goldfarb, artificial intelligence is simply a prediction machine—it uses information we possess to generate information we do not possess. This simple realization immediately refocuses contemporary discussions and guides fruitful development of artificial intelligence. It underscores the situation-specific nature of its data and tools. It discloses its fallibility. And it reveals the role of predictions in our decision process, not as determinants but rather as inputs that must be evaluated according to our uniquely-human judgement. According to the three economists, that is the “most significant implication of prediction machines”—they “increase the value of judgement.”