Friday, November 27, 2020

An observation on AI threat analysis

In the legitimate practice of threat analysis, there is a structural problem where China is concerned: the perceptual barrier of language, culture, and history. An analyst needs not only a strategic and/or tactical analytic background, but also the ability to read the language and a deep grounding in the history and culture. This is a very small group of people here in the US. Frankly, there is an institutional bias against cultivating such people or allowing them policy input, lest they get "uppity."

It is so much easier for generalists in the national security realm, such as generals and policy makers, not to have to deal with reports that go against their material inclinations. What policy makers, who are typically industry flaks, are looking for is targeting analysis, not policy analysis. They make the policy. The analysts or so-called experts who do rise to the top of the political and think tank institutions dealing with foreign and national security policy have demonstrated the appropriate deference to the policy makers' institutional goals and to their preconceived policies and propaganda. Naturally, they tend to articulate the perspective to which they have been conditioned over the course of their careers by the institutional incentives in place.

The policy makers’ biggest hangup is when people who are in fact expert analysts tell them something that doesn’t jibe with the drive for the latest deployment plan for a numbered fleet, weapon system, new base, or next war. This happens, not often, but occasionally. Usually a few of these people come out of the woodwork on the eve of the next invasion. So there is a risk to preconceived policy objectives, as formulated by public relations and propaganda managers, when human analysts who know what they are talking about “go rogue,” so to speak. With AI, the generalists with their MBAs and degrees in international relations, who think they are qualified to speak on any problem anywhere on earth, don’t have to cope with actual experts who may publicly disagree with what they say.

AI can substitute for the expensive recruitment systems needed to procure, retain, and develop the human resources necessary for reliable analysis. Building an adequate pool of this sort of human talent takes years and requires long-term planning and resource allocation. This aspect of “threat analysis” policy has always been wanting in the US because it was never a high enough priority. Typically the problem isn’t even recognized until the eve of a crisis with a particular country or in a particular region, usually when the US is on the brink of initiating another war. AI will not compensate for this void. With the proposal to move to AI threat analysis, even worse policy disasters are in store. So I think the rule is: don’t tell me about policy, just tell me where to drop the bombs. AI can fulfill this role.
