Creating Artificial Stupidity Out Of Artificial Intelligence

01 Jun, 2022 Bob Wilson

It was, in my opinion, one of the best sessions at this year’s Annual Insights Symposium produced by NCCI last month. “Human Factors - Expanding the Science of Predictive Analytics and Artificial Intelligence (AI)” was presented by James Guszcza, Research Affiliate with the Center for Advanced Study in the Behavioral Sciences at Stanford University. Guszcza discussed how AI is reshaping economic and societal landscapes across the nation, but he also emphasized that AI failures and data science projects that yield no economic value are a growing problem. In particular, the phrase he used repeatedly (and to great effect) was “Artificial Stupidity.” Artificial stupidity is created when AI and other technologies are deployed without “guardrails” and safety systems to keep a process from going completely awry.

The “Key Insights” listed for his presentation were:

  • Human-machine hybrid intelligence is a better framework to guide practice than “AI.”
  • The focus of hybrid-intelligence design is real-world results, not machine outputs.
  • Hybrid-intelligence design goes beyond machine learning to take into account human values, needs, and relative cognitive strengths and limitations.

Guszcza’s points related to the fact that too much faith can be placed in technology and AI, and that a focus on the final data output often fails to account for the human factors that should be part of the equation. The result is a process or policy that on its face makes no sense, or that creates greater problems than the ones it supposedly solved, hence the resulting Artificial Stupidity. Guszcza gave numerous examples of situations where system designers failed to take these factors into account, with less than stellar results as the outcome.

Some of the examples he shared were of an AI “medical chatbot” that in testing told a mock patient to commit suicide, an Amazon AI recruiting project that was scrapped over its bias against women, and an article outlining how a Tesla driver was killed when he ignored warnings from the car’s Autopilot system. All of the systems in those examples were designed with a focus on data output, with no consideration for the unpredictable human element in the equation.

It is a critical warning for the workers’ compensation industry, as a growing number of experts predict the claims management function will soon become the realm of computers and artificial intelligence. Nowhere will you find a better example of the need to consider human frailties and unpredictability than the world of workers’ comp. A full-court press toward automation could spell disaster for the industry and the people it serves if we fail to understand its potential from the correct perspective.

Simply put, AI technology may become an effective tool to aid humans in managing the needs of injured workers, but it will never be the “be-all, end-all” solution for the industry. Systems using artificial intelligence will need to be part of an integrated process that gives humans better information with which to make decisions. However, the AI system itself should never be placed in that decision-making role.
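To make that division of labor concrete, here is a minimal sketch of what a human-in-the-loop guardrail might look like in code. Everything in it is hypothetical and for illustration only: the claim fields, the thresholds, and the idea of a model emitting both a severity score and a confidence value are my own assumptions, not anything from Guszcza’s presentation or any vendor’s actual system. The point is simply that the machine’s output informs a routing decision, and anything non-trivial lands on a human adjuster’s desk.

```python
from dataclasses import dataclass
from enum import Enum


class Route(Enum):
    AUTO_APPROVE = "auto_approve"   # reserved for clearly minor, high-confidence claims
    HUMAN_REVIEW = "human_review"   # the default: an adjuster owns the decision


@dataclass
class ClaimScore:
    claim_id: str
    predicted_severity: float  # hypothetical model output: 0.0 (minor) to 1.0 (severe)
    confidence: float          # hypothetical self-reported certainty: 0.0 to 1.0


def triage(score: ClaimScore,
           severity_cutoff: float = 0.2,
           confidence_floor: float = 0.9) -> Route:
    """Guardrail: the model may only fast-track claims it rates as clearly
    minor AND is highly confident about. Everything else is routed to a
    human, who sees the score as context but makes the actual decision."""
    if score.predicted_severity < severity_cutoff and score.confidence >= confidence_floor:
        return Route.AUTO_APPROVE
    return Route.HUMAN_REVIEW


# A borderline claim is never decided by the machine alone.
claim = ClaimScore(claim_id="WC-1047", predicted_severity=0.45, confidence=0.97)
print(triage(claim).value)  # prints "human_review"
```

The design choice that matters here is the default: when the model is uncertain, or the stakes are anything but trivial, the system falls back to a person rather than pressing ahead on machine output alone.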

As Guszcza summarized in his presentation, AI experts “must use a broader, design-led perspective that blends ethics, psychology, and the local knowledge of end users in developing hybrid human-machine intelligence systems.” That is advice the workers’ compensation industry really needs to heed.

Lord knows we have enough real stupidity to contend with. We don’t need to go out and create artificial stupidity to accompany it.
