Comment 8: AI has a Schrödinger problem…

In defiance of the way writing is taught, I am going to put my thesis right at the beginning: AI is an irradiated cat in a box. Many of you may be wondering what in the world I'm talking about, but I'm certain that some of you have an idea. It's going to be a bit of a journey. We're going to visit a well-known thought experiment, think about AI in non-technical terms, and explore why I'm marrying the two. Short introduction out of the way, let's get started.

 

What’s in the box?

Erwin Schrödinger, a physicist and contemporary of Albert Einstein, developed a thought experiment as part of the scientific conversation around quantum physics. Fortunately for us all, we can engage with this experiment without winning a Nobel Prize. Schrödinger's Cat, as the experiment is known, was meant to illustrate the concept of superposition. Keeping this easy, superposition is the idea that something can exist in multiple states at the same time.

Good?

To illustrate this, Schrödinger imagined a situation: An observer acquires a box that makes it impossible to see, hear, or feel what is happening inside. In the box are a cat, a vial of poison, radioactive material, and a radiation sensor. When the sensor detects radiation, it triggers a mechanism that breaks the vial, thus poisoning the cat. The question Schrödinger puts to the observer is whether, at any given moment, the cat is alive or dead. Logically, the observer knows that poisoning, radiation, or maybe even starvation will kill the cat at some point; however, since the observer has no way to know the state of the cat, it exists in both. It exists in a superposition, both alive and dead, until its true state is finally observed.

 

What’s in the AI?

Artificial Intelligence (AI), as it exists today, is a generative algorithm that draws on a big pool of content. This type of algorithm is known as a Large Language Model (LLM). While AI can generate different forms of media and the input process is moving toward multimodality, we're going to limit ourselves to the mechanics of prompt-into-written-response. If you've no experience with AI, the process looks something like this:

  1. You ask the AI a question or give it a task (e.g., write a paper about World War II).
  2. Something happens.
  3. The AI returns what you asked for, hopefully.

Ask and receive. Seems pretty simple, but how does the AI do that? The AI is trained by taking information that already exists and placing it into the "big pool of content" mentioned previously. Where does the content come from? It's nearly impossible to tell, beyond saying that it came from the internet. There are a lot of open questions about the Intellectual Property (IP) rights to the content that's been fed into LLMs, but those are for the courts to decide and are outside the scope of this post. What isn't out of scope is that we know AI works by accepting a request, interacting with a big pool of content from the internet, and returning what you asked for.

 

That sounds great!

It does sound great, but is it? There are three huge issues with AI today:

  1. AI is terribly inconsistent. It is known to hallucinate (that is, to make things up), to sound confidently wrong, and to misunderstand human language.
  2. Like a sewer, what AI gives you depends on what goes into it, and we can't really say what's in there.
  3. There is no way to observe or understand what AI does with the unknown content inside.

This isn't a problem if we're just messing around with AI to see what happens; however, when it comes to school and work, it is unacceptable. Think about writing a paper for class or making an argument in court. Both of these actions require us to present the authority we relied on, usually by citing our sources or the cases we researched. Can we cite anything with what we understand about AI? Can we trust what it tells us? It's worth noting that Google's AI-generated search previews recently told people to add glue to keep the cheese from falling off of pizza.

 

How is AI like a cat in a box?

Schrödinger's cat sits in an impenetrable box, both alive and dead, until we can finally observe its actual state. AI is a pool of unknown content, interacted with in an unknowable way by a process we know next to nothing about, that then gives us a response to a prompt. Because we know neither what is in AI nor how it works, everything within exists in a superposition between authoritative information and garbage. That is, until we observe what AI creates and can see it for what it is.

This isn't to say that AI is useless, nor is it to suggest that AI won't someday do the things the marketing departments of AI companies are selling us, but that day isn't today. Until things improve, or the way AI works becomes more transparent, maybe don't rely on it in situations where accuracy and authority are important.

 

Comment 8 to the ABA Model Rules of Professional Conduct, Rule 1.1 states: "To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology." To that end, we have developed this regular series to build the competence and skills necessary to responsibly choose and use the best technologies for your educational and professional lives. If you have any questions, concerns, or topics you would like to see discussed, please reach out to e.koltonski@csuohio.edu.