PLEASE NOTE: This blog is a bigotry-free zone open to all persons, regardless of age, race, religion, color, national origin, sex, political affiliation, marital status, physical or mental disability, or sexual orientation. Further, this blog is open to the broad variety of opinions out there and will not delete any comments based upon point of view. However, comments will be deleted if they are worded in an abusive manner and show disrespect for the intellectual process.

Monday, August 11, 2025

IN LEGAL RESEARCH, BE CAUTIOUS ABOUT USING AI!

Just up front, I will tell you that I am mostly a liberal with a few areas of conservatism thrown in for good measure.  What it adds up to is that I have many liberal friends, but many conservative ones as well.  With the Trump movement thrown in nowadays, I now have friends who are what I'd call right-wing extremists.  Because I am a purveyor of ideas, I'm not the kind who defriends people whose political views are radically different from my own.

It is because of my generally tolerant attitude that a friend recently sent to my inbox an email containing what I consider an extremely offensive article about Derek Chauvin, the killer of George Floyd, basically arguing that the man was dealt with unjustly and should get a Trump pardon.  The contents were absolutely ugly and racist, but I will spare you the details and just give you the headline: "Why Derek Chauvin Will Languish in Prison--Regardless of the Facts".

This led me to wonder whether George Floyd actually used a counterfeit $20 bill.  In the old days, before AI, if I typed a question into Google, I'd get results including a series of articles, most often going from newest to oldest.  Now when I type questions into Google, I get a summary of what appear to be the top articles on the subject.  To be honest, I appreciate getting a summary so I don't have to spend time reading the articles in detail.  It saves a lot of work which, I admit, I'm too lazy to do.  But if you are doing legal research, I urge you to be very careful about the results AI gives you, because, as I found to my chagrin, AI can be inaccurate.  That is a situation you cannot afford if you are going to court.

As I proceed, I want to let you know that anything I quote in this article about how AI responded to me is typed directly from the computer.  I don't want you to think I plagiarized; I just thought you should get the words straight from the source.  So here goes.  This was my interaction with AI.  First, I typed in my question as follows:

Was the $20 George Floyd used counterfeit?

AI responded:

Yes, the $20 bill that George Floyd used at the Cup Foods convenience store was suspected to be counterfeit.

I was satisfied with that answer for a few days.  But then I thought to ask the question again.  That's when I looked at the word "suspected".  What does that mean, "suspected"?  The answer is either yes or no.  When I first looked at the answer, I saw the word "Yes" and stopped reading.  In a more critical state of mind the next day, I thought, what do you mean, "suspected"?  I wanted more answers. I then clicked on AI's link for further information and found out that there had been no official confirmation from the Minneapolis Field Office of the Secret Service as to whether George Floyd's $20 was counterfeit or not. Huh?  How did AI get "Yes" out of that kind of statement?

Returning to Google, I decided to ask a different question because I wanted AI to explain itself.  This was my question:

Why am I getting a wrong answer from AI?  I asked whether George Floyd's $20 was counterfeit.  AI responded "Yes" but reports online indicate authorities don't know if it was counterfeit or not.

This is the answer I received from Google:

Here is why the AI might have given you a definitive "yes" when the situation is more nuanced.

1.  Training Data Limitations:

AI models are trained on vast amounts of data, which includes news articles, social media posts, and other online content.  This data can be biased, incomplete, or contain misinformation.  If the AI's training data contained a strong correlation between the $20 bill and the word "counterfeit" (due to media reports at the time, for example), it might have learned to associate the two even if there wasn't official confirmation.

2. Lack of Context and Nuance:

AI excels at recognizing patterns and generating responses based upon those patterns.  However, it lacks the ability to understand the subtleties of human situations, including the intricacies of legal investigations or the potential for initial reports to be unconfirmed or even inaccurate.

3. "Hallucination" or Generation Plausible Content:

AI models are designed to generate plausible content, sometimes even making up details if they don't have enough information.  This is sometimes referred to as "hallucination" and it means the AI prioritized creating a coherent response over absolute factual accuracy.

4. Evolving Information:

News and investigations surrounding sensitive topics like the George Floyd case are constantly evolving.  Information that seems definitive at one point can be later challenged or proven incorrect.  An AI's knowledge base may not be updated in real-time, leading to responses based on outdated information.

In Summary:

It is important to remember that AI models are still under development and can sometimes provide inaccurate or misleading information, especially when dealing with complex and evolving situations.  Always fact-check information from AI with reliable sources, especially for topics with significant social impact.

None of this sounds very reassuring.  

What this means is that you have to be very careful about what these AI summaries are telling you.  Again, I find myself returning to the wisdom and integrity of my college professors from the pre-technology era, who advised me always to review my original sources very carefully and, whenever possible, to use primary, not secondary, sources.  I will admit the concept of AI hallucinating struck me as very, very funny--almost human.
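To make the "hallucination" point a bit more concrete, here is a tiny sketch in Python that I put together myself.  It is purely my own made-up illustration, not anything Google showed me, and the phrases and counts in it are invented.  It shows how a system that simply picks the statistically likeliest next word can sound confident about something that was never actually verified:

```python
# A toy illustration (not a real language model, and not how Google's AI
# actually works) of why a statistically "plausible" answer is not the same
# as a verified one. The phrases below are entirely made up for this example.
from collections import Counter

# Pretend training data: the kinds of phrases a model might have absorbed
# from news coverage and social media at the time.
training_phrases = [
    "floyd used a counterfeit twenty dollar bill",
    "the clerk suspected a counterfeit twenty dollar bill",
    "police were called about a counterfeit twenty dollar bill",
    "he may have paid with a genuine twenty dollar bill",
    "authorities never confirmed whether the bill was counterfeit",
]

# Count which word most often follows the word "a" in the pretend data.
follow_counts = Counter()
for phrase in training_phrases:
    words = phrase.split()
    for prev, nxt in zip(words, words[1:]):
        if prev == "a":
            follow_counts[nxt] += 1

# The toy "model" answers by picking the likeliest continuation. That choice
# reflects how often words appeared together, not whether anything was verified.
most_likely_word, count = follow_counts.most_common(1)[0]
print(most_likely_word, count)  # -> counterfeit 3
```

A real AI model is vastly more sophisticated than this toy, but the basic caution holds: how often words appear together in the training data is not the same thing as official confirmation.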

At the end of its lengthy explanation of why AI can be inaccurate, AI gave me the following answer to my question regarding whether George Floyd's $20 was counterfeit.  See AI's elaboration on the question below:

What was the actual story about George Floyd's $20.00?

The 911 call that led to the police encounter stemmed from a report by a store clerk alleging that George Floyd had used a counterfeit $20 bill to purchase cigarettes.

However, there is no official confirmation from authorities or the Secret Service, whose Minneapolis Field Office investigates counterfeit money cases, as to whether the bill was definitively counterfeit.

The clerk who accepted the bill testified that he believed it was fake because of an unusual blue pigment, but he also thought Floyd might not have known it was counterfeit and even considered paying for the cigarettes himself.

The owner of the store where the incident took place has stated that they had previously called the police about similar issues, and the police had never responded in the past.

I ended that Google search and started a new one.  This time I decided to ask my original question again. Interestingly enough, upon returning to my original question, I found myself still getting the original answer that "Yes," George Floyd's $20 was counterfeit. I clicked on the link asking AI to give me more detail.  AI then quoted to me a CBS News report dated May 26, 2021 stating as follows, "It has been confirmed by the US Secret Service that the $20 bill used by George Floyd to purchase cigarettes at the Cup Foods convenience store was counterfeit...their Minneapolis Field Office investigates counterfeit money and determines its authenticity." When I tried to get back to the response page that stated authorities hadn't yet provided any confirmation, I was unable to do so.

From that time on, as I tried to get at the truth, AI became more and more adamant that, indeed, the bill was counterfeit. On social media, however, I found comments to the effect that the George Floyd $20 bill had never been determined to be counterfeit, alongside other comments stating that it had.  In the end, I wasn't sure whether AI was responding to a perceived need on my part to get a "Yes" answer, and so was weighting that kind of data more and more heavily, or whether that was, in fact, the correct answer.  Perhaps you folks can let me know what the story is in the comments below.

To my friend who sent me the original article, I responded that no one should be killed over a $20 bill.  Police should have better judgment than that.  I was also reminded of a recent experience of my own.  A few months ago, I went into a local CVS. While I was there, I tried to purchase an item at a self-service machine using, you guessed it, a $20 bill.  The machine refused to accept the bill, so I called a sales clerk over to help me.  He took a look at the bill and announced that it was counterfeit.  He made a bit of a display about it, holding it up and waving it around, so that I became embarrassed.  I ended up using a credit card to pay for my purchases instead.

Still, I was fired up about that $20 bill because of the humiliating way the sales clerk had behaved. I immediately drove over to my bank, showed the bill to the bank teller, and asked if it was counterfeit.  She reviewed it carefully and then told me it was perfectly fine and not counterfeit.  I was nervous about taking it back after the big fuss the clerk had made, so the bank teller gave me a new $20 and put mine in the till with the others.  I can just imagine how the outcome might have been different if I were a stocky black man like George Floyd.

Returning to the issue of legal research, I again want to emphasize--you must be very critical and observant when you are doing your legal research with the help of AI, because AI can be wrong, and you don't want to make a fool of yourself, particularly when it counts and when the lives of your children depend upon your being accurate and credible.