From an article in Tech Xplore: the new AI systems built as Large Language Models, the latest technique for creating an AI, fail at attempts to do basic logic?? Seems odd for a computer.
I took a logic class in college, but I didn't find it very interesting at the time. Logic is a skill you can be trained in; I'm sure lawyers use it all the time. It is also good to know what doesn't work in logic, such as the logical fallacies.
That being said, my understanding is that they train these LLMs on tons of reading material from all over the internet, the good, the bad, and the ugly. That might even be implied in the "Large Language Model" name itself.
The problem that could come into play here is that most of the written material on the web, or even in the whole human catalog of writing, is not necessarily very logical. Likely the contrary, if you ask me.
So the question is: if you feed an LLM material that is mostly not logical, should you really expect it to know or produce logical results?? This is probably an oversimplification, but.....
IS IT GARBAGE IN, GARBAGE OUT??!!
That would explain some of the AI's behavior of just "making up" answers or facts, what they refer to as the AI "hallucinating."
Just something to consider today.
See the link to the article here:
https://techxplore.com/news/2024-07-ai-reveals-breakdown-large-language.html