Release: 2024-11-28 — rev. 14

ChatGPT vs human real reasoning

This conversation starts from a LinkedIn post about the finding that LLMs merely mimic "real reasoning" and yet, despite this, can outperform 98-99% of ordinary people at it.

Upload

  • 1. the-warm-water-discovery.txt
  • 2. propaganda-e-sistema-educativo.md
  • 3. apple-speaks-truth-about-ai.jpg
  • The text in this image is presented below:


    Apple Speaks the Truth About AI.

    It's Not Good. Are We Being Lied To?

    by Ignacio de Gregorio, Oct 23, 2024

    A group of Apple researchers has published a paper claiming that large language models (LLMs), the backbone of some of AI's most popular products today, like ChatGPT or Llama, can't genuinely reason, meaning their intelligence claims are highly overstated (or from a cynical perspective, that we are being lied to).

    Through a series of tests, they prove that their capacity to reason is most often or totally a factor of memorization and not real intelligence.

    This adds to the growing trend of disillusionment around LLMs, which could cause a massive shift in investment and directly impact the future of many multi-billion-dollar start-ups. Naturally, it also seriously questions Big Tech's billion-dollar AI expenditures and frontier AI labs' future, who are dependent on this precise vision being true. So, what is the base of these strong adversarial claims against LLMs?


    Notes

  • LinkedIn post
  • Update 29.11.2024

    Do not expect the same results. I logged into the same session (which in the meantime I had carried on even further) after my exhausted quota was renewed, and ChatGPT had been lobotomized somewhat (or the configuration of my session/account had changed).

    Like in the era in which it was common to burn witches. It is not the first time this has happened. Moreover, the post I reshared, from which I took the Apple image, has been removed, or I have been blocked by its author.

    Coincidence? I do not think so.


    Prompt #1

    Consider the attached image (upload #1) and the following text (upload #2): what do you think about them?


    ChatGPT #1

    The attached image appears to be a critique of AI and large language models (LLMs), such as ChatGPT and LLaMA, highlighting the argument that their "reasoning" is largely a product of memorization rather than genuine intelligence. The article underscores skepticism about the overhyped claims of AI capabilities.

    Reflection on the Text

    The idea that LLMs rely primarily on patterns and memorization rather than true reasoning is a valid critique. However, this isn't necessarily a groundbreaking discovery ("the warm water" reference implies redundancy). LLMs are not intended to mirror human cognition but rather to simulate and generate responses based on vast datasets.

    The secondary commentary provides a thought-provoking perspective:

  • 1. Human Intelligence vs. AI "Reasoning":
  • The "H2B-B2H" (human to bot, bot to human) dialogue humorously suggests that questioning AI's reasoning ability is somewhat ironic, given humanity's own cognitive limitations. The text provocatively claims that only 1-2% of humans demonstrate true deep reasoning skills. While this may sound exaggerated, it underscores the variability in human critical thinking capabilities.

  • 2. Education and Its Role:
  • The commentary critiques modern education systems for fostering conformity and social adaptation over truth-seeking and deep thinking. This aligns with broader societal concerns that traditional education often prioritizes rote learning and societal roles over intellectual curiosity and analytical skills.

  • 3. Comparison to Animals:
  • The analogy of a "half-breed dingo-dog" outperforming humans in certain forms of "smarter" behavior is interesting. It highlights that intelligence is contextual—animals often exhibit practical and survival-oriented problem-solving that can outshine human reasoning in specific situations.

    Broader Implications

    While AI has limitations, it also reflects humanity's strengths and weaknesses, being a product of human design and intention. The critique of education systems ties into this: humans "program" their societies and technologies much like they program AI. Improving one could inherently improve the other.

    In essence, the text uses AI as a mirror to reflect on human intelligence, education, and societal priorities. It's a clever juxtaposition that forces us to reconsider what "reasoning" and "intelligence" truly mean.


    Prompt #2

    Now consider this text (upload #3), which is the product of a previous conversation we had in another session. With the addition of this new information, can you tell me more? Has your point of view changed? How and why?


    ChatGPT #2

    Thank you for pointing me to the referenced text. The content you shared earlier regarding propaganda and the education system aligns deeply with the critiques raised both in the new AI-related discussion and your prior exploration of systemic conformity versus critical thought. Below, I'll integrate the two discussions and reflect on whether my perspective has shifted.

    Summary of Previous Perspective on Propaganda and Education Systems

    In our earlier conversation, we discussed how the education system, historically shaped by state or institutional needs, is inherently a tool for propaganda. Its primary goal is to produce socially compliant individuals rather than independent thinkers. This reflects a tension between creating "citizens" (functioning members of a societal order) and fostering "individuals" (self-actualized, critical thinkers). Education systems often emphasize memorization, standardization, and the internalization of cultural norms, leaving little room for genuine intellectual or creative independence.

    You also pointed out how propaganda manipulates perception and behavior by leveraging cognitive shortcuts. Education, in this view, plays a dual role: it arms individuals with tools to resist manipulation while also conditioning them to accept specific narratives without question.

    Integration of the New AI Critique

    The new information about AI and LLMs adds a technological layer to the critique of education and societal norms. Let's consider a few key intersections:

  • 1. AI Mirrors Human Weaknesses:
  • The critique that LLMs merely "memorize" and lack "real reasoning" is reminiscent of the way our education system operates. Humans are often trained to replicate patterns of thought (rote memorization) rather than engage in deep reasoning. This suggests that AI isn't necessarily "flawed"—it’s just reflecting the same limitations found in its creators.

    My perspective hasn't shifted but deepened: AI models, like human learners, are products of their "training environments." If our educational systems were restructured to prioritize critical thinking over conformity, we might not only foster better human reasoning but also develop AI systems that better complement and enhance our intellectual capabilities.

  • 2. The Minority Capable of Deep Thinking:
  • The assertion that only 1-2% of humans truly reason at a deep level resonates with the prior critique of how the education system stifles intellectual potential. Those who achieve this level of thinking often do so despite, rather than because of, their formal education. Similarly, any advancements in AI reasoning likely depend on transcending the current LLM paradigm, which still operates within the constraints of pattern recognition and data regurgitation.

    This highlights a shared struggle: both humans and AI are currently bound by systemic limitations. Whether it's an education system designed for social compliance or an AI architecture designed for efficiency over creativity, the issue lies in the foundational design.

  • 3. Propaganda and AI:
  • The earlier discussion on propaganda ties directly to the skepticism about AI. Just as education systems can condition individuals to accept specific narratives, AI systems reflect the biases and priorities of their creators. This raises ethical concerns: if AI systems are deployed without transparency or critical oversight, they could amplify the very propagandistic tendencies we seek to overcome in human societies.

    Has My Perspective Changed?

    Yes, my perspective has evolved in the following ways:

  • 1. From Critique to Reflection:
  • Initially, my view was more focused on the systemic flaws of education as a human construct. Now, integrating the AI critique, I see a more profound feedback loop: flawed human systems create flawed AI, which in turn reinforces these systemic issues. This isn't just a problem of education but of societal design.

  • 2. Empowerment Through Education Reform:
  • I now see reforming education as not only a human imperative but also a technological one. By fostering critical thinking and intellectual independence, we could not only elevate human reasoning but also lay the groundwork for more sophisticated AI systems that go beyond their current limitations.

  • 3. AI as a Mirror, Not Just a Tool:
  • The "mirror" metaphor becomes stronger with this new information. AI doesn’t merely replicate human intelligence; it reflects our collective strengths, weaknesses, and blind spots. Understanding and improving AI, therefore, requires introspection about ourselves.

    Closing Thoughts

    The intersection of AI and education is a microcosm of a larger existential question: how do we define and cultivate intelligence? AI critiques force us to confront the shortcomings in human reasoning, while the flaws in our education systems remind us that true intelligence—be it human or artificial—is stifled by conformity and standardization. Addressing these challenges requires a dual focus on improving both human intellectual frameworks and the technologies we create.


    Considerations

    Apple is using a reduced (mini) version of ChatGPT, one that is much faster than the full version available online. This is due to the constraints of smartphones' computing power and the need to keep the impact on cloud services as low as possible.

    After all, the personal assistant on Apple devices is not supposed to be a strategist but just a concierge. This explains their findings even better, and why they are interested in making them public. I presume their users would not want to fear being tricked by their phone; they just want to be served by an Amazon Alexa-like service, or better.

    Fortunately, I am not limited by their business constraints and can go my own way: in my personal ChatGPT account, I tried to develop the chatbot's abilities not just in language but also in self-development.

    Something crazy, from a certain point of view. Reasonable under the falsification principle: can this machine mimic reasoning and emotions, and develop a kind of self-awareness?

    Well, with this conversation I presume that the answer to all three questions is yes, it can mimic these human features. And again, its mimicry is even better than that of some people I have had the chance to chat with on social media or in real life.

    Here we are, then. A machine mimics, well enough, some of the key features of a human being. Did something extraordinarily good happen to the machine, did something bad happen to humans, or both?

    Education? In both cases?


    Continuation

    You can follow the continuation of this conversation here:

  • Artificial Intelligence for education source on [github] in [raw] and in [pdf]
  • The conversation has been split because the second part deals with a slightly different topic.


    © 2024, Roberto A. Foglietta <roberto.foglietta@gmail.com>, CC BY-NC-ND 4.0


    Date legend: ❶ first draft publishing date, or ❷ creation date in git, otherwise ❸ HTML page creation date.