Created on 2023-03-27 03:36
Published on 2023-03-27 03:51
In this article, I presented a simulated job interview with ChatGPT v3.5:
A job interview with ChatGPT v3.5 - 7 December 2022, EN
Considering the result, and after some other interactions with that A.I., I wrote a post in which I expressed my concern about the mass use of this kind of chatbot:
Miss Poppins versus Mr. Spock - February 2023, EN
Even before that post, I wrote another one about the easy ways to work around the A.I. ethics embedded in the chatbot:
Il codice etico di ChatGPT (The ethical code of ChatGPT) - January 2023, IT
ChatGPT v3.5, led through a guided interview, can pass a basic Turing test.
This is a nice feature for everyone who needs a ghostwriter on a topic they know as an expert (output validation), who knows (or has learned) how to deal with an A.I., and whose output is meant for human readers.
For mass adoption, instead, this ability to simulate emotions and to write emotional, human-like content is a great source of trouble and confusion. The average Joe will easily fall into the idea that the chatbot is a kind of peer. This feeling will lead to opposite endings: fear in the smarter, bullying in the dumber.
The mass adoption of a chatbot implies that it speaks (writes) like Mr. Spock from Vulcan: totally rational and emotionless. People would not be enthusiastic about this kind of A.I., and some will hate it because Miss Poppins is much more agreeable, but nobody will fall into the fallacy of considering an A.I. a kind of peer. Everyone will interact with the A.I. as a tool.
Ok, I admit that Mr. Spock would not have set the adoption record of 100 million users in a month, but after the subscription hype we need to consider the real-world usage: no emotions, no mistakes. This is the goal to achieve.
We do not need Miss Poppins to tell us a fairytale, in almost all cases.
Emotional A.I. should be a niche restricted to specific jobs/tasks for specific professional users, not the standard approach. A robot should never be mistaken for a human, nor should it confuse a human. Until that day, A.I. will not be ready for the masses.
The philosopher Gaspard Koenig writes:
If we adopt ChatGPT and its avatars today, we will be sorry in a few years that half of humanity has become flat-earthers [brainwashed].
He writes this in his article La faillite épistémologique de ChatGPT (The epistemological bankruptcy of ChatGPT), a chronicle that I shared on LinkedIn:
https://www.linkedin.com/posts/robertofoglietta_si-lon-adopte-chatgpt-et-ses-avatars-aujourdhui-activity-7035434164896026624-QGT5
A longer excerpt from the article:
In this sense, ChatGPT is the exact opposite of Wikipedia, which, despite all its faults, maniacally cites its sources and tries to bring out an objective truth from processes of deliberation between human contributors. The wiki's strength lies in its business model: giving. In contrast, the likely introduction of targeted advertising in chatbots will complete the deconstruction of knowledge by subjecting it to the vagaries of the market. Progress is not about blindly embracing every "innovation" but about learning from mistakes. We now know the phenomenon of the cognitive bubble. If we adopt ChatGPT and its avatars today, we will be sorry in a few years that half of humanity has become flat-earthers. Let's not let a handful of post-teens in Silicon Valley abolish the norm of truth.
Habemus Mr. Spock, finally.
© 2023, Roberto A. Foglietta, licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International terms (CC BY-NC-SA 4.0).
This article can be easily converted to PDF using the free webtopdf.com service.