Great replacement – or not

Cédric Cartau, Tuesday, January 24, 2023

Unbelievable, the number of videos flourishing on social networks – even the BFM TV headline above, no less – to explain that Google is worried, and so on. We are now past the first play sessions with the interface, in which people ask it the strangest questions on the most diverse topics. Let's look at what ChatGPT does, what it doesn't do, what it can do, and what it will never do, at least in its current version.

Let's start with the basics: ChatGPT is not AI, at least not in v3, and in any case not according to my definition of AI, which is the following: an AI is “re-entrant”, meaning it can be looped back on itself so that the program learns by itself without outside intervention. For example, if you feed a corpus of chess rules into a self-learning program and make it play against itself, after a while (a very long while, admittedly) it will beat Kasparov. ChatGPT, on the other hand, had to be fed the grammar of the language, and without a regular diet of external sources (discussions, chats, Wikipedia, etc.) the thing will stay as dumb as a discount broom from GiFi. It cannot be made to reason about itself to improve its own experience, it cannot become self-aware, its defenses can be bypassed to make it say terrible things (see below); in short, it cannot learn without a small army of human hands showing it all the tips and subtleties of the language.
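To make the idea concrete, here is a minimal sketch (my own illustration, not taken from the article) of that kind of re-entrant loop: a program given only the rules of a trivial game (Nim: players take 1 to 3 sticks, and whoever takes the last one wins) that improves purely by playing against itself, with nobody showing it a single good move.

```python
# Minimal self-play sketch (illustration only): the program starts from the
# rules of Nim and nothing else, and learns by playing against itself.
import random
from collections import defaultdict

ACTIONS = (1, 2, 3)
Q = defaultdict(float)   # (sticks_left, action) -> learned value for the player to move

def choose(sticks, epsilon=0.1):
    """Mostly pick the best known move, sometimes a random one (exploration)."""
    legal = [a for a in ACTIONS if a <= sticks]
    if random.random() < epsilon:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(sticks, a)])

def self_play_episode(alpha=0.5):
    """Play one full game against itself, then update the value table from the result."""
    sticks, history = 15, []
    while sticks > 0:
        action = choose(sticks)
        history.append((sticks, action))
        sticks -= action
    # The side that took the last stick wins (+1); walking back up the game,
    # every earlier move belonged to the other side, so the sign alternates.
    reward = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += alpha * (reward - Q[(state, action)])
        reward = -reward

for _ in range(50_000):
    self_play_episode()

# After enough games the program has rediscovered the winning rule
# ("always leave a multiple of 4 sticks") without outside intervention.
print({s: max((a for a in ACTIONS if a <= s), key=lambda a: Q[(s, a)])
       for s in range(1, 16)})
```

That closed loop of playing, scoring and adjusting is precisely what the paragraph above argues ChatGPT does not have: remove the human-curated feed and it stops getting any smarter.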

Then, and it is explained very well in a video[1] by Mr. Phi, the current version of ChatGPT (admittedly limited) is never anything more than a giant probability database that “just” knows how to predict the next word of a sentence, and nothing else. There is nothing there that can be called “discussion” or even “answering questions”, and indeed the chatbot is very easily trapped in the video into confirming that the tuna is a mammal. To get there, the beast had to be fed not only an enormous corpus but also an algorithm running no fewer than 175 billion parameters and, above all, a human training phase to weight these probabilities, so as not to end up with a racist, misogynistic, alcoholic gun-fanatic bot. One of the tests carried out by Mr. Phi is to try to get the bot to endorse climate-sceptic theses: the designers have put safeguards in place, but they are very easy to get around. You will agree that we are a long way from artificial intelligence at this stage.
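To see why “predicting the next word” is not the same thing as reasoning, here is a deliberately caricatural sketch (mine, not Mr. Phi's, and orders of magnitude simpler than a real model): a “model” reduced to a table of next-word counts, which will cheerfully assert that a tuna is a mammal if that is what the statistics of its training text suggest.

```python
from collections import Counter, defaultdict

# Tiny "training corpus"; real models use terabytes of text and billions of
# parameters, but the principle below is the same: count what follows what.
corpus = ("the whale is a mammal . the cat is a mammal . "
          "the tuna is a fish .").split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the corpus."""
    seen = following[word]
    return seen.most_common(1)[0][0] if seen else "."

# There is no notion of truth here, only frequency: "a" was followed by
# "mammal" twice and "fish" once, so the model completes the sentence wrongly.
prompt = ["the", "tuna", "is", "a"]
prompt.append(predict_next(prompt[-1]))
print(" ".join(prompt))   # -> "the tuna is a mammal"
```

Everything else (the polite tone, the refusal to endorse certain claims) comes from the human training phase layered on top, not from any understanding of the words.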

The designer, who is putting the finishing touches to v4, warns[2] moreover that this version will still be far from qualifying as artificial intelligence: at most it will produce more accurate texts, give more reliable answers, and so on. It is also surprising to see every Tom, Dick and Harry rushing to label as artificial intelligence any piece of software capable of linking the toilet-paper stock to a stomach-bug epidemic: I advise them to go back to the fundamentals of mathematics and brush up on notions known since Aristotle: correlation, causation, induction, deduction, etc.

Other videos test the machine's capabilities on more targeted problems. Writing code is a classic, such as Excel macros or Bash scripts for Unix environments. Unsurprisingly, the generated code is clearly imperfect, and it is strongly recommended to go back over it to catch some monumental mistakes, but the bot produces a usable skeleton, so a large part of the work is already done.
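As an illustration (my example, not one from the videos cited): suppose you ask the bot for a small script that lists the ten largest files under a directory. Below is roughly the kind of skeleton you could expect back, shown here after human review, with comments flagging the details a generated first draft typically gets wrong.

```python
# Hypothetical bot-style skeleton after review: list the ten largest files
# under a directory. The structure is usually right; the commented details
# are the kind of thing a human still has to go back and fix.
import os
import sys

def largest_files(root, count=10):
    """Return the `count` largest files under `root` as (size, path) pairs."""
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                # Generated drafts often omit this try/except and crash on
                # broken symlinks or permission errors.
                sizes.append((os.path.getsize(path), path))
            except OSError:
                continue
    return sorted(sizes, reverse=True)[:count]

if __name__ == "__main__":
    # Generated drafts also tend to hard-code the directory rather than
    # taking it from the command line.
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for size, path in largest_files(root):
        print(f"{size:>12}  {path}")
```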

There are still a number of problems to be solved, in particular the training of the bot (a large amount of freshly updated data has to be re-injected before each version is released), the human phase of training (which cannot be done away with given the technology used, and which will limit certain aspects of the industrialization phase), the question of copyright (namely that of the databases used to build the training corpus), the question of who owns the copyright on what is produced (the topic is surprisingly quiet for the moment, which suggests that Micromou believes it is on the right side of the stick), the question of the reliability of the sources, etc.

OpenAI's valuation is around $30 billion, which is more than Carrefour, whose net profit for 2024 is estimated at €1 billion. That shows how high the IT ecosystem's hopes for the technology are, so much so that a series of top-level meetings have taken place at Google (leading to warnings from top management and plans to launch Sparrow, ChatGPT's competitor, in 2023) so as not to miss the boat on something that could disrupt its core business. Economic observers point out, not without a hint of humor, that ChatGPT is a form of revenge for Microsoft (which has skin in the game here) on Google, which had outpaced it with the cloud, Android and the rest.

It is difficult at this stage to anticipate the changes this type of software will bring. We can nonetheless point to a number of trends that seem inevitable, including:

– a fundamental paradigm shift in the production of “junkware”[3]; this is obviously the most feared topic today (security researchers have created “highly evasive” polymorphic malware using ChatGPT, according to an article published online a few days ago); the concern is widely shared[4];

– interestingly, and it is a topic that hardly ever comes up in discussions or in the press, good luck to content publishers trying to sort through the mass of fake content, fake accounts, trolls and so on; the next presidential election in a “free” country is going to be rock and roll;

– the mining of queries: who makes them? Why? From which IP address? And so on. It will be funny when the CEOs of Google and Facebook realize that, on top of polluting their data, this thing is stealing their customers (see previous point);

– the ability to simulate employees actually working: examples of automatic replies to professional e-mails can already be found online; well, okay, it is hollow, but when you see that some people manage to churn out things like “All-in-one recalculation event”, “BT14-01 standard risk dashboard” or “Senior technology team leader”, the hierarchy will be slow to notice that a third of the workforce has plugged its email into GPT-3 before heading off to play foosball;

– the end of marketing departments, whose work can be handled by two interns and a coffee machine at the end of the corridor;

– the GDPR impact; health-sector DPOs (see the very thorough article[5] by the association of health DPOs on Ticsanté) would do well to take a close look at the technology.

Yes, and at the same time developers and marketers can rest assured: of course ChatGPT will churn out bits of code very quickly (they will still have to be tested and fixed) and things will go four times faster than before, but it will be like server virtualization: compared with a physical server, a VM is four times faster to provision, so we ended up asking admins for four times as much… just as we had been sure we would need four times fewer computer scientists once we only had to buy more software. There are always cretins convinced that with this technology we will no longer need junior CIOs!

Isn’t life beautiful?

ChatGPT’s Answer: Life can be beautiful, but it can also come with difficulties and challenges. It depends on each person’s perspective and experience.

Boo-hoo, tears!


[1] https://www.youtube.com/watch?v=R2fjRbc9Sa0&list=WL&index=11&t=1855s

[2] https://www.bfmtv.com/tech/chat-gpt-what-looks-like-la-prochaine-version_AV-202301180347.htmlc

[3] https://www.infosecurity-magazine.com/news/chatgpt-creates-polymorphic-malware/

[4] https://www.01net.com/actualites/chatgpt-mauvaise-nouvelle-les-cybercriminals-ont-aussi-commence-a-louutilisation.html

[5] https://www.ticsante.com/story?ID=6542


Author

Information Systems Security and Data Protection Officer at Nantes University Hospital, Cédric Cartau is also a lecturer at the School of Advanced Studies in Public Health (EHESP). We also owe him several specialist works published by Presses de l'EHESP, including one on the security of health institutions' information systems.

#Software #security #data

