Between a rock and a hard place: Elon Musk's open letter and the Italian ban of ChatGPT





Interview by Adele Sarno for HuffPost, the Italian original is here

The following English translation is provided by Google, apologies for any imprecision.





Luciano Floridi, the philosopher of the digital, works between Oxford and Bologna; from next summer he will leave Oxford to direct the Center for Digital Ethics at Yale. He has received the highest honour granted by the Italian Republic, Cavaliere di Gran Croce. According to Elsevier's Scopus database, he is the most cited living philosopher in the world. If today we talk about the "philosophy of information", it is thanks to him, who for thirty years has studied the connections between philosophy and the digital world.




Professor Floridi, ChatGPT has been at the centre of the debate, especially these days. Elon Musk and a thousand other experts have written a letter asking for its development to be paused for six months. In Italy, meanwhile, the Data Protection Authority has decided to block it, giving OpenAI 20 days to comply with privacy regulations. What's going on?

"As soon as ChatGPT came out, the controversy started, but I suggested not blocking such tools and instead teaching their proper use in school. They are handy tools: flexible, powerful, and easy to use. It makes no sense to demonise them. But when something new arrives, the first instinct is to stop it and wait to see what happens. You see the problem, but you don't offer a solution."

What can happen now that the Italian Data Protection Authority has stopped the use of ChatGPT in Italy?

"It is a draconian reaction which seems to me potentially excessive, because the solution should be a compromise, not a blockade. The Italian Data Protection Authority is right when it says that the service is aimed at those over 13 but has no real verification filters. It collects all your data when you interact with it, without informing you clearly before you use it. And advertising will probably come too. Not to mention the training data and the data leaks. So there is a privacy risk, and data management is certainly not up to European standards. But introducing more serious online registration and a more regulated use of data should be the way forward. In short, going from 'free for all, do anything' to 'blocked' seems excessive and risky. I believe that looking for other possible solutions is a must. I hope that work is being done in this direction".

Speaking to HuffPost, the Italian Data Protection Authority says it has nothing against progress, but that innovation cannot come at the expense of people's rights. In particular, it claims that ChatGPT is trained on billions of data points from billions of people, and so must be blocked.

"There are at least two main problems with doing this. The first is that ChatGPT will continue to be used, because a VPN is all you need, so an underground usage will emerge, along with the usual rift between those who know how to do these things and those who have no idea. The consequences reach even deeper: think of schools, where it will be impossible, at least legally, to teach the good use of this tool; of the world of research - I use GPT4 daily; or of the world of work, where, for example, it is commonly used to write lines of code. Furthermore, a context is created which replaces dialogue with confrontation. Then there is the uncertainty regarding all those contexts in which systems such as ChatGPT and other so-called Large Language Models are already integrated into search engines, as in the case of Microsoft".

The Italian Data Protection Authority also says that the block depends on the fact that the information provided by this technology is inaccurate.

"On inaccuracy, two aspects can be distinguished. On the one hand, the system errs on the side of caution: you have to work hard to make it say something bad or wrong. It's a do-gooder; it won't even give you the recipe for horsemeat, and it lectures you if you don't ask the right question about it. On the other hand, it is a statistical system, and sometimes the answers are completely unreliable or made up. I recently had to summarise my work for the usual, bureaucratic, mindless reasons, and GPT4 described me as a Belgian philosopher. At the same time, its summaries of my books were very good".

So the block doesn't make much sense?

"I don't know all the reasons that led to the decision. The effect is that Italy is now out of the development of this technology, because I expect that, for consistency, the block should be applied to all the similar applications produced by Google, Meta, Microsoft, and so on. But how can we curb a phenomenon that is already underway? Do we stop search engines along with ChatGPT? In a context of commercial competition, where the interests amount to tens of billions of dollars, it is difficult to stop everything voluntarily, for fear of science-fiction scenarios, as in the case of the letter, or through a total blockade of the technology, as in Italy. Not only will no one stop, but, in the case of the letter, it is a hypocritical operation. If one then requires that the instrument, to be usable, must be infallible like a calculator, we know that it never will be, because it is neither deterministic nor controllable like a calculator. It is inherently fallible because it is based on statistical analysis of billions of data points. If this is the reasoning, then you are asking the impossible; in fact, you are saying that this tool can never be used. It reminds me of what happened with synthetic meat in Italy [note: its production has been banned]. Instead, we should try to understand how best to regulate the use of these technologies - mind you, not the technologies themselves, but how they are used, for what purposes, and in which contexts. In part, European legislation is moving in this direction, but only in part".

Yet Musk and a thousand other experts wrote a letter about GPT4, saying we are creating super-powerful digital minds.

"The letter is a bad soup of things: some good and a little trivial, some wrong and science fiction. That passage about super-powerful digital minds sounds like something from a bad Hollywood script. It disinforms and scares, distracting from the real issues. A bit like raising the alarm for the possible arrival of zombies".

Yet it was signed by over a thousand experts in the AI sector.

"It's as if we had a rope with many strands. One is disinformation. Another is mass-media fame and prominence as an influencer. Then there are the naive; those who want to feel part of a community; those who think it's better than nothing; those who believe that zombies are coming; those with good intentions who click on "sign here" without thinking twice, following the flock; those who want to shift attention onto the technology and away from those who produce or use it; and those who wish to promote self-regulation and postpone the arrival of legislation. I certainly won't sign it. From all these strands a single rope is created that drags along the same effects: alarmism about the wrong things, scientific disinformation, protagonism, and public distraction. Two small examples: there is no reference to the significant legislative developments on AI, not only in Europe but also in America, nor to the environmental impact of these technologies. And it is omitted that we have been recommending self-regulation for a decade, without any effect, remaining unheeded precisely by the producers of the AI in question and by some promoters of the letter, such as Elon Musk. In the rope, therefore, there is also a strand called hypocrisy".

You have talked about this several times on HuffPost: the infosphere is still a new place in the common imagination, and it is based on the circulation of information; whoever controls the information holds the keys to everything. Isn't there a risk that, in the absence of clear rules, the same mistake the Clinton administration made 30 years ago will be repeated: leaving such fundamental decisions to the Big Tech companies of Silicon Valley, and somehow delegating everything to self-regulation?

"I don't know if this is the aim of the letter, but that is the risk we run. As I was saying, there are many strands in the rope, and the reasons that hold them together differ, but together these strands deliver something. I fear that this something is delaying legislation, a further attempt at self-regulation, and a mass distraction from the real problems: manipulation, disinformation, the extraordinary power of control that the producers of these tools hold over those who use them, and the misuse by those who deploy these tools for immoral or illegal purposes - just think of organised crime or regime propaganda. All of this without mentioning the environmental impact, which is very significant. It is sensationalist to shift attention and blame for everything onto generative and non-generative artificial intelligence, when the real problems are upstream, with those who produce it, and downstream, with those who use it badly. I fear that there is bad faith in those leading this operation, and a lot of naivety in those who have joined the queue".

Let's talk about legislation, a fundamental element that unites the Authority's provision and the letter. Where are we?

"ChatGPT is a great tool, and one needs to know how to use it well. But above all, we need to understand how to manage it from a regulatory point of view. European legislation is on the way, with the AI Act. It can be criticised and improved, but it is a good step in the right direction. However, it has a fundamental problem: it focuses on artificial intelligence as if it were a product whose safety must be guaranteed - imagine a microwave oven - and not on its uses and applications, which can be benevolent, or malevolent and highly risky. But AI is not an artefact or a product; it is a service, i.e., a form of agency, an ability to carry out tasks or solve problems. For this reason, now is the time to understand what regulatory framework must be designed for this technological innovation. There is still time to regulate its uses, straighten the course, and work on the "how" and not the "what"".

So, without a clear regulatory framework, did the Italian Data Protection Authority appeal to the GDPR?

"The Italian Data Protection Authority has asked OpenAI to comply with the GDPR. But if we are asking - and I emphasise the if - that these systems must, for example, obtain permission to use all the public web pages written in Italian in order to learn how to interact in Italian, and that they must then always be correct when providing the information requested by an Italian user, then we are asking for the impossible. It seems to me that the regulatory framework is not adequate, because the only thing that can be done with it is to ban the tool. A bit like applying the legislation on carriages to cars. But history teaches us that prohibition is useless: it's like blocking the sea with a colander. This is why I hope that the initial "if" is just my interpretative error".

Much ado about something that someone has compared to a parrot.

"No, no, a parrot is much smarter. GPT4 is much more like a "linguistic calculator", shall we say: a syntactic calculator which treats natural language as if it were mathematics. It doesn't memorise and repeat a solution; it creates one. The question is not what answers it can give, but what you do with those answers. Plato already said it: the expert is the one who, above all, knows how to ask the right questions. It is better to understand how to use it and teach it than to ban it. It seems to me that the arrival of these tools actually purifies the true essence of human intelligence, because it detaches it from any form of encyclopedism - the ability to remember a thousand facts - and from mere erudition. It is the question, and the purpose of the question, that makes the difference".

Is there a risk, especially in the absence of legislation, that our social networks are literally overwhelmed by fake news and deepfakes?

"It is a serious danger, which is also briefly indicated in the letter - one of the few good things about that text: we run the risk of deep and serious pollution, especially on social media. This can be summed up in two words: disinformation and manipulation. For those who want to perpetrate both, such powerful tools, which can create and manage any content - language, image, sound, or video - are ideal. They allow for industrial-scale production. This problem is serious. The solution is more and better legislation, as soon as possible. No sci-fi worries or draconian blocks".


PS "Notes to myself" is available as a book on Amazon: ow.ly/sGyh50KfRra



