ChatGPT: a risk or an asset for European values?

ChatGPT is everywhere, and many articles discuss whether it is a risk or an asset. Italy temporarily banned it, prompting a critical review by several authorities. ChatGPT is a natural language processing tool driven by AI technology that allows you to have human-like conversations, and it has fundamentally changed the way we look for information and the way we create it.

The advent of artificial intelligence (AI) presents a series of risks that need to be addressed and tackled. Among them, AI could play a key role in manipulating information, influencing politics, and overturning the value systems that form the foundations of a society or a country. In this regard, a distorted use of artificial intelligence could lead to an increase in anti-Europeanism and strengthen populist movements that aim to destroy the project of the European Union. Imagine, for example, an application that instantly generates an endless stream of articles spreading false information, or software that manipulates photos and videos. All of these things already exist, and they pose an immense challenge to traditional, human-based media, which struggle to keep pace with them. The point is: should they chase artificial intelligence at all?

Here are some of the major risks associated with the use of artificial intelligence, along with some advice on how to mitigate them. Let’s have a look:

  1. Privacy 

One crucial aspect to consider is privacy and data protection. The massive collection of sensitive information could threaten individual rights guaranteed by the European Union, as established by the General Data Protection Regulation (GDPR). It is essential to ensure that AI is used in compliance with privacy regulations and that adequate measures are taken to protect personal data. In this regard, a recent case in Italy is worth mentioning. The Italian Data Protection Authority (Garante per la Protezione dei Dati Personali) imposed an immediate temporary limitation on the processing of Italian users’ data by OpenAI, the U.S. company that developed and manages the ChatGPT platform. On March 20, the artificial intelligence software experienced a data breach involving users’ conversations and information related to the payment of the subscription service. The Garante suspended the service in Italy after detecting the lack of information provided to users and all individuals whose data is collected by OpenAI. Furthermore, the Garante highlighted the absence of a legal basis justifying the massive collection and storage of personal data for the purpose of “training” the algorithms underlying the platform’s functioning.

Furthermore, according to the conducted investigations, the information provided by ChatGPT did not always correspond to real data, resulting in inaccurate processing of personal data. Additionally, despite the service being intended for users over the age of 13, the Authority highlighted the absence of any age verification filter, exposing minors to responses that are completely unsuitable for their level of development and self-awareness.

On April 28, after OpenAI partially complied with the privacy authority’s requests, ChatGPT’s service was reactivated in Italy. Who won this clash? It is worth having a look at the case and reflecting on how AI is discussed in our countries.

  2. Discrimination and Bias

AI can be influenced by biases and discrimination if it is not developed and trained properly. Artificial intelligence systems can learn from historical data that reflects existing inequalities and biases in society, which can lead to discriminatory decisions or the exclusion of certain groups of people. For example, if an AI-based recruitment system is trained on historical data that shows a tendency to select candidates of a particular gender or ethnicity, the system may perpetuate these inequalities in future selection processes, as the sketch below illustrates. This type of algorithmic discrimination can have serious consequences, such as inequitable access to job opportunities, financial credit, or public services.
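
To make that mechanism concrete, here is a minimal, hypothetical Python sketch: the data, group labels, and hire rates are invented for illustration and do not come from any real recruitment system. It shows how a model that simply learns from biased historical hiring decisions reproduces the same disparity when scoring new, equally qualified candidates.

```python
import random

random.seed(42)


def make_historical_data(n=10_000):
    """Synthetic past hiring records: equally qualified candidates from
    group B were historically hired less often than those from group A."""
    records = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        qualified = random.random() < 0.5               # same skill distribution
        if qualified:
            hire_rate = 0.80 if group == "A" else 0.40  # the historical bias
        else:
            hire_rate = 0.10
        records.append((group, qualified, random.random() < hire_rate))
    return records


def train_naive_model(records):
    """'Train' by memorising the historical hire rate for each
    (group, qualified) combination -- a stand-in for any model
    that is fitted to past decisions."""
    counts, hires = {}, {}
    for group, qualified, hired in records:
        key = (group, qualified)
        counts[key] = counts.get(key, 0) + 1
        hires[key] = hires.get(key, 0) + int(hired)
    return {key: hires[key] / counts[key] for key in counts}


def recommend(model, group, qualified):
    """Recommend a candidate if the learned hire rate exceeds 50%."""
    return model[(group, qualified)] >= 0.5


model = train_naive_model(make_historical_data())

# Two equally qualified candidates receive different recommendations,
# purely because the training data encoded a historical disparity.
print("Qualified candidate, group A:", recommend(model, "A", True))  # True
print("Qualified candidate, group B:", recommend(model, "B", True))  # False
```

A real system would be far more sophisticated, but the outcome is the same: without corrective measures, past discrimination is quietly turned into future policy.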

The European Union should promote regulations that require transparency and accountability in the implementation of artificial intelligence to avoid such situations. It is crucial to address this issue through the ethical design and training of AI systems. The responsibility of organizations developing and implementing AI, together with appropriate regulation by the competent authorities, is essential to mitigate the risk of discrimination and bias in the use of artificial intelligence. It is worth looking through the European AI policy, a fundamental step towards channelling the immense power of AI in Europe and setting a world example.

  3. Unemployment and Social Impact

Automation fuelled by artificial intelligence could lead to significant transformations in the job market, potentially resulting in the loss of traditional jobs. The introduction of advanced robotics and AI systems could reduce the need for workers in sectors such as manufacturing, transportation, logistics, and many others, while at the same time placing new demands on SMEs, research centres, and vocational education and training (VET) institutes. The impact of this transformation will depend on various factors, including the rate of AI adoption, the speed of automation, and people’s ability to adapt and acquire new skills. It could create economic and social inequalities if adequate policies are not implemented to manage the transition.

Automation fuelled by artificial intelligence has the potential to transform entire sectors and work processes by replacing human labour with intelligent machines. To mitigate the negative impact on employment, the European Union should focus on education, professional retraining, and the creation of job opportunities in the AI sector.

Remarks

It is necessary to implement measures that can prevent the emergence of high unemployment. European policy must therefore act by providing training programmes for workers and informing citizens of the new job opportunities that are opening up, while guaranteeing social protection programmes to those who cannot immediately find a new job. In addition, the European Union will have to intervene to stem the negative effects of public manipulation (for further insight, we also recommend the interventions of David Tozzo and Lorenzo Fattori at the International Conference “Media in the age of populism and polarisation”), investing in a programme of good practice that allows people to receive reliable information.

This series of actions will create a citizenry that is aware of the new digital job opportunities and able to exploit them, thanks to the skills it will acquire. Furthermore, active labour policies, social protection systems, and the knowledge needed to counter misinformation will avert the risk of an economic, political, and social crisis that could encourage the dramatic growth of political movements aiming to destroy European values.

In conclusion, AI has to be used safely in the European Union, and it has to be developed according to principles of transparency and in accordance with democratic values. This means that users of programs such as ChatGPT must be informed in a transparent manner about the correct use of the algorithm, the procedures for protecting personal data, and the truthfulness of the information that is provided.

By fostering collaboration between governments, educational institutions, businesses, and civil society, and by implementing targeted policies and regulations, the European Union can navigate the challenges posed by AI while maximising its benefits. It is crucial to ensure that the development and use of AI technologies align with ethical principles, societal values, and the well-being of individuals and communities. Through responsible and inclusive AI practices, we can harness the potential of artificial intelligence while mitigating its negative impacts and fostering a fair and equitable future.


Sources

Header picture generated with DALL·E, using the prompt “Make a pop illustration of a portrait of Artificial Intelligence as it was an android with the colours and stars of the Euro”

ChatGPT banned in Italy over privacy concerns, BBC, https://www.bbc.com/news/technology-65139406 

Artificial intelligence: stop to ChatGPT by the Italian SA. Personal data is collected unlawfully, no age verification system is in place for children, GarantePrivacy, https://www.garanteprivacy.it/home/docweb/-/docweb-display/docweb/9870847 

A European approach to artificial intelligence, European Commission, https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence 

Media in the age of populism and polarization, We-Europeans project, https://www.youtube.com/watch?v=BLGZid0WB-I&t=535s 

General Data Protection Regulation (GDPR), Intersoft consulting, https://gdpr-info.eu/ 
