COMMUNICATING ARTIFICIAL INTELLIGENCE THROUGH SOCIOLOGY – INTERVIEW WITH VERONICA MORETTI.


Would you like to introduce yourself...

I am a researcher in sociology at the Department of Sociology and Economic Law at the University of Bologna, and my training is in the sociology of health. In my research I focus on two important aspects: the relationships between states of illness, health and care, and the possibilities offered by the techniques defined as creative (diaries and comics) in the training of health professionals and in the socialization of issues that are complex because they are loaded with taboos and/or stereotypes.

What challenges does sociology face with AI?

AI poses challenges in every area of knowledge, so sociology too is called upon to examine how existing phenomena change and how new ones emerge.
Some challenges are more immediate than others, and some have longer-term effects: the socio-cultural impact, public understanding and awareness, and the value of the data that AI stores.

Socio-cultural impact.
AI’s constant, pressing and still-unfolding progress compels regulation to keep pace, although the relationship between the two looks more like an exhausting and nearly impossible chase.
Regulating AI is necessary because through laws we can impose limits on possible risks and threats. Laws applied to AI are indispensable because society is now also interwoven with digitized relationships. Very few cultural realities have remained excluded from this innovation; almost all societies today share a networked reality and have a digital identity.

Public understanding and awareness.
I am collaborating with other researchers on an international project on the application of AI in Alzheimer’s care, and finding a shared and unambiguous definition of AI has been very difficult. Explaining it to a broad audience is even more difficult, partly because we do not yet know how AI selects information online.
In the area of health and care there is much concern about the depersonalization of the doctor-patient relationship. The fear lies in not being able, or not knowing how, to determine in advance the fate of the relationship that today exists between doctor, patient and caregiver.
The use of AI in the medical field, moreover, may widen digital inequalities, because not all areas can benefit from it in the care of their patients.
Resources should be made available to every area for the infrastructure needed to implement this technological change.
But the approach to these new technologies in healthcare is not always straightforward: even when they have the right information, physicians in some cases do not use them because they are perhaps considered less safe.

We must also emphasize another issue, which I find very fascinating: data surveillance, from which surveillance capitalism originated, understood as a system that treats data as real commodities. Companies, for example, have collected and bought data in order to improve their products and cater to the needs and wants of potential buyers.
The main risk, however, to which some communities and individuals are still exposed is the theft of online data or its deceptive and manipulative use by third parties.

How can sociology contribute to understanding AI and its development?

A deterministic view of AI, whether catastrophist (techno-skeptic) or overly enthusiastic, should be avoided.
It would be helpful to adopt an integrated viewpoint, that is, to recognize AI as a constant presence in our lives. It is crucial to understand that AI is not an abstract entity but has an impact on our daily lives, and that we have built a relationship with it, blending its value with everyday learning. In the practice of care, for example, there are physicians and health care professionals who build their specialization on basic knowledge of information technology.
But it is a change that takes time. The ways in which AI makes decisions in the medical field are little known and largely unregulated. In Bologna, for example, we are working on a project on algorithms and social discrimination. If not analyzed and corrected, these algorithms reproduce the inequalities present in a given cultural context. In the United States, a variety of studies on this same issue have shown that medical AI algorithms tend to disadvantage women, and especially women of color. Sociology can help address these problems by adopting an interdisciplinary approach, which allows it to take multiple variables into account when looking at the same phenomenon.

How do you think we can address what Pope Francis said at the G7 about AI? Is it true that it could limit our worldview to realities expressible in numbers and enclosed in pre-packaged categories?

By avoiding deterministic positions, whether hypercritical or hyper-enthusiastic, and by looking at the relationship that is created with human beings. We will have to focus on how we understand these technologies and how we relate to them. There is, in fact, a relationship between the two that is not just an exploitation of technology by humans.

And also to prevent humans from humanizing it?

In the world of work, for example, AI will replace humans in some tasks, although we do not know in what way. Its introduction can create, and this is already visible, new roles and new possibilities for human beings. In the medical field in particular, thanks to access to the Internet and various AI assistants, the patient has become an “expert”, that is, a more aware person. However, I do not think the figure of the physician could ever be completely replaced by these new technologies, even though a very small margin of risk always remains with this kind of prediction.
