There is no denying it: artificial intelligence is both fascinating and frightening. Neural networks capable of emulating aspects of human thinking are revolutionizing entire sectors of technology, whether or not you enjoy talking to assistants like Alexa and Siri.
In cybersecurity, for example, solutions have emerged that can learn criminal behavioral patterns and respond more quickly to attacks or attempts to break into corporate networks. Something similar is happening in science, where machine learning has proved invaluable for streamlining analyses and conducting automated studies.
At the same time, discussions about how far artificial intelligence can go without posing a threat to the human race are recurrent. Entrepreneur Elon Musk, CEO of Tesla and SpaceX, repeatedly warns of a future in which robots smarter than us could decide to get rid of their creators and found a new society composed solely of machines.
Of course, Musk may be exaggerating a little. But even if we are still far from a post-apocalyptic scenario like the one in The Matrix, artificial intelligence can already be used for malicious purposes, in ways that disrespect basic rights and human dignity. That is precisely why the United Nations Educational, Scientific and Cultural Organization (UNESCO) is developing a set of recommendations that could become the first global legal framework on artificial intelligence.
Imminent danger
Titled simply “Ethics in Artificial Intelligence”, the 19-page document discusses standards and good practices for creating and maintaining artificial intelligence systems while respecting human rights and avoiding any kind of discrimination, prejudice, or social inequality. The text can be accessed on the UNESCO website, and this first draft is currently under public consultation among all of the entity's member states.
“Even today, when this technology is still in its infancy, we see a series of human rights violations. If we look at fake news, for example, it is a clear violation of human rights: of our freedom of expression, our respect, and our dignity. Fake news directly influences the kind of information you receive, which in turn impacts your decisions,” explains Dr. Edson Prestes, the only Brazilian involved in the project, in an interview with FreeGameGuide.
Edson, a professor at the Federal University of Rio Grande do Sul and a senior member of the Institute of Electrical and Electronics Engineers (IEEE), also participates in the UN High-Level Panel on Digital Cooperation. “We know, for example, that artificial intelligence mechanisms were used in different ways to manipulate elections, as happened in England,” he notes.
The expert also points out that these tools can be used to track social minorities and even to shape citizens' behavior. “Imagine an artificial intelligence system trained on your profile, with your characteristics, that suggests certain actions or content. If you are going through a very difficult phase, you end up entering a vicious cycle, where the system keeps recommending negative things,” he says.
Challenges in open source
One challenge for any eventual regulation of artificial intelligence is that open source frameworks and algorithms are very common in this segment, made freely available so that anyone can build their own applications on top of a ready-made “skeleton”. This is the case, for example, with TensorFlow, PyTorch, and tools released by OpenAI. How can we ensure that these resources are not used for improper purposes?
“I don't see any problem with open source technologies, because the advantages far outweigh the disadvantages. It is necessary to have literature and tools that allow people to get started. The point is that, with this kind of system, it is difficult to know what its future uses will be. You can try to impose some form of control, knowing who accesses this type of system, who is downloading it, and so on,” argues Edson.
For the professor, governance is key to ensuring conscious use of artificial intelligence, so it is crucial to adopt mechanisms that make it possible to track who is using such systems and to apply penalties. “I believe this is possible, given the ease with which we can discover, for example, the origin of cyber attacks through the IPs of the processes that were executed,” he adds.
And the future?
It is important to note that the UNESCO text is not currently a law, but a set of recommendations that may or may not be turned into local legislation by each of the entity's member states. Edson points out that the initiative is only the first step toward a broader ecosystem, aiming at international collaboration to ensure that this type of technology receives the same treatment around the globe.
“This action is part of a larger context. What we are doing is creating a kind of framework that will define recommendations that can be turned into local legislation. The UN, however, is thinking at a higher level, so that states can cooperate with each other to deal with the problems arising from the misuse of technology,” he concludes.