Artificial intelligence (AI) has made a few spectacular-sounding headlines this year, with various organizations showing off their own digital creations' cognitive capabilities. These tend to relate to logical intelligence, dealing with mathematics, rationality, and decision-making.
However, just this month, a team of researchers from the National Research Nuclear University Moscow Engineering Physics Institute (NRNU MEPhI) announced that it is developing an AI with both narrative and emotional intelligence. If this so-called Virtual Actor (VA) can understand human emotions, it will buck the trend in the types of AIs emerging from other research teams across the world.
Google's DeepDream, for example, is a truly surreal convolutional neural network that generates hallucinogenic images. AlphaGo is able to trounce human masters in the mathematically complex game of Go, and the unrelated ALPHA AI can best very experienced fighter pilots in aerial combat simulations. As impressive as these are, human emotions (a notoriously abstract subject at times) do not enter into their decision-making processes.
Although the VA is unlikely to be ready for at least another 18 months, its developers have given some insights into how it may operate. The VA will be able to understand the context of what is happening around it, as well as predict the outcomes of unfolding scenarios involving human participants.
One of its roles is as an actor in the most literal sense of the word: taking on the roles of various people in group discussions. Instead of being programmed to understand people, it would set itself questions and seek out the answers for itself.
The first form the VA would take would be one embodied in a simplified videogame. Every action in this game has an emotional connection, with the VA's objective being to build rapport with its human participants. The team notes that if a human player cannot tell the difference between the VA and another human (essentially the Turing test), then their AI will have succeeded in its objectives.
"Biological solutions are in many respects superior to artificial solutions in terms of their adaptability, learning ability, resistance to unexpected interventions, and so on, and we would like to model these principles on the computer," Alexei Samsonovich, VA project member and professor of the cybernetics department at MEPhI, told Sputnik News.
[Image: Google's DeepDream looks for patterns in images and enhances them after "thinking" about what the patterns look like, much to the consternation of Albert, IFLS's office dog. Image credit: Google]
This is easier said than done, of course. Neuroscientists still have a long way to go to understand the complexities of human emotions, so developing an AI that can understand them will be a gargantuan task.
The team notes that there are several approaches to this. The bottom-up approach involves reproducing the human brain, or parts of it, from scratch, which means that a series of artificial neural networks is required. The team wishes to take another approach, which they describe as "bottom-up and top-down at the same time," combining neural network construction with higher-level cognitive modeling and psychology.
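To make the hybrid idea concrete, here is a minimal toy sketch of what "bottom-up plus top-down" could look like in code: a learned (bottom-up) component produces a raw emotional estimate, and a hand-authored (top-down) layer of cognitive rules adjusts it using context. Every function name, keyword list, and rule below is a hypothetical illustration of the general pattern, not MEPhI's actual design; the "bottom-up" scorer is a crude keyword stand-in for what would really be a trained neural network.

```python
def bottom_up_score(utterance: str) -> float:
    """Bottom-up component: a stand-in for a trained neural network.
    Returns a crude valence estimate in [-1.0, 1.0]."""
    positive = {"thanks", "great", "agree"}   # illustrative keyword sets
    negative = {"no", "wrong", "stop"}
    words = [w.strip(".,!?") for w in utterance.lower().split()]
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    return max(-1.0, min(1.0, score * 5 / max(len(words), 1)))

def top_down_adjust(score: float, context: dict) -> float:
    """Top-down component: symbolic appraisal rules that let higher-level
    context override or bias the raw bottom-up signal."""
    if context.get("sarcasm"):
        score = -score                      # sarcasm flips the apparent valence
    if context.get("goal") == "build_rapport":
        score = min(1.0, score + 0.2)       # bias toward a friendly reading
    return score

def perceived_valence(utterance: str, context: dict) -> float:
    """Combine both layers: learn from the signal, reason about the context."""
    return top_down_adjust(bottom_up_score(utterance), context)

print(perceived_valence("Great, thanks!", {"goal": "build_rapport"}))
print(perceived_valence("No, stop.", {"sarcasm": True}))
```

The point of the sketch is the division of labor: the statistical layer can be retrained as data accumulates, while the rule layer encodes psychological knowledge (sarcasm, goals) that is hard to learn from raw signals alone.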
If this seems quite unclear, then it's probably because it is supposed to be. MEPhI has received a grant to study biologically substantiated cognitive system models, and the VA will be the outcome, but as noted by Samsonovich, there is a huge gap in our understanding of thought processes, particularly when it comes to replicating them.
Essentially then, the team isn't entirely sure what it will do to achieve its goal, but it is open to trying a number of approaches simultaneously. One key point, however, is that they feel current computing power is enough to replicate humanoid intelligence, and that biologically realistic artificial neural networks aren't necessary.
Only time will tell if they're right.
[Image: Artificial emotional intelligence in just 18 months? That's probably somewhat optimistic. Image credit: Christian Lagerek/Shutterstock]