One field of psychology that sometimes works with A.I. is cognitive psychology. Researchers in this field sometimes use the results from human behavioural studies to program A.I., both to test hypotheses about cognition and, I presume, to try to create better A.I. For those of you who are unfamiliar with cognitive psychology, it's an experimental branch that uses mostly behavioural data to infer things about our thoughts and strategies, and about how they are influenced by, for instance, emotions. Embodied cognition is a sub-field of cognitive psychology. Where the traditional view treats the mind much like software running on a computer, operating separately from the body (in a top-down manner), embodied cognition is all about bottom-up influences: how our bodies and perceptions interact with our cognition. This field of psychology is now, like the traditional view before it, influencing how A.I. and machines are being made.
Let me try to explain this in other words. Traditionally programmed A.I. may be able to solve complex mathematical problems, but there isn't a chat-bot on the internet that can actually make you think you're talking to another human being (if you know of one, please let me know). Robots with traditional A.I. are barely able to navigate through a room. So although they may be intelligent, they are so in a 'cold' way. Something about them immediately tells us we're dealing with machines.
Now let me invite you to watch this TED talk by Guy Hoffman.
He is inspired by the field of embodied cognition to create robots with A.I. that 'feel' organic, or 'warm'. I don't know if you agree with me, but I think they seem to express certain emotions that instantly feel familiar. And judging by the audience's reaction in the video, I wasn't the only one.
This made me wonder whether the Turing test is really valid in its current form. Maybe NOT letting the participants see each other actually makes the test harder to pass. Now don't get me wrong: when you see a robot, you obviously know it's not a human. But somehow the test as it stands reminds me of the traditional view of cognition, as if the mind were something completely separate from the rest of the body; as if we judged things with our minds alone and didn't also use visual information, for instance. To give an example: if someone attaches lights to all their major joints and runs around in the dark, you immediately know it's a human because of the way the lights move relative to each other.
And it might also work the other way around, of course. Maybe no A.I. can trick us into thinking it's human without at least having a body similar to ours. How else will we ever be able to relate to one another?