Artificial intelligence (AI) is here to stay. It is an undeniable reality, and perhaps one that will prove difficult to control in some cases.
Warnings are mounting. Israeli historian Yuval Noah Harari, for example, recently cautioned that “for the first time, we’ve invented something that takes power away from us.” In Harari’s view, that fact calls for regulations ensuring that the decisions made by AI are good ones.
But what is a good decision? Decision-making is directly influenced by the decision makers: by their environment, values, beliefs and the other components of their culture.
Culture Beyond Folklore
Culture is a universal orientation system typical of a society, organization or group. It goes beyond its manifestations (especially in the arts), reflecting deeply held beliefs, traditions, heritage and emotions.
Culture is not just a set of surface features, such as our mannerisms, our dress codes or our ways of speaking to each other. In fact, these surface social behaviors, as Thomas & Inkson put it, are often manifestations of deeply embedded, culturally based values and principles.
Cultural intelligence is not only knowing about other cultures, but being mindful about the differences
In a global business community, the successful interaction with people from different cultural backgrounds becomes a must. That’s where cultural intelligence comes in.
Thomas & Inkson define cultural intelligence as “the capability to deal effectively with people from different cultural backgrounds […] a multifaceted competency consisting of cultural knowledge, the practice of mindfulness and a repertoire of behavioral skills.”
Cultural intelligence is not only knowing about other cultures, but being mindful about the differences, which will lead us to behave in a proper way.
Cross-cultural management helps us understand the cultural factors that shape consumers’ behavior, trends and preferences. Here, culture goes beyond the folklore that represents people’s heritage and traditions; it encompasses the way they are.
While technology has brought the world together, it is people who remain in focus. Technology is but the means to communicate with and assist what truly lies at the heart of every product and service: human beings.
AI and Cultural Intelligence
In 1955, Stanford professor John McCarthy coined the term “artificial intelligence,” defining it as “the science and engineering of making intelligent machines.”
Though the technology itself is impressive in its ability to solve problems under certain circumstances, human beings are still at the core of the process. Scientists and engineers are the ones working behind the curtain: creating, modeling, programming. It is they who feed the machines in an attempt to emulate human intelligence, perhaps expecting to surpass it.
How might the cultural peculiarities of these scientists and engineers create cultural gaps in the machines they build? We’re all human, yes, but we’re not all the same, and our needs differ.
I have the impression that AI is not being as helpful when it comes to cultural gaps
Maslow’s hierarchy is still a reference for understanding human needs. From the bottom of the hierarchy upwards, the needs are: physiological (food and clothing), safety (job security), love and belonging (friendship), esteem and self-actualization. Needs lower down in the hierarchy must be satisfied before individuals can attend to higher ones.
Moving upwards in Maslow’s hierarchy, satisfying needs grows more complex. Perceptions of safety, love and belonging, for example, differ considerably between cultures. Is this taken into account when building AI? Sometimes I doubt it.
Multidisciplinary teams are working to develop awesome, life-changing technology. In many fields, things have become easier and even better. Nevertheless, I have the impression that AI is not being as helpful when it comes to cultural gaps.
What To Do?
AI needs to be fed by teams who are not only multidisciplinary, but cross-cultural and diverse too.
Facial recognition technology, which can be skewed by stereotyping, is one of the strongest cases for cross-cultural AI teams. Studies and reports have warned of racial bias creeping into facial recognition systems, which can lead to racial profiling. Outputs depend on the data fed into the machine, and that diet is determined by people.
AI should not immediately classify a person wearing a hijab as a potential terrorist. That can happen, nevertheless, if the data fed into the system is being curated by a team that’s culturally insensitive or which lacks the levels of mindfulness required to build a “culturally intelligent” bot.
Another relevant case for cultural intelligence among AI teams is the creation of “personalities” for each AI.
Let’s remember Microsoft’s AI assistant, Cortana. The bot required extensive training to develop just the right personality: confident, caring and helpful, but not bossy. Instilling those qualities took countless hours of attention and a multidisciplinary team which included a poet, a novelist and a playwright.
AI needs to be fed by teams who are not only multidisciplinary, but cross-cultural and diverse too
Apple’s Siri and Amazon’s Alexa also needed human trainers to mold their personalities in a way that reflected the brand of each company. Siri, for example, is known to be a bit sassy, a calculated move based on consumer expectations.
Though these companies have been successful in connecting AI “personalities” to their brands, can we say those bots are built to be cross-cultural? How will their interactions with human beings turn out under different cultural scenarios?
A crime can be regarded as such in different parts of the world, but the penalty may vary from place to place. We might not agree, but the fact is that decisions are made around the world based on values which differentiate the good from the bad, and sharp contrasts can be found between the value systems that inform those decisions.
Is AI being fed taking these nuances into account?
In a recent article, Colombian bioscientist Moises Wasserman commented on the calls to regulate AI. He addressed, among other things, a widely circulated letter signed by some of the biggest names in tech, underscoring the lack of a cross-cultural perspective in the document.
“The letter had the right components: a smidgen of apocalyptic threat, a declaration of virtue and little practical scope. For starters, I saw no signatures from Chinese, Iranian, Russian or Indian representatives. The writing style made the piece itself read like a product of ChatGPT, as if the signatories had decided to play a prank and ask the bot to write a letter warning of its own risks,” Wasserman wrote.
We might find ourselves at a point of no return in the development of AI. The whole of humanity is indeed watching as the technology develops. Nevertheless, scientific and tech leaders should not forget that the eyes and minds of each observer are shaped by a variety of cultural backgrounds and environmental peculiarities.