In 1945 Vannevar Bush, in his significant and pioneering article "As We May Think", introduced the concept of "expanding human experience", announcing the possibility of sharing scholarly knowledge through technology (the memex device).
Nowadays, Artificial Intelligence is a hot topic in many fields and contexts. One point of discussion concerns AI's consequences and reliability. But what is often left out is what, to this day, AI systems actually are: computational systems. What we do with our conscious thinking is very different from anything that can be achieved computationally.
To use AI in a way that is worthwhile for humans, we have to create a trust relationship, starting from system transparency.
WHAT’S INTELLIGENCE? (AND CONSCIOUSNESS?)
The AllWords dictionary (2006) defines intelligence as “The ability to use memory, knowledge, experience, understanding, reasoning, imagination and judgement in order to solve problems and adapt to new situations.”
Despite a long history of research, there is still no single standard definition of intelligence; there are many definitions, each including and focusing on specific capabilities. Even if we focus only on the psychological perspective, we can see diverse points of view.
The treatise on experimental psychology L'intelligenza (Pierre Oléron, Jean Piaget, Bärbel Inhelder and Pierre Gréco) embraces multiple aspects of intelligence, discussing cognitive schemas, subsumption, logic, intuition, behavioural aspects, creativity, abstraction, and so on. "Intelligence is assimilation to the extent that it incorporates all the given data of experience within its framework [. . .]" (J. Piaget)
Gardner proposed the theory of multiple intelligences, eight in all, suggesting that all people possess different kinds of intelligence. He also suggested the addition of a ninth, existential intelligence: the ability to conceptualize or be sensitive to questions of human existence.
Thus, intelligence is described in many ways and through many abilities. However, is there something beyond such abilities?
Gardner identifies existential intelligence as a specific capability, but when extended to every action and perception, as willingness or the capacity for understanding, it embraces the totality of intelligences. This is the human consciousness beyond intelligence.
"[…] I define it as your skill in achieving whatever it is you want to attain […]" (R. J. Sternberg)
According to Sternberg's definition of intelligence, and to Roger Penrose, intelligence requires an understanding, or consciousness, of both passive (e.g. perception) and active (e.g. decision) aspects. Examples of our conscious experience of the world are the perception of time passing, as well as the composition of a new song.
There are multiple schools of thought about consciousness in computer science. Some think it is a characteristic that will emerge as technology develops; others think that consciousness may arise from quantum theory. But there are physicists and philosophers who say there is something in human behaviour that cannot be computed.
"there are fundamental limitations to any computational system, whether top-down or bottom-up."
As Penrose underlines in Shadows of the Mind, this doesn't mean that machine consciousness is unobtainable, but that it is not obtainable within a computational model. What Penrose proposes is a new non-computational kind of action, one that doesn't imply something beyond science, but something that is not attainable with today's computation.
Considering machines' limits, humans have the will to determine how machines should act. Thus the human has to be the starting point in determining how to use machines' abilities.
This is possible only by creating awareness of machine limits and of the human role. A trust relationship is needed to create a scenario in which people make a real, worthwhile use of AI.
HUMAN-CENTRED DESIGN AND TRANSPARENCY
Today there is a lot of focus on data and algorithms, and less on the human role. Identifying bias in AI systems is essential to building AI systems that are reliable and useful to people.
Data can contain implicit racial or gender biases. Such biases make users' trust harder to gain. At the same time, it is important to recognise good knowledge and preserve biases that are helpful in specific domains. To eliminate or guide bias, users and domain experts should be included in system development.
Designing for trust means being transparent about what we know about the user and how the system works, enabling users to modify, within limits, their data if needed. In doing so, users' conscious feedback can improve machine learning.
Stakeholder participation in AI system development allows clarity about and control over data, opening the 'black box' to understand computed results and to manage input data, creating a transparent mechanism that stakeholders and developers are able to manage.
Showing what the system is and how it works is the first step towards people trusting, using and ultimately improving the system. 
By clarifying the domain and abilities of an AI system, we can empower people to think of AI systems as supporting, not replacing, their thinking. This is a mind shift for researchers and developers: changing the goal, starting from humans rather than from technological possibilities and performance.
As a new technology arises, trust is an important factor in determining its success. This is particularly true for Artificial Intelligence, considering the amount of unclear and amplified news.
For this reason, in the AI design process, designers have to analyse human-machine interactions to reverse the tendency, "expanding human experiences" by giving new ways of accessing knowledge through accessibility and transparency.
Trattato di psicologia sperimentale: L'intelligenza, Pierre Oléron, Jean Piaget, Bärbel Inhelder and Pierre Gréco.