
Will AI Manage to Gain the Trust of Users?

Image Source: https://assets.weforum.org/editor/large_ty0-_eTr2F55S3ndWNiLr3BJSaz6Da6cMybIHF-Cqpc.jpg

AI technology is all around us, and while it is helpful, many people are still wary of trusting AI, fearing that the technology may learn more about an individual than that person knows about themselves.

As a multidisciplinary endeavor, AI helps build machines capable of learning, making decisions, and acting intelligently. The decisions AI-powered machines make can serve as inputs to new learning processes. These machines comprise hardware and software components.

These machines are already deployed in several domains, such as financial services, marketing, retail, and healthcare. AI technology is highly favored because it can endow services and products with cognitive functions, learning from digital data and suggesting decisions. The success of AI is most evident in machine learning (ML), where computer systems become capable of automatically generating predictions and supporting decision making by learning from input data.
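
To make that idea concrete, here is a minimal sketch of supervised machine learning in Python: a model learns from labeled input data and then generates predictions on data it has not seen before. The dataset and model are illustrative choices, not tied to any particular product mentioned in this article.

```python
# A minimal supervised-learning sketch: fit a model on labeled data,
# then measure how well it predicts on held-out (unseen) examples.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Accuracy on unseen data: {model.score(X_test, y_test):.2f}")
```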

Reluctance of users

Many people are still reluctant to give their personal information to AI-powered systems, even though those systems need that input so companies can provide more personalized and accurate services. According to a new study, the reluctance may stem from how the system asks for the information.

Researchers at Penn State revealed that users respond differently depending on whether an AI asks the user for help or offers to help the user. The researchers added that an AI's initial introduction could be a way to increase users' trust in the technology and improve their awareness of why personal information is important.

The researchers presented their findings at the 2021 ACM CHI Conference on Human Factors in Computing Systems, which was originally scheduled to take place in Yokohama, Japan on May 8-13, 2021. Organizers made it a virtual conference due to the ongoing health crisis.

In their presentation, they suggested changing AI dialogue scripts so that AIs can introduce themselves not only as help-providers but also as help-seekers. This approach facilitates collaborative and cooperative communication.

As helpers

According to the researchers, traditional AI dialogues focus on the role of AI as a helper. Even power users are sometimes put off by this approach because they feel the AI is patronizing them. The way AIs phrase their requests can also come across as paternalistic: they tell users they will help them only if the users hand over their information.

The attitude of users changes when the AI system asks them for help. An AI that asks for help appears more social, and users perceive it as having a social presence or social intelligence.

Asking for help is a social behavior, and it makes the interaction more interpersonal. This perceived social presence fosters trust and increases users' willingness to provide the AI with additional information.

Image Source: https://scx2.b-cdn.net/gfx/news/hires/2019/8-ai.jpg

The trust issues with AI systems

While the findings of the recent study may change how people perceive AI systems, the trust issue runs deeper than how AI talks to humans. Since the introduction of AI, humans have been concerned about being replaced by it. AI's competence is already visible across industries, and AI-enabled applications have streamlined and transformed a wide range of business processes. In 2018, for example, Goldman Sachs replaced 99% of its traders with robots, and AI-enabled chatbots are now taking over many customer service jobs.

Another concern is the projection that AI will be smarter than humans. Some experts predict that by 2045, AI-powered robots will be more intelligent than we are.

Another issue is that advanced AI-enabled robots may develop behavioral and cognitive intelligence. If that happens, AI could have morals and feelings and an understanding of right and wrong, but only by AI's own definition, which may not align with human morals.

What to do

Developers and tech companies should consider many factors to build trustworthy AI systems.

Explainability

AI systems use machine learning algorithms, often trained on big data, when making crucial decisions. This often leaves developers and end users unable to understand why the system made a particular decision, and without an explanation, users may doubt the accuracy of the results. Developers should build AI systems that can explain how they process information and arrive at specific conclusions. Transparency helps people understand AI better.
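
As an illustration, here is a minimal sketch of one common explainability technique: reporting which input features most influenced a model's decisions. The dataset, model, and feature-importance approach are illustrative assumptions, not a prescription from the article or any standard.

```python
# A minimal explainability sketch: train a simple, interpretable model
# and report which features drove its decisions.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

data = load_breast_cancer()
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# Feature importances give users a rough answer to "why this decision?"
ranked = sorted(zip(data.feature_names, model.feature_importances_),
                key=lambda pair: -pair[1])
for name, weight in ranked[:5]:
    print(f"{name}: {weight:.2f}")
```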

Integrity

The integrity of machine learning is needed to make AI systems trustworthy. AI systems should have the integrity to generate output within predefined technical and operational parameters. Developers have to make sure the systems work as intended, and it is their responsibility to set specific limitations for AI systems that regulate how the systems are used.
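
One simple way to express such limitations in code is to validate every model output against predefined operational bounds before acting on it. The sketch below is hypothetical; the function name, bounds, and fallback behavior are assumptions for illustration.

```python
# A minimal guardrail sketch: accept a model's output only if it respects
# predefined operational limits; otherwise escalate instead of acting.
def validate_output(score: float, lower: float = 0.0, upper: float = 1.0) -> float:
    """Return the score only if it falls within the approved range."""
    if lower <= score <= upper:
        return score
    raise ValueError(f"Model output {score} is outside the approved range [{lower}, {upper}]")

# Usage: wrap every automated decision in the check.
try:
    decision = validate_output(1.42)
except ValueError:
    decision = None  # fall back to human review instead of acting automatically
print(decision)
```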

Conscious development

Developers should ensure that the decisions AI systems make benefit humans and are aligned with human values and principles. Developers must make it their objective to design AI systems that make people's lives better. Some AI systems, however, become too invasive in collecting confidential data; developers should account for this and build systems with strong security protocols.
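
A common safeguard along these lines is data minimization: collect only the fields a feature actually needs and pseudonymize direct identifiers before storage. The sketch below is hypothetical; the field names and salting scheme are illustrative assumptions, not a complete security design.

```python
# A minimal data-minimization sketch: drop fields the feature does not
# need and replace the raw identifier with a salted hash.
import hashlib

# Fields this feature genuinely needs; everything else is discarded.
REQUIRED_FIELDS = {"age_range", "preferred_language"}

def minimize(profile: dict) -> dict:
    """Keep only required fields and pseudonymize the direct identifier."""
    record = {k: v for k, v in profile.items() if k in REQUIRED_FIELDS}
    # A salted hash lets records be linked internally without storing the
    # identity itself. In a real system the salt would live in a secrets
    # manager, not in source code.
    record["user_key"] = hashlib.sha256(
        ("app-specific-salt" + profile["email"]).encode()
    ).hexdigest()
    return record

print(minimize({
    "email": "jane@example.com",
    "age_range": "25-34",
    "preferred_language": "en",
    "home_address": "123 Main St",  # received upstream, never stored here
}))
```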

The European Union has already developed ethics guidelines for building trustworthy AI. In the U.S., the Institute of Electrical and Electronics Engineers (IEEE) has standardization efforts underway for AI transparency. The EU, the UN, the OECD, and individual countries are all working to develop legal frameworks to regulate AI.

It might take some time for humans to fully trust AI systems, but with concerted effort, there may come a time when humans and AI happily coexist.