The modern leader must lead not only humans, as leaders have done for centuries, but also autonomous AI agents. In addition to these mixed teams of humans and AI agents, the modern leader must lead the organization through a transformation to fully utilize AI across all its operations. There is no simple answer to this challenge, but there is a structured approach, illustrated in this book, that will increase the chances of success.

Reference

Zarifis A. (2025) ‘Leadership with AI and trust: Adapting popular leadership styles for AI’, De Gruyter: Berlin. ISBN: 978-3-11-163004-5

Available from:

https://www.degruyterbrill.com/document/isbn/9783111630137/html

https://blackwells.co.uk/bookshop/product/Leadership-With-AI-and-Trust-by-Alex-Zarifis/9783111630045

https://www.amazon.com/Leadership-AI-Trust-Adapting-leadership/dp/3111630048

https://www.beck-shop.de/zarifis-leadership-with-ai-trust/product/38874873

https://www.deutscher-apotheker-verlag.de/Leadership-With-AI-and-Trust/9783111630045

https://shop.lexisnexis.at/leadership-with-ai-and-trust-9783111630045.html

While ride-hailing platforms such as Didi, Uber, and Lyft have been with us for some years, the innovation is still evolving, and customers' beliefs about it are still evolving too. Some are happy to use these services, while others have reservations.

Promoting passengers' trust in the platform and their customer citizenship behaviour (CCB) is both challenging and important. CCB refers to voluntary, discretionary behaviours that are not required for the successful production or delivery of the service, but that help the organization offering the service overall. In ride-hailing services, CCB is the voluntary behaviour of passengers that is not necessary for the ride-hailing process itself.

This study looks at three aspects of the relationship between passengers' trust in the platform and customer citizenship behaviour (CCB): (1) What signals sent by ride-hailing platforms impact passengers' trust in the platform? (2) What are the dimensions of customer citizenship behaviour in the context of ride-hailing? (3) How does passengers' trust in ride-hailing platforms influence their customer citizenship behaviour towards the platforms? The outcome of this research is the trust-customer citizenship behaviour (CCB) model in the ride-hailing context, shown in figure 1.

The findings reveal that platforms can foster passengers' trust by sending service-related signals (i.e., service quality and structure assurance) and a firm-related signal (i.e., platform reputation). Customer-company identification (CCI) mediates the relationship between passengers' trust and customer citizenship behaviour (CCB), where passengers engage in CCB by providing recommendations, exhibiting forgiving behaviour, and providing feedback. CCI is related to social identity theory, and refers to the positive, emotional attachment that passengers feel towards the values and concepts of a ride-hailing platform.

Additionally, firm-related signals, including platform size and reputation, enhance the positive relationship between trust and customer-company identification (CCI). These findings contribute to the body of knowledge on trust, customer citizenship behaviour (CCB), and signalling theory, and offer practical guidance to ride-hailing platforms.

Understanding how to build trust, and the specific benefits of a trusting relationship, encourages ride-hailing companies to invest more in building trust. It also shows customers of these services the power they have, and how important they are to the success of these companies.

Reference
Su L., Cheng X. & Zarifis A. (2025) ‘Passengers as defenders: Unveiling the role of customer-company identification in the trust-customer citizenship behaviour relationship within ride-hailing context’, Tourism Management, vol.107, 105086. https://doi.org/10.1016/j.tourman.2024.105086 (open access)

(chapter 15 in book)

Fintech companies face the challenge of trying to lead in AI adoption while navigating potential pitfalls. The board of directors plays a critical role in demonstrating leadership and building trust with key stakeholders during the implementation of AI.

This research interviewed board members from Fintech companies to identify the most effective strategies for fostering trust among shareholders, staff, and customers. These three groups have different concerns and face different risks from AI. The findings reveal that the most effective methods for building trust differ among these three groups of stakeholders. Leaders should build trust with these three stakeholder groups in two ways: first, through the effective and trustworthy implementation of AI, and second, by transparently communicating how AI is used in a manner that addresses stakeholders' concerns. The practical ways to build trust through implementation and communication for these three groups, shareholders, staff, and consumers, are presented in tables 1-3.

The findings show significant overlap across the three stakeholder groups in what constitutes effective overall implementation and governance of AI. However, several issues are identified that relate specifically to how AI innovations should be communicated to build trust. The findings also indicate that certain applications of Generative AI are more conducive to building trust in AI, even if they are more restrained and limited in scope, and some of Generative AI's performance may be sacrificed as a result. Thus, there are trade-offs between unleashing Generative AI in all its capacity and a more constrained, transparent, and predictable application that builds trust in customers, staff, and shareholders. This balancing act between fast adoption of Generative AI and a more cautious, controlled approach is at the heart of the challenge the board faces.

Leaders and corporate boards must build trust by providing a suitable strategy and an effective implementation, while maintaining a healthy level of scepticism based on an understanding of AI’s limitations. This balance will lead to more stable and sustainable trust.

Table 1. How leaders can build trust in AI with shareholders

Implementation:
1) Use AI in a way that does not increase financial or other risks.
2) Build in-house expertise; don't rely on one consultant or technology provider.
3) Create a new committee focused on the governance of AI and data. Accurately evaluate new risks (compliance etc.).
4) Develop an AI risk framework that the board will use to evaluate and communicate risks from AI implementations. Management should regularly update the framework.
5) Renew the board to bring in more technical knowledge and ensure sufficient competence in AI. Keep up with developments in technology. Ensure all board members understand how Generative AI and traditional AI work.
6) Make the right strategic decisions, and collaborations, for the necessary technology and data (e.g. through APIs etc.).

Communication:
1) Clear vision on AI use. Illustrate sound business judgement. Showcase the organization’s AI talent.
2) Clear boundaries on what AI does and does not do. Show willingness to enforce these.
3) Illustrate an ability to follow developments: Show similar cases of AI use from competitors, or companies in other areas.
4) If trust is concentrated in specific leaders whose influence will shrink with the increased use of AI, the trust lost must be rebuilt.
5) Be transparent about AI risks so shareholders can also evaluate them as accurately as possible.

Table 2. How leaders can build trust in AI with staff

Implementation:
1) Show long-term financial commitment to AI initiatives.
2) Encourage a mindset of experimentation, but with an awareness of risks such as privacy, data protection laws, and ethical behaviour.
3) Involve staff in the process of digital transformation. Share new progress and new insights gained to illuminate the way forward.
4) Create an AI ethics committee with staff from a variety of seniority levels.
5) Give existing staff the necessary skills to effectively utilize Generative AI, rather than hiring new people with technological knowledge who do not know the business. Educate staff on when not to follow, and when to challenge, the findings of AI.
6) Key performance indicators (KPIs) need to be adjusted. Some tasks become easier with AI, but the process of digital transformation is time consuming.

Communication:
1) Communicate a clear, coherent, long-term vision, with a clear role for staff. The steps towards that vision should reflect the technological changes, business model changes, and the changes in staff roles.
2) Be open and supportive towards staff reporting problems, so that whistleblowing is avoided.

Table 3. How leaders can build trust in AI with customers

Implementation:
1) Avoid using unsupervised Generative AI to complete tasks on its own.
2) Only allow AI to complete tasks on its own where the processes are clear and transparent and the outcomes predictable.
3) Have clear guidelines on how staff can utilize Generative AI, covering what manual checks they should make.
4) Monitor the competition and don't fall behind in how trust in AI is built.

Communication:
1) Explain where Generative AI and other AI are used and how.
2) Emphasise the values and ethics of the organization and how they still apply when Generative AI, or other AI, is used.

The authors thank the Institute of Corporate Directors Malaysia for their support, and for featuring this research: https://pulse.icdm.com.my/article/how-leadership-in-financial-organisations-build-trust-in-ai-lessons-from-boards-of-directors-in-fintech-in-malaysia/

References

Zarifis A. & Yarovaya L. (2025) ‘Building Trust in AI: Leadership Insights from Malaysian Fintech Boards’ In Zarifis A. & Cheng X. (eds.) Fintech and the Emerging Ecosystems – Exploring Centralised and Decentralised Financial Technologies, Springer: Cham. https://doi.org/10.1007/978-3-031-83402-8_15 (open access)

(chapter 4 in book)
Central bank digital currencies (CBDC) have been implemented by some countries and trialled by many more. As the name suggests, the fundamental characteristics are that this is digital money, without a physical note or coin, issued by a central bank.
The consumer has an increasing range of financial services to choose from, including decentralised, blockchain-based cryptocurrencies. A CBDC may use blockchain technology, but it is centralized, so the institutions that support it play an important role. While being centralised may reduce some risks, it may inadvertently increase others. Despite the centralised, top-down nature of this financial technology, it still needs to be adopted, so the consumer's perspective, particularly their trust in it, is very important. Each CBDC implementation can be different, and each country's context can be different, therefore it is important to understand each case separately.
This research models the Brazilian consumer’s trust in their two-tier CBDC, where the central bank and the retail banks retain their current role (Zarifis and Cheng, 2025). This implementation is not a one tier solution where retail banks are bypassed in some ways, and the citizen interacts mostly with the central bank.
Existing research that identified six ways to build trust in a different CBDC (Zarifis and Cheng, 2024) was used as a basis. This research tested a model with one additional way to build trust, but this additional way was not supported. The seventh hypothesized way, which is not supported, is that the implementation process, including pilot implementations, would build trust. Therefore, despite the differences in the Brazilian CBDC, the original model applies here also, which suggests the model applies to both two-tier solutions and mixed one- and two-tier solutions.

Figure 1. Model of consumer trust in Brazil’s two-tier CBDC, adapted from (Zarifis and Cheng 2024)

Three institutional and three technological factors are found to play a role. The six supported ways to build trust are: (a) trust in the government and central bank offering the CBDC, (b) expressed guarantees for those using it, (c) the favourable reputation of other active CBDCs, (d) the CBDC technology, with its automation and the limited human involvement necessary, (e) the trust-building features of the CBDC wallet app, and (f) the privacy features of the CBDC wallet app and back-end processes.
It is important to develop user-centred services in Brazil so that trust is built in the services themselves, and in the government institutions that deliver them, sufficiently for broad adoption.

References
Zarifis A. & Cheng X. (2024) ‘The six ways to build trust and reduce privacy concern in a Central Bank Digital Currency (CBDC)’. In Zarifis A., Ktoridou D., Efthymiou L. & Cheng X. (ed.) Business digital transformation: Selected cases from industry leaders, London: Palgrave Macmillan, pp.115-138. https://doi.org/10.1007/978-3-031-33665-2_6 (open access)

Zarifis A. & Cheng X. (2025) ‘A model of trust in Central Bank Digital Currency (CBDC) in Brazil: How trust in a two-tier CBDC with both the central and retail banks involved changes consumer trust’ In Zarifis A. & Cheng X. (eds.) Fintech and the Emerging Ecosystems – Exploring Centralised and Decentralised Financial Technologies, Springer: Cham. https://doi.org/10.1007/978-3-031-83402-8_4 (open access)

E-government can utilise many new technologies to offer better services. Given the potential benefits of e-government, it is crucial to understand how to successfully achieve agile responses with e-government systems. An agile response in e-government is when government employees use technology and are very effective in their role. The transformation of technology and collaboration methods, driven by e-government systems, forces government employees to reconsider their daily workflow and collaboration with colleagues.

Despite the extensive existing knowledge of technology usage and collaboration, there are limitations in explaining the synergy between technology usage and group collaboration in achieving agile responses, from the perspective of government employees.

To address these challenges, this study provides a holistic understanding of the successful pathway to an agile response in e-governance, from the perspective of government employees. Two parallel paths are needed to achieve an agile response in e-governance. This study identifies five layers of mechanisms that lead to an agile response in e-governance, considering both the government-employee technology usage path, and the group collaboration path.

Figure 1. Model of how to achieve an agile response in e-governance
The dual pathways are as follows: Level 5 is positioned at the bottom of the model. It includes the fundamental factors that contribute to an agile response in e-governance, including ease of use, usefulness, and traceability. Traceability in this context relates mainly to government employees' workflow.

Levels 2, 3, and 4 are the intermediate factors, which play a bridging role, and are mainly composed of system quality, technology mindfulness, software reliance, communication transparency, trust, and collaboration efficiency. Specifically, system quality, technology mindfulness, and software reliance belong to the government-employee technology usage pathway, while communication transparency, trust, and collaboration efficiency belong to the government-employee collaboration pathway.

Level 1, at the top of the model, is the ultimate goal: an agile response in e-governance. This research shows that to achieve an agile response in e-government, both the perspective of government-employee technology usage and the perspective of group collaboration efficiency must be taken into account.

Reference
Bao Y., Cheng X., Su L. & Zarifis A. (2024) ‘Achieving employees’ agile response in e-governance: Exploring the synergy of technology and group collaboration’, Group Decision and Negotiation. https://doi.org/10.1007/s10726-024-09911-y (open access)

Generative AI (GenAI) has seen explosive growth in adoption. However, the consumer's perspective on its use for financial advice is unclear. As with other technologies used in processes that involve risk, trust is one of the challenges that need to be overcome. There are personal information privacy concerns, as more information is shared and the ability to process personal information increases.

While the technology has made a breakthrough in its ability to offer financial insight, there are still challenges from the users' perspective. Firstly, users ask a wide variety of financial questions. A user's financial questions may be specific, such as 'does stock X usually give a higher dividend than stock Y', or vague, such as 'how can my investments make me happier'. Financial decisions often have far-reaching, long-term implications.

Figure 1. Model of building trust in advice given by Generative AI when answering financial questions

This research identified four methods to build trust in Generative AI in both of the scenarios, specific and vague financial questions, and one method that only works for vague questions. Humanness has a different effect on trust in the two scenarios. When a question is specific, humanness does not increase trust, while (1) when a question is vague, human-like Generative AI increases trust. The four ways to build trust in both scenarios are: (2) Human oversight and being in the loop, (3) transparency and control, (4) accuracy and usefulness, and finally (5) ease of use and support. For the best results all the methods identified should be used together to build trust. These variables can provide the basis for guidelines to organizations in finance utilizing Generative AI.

A business providing Generative AI for financial decisions must be clear about what it is being used for. For example, analysing past financial performance to attempt to predict future performance is very different to analysing social media activity. The advice of Generative AI needs to feel like a fully integrated part of the financial community, not just a system. Trust must be built sufficiently to overcome the perceived risk. The findings suggest that the consumer will not follow the ‘pied piper’ blindly, however alluring ‘their song’ of automation and efficiency is.

Reference
Zarifis A. & Cheng X. (2024) ‘How to build trust in answers given by Generative AI for specific, and vague, financial questions’, Journal of Electronic Business & Digital Economics, pp.1-15. https://doi.org/10.1108/JEBDE-11-2023-0028 (open access)

Cryptocurrencies’ popularity is growing despite short-term fluctuations. Peer-reviewed research into trust in cryptocurrency payments started in 2014 (Zarifis et al., 2014, 2015). While the model created then is based on proven theories from psychology, and supported by empirical research, a lot has changed in the past 10 years. This research re-evaluates and extends the first model of trust in cryptocurrencies and delivers the second extended model of consumer trust in cryptocurrencies, CRYPTOTRUST 2 (Zarifis & Fu, 2024), as seen in figure 1.

Figure 1: The second extended model of consumer trust in cryptocurrencies (CRYPTOTRUST 2)
Trust in a cryptocurrency is a multifaceted issue. While some believe that the consumer does not need to trust cryptocurrencies because they utilize blockchain, most people appreciate that you must trust cryptocurrencies, just as you must trust any other technology you use that involves some risk.

The first three variables of the model come from the individual’s psychology: Personal innovativeness is divided into (1) personal innovativeness in technology and (2) personal innovativeness in finance. These two influence (3) personal disposition to trust.

There are then six variables that come from the specific context, and not the person’s psychology: The first three are related to the cryptocurrency itself. These are (4) the stability in the cryptocurrency value, (5) the transaction fees and (6) reputation. Institutional trust is shaped by (7) regulation and (8) payment intermediaries that may be involved in fulfilling the transaction. The last contextual factor is (9) trust in the retailer. The six variables from the context influence (10) trust in the cryptocurrency payment which then, finally, influences (11) the likelihood of making the cryptocurrency payment.

Separating personal innovativeness into (1) personal innovativeness in technology and (2) personal innovativeness in finance is a useful distinction, as some consumers may have different levels of innovativeness for each. The analysis here supports that these are separate constructs.

This research shows that trust in cryptocurrencies has not changed fundamentally, but it has evolved. All the main actors in the value chain still play a role in building trust. There is more emphasis from the consumer on having a stable value and low transaction fees. This may be because consumers now have more experience with cryptocurrencies, and they are better informed. It may also be because there are more cryptocurrencies available, and other alternatives such as Central Bank Digital Currencies (CBDC), so consumers can review the many alternatives and try to identify the best one.

References

Zarifis A., Cheng X., Dimitriou S. & Efthymiou L. (2015) ‘Trust in digital currency enabled transactions model’, Proceedings of the Mediterranean Conference on Information Systems (MCIS), pp.1-8. https://aisel.aisnet.org/mcis2015/3/

Zarifis A., Efthymiou L., Cheng X. & Demetriou S. (2014) ‘Consumer trust in digital currency enabled transactions’, Lecture Notes in Business Information Processing-Springer, vol.183, pp.241-254. https://doi.org/10.1007/978-3-319-11460-6

Zarifis A. & Fu S. (2024) ‘The second extended model of consumer trust in cryptocurrency payments, CRYPTOTRUST 2’, Frontiers in Blockchain, vol.7, pp.1-11. https://doi.org/10.3389/fbloc.2024.1220031 (open access)

There are many benefits for researchers who take part in a project, but there are also several challenges that can have a cumulative, negative effect on their mental health. This research identifies the challenges researchers face in projects, so that the leader of the project can reduce them as far as possible.

Existing research focuses on four stages of a project: Forming, Storming, Norming and Adjourning. This research adds a fifth stage, Post-Project Collaboration, as this stage is implicitly or explicitly a part of most research projects. For example, a post-doctoral researcher expects to be credited for their work even if it is published after the end of the project. The specific challenges for each of the five stages are identified. This enables the leader to focus on a manageable number of challenges at each stage.

Some challenges appear in only one stage of the process, while others span several stages. It is notable that there is no conflict at the start, but trust is a challenge at the start. This suggests that low trust at the start causes problems later. There is, therefore, a delayed reaction, and once the conflict happens it might be too late, as the trust should have been built earlier.

Figure 1: A model for reducing the challenges for researchers in projects across five stages

Trust is important in several collaboration settings, particularly at the start, until participants familiarise themselves with each other and the project team matures. In research teams, due to the long period of time until the research is published, often over five years, there is an additional, long-term cause for risk and distrust that is only resolved once the research is published.

Trust should be built during the first stage to cover four specific topics: Trust in the leader, process, evaluation method, and trust in being credited in published work.

In the final two stages of the project, adjourning and post-project collaboration, a new vision needs to be communicated effectively as the original vision stops resonating after the norming stage.

For those challenges that cannot be solved outright, the leader of the research project must at least show an awareness of them. The leader should be ambidextrous, in the sense of focusing on both the project deliverables and the socio-psychological aspects of the teamwork.

Reference

Zarifis A. & Cheng X. (2024) ‘A model reducing researchers’ challenges in projects: build trust first for better mental health’, Cogent Business & Management, vol.11., no.1, pp.1-13. https://doi.org/10.1080/23311975.2024.2350786 (open access)

Financial technology, often referred to as Fintech, and sustainability are two of the biggest influences transforming many organizations. However, not all organizations move forward on both with the same enthusiasm. Leaders in Fintech do not always prioritize operating in a sustainable way. It is, therefore, important to find the synergies between Fintech and sustainability.

One important aspect of the transformation many organizations are going through is the consumers' perspective, particularly the trust they have, their personal information privacy concerns, and the vulnerability they feel. It is important to clarify whether leadership in Fintech combined with leadership in sustainability is more beneficial than leadership in Fintech on its own.

This research evaluates consumers’ trust, privacy concerns, and vulnerability in the two scenarios separately and then compares them. Firstly, this research seeks to validate whether leadership in Fintech influences trust in Fintech, concerns about the privacy of personal information when using Fintech, and the feeling of vulnerability when using Fintech. It then compares trust, privacy concerns and vulnerability in two scenarios, one with leadership in both Fintech and sustainability, and one with leadership just in Fintech without sustainability.

Figure 1. Leadership in Fintech, trust, privacy and vulnerability, with and without sustainability

The findings show that, as expected, leadership in both Fintech and sustainability builds trust more, which in turn reduces vulnerability more. Privacy concerns are lower when sustainability leadership and Fintech leadership come together; however, this combined impact was not found to be statistically significant. So, contrary to what was expected, privacy concerns are not reduced more effectively when there is leadership in both together.

The findings support the link between sustainability in a Fintech's processes and its success. While the limited research looking at Fintech and sustainability finds support for the link between them by taking a 'top-down' approach, evaluating Fintech companies against benchmarks such as economic value, this research takes a 'bottom-up' approach by looking at how Fintech services are received by consumers.

An important practical implication of this research is that even when there is sufficient trust to adopt and use Fintech, the consumer often still feels a sense of vulnerability. This means the leaders in Fintech must not just think about how to do enough for the consumer to adopt their service, but they should go beyond that and try to build trust and reduce privacy concerns to the degree that the consumer’s belief that they are vulnerable is also reduced.

These findings can inform a Fintech’s business model and the services it offers consumers.

Reference

Zarifis A. (2024) ‘Leadership in Fintech builds trust and reduces vulnerability more when combined with leadership in sustainability’, Sustainability, 16, 5757, pp.1-13. https://doi.org/10.3390/su16135757 (open access)

Featured by FinTech Scotland: https://www.fintechscotland.com/leadership-in-fintech-builds-trust-and-reduces-vulnerability/

A Non-Fungible Token, usually referred to by its acronym NFT, uses technology that involves data on a blockchain that cannot be changed after it has been added. Therefore, while NFTs share similar blockchain technology with cryptocurrencies, the functionality is different. NFTs' functionality enables them to be used to prove ownership of an intangible-digital, or tangible-physical, asset, and the associated rights of the owner. The most popular practical applications of NFTs for digital assets are proving ownership of digital art, virtual items in computer games, and music.

The unique features of NFTs are becoming increasingly appealing as we spend more of our time online. Despite this increased popularity there is a lack of clarity over the final form this digital asset will take. The purchasing process in particular needs to be clarified.

This research developed a model of the purchasing process of NFTs and the role of trust in this process. The model identified that the purchasing process of NFTs has four stages, and each stage requires trust.
In the figure, you can see the four stages of the purchasing process on the left, and the trust required at each of these stages along the center. Finally, on the right, you see that trust in all four stages leads to trust in an NFT purchase.

Figure 1. Model of consumer trust at each stage of the NFT purchasing process

The four stages of the purchase are: First, set up a cryptocurrency wallet to pay for the NFT and to be able to receive it. Second, purchase cryptocurrency with the cryptocurrency wallet. Third, use the cryptocurrency wallet to pay for an NFT on an NFT marketplace. Finally, there is the fourth stage, after-sales service, which may involve returns or some other form of support.

The model supported by our analysis identified four stages to trust: First, trust in the cryptocurrency wallet; second, trust in the cryptocurrency purchase; third, trust in the NFT marketplace; and fourth, trust in after-sales services and resolving disputes.
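The all-or-nothing structure of the model, where trust is needed at every one of the four stages for the purchase as a whole to be trusted, can be sketched in a short Python illustration. The stage names and data structure below are our own, added purely for illustration; they are not part of the published model:

```python
from enum import Enum, auto

class NFTPurchaseStage(Enum):
    """The four stages of the NFT purchasing process (illustrative names)."""
    SETUP_WALLET = auto()        # 1. set up a cryptocurrency wallet
    BUY_CRYPTOCURRENCY = auto()  # 2. purchase cryptocurrency with the wallet
    PAY_ON_MARKETPLACE = auto()  # 3. pay for the NFT on an NFT marketplace
    AFTER_SALES = auto()         # 4. after-sales service and dispute resolution

def purchase_trusted(trust_by_stage: dict[NFTPurchaseStage, bool]) -> bool:
    """Trust in the overall NFT purchase requires trust at every stage."""
    return all(trust_by_stage.get(stage, False) for stage in NFTPurchaseStage)

# Example: trust has been established at the first three stages only.
trust = {
    NFTPurchaseStage.SETUP_WALLET: True,
    NFTPurchaseStage.BUY_CRYPTOCURRENCY: True,
    NFTPurchaseStage.PAY_ON_MARKETPLACE: True,
    NFTPurchaseStage.AFTER_SALES: False,
}
print(purchase_trusted(trust))  # False: trust is missing at the after-sales stage
```

The sketch makes the sequential dependency explicit: a weakness at any single stage, such as after-sales support, undermines trust in the purchase as a whole.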

Reference

Zarifis A. & Castro L.A. (2022) ‘The NFT purchasing process and the challenges to trust at each stage’, Sustainability, vol.14, no.24:16482, pp.1-13. https://doi.org/10.3390/su142416482 (open access)