A Non-Fungible Token, usually referred to by its acronym NFT, is built on blockchain technology: once data have been added to the blockchain, they cannot be changed. Therefore, while NFTs share similar blockchain technology with cryptocurrencies, their functionality is different. This functionality enables NFTs to be used to prove ownership of an intangible digital asset, or a tangible physical asset, and the associated rights of the owner. The most popular practical applications of NFTs for digital assets are proving ownership of digital art, virtual items in computer games, and music.
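As a minimal illustration of how this ownership proof works in practice, the sketch below queries the `ownerOf` function of an ERC-721 contract (the most common NFT standard) using the web3.py library. The RPC endpoint, contract address and token ID are placeholders, not references to a real deployment.

```python
# A minimal sketch of checking on-chain NFT ownership with web3.py.
# The node URL, contract address and token ID below are placeholders.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-node.invalid"))  # placeholder RPC

# Just the ERC-721 ownerOf fragment of the ABI, enough for this query.
ERC721_OWNER_OF_ABI = [{
    "name": "ownerOf",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "tokenId", "type": "uint256"}],
    "outputs": [{"name": "owner", "type": "address"}],
}]

nft = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # placeholder
    abi=ERC721_OWNER_OF_ABI,
)

# Against a live node this returns the current holder's address:
# owner = nft.functions.ownerOf(1).call()
```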
The unique features of NFTs are becoming increasingly appealing as we spend more of our time online. Despite this increased popularity, there is a lack of clarity over the final form this digital asset will take; the purchasing process in particular needs to be clarified.
This research developed a model of the purchasing process of NFTs and of the role of trust in this process. The model identified that the purchasing process of NFTs has four stages, and that each stage requires trust. The figure shows the four stages of the purchasing process on the left, the trust required at each of these stages along the center, and, on the right, how trust in all four stages leads to trust in an NFT purchase.
The four stages of the purchase are: first, set up a cryptocurrency wallet to pay for the NFT and to be able to receive it; second, purchase cryptocurrency with the cryptocurrency wallet; third, use the cryptocurrency wallet to pay for an NFT on an NFT marketplace; and fourth, receive after-sales service, which may involve returns or some other form of support.
The model supported by our analysis identified trust at four stages: first, trust in the cryptocurrency wallet; second, trust in the cryptocurrency purchase; third, trust in the NFT marketplace; and fourth, trust in after-sales service and dispute resolution.
Reference
Zarifis, A. & Castro, L.A. (2022) ‘The NFT purchasing process and the challenges to trust at each stage’, Sustainability, vol.14, no.24:16482, pp.1-13. Available from (open access): https://doi.org/10.3390/su142416482
The interest in Non-Fungible Tokens (NFTs) has ‘exploded’ recently, but it is not clear what final form they will take. This innovation will have difficulties reaching a wider audience until more clarity is achieved on two main issues: what exactly are the NFT business models, and how do they build trust? The findings of recent research (Zarifis and Cheng, 2022), illustrated in figure 1, show that there are four NFT business models:
(1) The first business model is an NFT creator: they can create digital art that is then minted as an NFT and sold on an NFT platform. The competitive advantages of NFTs here include irrefutable proof of ownership and the ability to sell a piece of art that is unique or limited to a small number of copies. The reliability and transparency of the NFT build trust with the consumer.
(2) The second business model is an NFT marketplace selling creators’ NFTs: the competitive advantage of NFTs in this business model is once again irrefutable ownership, and that it gives consumers digital art they can own. The purchase history of consumers is transparent, which gives insights into their interests. As with the previous business model, a community and trust are built between the collectors.
(3) The third business model is a company offering its own NFT, typically a fan token: this business model has several NFT processes. These are selling NFTs for profit, giving NFTs as rewards, accepting payment with fan tokens, and giving NFTs that grant the holder certain utilities and rights, such as voting rights. The competitive advantages of NFTs within this business model are that they allow fans to feel closer to their team and build a community and trust between the fans.
(4) The fourth business model is a computer game with NFT sales: there can be in-game purchases of NFT-minted virtual items, limited or unique in-game purchases, and players can be rewarded for playing, known as ‘play to earn’. This offers incentives to game developers to continue producing rare items, provides an ongoing revenue stream for existing games, and builds a community and trust between the players.
Reference
Zarifis A. & Cheng X. (2022) ‘The business models of NFTs and Fan Tokens and how they build trust’, Journal of Electronic Business & Digital Economics, vol.1, pp.1-14. Available from: https://doi.org/10.1108/JEBDE-07-2022-0021
New Fintech and Insurtech services are popular with consumers as they offer convenience, new capabilities and, in some cases, lower prices. Consumers like these technologies, but do they trust them? The role of consumer trust in the adoption of these new technologies is not entirely understood. From the consumer’s perspective, there are some concerns due to the lack of transparency these technologies can have. It is unclear whether these systems powered by artificial intelligence (AI) are trusted, and how many interactions with consumers they can replace. Several recent adverts emphasizing that a company will not force you to communicate with AI, and will provide a real person to communicate with, are evidence of some push-back by consumers. Even pioneers of AI like Google are offering more opportunities to talk to a real person, an indirect acknowledgment that some people do not trust the technology. Therefore, this research attempts to shed light on the role of trust in Fintech and Insurtech, and especially on whether trust in AI in general and trust in the specific institution play a role (Zarifis & Cheng, 2022).
This research validates a model, illustrated in figure 1, that identifies the four factors that influence trust in Fintech and Insurtech. As with many other models of human behavior, the starting point is the individual’s psychology and the sociology of their environment. The model then separates trust in a specific organization from trust in a specific technology like AI. This is an important distinction: consumers bring with them pre-existing beliefs about the organization and separate pre-existing beliefs about AI, and their beliefs about AI may have been shaped by experiences with other organizations.
Therefore, the validated model shows that trust in Fintech or Insurtech is formed by (1) the individual’s psychological disposition to trust, (2) sociological factors influencing trust, (3) trust in either the financial organization or the insurer, and (4) trust in AI and related technologies.
This model was initially tested separately for Fintech and Insurtech. In addition to validating the model for each separately, the two models were compared to see whether they are equally valid or different. For example, if one variable were more influential in one of the two models, this would suggest that the model of trust in one is not the same as in the other. The results of the multigroup analysis show that the model is indeed equally valid for Fintech and Insurtech. Having a model of trust that is suitable for both is particularly useful, as these services are often offered by the same organization, or even side by side in the same mobile application.
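As a toy illustration of the idea behind such a multigroup comparison (not the authors’ actual structural equation analysis, and using entirely synthetic data), the sketch below fits the four-factor model separately for a hypothetical Fintech group and a hypothetical Insurtech group and compares the estimated coefficients:

```python
# A toy multigroup comparison on synthetic data: fit the four-factor trust
# model for two groups and compare coefficients. Illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
FACTORS = ["disposition", "sociological", "org_trust", "ai_trust"]

def fit_group(n=200):
    X = rng.normal(size=(n, len(FACTORS)))          # synthetic factor scores
    y = X @ [0.30, 0.20, 0.35, 0.25] + rng.normal(scale=0.5, size=n)
    return sm.OLS(y, sm.add_constant(X)).fit()

fintech, insurtech = fit_group(), fit_group()
for i, name in enumerate(FACTORS, start=1):
    print(f"{name:12s} fintech={fintech.params[i]:+.2f} "
          f"insurtech={insurtech.params[i]:+.2f}")
```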
Reference
Zarifis A. & Cheng X. (2022) ‘A model of trust in Fintech and trust in Insurtech: How Artificial Intelligence and the context influence it’, Journal of Behavioral and Experimental Finance, vol. 36, pp. 1-20. Available from (open access): https://doi.org/10.1016/j.jbef.2022.100739
The volatile times we live in create many real and perceived risks for people (Zarifis et al. 2022). This makes trust harder to build and maintain. Since the turn of the century, the more impersonal nature of many parts of our lives, due to increased reliance on technology, has made trust harder; technology was often a barrier to trust. Now, we no longer necessarily prioritise trust in humans over trust in machines, and there are many technologies that support trust.
Trustech is technology that builds and protects user, or consumer, trust. The importance of trust, and of the technology that supports it, has increased over the years. We are now at a point where a specialised term is needed to represent the technology that supports trust. After similar terms such as Fintech and Insurtech comes Trustech. The use of the term Trustech will focus minds on this important area.
Dr Alex Zarifis, FHEA
Reference
Zarifis A., Cheng X., Jayawickrama U. & Corsi S. (2022) ‘Can Global, Extended and Repeated Ransomware Attacks Overcome the User’s Status Quo Bias and Cause a Switch of System?’, International Journal of Information Systems in the Service Sector (IJISSS), vol.14, iss.1, pp.1-16. Available from (open access): https://doi.org/10.4018/IJISSS.289219
Theo Andersson, Business Intelligence Officer / Omvärldsbevakare at DIGG – Swedish Agency for Digital Government
In July 2020, the Swedish Public Employment Service, the Swedish eHealth Agency, the Agency for Digital Government (DIGG) and the Swedish Tax Agency were collectively tasked with showing how an individual’s ability to have insight and control over data that have been stored about them by the public sector – and, in the long term, in the private sector – can be enhanced.
Large quantities of data about individuals are processed by public administration, for which there is support in statutes or agreements. The public sector often not only has the right but also an obligation to process personal data in order to perform its duties.
It is widely believed that it would make things easier for individuals, and introduce efficiency improvements for the public sector, if it were possible for another party to reuse information already possessed by an authority as the basis for making decisions or taking certain measures. It is, however, unclear exactly how this can be done and what prerequisites need to be in place for such user-centricity to be achieved in the design of these services.
The initial external monitoring and analysis of these issues focused on various publications from, amongst others, the EU, UN and OECD, in order to determine whether there exists a consensus regarding whether individuals are, or should be, entitled to increased insight into and control over the data stored about them. After confirming that there indeed seemed to be such a consensus, we were interested in how well technological developments and discussions on policy, data management and public agencies’ instructions agreed with the vision for how this insight and control will be achieved and then given to, and managed by, the individual.
The citizen perspective
Most citizen surveys we reviewed showed that the public sector has an important role to play in legitimising new methods of sharing and using data in the minds of potential users, i.e. citizens. Trust in the public sector is an important aspect of how private individuals view being part of a data-sharing ecosystem that also includes private companies (Zarifis, Cheng, & Kroenung, 2019). The degree to which the public sector adds trust to the data-sharing ecosystem naturally depends on political realities in different countries. What was common, however, was the notion that increased control over one’s own personal data is generally desired by the individual. However, few are clear on how this control can be exercised, or whether they wish to take upon themselves the greater personal responsibility for sharing data that becomes necessary if more efficient public services are to be realised.
Examples from a number of different countries (Finland, France, Norway, Denmark, the United States, the United Kingdom and India) were also analysed; they emphasised differing aspects of insight and control, such as technical solutions, legal aspects, the involvement of trade and industry, and the value of political governance.
Legal aspects
Technical solutions, financial means and clear political governance are important and vary between different countries, but they are not usually the primary obstacle to the development of a proof of concept or solution. Usually, it is legal interpretations and uncertainties that appear to be the most problematic and that are considered obstacles to developments in this area. The legal basis for sharing data between private and public entities and for an individual’s ability to have increased control over this data-flow is not specified by any national or supranational organisation, although there are suggestions, most of which are centred on the concept of consent. How consent is interpreted varies, however, between different nations (even within the EU). Several future investigations will have to be conducted in order to clarify rules, responsibilities and possibilities for initiating data-sharing.
There are also uncertainties regarding which incentives and prerequisites (in addition to legal certainty) would motivate individuals, trade and industry to actively demand increased insight and control for individual citizens from political policy-makers. It is not obvious, by any means, that the individual’s incentives are aligned with that of private sector representatives in the proposed data-sharing ecosystem.
Digital maturity
To keep government systems working in accordance with citizens’ increasing expectations of public services, there is a risk of assuming that all eligible citizens understand the consequences, costs, benefits and risks of sharing personal data (or of not doing so). Without this assumption, the premise that more and more collected data can be used for more effective public services, based on data shared by consent, becomes problematic.
In Sweden, it is this exact fear of the individual’s incapacity to make informed decisions about data-sharing that renders consent an obsolete basis for data-sharing when designing an ecosystem that aims to allow an individual to initiate, or permit, data-sharing between a lone individual and government authorities, or between a public authority and a private company. The risk of perceived or overt pressure influencing the individual’s decision-making is deemed too high.
No matter what the basis for data-sharing will be in the future, this development will have to be accompanied by a public discussion and a strategy for handling individuals who opt out of the systems, for reasons ranging from a lack of digital competence or cognitive ability to simply preferring the established processes of the past.
Opt-in
Since the ecosystem will be based on voluntary participation, at least initially and for the foreseeable future, clear incentives need to be created for all parties in the value chain to use shared data, from the individual to organisations. Individuals need to understand how data can be used and what benefit this can bring them, such as new value-creating services and reduced lag-time from idea to citizen service delivery. Solutions for providing increased insight and control will otherwise result only in passiveness, inactivity and a reduction in shared data. This places high demands on the design of the interfaces that shape the user experience, and on generating commitment to continued use of the services that suppliers of different tools for insight and control can offer.
The private sector also needs to identify new possibilities and business models based on their customers controlling more of the interaction between them; otherwise they will not take part in initiatives that attempt to explore the opportunities for increased innovation and value creation (Weill, Woerner, & Diaz Baquero, 2021; Zarifis, Kawalek, & Azadegan, 2021). The public sector has a responsibility to ensure an inclusive environment as regards the identification and analysis of potential hindrances and/or opportunities that private sector actors may have. Without a joint effort to realise the vision of increased insight and control for the individual, who is at once resident, recipient of public services and customer, there will be no progress outside of areas deemed vital. These areas could be vital either for the public sector’s ability to handle health policies or tackle diseases (e.g. the mandatory sharing of health data), or for certain private companies’ work with corporate social responsibility projects, where certain customer data may be offered back to the individual in order to allow for goodwill-creating incentives.
Testing environments
As mentioned, there is a need for policy that allows for the co-creation of a data-sharing ecosystem, and this may require venues (physical and digital) in which public and private entities can jointly produce solutions based on clearly identified needs amongst citizens (European Commission, 2019). This may involve part of a life event or, for example, a business opportunity that a company wishes to explore. A common understanding is first needed of what is meant by data portability, and of how existing structures, business models and policies may be affected by giving individuals insight into and control over personal data kept by public administration, as well as consumer and behavioural data kept by businesses and organisations. The transition is complex, and the need for test environments and expertise in designing digital environments is considerable. Such testing environments, be they called greenhouses or regulatory sandboxes, should incorporate legal considerations and expertise, human-centric design, use cases taken from political priorities, technical expertise and user tests.
End
The work carried out from the latter part of 2020 until June 2021 resulted in a governmental report containing a model/conceptualisation of a data-sharing ecosystem centred on the individual (see below), and a proof of concept showcasing the ability to populate a digital CV with data from government sources, such as address, completed studies, authenticated driving licence(s) and so on. The report also contained a visualisation of a scenario for sharing data related to receiving a doctor’s confirmation of illness and sharing this with relevant authorities.
Figure 1. The individual in the centre with overview, insight and control.
The report identified that there is great value in giving private individuals increased insight and control over their data. It has also become apparent that there are several different ways of technically realising an arrangement that puts the individual in the centre, where they can be given an overview, insight and control. Whilst we can see great potential in increasing user-centricity with regard to the way data is processed, we can also see a need for other prerequisites to move at the same pace and in the same direction. It has become clear that there is a potential conflict between an individual’s entitlement to insight and control on the one hand, and the prerequisites for authorities to realise such insight and control on the other. The report therefore highlighted the following recommendations for further efforts relating to the issue of increased insight and control over personal data:
An investigation should be made into how users of tools and services for increased insight and control want these to be designed. The needs of individuals must have a direct effect on how the tools, including the data overview or digital wallet, should be designed and what functionality should be prioritised. An increased understanding of the important design aspects of these tools can have a significant impact on the utilisation rate and is important for digital inclusiveness.
There is a need to clarify responsibilities for user areas, i.e. log-in areas on public websites where data on an individual is displayed. Under the current system, it may be challenging for authorities to comply with controllership as defined in the EU General Data Protection Regulation (European Parliament, 2016), and with the many obligations towards the individuals whose personal information is processed. The potential conflict is between the ability of authorities to provide services by means of such user areas while, at the same time, avoiding that details stored therein become official documents. The authority must not have any real power over the information and documents processed in the user area in order for it to be considered as such according to the regulations in the Freedom of the Press Act (European Court of Human Rights, 2021). It must therefore be clarified how these two conflicting interests can be united in sustainable solutions.
Sharing information with other authorities and individuals, even when done on behalf of the individual in question, is not covered by the service provision obligations of authorities. The same applies to a possible exchange of information between authorities and private entities. Today, the way authorities process personal data is regulated by means of sector-specific legislation. The legal status of the digital transmission of information, and the possibilities of providing individuals with insight and control, can thus be considered uncertain. The report suggests that public authority service obligations come to include disclosing and transmitting information to the individual and to other authorities using digital media, as this is not specified today; this would give authorities greater opportunities to provide for an individual’s insight and control.
In short, the report highlights the necessity of a government-commissioned impact analysis with regard to expanding the service obligations, incorporating the special data protection regulations (data registry laws) and the possibility of disclosing data in electronic form.
Key concepts
Insight: Insight is taken to mean that individuals are able to visualise, or otherwise understand, the types of data kept about them by whatever organisation holds the information, in an easily understandable and unified manner, as well as being able to see for what purposes the information is collected and on what legal basis it is processed.
Control: Control refers mainly to the possibilities for an individual to digitally access their data and to request correction, deletion or transfer of data to and from the entity that holds it.
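To make these two concepts concrete, the sketch below shows one hypothetical shape a single entry in an individual’s “data overview” could take; the field names and values are illustrative assumptions, not a DIGG specification:

```python
# A hypothetical "data overview" entry giving an individual insight (what is
# held, why, on what legal basis) and control (what they can request).
# All field names and values are illustrative, not an official schema.
import json

overview_entry = {
    "holder": "Swedish Tax Agency",
    "data_category": "registered address",
    "purpose": "population registration",
    "legal_basis": "legal obligation (GDPR Art. 6(1)(c))",
    "available_controls": ["view", "request_correction", "request_transfer"],
}
print(json.dumps(overview_entry, indent=2, ensure_ascii=False))
```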
Recommended Citation:
Andersson T. (2021) ‘Individual Insight and Control Over Personal Data Held by The Public And Private Sector: A Swedish Perspective’, TrustUpdate.com, Available from: https://www.trustupdate.com/case-studies/
Biography:
Theo Andersson studied Business Administration and International Relations at the University of Gothenburg. Having worked with bilateral trade and investment relations as an officer of the Cyprus Chamber of Commerce and Industry for several years, Theo returned to Sweden and started working with questions relating to the digitalisation of public sector services as an officer of the Swedish Public Employment Service. This led to a short stint as a regional digitalisation coordinator at the Region of Halland before accepting a position as trend surveyor/business intelligence analyst at the then recently formed Agency for Digital Government (DIGG).
References
European Commission. (2019). Ethics Guidelines for Trustworthy AI. Retrieved from https://ec.europa.eu/digital
European Court of Human Rights. (2021). Guide on Article 10 of the European Convention on Human Rights, Freedom of Expression. Retrieved from https://echr.coe.int/Documents/Guide_Art_11_ENG.pdf
European Parliament. (2016). EU General Data Protection Regulation GDPR. Official Journal of the European Union. Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679
Weill, P., Woerner, S. L., & Diaz Baquero, A. P. (2021). Hello Domains, Goodbye Industries. Retrieved from https://cisr.mit.edu/publication/2021_0101_HelloDomains_WeillWoernerDiaz
Zarifis, A., Kawalek, P., & Azadegan, A. (2021). Evaluating If Trust and Personal Information Privacy Concerns Are Barriers to Using Health Insurance That Explicitly Utilizes AI. Journal of Internet Commerce, 20(1), 66–83. https://doi.org/10.1080/15332861.2020.1832817
Zarifis, A., Cheng, X., & Kroenung, J. (2019). Collaborative consumption for low and high trust requiring business models: from fare sharing to supporting the elderly and people with disability. International Journal of Electronic Business, 15(1), 1. https://doi.org/10.1504/ijeb.2019.10020272
The capabilities of Artificial Intelligence (AI) are increasing dramatically, and it is disrupting insurance and healthcare. In insurance, AI is used to detect fraudulent claims, and natural language processing is used by chatbots to interact with the consumer. In healthcare, it is used to make a diagnosis and plan what the treatment should be. The consumer is benefiting from customized health insurance offers and real-time adaptation of fees. Currently, however, the interface between the consumer purchasing health insurance and AI raises some barriers, such as insufficient trust and privacy concerns.
Consumers are not passive towards the increasing role of AI. Many consumers have beliefs about what this technology should do. Furthermore, regulation is moving toward making it necessary for the use of AI to be explicitly revealed to the consumer (European Commission 2019). Therefore, the consumer is an important stakeholder, and their perspective should be understood and incorporated into future AI solutions in health insurance.
Recent research at Loughborough University (Zarifis et al. 2021) identified two scenarios: one with limited AI that is not in the interface and whose presence is not explicitly revealed to the consumer, and a second where there is an AI interface and AI evaluation, and this is explicitly revealed to the consumer. The findings show that trust is lower when AI is used in the interactions and is visible to the consumer. Privacy concerns were also higher when the AI was visible, but the difference was smaller. The implications for practice relate to how the reduced trust and increased privacy concern with visible AI can be mitigated.
Mitigate the lower trust with explicit AI
The causes of the lower trust are reduced transparency and explainability. A statement at the start of the consumer journey about the role AI will play, and how it works, will increase transparency and reinforce trust. Secondly, the importance of trust increases as the perceived risk increases; therefore, the risks should be reduced. Thirdly, it should be illustrated that the increased use of AI does not reduce the inherent humanness. For example, it can be shown how humans train AI and how AI adopts human values.
Mitigate the higher privacy concerns with explicit AI
The consumer is concerned about how AI will utilize their financial, health and other personal information. Health insurance providers offer privacy assurances and privacy seals, but these do not explicitly refer to the role of AI. Assurances can be provided about how AI will use, share and securely store the information. These assurances can include some explanation of the role of AI and cover confidentiality, secrecy and anonymity. For example, while the consumer’s information may be used to train machine learning models, it can be made clear that it will be anonymized first. The consumer’s perceived privacy risk can also be mitigated by making clear the regulation that protects them.
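As a minimal sketch of what such anonymization before model training could look like (not any specific insurer’s pipeline; the column names and salt are illustrative assumptions), direct identifiers are dropped and the record key is replaced with a salted hash:

```python
# A minimal anonymization sketch: drop direct identifiers and replace the
# record key with a salted hash before the data is used to train a model.
# Column names and the salt are illustrative assumptions.
import hashlib
import pandas as pd

SALT = "example-secret-salt"  # in practice, a secret held by the insurer

claims = pd.DataFrame({
    "name": ["A. Smith", "B. Jones"],
    "policy_id": ["P-1001", "P-1002"],
    "age": [44, 37],
    "claim_amount": [1200.0, 350.0],
})

anonymized = claims.drop(columns=["name"]).assign(
    policy_id=lambda df: df["policy_id"].map(
        lambda v: hashlib.sha256((SALT + v).encode()).hexdigest()[:12]
    )
)
print(anonymized)
```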
References
European Commission (2019). ‘Ethics Guidelines for Trustworthy AI.’ Available from: https://ec.europa.eu/digital
Zarifis A., Kawalek P. & Azadegan A. (2021). ‘Evaluating if Trust and Personal Information Privacy Concerns are Barriers to Using Health Insurance that Explicitly Utilizes AI’, Journal of Internet Commerce, vol.20, pp.66-83. Available from (open access): https://doi.org/10.1080/15332861.2020.1832817
Here we see ‘The dance of life’ by Edvard Munch. Someone arranged when this event would happen and what the music would be, but the two people nevertheless had to choose to dance together. If the context were different in some way, would they still choose to dance together?
Research exploring the language of the sharing economy (Zarifis et al. 2019), looked at how we build trust and reduce privacy concern on Airbnb in German and English. The findings indicate that the landlord does not usually reduce privacy concern but leaves this to the platform. The findings also illustrate that language has a role in the interaction, but it is limited, and the platform norms and habits are more influential. Language plays a role primarily in three ways:
Firstly, in the way the landlord expresses the benefits of the vacancy;
Secondly, in the terms, conditions and fines; and
Lastly, in the landlord’s self-presentation in their personal profile.
‘The dance of life’ Edvard Munch
The sharing economy (SE) has expanded around the world. The SE is growing in popularity despite several challenges, such as insufficient regulation, legal restrictions and personal information privacy breaches. The push of SE champions such as Airbnb and Uber is met by an equally strong pull from consumers across the world. The differences between consumers from different geographic locations and cultures, who speak different languages and have different habits, do not appear to be a major challenge to the diffusion of SE models.
SE champions like Airbnb provide the online platform for the sharing. These platforms fulfil some functions, like processing the payment, but leave others, like describing the room, to landlords that want to share.
The person who wants to rent the room must have sufficiently high trust and sufficiently low privacy concern to engage. The individual must provide personal information to book the flat, but they are also vulnerable during their stay in several ways, including video surveillance. The individual’s privacy concerns are elevated and compounded in the SE because physical privacy risks are added to information risks. Therefore, it is necessary to explore trust and privacy together.
As the platform stops short of offering all the information and providing all the functionality, the landlord must build trust and reduce privacy concern. Despite the role of the platform in bringing the two sides together, the renter will stay in the room of a stranger, not of an organization with a recognized brand and reputation. There are higher risks, and a higher likelihood of distrust, than with a traditional hotel.
In addition to bringing the two sides together, a platform such as Airbnb takes several steps to increase the limited trust, for example by making reviews from previous renters available. Nevertheless, the landlord must also reduce the feeling of risk, increase trust and reduce privacy concerns in the way they communicate the information about themselves and what they are offering. The landlord is an individual the renter has not met before, not an established organization with a recognized brand, so they only have a few words and pictures to achieve this.
Users in different countries have similar, if not the same, experiences on the Internet using popular global platforms. It is easy to overlook that different languages are still used, and that they influence the interaction differently. The language used shapes the way a message is coded and decoded, based on standardized language norms and culture.
Nevertheless, the role of language and linguistics in information systems, and especially in the SE, has not been sufficiently covered. Therefore, the aim of that research was to explore what the trust-building methods are, and how privacy concerns are reduced, in the SE in German and English, using the case of Airbnb. Understanding how collaborative consumers build trust also answers the question of whether there are differences in building trust and reducing privacy concern between these two languages. The room descriptions and profiles of landlords offering their properties in Germany in German, and in England in English, were contrasted.
The division of responsibility for building trust and reducing privacy concern between the platform and the landlord was clarified. The landlord focuses on building trust through their property description and profile, while the platform supports trust and decreases privacy concerns.
Furthermore, the linguistic approaches to building trust and reducing privacy concern were identified (a toy measurement sketch follows the list below). For building trust these are:
The level of formality
Distance and proximity
Emotiveness and humour
Being assertive, including passive-aggressive, but avoiding anger
Conformity to the platform’s language style and terminology, and lastly
Setting boundaries
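As a toy illustration of how such linguistic markers could be counted across the two languages (the keyword lists are hypothetical assumptions, not the study’s actual coding scheme):

```python
# A toy sketch: count formality markers in English and German listing
# descriptions. The marker lists are hypothetical, for illustration only.
FORMAL_MARKERS = {
    "en": ["please", "kindly", "we ask that"],
    "de": ["bitte", "sie", "herzlich willkommen"],
}

def formality_score(text: str, lang: str) -> int:
    """Count occurrences of the language's formal markers in a description."""
    lowered = text.lower()
    return sum(lowered.count(marker) for marker in FORMAL_MARKERS[lang])

print(formality_score("Please respect the quiet hours, and kindly ...", "en"))
print(formality_score("Bitte beachten Sie die Ruhezeiten.", "de"))
```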
The practical implications are the following: firstly, there appears to be limited benefit in adapting the platform separately for Germany and England, because of the limited role of language, compared to platform norms, in communication. It appears that the efforts of the platform to reduce risk in several ways, including through the aggregated information it collects, are effective.
In terms of building trust, there are practical implications for the landlord and the platform. The participants in the SE should be clear about where the other participant is building trust effectively and where they themselves need to build trust. For example, the landlord should build trust and reduce privacy concern for physical privacy, while privacy of the financial information related to the payment can be covered by the platform. It is important for platforms, landlords and the related institutions to support trust transference; for example, the landlord can align their message with that of Airbnb.
To conclude, using the metaphor of the painting ‘The dance of life’, we can say that the platform has choreographed this dance between the participants sufficiently well that different languages have a limited influence on the outcome. Whether they were speaking English or German, the man in the blue suit and the lady in the red dress would say similar things before agreeing to dance together.
References
Zarifis A., Ingham R. & Kroenung J. (2019) ‘Exploring the language of the sharing economy: Building trust and reducing privacy concern on Airbnb in German and English’, Cogent Business & Management, vol.6, iss.1, pp.1-15. Available from (open access): https://www.tandfonline.com/doi/full/10.1080/23311975.2019.1666641