Seven Metaverse Privacy Principles

Jessica Outlaw
18 min read · Mar 10, 2022

Written by Jessica Outlaw and Sara Carbonneau

In the past twenty years, privacy has been challenged by the widespread adoption of technologies that can track people throughout their everyday lives. Cell phones, computers, smart home assistants, and more have become household items people rely on, tracking everything from users’ locations to their interests to their digital identifiers. Meanwhile, a new set of devices is gaining in prevalence: virtual and augmented reality hardware that will expand the types of data collected about individual users and their environments.

Biometric data (gaze, saccades, blinking frequency and duration, pupil reactivity, eye color, shape, skin color, and facial expressions) could be used to infer gender, age, identity, physical and mental health, and personality traits, leaving people vulnerable to new types of privacy harms (Kröger, Lutz & Müller, 2019). The makers of the metaverse(s), embodied 3D social environments where people can perform many different roles (employee, family member, friend, church attendee, raver, shopper, patient visiting their doctor, etc.), have the potential to collect and consolidate information about a person in all of those contexts. This consolidation of sensitive information across a variety of contexts raises new challenges to people’s ability to control information about themselves.

Meanwhile, privacy as a concept is difficult to define; it means different things to different people. And in the age of big data, privacy is often mediated through a company’s terms of service and privacy policies, which permit the collection of a person’s personal information, its use in algorithmic training, and sometimes its sale. As privacy researcher Jacob Leon Kröger notes:

“The common, traditional understanding of data privacy is: personal data + personal choice about the data. However, this view is way too narrow to address the problems we have with modern forms of data collection and processing… In today’s socio-technical environment, our privacy choices are typically neither “free” nor “informed”. Plus, these “personal choices” affect other people and society at large.” (2022)

There is a need to amplify the voices of people who involuntarily contribute their data to this ecosystem. As an example of this happening today, tagging pictures of someone in a photo app allows companies to build shadow profiles of people who may not even use the app. And in the future, passive data capture from cameras mounted on augmented reality glasses has the potential to record the identifying characteristics and routines of non-users.

Both of the examples above rely on a privacy self-management model, in which the person who controls the device modulates data collection and use. However, the privacy self-management model, on which current practices (consent, transparency, consumer empowerment, etc.) are based, is clearly failing consumers (Solove, 2008). People are simultaneously offered too many ‘choices’ (terms of service, cookies, etc.) while not being offered meaningful options to improve their online privacy. Or as the Electronic Frontier Foundation puts it:

“Research has shown that people’s privacy ‘choices’ to let businesses process their data are typically involuntary, prone to cognitive biases, and/or circumventable due to human limitations, dark patterns, legal loopholes, and the complexities of modern data processing.” (Rodriguez, Opsahl & Mir, 2021)

As metaverse researchers, we specialize in studying how people’s direct experiences with the current tech ecosystem influence their attitudes about the next generations of personal devices. To that end, we surveyed 1,000 people in the U.S. in 2021 to learn about consumers’ privacy concerns and the steps they take to manage their privacy, and to test some metaverse-specific data-use scenarios (Outlaw, Carbonneau, et al., 2021). In short, people are distressed and disempowered, and 28% of respondents (n=283) report there is a product or service they will not use due to privacy concerns. We predict that the percentage of people who reject products or services over privacy will increase, especially amongst those who use XR devices, unless there is a major shift in how companies use their data.

In response to the expansion of data streams collected by virtual and augmented reality devices, and to how that data could be used by metaverse applications across multiple contexts, we propose Seven Principles of Privacy in the Metaverse that offer an alternative vision for how companies can relate to those they wish to serve. These principles are based on the actual preferences of consumers and their expressed privacy concerns. If companies adopt this vision, we anticipate it will reduce the distress people feel about current data practices and build trust in future generations of technological products and services.

Privacy Principles

  1. IDENTITY / privacy protects identity
  2. MARGINALIZED GROUPS / privacy centers on marginalized groups
  3. MENTAL PRIVACY / privacy protects mental experience
  4. CONTEXT / privacy settings should adapt to context
  5. SPACES / privacy settings should respect the norms of the space
  6. TRUST / privacy requires trust
  7. POWER DYNAMICS / privacy requires new models of consent

IDENTITY / privacy protects identity

“[I] want companies to have less access to my data, don’t want data about me/my habits sold, don’t want anyone to have access to my online profiles without my express permission.”*

Biometric and anatomical data that can be used to identify individuals should be given the highest priority of privacy and security.

MARGINALIZED GROUPS / privacy centers on marginalized groups

“My location was given to someone that I had a restraining order on”*

Data privacy should be designed with equity in mind and should especially center on the most vulnerable populations.

MENTAL PRIVACY / privacy protects mental experience

“Google captures too much information and makes too many predictive presumptions.”*

Companies should respect the privacy of users by not tracking mental states and refraining from making predictions or inferences about them that could influence their future behaviors.

CONTEXT / privacy settings should adapt to context

“I set all my settings to private and turned off my location trackers while out with my child and fiance for security reasons. I turn them on when I am alone.”*

People’s privacy preferences are contextual, and the metaverse will need to account for people’s various relationship dynamics, which drive their privacy preferences.

SPACES / privacy settings should respect the norms of the space

“The idea of [smart home assistants] having 24/7 access makes me feel as though my privacy is not secure.”*

Every public and private space has its own meaning and purpose. Emerging technology should honor the spaces in which data collection takes place and not violate reasonable expectations about the norms of those spaces.

TRUST / privacy requires trust

“I would not use some apps recommended online due to privacy issues and a lot of them breaking your trust about not sharing your private information with online marketers”*

Companies have a chance to rebuild trust with people in the metaverse if they align their standards of privacy with consumers’ privacy expectations.

POWER DYNAMICS / privacy requires new models of consent

“New updates contain new ways to sneak into my personal space”*

Privacy settings and defaults should be redesigned to reverse the power dynamics between companies and consumers to be truly consensual and non-coercive.

* Indicates a quote from a respondent to The Extended Mind’s 2021 survey on attitudes towards privacy. You can download the full report for additional quotes and findings here: https://www.extendedmind.io/survey

Conclusion

These privacy principles are informed by the key concerns of the people who will populate the metaverse. We have tried to articulate a vision for privacy that metaverse creators can use to rebuild consumer trust in their products and services.

While we have focused heavily on data collection, we acknowledge these principles are incomplete and must be in dialogue with other facets of responsible innovation. Our lens was based on research into consumer attitudes and opinions about privacy, and it does not fully account for security, content moderation, safety, accessibility, and other facets of the metaverse. What do you think needs to be considered or included in the next iteration of these metaverse privacy principles?

You can read the expanded version of each privacy principle below. For each principle, we have compiled quotes from consumers in our survey and from experts in academia and industry, and elaborated on why the principle is important.

IDENTITY

#1 Privacy protects identity

“Privacy is too important to be left entirely to chance and taste” (Allen, 2011)

“[I] want companies to have less access to my data, don’t want data about me/my habits sold, don’t want anyone to have access to my online profiles without my express permission.” — Survey Respondent (Outlaw, Carbonneau, et al., 2021)

Personal data should be treated with care and respect, but consumers feel current data collection and sharing practices often fall below this standard. As a result, consumers’ privacy expectations aren’t being met. The use of head-mounted displays (HMDs) in the metaverse will exacerbate the problem by expanding the types of data streams collected on individuals to include highly personal and potentially sensitive information. These sensitive data streams should be given the highest standards of privacy.

  • HMDs will require certain forms of biometric data in order to function (eye tracking, hand tracking, pupil dilation, etc.). Biometric data will be able to identify individuals according to their anatomical signatures. The collection, use, and sharing of this information should be highly limited and preferably should not leave the user’s device.
  • HMDs will have the capacity to capture other data points, such as people’s behaviors, environments, and locations, that can also be used to identify individuals. However, devices should only collect these data points as far as is necessary for their functionality, and the data should never be stored longer than necessary, shared, or sold.
  • These issues are not exclusive to data collected from HMDs. Users’ passive digital footprint data from which identity can be inferred (browser version, cookie information, login time, browsing history, IP address, etc.) should, wherever possible, not be collected, shared, or stored.
  • Collection, use, storage, sharing or selling of identifiable data streams (location, behavior, biometrics, etc.) is a form of surveillance and should be avoided at all costs.
  • When personal and identifiable data must be collected and stored, it should be stored with the highest standards of security. In these cases, data protection should be a priority of teams planning to store such data from the outset and should never be an afterthought.
  • Data anonymization is not sufficient to protect users’ identities. Research has shown that it is easy to re-identify a person from an anonymized data set (Bushwick, 2019); a minimal sketch of this kind of linkage attack follows this list.
  • Different policy solutions are being explored. The German Data Ethics Commission has proposed that the de-anonymization of data be made illegal and subject to heavy fines (Data Ethics Commission, 2019).
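
To make the re-identification risk above concrete, here is a minimal, hypothetical sketch of a linkage attack: a supposedly anonymized session log is joined with a separate, named data set on shared quasi-identifiers (ZIP code, birth year, gender). All records, field names, and the `reidentify` helper are invented for illustration; this shows the general technique behind the kind of research Bushwick (2019) describes, not code from any actual study.

```python
# Illustrative linkage attack on "anonymized" data. Every record below is fabricated.

# An "anonymized" session log: names removed, but quasi-identifiers remain.
anonymized_sessions = [
    {"zip": "97201", "birth_year": 1988, "gender": "F", "avg_gaze_ms": 412},
    {"zip": "97405", "birth_year": 1990, "gender": "M", "avg_gaze_ms": 233},
]

# A separate, public data set (e.g., a voter roll or social profile) with names.
public_records = [
    {"name": "A. Example", "zip": "97201", "birth_year": 1988, "gender": "F"},
    {"name": "B. Example", "zip": "97405", "birth_year": 1990, "gender": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "gender")


def reidentify(sessions, public):
    """Match anonymized rows to named rows that share the same quasi-identifiers."""
    matches = []
    for session in sessions:
        key = tuple(session[q] for q in QUASI_IDENTIFIERS)
        for person in public:
            if tuple(person[q] for q in QUASI_IDENTIFIERS) == key:
                matches.append((person["name"], session))
    return matches


for name, session in reidentify(anonymized_sessions, public_records):
    print(f"{name} re-identified; linked biometric signal: {session['avg_gaze_ms']} ms")
```

Even a few coarse attributes can be enough to single people out, which is why stripping names from a data set does not, by itself, protect identity.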

MARGINALIZED GROUPS

#2 Privacy centers on marginalized groups

“AI systems can perpetuate racism, sexism, ableism, and other harmful forms of discrimination, therefore, presenting significant threats to our society — from healthcare, to economic opportunity, to our criminal justice system” — (Algorithmic Justice League, n.d.)

“My location was given to someone that I had a restraining order on” — Survey Respondent (Outlaw, Carbonneau, et al., 2021)

The potential consequences from data misuses are not equally distributed amongst users. The metaverse should acknowledge that people’s privacy concerns vary based on a wide range of factors such as demographics, organizational associations, employment status and relationship status. Privacy in the metaverse should be designed around those who are most vulnerable.

  • Misused or insecure data has the potential to exacerbate existing forms of discrimination and create real life consequences for users.
  • Different groups are going to have different privacy concerns. For instance, facial recognition technology has been shown to falsely identify minorities more often than white people, and smart home devices have become tools of domination and surveillance against women in domestic violence situations (Najibi, 2020; Bowles, 2018). And U.S. Army members’ use of fitness tracking apps revealed the locations of army bases overseas (Hern, 2018).
  • Metaverse developers should do research to understand the potential harms their technologies pose to various groups and should design for the most vulnerable groups to minimize potential harms.
  • Users’ personal data should never be used to harm or target them. It is a violation of their privacy and a breach of their trust.
  • In 2021, 18% (n=182) of people reported experiencing a privacy violation and 54% (n=545) were unsure if they had experienced a privacy violation (Outlaw, Carbonneau, et al. 2021).
  • The extension of Fourth Amendment protections to online data would help protect the privacy and security of users. Currently, when data is stored on a company’s servers, it can be more easily searched or seized by law enforcement and is no longer afforded Fourth Amendment protection. How data is managed has consequences for the groups who are most vulnerable to police surveillance, search, and seizure.

MENTAL PRIVACY

#3 Privacy protects mental experience

“Models that aggregate individual data points in order to apply a generalization to a future data subject deny the individuality and autonomy of that future data subject, and the notion that truths, and perhaps all truths, about an individual can be rationally computed destroys the core idea of privacy” — (Mhlambi, 2020)

“Biometric psychography uses behavioral and anatomical information (e.g., pupil dilation) to measure a person’s reaction to stimuli over time. This can reveal both a person’s physical, mental, and emotional state, and the stimuli that caused him or her to enter that state.” — (Heller, 2020)

“Google captures too much information and makes too many predictive presumptions.” — Survey Respondent (Outlaw, Carbonneau, et al., 2021)

Multiple data streams about an individual can be combined and analyzed into computed profiles that make predictions about their mental and emotional states and potential future actions, or that even manipulate their opinions and decisions through choice architecture. This should be prevented for the sake of human dignity and autonomy.

  • Technological devices and the companies that create them should respect the mental privacy of users.
  • Data should not be fed into machine learning algorithms for training or prediction without express permission. If you wish to use people’s data for machine learning or algorithmic training, specifically invite people to opt in, with opted-out as the default. Also, make it possible for people to delete the data they provided if they change their mind about opting in (see the sketch after this list).
  • Data should never be used to profile people into categories based on demographics or behaviors. Within algorithmic models, these ‘tags’ could use an individual’s data against them.
  • Existing data about a user should never be used to manipulate their future decisions or behaviors. Choice architecture is powerful and should be deployed with extreme care.
  • Data should never be used to make predictions about an individual’s future. Not only could these inferences be algorithmically used to steer an individual in the direction of the prediction, but that data in the wrong hands could lead to negative consequences like higher health insurance premiums or employment rejection.
  • Any data inferred from biometric data is only ever an inference (such as assuming people’s interests based on pupil dilation). Biometric data has the potential to be used to make all types of assumptions about people from their sexual orientation to their political leanings as well as predicting their future behavior, but it is an invasion of peoples’ privacy to make assumptions about them based on involuntary responses.
  • Humans have a right to autonomy and growth. When technology supports (or pushes) existing habits, directs people through choice architecture, or predicts the future based on data from the past, it is actively working against the human capacity for transformation. Data should never have the power to erase the human element.
  • Chile recently became the first nation to protect neuro-rights. To learn more about neuro-rights and their proposed protections, read Towards a Governance Framework for Brain Data.
  • For more background on the application of brain technologies to decode and interpret mental states and processes, check out Mecacci & Hasselager (2017).
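
As referenced in the list above, here is a minimal sketch of what consent-gated use of data for training could look like in code. The `ConsentedDataStore`, its field names, and the example records are all hypothetical; the point is only the shape of the policy: participation defaults to opted out, only explicitly opted-in records reach a training pipeline, and withdrawing consent deletes the contributed data.

```python
from dataclasses import dataclass


@dataclass
class UserRecord:
    user_id: str
    data: dict
    training_opt_in: bool = False  # default: data is NOT used for training


class ConsentedDataStore:
    def __init__(self) -> None:
        self._records: dict[str, UserRecord] = {}

    def add(self, record: UserRecord) -> None:
        self._records[record.user_id] = record

    def opt_in(self, user_id: str) -> None:
        # Opt-in must be an explicit, affirmative action by the user.
        self._records[user_id].training_opt_in = True

    def withdraw_and_delete(self, user_id: str) -> None:
        # Changing one's mind removes the contributed data entirely.
        self._records.pop(user_id, None)

    def training_set(self) -> list[dict]:
        # Only explicitly opted-in records ever reach a training pipeline.
        return [r.data for r in self._records.values() if r.training_opt_in]


store = ConsentedDataStore()
store.add(UserRecord("user-1", {"hand_pose": "..."}))  # never trained on by default
store.add(UserRecord("user-2", {"hand_pose": "..."}))
store.opt_in("user-2")
assert len(store.training_set()) == 1  # only user-2's data is eligible
store.withdraw_and_delete("user-2")
assert store.training_set() == []      # withdrawal also deletes the data
```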

CONTEXT

#4 Privacy settings should adapt to context

“Our humanity is relational, defined by how we are tied to one another. Technology has an important role in those relationships.” — (Decolonial AI Manyfesto, n.d.)

“Choosing to get a camera for our front door raises questions about what it means to be a good neighbor in the age of mass surveillance, gentrification, and police violence.” (O’Gieblyn, 2022)

“I set all my settings to private and turned off my location trackers while out with my child and fiance for security reasons. I turn them on when I am alone.” — Survey Respondent (Outlaw, Carbonneau, et al., 2021)

Humans are social beings, but companies often treat users as isolated individuals. The metaverse should acknowledge the social and contextual influences on people and their privacy.

  • People’s privacy preferences tend to be contextual, differing depending on where they are, who they are with, and the type of relationship dynamics they have with those people and/or companies.
  • People may prioritize privacy differently, even in the same context, which poses questions about whose settings would take precedence in an interaction between two people wearing HMDs with divergent privacy settings (one person with high privacy, one with low). HMD settings may need to be as relational as their users.
  • HMDs will pose new privacy concerns in social settings. Recording will be much easier than with phones, and questions remain about how others will know when they are being recorded by friends, acquaintances, or strangers in the vicinity. Developers should acknowledge these concerns and establish mechanisms that allow users to agree on privacy settings, whether between friends, within a group, at an event, or prior to any social interaction.
  • Our 2021 survey validated that people pay close attention to context in forming their privacy preferences. Only 31% (n=313) of people are uncomfortable with their data being used for product customization, while 62% (n=616) are uncomfortable with that same data being sold to advertisers, a difference that may be explained by people’s contextual expectations of how their data will be used (Outlaw, Carbonneau, et al. 2021).
  • Applin and Flick (2021) critique an approach of ‘design individualism,’ arguing instead that tech should create for “a collective that is enmeshed, intertwined and exists based on multiple, multiplex, social, technological, and socio technological relationships.”
  • For additional reading, Helen Nissenbaum has written extensively on Privacy as Contextual Integrity (2004).

SPACES

#5 Privacy settings should respect the norms of the space

“The mere existence of facial recognition systems, which are often invisible, harms civil liberties, because people will act differently if they suspect they’re being surveilled.” — (Hartzog & Selinger, 2018)

“The idea of [smart home assistants] having 24/7 access makes me feel as though my privacy is not secure.” — Survey Respondent (Outlaw, Carbonneau, et al., 2021)

HMDs have the potential to alter public spaces and individuals’ relationships to them. However, the obscurity afforded to people in public spaces is valuable. Public spaces should be preserved through careful consideration of how a space is transformed, which spaces are transformed, and who controls the transformation. Developers should respect that physical locations have their own meaning and purpose.

  • New technology should not turn public spaces in physical reality into virtual surveillance spaces. While HMDs have the potential to record events in social spaces through the actors occupying the space, they should never become a means of surveillance.
  • HMDs should also not surveil users’ private spaces. It would be an invasion of privacy to record what’s happening in people’s homes.
  • HMDs and other technological devices should not violate the reasonable expectations of norms in public and private spaces.
  • Current location tracking practices are uncomfortable for the majority of users, which is one indicator of how environmental tracking would be perceived (Outlaw, Carbonneau, et al., 2021).
  • Sixty-two percent (n=626) of people reported they were uncomfortable being the bystander in a hypothetical video that was captured by a friend or neighbor and 66% (n=667) of people were uncomfortable being the bystander in a hypothetical video captured by a stranger (Outlaw, Carbonneau, et al., 2021).
  • AR glasses will likely give users the ability to customize their world view through color palettes, artist-designed templates, or even personal graffiti. However, these customizations should not become publicly visible or invade the privacy of other users or non-users in the physical world.
  • The metaverse should not invade people’s public or private spaces. Users should opt-into the maps they want to see and participate in.
  • Bystanders and non-users should be protected from potential capture by HMDs.

TRUST

#6 Privacy requires trust

“90% believe the ways their data is treated reflects how they are treated as customers” — (Redman & Waitman, 2020)

“I would not use some apps recommended online due to privacy issues and a lot of them breaking your trust about not sharing your private information with online marketers” — Survey Respondent (Outlaw, Carbonneau, et al., 2021)

Companies claim that their data collection practices respect people’s privacy, but those claims often don’t align with how people feel about those practices. This leads to dissonance between consumer expectations of how their data will be handled and how it is actually handled. Companies have a chance to rebuild trust with consumers in the metaverse if they align their standards of privacy with consumers’ privacy expectations.

  • Companies tend to frame their definition of privacy in ways that benefit themselves, rather than centering on people’s concerns about how their data is being used. For example, a company may highlight privacy features that control what other users can see, while not allowing a person to opt out of data aggregation by the company. The 2021 survey from The Extended Mind documented the following prevalence of consumer distrust:
    - 87% (n=879) of people use a privacy or security service (VPN, virus protection software, etc.)
    - 50% (n=505) pay for a privacy or security service (VPN, virus protection software, etc.)
    - 37% (n=374) of people have changed their privacy settings in the past 3 months
    - 28% (n=283) won’t use a product or service due to privacy concerns
    - 18% (n=182) of people have experienced a privacy violation
  • Transparency does not solve consumer privacy concerns. Companies attempt to offer data ‘transparency’ to reduce consumer concerns about their data collection practices, but transparency is not the same as creating defaults that actually center on people’s desired privacy preferences.
  • The dissonance between companies’ proclaimed priority of privacy and people’s experiences of those companies’ products and services leads to consumer distrust. The Extended Mind’s 2021 privacy survey respondents expressed this distrust of various technology companies in their own words:
    - “They are all about their own security but not the consumers.”
    - “I do not trust that company at all.”
    - “They are not consistent with their data privacy or posting policies.”
    - “ I don’t trust their ability to keep data secure.”
  • To dive deeper into consumer attitudes towards specific companies and products, you can download the codebook, an appendix to the full privacy report from The Extended Mind, at https://www.extendedmind.io/survey
  • Consumer education is not recommended as a strategy to build people’s trust in a company because it doesn’t fundamentally shift the power dynamics (see #7).

POWER DYNAMICS

#7 Privacy requires new models of consent

“So, each and every Internet user, were they to read every privacy policy on every website they visit would spend 25 days out of the year just reading privacy policies! If it was your job to read privacy policies for 8 hours per day, it would take you 76 work days to complete the task.” (Madrigal, 2012).
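
The two figures in that estimate are consistent with each other: reading for 76 eight-hour workdays amounts to roughly 25 round-the-clock days.

$$76 \text{ workdays} \times 8 \text{ hours/workday} = 608 \text{ hours}, \qquad 608 \text{ hours} \div 24 \text{ hours/day} \approx 25 \text{ days}$$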

“Defaults are one of the strongest nudges out there that people, especially if they don’t have time to think about it, they’re just not going to make the effort to switch the default”. — Vanessa Bohns (Carbonneau, 2021).

“New updates contain new ways to sneak into my personal space” — Survey Respondent (Outlaw, Carbonneau, et al., 2021)

Current models for communication about, and management of, privacy between companies and consumers favor the companies’ goals. These practices lay bare the power dynamics at the heart of online privacy. As the metaverse emerges, we need to create new models of communication and management that allow the consumer to still access products and services even if they don’t agree to one-size-fits-all terms-of-service and privacy policies.

  • Existing data collection practices leave people vulnerable to surveillance from companies, law enforcement, and other entities. This can lead to subsequent data misuses or data uses they did not knowingly consent to.
  • Users should be able to access products or services without being forced into opting into a company’s data collection practices. Users need easy ways to set boundaries around their privacy, such as via third-party intermediaries (King, 2022).
  • ‘Notice and consent’ models can disempower users by forcing them to agree to terms and conditions without any alternative. This often means users agree to what they consider privacy violations in exchange for a service.
  • Companies often employ ‘dark patterns’ that force users into certain choices, such as default settings that favor the company’s data collection goals or updates that change users’ settings. These patterns are indicative of the power dynamics at play between users and companies, with consumers at a disadvantage because they are evaluating the benefit of a product or service while at a tremendous knowledge deficit compared to the people who designed the technology or wrote the ToS.
  • Social psychology has shown that the human default is to be agreeable. What this means, according to Vanessa Bohns, is that after agreeing to terms of service in order to use a product or service, “they may feel violated because they agreed to something they didn’t really want to do” (Carbonneau, 2021).
  • The burden is often put on consumers to opt out of data collection practices they don’t like (when the option is even available). However, data collection should not be the standard. Consumers should be automatically opted out, with the option to opt in.
  • Users, not companies, should hold the power of control over their sensitive data. The power dynamics between consumers and companies need, at minimum, to be reversed in favor of the consumer. Due to the knowledge asymmetry between the two groups, companies have a responsibility to create accessible new models for privacy.
  • Defaults and updates should maintain or increase people’s privacy (and trust), not weaken it (see the sketch after this list).
  • To read a recent classification of data misuses, which show how personal data can be weaponized against people, check out Kröger, Miceli and Müller (2021).
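
As referenced above, one way to honor that principle is to treat an update’s new defaults as a floor rather than an override. The sketch below is a hypothetical illustration, not an existing API: the setting names and the ordering of privacy levels are invented, and a real product would need a richer model. The idea is simply that merging an update into a user’s existing settings never silently reduces how protective those settings are.

```python
# Hypothetical update policy: new defaults can only maintain or increase privacy.
PRIVACY_ORDER = {"share_with_partners": 0, "on_device_only": 1, "no_collection": 2}


def apply_update(current: dict, update_defaults: dict) -> dict:
    """Merge an update's defaults into existing settings, keeping whichever
    value is more privacy-protective for each setting."""
    merged = dict(current)
    for key, new_value in update_defaults.items():
        old_value = merged.get(key)
        if old_value is None or PRIVACY_ORDER[new_value] > PRIVACY_ORDER[old_value]:
            merged[key] = new_value  # adopt the stricter (or only available) option
        # otherwise, keep the user's existing, more protective choice
    return merged


user_settings = {"eye_tracking": "on_device_only"}
update = {"eye_tracking": "share_with_partners", "location": "no_collection"}

assert apply_update(user_settings, update) == {
    "eye_tracking": "on_device_only",  # the update never silently downgrades privacy
    "location": "no_collection",       # new settings start at the strictest level
}
```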

Acknowledgements: Brittan Heller, Liv Erickson, Avi Bar-Zeev, Jacob Leon Kröger, Blake Praharaj, Beth Duckles, Tyesha Snow, Tori Wheeler and all of the collaborators we have been lucky to have.

Resources

For a full list of resources cited, click here.

The source for all consumer quotes comes from: Outlaw, J., Carbonneau, S., et al. The Extended Mind. (2021). “Don’t Track My Life:” Virtual and Augmented Reality Consumer Data & Privacy Survey. https://www.extendedmind.io/survey


Jessica Outlaw

Culture, Behavior, and Virtual Reality @theextendedmind