This article was originally published on The Conversation, an independent source of news and views from the academic and research community.

As Facebook users around the world are coming to understand, some of their favorite technologies can be used against them. It's not just the scandal over psychological profiling firm Cambridge Analytica getting access to data from tens of millions of Facebook profiles. People's filter bubbles are filled with carefully tailored information – and misinformation – altering their behavior and thinking, and even their votes.

People, both individually and as a society at large, are wrestling to understand how their newsfeeds turned against them. They are coming to realize exactly how carefully controlled Facebook feeds are, with highly tailored ads. That set of problems, though, pales in comparison to those posed by the next technological revolution, which is already underway: virtual reality.

On one hand, virtual worlds hold almost limitless potential. VR games can treat drug addiction and may help address the opioid epidemic. Prison inmates can use VR simulations to prepare for life after their release. People are racing to enter these immersive experiences, which have the potential to be more psychologically powerful than any other technology to date: The first modern equipment offering the opportunity sold out in 14 minutes.

In these new worlds, every leaf, every stone on the virtual ground and every conversation is carefully constructed. In our research into the emerging definition of ethics in virtual reality, my colleagues and I interviewed the developers and early users of virtual reality to understand what risks are coming and how we can reduce them.

Intensity is going to level up

"VR is a very personal, intimate situation. When you wear a VR headset … you really believe it, it's really immersive," says one of the developers with whom we spoke.
If someone harms you in VR, you're going to feel it, and if someone manipulates you into believing something, it's going to stick.

This immersion is what users want: "VR is really about being immersed … as opposed to a TV where I can constantly be distracted," one user told us. That immersiveness is what gives VR unprecedented power: "really, what VR is trying to do here is duplicate reality where it tricks your mind."

These tricks can be enjoyable – allowing people to fly helicopters or journey back to ancient Egypt. They can be helpful, offering pain management or treatment for psychological conditions.

But they can also be malicious. Even a common prank that friends play on each other online – logging in and posting as each other – can take on a whole new dimension. One VR user explains, "Someone can put on a VR head unit and go into a virtual world assuming your identity. I think that identity theft, if VR becomes mainstream, will become rampant."

Data will be even more personal

VR will be able to collect data on a whole new level. Seemingly innocuous infrared sensors designed to help with motion sickness and alignment can capture near-perfect representations of users' real-world surroundings.

Further, the data and interactions that give VR the power to treat and diagnose physical and mental health conditions can be used to hyper-personalize experiences and information to the precise vulnerabilities of individual users.

Combined, the intensity of virtual reality experiences and the even more personal data they collect present the specter of fake news far more powerful than text articles and memes: immersive, personalized experiences that may thoroughly convince people of entirely alternate realities, to which they are perfectly susceptible.
Such immersive VR advertisements are on the horizon as early as this year.

Building a virtual future

A person who uses virtual reality is, often willingly, being controlled to a far greater extent than was ever possible before. Everything a person sees and hears – and perhaps even feels or smells – is totally created by another person. That surrender brings both promise and peril. Perhaps in carefully constructed virtual worlds, people can solve problems that have eluded us in reality. But these virtual worlds will be built inside a real world that can't be ignored.

While technologists and users are cleaning up the malicious, manipulative past, they'll need to go far beyond making social media healthier. As carefully as developers are building the virtual worlds themselves, society as a whole must intentionally and painstakingly construct the culture in which these technologies exist.

In many cases, developers are the first allies in this fight. Our research found that VR developers were more concerned about their users' well-being than the users themselves were. Yet one developer admits that "the fact of the matter is … I can count on my fingers the number of experienced developers I've actually met." Even experts have only begun to explore ethics, security and privacy in virtual reality scenarios.

The developers we spoke with expressed a desire for guidelines on where to draw boundaries and how to prevent dangerous misuses of their platforms. As an initial step, we invited VR developers and users from nine online communities to work with us to create a set of guidelines for VR ethics. They made suggestions about inclusivity, protecting users from manipulative attackers and limits on data collection.

As the debacle with Facebook and Cambridge Analytica shows, though, people don't always follow guidelines, or even platforms' rules and policies – and the effects could be all the worse in this new VR world.
But our initial success in reaching agreement on VR guidelines serves as a reminder that people can go beyond reckoning with the technologies others create: We can work together to build the beneficial technologies we want.

Elissa Redmiles is a Ph.D. student in computer science, with a concentration in survey methodology, at the University of Maryland. Her work focuses on understanding how users make security decisions and on developing security education interventions for at-risk users.