Sustainability Matters

After Roe, Collecting Your Data Puts Companies at Risk

Digital surveillance and data monetization create privacy risks for companies as consumers seek information about abortions.


The high-profile overturning of Roe v. Wade by the U.S. Supreme Court has far-reaching ramifications for citizens and businesses alike, including shining a light on the collection and privacy of consumers’ personal information. Amid the furor of shifting regulations on a state-by-state basis, consumers have been reminded that the real price to be paid for using free services such as social media and search-engine tools is the disclosure and monetization of their personal information. This user data has created a rich pool of information for digital surveillance by law enforcement, and it’s an emerging privacy risk, especially as more data is collected for every additional product and service.

Digital surveillance is controversial, yet legal. This issue came to the fore with the disclosure of the U.S. government’s collection of telephony metadata under the Patriot Act, as well as with related controversies around Foreign Intelligence Surveillance Act warrants (and equivalents in other countries). In response, large companies have increasingly published transparency reports detailing government requests for company data, both received and fulfilled.

Surveillance primarily impacts internet software and services companies like Alphabet GOOGL and Meta Platforms META, and telecommunication services companies like AT&T T and Verizon Communications VZ. However, any company collecting and storing user data can be subject to government requests. In practice, this means companies ranging from DNA testing services like 23andMe ME to networking provider Cisco Systems CSCO and e-commerce platform Shopify SHOP can be (and have been) subject to government requests for user data. This illustrates how user data is routinely collected and stored by companies, often without the user fully comprehending the potential privacy implications.

This surveillance can have real-world impacts for both consumers and investors, given potential negative reputational and financial impacts on companies with actual or perceived involvement. In particular, the June 2022 Supreme Court decision to overturn Roe v. Wade, which for decades had determined that the U.S. Constitution conferred the right to have an abortion, has ignited calls for more stringent data privacy regulation and highlighted to consumers how personal information disclosed voluntarily, or collected through tracking devices, can end up in the hands of unintended parties. Privacy professionals have flagged a potential increase in the volume of subpoenas and warrants issued for user data such as search engine results, location data, and text messages to support criminal investigations. In this context, we anticipate companies will be forced to either alter their data collection practices or risk reputational damage for facilitating the incrimination of those seeking or facilitating abortion healthcare.

Data monetization practices such as online behavioral advertising, which is core to social-media and search-engine providers’ business models, and data brokering fuel the vast digital surveillance landscape. These practices involve collecting and aggregating data to build consumer profiles, which are either leveraged for targeted advertising or sold or licensed to other organizations or individuals. We have observed increasing public concern over this practice and strong regulatory action that threatens associated business models. In our view, disruptors like Apple AAPL are actively undermining the ability of their Big Tech peers to leverage the necessary data through opt-out prompts. Although these practices currently operate within the bounds of the law, we expect that greater consumer awareness of data monetization could lead to reputational damage—particularly outside the usual suspects like social media.

While it is common practice for law enforcement to request user data to support the investigation of criminal activity, it is yet to be seen if or how new antiabortion laws will be enforced, including whether there is sufficient resourcing to target individuals at scale. In the absence of a constitutional right to abortion after Roe, such user data is potentially subject to subpoenas for information used in criminal investigations around pregnancy loss and pregnancy termination. The expanded criminalization of abortion in the United States means that law enforcement could use surveillance of digital footprints created with or without the users’ explicit consent to incriminate U.S. consumers.

There will likely be a fragmented legal approach between states. However, we expect that the spotlight on potential privacy violations has created greater urgency for federal regulatory oversight. Democratic lawmakers in the U.S. have responded with a string of proposed policies aimed at enforcing tighter protections for consumers. According to Time, this legislation includes the My Body, My Data Act, which would task the Federal Trade Commission with enforcing a national privacy standard for reproductive health data collected by apps, mobile phones, and search engines, and the Health and Location Data Protection Act, which would ban data brokers from selling or transferring medical and sensitive personal information.

On such a deeply divisive issue within the U.S., companies will need to find the middle ground between protecting user privacy over reproductive health decisions and complying with prevailing laws. Companies such as Alphabet and Meta Platforms, which rely on personal data to monetize their services, now face greater risks. These include reputational damage for sharing user data and facilitating the incrimination of those seeking or assisting abortion healthcare, especially if the companies have publicized support for employees seeking such services. While companies have some scope to reject requests from law enforcement that they deem too broad or improper (as stipulated in Meta Platforms’ privacy policy, for instance), they may have no choice but to comply if that user data is deemed to be evidence of a crime.

To reduce exposure to this risk, companies can alter their practices to avoid collecting potentially incriminating user data in the first place through data minimization practices. An example of this is Alphabet’s recent decision to no longer retain location data for users visiting sensitive locations such as abortion clinics. The Electronic Frontier Foundation, a nonprofit digital rights group, also suggests companies allow users to remain anonymous through encryption, reduce data collection to the bare minimum required, and increase transparency about what types of data may be disclosed to law enforcement. Companies can also limit the transfer of user data by not selling data to data brokers.

But these practices, alongside a potential shift in consumer behavior, will at least marginally limit the volume of data that companies can leverage to monetize their offerings or sell or license to third parties. In the absence of tighter data privacy protections or altered data collection practices, consumers may elect to disclose less data voluntarily, opt out of tracking technologies, or abstain from using the services entirely, further reducing companies’ access to valuable data.

We encourage investors to consider the competitive positioning of a company (screened using the Morningstar Economic Moat Rating) when assessing the financial materiality of data-privacy-related risks. For instance, while more stringent privacy regulation or a shift in consumer behavior could undermine the business models of companies reliant on user data and user engagement such as Alphabet and Meta Platforms, we think the companies’ respective competitive positions limit the financial materiality of these risks. At present, both companies are trading at an attractive discount to our fair value estimates.

Wide-moat Alphabet (parent company of Google) leverages user data to increase the return on investment on a customer’s advertising spend. Google is the world’s most widely used search engine, benefiting from strong brand assets—perhaps best illustrated by “Google it” being synonymous with searching. The more people that use the search engine, the more relevant the search results, reinforcing the appeal and creating a network effect and mild barriers to exit. Given these attributes and the integral role search engines play in navigating the internet, we do not foresee privacy concerns driving a material deterioration in demand or usage of the platform. Further, even in the event of tighter restrictions on targeted advertising, or marginally less user data owing to shifts in consumer behavior or self-imposed data minimization, we expect the large audience that Google’s product suite draws will continue to attract advertisers.

Wide-moat Meta Platforms also uses user data to monetize its offering, and, as with Alphabet, we expect the company’s competitive positioning will lessen the financial impact of heightened regulatory and societal scrutiny. Meta Platforms benefits from a network-effect moat source that has supported continued user base growth and sustained engagement despite high-profile privacy issues. We believe this network effect should help the company maintain healthy engagement across its massive base of over 3.6 billion users. With such a large user base, Meta may have reached a saturation point, resulting in relatively slower or stagnant growth going forward. However, we expect that healthy engagement from this base will continue to attract demand from advertisers seeking a captive audience even if the ability to target users directly is diminished.

Emma Williams does not own (actual or beneficial) shares in any of the securities mentioned above.