Privacy Please: Privacy Law, Social Media Regulation and the Evolving Privacy Landscape in the US

Social media regulation is a touchy subject in the United States. Congress and the White House have proposed, advocated for, and voted on various bills aimed at protecting people from data misuse and misappropriation, misinformation, harms suffered by children, and the implications of vast data collection. Some of the most potent concerns about social media stem from the platforms’ use and misuse of information, from the method of collection to notice of collection and the use of collected information. Efforts to pass a bill regulating social media have been frustrated, primarily by the First Amendment right to free speech, and Congress has thus far failed to enact meaningful regulation of social media platforms.

The way forward may well be through privacy law. Privacy laws give people some right to control their own personhood, including their data, their right to be left alone, and how and when others may see and view them. Privacy law originated in its current form in the late 1800s, with the impetus being freedom from constant surveillance by paparazzi and reporters and the right to control one’s own personal information. As technology evolved, our understanding of privacy rights grew to encompass rights in our likeness, our reputation, and our data. Current US privacy laws do not directly address social media, and a struggle is currently playing out between the vast data collection practices of the platforms, the immunity those platforms enjoy under Section 230, and users’ private rights of privacy.

There is very little federal privacy law, and that which does exist is narrowly tailored to specific purposes and circumstances in the form of specific bills. Some states have enacted their own privacy law schemes, with California at the forefront and Virginia, Colorado, Connecticut, and Utah following in its footsteps. In the absence of a comprehensive federal scheme, privacy law is often judge-made, and it offers several private rights of action for a person whose right to be left alone has been invaded in some way. These are tort actions one person may bring against another for a violation of their right to privacy.

Privacy Law Introduction

Privacy law policy in the United States is premised on three fundamental personal rights to privacy:

  1. Physical right to privacy – the right to control your own information.
  2. Privacy of decisions – such as decisions about sexuality, health, and child-rearing. These are the constitutional rights to privacy, typically concerned not with information but with an act that flows from the decision.
  3. Proprietary privacy – the ability to protect your information from being misused by others in a proprietary sense.

Privacy Torts

Privacy law, as it concerns the individual, gives rise to four separate tort causes of action for invasion of privacy:

  1. Intrusion upon seclusion – One intentionally intrudes, physically or otherwise, upon the reasonable expectation of seclusion of another, and the intrusion is objectively highly offensive.
  2. Publication of private facts – One gives publicity to a matter concerning the private life of another that is not of legitimate concern to the public, and the matter publicized would be objectively highly offensive. The First Amendment provides a strong defense for publication of truthful matters when they are considered newsworthy.
  3. False light – One gives publicity to a matter concerning another that places the other before the public in a false light, when the false light in which the other was placed would be objectively highly offensive and the actor had knowledge of, or acted in reckless disregard as to, the falsity of the publicized matter and the false light in which the other would be placed.
  4. Appropriation of name or likeness – One appropriates another’s name or likeness for one’s own use or benefit. There is no appropriation when a person’s picture is used to illustrate a non-commercial, newsworthy article. The appropriation is usually commercial in nature but need not be, and it may reach “identity” more broadly: not just the name, but the reputation, prestige, social or commercial standing, public interest, or other value attached to the plaintiff’s likeness.

These private rights of action are currently unavailable against social media platforms because of Section 230 of the Communications Decency Act, which provides broad immunity to online providers for posts on their platforms. Section 230 prevents any of the privacy torts from being raised against the platforms themselves.

The Federal Trade Commission (FTC) and Social Media

Privacy law can implicate social media platforms when their practices become unfair or deceptive to the public, through investigation by the Federal Trade Commission (FTC). The FTC is the only federal agency with both consumer protection and competition jurisdiction across broad sectors of the economy, and it investigates business practices that are unfair or deceptive. The FTC Act, 15 U.S.C. § 45, prohibits “unfair or deceptive acts or practices in or affecting commerce” and grants the FTC broad jurisdiction over the privacy practices of businesses. A trade practice is unfair if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers themselves and is not outweighed by countervailing benefits to consumers or competition. A deceptive act or practice is a material representation, omission, or practice that is likely to mislead a consumer acting reasonably in the circumstances, to the consumer’s detriment.

Critically, there is no private right of action under the FTC Act. The FTC cannot impose fines for Section 5 violations but can seek injunctive relief. By design, the FTC has very limited rulemaking authority, and it looks to consent decrees and procedural, long-lasting relief as an ideal remedy. The FTC pursues several types of misleading or deceptive policies and practices that implicate social media platforms: notice-and-choice paradigms, broken promises, retroactive policy changes, inadequate notice, and inadequate security measures. Its primary objective is to negotiate a settlement in which the company submits to certain measures of control and oversight by the FTC for a certain period of time. Violations of those agreements can yield additional consequences, including steep fines and vulnerability to class action lawsuits.

Relating to social media platforms, the FTC has investigated misleading terms and conditions and violations of platforms’ own policies. In In re Snapchat, the platform claimed that users’ posted information disappeared completely after a certain period of time; however, through third-party apps and manipulation of users’ posts off the platform, posts could be retained. The FTC and Snapchat settled through a consent decree subjecting Snapchat to FTC oversight for 20 years.

The FTC has also investigated Facebook for violating its own privacy policy. To settle FTC charges that it violated a 2012 agreement with the agency, Facebook was ordered to pay a $5 billion penalty and to submit to new restrictions and a modified corporate structure that will hold the company accountable for the decisions it makes about its users’ privacy.

Unfortunately, none of these measures directly gives individuals more power over their own privacy. Nor do these policies and processes give individuals any right to hold platforms responsible when algorithms using their data mislead them, or when platforms intrude on their privacy by collecting data without allowing an opt-out.

Some of the most harmful social media practices today relate to personal privacy. Examples include the collection of personal data; the selling and dissemination of data through algorithms designed to subtly manipulate our pocketbooks and tastes; the collection and use of data belonging to children; and the design of social media sites to be ever more addictive, all in service of the commercialization of data.

No comprehensive federal privacy scheme currently exists. Previous privacy bills have been few and narrowly tailored to relatively specific circumstances and topics, such as healthcare and medical data protection under HIPAA, protection of video rental data under the Video Privacy Protection Act, and narrow protection for children’s data under the Children’s Online Privacy Protection Act. All of these schemes are outdated and fall short of meeting the immediate need for broad protection of the widely collected and broadly utilized data flowing from social media.

Current Bills on Privacy

Upon request from some of the biggest platforms, outcry from the public, and the White House’s call for federal privacy regulation, Congress appears poised to act. The 118th Congress has made privacy law a priority this term by introducing several bills related to social media privacy. At least ten bills are currently pending between the House and the Senate, addressing a variety of issues and concerns, from children’s data privacy to minimum age requirements and the designation of a new agency to monitor some aspects of privacy.

S. 744 – The Data Care Act of 2023 aims to protect social media users’ data privacy by imposing fiduciary duties on the platforms. The original iteration of the bill was introduced in 2021 and failed to receive a vote; it was re-introduced in March of 2023 and is currently pending. Under the act, social media platforms would have duties to reasonably secure users’ data from access, to refrain from using the data in a way that could foreseeably “benefit the online service provider to the detriment of the end user,” and to prevent disclosure of users’ data unless the receiving party is also bound by these duties. The bill authorizes the FTC and certain state officials to take enforcement action upon breach of those duties: states would be permitted to take their own legal action against companies for privacy violations, and the FTC could intervene in enforcement efforts by imposing fines for violations.

H.R. 2701 – Perhaps the most comprehensive piece of legislation on the House floor is the Online Privacy Act. In 2023, the bill was reintroduced by Democrat Anna Eshoo after an earlier version of the bill failed to receive a vote and died in Congress. The Online Privacy Act aims to protect users by providing individuals rights relating to the privacy of their personal information, along with privacy and security requirements for the treatment of that information. To accomplish this, the bill establishes a new agency, the Digital Privacy Agency, which would be responsible for enforcing those rights and requirements. The new individual privacy rights are broad and include rights of access, correction, deletion, human review of automated decisions, individual autonomy, the right to be informed, and the right to impermanence, among others. This would be the most comprehensive plan to date, and the establishment of an agency tasked specifically with administering and enforcing privacy law would be incredibly powerful; the creation of such an agency would be valuable irrespective of whether this bill passes.

H.R. 821 – The Social Media Child Protection Act is a sister bill to one by a similar name that originated in the Senate. This bill aims to protect children from the harms of social media by limiting children’s access to it. Under the bill, social media platforms would be required to verify the age of every user before that user accesses the platform, either through submission of a valid identity document or another reasonable verification method, and would be prohibited from allowing users under the age of 16 to access the platform. The bill also requires platforms to establish and maintain reasonable procedures to protect personal data collected from users, and it provides a private right of action as well as state and FTC enforcement.

S. 1291 – The Protecting Kids on Social Media Act is similar to its counterpart in the House, with slightly less tenacity. It likewise aims to protect children from social media’s harms. Under the bill, platforms must verify their users’ ages, must not allow a user onto the service until their age has been verified, and must limit access to the platform for children under 13. The bill also prohibits retention and use of information collected during the age verification process. Platforms must take reasonable steps to require affirmative consent from the parent or guardian of a minor who is at least 13 years old before a minor’s account is created, and must reasonably allow the parent to later revoke that consent. The bill further prohibits the use of data collected from minors for algorithmic recommendations, and it would require the Department of Commerce to establish a voluntary program for secure digital age verification for social media platforms. Enforcement would be through the FTC or state action.

S. 1409 – The Kids Online Safety Act, proposed by Senator Blumenthal of Connecticut, also aims to protect minors from online harms. Like the bills above, it establishes fiduciary duties for social media platforms regarding children using their sites. The bill requires that platforms act in the best interest of minors using their services, including by mitigating harms that may arise from use, sweeping in online bullying and sexual exploitation. Social media sites would be required to establish and provide access to safeguards, such as settings that restrict access to minors’ personal data, and to grant parents the tools to supervise and monitor minors’ use of the platforms. Critically, the bill establishes a duty for social media platforms to create and maintain research portals for non-commercial purposes, so that the effects corporations like the platforms have on society can be studied.

Overall, these bills reflect Congress’s creative thinking and commitment to broad privacy protection for users against social media harms. I believe establishing a separate governing body, apart from the FTC, which lacks the powers needed to compel compliance, is a necessary step. Recourse for violations on par with the EU’s new regulatory scheme, chiefly fines in the billions, could also help.

Many of the bills, toward myriad aims, establish new fiduciary duties for the platforms in preventing unauthorized data use and harms to children. There is real promise in this scheme: establishing duties of loyalty, diligence, and care owed by one party has a sound basis in many areas of law and would be readily understood in implementation.

The notion that platforms would need to be vigilant in knowing their content, studying its effects, and reporting those effects may do the most to create a stable future for social media.

Making platforms legally responsible for policing and enforcing their own policies and terms and conditions is another opportunity to further incentivize them. The FTC currently investigates policies that are misleading or unfair, which sweeps in the social media sites, but there could be an opportunity to make the platforms legally responsible for enforcing their own policies regarding age, hate speech, and inappropriate content, for example.

What would you like to see considered in Privacy law innovation for social media regulation?

Update Required: An Analysis of the Conflict Between Copyright Holders and Social Media Users

Opening

For anyone who is chronically online, as yours truly is, we have all in one way or another seen our favorite social media influencers, artists, commentators, and content creators complain about their problems with the current US intellectual property (IP) system. Whether their posts are deleted without explanation or portions of their video files are muted, the combination of factors leading to copyright issues on social media is endless. This, in turn, has a markedly negative impact on free and fair expression on the internet, especially within the context of our contemporary online culture. For better or worse, interaction in society today is intertwined with the services of social media sites. Conflict arises when the interests of copyright holders clash with this reality. Copyright holders are empowered by byzantine and unrealistic laws that hamper our ability to exist as freely online as we do in real life. While they do have legitimate and fundamental rights that need to be protected, such rights must be balanced against desperately needed reform. People’s interaction with society and culture must not be hampered, for that is one of the many foundations of a healthy and thriving society. To understand this, I venture to analyze the current legal infrastructure we find ourselves in.

Current Relevant Law

The laws currently controlling copyright issues on social media are the Copyright Act of 1976 and the Digital Millennium Copyright Act (DMCA). The DMCA is most relevant to our analysis; it gives copyright holders relatively unrestrained power to demand removal of their property from the internet and to punish those using illegal methods to get ahold of it. This broad law, of course, impacted social media sites. Title II of the law added 17 U.S. Code § 512 to the Copyright Act of 1976, creating several safe harbor provisions for online service providers (OSPs), such as social media sites, when hosting content posted by third parties. The most relevant of these safe harbors here is 17 U.S. Code § 512(c), which states that an OSP is not liable for monetary damages if it meets several requirements and provides copyright holders a quick and easy way to claim their property. The mechanism, known as a “notice and takedown” procedure, varies by social media service and is outlined in each service’s terms and conditions (YouTube, Twitter, Instagram, TikTok, Facebook/Meta). Regardless, they all provide a complaint form or application that follows the rules of the DMCA and will usually strike objectionable social media posts rapidly. 17 U.S. Code § 512(g) does give the user some leeway through an appeal process, and § 512(f) imposes liability on those who send unjustified takedown notices. Nevertheless, a perfect balance of rights is not achieved.

The doctrine of fair use, codified at 17 U.S. Code § 107 via the Copyright Act of 1976, also plays a massive role here. It establishes a legal pathway for the use of copyrighted material for “purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research” without having to acquire rights to the IP from its owner. This legal safety valve has been a blessing for social media users, especially with recent victories like Hosseinzadeh v. Klein, which protected reaction content from DMCA takedowns. Cases like Lenz v. Universal Music Corp. further established that copyright holders must consider fair use when preparing takedowns. Nevertheless, true copyright holders still fail to consider those rights, and sites are quick to react to DMCA complaints. Furthermore, the flawed reporting systems of social media sites invite abuse by unscrupulous actors faking ownership. On top of that, such legal actions can be psychologically and financially intimidating, especially when facing off against a major IP holder, adding to the unbalanced power dynamic between the holder and the poster.

The Telecommunications Act of 1996, which focuses primarily on cellular and landline carriers, is also particularly relevant to social media companies in this conflict. At the time of its passing, the internet was still in its infancy, so the Act does not incorporate an understanding of the cultural paradigm we now find ourselves in. Specifically, the contentious Section 230 of the Communications Decency Act (Title V of the 1996 Act) works against social media companies in this instance, incorporating a broad and draconian rule on copyright infringement. 47 U.S. Code § 230(e)(2) states in no uncertain terms that “nothing in this section shall be construed to limit or expand any law pertaining to intellectual property.” This has been interpreted and restated in Perfect 10, Inc. v. CCBill LLC to mean that such companies can be held liable for users’ copyright infringement. This gap in the protective armor of Section 230 is a great concern to such companies; therefore, they react strongly to copyright issues.

What is To Be Done?

Arguably, fixing the issues around copyright on social media is far beyond the capacity of current legal mechanisms. With billions of posts each day across various sites, policing by copyright holders and sites is far beyond reason. It will take serious reform in the socio-cultural, technological, and legal arenas before a true balance of liberty and justice can be established. Perhaps we can start with an understanding by copyright holders not to overreact when their property is posted online. Popularity is key to success in business, so shouldn’t you value the free marketing that comes with your copyrighted property being shared honestly within the cultural sphere of social media? Social media sites can also expand their DMCA case management teams or create tools for users, particularly influencers and content creators, to credit and even share revenue with the copyright holder. Finally, congressional action is desperately needed, as we have entered a new era that requires new laws. That said, achieving a balance between the free exchange of ideas and creations and the rights of copyright holders must be the cornerstone of the government’s approach to socio-cultural expression on social media. That is the only way we can progress as an ever more online society.

Image: Freepik.com, by pikisuperstar

Jonesing For New Regulations of Internet Speech

From claims that the moon landing was faked to Area 51, the United States loves its conspiracy theories. In fact, a study sponsored by the University of Chicago found that more than half of Americans believe at least one conspiracy theory. While this is not a new phenomenon, the increasing use and reliance on social media has allowed misinformation and harmful ideas to spread with a level of ease that wasn’t possible even twenty years ago.

Individuals with a large platform can express an opinion that harms the people personally implicated in the ‘information’ being spread. Presently, a plaintiff’s best option to challenge harmful speech is a claim for defamation. The inherent problem is that opinions are protected by the First Amendment and, thus, not actionable as defamation.

This leaves injured plaintiffs limited in their available remedies because statements in the context of the internet are more likely to be seen as an opinion. The internet has created a gap where we have injured plaintiffs and no available remedy. With this brave new world of communication, interaction, and the spread of information by anyone with a platform comes a need to ensure that injuries sustained by this speech will have legal recourse.

Recently, Alex Jones lost a defamation claim and was ordered to pay $965 million to the families of the Sandy Hook victims after claiming that the Sandy Hook shooting that occurred in 2012 was a “hoax.” Although the families prevailed at trial, the statements at the center of the suit do not fit neatly into the well-established law of defamation, which makes reversal on appeal likely.

The elements of defamation require that the defendant publish a false statement purporting it to be true, which results in some harm to the plaintiff. However, just because a statement is false does not mean that the plaintiff can prove defamation because, as the Supreme Court has recognized, false statements still receive certain First Amendment protections. In Milkovich v. Lorain Journal Co., the Court held that “imaginative expression” and “loose, figurative, or hyperbolic language” are protected by the First Amendment.

The characterization of something as a “hoax” has been held by courts to fall into this category of protected speech. In Montgomery v. Risen, a software developer brought a defamation action against an author who claimed that the plaintiff’s software was a “hoax.” The D.C. Circuit held that characterizing something as an “elaborate and dangerous hoax” is hyperbolic speech, which creates no basis for liability. This holding was mirrored by several courts, including the District Court of Kansas in Yeager v. National Public Radio, the District Court of Utah in Nunes v. Rushton, and the Superior Court of Delaware in Owens v. Lead Stories, LLC.

The other statements made by Alex Jones regarding Sandy Hook are also hyperbolic language. These statements include: “[i]t’s as phony as a $3 bill”, “I watched the footage, it looks like a drill”, and “my gut is… this is staged. And you know I’ve been saying the last few months, get ready for big mass shootings, and then magically, it happens.” While these statements are offensive and cruel to the suffering families, it is difficult to characterize them as objective claims of fact. ‘Phony’, ‘my gut is’, ‘looks like’, and ‘magically’ qualify his statements as subjective opinion based on his interpretation of the events that took place.

It is indisputable that the statements Alex Jones made caused harm to these families. They have been subjected to harassment, online abuse, and death threats from his followers. However, no matter how harmful these statements are, that does not make them defamation. Despite this, a reasonable jury was so appalled by this conduct that it found for the plaintiffs. This is essentially reverse jury nullification: the jury decided that Jones was culpable and should be held legally responsible even though there is no adequate basis for liability.

The jury’s determination demonstrates that current legal remedies are inadequate to regulate potentially harmful speech that can spread like wildfire on the internet. The influence that a person like Alex Jones has over his followers establishes a need for new or updated laws that hold public figures to a higher standard even when they are expressing their opinion.

A possible starting point for regulating harmful internet speech at a federal level might be the commerce clause, which allows Congress to regulate instrumentalities of commerce. The internet, by its design, is an instrumentality of interstate commerce, enabling the communication of ideas across state lines.

Further, the Federal Anti-Riot Act, which was passed in 1968 to suppress civil rights protestors, might be an existing law that can serve this purpose. The law makes it a felony to use a facility of interstate commerce (1) to incite a riot, or (2) to organize, promote, encourage, participate in, or carry on a riot. The act defines a riot as:

 [A] public disturbance involving (1) an act or acts of violence by one or more persons part of an assemblage of three or more persons, which act or acts shall constitute a clear and present danger of, or shall result in, damage or injury to the property of any other person or to the person of any other individual or (2) a threat or threats of the commission of an act or acts of violence by one or more persons part of an assemblage of three or more persons having, individually or collectively, the ability of immediate execution of such threat or threats, where the performance of the threatened act or acts of violence would constitute a clear and present danger of, or would result in, damage or injury to the property of any other person or to the person of any other individual.

Under this definition, we might have a basis for holding Alex Jones accountable for organizing, promoting, or encouraging a riot through a facility (the internet) of interstate commerce. The acts of his followers in harassing the families of the Sandy Hook victims might constitute a public disturbance within this definition because it “result[ed] in, damage or injury… to the person.” While this demonstrates one potential avenue of regulating harmful internet speech, new laws might also need to be drafted to meet the evolving function of social media.

In the era of the internet, public figures have an unprecedented ability to spread misinformation and incite lawlessness. This is true even if their statements would typically constitute an opinion because the internet makes it easier for groups to form that can act on these ideas. Thus, in this internet age, it is crucial that we develop a means to regulate the spread of misinformation that has the potential to harm individual people and the general public.

Miracles Can Be Misleading

Want to lose 20 pounds in 4 days? Try this *insert any miracle weight-loss product* and you’ll be skinny in no time!

Miracle weight-loss products (MWLP) are dietary supplements that either work as appetite suppressants or forcefully induce weight loss. These products are not approved or indicated by pharmaceutical regulators as weight-loss treatments. Social media users are continuously bombarded with the newest weight-loss products via targeted advertisements and endorsements from their favorite influencers. Users are force-fed false promises of achieving the picture-perfect body while companies profit off their delusions. Influencer marketing has increased significantly as social media becomes more and more prevalent: 86 percent of women use social media for purchasing advice, and 70 percent of teens trust influencers more than traditional celebrities. If you’re on social media, then you’ve seen your favorite influencer endorsing some form of MWLP, and you probably thought to yourself, “well, if Kylie Jenner is using it, it must be legit.”

Advertisements for MWLP promote an unrealistic and oversexualized body image. This trend of selling skinny has detrimental consequences, often leading to body image issues such as body dysmorphia and various eating disorders. In 2011, the Florida House Experience conducted a study among 1,000 men and women. The study revealed that 87 percent of the women and 65 percent of the men compared their bodies to those they saw on social media. Of the 1,000 subjects, 50 percent of the women and 37 percent of the men viewed their bodies unfavorably in comparison. In 2019, Project Know, a nonprofit organization that studies addictive behaviors, conducted a study suggesting that social media can worsen genetic and psychological predispositions to eating disorders.

Who Is In Charge?

The collateral damage that MWLP advertisements inflict on social media users’ body image is a societal concern. As the world becomes more digital, ever more creators of MWLP are going to rely on influencers to generate revenue for their products. But who is in charge of monitoring the truthfulness of these advertisements?

In the United States, the Federal Trade Commission (FTC) and the Food and Drug Administration (FDA) are the two federal regulators responsible for promulgating regulations relating to dietary supplements and other MWLP. While the FDA is responsible for the labeling of supplements, it lacks jurisdiction over advertising. Therefore, the FTC is primarily responsible for advertisements that promote supplements and over-the-counter drugs.

The FTC regulates MWLP advertising through the Federal Trade Commission Act of 1914 (the Act). Sections 5 and 12 of the Act collectively prohibit “false advertising” and “deceptive acts or practices” in the marketing and sale of consumer products and grant the FTC authority to take action against offending companies. An advertisement violates the Act when it is false, misleading, or unsubstantiated. An advertisement is false or misleading when it contains an “objective, material representation that is likely to deceive consumers acting reasonably under the circumstances.” An advertisement is unsubstantiated when it lacks “a reasonable basis for its contained representation.” With the rise of influencer marketing, the FTC also requires influencers to clearly disclose when they have a financial or other relationship with the product they are promoting.

Under the Act, the FTC has taken action against companies that falsely advertise MWLP. The FTC typically brings enforcement claims against companies by alleging that the advertiser’s claims lack substantiation. To determine the specific level and type of substantiation required, the FTC considers what are known as the “Pfizer factors,” established in In re Pfizer. These factors include:

    • The type and specificity of the claim made.
    • The type of product.
    • The possible consequences of a false claim.
    • The degree of reliance by consumers on the claims.
    • The type, and accessibility, of evidence adequate to form a reasonable basis for making the particular claims.

In 2014, the FTC applied the Pfizer factors when it brought an enforcement action seeking a permanent injunction against Sensa Products, LLC. Since 2008, Sensa had sold a powdered weight loss product that allegedly could make an individual lose 30 pounds in six months without dieting or exercise. The company advertised its product via print, radio, endorsements, and online ads. The FTC claimed that Sensa’s marketing techniques were false and deceptive because they lacked evidence to support the company’s health claims, i.e., losing 30 pounds in six months. The FTC further claimed that Sensa violated the Act by failing to disclose that its endorsers were given financial incentives for their customer testimonials. Ultimately, Sensa settled, and the FTC was granted the permanent injunction.

What Else Can We Do?

Currently, the FTC, exercising its authority under the Act, is the main legal recourse for removing these deceitful advertisements from social media. Unfortunately, social media platforms such as Facebook, Twitter, and Instagram cannot be held liable for the posts of other users. Under Section 230 of the Communications Decency Act, “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” That means social media platforms cannot be held responsible for misleading MWLP advertisements, regardless of whether the advertisement comes through an influencer or the company’s own social media page, and regardless of the collateral consequences these advertisements create.

However, social media users and platforms have taken other courses of action to prevent these advertisements from poisoning users’ body images. Many social media influencers and celebrities have risen to the occasion to have MWLP advertisements removed. In 2018, Jameela Jamil, an actress starring on The Good Place, launched an Instagram account called I Weigh, which “encourages women to feel and look beyond the flesh on their bones.” Influencer activism has led Instagram and Facebook to block users under the age of 18 from viewing posts advertising certain weight loss products or other cosmetic procedures. While these are small steps in the right direction, more work certainly needs to be done.

What Evidence is Real in a World of Digitally Altered Material?

Imagine you are prosecuting a child pornography case and have incriminating chats made through Facebook showing the defendant coercing and soliciting sexually explicit material from minors.  Knowing that you will submit these chats as evidence at trial, you acquire a certificate from Facebook’s records custodian authenticating the documents.  The custodian provides information that confirms the times, accounts and users.  That should be enough, right?

Wrong.  Your strategy relies on the legal theory that chats made through a third-party provider fall into a hearsay exception known as the “business records exception.”  Federal Rule of Evidence 902(11) provides that “self-authenticating” business records—“records of a regularly conducted activity” that fall into the hearsay exception under Rule 803(6), more commonly known as the “business records exception”—may be authenticated by way of a certificate from the records custodian.  (Fed. R. Evid. 902(11)); (United States v. Browne, 834 F.3d 403 (3d Cir. 2016)).

Why does this certification fail to actually show authenticity?  The Third Circuit answers that there must be additional, outside (extrinsic) evidence establishing the relevance of the evidence.  (United States v. Browne, 834 F.3d 403 (3d Cir. 2016)).

Relevance is another legal concept: evidence is relevant when its existence “simply has some ‘tendency to make the existence of any fact that is of consequence to the determination of the action more probable or less probable than it would be without the evidence.’”  (United States v. Jones, 566 F.3d 353, 364 (3d Cir. 2009) (quoting Fed. R. Evid. 401)).  Put simply, the existence of the evidence has a material effect on the evaluation of an action.

In Browne, the Third Circuit explains that the “business records exception” is not enough because Facebook chats are fundamentally different from business records.  Business records are “supplied by systematic checking, by regularity and continuity which produce habits of precision, by actual experience of business in relying upon them, or by a duty to make an accurate record as part of a continuing job or occupation,” which results in records that can be relied upon as legitimate.

The issue here deals with authenticating the entirety of the chat – not just the timestamps or cached information.  The court delineates this distinction, saying “If the Government here had sought to authenticate only the timestamps on the Facebook chats, the fact that the chats took place between particular Facebook accounts, and similarly technical information verified by Facebook ‘in the course of a regularly conducted activity,’ the records might be more readily analogized to bank records or phone records conventionally authenticated and admitted under Rules 902(11) and 803(6).”

In contrast, Facebook chats are not authenticated based on confirmation of their substance, but instead on the user linked to that account.  Moreover, in this case, the Facebook records certification showed “alleged” activity between user accounts but not the actual identification of the person communicating, which the court found is not conclusive in determining authorship.

The policy concern is that information is easily falsified: accounts may be created with a fake name and email address, or a person’s account may be hacked and operated by another.  As a result of the ruling in Browne, submitting chat logs made through a third party such as Facebook into evidence requires more than verification of technical data.  The Browne court describes the second step for evidence to be successfully admitted: there must be extrinsic, or additional outside, evidence presented to show that the chat logs really occurred between certain people and that the content is consistent with the allegations.  (United States v. Browne, 834 F.3d 403 (3d Cir. 2016))

When there is enough extrinsic evidence, the “authentication challenge collapses under the veritable mountain of evidence linking [Defendant] and the incriminating chats.”  In the Browne case, there was enough of this outside evidence that the court found there was “abundant evidence linking [Defendant] and the testifying victims to the chats conducted… [and the] Facebook records were thus duly authenticated” under Federal Rule of Evidence 901(b)(1) in a traditional analysis.

The idea that extrinsic evidence must support authentication of evidence collected from third-party platforms is echoed in the Seventh Circuit decision United States v. Barber, 937 F.3d 965 (7th Cir. 2019).  Here, “this court has relied on evidence such as the presence of a nickname, date of birth, address, email address, and photos on someone’s Facebook page as circumstantial evidence that a page might belong to that person.”

The requirement for extrinsic evidence represents a shift from the original rule that the government carries only the burden of “‘produc[ing] evidence sufficient to support a finding’ that the account belonged to [Defendant] and the linked messages were actually sent and received by him.”  United States v. Barber, 937 F.3d 965 (7th Cir. 2019), citing Fed. R. Evid. 901(a); United States v. Lewisbey, 843 F.3d 653, 658 (7th Cir. 2016).  Now, “Facebook records must be authenticated through the ‘traditional standard’ of Rule 901.”  United States v. Frazier, 443 F. Supp. 3d 885 (M.D. Tenn. 2020).

The bottom line is that Facebook cannot attest to the accuracy of the content of its chats and can only provide specific technical data.  This difference is further supported by a District Court ruling mandating traditional analysis under Rule 901 and not allowing a business hearsay exception, saying “Rule 803(6) is designed to capture records that are likely accurate and reliable in content, as demonstrated by the trustworthiness of the underlying sources of information and the process by which and purposes for which that information is recorded… This is no more sufficient to confirm the accuracy or reliability of the contents of the Facebook chats than a postal receipt would be to attest to the accuracy or reliability of the contents of the enclosed mailed letter.”  (United States v. Browne, 834 F.3d 403, 410 (3rd Cir. 2016), United States v. Frazier, 443 F. Supp. 3d 885 (M.D. Tenn. 2020)).

Evidence from social media is allowed under the business records exception in a select few circumstances.  For example, United States v. El Gammal, 831 F. App’x 539 (2d Cir. 2020) did find authentication of Facebook’s message logs based on testimony from a records custodian.  However, there is an important distinction: the logs admitted came directly from a “deleted” output, where Facebook itself, rather than a person, created the record.  Similarly, the Tenth Circuit agreed that “spreadsheets fell under the business records exception and, alternatively, appeared to be machine-generated non-hearsay.”  United States v. Channon, 881 F.3d 806 (10th Cir. 2018).

What about photographs – are pictures taken from social media dealt with in the same way as chats when it comes to authentication?  Reviewing a lower court decision, the Sixth Circuit in United States v. Farrad, 895 F.3d 859 (6th Cir. 2018) found that “it was an error for the district court to deem the photographs self-authenticating business records.”  As with chats, the business records exception is barred, and photographs must likewise be supported by extrinsic evidence.

While not relying on the business records exception, the court in Farrad nevertheless found that social media photographs were admissible because it would be logically inconsistent to allow “physical photos that police stumble across lying on a sidewalk” while barring “electronic photos that police stumble across on Facebook.”  It is notable that the court does not address the ease with which photographs may be digitally altered, given that alteration of digital text was a major concern voiced by the Browne court.

United States v. Vazquez-Soto, 939 F.3d 365 (1st Cir. 2019) further supports the idea that photographs found through social media need to be authenticated traditionally.  Here, the court explains the authentication process, saying “The standard [the court] must apply in evaluating a[n] [item]’s authenticity is whether there is enough support in the record to warrant a reasonable person in determining that the evidence is what it purports to be.”  United States v. Vazquez-Soto, 939 F.3d 365 (1st Cir. 2019), quoting United States v. Blanchard, 867 F.3d 1, 6 (1st Cir. 2017) (internal quotation marks omitted); Fed. R. Evid. 901(a).  In other words, based on the totality of the evidence, including extrinsic evidence, do you believe the photograph is real?  Here, “what is at issue is only the authenticity of the photographs, not the Facebook page” – it does not necessarily matter who posted the photo, only what was depicted.

Against the backdrop of an alterable digital world, courts seek to put safeguards in place against falsified information.  The cases here represent the beginning of a foray into what measures can realistically be taken to protect ourselves from digital fabrications.

 

https://www.rulesofevidence.org/article-ix/rule-902/

https://www.rulesofevidence.org/article-viii/rule-803/

https://casetext.com/case/united-states-v-browne-12

https://www.courtlistener.com/opinion/1469601/united-states-v-jones/?order_by=dateFiled+desc&page=4

https://www.rulesofevidence.org/article-iv/rule-401/

https://www.rulesofevidence.org/article-ix/rule-901/

https://casetext.com/case/united-states-v-barber-103

https://casetext.com/case/united-states-v-lewisbey-4

https://casetext.com/case/united-states-v-frazier-175

https://casetext.com/case/united-states-v-el-gammal

https://casetext.com/case/united-states-v-channon-8

https://casetext.com/case/united-states-v-farrad

https://casetext.com/case/united-states-v-vazquez-soto-1?q=United%20States%20v.%20Vazquez-Soto,%20939%20F.3d%20365%20(1st%20Cir.%202019)&PHONE_NUMBER_GROUP=P&sort=relevance&p=1&type=case&tab=keyword&jxs=

Say Bye to Health Misinformation on Social Media?

A study from the Center for Countering Digital Hate found that social media platforms failed to act on 95% of coronavirus-related disinformation reported to them.

Over the past few weeks, social media companies have been in the hot seat for their lack of action against fake news and misinformation on their platforms, especially information regarding COVID-19 and the vaccine. Even President Biden weighed in, stating that Facebook and other companies were “killing people” by serving as platforms for misinformation about the COVID-19 vaccine. Biden later clarified that he was not accusing Facebook of killing people, but rather that he wanted the companies to do something about the outrageous misinformation about the vaccine.

A few weeks later, Senator Amy Klobuchar introduced the Health Misinformation Act, which would create an exemption to Section 230 of the Communications Decency Act. Section 230 has long shielded social media companies from liability for almost any content posted on their platforms. Under the Health Misinformation Act, however, social media companies would be liable for the spread of health-related misinformation. The bill would apply only to platforms that use an algorithm promoting health misinformation (which describes most social media platforms), and only during a national public health crisis, such as COVID-19; the exemption would not apply during “normal” times. Additionally, if the bill were to pass, the Department of Health and Human Services would be authorized to define “health misinformation.”

Senator Klobuchar and some of her peers believe the time has come to create an exemption to Section 230 because “for far too long, online platforms have not done enough to protect the health of Americans.” Klobuchar argues that the misinformation spread about COVID-19 and the vaccine shows that social media companies have no desire to act: the misinformation drives activity on their platforms, and Section 230 shields them from liability for it.
Indeed, these companies use misinformation to their advantage, building features that incentivize users to share posts and collect likes, comments, and other engagement, a system that “rewards engagement rather than accuracy.” A study conducted by MIT found that false news stories are 70% more likely to be retweeted than true stories. Social media platforms therefore have little reason to limit this information, especially when it benefits them.

What are the concerns with the Health Misinformation Act?

How will the Department of Health and Human Services define “health misinformation”? It seems very difficult to craft a definition that a majority will agree upon. I also expect heavy criticism of the act from the social media companies. For instance, how would they implement the definition of “health misinformation” in their algorithms? What if the information about a health crisis changes? Would a platform have to constantly adjust its algorithms as health guidance evolves? At the beginning of the pandemic, for example, guidance on masks changed: from masks not being necessary to masking being crucial to the health and safety of yourself and others.

Will the Bill Pass?

With that being said, I like the concept of the Health Misinformation Act because it seeks to hold social media companies accountable for their inaction while ensuring the public receives accurate health-related information. However, I do not believe the bill will pass, for a few reasons. First, it may violate the First Amendment right to free speech: while it is not right, it is also not illegal for individuals to post their opinions, or even misinformation, on social media. Second, as noted above, how would social media companies implement these new policies as the definition of “health misinformation” changes, and how would federal agencies regulate the companies?

What should be done?

“These are some of the biggest, richest companies in the world and they must do more to prevent the spread of deadly vaccine misinformation.”

I believe we need more regulation and more exemptions to Section 230, especially because Section 230 was enacted in 1996, and our world looks and operates very differently than it did then. Social media is now an essential part of our business and cultural worlds.
Overall, more regulations need to be put in place to oversee social media companies. We need transparency from these companies so the world can understand what is going on behind their closed doors. Transparency would allow agencies to fully understand the algorithms and craft proper regulations.

To conclude, social media companies function like a monopoly: even though there are many platforms, only a handful hold most of the popularity and power. Other major businesses and monopolies must follow strict government regulations, yet social media companies seem exempt from such scrutiny.

While there has been a push over the past few years to repeal or make changes to Section 230, do you think this bill can pass? If not, what can be done to create more regulations?

California Law Mandates Allowing Minors to Delete Social Media Posts

California has recently become the first state to enact a law requiring social media companies to give young users (under 18) the chance to delete regretful posts. Federal law lacks such a provision, due mainly to the argument that it would be too burdensome on social media companies. Many young social media users do not think before posting irresponsible, reputation-damaging words and pictures to the Internet. The “erase bill” was signed Monday by Governor Jerry Brown and comes into effect in January 2015.

The erase bill is lauded by many such as the founder and CEO of Common Sense Media, who stated, “[t]his puts privacy in the hands of kids, teenagers and the parents, not under the control of an anonymous tech company.” Senate leader Darrell Steinberg noted, “This is a groundbreaking protection for our kids who often act impetuously…before they think through the consequences. They deserve the right to remove this material that could haunt them for years to come.” The law also mandates that social media companies inform minors about their right to erase posts.

One blatant flaw in the legislation is that the law does not force the companies to remove the content completely from the servers. The posts thus survive in the vast cyber-sphere. However, allowing minors to retract ignorant statements and posts from the Internet seems to be a good start in the direction of future federal protection.

The article discussing this new legislation notes that pictures and posts discoverable online could ruin a young person’s ability to land a prestigious summer internship or even admittance into college. After all, employers and recruiters certainly Google young applicants, probably even before reading their applications.

The aim of this legislation is to get other states on board, and eventually to persuade Washington to construct binding law. As a graduate student without any social media, I never had to worry about the potential issues arising from regrettable social media posts. However, as we all make mistakes, especially in our teenage years, it seems appropriate to me that lawmakers would want to give minors the ability to right their wrongs in the days following such posts. I often regret words that come out of my mouth, let alone statements and/or photos that are memorialized on the Internet.

Do you think a young person’s future should be jeopardized for posting content on the Internet that reflects a moment of stupidity? We all undoubtedly must be held accountable for what we say, but shouldn’t minors get some leeway? Or should schools and companies seeking to hire these minors be privy to the potential for such misconduct? I, for one, support this type of legislation. What do you think?

 

“Erase Law” News Article
