Don’t Talk to Strangers! But if it’s Online, it’s Okay?

It is 2010.  You are in middle school and your parents let your best friend come over on a Friday night.  You gossip, talk about crushes, and browse all the social media sites.  You decide to try the latest one, Omegle.  You are automatically paired with a stranger to talk to and video chat with.  You speak to a few random people, and then, with the next click, a stranger's genitalia are on your screen.

Stranger Danger

Omegle is a free video-chatting social media platform.  Its primary function has become meeting new people and arranging “online sexual rendezvous.”  Registration is not required.  Omegle randomly pairs users for one-on-one video sessions.  These sessions are anonymous, and you can skip to a new person at any time.  Although there is a large warning on the home screen saying “you must be 18 or older to use Omegle”, no parental controls are available through the platform.  Should you want to install any parental controls, you must use a separate commercial program.

While the platform's community guidelines illustrate the "dos and don'ts" of the site, it seems questionable that the platform can monitor millions of users, especially when users are not required to sign up or to agree to any of Omegle's terms and conditions.  It therefore seems that this site could harbor online predators, raising quite a few issues.

One recent case surrounding Omegle involved a pre-teen who was sexually abused, harassed, and blackmailed into sending a sexual predator obscene content.  In A.M. v. Omegle.com LLC, the open nature of Omegle ended up matching an 11-year-old girl with a sexual predator in his late thirties.  Exploiting her vulnerability, he forced the 11-year-old to send pornographic images and videos of herself, perform for him and other predators, and recruit other minors.  This predator was able to continue this horrific abuse for three years by threatening to release these videos, pictures, and additional content publicly.  The 11-year-old plaintiff sued Omegle on two general claims of platform liability implicating Section 230, but only one claim was able to break through the law's immunity shield.

Unlimited Immunity Cards!

Under 47 U.S.C. § 230 (Section 230), social media platforms are immune from liability for content posted by third parties.  As part of the Communications Decency Act of 1996, Section 230 provides almost full protection against lawsuits for social media companies, since no platform is treated as the publisher or speaker of user-generated content posted on the site.  Section 230 has gone so far as to shield Google and Twitter from liability for claims that their platforms were used to aid terrorist activities.  In May of 2023, these cases reached the Supreme Court.  In the Twitter case, the Court held that the platform was not liable on the claim that it aided and abetted a terrorist group in raising funds and recruiting members for a terrorist attack.  In the Google case, which claimed that Google stimulated the growth of ISIS through targeted recommendations and inspired an attack that killed an American student, the Court declined to address the Section 230 question and instead sent the case back down in light of its Twitter ruling.

Wiping the Slate

In February of 2023, the District Court in Oregon for the Portland Division found that Section 230 immunity did not apply to Omegle on a products liability claim, meaning the platform could be held liable for the predatory actions committed by the third party on the site.  By side-stepping the third-party speech issue that comes with Section 230 immunity for an online publisher, the district court found Omegle answerable under the plaintiff's products liability claim, which targeted the platform's defective design, defective warning, negligent design, and failure to warn.

Three prongs must be satisfied for a platform to be shielded from liability under Section 230:

  1. A provider of an interactive site,
  2. Who is sought to be treated as a publisher or speaker, and
  3. For information provided by a third-party.

It is clear that Omegle is an interactive site that fits the definition provided by Section 230.  The issue then falls on the second and third prongs: whether the cause of action treated Omegle as the speaker of third-party content.  The platform's sole function of randomly pairing strangers creates the foreseeable danger of pairing a minor with an adult. As shown in the present case, "the function occurs before the content occurs." Because the platform was designed negligently and with knowing disregard for the possibility of harm, the court ultimately concluded that liability for the platform's function does not turn on third-party published content and that the claim targeted specific functions rather than users' speech on the platform.  Section 230 immunity did not apply to this first claim, and Omegle was held liable.

Not MY Speech

The plaintiff's last claim dealing with immunity under Section 230 was that Omegle negligently failed to take reasonable precautions to provide a safe platform.  There was a foreseeable risk of harm in marketing the service to children and adults and randomly pairing them.  Unlike the products liability claim, the negligence claim was twofold: it targeted the function of matching people and the publishing of their communications to each other, both of which fall directly into Section 230's immunity domain.  The Oregon District Court drew a distinct line between the two claims: although Omegle was immune under Section 230 on the negligence claim, it remained liable on the products liability claim.

If You Cannot Get In Through the Front Door, Try the Back Door!

For almost 30 years, social media platforms have been nearly immune from liability under Section 230.  In the last few years, with the growth of technology on these platforms, judges have been trying to find loopholes in the law to hold companies liable.  A.M. v. Omegle has only just moved through the district court level.  If appealed, it will be an interesting case to follow, to see whether the ruling will stand or be overturned in line with the other cases that have been decided.

How do you think a higher court will rule on issues like these?

Private or not private, that is the question.

Section 230 of the Communications Decency Act (CDA) protects private online companies from liability for content posted by others. This immunity also grants internet service providers the freedom to regulate what is posted onto their sites. What has faced much criticism of late, however, is social media's immense power to silence any voices the platform CEOs disagree with.

Section 230(c)(2), known as the Good Samaritan clause, states that no provider shall be held liable for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

When considered in the context of a 1996 understanding of internet influence (the year the CDA was created), this law might seem perfectly reasonable. Fast forward 25 years, though, and with how massively influential social media has become on society and the spread of political information, there has developed a strong demand for a repeal, or at the very least a review, of Section 230.

The Good Samaritan clause is what shields Big Tech from legal complaint. The law does not define obscene, lewd, lascivious, filthy, harassing, or excessively violent. And "otherwise objectionable" leaves internet service providers' discretion all the more open-ended. The issue at the heart of much criticism of Big Tech is that the censorship companies such as Facebook, Twitter, and YouTube (owned by Google) impose on particular users is not fairly exercised, and many conservatives feel they do not receive equal treatment under those companies' policies.

Ultimately, there is little argument around the fact that social media platforms like Facebook and Twitter are private companies, therefore curbing any claims of First Amendment violations under the law. The First Amendment of the US Constitution only prevents the government from interfering with an individual’s right to free speech. There is no constitutional provision that dictates any private business owes the same.

Former President Trump's recent class action lawsuits against Facebook, Twitter, Google, and each of their CEOs, however, challenge the characterization of these entities as private.

In response to the January 6th Capitol takeover by Trump supporters, Facebook and Twitter suspended the accounts of the then-sitting president of the United States – President Trump.

The justification was that President Trump violated their rules by inciting violence and encouraging an insurrection following the disputed election results of 2020. In the midst of the unrest, Twitter, Facebook, and Google also removed a video posted by Trump, in which he called for peace and urged protestors to go home. The explanation given was that "on balance we believe it contributes to, rather than diminishes the risk of ongoing violence" because the video also doubled down on the belief that the election was stolen.

Following long-standing contentions with Big Tech throughout his presidency, the main argument in the lawsuit is that the tech giants Facebook, Twitter and Google, should no longer be considered private companies because their respective CEOs, Mark Zuckerberg, Jack Dorsey, and Sundar Pichai, actively coordinate with the government to censor politically oppositional posts.

Those who support Trump probably all wish to believe this case has legal standing.

Anyone else who shares concerns about the almost omnipotent power of Silicon Valley may admit that Trump makes a valid point. But legally, deep down, it might feel like a stretch. Could it be? Should it be? Maybe. But will Trump see the outcome he is looking for? The initial honest answer was "probably not."

However, on July 15th, 2021, White House press secretary Jen Psaki informed the public that the Biden administration was in regular contact with Facebook to flag "problematic posts" regarding the "disinformation" of Covid-19 vaccinations.

Wait… what?! The White House is in communication with social media platforms to determine what the public is and isn't allowed to hear regarding vaccine information? Or "disinformation," as Psaki called it.

Conservative legal heads went into a spin. Is this allowed? Or does this strengthen Trump’s claim that social media platforms are working as third-party state actors?

If it is determined that social media is in fact acting as a strong-arm agent for the government regarding what information the public is allowed to access, then it too should be subject to the First Amendment. And if social media is subject to the First Amendment, then all information, including information that questions, or even completely disagrees with, the left-leaning policies of the current White House administration, is protected by the US Constitution.

Referring back to the language of the law, Section 230(c)(2) requires that actions to restrict access to information be made in good faith. Taking an objective look at some of the posts that are removed from Facebook, Twitter, and YouTube, along with many of the posts that are not removed, it begs the question of how much "good faith" is truly exercised. When a former president of the United States is still blocked from social media, but the Iranian leader Ali Khamenei is allowed to post what appears to be nothing short of a threat to that same president's life, it can certainly make you wonder. Or when insistence on unquestioned mass emergency vaccinations, now with continued mask wearing, is rammed down our throats, but a video showing one of the creators of the mRNA vaccine expressing his doubts about the safety of the vaccine for the young is removed from YouTube, it ought to have everyone question whose side Big Tech is really on. Are they really in the business of allowing populations to make informed decisions of their own, gaining information from a public forum of ideas? Or are they working on behalf of government actors to push an agenda?

One way or another, the courts will decide, but Trump’s class action lawsuit could be a pivotal moment in the future of Big Tech world power.

Has Social Media Become the Most Addictive Drug We Have Ever Seen?

Before we get started, I want you to take a few minutes and answer the following questions to yourself:

  1. Do you spend a lot of time thinking about social media or planning to use social media?
  2. Do you feel urges to use social media more and more?
  3. Do you use social media to forget about personal problems?
  4. Do you often try to reduce the use of social media without success?
  5. Do you become restless or troubled if unable to use social media?
  6. Do you use social media so much that it has had a negative impact on your job or studies?

How did you answer these questions?  If you answered yes to more than three of them, then according to the Addiction Center you may have, or may be developing, a social media addiction.  Research has shown an undeniable link between social media use, negative mental health, and low self-esteem.  Negative emotional reactions are produced not only by the social pressure of sharing things with others but also by the comparison of material things and lifestyles that these sites promote.
On Instagram and Facebook, users see curated content – advertisements and posts that are specifically designed to appeal to them based on their interests.  Individuals today, unlike at any other time in history, are seeing how other people live and how those lifestyles differ significantly from their own.  This sense of self-worth is what is being exploited as information is curated: children at a young age are being taught that if you are not a millionaire then you are not successful, and they are creating barometers of success based on invisible benchmarks.  This is leading to an increase in suicide and depression among young adults.

Social media has become a stimulant whose effects mimic those experienced by someone addicted to gambling or recreational drugs.  It has been shown that retweets, likes, and shares from these sites affect the part of the brain where dopamine becomes associated with reward. "[I]t's estimated that people talk about themselves around 30 to 40% of the time; however, social media is all about showing off one's life and accomplishments, so people talk about themselves a staggering 80% of the time. When a person posts a picture and gets positive social feedback, it stimulates the brain to release dopamine, which again rewards that behavior and perpetuates the social media habit."  "Chasing the high" is a common theme among individuals with addictive personalities, and when you see people on social media posting every aspect of their lives, from the meal they ate to their weekend getaway and everything in between, that is what they are chasing; the high is the satisfaction of other people liking the post.  We have all been there: you post a picture or a moment of great importance in your life, and the likes and reactions start pouring in. The feeling you get from that love differs significantly from the feeling you get when there is no reaction at all.  A recent Harvard study showed that "the act of disclosing information about oneself activates the same part of the brain that is associated with the sensation of pleasure, the same pleasure that we get from eating food, getting money or even having sex." Our brains have come to associate self-disclosure with a rewarding experience.  Ask yourself: when was the last time you posted something about a family member or friend who died, and why was that moment of sadness worth sharing with the world?
Researchers in this Harvard study found that "when people got to share their thoughts with a friend or family member, there was a larger amount of activity in the reward region of their brain, and less of a reward sensation when they were told their thoughts would be kept private."

"The social nature of our brains is biologically based," said lead researcher Matthew Lieberman, Ph.D., a UCLA professor of psychology and of psychiatry and biobehavioral sciences. This in itself helps you understand where social media has gone: it has evolved into a system that takes advantage of our biological makeup. "Although Facebook might not have been designed with the dorsomedial prefrontal cortex in mind, the social network is very much in sync with how our brains are wired." There is a reason that when your mind is idling, the first thing it wants to do is check social media. Lieberman, one of the founders of the field of social cognitive neuroscience, explains: "When I want to take a break from work, the brain network that comes on is the same network we use when we're looking through our Facebook timeline and seeing what our friends are up to. . . That's what our brain wants to do, especially when we take a break from work that requires other brain networks."

This is a very real issue with very real consequences.  The suicide rate for children and teens is rising.  According to a September 2020 report by the U.S. Department of Health and Human Services, the suicide rate for pediatric patients rose 57.4% from 2007 to 2018. It is the second-leading cause of death in children, behind only accidents.  Teens in the U.S. who spend more than 3 hours a day on social media may be at a heightened risk for mental health issues, according to a 2019 study in JAMA Psychiatry. The study, which was adjusted for previous mental health diagnoses, concludes that while adolescents using social media more intensively have an increased risk of internalizing problems or reporting mental health concerns, more research is needed on "whether setting limits on daily social media use, increasing media literacy, and redesigning social media platforms are effective means of reducing the burden of mental health problems in this population." Social media has become a coping mechanism for some to deal with their stress, loneliness, or depression.  We have all come into contact with someone who posts their entire life on social media, and more often than not we might brush it off, or even make a crude joke; but in fact, this may be someone who is hurting and looking for help in a place that offers no solace.

I write about this to emphasize a very real and dangerous issue that is growing worse every single day.  For far too long, social media has hidden behind a shield of immunity.

Section 230 is a provision of the 1996 Communications Decency Act that shields social media companies from liability for content posted by their users and allows them to remove lawful but objectionable posts.  Section 230 states that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider" (47 U.S.C. § 230).

In 1996, when this law was introduced and passed, the internet was still in its infancy, and no one at that time could have envisioned how big it would become.  At this point, social media corporations operate in an almost omnipotent capacity, creating their own governing boards and moderators to filter out negative information.  However, while the focus is often on the information being put out by users, what gets ignored is how that same information gets directed to the consumer.  You see, Facebook, Snapchat, Twitter, and even YouTube rely on certain users, commonly known as "influencers," to direct posts, information, advertisements, and product placement to other users.  To accomplish their goal, which at the end of the day is the same as any corporation's, to create a profit, information is directed at a person in whatever way will keep their attention.  At this point, there are little to no regulations on how information is directed at an individual.  For instance, the FCC has rules in place that "limit the amount of time broadcasters, cable operators, and satellite providers can devote to advertisements during children's programs"; however, no comparable rules govern social media, and there is only one case in which the FTC has levied fines for content directed at children. Even that suit was based more on the notion that Google, through its subsidiary YouTube, "illegally collected personal information from children without their parents' consent."  When it comes to advertising to children, Google itself sets the parameters.

Social media has grown too large for itself and has far outgrown its place as a private entity that cannot be regulated.  The FCC was created in 1934 to replace the outdated Federal Radio Commission.  Just as it was recognized in 1934 that technology calls for change, today we need to call on Congress to regulate social media; it is not too farfetched to say that our children and our children's futures depend on this.

In my next blog, I will discuss what regulation of social media could look like and explain in more detail how social media has grown too big for itself.

Hashing out Weed Advertising Rules on Social Media

Adweek published an article this morning discussing the issues facing Colorado's legal marijuana purveyors.  It seems that Twitter and Google prohibit, and Apple's app store limits, advertisements for weed, which is legal in only two states.  While Colorado has published its own set of rules and regulations for selling recreational marijuana, many national advertising platforms have yet to come up with their own strategies.  The issue is a significant one for advertisers using social media, given its inevitably national reach. The matter begs the question: is it possible to localize social media advertising?

Context Doesn’t Matter When Posting Rants

A defendant who posted a series of rants on the website "Ripoff Report" claimed that the nature and tone of the website, and the posts that appeared on it, were enough to defeat a claim of libel.  The rants targeted Plaintiff Piping Rock Partners and its sole shareholder.  The defendant claimed that the rants were just that, and raised an "everyone knows the internet is just for ranting and not to be taken too seriously" defense.

The Court disagreed and, with a shoutout to a popular search engine, ruled that anything that is searchable on Google is presumed true.

Piping Rock Partners, Inc. v. David Lerner Associates, Inc. (here) represents another case in the shifting tide toward giving more credibility to website postings.  Is it time to shift the presumption of posts from false to true?  I would argue context matters.  After all, think about all those dating website posts.  Looks like Poppy won this one.

The Government’s Ability to Read Your Email

The New York Times recently published an article entitled "Google Says Electronic Snooping by Government Should Be More Difficult." According to the article, "If a government wants to peek into your Web-based e-mail account, it is surprisingly easy, most of the time not even requiring a judge's approval." Click here to read the article.
