I don’t know how much money Trump’s lawsuits against Facebook, Twitter, and YouTube (and their CEOs) will help him raise, or whether it will gain him political support, but I do know one thing about these cases – they have no basis in current law.
Of course it’s not outside the realm of possibility that Republican judges in Florida will see it his way, but it seems very unlikely.
At issue is the infamous Section 230 of the Communications Decency Act (CDA) – 47 USC Section 230. The relevant part of this law states:
No provider or user of an interactive computer service shall be held liable on account of–
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected
Trump argues that the companies are state actors and are required to host content protected by the First Amendment.
However, the courts have consistently held that social media companies like Facebook are not state actors subject to the First Amendment, and that their decisions to delete or block access to a user’s account fall squarely within Section 230 immunity.
Not surprisingly, the prolific Prof. Eric Goldman explains why Trump has no case in this interview by Michael Smerconish.
Prof. Goldman has a paper coming shortly that analyzes 61 prior lawsuits brought by users over the termination of their accounts or the removal of their content. In every one of those cases the Internet service provider won.
On May 28, 2020 President Trump issued an “Executive Order on Preventing Online Censorship” (the Order). It takes aim at Twitter, Facebook and Google through the lens of 47 U.S. Code § 230 (Section 230), the federal law that allows internet platforms to host and moderate user-created content free of liability under state law. The Order came just days after Twitter, for the first time, added warning labels and fact-checking to several of Trump’s tweets.
A lot has already been written about the politics behind the Order. But what does the Order accomplish as a legal matter? Here’s my take, in brief.
First, the Executive Order directs the Commerce Department to ask the FCC to conduct rulemaking to interpret Section 230. Section 230 does not delegate rulemaking authority to the FCC, so I don’t see how the FCC could exercise such authority with respect to Section 230. If it tries, expect litigation. For in-depth discussion of this issue, see Harold Feld’s analysis here.
Second, the Executive Order instructs all federal agencies to report their online advertising expenditures. The Order doesn’t state what will be done with this information. Perhaps the agencies will be instructed to pull their advertising from these services? However, federal agency spending on Twitter is trivial, as discussed by CNBC here.
Third, it encourages the FTC to bring Section 5 enforcement actions against Internet companies for false marketing statements. The FTC already has enforcement authority over “unfair or deceptive acts or practices.” Whether it will exercise that authority against Twitter remains to be seen. However, it’s hard to believe that anything Twitter has done vis-a-vis Trump (or anyone else) constitutes an unfair or deceptive act or practice. This would have to be proven in court, so if the FTC pursues this, expect litigation.
Fourth, it instructs the U.S. attorney general (William Barr) to form a working group of state attorneys general to investigate how state laws can be used against Internet services, and to develop model state legislation to further the goals of the Order. Section 230 and the First Amendment would likely preempt any state law attempting to regulate Twitter, so this is a non-starter.
Fifth, it instructs the U.S. attorney general to draft legislation that would reform Section 230 and advance the goals of the Executive Order. OK, but this would require that a law reforming Section 230 actually be enacted. Unless the Republicans control both houses of Congress and the executive branch, this seems unlikely.
Section 230 of the Communications Decency Act has, once again, protected a website from a claim of defamation based on user postings.
Simply put, Section 230 of the CDA provides that a website isn’t liable for defamation (or any other non-intellectual property claim) based on user postings. The poster may be liable (if she can be identified), but the website is not. Typically, Section 230 cases involve defamation or interference with contract by the poster — copyright infringement based on user postings is handled by a separate statute, the DMCA.
Craft Beer Stellar, LLC’s suit against Glassdoor ran into this law head-first in a recent case decided by Massachusetts U.S. District Court Judge Dennis Saylor.
Craft Beer complained to Glassdoor over a critical posting by a Craft Beer franchisee (the fact that the post was by a franchisee rather than an employee is legally irrelevant). Glassdoor removed the posting on the ground that it violated Glassdoor’s community guidelines. The franchisee reposted, this time in compliance with the guidelines, and Glassdoor denied a request by Craft Beer to remove the second posting.
Craft Beer argued that by taking down the first review and allowing the second review to be posted Glassdoor lost its Section 230 immunity. The judge summarized its argument as follows:
Craft Beer essentially contends that Glassdoor’s decision to remove a “review” from its website for violating its community guidelines, combined with its subsequent decision to allow the updated, guidelines-compliant version of the “review” to be re-posted, constituted a material revision and change to the post’s content. Such a material revision, it contends, constituted an act of creating or developing the post’s content, and accordingly transformed Glassdoor from an (immunized) interactive computer service into an information-content provider not subject to the protections of §230.
Judge Saylor rejected this argument, noting that Glassdoor wrote neither of the two posts; it just made a decision to publish or withdraw the posts. First Circuit precedent holds that these kinds of “traditional editorial functions” — deciding whether to publish or withdraw content — fall squarely within Section 230’s grant of immunity. See Jane Doe No. 1 v. Backpage.com LLC (1st Cir. March 14, 2016) (“lawsuits seeking to hold a service provider liable for its exercise of a publisher’s traditional editorial functions — such as deciding whether to publish, withdraw, postpone or alter content — are barred”).
Craft Beer also claimed that Glassdoor had violated the Defend Trade Secrets Act (“DTSA”), 18 U.S.C. § 1836. However, as noted above, Section 230 provides protection for non-intellectual property claims. Although one would ordinarily think of a trade secret claim as an intellectual property claim (and therefore not covered by Section 230), the DTSA expressly states that the DTSA “shall not be construed to be a law pertaining to intellectual property for purposes of any other Act of Congress.” Accordingly, Section 230 provided Glassdoor with protection from the DTSA claim as well. (For an in-depth discussion of this issue see Professor Eric Goldman’s article, The Defend Trade Secrets Act Isn’t an ‘Intellectual Property’ Law.)
The larger problem for Craft Beer may be that not only did the judge dismiss its complaint, but the case has probably added publicity to the bad reviews Craft Beer sought to quash. Indeed, even if it had won the case and forced Glassdoor to take down the offending posts, potential franchisees researching the company online would find the posts quoted in court decisions in the case. As things now stand, Craft Beer is probably suffering to some extent from the Streisand Effect (for another example of Section 230 and the “Streisand Effect” see here). And, if it is considering an appeal to the First Circuit (a bad move, in my opinion), a decision from the First Circuit will only make matters worse.
The Communications Decency Act (CDA) is a federal law that protects online publishers from liability for the speech of others. The CDA gives online platforms the right to publish (or decline to publish) the ideas and opinions of users without the threat of being held liable for that content or forced to remove it.
However, people who are defamed online will sometimes go to extreme lengths to try to force online publishers to remove defamatory content posted by users. A notable First Circuit case that I wrote about recently illustrates how a lawyer attempted, unsuccessfully, to obtain copyright ownership of several defamatory posts and then force Ripoff Report to remove the posts. (See: The Copyright Workaround and Reputation Management: Small Justice v. Ripoff Report).
A California attorney tried something similar in Hassell v. Bird, a case decided by the California Supreme Court on July 2, 2018. In that case a lawyer (Dawn Hassell) sued a former client and the author of a Yelp review (Ava Bird) over a review that Hassell claimed was defamatory. Hassell got a default judgment holding that the review was defamatory along with an injunction ordering the review to be removed. She then delivered the injunction to Yelp (which was not a party in the suit) and demanded that it honor the injunction against Bird and remove the review. Yelp refused to do so. The case proceeded through appeals, ending up before the California Supreme Court.
The attorney’s strategy in this case was to purposefully not name Yelp as a defendant, since Yelp would easily have been dismissed from the case under the CDA. Instead, her strategy was to get an injunction against the defendant ordering her to remove the Yelp post, and then attempt to enforce that injunction against Yelp. Ava Bird assisted in the first part of this strategy by defaulting, although it appears she may not have been properly served.
The court addressed Hassell’s strategy, and answered the central issue in the case, as follows:
The question here is whether a different result should obtain because plaintiffs made the tactical decision not to name Yelp as a defendant. Put another way, we must decide whether plaintiffs’ litigation strategy allows them to accomplish indirectly what Congress has clearly forbidden them to achieve directly. We believe the answer is no . . . an order that treats an Internet intermediary ‘as the publisher or speaker of any information provided by another information content provider’ nevertheless falls within the parameters of [the CDA].
The court observed that even an injunction (as opposed to money damages) can impose a substantial burden on an online publisher:
An injunction like the removal order plaintiffs obtained can impose substantial burdens on an Internet intermediary. Even if it would be mechanically simple to implement such an order, compliance still could interfere with and undermine the viability of an online platform . . . furthermore, as this case illustrates, a seemingly straightforward removal order can generate substantial litigation over matters such as its validity or scope, or the manner in which it is implemented. The CDA allows these litigation burdens to be imposed upon the originators of online speech. But the unique position of Internet intermediaries convinced Congress to spare republishers of online content, in a situation such as the one here, from this sort of ongoing entanglement with the courts.
The court criticized Hassell’s strategy:
. . . plaintiffs’ maneuver, if accepted, could subvert a statutory scheme intended to promote online discourse and industry self-regulation. What plaintiffs did in attempting to deprive Yelp of immunity was creative, but it was not difficult. If plaintiffs’ approach were recognized as legitimate, in the future other plaintiffs could be expected to file lawsuits pressing a broad array of demands for injunctive relief against compliant or default-prone original sources of allegedly tortious online content. . . . Congress did not intend this result, any more than it intended that Internet intermediaries be bankrupted by damages imposed through lawsuits attacking what are, at their core, only decisions regarding the publication of third party content.
Yelp itself had the last laugh in this case, and it said as much on its blog:
The Hassell Law Group, which has always been a highly-rated business on Yelp and currently maintains five stars, has spent many years in the court system (and endured the resulting Streisand Effect) in an effort to force Yelp to silence a pair of outlier reviews. As we have observed before, litigation is never a good substitute for customer service and responsiveness, and had the law firm avoided the courtrooms and moved on, it would have saved time and money, and been able to focus more on the cases that truly matter the most — those of its clients.
While many performing artists and record companies complain that the Digital Millennium Copyright Act (the “DMCA”) puts them to the unfair burden of sending endless takedown notices, and argue that the law should require notice and “stay down,” supporters of Internet intermediaries and websites argue that court decisions have unreasonably narrowed the DMCA safe harbor.
A recent decision by the influential Ninth Circuit Court of Appeals (which includes California) adds to the concerns of the latter group.
LiveJournal, the defendant in this case, displayed on its website 20 photographs owned by Mavrix. Mavrix responded, not by sending DMCA “takedown” notices, as you might expect, but by filing suit for copyright infringement. LiveJournal responded that it was protected by the DMCA. However, to successfully invoke the DMCA’s safe harbor LiveJournal had to satisfy all of the legal requirements of the DMCA.
A key requirement is that infringing content have been posted “at the direction of the user.” In other words, the DMCA is designed to make websites immune from copyright infringement based on postings by users; it doesn’t protect a site from content posted or uploaded by the site itself – that is, by the site’s employees. The photos at issue were submitted by users and posted at their direction, and therefore, LiveJournal argued, it satisfied this DMCA requirement.
However, when it comes to the DMCA, the devil is in the details, and the outcome in any case depends on how the courts interpret those details. In the case of LiveJournal, photos are submitted by users, but they are posted only after they are reviewed and approved by volunteer moderators. For this reason, Mavrix argued, the photographs were not “posted at the direction of the user”; rather, they were posted by moderators who selected them from user submissions. Further, Mavrix argued that the moderators were “agents” of LiveJournal, and therefore their actions were legally attributed to LiveJournal. In other words, as “agents” of LiveJournal their actions were the same as if they were employees.
The district court rejected Mavrix’s arguments and ruled for LiveJournal, but the Ninth Circuit reversed, holding that Mavrix had a point – the moderators might very well be “agents” of LiveJournal, in which case LiveJournal would have failed this requirement of the DMCA and be liable for copyright infringement. In reaching this conclusion the court emphasized that the critical inquiry is not who submitted content, but who posted the content. The court rejected LiveJournal’s position that the words “at the direction of the user” include all user submissions, even when they are reviewed and selected by agents or employees of the service provider.
In the case of LiveJournal, because moderators screened and posted user submissions the issue is whether the moderators are “agents” of LiveJournal whose actions should be attributed to LiveJournal. In effect, the court equated agents with employees.
To make matters worse for websites hoping to use volunteer moderators, the legal “test” to determine whether moderators are agents gets into the arcane subject of agency law, a topic that rightly triggers the limbic system of any lawyer who suffered through agency law in law school. In this case the question is whether the level of control LiveJournal exercised over its volunteer moderators created an agency relationship based on “actual” or “apparent” authority. Trust me when I say that these are complex issues that no website owner would want to have to parse out while running its business, much less have to present and argue to a jury.
This ruling was a blow to LiveJournal, but the Ninth Circuit had more bad news to deliver. Even if LiveJournal was able to establish that the moderators were not agents of LiveJournal, Mavrix might be able to show that LiveJournal had “actual” or “red flag” knowledge that the postings were infringements. While the Ninth Circuit stated that “red flag” knowledge requires that the infringement be “immediately apparent to a non-expert,” the court ruled that the fact that some of the photos contained watermarks could have created red flag knowledge. Whether they did will be up to a jury to decide. If a jury decides that LiveJournal had the requisite knowledge with respect to one or more photos, LiveJournal will lose DMCA protection for those photos.
However, after this ruling the Ninth Circuit was still not done with LiveJournal. The DMCA requires that LiveJournal not have received a financial benefit from infringements that it had the right and ability to control. The court held that this benefit “need not be a substantial or a large proportion” of the website’s revenue, and added to the confusion around DMCA law by suggesting that such a benefit could be established based on the volume of infringing material on the site, even if this material did not belong to Mavrix and was not the subject of the current litigation.
What lessons can website operators draw from this case?
First, this is a very bad decision for any social media website that would like to moderate content, whether through the use of volunteers or by employees.
Any business based on LiveJournal’s business model – volunteer moderators who review user submissions and decide whether or not to post them – is at serious risk of losing DMCA protection. The degree of planning, administration and ongoing legal supervision necessary to be confident that moderators are not agents would be daunting.
It’s worth noting that this decision will be difficult to evade – the fact that a site may not be in one of the states included in the Ninth Circuit is not likely to provide protection. A site incorporated and operating outside the Ninth Circuit can be sued in the Ninth Circuit if it has minimum contacts there, and this is often easily established in the case of popular websites. The Mavrix case is so favorable for copyright owners seeking to challenge a DMCA safe harbor defense that it is likely to motivate forum shopping in the Ninth Circuit.
Second, the irony of this case is obvious – Mavrix creates an incentive to engage in no moderation or curation and post “all comers.” If there is no moderator (whether an agent or employee) to view a warning watermark, there can be no knowledge of infringement. Unregulated posting of user-generated content is likely to result in more copyright infringement, not less.
Third, to add to the confusion over how the DMCA should be applied to websites that host user-generated content screened by moderators, the Court of Appeals for the Tenth Circuit issued a decision in 2016 that appears to come to the opposite conclusion regarding the use of moderators. BWP Media USA Inc. v. Clarity Digital Group., LLC. This case may give LiveJournal a chance to persuade the Supreme Court to accept an appeal of the Mavrix case (based on a “circuit split”) – assuming, that is, that LiveJournal has the stomach, and the budget, to prolong this case further, rather than settle.
Lastly, it’s important to view this decision in the context of the DMCA as a whole. Any service provider hosting user-generated content has to pass through a punishing legal gauntlet before reaching the DMCA’s safe harbor:
(i) content must be stored at the direction of a user;
(ii) the provider must implement a policy to terminate repeat infringers, communicate it to users and reasonably enforce it;
(iii) the provider must register a designated agent with the Copyright Office;
(iv) the provider must respond expeditiously to take down notices;
(v) the provider may not have actual or red flag knowledge of infringement, nor may it be willfully blind to infringements;
(vi) the provider may not have the right and ability to control infringing content; and
(vii) the provider may not have a direct financial interest in the infringing content.
This is a challenging list of requirements and, as illustrated by the Mavrix case, each requirement is complex, subject to challenge by a copyright owner and subject to varying interpretations by different courts. If the service provider fails on even one point it loses its DMCA safe harbor protection.
After the Ninth Circuit’s decision in Mavrix the chances that a service provider will be able to successfully navigate this gauntlet are significantly reduced, at least in the Ninth Circuit.
Update: The Ninth Circuit clarified its position on whether the use of moderators deprives a website of DMCA protection in Ventura Content v. Motherless (2018). In Ventura the court held that screening for illegal material is permissible, and distinguished Mavrix on the ground that in that case the moderators screened for content that would appeal to LiveJournal’s readers (“new and exciting celebrity news”). “Because the users, not Motherless, decided what to post — except for [its] exclusion of illegal material . . . — the material . . . was posted at the direction of users.”
The U.S. Copyright Office has issued a new rule that has important implications for any website that allows “user generated content” (UGC). This includes, for example, videos (think YouTube), user reviews (think Amazon or Tripadvisor), and any site that allows user comments.
In order to avoid possible claims of copyright infringement based on UGC, website owners rely on the Digital Millennium Copyright Act (the “DMCA”). However, the DMCA imposes strict requirements on website owners, and failure to comply with even one of these requirements will result in the loss of protection.
One requirement is that the website register an agent with the Copyright Office. The contact information contained in the registration allows copyright owners to request a “take down” of the copyright owner’s content.
The Copyright Office is revamping its agent registration system, and as part of this process it is requiring website owners to re-register their DMCA agents by the end of 2017, and re-register every three years thereafter. Gesmer Updegrove LLP’s Client Advisory on this new rule is embedded in this post, below. You can also click here to go directly to the pdf file on the firm’s website.
The case arises from an issue inherent in the Digital Millennium Copyright Act. The DMCA allows copyright owners to request the “takedown” of a post that uses infringing content.
But, what does the copyright owner have to do to determine, first, whether fair use applies? Does it need to do anything at all?
This question has finally been decided by the Ninth Circuit in a much-anticipated decision issued on September 14, 2015.
The case had inauspicious beginnings. In 2007 Stephanie Lenz posted to YouTube a 29-second video of her toddler son dancing around the kitchen, with Prince’s song “Let’s Go Crazy” playing in the background. Universal sent a DMCA takedown notice to YouTube, but Ms. Lenz contended her use of the song was fair use, and therefore was non-infringing. Eventually the dispute made its way to federal court in California, with Ms. Lenz asserting that her use of the song was protected by fair use, and that Universal had failed to take fair use into consideration before requesting takedown of her video.
The issue before the court was whether, before sending a DMCA takedown notice, copyright holders must first evaluate whether the offending content qualifies as fair use. The court held that the copyright statute does require such an evaluation, but that the copyright holder need only form a “subjective good faith belief” that fair use does not apply. And, the copyright holder may not engage in “willful blindness” to avoid learning of fair use.
In this case Universal arguably failed to consider fair use at all.
The court does not answer the practical question now faced by Universal and others: what, exactly, must a copyright holder do to show subjective good faith under the DMCA? Noting that it was “mindful of the pressing crush of voluminous infringing content that copyright holders face in a digital age,” the court described generally what appears to be a low standard to satisfy the “good faith” test. The court opined that a subjective good faith belief does not require investigation of the allegedly infringing content. And, “without passing judgment,” it suggested that the use of computer algorithms appeared to be a “valid … middle ground” for processing content. However, the court failed to provide a standard for a computerized algorithmic test that might apply in the notoriously uncertain legal context of copyright fair use.
It is difficult to avoid the conclusion that this decision will increase the cost burden on content holders who wish to use the DMCA to force the takedown of copyright-infringing content on the Internet. While the court provides little guidance as to what a copyright content owner will have to do to show that it exercised “subjective good faith” before sending a takedown notification, it seems likely that compliance will require increased human involvement, and perhaps even legal consultation in “close cases.”
This case was originally filed by Ms. Lenz in 2007, eight years ago; however, it is far from concluded. The Ninth Circuit’s decision only sends the case back to the trial court for a trial under the legal standard enunciated by the Ninth Circuit. And even that determination can only be reached after the court (or a jury) concludes that the 29-second video was fair use of the Prince song in the first place, an issue that has yet to be taken up by the court.
What, one might ask, can Ms. Lenz expect to receive in the event she prevails at trial? First, the Ninth Circuit decision explicitly allows her to recover “nominal damages” — in other words, damages as low as $1. However, even if she prevails and recovers only one dollar, she would be entitled to her costs and attorney’s fees, which could be a substantial amount, notwithstanding the fact that Ms. Lenz is represented by counsel pro bono.
Of course, given the economics of this type of case, it’s unlikely we’ll see many similar cases in the future. Clearly, this was a “test case,” in which the principle, not monetary compensation, was the motivation. Not many recipients of DMCA takedown notices will bring suit when at best they can hope to recover nominal damages plus attorney’s fees.
Many observers have commented that if they had to identify one law that has had the greatest impact in encouraging the growth of the Internet, they would choose the Communications Decency Act (“CDA”) (47 USC § 230).
Under the CDA (also often referred to as “Section 230”) web sites are not liable for user submitted content. As a practical matter, in most cases this means Internet providers are not liable for defamation posted by users (many of whom are anonymous or judgment-proof).*
*Note: The DMCA, not the CDA, provides Internet providers with safe harbors for claims of copyright infringement based on user submitted content.
Two recent cases illustrate the reach and limitations of this law. In one case the CDA was held to protect the website owner from liability for defamation. In the other, the law did not protect the website from potential liability based on negligence.
Jones v. Dirty World
The CDA provides immunity from liability for information provided by users. However, if a site itself is the “content provider” — for example, the author of defamation — it is legally responsible for the publication. In other words, the CDA does not give Internet providers or web site owners license to engage in defamation, only immunity when their users do so.
Under the CDA the term “content provider” is defined as a person “that is responsible, in whole or in part, for the creation or development of information ….” Therefore, in many cases, the issue has been who is responsible for the “creation or development” of the defamatory content – the poster or the site owner?
Nik Richie owns Dirty World, an online tabloid (www.thedirty.com). Users, not Mr. Richie or his company, create most of the content, which often is unflattering to its subjects. However, Dirty World encourages offensive contributions by its “dirty army,” and it selects the items that are published from user contributions. In addition, Mr. Richie often adds a sentence or two of commentary or encouragement to the user contributions.
Sarah Jones, a teacher and cheerleader for the Cincinnati Bengals, was repeatedly and crudely defamed on the site. However, the defamation was contained in the posts written and contributed by users, not Richie or his company. In fact, it’s easy to see that Richie had been carefully coached as to what he could and could not say on the site (as distinct from what his contributors say).
Dirty World refused to remove the defamatory posts, and Sarah Jones (who apparently was unaware of the Streisand Effect) sued Richie. Two federal court trials ensued (a mistrial and a $338,000 verdict for Jones).
Before and during the trial proceedings Richie asserted immunity under the CDA. The trial judge, however, refused to apply the law in Dirty World’s favor. The district court held that “a website owner who intentionally encourages illegal or actionable third-party postings to which he adds his own comments ratifying or adopting the posts becomes a ‘creator’ or ‘developer’ of that content and is not entitled to immunity.” Of course, there was a reasonably strong argument that Dirty World and Richie did exactly this – encouraged defamatory postings and added comments that ratified or adopted the posts — and hence the jury verdict in Jones’ favor.
After the second trial Richie appealed to the U.S. Court of Appeals for the Sixth Circuit, which reversed, holding that Dirty World and Richie were immune from liability under the CDA.
The first question before the Sixth Circuit was whether Dirty World “developed” the material that defamed Sarah Jones. In a leading CDA case decided by the Ninth Circuit in 2008 — Fair Housing Council of San Fernando Valley v. Roommates.com, LLC — the Ninth Circuit established the following “material contribution” test: a website helps to develop unlawful content, and therefore is not entitled to immunity under the CDA, if it “contributes materially to the alleged illegality of the conduct.”
The Sixth Circuit adopted this test, and held that a “material contribution” means “being responsible for what makes the displayed content allegedly unlawful.” Dirty World was not responsible for the unlawful content concerning Ms. Jones.
Second, consistent with many other cases applying the CDA, the court held that soliciting defamatory submissions did not cause Dirty World to lose immunity.
Lastly, the Sixth Circuit rejected the district court’s holding that by “ratifying or adopting” third-party content a website loses CDA immunity: “A website operator cannot be responsible for what makes another party’s statement actionable by commenting on that statement post hoc. To be sure, a website operator’s previous comments on prior postings could encourage subsequent invidious postings, but that loose understanding of responsibility collapses into the encouragement measure of ‘development,’ which we reject.”
The $338,000 verdict was set aside, and the district court was instructed to enter judgment in favor of Richie and Dirty World.
The Sixth Circuit’s decision was no surprise. Many people in the legal community believed that the trial court judge was in error in failing to dismiss this case before trial. Nevertheless, it is a reminder of how far the CDA can go in protecting website owners from user postings, and adds to the road map lawyers can use to make sure their clients stay on the “safe” side of the line between legal and illegal conduct under this law.
Jane Doe 14 v. Internet Brands (dba Modelmayhem.com)
Like Dirty World, this case involved a sympathetic plaintiff. The plaintiff, “Jane Doe,” posted information about herself on the “Model Mayhem” site, a networking site for the modeling industry. Two rapists used the site to lure her to a fake audition, at which they drugged and raped her. She alleged that Internet Brands knew about the rapists, who had engaged in similar behavior before her attack, but failed to warn her and other users of the site. She filed suit, alleging negligence based on “failure to warn.”*
*note: The two men have been convicted of these crimes and sentenced to life in prison.
In this case, as in Dirty World, the district court got it wrong and was reversed on appeal. This time, however, the district court erred in the opposite direction, wrongly holding that the site was protected by the CDA.
The Ninth Circuit disagreed, stating –
Jane Doe … does not seek to hold Internet Brands liable as a “publisher or speaker” of content … or for Internet Brands’ failure to remove content posted on the website. [The rapists] are not alleged to have posted anything themselves. … The duty to warn … would not require Internet Brands to remove any user content or otherwise affect how it publishes such content. … In sum, Jane Doe’s negligent failure to warn claim does not seek to hold Internet Brands liable as the “publisher or speaker of any information provided by another information content provider.” As a result, we conclude that the CDA does not bar this claim.
This ruling has raised the hackles of advocates of broad CDA coverage. Their "parade of horribles" resulting from this decision includes questions about how broadly the duty to warn extends, practical questions about how a web site would provide effective warnings, and concerns about various unintended (and as yet hypothetical) consequences that may flow from this decision. However, based on the broad interpretation the courts have given the CDA in the last two decades, it seems unlikely that this case will have significant implications for CDA jurisprudence. Nevertheless, like Jones v. Dirty World, it is one more precedent lawyers must take into consideration in advising their clients.
Surprisingly few observers have asked the pertinent question here: do the Supreme Court's 2005 Grokster decision and the DMCA (the Digital Millennium Copyright Act) protect YouTube from liability for copyright-protected works posted by third parties?
In fact, YouTube was acquired by Google for $1.65 billion. It was then sued by a group of media companies, resulting in a marathon lawsuit that never went to trial, but yielded two district court decisions and one Second Circuit decision on the issues I identified in 2006. As I described in a two-part post in December 2013/January 2014, the second appeal to the Second Circuit had been fully briefed and was awaiting oral argument. Now the case has settled, on confidential terms of course. However, demonstrating the extent to which the interests of the media companies and YouTube have converged, the joint press release contained the unusual statement that the "settlement reflects the growing collaborative dialogue between our two companies on important opportunities, and we look forward to working more closely together."
We may never know the terms of the settlement, but rumor has it that the plaintiffs received no money; my guess is they recovered a token amount, if anything. All three decisions favored YouTube, and Viacom's case had been whittled down to next to nothing, even if it had been able to persuade the Second Circuit to crack the door a bit and remand the case a second time for damages on a limited number of video clips.
However, the settlement leaves some important questions unanswered:
Viacom’s argument that web sites don’t have to take any actions to “induce infringement” – that this basis for liability can be found based on the owner’s intent or state of mind alone – remains unresolved. This is the Grokster issue I identified in 2006. While I think Viacom’s argument was weak, it would have been helpful to have the Second Circuit resolve it.
Since the Second Circuit’s first ruling in April 2012 the courts have read the decision to reduce protection for web sites. Courts in New York applying the Second Circuit decision have held that a website can lose DMCA protection if it becomes aware of a specific infringement, or if it is aware of facts that would make it “obvious to a reasonable person” that a specific clip is infringing. Because the case has settled, the Second Circuit will have no opportunity to clarify this standard, at least in this case.
The Second Circuit will have no opportunity to clarify the “actual knowledge”/”facts or circumstances” sections of the DMCA. The distinction between these two provisions remains confusing to the lower courts and to lawyers who must advise their clients under this law.
The Second Circuit will have no opportunity to clarify its controversial comments (in its first decision) on “willful blindness,” and help the courts reconcile this concept with the DMCA’s notice-and-takedown procedure. As noted above, the settlement leaves in place the Second Circuit’s implication that awareness of specific infringement may result in infringement liability even in the absence of a take-down notice.
It’s likely that other cases presenting these issues will make their way to the Second Circuit (arguably the nation’s most influential copyright court), but it could be years before that happens. The industry could have used additional guidance in the meantime, and one consequence of this settlement is that it will get it later rather than sooner, if at all.
Not true, assert some of their competitors and the plaintiffs in this case. The plaintiffs allege that MyMovingReviews manipulates the reviews, deleting positive reviews of the plaintiffs and deleting negative reviews of their own company.
Web sites that allow consumer reviews are protected from copyright infringement under the DMCA, and from tort (e.g. defamation) claims under the Communications Decency Act (CDA) assuming, in each instance, that they meet the often strict requirements of the statutes. The defendants claimed the protection of the CDA, and moved to dismiss under that law. Not so, held Judge O’Toole –
The plaintiffs’ claims do not arise from the content of the reviews, whether they be disparaging, laudatory, or neither, but instead, the defendants’ alleged ill-intentioned deletion of positive reviews of the plaintiffs’ moving companies and deletion of negative reviews of their own company, coupled with various representations – that the website offers “accurate” data, that it is “serious about reviews quality,” and that readers “see the most accurate and up to date rating information to base your decision on.” The manner in which the information is presented, or withheld, is the conduct at issue, as well as the allegedly misleading ratings which result from such alleged manipulations. Such conduct provides substantial basis to find that the defendants were developers of the alleged misinformation.
The defendants argued that they were allowed to delete reviews under Section 230(c)(2) of the CDA, which provides that no operator of a website shall be liable for restricting access to material it deems (among other things) objectionable. However, the CDA requires that the website owner show good faith, and the plaintiffs had alleged bad faith.
The court also refused to dismiss a claim of copyright infringement based on the allegation that MyMovingReviews had copied text from a website belonging to one of the plaintiffs.
The court did, however, dismiss claims based on false advertising, unfair competition, tortious interference and trademark infringement. As to the last of these claims, the court rejected an argument in support of trademark infringement based on the controversial (and largely discredited) “initial interest confusion” doctrine, finding that plaintiffs had failed to adequately allege “confusion.”
A couple of weeks ago I returned to the offices of the URBusiness Network to discuss the Digital Millennium Copyright Act (DMCA). This was my second trip to the URBusiness Network, an online radio network with a wide range of business shows.
The subject of the first show, recorded last October, was web site liability for third-party postings under the Communications Decency Act (CDA). However, the CDA does not protect web sites from liability for user postings that violate copyright law, so copyright liability and the DMCA were the topics of the current show.
Once again it was a pleasure to be interviewed by Ruck Brutti, who was joined on this occasion by co-host Nathan Roman.
[Update: Viacom v. Youtube was settled before the Second Circuit rendered its decision on the appeal discussed in this post]
After the events described in part 1, Kevin Kickstarter, founder of YouPostVid, meets with his lawyer, Mr. Jagger, to discuss whether YouPostVid needs to change its approach to managing copyrighted videos posted by users of the site. In preparation for this meeting Kevin has read the decisions in Viacom v. YouTube, and Mr. Jagger* has familiarized himself with YouPostVid’s compliance practices under the DMCA.
*[Note] Mr. Jagger’s name is a play on the infamous lawyer in Great Expectations, by Charles Dickens.
Kevin Kickstarter: Mr. Jagger, what I don’t understand is this – we comply with valid DMCA takedown notices. We take down thousands of video clips a month in response to takedown notices.
However, the DMCA also says that we can lose our immunity if we have “actual knowledge” of infringement, or if we are “aware of facts or circumstances from which infringing activity is apparent” — what you call “red flag” knowledge — or if we engage in “willful blindness,” a concept I don’t understand at all. At the same time the DMCA says that we have no obligation to monitor for infringement. How can I run a business based on this confusing set of rules?
And, as if this were not enough, I see that in the first appeal in the YouTube case the Second Circuit said that we could lose immunity if we have induced infringement. And I read some of the briefs filed in the second appeal, and I see that Viacom is arguing that web sites don't have to take any actions to "induce infringement" – this can be found based on the owner's intent or state of mind. I'm totally confused. It seems that in terms of protecting YouPostVid the DMCA has more holes than Swiss cheese.
Mr. Jagger: You’re right, Kevin – this is a confusing statute, and if Viacom gets its way in this appeal, the DMCA’s protection of Internet service providers will be narrowed. You need to play it safe in anticipation of an adverse ruling in this case. OK?
Kevin: Yes, I understand. But this is an intolerable situation for my company, and I hope the Second Circuit is able to clarify the law, and soon. We need to know what the rules are. I can’t believe that the online industry doesn’t have clear rules for web site liability based on user generated content 15 years after this law was passed.
Let's get going – I have a hard stop at 3:00 today.
Mr. Jagger: OK. Let me begin by reminding you that there are a lot of ways you can lose DMCA protection that are easy to avoid. We know that your company has registered an agent to receive takedown notices, and that you have a repeat infringer policy, both of which are essential under the DMCA. You've also told me that you have instructed your employees not to upload videos to YouPostVid, since the DMCA only protects uploads by third parties. More than one video sharing site has been caught doing this and denied DMCA protection for employee uploads. You monitor for pornography and hate speech, but you don't monitor for copyright infringement by means of employee monitoring or automated blocking software, and the law is clear that you have no obligation to do so, although whether you should use blocking software is something we'll touch on later today.
These are all areas of the DMCA that are relatively clear. However, there are additional ways to lose the DMCA’s “safe harbor” protection that are at issue in the Viacom case and other cases pending in the federal courts in New York, and that are not so clear. I want to talk about those.
Kevin: OK, but make it quick – I don’t have much time today. I have a business to run.
Mr. Jagger: Understood. Let's start with what I think, in your company, is the easiest issue, "induced infringement." You know that Viacom has argued, based on the Supreme Court's 2005 Grokster decision, that it is illegal to distribute a device (including software, and by implication a web site), "with the object of promoting its use to infringe copyright, as shown by clear expression or other affirmative steps taken to foster infringement." There is no question that this is true – a service provider that "fosters" or encourages infringement risks copyright liability for acts of infringement by third parties who use its software, and cannot rely on the DMCA as a defense.
However, I think Viacom is overreaching on this issue – it argues that the service provider’s state of mind or “secret intent” is a factor in determining whether there has been inducement. In effect, it is arguing that YouTube should lose immunity under the DMCA safe harbor based on a “thought tort.” This argument seems particularly weak where, as in YouTube’s case and in your company, the service has substantial non-infringing uses.
I think that the Second Circuit will hold that an Internet service provider must engage in purposeful expression or conduct — affirmative steps — to induce infringement, and you have never done that.
But, better safe than sorry. I’m going to suggest that you hedge your bets – no emails like the ones you showed me in which you tell employees that popular, copyrighted videos are good for business. In fact, I recommend you stop emails discussing the pros and cons of various clips altogether – they have been YouTube’s worst enemy, and they can be your worst enemy.
Kevin: OK, I understand, I’ll do that. I hope you’re right. We have never encouraged or “fostered” copyright infringement in any way. Should I destroy the emails you’re referring to?
Mr. Jagger: No, I think it’s too late for that. The publisher we met with had its lawyer send me a letter demanding that we preserve internal communications, and you have to comply with that. Sorry.
Kevin: Darn ….
Actual or “Red Flag” Knowledge
The next two ways to lose protection involve what the law calls actual knowledge and "red flag" knowledge. As you noted, the law doesn't actually use the term "red flag," it uses the expression "facts or circumstances," but "red flag" is a common shorthand. And, it's difficult to separate these two concepts, so I'm going to discuss them together.
What a lot of people don't appreciate is that since the Second Circuit's first ruling in Viacom v. YouTube in April 2012, DMCA protection has actually narrowed. Courts in New York applying the Second Circuit decision have held that a website can lose DMCA protection if it becomes aware of a specific infringement, or if it is aware of facts that would make it "obvious to a reasonable person" that a specific clip is infringing. Rather than go into the legal theory, I'm going to tell you what this means as a practical matter.
First, if you receive a valid takedown notice you must act on it. We have discussed what a valid takedown notice is in the past, so I won’t repeat that here, but you might want to review the rules on that.
Second, if you receive any information from a non-copyright owner — for example, the “good samaritans” you told me about that have sent you emails informing you that there are specific infringing works on your site — you must investigate and remove those specific clips if you find that they were copied without the permission of the owner. In other words, something less than a formal takedown notice may establish “red flag” knowledge, requiring a site owner to investigate whether the clip is copyrighted, and if so whether it has been posted with permission.
Third, and perhaps most difficult, if you or any employee becomes aware of a specific clip where it would be objectively obvious to a reasonable person that the work is copyrighted, you must investigate whether it is an authorized or an illegal copy. The courts have held that if an employee of a web site “interacts” with a copyrighted clip — views the clip, “likes” it, features it or promotes it to a different location on the site — the site must apply the “objectively obvious” test. If it doesn’t, and the owner sues for copyright infringement, the site may not be protected by the DMCA.
Kevin: Whoa, Mr. Jagger, hold on a minute, hold up, please. I don’t get it. This makes no sense.
We have no obligation to look for copyrighted clips, but if we’re told about one, or one of my employees becomes aware of one, we need to take it down? This makes no sense. I understand that we need to investigate these “good samaritan” emails identifying infringing clips, but I don’t get the second piece of this. If an employee becomes aware of a specific clip I need to decide whether it would be “objectively obvious to a reasonable person” that the clip is infringing? If that’s the case I’m going to have to tell my employees not to browse the site. Is that what the law requires? You’ve got to be kidding me!
Mr. Jagger: I agree with you, Kevin, but I do not make the laws, would that I did. I am reminded of the comment by Mr. Bumble in Oliver Twist: "If the law supposes that, the law is a ass – a idiot."
Kevin: Mr. Jagger you are always quoting Dickens, or some other long-dead writer, most of whom I have never heard of. Stop your literary peregrinations, and get to the point, please! I haven’t had a migraine in years, but I feel one coming on now ….
Mr. Jagger: Sadly, I often hear this from my clients. I am sorry.
Kevin: Proceed, sir, please. Get this over with. This is worse than a root canal procedure.
Mr. Jagger: Very well. Yes, the safest legal course of action is to tell your employees to limit their viewing of uploads to your site.
Kevin: But our employees organize clips by subject, and move popular clips to the home page and various topic pages. They have to look at the clips to do that. Are you suggesting we stop this? That we can’t even look at video uploads on our own site? That what we’ve done in the past could lead to liability?
Mr. Jagger: I am afraid so. If you don’t you will need to train your staff to recognize copyrighted works under the “objectively obvious” test, and you will need to take down clips that satisfy that test, at least until the law is further clarified.
Kevin: You want me to provide legal training to my staff? Half of these people are high school students, interns, engineers . . . They aren’t lawyers; I can’t possibly ask them to apply this test.
Besides, this is not as simple as you — or these courts — appear to assume. What if the clip uses copyrighted material but falls under fair use? What if the clip is just a bunch of kids lip dubbing a song? Is that illegal, or fair use? What if it's a home-made video that uses only 20 or 30 seconds of a copyrighted song – is that infringement or fair use? How about an amateur music lesson of a copyrighted song? What if it's a creative video mashup? You expect me to tell my employees that they need to make these kinds of distinctions? They are not lawyers, and frankly I'm not convinced that even lawyers could sort infringing from fair use clips, given the confusing court decisions on fair use!
Mr. Jagger: You can, and you must, or you must tell them not to view clips on the site, unless you are willing to risk a lawsuit. Or, of course, you can license Audible Magic. I can give you guidelines on how to apply the “objectively obvious” test or identify fair use, and when they are in doubt you can send a link to the clips to me, and I’ll advise you.
Kevin: I bet you will ….
Mr. Jagger: So, shall we continue? I’m afraid I have more bad news …..
Kevin: Before you do that Mr. Jagger, let me just point out how ridiculous this has become. I know enough about copyright law to know that not only are network and cable shows and Justin Bieber songs and music videos copyrighted, but the fact is that almost every video clip on my site is protected by copyright law. If someone creates a cat video, or a baby video, isn't that automatically protected by copyright law? Isn't the Charlie Bit My Finger video copyrighted? How am I supposed to apply this "red flag" standard when almost every video on my site is copyrighted? Even Audible Magic won't block those videos. You're telling me that I, the founder and owner of this company, can't look at the video clips on the site without applying the "red flag" test to every clip I view?
Mr. Jagger: You are 100% correct, Kevin. Very likely, all of those videos are protected by copyright law. It’s difficult to reconcile the “notice and takedown” parts of the law with the “actual knowledge”/”facts or circumstances” sections. I think that this is the key issue the Second Circuit will have to deal with in the Viacom/YouTube appeal, and in an appeal in a case involving Vimeo, presenting this precise issue. In that case the court held that Vimeo had to weigh the type of music (professional v. amateur), how well-known the artist is, and other factors that could be used to distinguish amateur material from professional material. The court seems to assume that amateur material has been posted with the consent of the owner, but professional material has not. The court held that a “recognizable song,” played essentially in its entirety and in unedited form triggers the red flag, and the website owner must either take it down, or investigate to confirm it is not a copyright violation. And songs that are lip synced or mashups are no exception.
Unfortunately, the court ignores the fact that what is immediately “recognizable” to one person may not be recognizable in the least to another, so for now the best rule of thumb is, “if in doubt, take it down.” You cannot be liable for taking down a clip (the DMCA expressly states this), but you can be liable if you make the wrong decision and leave it up.
I know we are past your 3:00 stop time, but I should also mention that the Second Circuit in Viacom stated that a web site could be liable for “willful blindness” where the site owner is “aware of a high probability of the fact of infringement and consciously avoids confirming this fact.” However, it’s not clear how this is different from “red flag” knowledge, so I suggest we leave it be until the courts provide more guidance.
Kevin: Thanks, Mr. Jagger – are we done?
Mr. Jagger: I do have one more issue I need to warn you about. One federal district court in New York has held, in the MP3Tunes case, that the DMCA safe harbor applies to sound recordings — in other words music — recorded before February 15, 1972. However, more recently a different court in the same district, in the Vimeo case, held that the DMCA does not apply to pre-1972 sound recordings. The judge in the Vimeo case has authorized the parties to ask the Second Circuit to decide this issue while the Vimeo case is still pending — an “interlocutory” appeal — but there’s no guarantee the Second Circuit will exercise its discretion and hear this issue at this point in the case. This is obviously a question of great importance, but at the moment the law is not clear.
Kevin: Wait a minute – if we don’t take down pre-1972 sound recordings we risk liability, but if we review our clips to try to find pre-1972 recordings, we have to evaluate every clip we review to determine if it is an infringing copy? This is a catch-22 that we just can’t win.
Mr. Jagger: I agree ….
Kevin: Mr. Jagger, let me ask you a question – what do you see as the big picture here? Where are the courts trying to guide companies like mine with this ridiculous line of cases?
Mr. Jagger: I will tell you my theory, Kevin. Under the language of the DMCA the courts cannot force web sites like yours to use filtering software like Audible Magic, or ContentID, a system developed by Google. However, they can use the law to make it so difficult to avoid liability that, as a practical matter, that will be the only option. I think that, in the not too distant future, we may see an Internet in which almost every significant site that allows a significant number of uploads will either have some form of licensing agreement with content owners, or will use filtering software to block most copyrighted works. Other than renegade sites that can’t be reached by the courts, the early days of sites like YouTube, Vimeo, Veoh and MP3Tunes will become as much a distant memory as 5 1/4 inch floppy disks.
Kevin: I need to go. I think I can sell my company for a decent profit before the forces of darkness you describe put me out of business.