
An Overview of Social Media Law and Best Practices: What Every Company Needs to Know (NY)


JOHN DELANEY: So my co-speaker is Aaron Rubin. He's a partner in Morrison & Foerster's San Francisco office, and he's co-chair of the firm's Technology Transactions Group. Aaron and I co-edit a blog focused on the types of issues we're discussing today. It's called Socially Aware. You can find it at sociallyaware.com.

And what we try to do is capture best practices and emerging case law, and try to help educate our readers and alert them to new developments around social media, mobile apps, cloud computing, artificial intelligence, and other emerging disruptive technologies. And I should say that Aaron's practice focuses on technology-driven transactions and counseling, including advising clients on risk mitigation strategies around emerging technologies like social media and mobile apps.

So without further ado, we'll get started. And, again, I'll start. We've got a lot of ground to cover. But again, I want to sort of set the table before we turn to the legal issues and just some statistics.

Those of you who have been coming to this program for a couple of years-- remember, I used to do a lot of stats. We now know the statistics around social media are really astonishing. But I want to just highlight a few.

China's population: 1.38 billion people. Facebook's monthly active user community is now 2 billion people, so now significantly larger than the population of China. So if the Facebook community were a country-- and in some ways it functions like a country-- that Facebook community would be the world's largest country by population.

The Facebook community is more than six times larger than the population of the United States. Nearly one in four people on Earth is a monthly active user of Facebook-- one in four. And you've got to remember, if you're under 13, you're not supposed to be on Facebook. So there's a big segment of the world population that is not supposed to be on Facebook.

And if you've ever been to China and have tried to log into your Facebook account, you know it's blocked in China. So Facebook has been able to achieve this incredible growth to one in four human beings on Earth, despite not being able to sign up people below age 13 or people who reside in China.

And in social media circles, we now have what we call the "One Billion Club." The "One Billion Club" consists of social media platforms with over one billion monthly active users. There are currently four-- Facebook, YouTube, WhatsApp, and Facebook Messenger. And, of course, WhatsApp and Facebook Messenger are owned by Facebook. And Instagram is at 700 million users. So maybe by this time next year, Instagram will have entered the "One Billion Club."

Every 15 minutes, more than 49 million new pieces of content are posted to Facebook. We did the math. That's 4.75 billion pieces of content shared on Facebook daily.

We saw one study-- it's now a couple of years old-- that estimated that 4% of all photos ever taken, through the entire history of photography, are hosted on Facebook. And an estimated 20% of all photos taken in a given year will end up on Facebook. So there's nothing in the pre-social-media age to compare to such a large aggregation of content being provided by others and hosted. The photos hosted on Facebook, especially if you include Instagram, far outnumber the photos that Corbis, or any of the large archives, would have available.

For the more mature members of the audience like myself, a little walk down memory lane. If you were practicing law in 2004, on the left, we show what were the 12 largest websites at the time. On the right, we show, as of January of this year, what were the 12 largest.

You'll note that 8 of the 12 have dropped out. And of the eight new sites-- the replacement sites-- seven are social media sites: YouTube, Facebook, Reddit, Wikipedia, Twitter, Instagram, and Imgur. This is the first time I've seen Imgur on this list, but if you have a teenager, they're very familiar with Imgur. It's where you go to get memes, and GIFs, and things like that.

But I should note, just past the cutoff: number 13 is LinkedIn, which was actually in the top 12 last year. Number 14 is Twitch, which, if you have a teenager, you're probably very familiar with. It's where you go to watch other people play video games-- hugely popular, one of the most popular sites on the web. And number 15 is Craigslist. So if we extend this out to the top 15, social media and highly interactive sites, most of which are built on user-generated content, are the most popular sites-- or destinations-- in the country.

For this slide, I'll turn it over to Aaron.

AARON RUBIN: All right. Thank you, John. So one of the results of all of this social media use is that it's now more likely than not that if you find yourself at some piece of content on the internet-- say, the CNN site or the New York Times site-- you got there through some social channel, rather than through traditional internet search.

Just a few years ago, you might very well have arrived at some publisher website or some item of content on the internet because you typed a search term into a traditional search engine. But what this slide here shows is that that's no longer the case much of the time, and that, in fact, much more traffic is now driven to publisher content by social than it is through search.

Now this slide is a couple of years old. This is actually from 2014, and we haven't been able to find updated numbers. But we keep this slide in the presentation because it shows that snapshot in time-- you can see on the slide it's around mid-2014-- when social overtook search as a means of getting to publisher content on the internet.

And we have to assume that this trend has continued. Although I will say, you may have seen in the news in the past month or so that Facebook is tweaking its algorithm to favor friends-and-family content and connections over publisher content. So it's possible that tweak could affect this trend line somewhat, although I think it probably is continuing to move in that same direction.

Another phenomenon that contributes to all of this social media use is, of course, the switch to mobile. We now do most of our internet activity on mobile platforms rather than on traditional desktop computers.

And, of course, what are you doing when you're there typing away and you're on your phone? You're checking your Instagram, and you're looking at Facebook, and so forth. So the switch to mobile computing over desktop computing also contributes to this phenomenal growth of social media.

I'll now turn it back over to John to talk a bit about the corporate use of social media.

JOHN DELANEY: Yes, great. Thanks, Aaron. So moving now from having set the table to talking a little bit about, specifically, corporate use of social media. You know, we've heard marketing people at our clients say things like social media is the greatest development for marketers since the printing press.

And if you think about it, it really is an astonishing development. Because for the first time, companies on a very large scale can have one-on-one conversations and interactions with customers, with each customer. It's staggering.

That wasn't possible prior to the rise of social media, except at huge expense. Or it was usually a one-way conversation, as in direct mail. So this is now a two-way conversation.

And this next slide kind of highlights one of the impacts of this. I think this is intuitive. Nothing surprising here. And this is based on an older study. We haven't been able to find an updated version of this.

But this study showed that your biggest fans, your best customers, are the ones that are engaging with you on Facebook. So in other words, McDonald's best customers are the ones who are taking the time to go to McDonald's Facebook page and liking it, or following McDonald's on Twitter.

And for those of you who work closely with marketing people, or are in the marketing industry, these are your so-called evangelists. These are the people who can help promote your goods and services to their network of friends and family. And that can be an extremely effective way of marketing.

Because when people learn about your product or service through a friend, it has more credibility than when they learn about it through an advertisement-- a traditional advertisement-- or television ad. So it's very powerful.

And we'll be hearing a lot today about social influencers. These are the people who are fans that are so popular on social media that you can use them, subject to some important legal considerations we'll be talking about later today, to help convert their followers to become your customers. So that's the promise of social media.

But let's take a minute to talk about the dark side of social media. So on our blog, Aaron and I used to try to cover the social media fails, but they just come too fast and furiously. There are pretty much one, or two, or three of these a week. So we stopped trying to track them, but we still take note of the ones that catch our eye.

But these are basically companies getting into trouble in connection with their use of social media. And the fails fall into different categories. So we'll talk a little bit about those type of categories.

Probably the most popular category is mistakes by the company's own social media staff or, in some cases, senior executives at the company. This is an image that got a lot of attention. It's actually an Instagram post by the company American Apparel.

This was posted on July 4th, 2014, in celebration of Independence Day. American Apparel thought this was a picture of fireworks, which they presumably just took off the web. For those of you-- I see a couple of you shaking your heads-- you probably know this is actually a photo of the Challenger explosion, one of the great, sad, mournful moments in American history.

So they accidentally posted this in celebration of July 4th. It immediately went viral. There was a big backlash against American Apparel. They eventually posted an apology where they say the image was reblogged in error by one of our international social media employees who was born after the tragedy and was unaware of the event. So they're basically admitting, we allow interns to run around and post photos without any type of review process.

So they kind of threw this European intern under the bus. But ultimately, the damage, of course, is to the company's reputation. And while this was two or three days of bad publicity online, for some of your potential customers the damage to your reputation is going to stick in their minds for years.

AARON RUBIN: All right. So this next example illustrates issues around using celebrities in your social media promotions. What we see here is the actress Katherine Heigl in a picture coming out of a Duane Reade drugstore. Some quick-witted Duane Reade employee saw this image on the internet and said, oh, that's great, we're going to use this-- and posted it, to Instagram, I believe it was. It says, love a quick #DuaneReade run? Even Katherine Heigl can't resist shopping at #NYC's favorite drugstore.

The unfortunate thing is that they never spoke with Katherine Heigl about this or got her permission. She sued Duane Reade for $6 million. The case ended up settling for an undisclosed amount, so we don't know exactly what happened.

It should be, I think, intuitive that you can't use a celebrity's image for a promotional purpose without that person's permission. It's just that things happen so quickly and spontaneously in the social media world that this is the type of thing that can slip through fairly easily.

JOHN DELANEY: Yeah, and I would add if you and I, at home in our personal lives, had seen that photo and retweeted it, it would have been fine, because we're not commercial entities. And that's how users engage on social media. They share photos of celebrities and so forth.

The problem is when a for-profit company wants to participate in the fun and share photos too. Ultimately, there's someone in the social media group who tweets this type of thing at home and now wants to tweet it at work. Well, when you're connected with a big company, this looks promotional and can run afoul of the right of publicity.

Our next example is from a senior executive. I have to set up this tweet a little bit. The tweet at the bottom is from Twitter's CFO, Anthony Noto-- I think he just recently left Twitter-- a very respected senior executive in the social media industry. And he tweeted this cryptic message: I think we should buy them. He is on your schedule for December 15th or 16th. We need to sell him. I have a plan.

So this immediately attracted a lot of attention. It turns out he was apparently talking about an acquisition. And it led to a lot of speculation on who Twitter was planning to buy, what company. And you'll see the other tweet someone retweeted said, looks like Twitter's CFO just had the first ever M&A direct messaging fail.

So what people speculated was Twitter's CFO meant to send this as a direct message so it wouldn't be public, but accidentally failed to send it in that manner. So the message was public and immediately seized upon. And analysts and stock traders would start speculating on what was happening.

And the point here is that this person is a senior executive at Twitter, and he made a mistake using Twitter. So it can happen to any of us if it can happen to the CFO of Twitter. And this is a good horror story to remind your senior executives that these social media platforms really aren't a good communications channel for confidential communications around sensitive information, like M&A deals, or investigations, or responses to litigation. They really should be using more secure means of communication, where these types of mistakes are less likely to happen.

AARON RUBIN: OK. So this next example is from last year, from right at the beginning of the Trump administration, which actually feels like about 100 years ago at this point. But in any event, it actually involves a tweet that was posted even earlier than that.

So here's the story. In 2011, somebody at Trump Hotels posted this tweet, tell us your favorite travel memory-- was it a picture, a souvenir, a sunset? We'd love to hear it. And of course, this is your typical Twitter promotion where they were trying to get users to engage and post in response to this tweet. So that's all the way back in 2011.

Then Trump gets elected. And, of course, there's a lot of controversy around immigration policies and so forth. And this tweet suddenly reemerged, and people started posting, just last year, various things in response to it. So you can see some of the examples here: @TrumpHotels, my asylum grant, thanks for asking. My grandmothers traveled to England after being freed from Auschwitz.

So the lesson here, I guess, is the same lesson that you tell your kids, that once it's on the internet, it's there forever and can reemerge to haunt you. I don't know if we can even really call this one a fail, because the original tweet was perfectly OK. You can't really guard against an innocuous tweet like this that was fine in 2011 when it was issued. But it just goes to show that things can reemerge and bite you years later.

JOHN DELANEY: Great. And the next example is the rogue employee. These things happen a lot-- it's surprisingly common. And this one's very recent, I think just from January of this year, so about a month ago. So Spike TV, as some of you may know, has actually shut down. It's being rebranded as the Paramount Network.

But on the day they announced that they were shutting down Spike TV, the Spike Twitter account just went crazy and started tweeting-- a lot of this is not suitable for work, or for a PLI conference, so you can read it at your leisure, I'm not going to say it out loud-- but it started criticizing the shows on Spike, criticizing the other employees, criticizing the talent.

They changed the description for the account holder to "soon to be ex-Viacom employee." And there were probably 30 or more tweets, very derogatory toward the company. They remained up for about 24 hours, if I recall, and they went viral. I learned about it when I looked at my Twitter feed and saw people retweeting these.

And this is a pretty common situation. And of course, if you're going to have a reduction in force or a change of business model, one of the things that should be on any company's checklist is thinking about the social media accounts of the companies and who has access to the passwords. We'll talk a little bit later today about companies' social media policies which should address these type of issues and making sure there's some type of oversight and control.

Another common issue is failure to adequately secure a company's social media accounts. Again, an issue that really should be addressed in your company's social media policy. But we hear too often of corporate Twitter accounts, or Facebook accounts, that had "password" as the password-- things we know we shouldn't do as individuals with our own accounts, and that we especially shouldn't be doing on corporate accounts.

So I'm not sure what McDonald's password here is. But in March of last year, a tweet went out under McDonald's account, directed to Donald Trump, saying, you are actually a disgusting excuse of a president, and we would love to have Barack Obama back. Also, you have tiny hands. So this is kind of unusual.

If you're following McDonald's, you may actually be a Trump fan. So the Trump supporters went ballistic. And for the 15 minutes that this tweet was up, they were calling for boycotts. They were setting up "We hate McDonald's" type campaigns. And it turned out, of course, the account was hacked.

And so McDonald's sent out an email later in the day, based on our investigation, we have determined our Twitter account was hacked by an external source. So in a way, they handled it perfectly. They caught it, unlike the Spike situation where it lingered for hours.

They caught it. They addressed it as quickly as possible. They had a link to a description of what happened. So they had transparency.

But even then, in the 15 minutes, there are very likely people that saw the original post but never saw the correction. That's just the way social media works. So it could be the reputational damage, at least with some of their customers, may have already occurred.

So another thing we want to talk about-- a real rising category-- is, again, companies on social media wanting to be just like us. They want to be spontaneous, they want to make clever witticisms in response to someone else's tweets, and they want to be supportive and respond in real time to serious questions raised by customers. And we see this creating problems over and over again, because these tweets and Facebook posts are viewed as official company statements.

So here is Keurig. Many of you heard about this; it happened over the fall. In connection with Roy Moore running for Senate in Alabama, Sean Hannity had made some statements in support of Roy Moore, and there was an effort to boycott advertisers who advertised on Sean Hannity's show. It was a campaign that was primarily organized on social media.

So here Keurig, which is a coffee maker that advertises on Hannity's show, got a tweet directed to them saying, you are currently sponsoring Sean Hannity's show. He defends Roy Moore and attacks women who speak out against sexual harassment. Please reconsider.

So Keurig just responds off the cuff a few minutes later: Angelo, thank you for your concern and for bringing this to our attention. And they said, we're working with our media partner and Fox News to stop our ads from showing on the Hannity show-- an innocuous statement in response to a serious issue raised by someone on Twitter.

But it immediately sparked a firestorm. Sean Hannity fans began-- to me, this is a little strange-- filming themselves smashing and destroying their expensive Keurig coffee makers. And these videos went viral.

It created a PR issue for Keurig. There were calls for boycotting Keurig, et cetera, et cetera. And Keurig's CEO eventually issued an apology, saying, in effect, we probably shouldn't have addressed such an important issue in an off-the-cuff social media response. And this is a lesson for all companies: these types of important company decisions really should be made at the senior executive level, maybe even at the board level, with the PR consequences and so forth carefully thought through. When you respond on social media, you can very quickly create a firestorm around the response-- a firestorm that might not happen with a more measured, considered approach announced through a press conference or a traditional press release, rather than off the cuff on social media.

So Volvo did the exact same thing. Someone tweeted at Volvo about their ads on Sean Hannity's show. Volvo tweeted back, we've spoken to our ad agency, and we're going to stop advertising on that show. And liberals on Twitter went crazy with photos, allegedly, of Hannity supporters destroying their Volvo cars. I don't know if that actually happened.

And then Aaron and I wanted to touch on what we see as the disturbing next wave: social media being used to embarrass companies. Some of you may have seen this image. It was circulated widely on Twitter and Instagram in September of 2017, in connection with the national anthem protests happening in the NFL.

And so it's an image, purportedly, of a Seattle Seahawks football player burning an American flag in the Seahawks locker room. This is Michael Bennett, who's a defensive end for the Seahawks. And the team-- including the coach-- looks like they're cheering him on.

This went viral-- calls for boycotts of the NFL and of the Seahawks. And here was the actual photo. There was no flag burning in the locker room, of course. But there's an old saying: a rumor is halfway around the world while truth is still putting its boots on.

So this image went viral, got seen by millions of people, got retweeted by tens of thousands of people. And by the time the correction came out, how many of the people who circulated and retweeted the original false image ever saw it? But I will say, it's about to get worse.

There's already technology in development that's pretty close to commercial use. We were going to show an example of it, but we just didn't have time. But I would encourage you to take a look at it.

So Stanford has developed a technology that allows the manipulation of video footage of people, where you can literally control what they're saying-- they call it something like puppet technology. So they have examples of, say, Barack Obama endorsing Donald Trump. And imagine, in the future, we're going to see videos where a company CEO appears to be saying something on video that looks real.

And, in fact, the video itself is real-- it's just been expertly manipulated with this new software technology to have the CEO saying something offensive, hateful, and damaging to the company, something that could affect the stock price. And it's only increasing.

We talk about fake news now. We'll touch on fake news later today. But this problem's going to magnify tenfold, where video itself is going to become unreliable and subject to accusations. It's not just photos, like in the Seahawks situation.

And we don't have time to go through this. And some of this will be touched on in other sessions today. But we did set out some best practices around these corporate fails that you might want to take a look at in advising your clients on staying out of trouble in their social media campaigns. And it's a couple pages. Good.

So we're now going to get into legal issues, particularly around user-generated content. And I wanted to start with this statistic: 11 of the 15 largest internet companies are based in the United States, and the other four are based in China-- where, as we've already mentioned, Facebook is banned, creating opportunities for Chinese-based companies. Twitter is also banned in China.

So we ask ourselves, why are so many of the world's largest internet companies based in the United States? And one theory we have involves two transformative laws signed by Bill Clinton-- the Communications Decency Act, signed into law in 1996, and the Digital Millennium Copyright Act, signed in 1998-- right when the internet industry was just beginning to really take off. These two laws allowed the US internet industry to become supercharged by addressing a lot of the common liability concerns for internet companies and website operators around user-generated content.

So these two laws made it possible to grow a business built on user-generated content, without the risk of being sued out of existence, which, but for these laws, would be a very real risk for companies dealing with user-generated content. So let's first talk about the Communications Decency Act.

AARON RUBIN: All right. So before we actually get into the Communications Decency Act, let us hearken back to those days of yore, back in the early '90s, when the commercial internet was just getting going, and we had some early services like CompuServe, Prodigy, and so forth. These were really the first large-scale commercial efforts to create an online community. It wasn't quite social media yet in the way that we think of it today, but it was the beginning of it.

And these were large services that offered news forums and bulletin boards where people could post, and that type of thing. So we started to see, as you might expect, people behaving badly on these platforms, and then some lawsuits came out of that. And these early lawsuits are important in setting the stage for what subsequently happened with the Communications Decency Act in '96, like John mentioned.

One of the early cases involved CompuServe. It was called Cubby v. CompuServe, a New York case. And just to cut it short, basically what happened is somebody anonymously posted some allegedly libelous, defamatory information about the plaintiff. The plaintiff was a guy named Robert Blanchard, who had a company called Cubby, Inc. that ran a business newsletter, or something of that nature.

And somebody posted on a CompuServe news board some information that Robert Blanchard thought was defamatory of him and his company. So he sued-- not the anonymous poster, because they were anonymous and potentially judgment-proof. Rather, he sued CompuServe for hosting this information, or for publishing this information. And he sued just as a straight defamation case.

And the court held that, yes, in fact, online service providers like CompuServe are subject to your standard, everyday defamation law when it comes to hosting content, just the same way any brick-and-mortar publisher might be. But the court then went on to hold that, in this particular case, CompuServe wasn't a publisher in the sense that a magazine or a newspaper might be, but rather was a distributor-- sort of like a newsstand, or a magazine store, or something like that-- with respect to this information.

And in defamation law, that distinction is important-- in traditional defamation law-- because a publisher is responsible for the content that it publishes, regardless of whether it knows that it's defamatory. Whereas a distributor, under traditional defamation law, is only liable if it knows, or has reason to know, that the material is defamatory.

So in this case, the court said, well, CompuServe is really more like a distributor. And it wasn't monitoring the forums and so forth, and so therefore didn't know that this content was defamatory. And therefore, it was not liable in this particular case.

But nonetheless, the case is important, because it set the stage for courts to treat online service providers, or platforms like CompuServe, as publishers or distributors under traditional defamation law. It also created a perverse incentive for platforms not to monitor what was going on on their platforms. Because, in this case, CompuServe got off the hook precisely because it didn't know what was happening.

That was only reinforced a few years later by another New York case, Stratton Oakmont v. Prodigy. This involved the Prodigy platform. And here, users on a financial news forum posted information about a company called Stratton Oakmont and its president that was allegedly defamatory.

So again, the company sued the platform-- Prodigy. And here, the court held that Prodigy was in fact liable as a publisher of this content-- not merely as a distributor, but as a publisher-- because, among other things, Prodigy had content guidelines for its forums and enforced those guidelines through what it called board leaders, who were sort of like super-user type people.

And it used screening software to screen for offensive language. So it supervised what was going on on its platform. Therefore, the court held, it was a publisher, and it was liable for the defamatory, or allegedly defamatory, content that its users had posted.

Now you can imagine that this was a disturbing result for this budding industry of these sort of online quasi-social media type of platforms. If they were liable for all of the content that got posted on their platforms, that's potentially a killer. I mean, that could stop them in their tracks.

So it was just the next year after the Prodigy case that Congress enacted, and President Clinton signed into law, the Communications Decency Act. The particular provision of the CDA, as we like to call it, that is important to us is section 230.

There were, in fact, a lot of other things in the Communications Decency Act. It was actually primarily about stopping pornography and that type of thing on the internet. A lot of that got struck down as unconstitutional.

But the piece of it that lives on, and that's particularly important to us, is section 230. And we've got the statutory language here on this slide. What section 230(c)(1) says is that no provider or user of an interactive computer service-- and you can sort of just think of an interactive computer service as a website or online platform-- shall be treated as the publisher or speaker of any information provided by another information content provider. And for our purposes, you can think of an information content provider as a user.

So to sort of simplify this, it basically says that an online platform, or a website, or a social media site, is not going to be treated as the publisher or speaker of the content posted by users.

JOHN DELANEY: And just to add to that, it preempts all 50 state laws to the contrary, with some exceptions, which Aaron will touch on.

AARON RUBIN: Right, yeah.

JOHN DELANEY: It's very powerful.


JOHN DELANEY: And it creates a different set of rules for online companies and offline companies. So take a letter to the editor that contains defamatory content. If it's published in the physical New York Times, the New York Times might be held potentially liable for that content, even though it comes from a third party, since it appears in the physical paper. But if it appears in the online edition of the New York Times, this set of safe harbors comes into play.

AARON RUBIN: Yeah, that's exactly right. And that's a little bit counterintuitive for some people to think that we're treating online publishers completely differently from how we treat brick-and-mortar publishers. And the answer is yes, we are. That's exactly the point. And that's exactly what this does.

And as John said, this preempts defamation law and a whole bunch of other laws in terms of user-generated content throughout the country. It's been interpreted very broadly and is really sort of foundational to the modern internet as we know it.

Now there are some exceptions, some statutory exceptions, actually written into the statute itself, as this slide shows. Probably, in practical terms, the most important one, at least in our practice anyway, is that section 230 does not provide immunity with respect to infringement of intellectual property laws. Now John will talk a little bit later about the DMCA, which does provide a safe harbor, at least for copyright infringement. But the CDA does not provide a safe harbor when it comes to intellectual property or some of these other exceptions that you see here.

It's also important to keep in mind that it's a US law. So the internet is worldwide, and many of our clients have operations not just in the US and have websites and so forth outside the US. So obviously, the CDA does not apply there.

JOHN DELANEY: With that being said, the law was later supplemented to create protection on the enforcement side. To the extent you don't have assets overseas, if someone gets a judgment against you abroad and tries to enforce that judgment in the US, and that judgment is inconsistent with the safe harbor, then the court cannot enforce that judgment in the US.

AARON RUBIN: Right. OK, so now let's just talk a little bit about some of the cases that have interpreted the CDA. Because this is really-- the history of the cases interpreting the CDA is really what's most important. The statutory language itself is arguably somewhat cryptic. But the interpretation of the CDA since it was enacted is really what gives it its strength.

And one of the very earliest cases, and probably sort of the most important early case interpreting the CDA, was a case called Zeran versus America Online. It's a Fourth Circuit case from 1997.

The facts here were that there was an AOL bulletin board. And on that AOL bulletin board, somebody started advertising t-shirts making fun of the Oklahoma City bombing. For those of you who are too young to remember, this was a bombing of a federal building in Oklahoma City.

It killed 168 people, many of them children at a daycare center that was located in the building. It was an act of domestic terrorism. It was a very, very serious thing and very much in the news at the time.

And somebody posted on this AOL bulletin board these supposed ads for t-shirts that said things like, finally a daycare center that keeps the kids quiet-- Oklahoma in 1995. So offensive slogans and so forth. There were a number of them.

And they had in them a telephone number and a person's name to call to order the t-shirts. And it was this poor guy, Kenneth Zeran, who, as it turns out, had nothing to do with it. He was, as far as anybody could tell, just chosen at random for a prank. But his name was attached to these ads.

He started getting phone calls, and death threats, and so forth. He complained to America Online. They took down some of the ads, but then they popped back up. Eventually, he ended up suing AOL, alleging that it was negligent in failing to adequately respond to these fake posts after becoming aware that they were, in fact, fake.

AOL said that it was protected by the then newly-enacted CDA section 230. And Zeran argued, citing those earlier cases I mentioned-- the Prodigy case and the CompuServe case-- that he was trying to hold AOL responsible as a distributor of information rather than a publisher, and that CDA 230, which had not yet really been interpreted by the courts, only applied to publisher liability.

Remember the statutory language, will not be treated as a publisher or speaker. So he used that to argue that CDA 230 really only provided a safe harbor with respect to publisher liability, not distributor liability, and that he was trying to hold AOL liable as a distributor of this information rather than as a publisher.

The court rejected this. The Fourth Circuit said no, CDA 230 applies across the board: it provides a safe harbor against defamation claims and against the type of distributor claim he was trying to make. And so he lost. And that was an important ruling because it really set the stage for the CDA to be interpreted broadly.

Now, we have seen a few exceptions over the years where plaintiffs have managed to find little chinks in the armor of section 230. One of those came relatively early on. This is a well-known case out of the Ninth Circuit in 2008 called Fair Housing Council versus Roommates.com.

What happened here was that it was a website called Roommates.com, which, you may have discerned, involved finding roommates, basically just a matching site. If you had a room or you needed a room, you could go on this website and try to get matched up.

The website apparently asked various questions so that they could match compatible people up. But among those questions that were asked in these dropdown menus that you can see on the slide here, were various questions that allegedly asked about categories that you just can't consider in making housing decisions.

And among those questions were things about sexual orientation, family status, that kind of thing. And so the Fair Housing Council said that, well, Roommates.com is violating fair housing laws by requiring people to answer these questions. Now, of course, Roommates.com said no, no, no. Section 230-- you can't hold us responsible for the potentially discriminatory answers of our users.

And this is where it gets interesting. The court said, well, no, Roommates.com, you actually required people to give those answers through your dropdown menus. You contributed to the discrimination by having people provide answers through these dropdown menus. And therefore, you contributed to the development of this content.

It was not purely user content. And therefore, CDA 230 doesn't immunize Roommates.com with respect to this aspect of their website. The court also did say that where Roommates.com had just text boxes, free form text boxes, that users could type in whatever they want-- with respect to that input, CDA 230 covered it.

But with respect to these dropdown menus, because roommates.com contributed to the development of this content, it was not covered by Section 230. And so you still see in the cases even today, there's this sort of tension, where plaintiffs try to make the argument that a website is somehow contributing to the content rather than merely being a passive recipient of the content.

So the cases proceeded on. We had the Roommates case out there. But for the most part, the trend for many years was really toward a broad and robust interpretation by courts of the CDA section 230 safe harbor in all variety of cases, not just defamation, but all variety of cases. Wherever the claim was based on user content, the trend was definitely towards courts holding that the website, or the online service provider, was protected by the safe harbor.

This is just one case, came out-- what is it, a few years ago, 2014-- a Sixth Circuit case called Jones versus Dirty World Entertainment. We just picked this one because it's just illustrative of this broad interpretation that was the norm for many years. It involved a plaintiff and a website called TheDirty.com.

It was a pretty nasty website, actually. Its whole purpose is to solicit nasty, scandalous information about people-- not just celebrities, but just regular people too. And so users post this nasty information on this website.

The guy who runs the website apparently eggs people on, encourages them to post more. Really, the site is set up entirely for this purpose. And some user posted information about the plaintiff here, Sarah Jones.

She was a cheerleader for the Cincinnati Bengals, also apparently a high school teacher. And some anonymous person posted really nasty information about her, about her sex life, claiming that she had STDs, all sorts of things. She sued the website. She sued TheDirty.com.

And there were various different facts about how the guy who runs TheDirty.com commented on the posts about her, and so forth. And the argument was that, therefore, the website was contributing to the content. It wasn't just passively there waiting for it to come in, but was actually encouraging it, and so forth.

But nonetheless, in this case, the court held that, no, it doesn't matter. Even if the website is encouraging the information and so forth, the website is still protected by CDA 230. So just an example of how broadly the statute has been interpreted, even where the content at issue is really very objectionable.

But that was 2014. Soon after that, we started to see this string of cases where plaintiffs were having much more success in finding these little chinks in the armor of CDA 230. So just go to a couple of examples here. There's a number of them.

This is a case involving this website called ModelMayhem. The name of the case was Doe number 14 versus Internet Brands, so a Ninth Circuit case from 2014. This was a website that matched aspiring models with modeling gigs. So basically a model could go on and create a profile, and then photographers, or people looking for models, could go on and hire these models.

So the plaintiff in this case, a Jane Doe plaintiff, posted her profile. And apparently what happened was that through her profile, she was targeted by two assailants who lured her somewhere to a fake photo shoot and drugged and raped her, is what happened. And she then sued ModelMayhem.

She sued the website for failing to warn her that people, bad actors, were using the website to find victims, essentially. And, of course, the website argued that section 230 applied. The website's only connection to her was her profile, which obviously was user content. So they said that 230 applies. We have no connection to this whatsoever, other than through this user content that she posted.

But here, the court said that this so-called failure to warn claim that she was making did not depend on ModelMayhem being the publisher or speaker of content. The basis of her claim was not trying to treat ModelMayhem as the publisher or speaker of content, which of course is what Section 230 covers. And therefore, the website was not protected by Section 230 with respect to this claim.

Now, what ended up happening is section 230 didn't apply, according to the court. But the case went up to the Ninth Circuit, went back down. What ended up happening was that the website was held to be not liable in the first instance because it simply didn't have a duty to warn her at all. There was not the kind of relationship between her and the website that created such a duty.

So the result was that the website got off the hook, but only after a lot of litigation-- obviously, a lot of expense and so forth for both sides-- that might have been cut off had 230 been held to apply at an earlier stage.

JOHN DELANEY: And as a direct result of this case, we're seeing a big increase in plaintiffs using a duty-to-warn theory in all different types of contexts. Because now there is a Ninth Circuit decision saying that, at least in some instances, you are not protected from a duty-to-warn claim under the safe harbor.

AARON RUBIN: Right. And that's the way it goes with these 230 cases. That's why-- I think I've said this phrase a couple of times, and I was sort of like-- plaintiffs looking for chinks in the armor. So 230 is this sort of edifice, and plaintiffs are constantly trying to find ways through it, or under it, or around it. And whenever any one plaintiff has any luck at all-- like in this case, with this so-called failure to warn claim-- then suddenly you see a string of other cases making those similar kinds of arguments.

This case here that we're going to talk about now is called Hassell versus Bird. This is a California case from 2016, right at the end of 2016. Got a lot of attention at the time because it creates a potentially interesting end run around section 230.

So what happened here is that there was an attorney, Dawn Hassell. And her former client, a woman named Ava Bird, posted a number of reviews, or allegedly posted reviews-- they were anonymous-- on Yelp that were critical of Hassell.

Now, Dawn Hassell, the attorney, was savvy enough to know that if she just went straight after Yelp, she probably wouldn't have much luck. Because section 230 would protect Yelp. This is just straight user content that was posted to Yelp.

So she did what is really the right thing, which is she sued Ava Bird, the person who actually posted the content. And, of course, section 230 doesn't protect the person who posted the content at all. It just protects the platform.

So she did the right thing and went after Ava Bird. Now, Ava Bird never showed up in court. So Dawn Hassell got a default judgment. So she won. But of course, that doesn't do her much good at that point, because the reviews are still up there on Yelp. What she wants is to have those reviews taken down.

But now, she has a default judgment in hand. So she goes to the court and says, hey, I have a default judgment against this Yelp user that says right here that these posts are defamatory. And now I want to have these posts taken down. I want an injunction from the court requiring Yelp to take these posts down.

Yelp comes in and says, well, section 230. You can't hold us liable for these posts. And here's where it gets interesting. The court says, well, no. The plaintiff here, Ms. Hassell, isn't trying to impose liability on Yelp for these defamatory posts. She just wants them taken down.

And so the court says, that's not what Section 230 is about. Section 230 is protecting the platform from liability. Here, she's not trying to impose liability. She's just trying to get the posts taken down. So therefore, the court went ahead and issued the injunction requiring Yelp to take the posts down.

Now, that's an interesting way to get around 230. That means that anytime somebody doesn't like something that's posted on some website somewhere, you go and you sue the person who posted it, who may just be some random anonymous person who never shows up.

You get a default judgment. You take that to the court. And now you try to get an injunction-- requiring that content to be taken down-- against a website that was not even a party to the underlying litigation.

So you can understand that platforms like Yelp and many others were disturbed by this. There was a lot of outcry at the time that this case came down. It's currently on appeal to the California Supreme Court. So we'll see what happens with it. Although I don't think we've seen a ton of other cases like this come up yet. Maybe they're proliferating out there and we haven't heard about them.

JOHN DELANEY: Well, actually, in San Francisco, one of our panelists at the end of the day was a lawyer at Yelp. And he mentioned that Yelp is recently seeing this technique being used by other plaintiffs-- no decisions yet. And in one case, he mentioned, a law firm got into trouble because it was apparently promoting this to dentists and lawyers: we'll help get your bad reviews taken down by using this type of technique.

So he said it's a real--

AARON RUBIN: It's a real thing. Yeah, yeah. So OK. So anyway, so-- oh, sir. Question.

AUDIENCE: What would be the basis of the injunction? Wouldn't she have to have a cause of action against Yelp?

JOHN DELANEY: No. The idea is that the missing defendant was ordered to remove the content herself, which is fine under the CDA. That's how this content gets removed. But because she didn't appear and she's in default, the court then ordered Yelp to remove it, with Yelp facing contempt of court if it failed to do so.

And I'm not a litigator. But I think outside of the CDA context, that, I'm told, may not be unusual. But in the CDA context, Yelp is, in effect, deprived of its ability to go in and raise the CDA defense with respect to its hosting of the content.

And there's kind of been an understanding in CDA case law that the service provider never has to remove it. Even if it's found to be defamatory, it's really the obligation of the party that posted it to remove it.

AARON RUBIN: So these are a couple of cases that illustrate this trend that we started to see, I guess, sort of around the beginning of 2015, where plaintiffs were managing to find these sort of chinks in the armor and were having more success with overcoming CDA 230 defenses put up by these platforms.

This next slide, which is in the materials, you can peruse it at your leisure. I'm not going to go through each of these cases. But it's just a number of little squibs about other cases that sort of continued this trend. So where are we now?

Well, we're continuing to see CDA 230 come under attack in various ways. And I think it's just a natural reaction that people see websites getting protected for content that is-- you know, like in that Dirty.com case-- objectionable. Just objectively, it's pretty objectionable. And you think, there ought to be a law.

So people are starting-- the knives are starting to come out for section 230. We see that in some of these cases. There's been a whole string of cases recently where various websites that host content from actual terrorist organizations and extremist organizations like ISIS-- tweets and things like that-- we've had plaintiffs make claims that these platforms are quote "providing material support to terrorists."

So far, in the cases that we've seen come down, courts have been holding that "providing material support to terrorists" is not an exception to CDA 230, and that CDA 230 still provides a safe harbor, even for this type of content. But this is another line of attack that we've seen.

CDA 230 has also been challenged legislatively. There are bills under consideration in both the House and the Senate seeking to create exceptions to CDA 230 for content related to sex trafficking. The target was originally websites, like Backpage.com or other websites, that actually post online prostitution ads and that type of thing, or allegedly do.

So the idea was that these laws would create an exception to CDA 230 that would allow authorities to go after that type of content that allegedly promoted sex trafficking, despite the fact that CDA 230 might protect all sorts of other user content. But the way that these laws are drafted, many feel, reaches more broadly than that and could have a chilling effect on content more generally.

Because in order to comply with the sex trafficking aspect of this, platforms would have to be overly vigilant in policing other types of content. So those laws-- they haven't been enacted yet. But they're under consideration.

And we continue to see 230 cases come down all the time. And we keep track of them, and we watch these trends. For those who are supporters of 230 and who believe that it's an important law to protect free speech on the internet and so forth, there have been some encouraging signs.

We have here a slide showing a number of cases that have a fairly sort of what we think of as a traditional application of 230, and in fact find for the defendants in much the way that we typically saw prior to 2015. So it sort of ebbs and flows. But we are continuing to keep track. And I'm sure we'll have more on this next year.

So with that, I'm going to turn it over to John to talk about the DMCA.

JOHN DELANEY: As Aaron mentioned, section 230 has a specific carve-out for intellectual property claims. So you can't use the safe harbor that Aaron has been discussing if you're sued for intellectual property infringement in connection with user-generated content, whether that's trademark infringement, copyright infringement, or potentially patent infringement, trade secret infringement, right of-- there's been a split in the court whether the right of publicity is an intellectual property claim or not.

If it is an intellectual property claim, it's, in theory, carved out from section 230 as well. But I will note, in California at least, there is some case law indicating that that carve-out is only for federal intellectual property claims, not state intellectual property claims. So that's still an issue that's being litigated. I think courts have gone different directions on that issue.

But generally speaking, intellectual property claims are not shielded. And definitely copyright claims, which are always a federal-- it's a federal right-- are not protected. The good news is, however, there is a separate safe harbor, as I mentioned, the Digital Millennium Copyright Act, that gives some protection for website operators, bloggers, companies that are hosting social media and user-generated content, with respect to copyright claims. However, this safe harbor is more complicated than section 230 in that section 230, you don't have to do anything.

The safe harbor protection just automatically attaches. This DMCA safe harbor that we're going to talk about, you have to take affirmative steps to take advantage of it. And there is case law where companies failed to take some of these steps, many of which are very simple steps. They just didn't know to take the steps. And as a result, they had no protection from copyright infringement claims relating to user-generated content.

But let's go back in time, just like Aaron did when he talked about the CompuServe case and the Stratton Oakmont case. Let's talk about copyright law, generally.

So as those of us who work in the copyright space know, you can be a direct infringer if you reproduce, distribute to the public, create a derivative work from, publicly perform, or publicly display someone else's copyrighted work without their permission-- subject, primarily, to the fair use defense, though there are some other defenses, like the first sale doctrine. Those are the five exclusive rights of the copyright owner.

But the issue, when the internet industry was first growing up in the '90s in the United States, is that you can also be liable, under copyright law, for what they call secondary liability. So you're not the direct infringer, but you could still be held liable for your involvement with the direct infringer.

So the traditional case law sent a chill up the spines of the CEOs of companies like America Online, and CompuServe, and Prodigy, that were pioneering the internet. Because, just like on the defamation side, they were worried that if someone uploads an infringing photograph, or infringing text, or infringing video-- which was difficult to do at the time because most internet connectivity in the late '90s was through dial-up modems-- or infringing music files, to what extent could the website operator be held liable on a secondary liability theory?

And I've got to say, the case law that is pre-internet wasn't that reassuring to website operators. In particular, there were a number of cases around flea markets. What do flea markets have to do with internet websites? Well, if you think about them, there is a lot of similarity.

A flea market is a company that makes space available where people come, and interact, and sell, and buy wares. It's a marketplace that's kind of user-driven. And there's a line of cases. And the most famous one is Fonovisa versus Cherry Auction, which is a Ninth Circuit case from 1996.

So this case was decided right as the internet industry was taking off. And it basically said that the sponsor, the organizer, of this flea market-- it was like a swap meet-- could be held secondarily liable for a table that was set up by one of the participants. So someone had set up a table at the flea market and sold CDs that were pirated.

And so the music company sued not the person operating the table, who might have been judgment-proof, but they sued the operator of the flea market. And they said, you're secondarily liable for the infringing sales of pirated CDs by someone that had set up a table at your flea market.

And so the court actually found liability under two theories of secondary liability. First, contributory liability. So it said Cherry Auction, which sponsored the flea market, knew or should have known what its vendors were selling in its location of the flea market. So they had knowledge, or they reasonably should have known, so constructive knowledge.

And they contributed to that infringement by providing space, by promoting the event so that buyers would come in and buy the infringing CDs. They provided electricity for the vendors that had set up tables at the event so they could play a boombox with the CD on it for potential purchasers. So they said, you're contributorily liable.

And moreover, the Ninth Circuit went on to say, you're vicariously liable. So vicarious liability-- you don't even have to have knowledge of the direct infringement, but you're still held liable. So the court held that Cherry Auction had a right to kick any vendor out of their controlled premises, that they exercised control over who got into the flea market, who could set up a table, who couldn't, and that they financially benefited from the infringing conduct taking place at the flea market because they collected door admissions and concession stand sales from the users who were coming in and purchasing the pirated CDs.

And so even if there was no knowledge that these CDs being sold were being sold illegally, the court said that because of the ability to control who sold on the premises and the financial benefit being received in connection with the infringing activity-- even though it was kind of indirect-- it's not that they got a percentage of the sales of infringing CDs. They controlled the concession stands and the admission charges. So you can imagine, in the late 1990s, anyone that wanted a website that was going to host user-generated content was really concerned about this decision.
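The two theories the court applied can be boiled down to a pair of boolean tests. This is purely a study aid-- the function names and inputs are invented, and it captures only the skeleton of the legal test, not the actual fact-intensive analysis:

```python
# Toy encoding of the two secondary-liability theories applied in
# Fonovisa v. Cherry Auction, as described above. Names and inputs are
# invented for illustration; this models the structure of each test.

def vicariously_liable(right_to_control: bool,
                       direct_financial_benefit: bool) -> bool:
    """Vicarious liability requires BOTH elements; knowledge of the
    direct infringement is NOT an element."""
    return right_to_control and direct_financial_benefit

def contributorily_liable(knowledge_actual_or_constructive: bool,
                          material_contribution: bool) -> bool:
    """Contributory liability, by contrast, does require knowledge
    (actual or constructive) plus a material contribution."""
    return knowledge_actual_or_constructive and material_contribution

# Cherry Auction could exclude vendors (control) and earned admissions
# and concession revenue tied to the infringing draw (benefit):
print(vicariously_liable(True, True))   # True
# Without a direct financial benefit, the vicarious claim fails:
print(vicariously_liable(True, False))  # False
```

Note how the two tests differ on exactly the point the court stressed: knowledge matters for contributory liability but drops out entirely for vicarious liability.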

And so that's where the DMCA comes in. The DMCA actually provides four safe harbors for online service providers. But we're going to talk about the third one, which is 512(c), which is the so-called YouTube safe harbor. This is the safe harbor that is the focus of most of the case law.

This is the safe harbor that's relevant to your company if your company hosts any user-generated user-uploaded content on a website, on a blog, potentially on a company's social media pages. And as I mentioned, unlike the section 230 CDA safe harbor where you get the benefit of the safe harbor without taking any affirmative steps, you just have to be an online service provider. That's not true here.

So first, to qualify for this safe harbor, you have to satisfy certain what we call gating requirements. And I'm not going to go into this because we don't have enough time. This is a whole hour in itself. But I provided this information for you.

So these are the preliminary gating requirements to take advantage of the copyright safe harbor for hosting user-generated content. And moreover, the specific YouTube safe harbor, so-called YouTube safe harbor-- there are some additional requirements. And because I'm a tech lawyer, I think of flowcharts. So I have created kind of a flowchart that takes you through these different requirements which have to be met in order to get the benefit of the safe harbor.

And part of the safe harbor-- I'm sure even those of you who don't do work in the copyright space have heard about notice and take down. And YouTube, if they have an infringing video, a copyright owner can provide a take down notice. That's all part of the safe harbor.

So unlike the Yelp situation-- where the section 230 safe harbor doesn't really have a provision about whether content ever needs to be taken down if it's found to be unlawful-- here, under this 512(c) YouTube safe harbor, there's a process set out in the statute. To maintain the benefit of the shield from copyright damages, if the website operator gets a notice of infringing user-generated content on its network, and it's signed under penalty of perjury, and it meets all the requirements of the notice under the statute, the content gets taken down.

But the party that posted the content gets notice and can provide a counter notice, where the content may end up going back up on the site. So again, I'm not going to go into all this. But if your company is involved in hosting user-generated content, you need to know how these rules work to make sure that your client gets the benefit of the safe harbor.
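The notice-and-counter-notice flow just described can be sketched as a tiny state machine. A hedged illustration only: the class and method names here are invented, and the real statute imposes many requirements this sketch omits (sworn statements, designated agents, the copyright owner's window to file suit before restoration, and so on).

```python
# Minimal sketch of the DMCA 512(c) notice-and-takedown flow described
# above. All names are invented for illustration; real compliance
# involves many more statutory requirements than modeled here.
from dataclasses import dataclass

@dataclass
class HostedItem:
    item_id: str
    visible: bool = True  # user content is up by default

class TakedownWorkflow:
    def __init__(self):
        self.items = {}  # item_id -> HostedItem

    def host(self, item_id):
        """A user uploads content; the operator hosts it."""
        self.items[item_id] = HostedItem(item_id)

    def receive_takedown_notice(self, item_id, notice_valid):
        """On a compliant notice (signed under penalty of perjury and
        meeting the statutory elements), the operator takes the
        content down to preserve the safe harbor."""
        if notice_valid and item_id in self.items:
            self.items[item_id].visible = False

    def receive_counter_notice(self, item_id, counter_valid):
        """On a valid counter-notice from the poster, the content may
        go back up (in reality, only after a statutory waiting period
        with no suit filed by the copyright owner)."""
        if counter_valid and item_id in self.items:
            self.items[item_id].visible = True

# Usage: host, take down on notice, restore on counter-notice.
wf = TakedownWorkflow()
wf.host("clip-123")
wf.receive_takedown_notice("clip-123", notice_valid=True)
print(wf.items["clip-123"].visible)  # False -- content is down
wf.receive_counter_notice("clip-123", counter_valid=True)
print(wf.items["clip-123"].visible)  # True -- content restored
```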

So the most famous case around this safe harbor-- it was front page news in The New York Times and The Wall Street Journal for months-- was the Viacom lawsuit. So Viacom filed a massive multibillion dollar copyright infringement suit against YouTube for all of the infringing videos and clips from their TV shows and so forth that had been uploaded to the YouTube.com site. And the District Court basically held that this safe harbor protected YouTube and shielded them from potentially billions and billions of dollars in copyright damages.

On appeal, the Second Circuit basically disagreed with the lower court decision, particularly around red flag knowledge. And the Second Circuit found-- and this is currently the law in the Second Circuit-- that if you're hosting user-generated content, and even if you don't have the actual knowledge, if there are sufficient red flags such that the website operator has willful blindness to instances of copyright infringement, then the safe harbor is lost. So that's the law at the Second Circuit.

Now, it remanded the case for further proceedings in the Southern District of New York. There, the court found, we don't think there was any willful blindness on the part of YouTube. So the court found, even under the Second Circuit's test about red flags and willful blindness, they didn't find that YouTube had triggered that.

So the District Court found that YouTube does continue to receive the benefit of the safe harbor. The case then settled. So it didn't go back up on appeal to determine whether the District Court was correct in applying the test.

So that's kind of where the law stands. And again, if you have actual knowledge, or red flag knowledge, such that you're being willfully blind to infringing content, user-generated content, on your site or platform, you can lose the benefit of this safe harbor and open up your client to statutory copyright damages.

As you know, for copyright infringement, it's not just actual damages. It could potentially be statutory damages, which could be as high as $150,000 per work infringed, if the infringement is willful. So if you multiply that against many, many works that your client may be hosting that are potentially infringing, this could be really big numbers.
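To see how quickly that multiplication scales, here's a back-of-the-envelope calculation. The work counts are hypothetical; $150,000 is the statutory ceiling per work for willful infringement mentioned above.

```python
# Back-of-the-envelope worst-case statutory damages exposure.
# $150,000 is the per-work ceiling for willful infringement;
# the work counts below are hypothetical.
MAX_STATUTORY_PER_WORK = 150_000

def worst_case_exposure(works_infringed: int) -> int:
    """Maximum statutory damages if every infringement is found willful."""
    return works_infringed * MAX_STATUTORY_PER_WORK

for n in (10, 1_000, 100_000):
    print(f"{n:>7} works -> ${worst_case_exposure(n):,}")
# Even 1,000 infringed works puts $150 million at stake.
```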

So like Aaron mentioned, the case law for the CDA-- for the DMCA, rather-- has generally been pretty favorable to website operators. The DMCA safe harbor, the YouTube safe harbor, has been applied pretty broadly and in a manner that really protects website operators and that, frankly, has created frustrations among copyright owners. But we've recently seen the pendulum in this area swing back the other direction as well, with courts maybe trimming the scope of the section 512(c) safe harbor as it's been applied.

So one case I want to talk about, which is just from last year, is the Zazzle case. So Zazzle.com is a CafePress-type website. So users, and a lot of graphic artists, go and upload images, artwork, and designs. And then other users of the site can order physical products bearing those images.

So if you see a nice graphic design that you like, you can pay to have it put on a coffee mug and delivered to you by Zazzle. They can put the images on t-shirts and so forth, so all types of physical products. So it's user-generated content uploaded to the site.

The artist that uploads it, or the user, indicates whether it's OK for the image to be used on t-shirts and coffee mugs. And if the artist says OK, then Zazzle makes that available to its other users. It's an e-commerce site, but that's built on user-generated content.

So when a sale occurs-- so let's say Aaron buys a coffee mug with an image that I uploaded. At that point, Zazzle manufactures the product. So it gets the coffee mug, it attaches the image to the coffee mug, it ships the product to Aaron. And then it pays me a royalty, a percentage of the sale, for that coffee mug.

So the plaintiff in the Zazzle case was the licensing agent for an artist whose works were being uploaded to the Zazzle site without their permission. Because think about it. For a lot of users, this is a nice way to make money. I can take other people's images, load them up, have them sold on coffee mugs and t-shirts, and sit back and get a percentage of it. And I didn't do anything other than upload them.

Well, so this agent and artist was really upset that this was taking place with their clients' work. They sued Zazzle because the user is either anonymous or judgment proof. And the Central District of California finds that Zazzle was not entitled to the Section 512(c), or the so-called YouTube safe harbor, that we've been talking about. Why?

It's interesting. Because you would have thought they might say, well, Section 512 doesn't apply to offline sales of goods. It didn't say that. It said the safe harbor could apply here. But then it went on to look at the requirements for this safe harbor. And it said, in particular, the safe harbor doesn't apply if you have a right and ability to control the infringing conduct and you receive a direct financial benefit from that conduct.

So the court noted that Zazzle had a right and ability to control the sales of the infringing products. So even if the uploader authorized the use of the image on a coffee mug, Zazzle could override that decision. Zazzle was the one actually adhering the image to the mug and shipping it out. So they were exercising a lot of control over the process of the sale of the infringing image.

And then it noted it was financially benefiting, because it was charging $10.99 for a coffee mug with an infringing image on it. So the court found DMCA doesn't apply here. So the lesson here is not so much don't sell user-generated content on coffee mugs-- probably not a good idea.

But there's a broader lesson here, which is we're now entering a stage of social media where companies aren't just content with hosting user-generated content. They want to exploit it. They want to make use of it. They want to use it offline. They want to use it on their social media pages. They want to use it in their TV ads.

And Zazzle is a reminder that even if the hosting activity itself is protected by the safe harbor, when you go beyond that-- or even if you try to monetize the content while it's on your site-- you have to know the requirements to continue receiving the safe harbor protection. Because, as happened in Zazzle, you could be doing something to monetize it that takes you outside the benefit of the safe harbor.

So unlike the CDA safe harbor, you have to do things affirmatively to get protected. And then once you're protected, if you engage in some additional conduct, it could make you no longer eligible for protection, at least with respect to that additional conduct.

OK, there's another major case that I want to talk about. I think it's the most important DMCA case in a decade. And it came out last year. It's a Ninth Circuit decision. It's called the Mavrix case. It's Mavrix versus LiveJournal.

So just to quickly go through the facts, LiveJournal is kind of a blogging site where people can set up blogs devoted to whatever their passion is. It originally started out as online diaries, but then companies and users started celebrity fan sites, blogs devoted to a particular celebrity, or to politics, or to shopping-- whatever it is. There are all these kind of microsites hosted on the LiveJournal.com platform, where people are interacting with other users on some topic they're passionate about.

And so what happened here was LiveJournal had one particular blog that became really successful called, I think it was, "Oh No They Didn't." And it was devoted to celebrity gossip. And apparently, if you're into celebrity gossip, this is the place to go. It just had great gossip about all different types of celebrities. And it turned out to be one of the most popular blogs on the LiveJournal platform.

And the blog was helping LiveJournal sell advertising and was helping drive traffic to the LiveJournal platform. So LiveJournal became pretty supportive of this blog. And, in fact, the blog was operated by a team of volunteers.

But over time, as the blog became more popular, LiveJournal hired one of those volunteers as an employee to oversee the other volunteers. And LiveJournal, through its volunteers, began getting into the process of judging the user-generated content.

So if you go to a gossip site, you don't want to see gossip about a celebrity that's old, that's two years old, and you've already seen it, right? So the volunteers began filtering contributions, submissions to the blog, based on how current it is, how juicy the gossip is. And if stuff wasn't sufficiently current, they wouldn't let it go up on the site.

So they were prereviewing content, using volunteers. But the volunteers were supervised by an employee. And the District Court said, we don't care. This is protected by the DMCA safe harbor, and we're not going to hold LiveJournal liable-- even though a lot of the content being posted infringed third-party copyrights, including Mavrix's.

Mavrix, the company that sued, specializes in paparazzi photos. So they have a team of photographers who take these snapshots of celebrities-- maybe ones like Katherine Heigl coming out of Duane Reade-- and they have a whole business promoting, and selling, and licensing those images. But a lot of their images ended up on the "Oh No They Didn't" blog.

So they sue for copyright infringement. They don't sue the volunteers. They sue the company, LiveJournal, which runs the entire platform. And the District Court says, the safe harbor applies. We're not going to hold LiveJournal liable to the extent that some of the volunteers are curating content-- they're volunteers. They're not really connected with LiveJournal.

Well, the Ninth Circuit actually reversed. The Ninth Circuit found there was an issue-- and the case has been remanded-- as to whether the volunteers might be agents of LiveJournal. And as agents, their conduct, just under principal-agent law, could be attributed to LiveJournal.

So one major issue for remand is whether the volunteers technically constituted agents of LiveJournal under common law agency principles. One of the biggest trends we see with social media and user-generated content is towards curated content, using humans to help filter the content. And this decision raises a lot of concerns about the filtering of content, user-generated content, whether prior to posting, or even after it's been posted to your site. And I'll get into a little bit more detail on that.

So basically the Ninth Circuit said, the common law of agency applies in determining whether the acts of volunteers can be attributed to the operator of the website. And some of the questions that have to be looked at on remand: what level of control did LiveJournal exercise over the volunteers? And in particular, LiveJournal had an employee who kind of supervised these volunteers. So to what extent did that supervision turn those volunteers into agents and kind of quasi-employees of LiveJournal?

And so the idea here, it's just like the Roommates case, right? If you get too embroiled in the user-generated content process-- soliciting content, controlling what content is submitted, rejecting some items of content and allowing others to be posted-- you run this risk that the party uploading the content isn't a true third party. It's you. Yes?

AUDIENCE: I'm surprised, though. I would think that that supervision itself was enough. So I'm surprised that it was even remanded at all.

JOHN DELANEY: Well, it had to be remanded for fact proceedings. But the decision does suggest that the Ninth Circuit thought there might be an agency relationship here. But that needs to be determined. I am really grossly simplifying the facts. So there are definitely arguments that-- and it may be that the District Court will find that there was no agency relationship.

But there is language in the Ninth Circuit decision suggesting that if you use volunteers in connection with reviewing and filtering user-generated content, you need to be worried, or you need to at least study this case. And even if you're outside the Ninth Circuit, many copyright owners are based there, so you are at risk of being sued in the Ninth Circuit.

So the court did say that automatic reformatting of posts, and manual screening of user-generated content for quote "harmful materials" like pornography and quote "obviously copyright infringing materials," is fine. So the Ninth Circuit said, we're not going to penalize you if you use employees or volunteers to filter for harmful materials. But it said that filtering conduct should be limited to activities narrowly directed to enhancing the accessibility of the posts.

And the court suggested that LiveJournal and its volunteers went beyond that. Because they were asking, how current is this information? They were looking at the substantive content. They weren't saying, this is pornographic, we're going to leave it off the site. They were saying, you know, this content just isn't interesting enough, and so we're going to leave it off the site.

So there's a separate issue from the use of volunteers, which is that most user-generated content sites do some amount of filtering. If you have hate speech or pornography being uploaded to your website, no company wants to be hosting that. And the court does say that reviewing and filtering for purposes of deleting porn, and maybe hate speech, and obviously copyright infringing materials, is OK.

But it leaves a real question mark. How much beyond that can you do and still get the benefit of this safe harbor? So that, to me, is a really key issue.

If you're at all involved in user-generated content and you do any kind of curation or filtering, you need to read the Mavrix case. And you need to be watching what the District Court says on remand.

And I should say, this decision has a bunch of stuff that's great for copyright owners and has been worrisome for website operators. Because it goes through all of the requirements of the so-called YouTube safe harbor, and it finds a number of potential issues that need to be addressed by the District Court on remand, going to what constitutes control, what constitutes a financial benefit. So this is going to be the most closely watched DMCA case of this year. Yes?

AUDIENCE: What if they just use an algorithm? Would that be the same thing?

JOHN DELANEY: Well, that's a great question. And that's an open issue. I mean, there's a sense among copyright lawyers that algorithm-driven filtering is safer and better than human-driven filtering. However, as Aaron mentioned, if you've been reading the news, you see a movement towards human filtering, simply because it's more effective.

So for example, I think YouTube has been in the news where its advertisers are saying because of the Logan Paul video incidents, they want more human reviewers reviewing this content. And it's ironic. Because the more you move to human review, there is this increased risk that you might lose your safe harbor protection.

So I have some tips here. I'm not going to go through them. And again, this is just scratching the surface-- this case, there's so much in it. We did write a blog post on sociallyaware.com where we go through it.

But some tips coming out of it is if you are involved in filtering of user-generated content, you need to think about what degree of risk are we taking with respect to our safe harbor protections? And we've actually been recommending and helping clients create some guidelines for their people, if they have human reviewers, to use. Because you want to ensure consistency.

And you, ideally, want your guidelines to be consistent with the Ninth Circuit's Mavrix decision. But you want those to be privileged, drafted by counsel. And training is so important.

So gone are the days where you could use employees who were not trained. They need to know what to look for. They need to know the boundaries-- they don't need to read the Mavrix case, but they need to be advised on what's potentially crossing the line, at least in the Ninth Circuit. And then if you're using volunteers, you need to look at the common law of agency and really think hard about whether these folks are essentially employees, for purposes of the safe harbor. OK.

A few other things, I think I have-- yeah. As I've already mentioned, this safe harbor, it's not automatic. You have to take affirmative steps. You need to make sure. We have so many clients that come to us for the first time and say, oh, I'm protected by the safe harbor.

And I look at their terms of use. And I say, where's your notice of your agent for take down notices on there? It's like, what? What's that?

Well, that's a fundamental requirement of the safe harbor. And if you don't do it, it doesn't apply to you, even if you satisfied all the other steps. So you do have to designate an agent for receipt of take down notices from copyright owners.

There's another gotcha here. Since 1997, this was done by filling out a form and sending it to the Copyright Office. And they literally just scanned it and posted it on the copyright.gov website. And then you could search these forms.

You have to designate an agent on your website, and with the Copyright Office. But what happened was, beginning this year, the Copyright Office has a whole new approach to designating a DMCA agent. And any designation you did under the old system is no longer effective.

So we estimate that as of January 1st, many, many companies that host user-generated content lost their safe harbor. And I think we're going to see some lawsuits involving the time period between January 1st and to when they actually get around to designating under the new system.

What happened is the Copyright Office went to an all-electronic agent registration system. Here's a screenshot. So this was Snapchat's old registration form. I'm not picking on Snapchat. It just seemed like one to-- but Snapchat got it right.

They registered under the new process, which is entirely electronic. And if you haven't registered under the new process, you have to tell your clients they don't have safe harbor protection. They don't have it until they get around to doing this. So it's a potential gotcha.

But under the new process, every three years, you have to renew. And there is a fee associated with that. So it's becoming like trademark law, where you have to docket the deadlines. And you want to make sure you're-- now the Copyright Office has a way that they'll send you an email notice as the deadline approaches. But as lawyers, we don't rely on that.

So we have to-- just like our trademark lawyers-- docket trademark deadlines and renewal deadlines. And patent lawyers do the same. We now, as copyright lawyers, have to focus on that as well. Good.
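Since the renewal cycle the speakers describe is a simple three-year clock, docketing it is easy to sketch in code. This is purely illustrative: the three-year period reflects the Copyright Office rule discussed above, but the function name and dates are invented, and in practice you'd rely on a real docketing system, not a script:

```python
# Illustrative sketch of docketing the DMCA agent renewal deadline.
# The Copyright Office's electronic system (per the talk) requires
# re-designation every three years; everything else here is hypothetical.

from datetime import date

RENEWAL_PERIOD_YEARS = 3  # renewal cycle described in the talk


def renewal_deadline(designated: date) -> date:
    """Date by which the DMCA agent designation must be renewed."""
    try:
        return designated.replace(year=designated.year + RENEWAL_PERIOD_YEARS)
    except ValueError:
        # A Feb 29 designation whose target year is not a leap year:
        # fall back to Feb 28 to stay on the conservative (earlier) side.
        return designated.replace(
            year=designated.year + RENEWAL_PERIOD_YEARS, day=28
        )


print(renewal_deadline(date(2017, 12, 1)))  # 2020-12-01
```

The point of the earlier-date fallback is the same one the speakers make about not relying on the Copyright Office's reminder emails: when in doubt, docket the deadline early.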

So we'll now talk about online contracts. I think-- do I turn it over to you?


JOHN DELANEY: I can't remember. Sorry.

AARON RUBIN: That's all right. Dog and pony show down here. So talk a little bit about online contracts. This is switching gears a bit. And these are the ubiquitous terms of use, terms of service, those kinds of things, that all of us are very familiar with.

JOHN DELANEY: Well, I did want to say one thing as we segue. You know, we've been talking about risk reduction strategies, the two safe harbors. But online contracts are the other way that companies doing business online try to mitigate risk.

AARON RUBIN: Right. So if you operate, or your client operates, a website or online platform, typically you want to impose some terms and conditions on users for various reasons. But importantly, you want to have the limitations of liability.

You want to have disclaimers. You want to have arbitration provisions and other dispute resolution mechanisms that are favorable to you-- waivers of class actions, waivers of jury trial, those sorts of things-- that, as John says, help mitigate risk in any dispute that may arise between the platform and the user. And how do you do that?

You do that through an online contract, essentially, and online terms of use or terms of service. So that's what we're going to talk about now for a bit.

Before we get into the details, there's a little bit of terminology that it's important to be familiar with. And frankly, this is terminology that John and I think is a little bit misleading in a certain way. But it's so common that it's important to know it. And that terminology is clickwrap versus browse-wrap. So what does that mean?

A clickwrap refers to that type of online contract, or terms of use implementation, where the user is presented with the terms, or at least a link to the terms, that the website is wishing to impose. And then the user is required to check a box that says, I accept, or click a button that says, I accept, in order to proceed with using that website, or registering, or making a purchase, or whatever it may be. The point is that a clickwrap requires some affirmative action by the user to accept and acknowledge that he or she is bound by the terms of use.

A browse-wrap, on the other hand, is that type of implementation where the user isn't required to do anything affirmative to accept the terms of use. Rather, the terms of use, or the other online contract, is just presented at some link, perhaps somewhere down at the bottom of the page. It'll say legal, or terms of use, or something of that nature. And the concept is that just merely by using the website, the user is somehow being bound by those terms of use presented at that link.

As we'll see, that's pretty questionable that that type of implementation really does bind the user. I think that's why we tend to think that these terms clickwrap and browse-wrap are a little bit misleading. Because you see those two terms, and you sort of think, well, that must mean that these are two different ways of forming an online contract.

And that's, for the most part, really not the case. A clickwrap is a way of forming an online contract. A browse-wrap is sort of a nothing, in many cases. But anyway, that's the terminology.

So let's start, if my clicker will work, by talking about one early case that illustrates some of these issues. This is a case called Ticketmaster Corp versus Tickets.com-- pretty early internet case, a case out of the Central District of California from the year 2000. And in this case, the plaintiff was Ticketmaster, who you're likely familiar with if you buy concert tickets or anything of that nature. And the defendant was Tickets.com, a competing ticket website.

And what was happening was that Tickets.com was basically going to the Ticketmaster website and scraping information by automated means. They're sending what's called a spider, or a robot, to scrape the Ticketmaster website, get event information and so forth, and then use that information itself on its own website.

Ticketmaster, for obvious reasons, objected to this, and made a variety of claims about why this scraping was improper and actionable. Among those claims was that the scraping by Tickets.com violated Ticketmaster's terms of use. Because Ticketmaster's terms of use said no scraping. You're not allowed to scrape our website.

The trouble for Ticketmaster was that the only place that their terms of use appeared was way down here in the bottom. And we actually got this screenshot of how the Ticketmaster website looked in the year 2000. We, I guess, went to the Internet Archive, the Wayback Machine, and took a screenshot of the circa 2000 Ticketmaster website.

JOHN DELANEY: We're pretty certain it's a fair use.

AARON RUBIN: As with all of the images. And so you can see just down here, the terms and conditions are just down there in the bottom. And so is Tickets.com bound by these terms and conditions when it goes and accesses the Ticketmaster website?

Well, Ticketmaster argued that it was, and that merely by using the website, Tickets.com bound itself to these terms and conditions where that prohibition on scraping was contained. But the court in this case said, no, you can't create a contractual obligation in this way. You can't just have your terms and conditions linked down at the bottom and then say that every user who comes to your website is bound by them.

So this case set up, at a pretty early stage, that typically-- there are a couple of exceptions to this-- but typically, if a website operator wants to bind its users to terms and conditions, it has to do something more than this. It has to do something more than a browse-wrap. It has to have those users take some affirmative action indicating that they are agreeing to be bound by those terms.

So with that, I'm going to turn it over to John to look at how some various more modern websites try to accomplish this.

JOHN DELANEY: Yeah. So this is a section where we just like to highlight what different companies are doing, so you can think about what your clients are doing and what level you want to be at. As Aaron mentioned, we're trying to move beyond just the browse-wrap and clickwrap labels. Because as we'll see in the next couple of slides, there's a real spectrum.

And by the way, we don't mean to pick on any of these companies. These companies are just illustrating different approaches taken by many different companies. And some companies use a variation of several of these different approaches.

So here is the website for Delta Airlines. And when you scroll all the way to the bottom of the website, there is a section at the bottom called "Get to know us," very cheerful section about Delta careers. And then there's this word Legal. And it's a hyperlink. And if you click on that term, you go to the terms of service, or terms of use-- the legal terms and conditions-- that govern your use, access to, and use of the website.

And as Aaron mentioned, absent some unusual set of circumstances, this probably isn't sufficient to create a binding agreement under contract law in any state. That being said, I think it's still important.

Because think about it. There may be things in your terms of use that are legally valuable and important, even if it's not a binding contract. So for example, if you're a website that provides health information, you want to say in your terms of use that if you're sick, or you have a health incident, contact your doctor. Don't rely on this site.

Or if you're a law firm-- every law firm terms of use, even if it's a browse-wrap-- hopefully has a provision saying, this website is not providing legal advice, and no attorney-client relationship is being formed. Those are valuable notices and disclaimers that you want to have in your terms of use, even if it's a browse-wrap, and even if it's unenforceable, because they put people on notice, regardless of whether a contract is formed.

If you want to say my health site wasn't a substitute for medical advice, you don't need to show that the browse-wrap agreement was actually binding. You just want to show that you provided those types of notices on the site. And the legal page is a good logical place to put those notices.

AARON RUBIN: I was just going to say, we saw a case-- the name of it's escaping me right now, but a couple of years ago-- that illustrated that point that John just made really well. It was a case where it was actually the website operator suing the user about the user having taken some item of content from the website, and the user was using it for some other purpose.

And the user claimed, well, you know, I had an implied license to use this piece of content. The website posted it on their public website for everybody to come, and see, and use. So I had, essentially, an implied license to use this piece of content.

And the website had, in its terms of use, basically a statement that said there are no implied licenses. All of this content belongs to us. We do not grant any implied licenses at all.

And the court said, look, that may not have been a contract per se in the sense that the user agreed to that. But that statement, that no implied licenses were granted, was sufficient to undermine this argument that there was an implied license.

JOHN DELANEY: Yeah. I mean, one thing we say to clients all the time is even if it's not binding, you still should have house rules that tell people how they should conduct themselves on your site. And that includes important notices that you want them to know about guidelines in using the site and improper uses.

I should note, there are two exceptions to a browse-wrap being unenforceable. One is if the visitor has actual knowledge of the website terms of use. So ironically, reading them can create a binding contract once you have actual knowledge. And second, there have been some scraping cases where the scraping of a website is persistent and systematic, through bots, through automated means.

And the court has held, in the Verio versus Register.com case-- Register.com versus Verio-- that, in fact, a binding agreement was formed when you were accessing the website so many times. You're kind of presumed to have knowledge of the terms.

So if you're engaged in scraping, you need to think about-- don't tell the client that, oh, browse-wraps are never enforceable. The scraping activity-- there is case law on it, potentially resulting in a binding contract, even if it's a browse-wrap agreement. And most browse-wrap terms of use do prohibit scraping, or harvesting data and content, from the website.

What's that?

AUDIENCE: What case is this?

JOHN DELANEY: It's the Register.com versus Verio case, V-E-R-I-O. So it's a case from about 10 years ago. It's kind of the landmark case on when a browse-wrap agreement can become a binding contract as a result of scraping.

So this is what we think of as a less passive browse-wrap. So Aaron already mentioned Ticketmaster. Not surprisingly, a lot of people are coming to their site and scraping data and information. So what they do is say at the bottom, in dark bold font: by continuing past this page, you agree to our terms of use. So this is pretty clever, right?

I think as lawyers, we would all argue, if we were litigating and trying to argue this browse-wrap is enforceable, we'd rather have this than this, right? Maybe this is a little more scary to visitors. Maybe the marketing people don't like this. But it's actually kind of brilliant.

Because landing on the home page doesn't create the binding agreement. It's going past the home page. So the notion is you're affirmatively consenting, once you go beyond the home page. Now, is this a binding agreement? No, it hasn't been litigated, to my knowledge.

But the issue more is if it's important that your browse-wrap be an enforceable contract, but for whatever business reason, you can't do a clickwrap. I don't think any lawyer would dispute that this is better than this. Although I think lawyers would caution the client that this in itself is maybe not [INAUDIBLE].

AARON RUBIN: I mean, the danger is that from a user point of view, you can say, no, I don't, and still click and proceed past that page. As you'll see when John continues on here, a real clickwrap-- the user shouldn't be able to get past whatever that page is without actually manifesting the--

JOHN DELANEY: Yeah, that's right. That's a great point. And also, it's still at the bottom. I mean, if you really wanted your browse-wrap to-- maybe you make it all capital letters at the top of the home page so no one could miss it. But then your marketing and your graphic design people are going to be hounding you and saying, what are you doing to our beautiful website? But that would be even more effective.

So we're next kind of shifting into more of the categories of clickwrap agreements. So the classic version is the check the box. So this is from the Netflix agreement.

So when you sign up for Netflix, you get this unchecked box. It should never be pre-checked. It says, I am over 18 and I agree to the above conditions in the terms of use, which is a hyperlink, and their privacy policy and cookies policy.

And to Aaron's point, you cannot go past this page. If you hit the continue box, but you hadn't clicked Accept at the terms of use, you can't go anywhere. You get a message saying, you can't move forward until you review and click Accept our terms of use.

So this is the safer approach. But even here, there's things you could do. Should this be more prominent? I think in all these issues, it's important for counsel to look at what's actually being done and think through, are we maximizing the likelihood that we are creating a binding contract here?
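The Netflix-style gating just described is, at bottom, a simple server-side check: no account is created unless the user affirmatively checked the box. Here's a minimal sketch of that logic; all the names (the `register` function, the exception, the version string) are invented for illustration and are not any real site's code:

```python
# Hypothetical sketch of clickwrap-style gating: registration is refused
# unless the user affirmatively accepted the terms. Illustrative only.


class TermsNotAcceptedError(Exception):
    """Raised when a user tries to proceed without accepting the terms."""


def register(username: str, accepted_terms: bool,
             terms_version: str = "2018-01") -> dict:
    """Create an account only if the user affirmatively accepted the terms."""
    if not accepted_terms:
        # Block progress entirely, mirroring the "you can't move forward
        # until you click Accept" behavior described above.
        raise TermsNotAcceptedError(
            "You must accept the Terms of Use to continue."
        )
    # Record which version was accepted -- useful evidence if enforceability
    # of the contract is ever litigated.
    return {"user": username, "accepted_terms_version": terms_version}


account = register("jdoe", accepted_terms=True)
print(account["accepted_terms_version"])  # 2018-01
```

Note the design choice: the checkbox defaults to unchecked (the `accepted_terms` value must be affirmatively supplied as true), and the accepted terms version is logged, which supports the later evidentiary argument that this particular user assented to this particular contract.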

Now there's a variation on this called the sign-up wrap, or the sign-in wrap. And so this is from Twitter. So Twitter gets rid of the box that you need to check. And Twitter says by signing up, you agree to the terms of service and privacy policy. And then you have to hit this [INAUDIBLE] to sign up, even though they do have checkboxes for other features.

Again, it's in small print. It's kind of at the bottom. But courts generally-- and I'm going to get to a really scary case that came out recently that maybe undercuts some of what we're saying. But courts have generally been OK with sign-up wraps. It's widely used. The marketing people love them because you don't have to check.

But I will say, in the spectrum of risk, getting an affirmative check is probably safer, where you absolutely want to be certain your contract is binding. So in the spectrum, courts tend to prefer this to this.

AUDIENCE: Does the placement of that language [INAUDIBLE]?

JOHN DELANEY: Yeah, we'll get into that. The question is how the placement of the language can affect the enforceability. We'll be touching on that very issue.

And then I want to talk about one other-- this is what courts have called the gold standard. It's the scroll-down clickwrap. We used to see these a lot, where you had a box, and you could see all the terms of the agreement, and you could scroll down. And this is from Google Analytics.

They have an I accept. But they also have-- and this is less common these days-- I Do Not Accept. And as a lawyer, you'd want to click that, and make sure you get taken back to the home page, and continue going through the process.

AARON RUBIN: Or it's a Rick Astley.

JOHN DELANEY: Yeah, or the Rick Astley video, right. The Rickroll. But so courts really liked this scroll box. But when we moved to mobile phones, I think the scroll boxes can be harder to use. But just so you know-- and I'll talk about a recent court decision later-- judges have loved this format when they're deciding whether a contract has been formed.

AARON RUBIN: All right. So it should be pretty easy, right? You just get your user to check a box and you're good to go. That's true. But there's still a lot of ways to go wrong. And we'll talk about a couple of cases that illustrate this.

This is a case from 2012. The case was called Nguyen versus Barnes & Noble, a case out of the Central District of California. This is just a good cautionary tale. Not an unexpected result, but a good cautionary tale.

What you see here on the slide is the Barnes & Noble checkout page. And what happened here was that the plaintiff, Mr. Nguyen, ordered a tablet. I guess maybe it was one of those Nook tablets-- was that Barnes & Noble?

JOHN DELANEY: Yeah, I think so.

AARON RUBIN: At some discounted price from Barnes & Noble. And then Barnes & Noble ran out of inventory and cancelled his order. And what do you do when you don't get your discounted Nook? You start a federal lawsuit, apparently. So he sued Barnes & Noble.

Barnes & Noble tried to enforce an arbitration provision that was contained in its terms of use. And Barnes & Noble said, well, you know, our terms of use, including this arbitration provision, were binding on the user. And here they were. They were right here. Terms of use-- right here at the bottom of the page in this link. So send us to arbitration, get rid of this lawsuit.

And the court said, no, this doesn't do it. That's your classic browse-wrap. That's not unexpected. Why Barnes & Noble still had this implementation in 2012, after we saw the Ticketmaster case and every case that came after it for the prior 12 years, I don't know. But it's just a good illustration, because it would have been trivially easy for Barnes & Noble to do this right in this case, right?

They've got a checkout happening, right here. They're selling something. They need the user to type in his name, and his credit card number, and all sorts of other information. It would have been trivially easy to put in an I Accept box somewhere here. But they just failed to do so, and therefore lost the benefit of the carefully crafted arbitration provision that they, no doubt, paid a lawyer a lot of money to draft for them.
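The practical fix described here-- requiring assent at a checkout that already collects typed input-- can be sketched as simple server-side validation. This is an illustrative sketch only; the field names are invented, and none of this reflects Barnes & Noble's actual implementation.

```python
# Illustrative checkout validation. The lesson of the Barnes & Noble case:
# a purchase flow that already requires typed input can just as easily
# require explicit assent to the terms of use. All names are hypothetical.
REQUIRED_FIELDS = ("name", "card_number")

def validate_checkout(form: dict) -> list:
    """Return a list of problems blocking checkout; an empty list means OK."""
    problems = ["missing " + f for f in REQUIRED_FIELDS if not form.get(f)]
    # A footer link to the terms (browse-wrap) is not enough; require the checkbox.
    if form.get("accept_tos") != "on":
        problems.append("terms of use not accepted")
    return problems
```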

OK, here's a little bit more of a subtle case. This one's interesting. This is a case called Sgouros versus TransUnion Corp, a Seventh Circuit case from 2016. And it involved TransUnion, the credit report people, and a user who purchased a credit report from TransUnion. And it was claimed to be inaccurate.

It was some dispute about this credit report. A lawsuit arose out of that. And here again, the website operator, TransUnion, tried to enforce its arbitration provision. These cases often involve arbitration provisions.

And an issue arose as to whether the user was actually bound by the terms of use that contained that arbitration provision. And this is a pretty interesting case for those of us who are interested in such things, which is, I realize, a small population.

So what happened here is that this basically shows the TransUnion checkout page. And what you see here, in this box right here-- it's like a scroll box, like John was describing, that actually has in it their terms of service, or their terms of use. But it's this tiny little scroll box, and it's like a 10 page terms of use. And the arbitration provision was in there somewhere.

But only the first three lines here were really shown to the user as part of the checkout process. Now there was a link here to a printable version. So it certainly would have been possible for the user to click that printable version and see the whole thing. But the only part that was actually displayed were the first three lines.

And then down here, you have this button that says, I accept and continue to step three. And there's also this other text up here that says various things, but nothing really specifically about accepting the terms of use. All you have down here is this button that says, I accept and continue to step three.

So in order to complete the purchase of the credit report, the user sees these three lines, sees this other text about various different topics-- but not specifically about the terms of use-- and then clicks this button that says, I accept and continue to step three. So here, we've got sort of all of the pieces, right?

TransUnion has a terms of use. It's sort of available to the user. They have an I accept button that the user has to click. But do you think this is enough? Turns out it wasn't, and that the court held that this didn't create a contract, a binding contract, that bound the user.

Because these pieces just were not connected together in the right way. Like, what exactly is the user agreeing to accept right there? Is it these three lines? Is it the whole thing? Anyway, so just an illustration. You've got to really put it together in a clear and explicit way if you want it to be binding.

JOHN DELANEY: Well, and this is a good lead-in to the next case, which is terrifying. It's a Southern District of New York case from last year. It involves Lyft. So Lyft was trying to enforce an arbitration provision. And as Aaron mentioned, a lot of this turns on arbitration provisions in terms of use, where plaintiffs' lawyers are trying to get the claim in court and as a class action.

This was Lyft's screen. When Lyft introduced an arbitration provision through its terms of service, you had to click Accept before you could take a ride in a Lyft vehicle. And it's a checkbox: I agree to Lyft's terms of service. And the user must check the box in order to hit Next and proceed.

But there was also this text: we'll send a text to you to verify your phone. And actually, I apologize. This wasn't amending the terms of use. This was a feature to collect phone numbers. But at the same time, Lyft was collecting a click Accept for its terms of use, which had been updated to add an arbitration provision.

So the court said a couple of things. The court refused to find this a binding agreement, even though it had a click Accept. This process failed to alert Lyft users to the gravity-- that's a quote-- of the click, what's happening, including that the terms of service they were agreeing to now had an arbitration provision in it.

The court said that this text is difficult to read, which I thought was a little head-scratching. The court says the key language is in the smallest font. But it's actually in the same font size as we'll send a [INAUDIBLE].

Now it is smaller than Add a Phone Number. So the court, I think, is seizing on the fact that the largest text focuses on adding a phone number rather than accepting the terms of use.

The court said quote-- this isn't a quote. But they said, in essence, reasonable consumers would not have understood this blue term to be a hyperlink to the terms of service. Come on. My grandmother knows that if something appears in a different color font and you click on it, you're taken to that document.

So this is really disturbing that a Southern District New York judge has said that a reasonable consumer wouldn't have known that you had the opportunity to review the terms of service by clicking it. And certainly, if you're dealing with a user--

Let's say you have a health care app that's aimed at the elderly. I think it's really important to add language like, which can be viewed here. But of course, the court likes scroll boxes. So a scroll box would have taken care of that concern.

And then the court also said this was misleading in how it's presented. Because the user would have thought the whole focus was adding a phone number. And this was kind of an afterthought.

And then the court didn't like the Next bar, because it didn't alert users to the significance of the box they checked up above. So this is an important case, especially for companies in the Southern District of New York, or companies whose terms of use are governed by New York law.

We're running out of time. Oh, but there's a happy story too. So this acceptance process was from February of 2016. And the court found this did not result in an enforceable contract. However, in September of 2016, Lyft did it all over again. And here, it's the scroll box.

You see the whole agreement. And it says, before you can proceed, you must read and accept the latest terms of service. And then they-- instead of saying Next, it says, I accept. Court found-- same court-- this was fine. So the people that accepted this version entered into a binding agreement. The people that accepted this version did not.

OK, so we're out of time. But we do want to say that another hot area in the case law is unilaterally modifying your terms of use, especially if you're updating it, either to change pricing or to add an arbitration provision. Courts are pushing back more and more. It used to be every terms of use has a provision saying we can update this simply by posting a new version to our website. We're now seeing courts push back on that concept.

So just as you put a lot of thought into how you form the binding agreement initially, you also need to put a lot of thought into how you unilaterally update your terms of use. The concern is that courts may be going in a direction where just posting a new version to the website isn't sufficient notice to users.

So we see things like CNN using pop-ups to notify people. This is from PayPal, where you would get an actual email giving you 30 days to end your relationship with PayPal if you don't like the new terms of use. So that's obviously a gold standard type of approach.

We're seeing more summaries of changes to help. But I'd be wary of this approach. Here, Google says that because many of you are allergic to legalese, there's a plain English summary for your convenience. It's us lawyers who are shuddering at that, because we don't want the summary to be a substitute for actually reading the terms of use. And it should say that: this is provided for your convenience, but it's not a substitute for reading the full-length terms of use.
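The PayPal-style approach described above-- individual email notice with a 30-day window to walk away before updated terms take effect-- reduces to a simple date calculation. This is a hypothetical helper; the 30-day figure mirrors the example given and is not a requirement from any specific statute.

```python
# Hypothetical helper for the advance-notice approach to amending terms:
# send each user individual notice, leaving an opt-out window before the
# new terms take effect. The 30-day default mirrors the PayPal example.
from datetime import date, timedelta

def notice_deadline(effective_date: date, window_days: int = 30) -> date:
    """Latest date the change-of-terms email can go out and still give
    users the full opt-out window before the new terms take effect."""
    return effective_date - timedelta(days=window_days)
```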

I'm technically out of time, but I just want to get to one thing. So we've provided an analytical tool to think about the enforceability of your online contracts. The key issues are notice-- how conspicuous the terms are and the opportunity to review them; hard to ignore would be the scroll box, hard to find might be having no hyperlink at all-- and consent. A checkbox is pretty proactive consent. A browse-wrap is really passive.
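The notice/consent framework just described can be caricatured as a two-axis score. The categories and numeric weights below are invented purely for illustration-- no court applies a numeric test-- but they capture the spectrum described here, from a browse-wrap with a buried link up to the scroll-box-plus-checkbox gold standard.

```python
# Invented weights for the two axes discussed: notice (how conspicuous the
# terms are) and consent (how active the user's assent is). Illustrative
# only; courts weigh these factors qualitatively, not numerically.
NOTICE = {"scroll_box": 3, "conspicuous_hyperlink": 2, "buried_link": 1, "no_link": 0}
CONSENT = {"checkbox": 3, "sign_in_wrap": 2, "browse_wrap": 0}

def enforceability_risk(notice: str, consent: str) -> str:
    """Rough risk bucket for an online-contract acceptance flow."""
    score = NOTICE[notice] + CONSENT[consent]
    if score >= 5:
        return "low risk"       # e.g., scroll box plus checkbox: the gold standard
    if score >= 3:
        return "moderate risk"  # e.g., sign-in wrap with a conspicuous link
    return "high risk"          # e.g., browse-wrap with a buried footer link
```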

And then one final issue. And then we'll answer questions. But we're actually over time. If you sell products or services online to people in New Jersey-- which, if you have an e-commerce site, you probably do-- you need to be aware of this New Jersey law, which is affectionately known as TCCWNA: the Truth-in-Consumer Contract, Warranty and Notice Act.

And notice that it actually says that provisions of an online terms of use that are inconsistent with well-established New Jersey law are unenforceable and can subject you to class action lawsuits with financial penalties attached to them. So many e-commerce retailers have been sued. If you work for an e-commerce company, you're probably well aware of this. But if your company is moving into e-commerce, you need to make sure your terms of use complies with TCCWNA.

We did have one question. I apologize. We're low on time. But I'll be happy to answer your question.

AUDIENCE: The example you had of Google-- and LinkedIn also did the same when they recently changed their terms, where they showed what's our old terms and what's the new terms. Do you see a trend that lawyers are being asked to use more plain English-type drafting [INAUDIBLE] block legalese [INAUDIBLE]?

JOHN DELANEY: That's a great question. So the question is, do we see a trend towards greater focus on plain English versions of terms of use? And the answer to that is yes.

So I think the risk there-- we see some terms of use where they're trying to appeal to millennials, and it's kind of jokey. And I think it should be a serious document. But we are, in our own templates, trying to simplify-- get rid of null and void when null or void alone will do-- and trying to limit the number of legalisms. Because especially if your site is aimed at teenagers or millennials, I think it's helpful.

And the ideal is-- we know that there are people who don't read these. But when they do read them, we want them to be able to find what they need-- so use an index where they can jump to the section they're most interested in. One trend we're seeing is a little summary box after each section with a plain English description. The issue is making sure that you're still protecting yourself legally.

So with that, we're over time. We're going to take a 10 minute break. We're back here at 11:30 New York time. For those of you in the live audience, if you have questions, Aaron and I will be up here during the break and happy to answer your questions.

I really want to say, I hope you feel we covered a lot of ground in two hours. But thank you for your time.



All Contents Copyright © 1996-2018 Practising Law Institute. Continuing Legal Education since 1933.
