Online community professionals in the U.S. are quite fortunate, legally speaking, compared to our counterparts in other countries, like the United Kingdom and Australia.

Section 230 of the Communications Decency Act is a big reason for this. It might be the most important law on the books that relates to our work. It discourages frivolous lawsuits and allows us to host speech critical of the powerful without serious fear of a lawsuit.

The Delfi ruling is a good example of what can happen without it. A large company and a wealthy man demanded that an Estonian news outlet remove not only comments that vaguely threatened violence, but also comments that referred to the man as a “bastard” or a “rascal.” Even after the outlet removed the comments, the company and the man demanded damages and, when rebuffed, sued. A nine-year legal battle ensued – and the news outlet lost.

Putting Section 230 in Jeopardy

Recently, TechCrunch published an article by Arthur Chu, of Jeopardy! fame, in which Chu advocated for the repeal of Section 230 as a means of combating online harassment.

Section 230 is fairly easy to grasp. Let’s say you were to go into the comment section below and post, “Taylor Swift just stole my wallet!” Except that she didn’t. This is a matter of fact, not opinion. The “Bad Blood” singer either stole your wallet or she didn’t. Since this is untrue, it is damaging to her reputation. She decides to pursue legal action. Because of Section 230, I am not liable. She could still sue me, but it would almost certainly result in a quick dismissal. Swift’s correct course of action would be to sue you, the writer of the libelous comment.

That’s Section 230. It ensures that liability for words posted online rests with the author of those words, not with the website that hosts them. Without it, Swift could sue me, and I could be held liable for what you said. If I am a more attractive target than you – easier to sue, easier to find, with more money – then I will be the primary target for your words.

What Could Go Wrong?

If you manage an online community, think back to the times a member has been critical of a company. It’s probably happened a lot. Maybe they had a bad customer service experience; maybe a product didn’t match their expectations. Whatever it was, imagine if the company could hold you liable for those comments. What would happen?

You would go on the offensive against negative posts, especially anything that contains a fact (“the product broke!”) instead of an opinion (“I didn’t like it.”). You would remove comments not because they violate your community guidelines or because they are harmful, but because they are negative. Whenever someone reports a post, comment or contribution as unfavorable because it is about them, you would lean toward removing it – assuming they report it to you first, before suing you.

Saying that you would take a principled stand is noble, but do you actually have the money to fend off the lawsuits? I don’t. I’m not going to go broke so that one of my members can criticize Comcast.

I Am No Friend to Irresponsible Platforms

If you’ve spent much time reading my work, you know how important I believe it is to take responsibility for the space you manage.

I believe in being proactive and am a constant advocate for platforms to take these issues seriously. I have built a career, over more than 15 years, on creating respectful and inclusive spaces online.

But I don’t think repealing Section 230 is the answer. Section 230 is what empowers community professionals to moderate their communities and to have standards. When U.S.-based people try to get out of moderating content by saying, “if we moderate, that means we approve of it, and we’re liable,” this is the law I get to point to and say, “no, that’s wrong, do your job.”

Making Communities More… Positive

Without Section 230, moderation loses its nuance. It becomes simply: “Is this negative and potentially harmful? Better remove it.” Have you ever criticized a company on Facebook or Twitter? Why would Facebook or Twitter want to deal with a lawsuit just so you can post your message? Is this a doomsday scenario? Kind of. But without Section 230, it’s plausible. It would become more common for even a small company to threaten a small online community. The result: online communities that primarily feature discussions where people praise things.

Yes, Section 230 protects bad people who host content and conversation I find reprehensible – and profit from it. That’s absolutely true. But Section 230 also protects amazing social spaces and communities where people share experiences and receive support. It protects ordinary people who use these platforms to criticize or expose the powerful. Just as Section 230 cuts both ways, so does repealing it.

I don’t know that there is any real threat of this actually happening at this time. People are talking about it because TechCrunch is widely read.

The Court System as a Solution

Online harassment is a real problem. I shut it down in spaces I am responsible for. Chu’s solution is to turn to the legal system. He paints Section 230 as a tool used by the powerful, which is ironic because that’s how a lot of people would describe the U.S. legal system. As attorney Ken White, author of the popular Popehat blog, explains:

“The court system is broken, perhaps irretrievably so. Justice may not depend entirely on how much money you have, but that is probably the most powerful factor. A lawsuit – even a frivolous one – can be utterly financially ruinous, not to mention terrifying, stressful, and health-threatening. What do I mean by financially ruinous? I mean if you are lucky as you can possibly be and hire a good lawyer who gets the suit dismissed permanently immediately, it will cost many thousands, possibly tens of thousands. If you’re stuck in the suit, count on tens or hundreds of thousands.”

Chu suggests that big companies like Facebook, Twitter and Google (YouTube) make only halfhearted efforts to deal with stalking and harassment because those behaviors drive traffic and revenue for them. Section 230 enables this behavior, he contends. Repealing it would force platforms to do more or face litigation.

But it’s not Facebook, Twitter or Google who will be most harmed by this litigation – it’s small, niche online communities, which will shut down or fundamentally change (for the worse) rather than defend themselves against lawsuits they can’t afford. Section 230 is an equal-opportunity safe harbor. It protects Facebook just as it protects my forums.

Would Repealing Section 230 Impact Harassers?

Here’s a big question: what happens to harassers if Section 230 is repealed?

For example, let’s say Twitter structurally changes, somehow making this type of speech impossible. It shifts dramatically, ceases to be a live platform and drives many people away. Where do the harassers go? Do they stop harassing?

Or do they go to other services, in other countries? When you have a platform where you actually know who runs it, when you know the CEO, when it is a publicly traded company, there is at least some accountability to the public. They can be pressured by public opinion. When harassers use platforms where that is not true – something they certainly already do – what then? What happens when they use platforms hosted in countries where the legal system is less favorable or accessible?

You could say that it doesn’t matter, as long as the person being targeted is not on that platform. If you don’t visit some nasty online community in another part of the world, does it matter if they threaten you there? Of course it does.

If they themselves are not held responsible for harassment, why would they stop?

My point here isn’t inaction. It’s not, “they’ll just go somewhere else, so do nothing.” My point is that, whatever limited impact repealing Section 230 might have on harassment, the negative impact on the freedoms you enjoy would be greater. The greatest penalty would fall on accountable people online, not on anonymous harassers or irresponsible website owners.

Combating Harassment

The abuse problem is a very challenging one, especially for Facebook, Twitter and Google. Many of us who manage niche online communities have essentially eradicated harassment from our spaces. But the big, live platforms deal with it on an entirely different level.

Most of us would like harassment to stop, but the how is tricky. You can’t pre-moderate – these are live platforms. Filtering and behavioral analysis are possible, but they have to be approached delicately because of the high potential for false matches. Censor the wrong thing and you wipe out some advocacy group. Reporting tools are useful, but they receive a lot of reports – many of which are improper, yet still have to be reviewed.
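To make the false-match problem concrete, here is a minimal sketch, in Python, of the kind of naive substring filter that produces it. The blocked list and sample messages are hypothetical, invented purely for illustration:

    # A naive substring filter, sketched to show why false matches happen.
    # BLOCKED is a hypothetical word list, not any platform's real one.
    BLOCKED = ["ass", "hell"]

    def is_flagged(message: str) -> bool:
        """Flag a message if any blocked term appears anywhere inside it."""
        lowered = message.lower()
        return any(term in lowered for term in BLOCKED)

    # Both of these innocent messages get flagged (the classic "Scunthorpe problem"):
    print(is_flagged("You can pass the class assignment."))  # True
    print(is_flagged("Hello from Shell Beach!"))             # True

Broaden the matching to catch more abuse and you sweep in more innocent speech; narrow it and determined harassers route around it. That tension is exactly why these tools have to be approached delicately.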

I don’t accept things as they are. Improvement is possible, and new ideas should be embraced and tested. I’m most interested in tools that allow people to protect themselves and better control their time online. But this idea is not a good one. Repealing Section 230 would not only chill speech; it would absolutely harm those who already take these issues seriously and try to maintain platforms where people can be critical without harassing. That balance is one we fight for every day.

There are online communities, blogs and social spaces that are run by people who aren’t responsible. I don’t care for these people and have taken many of them to task publicly. Section 230 protects them. When Chu says that “companies simply will not change how they do moderation if there’s no downside risk for them if they don’t do it,” this is true for at least some companies. I don’t like it, and it’s a problem.

I recognize this idea is at least partially born of the fact that it’s not easy to identify online harassers. Forcing provider liability is a way to get around that, because it’s usually easier to identify the owner of a big website than the person who commented on it. As someone who kicks harassers off of online communities, I’d love to be able to identify them and nail them down, so they can’t come back.

I’m not a hard-core “free speech at all costs” person. Section 230 could be adjusted or even replaced, but repealing it wholesale is not the answer. Making me liable for speech posted on my forums won’t solve this problem; it will just create new ones.