News organizations and online comments. If you think about that combination, what comes to mind?

There was a time when many regarded the comment sections on mainstream media sites as examples of some of the worst discourse on the web. But those sections are slowly getting better. Among the community management professionals leading the charge, at the highest levels of the media, are Bassey Etim and David Williams.

They work as community managers at The New York Times and CNN, respectively. Both have been in the field since 2008, and both lead the teams responsible for moderating the comments posted on their news organizations’ websites.

I’ve known David for a few years now and just recently connected with Bassey. They are tremendously smart community managers and experts in moderation. If you work in this profession, you should know their names. They deal with moderation at a volume that few can fathom, in an environment that is highly charged, in a space where many people expect to be able to say their piece, no matter what that is, without restriction.

Because of my respect for both, I wanted to sit down with them, ask some questions and find out more about how moderation works at CNN and The New York Times. I believe that the resulting answers are really interesting and provide valuable insight. There is a lot to learn here. Thank you to Bassey and David for agreeing to participate.

Introduction

Can you tell me about your background and how you came to work in community management?

Bassey Etim (BE): My work background is pretty simple: I worked at my college newspaper, The Badger Herald, then got an opportunity to do consulting work for nytimes.com as I approached graduation, and stayed on as a News Assistant for the newly founded Community desk. I was recruited because of my experience live blogging, and also for my philosophical inquisitiveness. Really, it was a happy accident that fit my skill set quite well.

As far as my personal background, I’m a kid from the inner city who got my hands on some giant atlas and became a physics nerd. Then I began writing poetry, then music and eventually news. Now, I’m a journalist and musician working on my second novel.

At the end of the day, I’m a news junkie and an obsessive writer, with plenty of experience managing teams working toward creative goals.

David Williams (DW): I got into community management by accident. Shortly after I started as a producer with CNN iReport in 2008, I helped settle an issue between two users. There was another matter involving one of those users, so I handled that because I knew her.

More and more issues came up as iReport grew, and the community quickly became my full-time responsibility. So I had been working as a community manager for about six months before I knew what community management was.

My background is in journalism. I have been at CNN since 1994. I started on the TV side at HLN and moved over to the digital side in 1999. I’ve done a little bit of everything over the years.

Community at Your Organization

How big is the team that you lead and what, ultimately, is the team responsible for?

DW: Community and comment moderation is one part of CNN’s larger participation team. I have a writer-producer, who also works with our social publishing team, and we have a service that provides 24/7 moderation for our comments and iReport.

I oversee our moderation to make sure everyone is following CNN’s guidelines, and I work with our writers and producers to make sure they are aware of interesting conversations that could add a new perspective to a story or inspire a new article.

I also help our users with any issues they may have, along with our Viewer Communications Management department.

BE: We’ve got a dozen folks on the community moderation team. We’re responsible, in one sense or another, for nearly every piece of user-generated content on nytimes.com.

For your organization, what is the purpose of moderation?

BE: The best answer I can give is our internal boilerplate:

“Publishing comments both rounds out our coverage and utilizes our greatest strength: our unique readership. NYT readers are well-informed, passionate and, more often than not, highly articulate and civil. To serve those readers, we’ve decided to moderate comments to protect our conversations from those tenacious few who would try to derail them.”

DW: I think the ultimate goal of moderation is to encourage good conversations. A lot of smart, funny and talented people come to CNN every day and they have interesting perspectives on the news. Our job is to keep the conversations civil and on topic, so people feel safe to speak freely.

Moderation Technology

Let’s talk about the robotic side of the team. What type of technology do you have at your disposal as far as the posting, management and moderation of comments? Please take me through the process that a new comment goes through. If I visit an article that has comments enabled and I leave a comment, what happens to it?

David Williams, CNN

DW: All of our comments go through a profanity filter before appearing on the site. If the comment contains a word in our list or a link, it goes into pre-moderation. Human moderators review comments in that queue to make sure nothing was caught by mistake. Otherwise, your comment will publish directly to the site.
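
To make that flow concrete, here is a minimal sketch of this style of triage in Python. The word list, regex, and function name are hypothetical illustrations, not CNN’s actual filter:

```python
import re

# Hypothetical blocked-word list; the real filter list is not public.
BLOCKED_WORDS = {"darn", "heck"}
LINK_PATTERN = re.compile(r"https?://|www\.", re.IGNORECASE)

def triage_comment(text: str) -> str:
    """Route a new comment: a blocked word or a link sends it to the
    pre-moderation queue for human review; anything else publishes
    directly to the site."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    if words & BLOCKED_WORDS or LINK_PATTERN.search(text):
        return "pre-moderation"  # held for a human moderator
    return "publish"             # appears on the site immediately
```

So `triage_comment("Great story!")` publishes immediately, while `triage_comment("See www.example.com")` lands in the human review queue.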

BE: We use a proprietary moderation system as well as sentiment analysis tools to help us filter incoming comments. But it’s really pretty simple: comments arrive, we sort them in various ways, and then moderate them, primarily by hand.

For an average story, once your comment is submitted, profanity is marked up in red and it arrives in our CMS to await human moderation based on the order in which it was received and the relative importance of the article in question.

For some highly-trafficked stories, once your comment is submitted, it arrives in our CMS and may be approved automatically if your past actions indicate that we would be almost certain to approve it. (This is a very small percentage of our comments.) Otherwise, it may be sorted based on the probability that it would be approved and then moderated by hand in that order.

All of this is subject to change depending on whether the comments are time-dependent or based on other variables too numerous to list.
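
The probability-sorted queue Bassey describes can be pictured with a short sketch. Everything here (the threshold, the scoring, the names) is an assumption for illustration; the Times’ actual system is proprietary:

```python
import heapq
import itertools
from typing import Optional

AUTO_APPROVE_THRESHOLD = 0.99  # "almost certain" to be approved

_counter = itertools.count()   # tie-breaker that preserves arrival order
_queue: list = []              # min-heap; smallest priority value pops first

def submit(comment_id: str, p_approve: float, importance: float) -> str:
    """p_approve: estimated probability the comment would be approved,
    based on the commenter's history; importance: relative weight of
    the article the comment was left on."""
    if p_approve >= AUTO_APPROVE_THRESHOLD:
        return "auto-approved"  # a very small share of comments
    # Likelier-to-be-approved comments on more important articles are
    # reviewed first (a more negative value pops sooner from a min-heap).
    priority = -(p_approve * importance)
    heapq.heappush(_queue, (priority, next(_counter), comment_id))
    return "queued for human moderation"

def next_for_review() -> Optional[str]:
    """Hand the moderator the highest-priority waiting comment."""
    return heapq.heappop(_queue)[2] if _queue else None
```

The counter in each tuple keeps equally scored comments in arrival order, which mirrors the first-come, first-moderated handling of average stories.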

When to Allow Comments

How do you decide which stories have comments enabled and which do not?

BE: I’ll have to crib my Quora answer here:

The vast majority of NYT comments are handled by a human moderator. This means that we have to make an editorial decision about which comment threads we will open for comments each day. Also, we adhere to a sort-of “slow moderation” theory, which posits that the best way to respect the commenting efforts of our readership is to ensure that their comments exist in an urbane, literate environment. (Definitely not an approach that is good for everyone, but it works fabulously for us, for reasons you can probably divine.) Our goal is to have every NYT comment thread offer tangible added value to each article for our readership.

The costs to this approach are obvious: it takes a long time, and many stories do not get comments. But NYT readers expect the highest-quality everything from us, so that’s what we deliver.

To answer the root of your question, these are our general criteria for opening a story for comment, broadly in order of importance:

  • News value of the story
  • Projected reader interest in the story
  • Whether we have recently had comments on the same issue
  • Whether we can moderate the projected number of comments in a timely fashion

DW: We have comments enabled on all stories by default.

Consistency Among Moderators

What do you do to ensure consistent moderation between different moderators?

DW: We have written guidelines for our moderation staff and I have a weekly call with our moderation supervisor to make sure we’re all on the same page. We also talk over email or instant message whenever we need to.

BE: Good training and clear rules get us part of the way. But getting the rest of the way is an impossible task. I’m more concerned with making sure moderation is done well, not consistently.

If we keep refining our rules to encompass more cases, we’ll become more consistent. But by the same token, too many rules slow down moderation and become an enemy of common sense. I try to hire moderators who possess the instincts of our readership.

Removing Comments

When you remove a comment, are people notified in any way? Or do you ever reach out to people who posted a pretty good comment that had some issues?

BE: No, people are not typically notified of comment rejections. We don’t have the staff necessary to perform this service. But occasionally, when a comment is particularly valuable, we will reach out to the reader and explain that we may be able to approve a slightly modified version of the submitted comment.

DW: No. It would be nice if our system sent out a notification when we delete comments, but it doesn’t and we don’t have time to do it manually.

Bassey Etim
Credit: Chester Higgins/NYT

Mistakes happen. If you or a moderator removes the wrong comment or fails to approve a good one, how do you find out? Or is it simply lost in the volume?

DW: I look into our complaints and will restore comments if I feel the moderator made a mistake, but I usually agree with the moderators’ decisions.

BE: We generally find out when managers sweep through a moderator’s work after the fact, when a reader writes in to complain, or when a user flags the comment. Given the volume of comments we receive, however, it is likely that many of these comments fall through the cracks. Our users have made it quite clear to us that they prefer this type of occasional mishap to the vile comments seen elsewhere on the web. That said, we’re always trying to reduce these instances.

Do you ever edit comments? If so, for what reasons?

BE: If a user submits a subsequent comment asking for an edit, we comply, time permitting. We also edit comments for grammar and space if they are being featured outside of the comment section. Otherwise, we do not edit reader comments.

DW: No. We will either leave the comment the way it was written or delete it if it violates our Community Guidelines.

User Reports

How much do user reports factor into your moderation?

BE: Our moderators do get consistent updates on user reports or “flagged” comments. Those have proven to be the best way for us to discover bad comments that have slipped past. The flags are also a great forum for those with a certain political beef to send us a stream of useless reports.

Flags probably aren’t as big an aspect of our operation as they are for other sites, since we go through so many comments by hand in the first place. But when it comes to a big 1,000+ comment post, especially, those flags are indispensable. Most of our readers do a careful job of alerting us when we’ve made a mistake. Regular comments readers are, thankfully, well aware of our moderation practices.

DW: Our community plays a very important part in the moderation process. Users have the option to flag comments they find offensive and then we go through and decide if those comments violate our Terms of Use. It’s a really big help.
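
For readers curious how a flag queue might be organized, here is a toy sketch that tallies reader flags and surfaces the most-flagged published comments for review first. The structure and names are assumptions, not either site’s real tooling:

```python
from collections import Counter

# Running tally of reader flags per published comment.
flag_counts = Counter()

def record_flag(comment_id: str) -> None:
    """Called each time a reader flags a published comment."""
    flag_counts[comment_id] += 1

def review_queue(limit: int = 20) -> list:
    """Most-flagged comments first, so that on a 1,000+ comment
    thread the likeliest misses get human eyes soonest."""
    return [cid for cid, _ in flag_counts.most_common(limit)]
```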

Pre-Moderation vs. Post-Moderation

Let’s talk about pre-moderation vs. post-moderation. NYT has verified commenters, identified by the system, who are not pre-moderated; everyone else is. Meanwhile, CNN uses post-moderation, though some conversations may be limited to pre-moderation. I believe that both methods can be beneficial, and I’d like each of you to tell me why your organization has adopted the approach it has.

David, why has CNN opted for post-moderation?

DW: The biggest advantage of post-moderation is that the comments show up right away. That lets our users have real-time back-and-forth conversations without having to wait for the moderator to approve a comment.

Pre-moderation gives you more editorial control and lets you cherry-pick the best, most thoughtful comments that fit your site’s voice and mission.

It really comes down to what you want to accomplish.

If you want people to post fully formed arguments and aren’t terribly interested in interaction between users, then pre-moderation is probably the way to go. Post-moderation is better if your goal is for your members to talk with each other.

Pre-moderation is also a lot more labor intensive because someone has to read every comment and decide if it should be approved.

And Bassey, why has NYT embraced pre-moderation?

BE: We pre-moderate because people subscribe to The New York Times for a high quality stream of information and analysis. We don’t want the comments to feel as if they aren’t part of the Times or our culture.

If our news articles were not painstakingly edited and produced, why would anyone subscribe to The New York Times? By the same token, why would a Times subscriber want to participate in a comments thread that makes you feel as if you need to bathe afterward? The nature of our audience and our mission demands that everything we present is well curated.

A good comments system is an extension of the site it lives on. Times comments reflect Times standards. We’ve designed our moderation rules to play off of the stylebook for our journalism.

Anonymity

What’s your take on anonymity (meaning having no identity vs. having a username vs. requiring a real name) and how it affects your comments?

BE: In the past, we did see real identity as the key to ensuring a more civil comments space. It makes perfect sense in theory – after all, who would say such awful, hateful things in public with their names and job titles attached?

Turns out the answer is: an enormous number of people would say awful and hateful things with their names attached.

And even worse, many great commenters with innocent reasons to withhold their identities begin to self-censor, and then abandon the comment threads entirely.

And even worse than that, the hostile readers who remain make highly personal insults toward the pictures, names, jobs and families of innocent commenters. So eventually, all that may remain is a group of goons who are perfectly comfortable posting hate speech under their real names, while all of the socially functional human beings decline to post comments, for obvious reasons. Even if some of those people still read the comments for entertainment value alone, that’s a long-term death spiral for a community.

Real ID, in summation, may be the worst great idea the community industry has ever had.

DW: Anonymous commenting isn’t the problem. The problem is when commenters feel anonymous.

It is really important to let your community know that you’re listening and that you value what they have to say. Our audience comes from all walks of life, so they often have interesting perspectives and ideas that we wouldn’t have considered.

If you don’t pay attention, people will misbehave until you are forced to pay attention and that’s not the best way to build a relationship with your community.

Who Must Be Respected in Your Community?

I find that community guidelines are truly tested when our members talk about people the community doesn’t like, who aren’t present in the community, or who hold a really far-out minority opinion. Do your policies apply to all people, or just to people present in the conversation? For example, can I call Miley Cyrus a name that I couldn’t call your community’s top contributor?

DW: We apply our guidelines pretty uniformly. There’s only so much time in the day, so we will shift the moderators’ attention to stories that involve vulnerable people, or sensitive issues.

BE: Yes, you could call Miley a name that you could not call another Community member. Criticism of public figures, so long as it is on-topic, is treated more loosely than criticism of commenters or private citizens quoted in our reports.

Generally, a commenter must explain his criticism of another community member for the comment to be approved, and that criticism must have an intellectual basis concerning remarks made specifically in response to the article in question.

Banning

Bassey, in October you told Andrew Beaujon of Poynter that NYT had banned “fewer than 10 nonspammers” in its history. You also said that the people that are banned made “really racist statements combined with a death threat against a whole race.” I wanted to ask you to elaborate on this a bit.

When you say “history,” how long of a time period does that refer to?

BE: You’re right to ask for specification, because I meant the history of our current comments platform – since 2007. This may exclude some blogs that were using an old comments system and were merged into the modern platform over the following year or two. And it certainly excludes the old Times Forums from our prehistoric website, from which I have no records.

At the risk of splitting hairs, if people just make really racist statements, but don’t combine them with a death threat against a whole race, are they allowed to stay?

BE: Unless the racist sentiment is paired with a threat of violence, we would generally not ban that user. That said, it’s possible that such a user has been banned in the past if a reader or staffer truly felt threatened by the sentiment, but this would be quite rare.

I have to say, from an outside perspective that number seems extraordinarily low given the number of comments NYT must receive. Is there any chance this is just a matter of terminology? For example, let’s say you have a member who leaves terrible comments. You never ban them, you just never approve their comments. They are effectively banned, even if you don’t actually hit the ban button. Is that what is happening?

BE: No. Even if we do internally flag a user for increased review, each of that user’s comments is still moderated in due course.

You seem to be getting at the distinction implied here, though – we mark users in a way that tells the moderator: “Warning! Read this carefully. History of bad stuff here.” But the moderators are instructed to read and moderate those comments in any case, just much more closely. It’s fair to assume that users in this camp can get away with fewer minor infractions than any other comment writer could.

When you do ban someone, is there any system of documentation for that?

DW: Our Terms of Use lay out the reasons why users can be banned. We keep a list of banned users in our commenting platform and we can pull up their comments if there are questions.

BE: All of the banning goes directly through me to the dev team. There is a record of these users but not one that’s readily accessible. I’ve always vetted ban requests to be sure that they are, in fact, spammers.

Policy Changes

What is the process for policy changes, like if you need to update your commenting guidelines?

BE: I’ll discuss a policy proposal with the moderation team first, then I run it by my editor, then we update the guidelines and send out a memo. Most of the changes are so small at this point that there’s no reason to alert the news desk at-large. But when we do consider a major issue, the Standards editor is part of that conversation, as well.

DW: We work with our legal department to make sure that any changes are worded properly and will have the intended effect. I also have a weekly call with our moderation team to make sure that we are all on the same page.

Stories from the Field

Moderation is an interesting responsibility. You run into all types of people. Some people appreciate what you do. Some people don’t. And they threaten you or they say mean things. Sometimes those threats are serious, sometimes you can laugh at them. As your experience grows, so do the stories.

Do you have any memorable funny stories?

BE: You know, I’m always asked this question, and my response is always awful. But honestly, I’ve been at this for so long that I don’t really remember any particular incidents. It’s a bit of a blur. Sometimes I just think about the commenters I’ve interacted with, and really worked with, over the years, and I smile about the cumulative experience and feel thankful for the amazing readers I get to work with every day.

What I remember most is persistence. There were people who, during the 2008 primaries, posted hundreds of terrible comments about Sarah Palin and wrote to me directly (usually addressing me as “censor”), querying me about my life, how I moderate, or why I would do something like this. Then they post a normal comment after days of abuse, and I quickly approve it. Then they write back more, submit another normal comment, and I approve it again. Then, over time, we gain respect and maybe even a certain love for each other. Over the years I’ve had the pleasure of meeting a lot of those folks via email – their passion is contagious.

DW: I laugh a lot at this job. A lot of trolls are just troublemakers, but some of them are pretty clever. One of the funnier pranks was when someone created a fake account pretending to be me and announced that I was resigning and gave some off-color personal reasons for my departure. Another troll ratted the person out, so that post wasn’t up for long.

Have you had any serious, credible threats that had to be looked into?

DW: I can’t comment on this, but we are lucky that CNN has a very professional security department.

BE: In my opinion, no. But I forward on all of these types of comments to the relevant personnel here, and they have taken some of these issues quite seriously. However, at that point, I’m out of the loop, so I’m unaware of any distinct police action related to a comment we’ve received.

Bad stuff aside, what have been some of the highlights of interacting with your members?

BE: The biggest highlight is that they keep coming back every day, for years. They feel like it’s a sort-of family, and I feel the same way. I feel a responsibility to look out for them, and to be sure their views are heard and respected, and to continue to try and increase the prominence of Community across the site.

DW: Our community is very diverse, so it’s really interesting to see which conversations take off. Stories about depression and other mental health issues do really well and our commenters are generally supportive and kind to each other. We also have sub-communities that have popped up around aviation, travel, food and other topics.

The exciting thing to me is that CNN is a gathering place for people who care about the news. It’s neat to watch people build real relationships in the comments on our site.

Looking Forward

Are there any recent or upcoming developments in online commenting or moderation (tech or otherwise) that you are excited about? What will 2014 mean for your comments section?

DW: CNN Digital is in the process of rolling out a major update to the site. This is going to give us new opportunities to talk with our community and for them to talk with each other. It’s going to be really interesting to see what they do with the new tools.

BE: Definitely, but you will surely forgive me for refusing to answer the question with any detail.

We’ll keep experimenting with new formats, of course. But I’m primarily concerned with building a sustainable future for our community. They don’t have all of the features they want or need to be totally self-sufficient, I know.

But the great thing about newspapers is that every day you have a new chance to build something great, and I’ll keep working toward developing the Community apparatus that our readers deserve.