Community managers and abuse teams have a sense of the health of a community. All sorts of tools are used to search for inappropriate behavior or content. But how successful are they? In a Friend-driven system, if someone is posting child porn, you had better be paying detailed attention to that person's Friends. And if you want to curb problematic behavior, you need to think of the problem in terms of networks, not individuals. Further, while we can all agree that killing off some behavior is an absolute imperative, what about the gray areas? The health of a community has a lot to do with its network, and pruning helps only if you prune wisely. How can selective censorship be done properly, so that everyone can stay while knowing that they can be banned for violating a certain code of conduct?
Network effects are also critical for deployment. People pick up the things that their friends use. This is all well and good if everyone can get access to the same platform, but when that's not the case, new problems emerge. We're all developing nice new social technologies for the mobile phone. Yet even when people want those technologies, they aren't taking off. Why? There are no cluster effects. If you use IE and I use Firefox, we can still both get to Facebook. If you use Windows Mobile and I use an iPhone, the chances of us being able to do the same things with our devices are pretty limited. We can't roll out cool new technologies if there are no cluster effects.