
Social Media and Chat Monitoring

Suppose a system could help alert people to online sexual predators. Many might like that. But suppose that same system could allow people to look for gun purchasers, government critics, or activists of any sort; what would we say then? That tension is now before us.

Mashable reports that Facebook and other platforms are monitoring chats for signs of suspected criminal activity. The article focuses on the child predator use case. Words are scanned for danger signals. Then “The software pays more attention to chats between users who don’t already have a well-established connection on the site and whose profile data indicate something may be wrong, such as a wide age gap. The scanning program is also ‘smart’ — it’s taught to keep an eye out for certain phrases found in the previously obtained chat records from criminals including sexual predators.” After a flag is raised, a person decides whether to notify police. The article does not discuss other uses of such a system.

Yet again, we smash our heads against the speech, security, and privacy walls. I expect some protests and some support for the move. Blood may spill on old battlegrounds. Nonetheless, I think the problems the practice creates merit the fight. The privacy harms and the speech harms mean that even if “false positives” in the sexual predator realm are rare, we should sort out why a company gets to decide to notify police, how the system might be co-opted for other uses, and the effect on people’s ability to talk online as social platforms start to implement monitoring systems.
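
For readers who want a concrete picture of the mechanics the article describes, here is a minimal sketch, in Python, of that kind of flagging heuristic: match chat text against a phrase watchlist, give extra weight to chats between users with no established connection or with a wide age gap, and send anything above a threshold to a human reviewer. Every phrase, field name, and threshold below is a hypothetical illustration for discussion, not a description of Facebook’s actual software.

```python
# Hypothetical sketch of the flagging approach the article describes:
# phrase matching, weighted by relationship signals, followed by human review.
from dataclasses import dataclass, field

# Placeholder phrases; per the article, a real system would draw on phrases
# found in previously obtained chat records from criminals.
WATCHLIST_PHRASES = {"example risky phrase", "another flagged phrase"}

REVIEW_THRESHOLD = 2.0  # hypothetical score above which a human reviews the chat


@dataclass
class Chat:
    user_a_age: int
    user_b_age: int
    established_connection: bool  # e.g., a long-standing connection on the site
    messages: list = field(default_factory=list)


def risk_score(chat: Chat) -> float:
    """Score a chat using the signals the article mentions."""
    text = " ".join(chat.messages).lower()
    # Each watchlist phrase found adds to the score.
    score = sum(1.0 for phrase in WATCHLIST_PHRASES if phrase in text)
    # Relationship signals increase the weight of any phrase matches.
    if not chat.established_connection:
        score *= 1.5
    if abs(chat.user_a_age - chat.user_b_age) >= 15:  # a "wide age gap"
        score *= 1.5
    return score


def flag_for_human_review(chat: Chat) -> bool:
    """The software only raises a flag; a person decides whether to notify police."""
    return risk_score(chat) >= REVIEW_THRESHOLD


if __name__ == "__main__":
    chat = Chat(
        user_a_age=45,
        user_b_age=13,
        established_connection=False,
        messages=["hello", "example risky phrase"],
    )
    print(flag_for_human_review(chat))  # True under these hypothetical settings
```

Even in this toy version, the policy questions surface immediately: who chooses the phrases, who sets the threshold, and who reviews the flags.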