Twitter makes humans look like bots

The advances in artificial intelligence and the rise of businesses that develop and employ chat-based bots mean that it is getting increasingly hard to know whether you are dealing with a machine or a human being. The technology behind bots has become so sophisticated that it can require a longer conversation to be sure there is no person of flesh and blood on the other end, as illustrated in this exchange with a Google support “employee” which I linked to in yesterday’s reading list. Basically, one has to run a kind of freestyle Turing test. In other cases, the opposite happens: users assume they are interacting with a machine but are in fact chatting with a real human who only pretends to be a bot. An “Anti-Turing Test”, as conducted in this example with Facebook’s experimental personal assistant M, can reveal this.

Bots pretending to be humans, humans pretending to be bots – sounds a bit bizarre, doesn’t it? Here is something else bizarre:

Think about what’s typical for a contemporary chatbot, the kind used by large companies for customer service, such as in the Google example above, or by telecom operators (I recently had a chat interaction with T-Mobile that made me suspicious I was conversing with a machine pretending to be a human):

  • Their messages seem prefabricated, built from pre-existing content blocks.
  • They are predictable, especially after you have had a longer conversation with them.
  • They sometimes fail to fully address the inquiry, especially if it’s complex.
  • They don’t get irony, and they neither respond well to humor nor have any of their own.
  • They usually don’t write long texts but stick to one or two sentences at a time.

Now, doesn’t this sound familiar?

Yep, this is a quite accurate description of what’s happening on Twitter. And I am not referring to the actual bots that populate the service. I am referring to real humans. To some extent, Twitter, with its 140-character limit and its encouragement of immediacy and impulsive comments, has turned its users into bot-like creatures who keep tweeting the same lines, the same reactions, the same ideas, the same arguments. If you are a Twitter user and don’t believe this, just type “[often used word(s)] from:yourusername” into Twitter search. Looking at my own results was pretty uncomfortable.

Sure, there is more humor and irony on Twitter than you can expect from an encounter with a customer service bot. But only among a subset of users. And only as long as the discussion doesn’t touch sensitive topics such as [enter random object of outrage]. If that happens, everyone sticks to their prefabricated text blocks and appears to follow a very narrow conversation protocol.

The root cause of this is obviously not Twitter, but the human mind. Conversations on Facebook and other text-based social media platforms sometimes also look like strange interactions between bots. But the 140-character limit of Twitter forces people into a corset that makes it hard not to appear like a bot: hard to be unpredictable, to be sophisticated, to acknowledge complexity, to maintain a sense of humor.

Twitter recently announced that it will raise the character limit. In my opinion, it’s the right move. At a time when bots are constantly getting better at imitating humans, there is nothing to gain from a service that makes humans look like bots in public.

