Before Instagram, TikTok, Myspace, or Facebook, there was no such thing as a hashtag or a virtual timeline. The internet was dominated by forums and instant messaging programs where people with similar interests congregated. As tech companies shifted their focus toward the individual, more and more personalized services populated the increasingly crowded social media market. YouTube and Twitter began recommending videos or tweets they thought users would enjoy rather than forcing people to actively search for the content they preferred. Hashtags were introduced, allowing people to promote their content to a specific audience. With more and more users engaging with these suggestions, Big Tech finally had the data to create something unprecedented at the time: algorithms that could mimic human behavior.
Like all other businesses, social media conglomerates have one goal: to maximize profit. To do this, they can try to increase either the number of users or the amount of engagement per user. In a perfect world, both would be possible simultaneously. In practice, as more users join, it becomes harder to keep them around before another application lures them away. This is why machine learning has become so central to social media: if a platform can’t quickly detect someone’s interests, it becomes much harder to keep that person interacting with its content.
The ideal user experience, from a platform’s point of view, is a highly specific and tailor-made environment in which attention and uninterrupted focus reinforce each other. A combination of easy access to private messaging, real-time timeline updates, and frequent push notifications would rarely let the user take even a minute-long break. However, excessive optimization can result in the systematic inhibition of specific voices, defeating the purpose of algorithms that are meant to be inclusive.
Hashtags and keywords make the process somewhat more controllable, but there is no guarantee that results will look the same for everyone. For instance, users who search the same phrase on Twitter may get remarkably different results depending on the profile Twitter has already built for them. All the tweets containing that phrase will appear, just not in the same order. The order of a search result can depend on the accounts a user follows, the accounts a user engages with most, verified status, engagement rates, and other factors. Even under the veil of personalization, Twitter retains discretion over what it chooses to display, which has led to controversy.
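To make that idea concrete, here is a purely illustrative sketch of personalized search ranking. The factors, weights, and account names are hypothetical inventions for the example, not Twitter’s actual system; the point is only that the same matching tweets can come back in a different order for different users.

```python
# Purely illustrative sketch of personalized search ordering.
# Factor names and weights are hypothetical, not Twitter's real algorithm.

from dataclasses import dataclass, field


@dataclass
class Tweet:
    author: str
    engagement_rate: float   # e.g., likes and retweets per impression
    author_verified: bool


@dataclass
class UserProfile:
    follows: set = field(default_factory=set)
    frequently_engaged: set = field(default_factory=set)


def rank_search_results(tweets, user):
    """Every matching tweet is returned, but the order depends on the user."""
    def score(tweet):
        s = tweet.engagement_rate                  # baseline popularity
        if tweet.author in user.follows:
            s += 2.0                               # boost accounts the user follows
        if tweet.author in user.frequently_engaged:
            s += 3.0                               # boost accounts the user interacts with most
        if tweet.author_verified:
            s += 1.0                               # boost verified accounts
        return s
    return sorted(tweets, key=score, reverse=True)


# Two users searching the same phrase see the same tweets in different orders.
tweets = [Tweet("small_creator", 0.9, False), Tweet("celebrity", 0.5, True)]
fan = UserProfile(follows={"celebrity"}, frequently_engaged={"celebrity"})
newcomer = UserProfile()
print([t.author for t in rank_search_results(tweets, fan)])       # celebrity first
print([t.author for t in rank_search_results(tweets, newcomer)])  # small_creator first
```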
For instance, everyday users on Twitter struggle to reach high positions in the trending chart and search index because of their smaller follower counts, a phenomenon that markedly affects small accounts trying to raise awareness of certain issues. Because the trending tab can already be clogged during a major sporting event or album release, it is difficult for those without a large built-in audience to reach other people effectively.
Over the past decade, the number of small content creators boomed as it became easier to document one’s life. The pandemic accelerated this movement as social media suddenly replaced face-to-face communication. YouTube channels and TikTok accounts became more popular than ever, with many users falling within the 18-to-29-year-old demographic. None of this made it any easier to become an overnight sensation, however. Creating a viral video still requires a good amount of luck, since a single computer-generated recommendation can set off a chain reaction of views and likes.
The role that a real-life moderator might play, reviewing the best tweets or videos and picking the most engaging ones, is instead handed to a machine. As much as a machine can be taught to behave like a human, problematic initial data can crack its foundation. In 2016, Microsoft introduced a Twitter chatbot, Tay, that would take the conversations it had with users, learn from them, and develop a language of its own. Unlike most earlier chatbots, Tay did not rely on pre-written scripts; it depended solely on people to shape its dialect. Users immediately took advantage of this, feeding the bot racist and misogynistic language that in turn caused it to tweet offensive phrases. Microsoft had to suspend the account just 16 hours after its release.
Accusations of biased artificial intelligence have also been leveled against TikTok and Twitter. Jalaiah Harmon, the 15-year-old who created the hugely popular “Renegade” dance, was not initially recognized because of the size of her account. Other content creators, such as Charli D’Amelio, one of the most-followed users on TikTok, performed the same choreography and received the credit because of their account size. Like other social media platforms, TikTok has a “For You” page of recommended content for each user. Large accounts have an inherent advantage here because of their engagement rates, making it difficult for small accounts to find the spotlight.
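One way to picture that advantage is as a feedback loop: the more engagement an account already has, the more often it gets recommended, and the more engagement it earns as a result. The toy simulation below is a hypothetical sketch of that dynamic; the account names, numbers, and weighting are invented for illustration and are not TikTok’s actual recommendation system.

```python
# Hypothetical "rich get richer" feedback loop in a recommendation feed.
# All values and the weighting scheme are illustrative only.

import random

# Starting reach for two creators (invented numbers).
accounts = {"big_account": 1_000_000, "small_account": 1_000}


def pick_recommendation(accounts):
    """Weight each account's chance of being recommended by its current reach."""
    names = list(accounts)
    weights = [accounts[n] for n in names]
    return random.choices(names, weights=weights, k=1)[0]


# Simulate a stream of For You-style recommendations: each one converts
# into a handful of new followers, compounding the initial gap.
for _ in range(10_000):
    accounts[pick_recommendation(accounts)] += 5

print(accounts)  # the large account captures almost all of the new growth
```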
Black TikTok content creators say that Harmon’s story is not an isolated one. The algorithm behind the For You page, they argue, is racially biased, favoring white TikTokers who profit from ideas that Black TikTokers came up with first. When TikTok failed to take meaningful action, Black creators took matters into their own hands and went “on strike” after Megan Thee Stallion released her single “Thot S***,” a song whose lyrics and catchy beat are the perfect combination for a new trend. A few weeks later, no viral dance had caught on; instead, the sound was filled with videos that mocked mediocre dance tutorials. The strike was an effective and eye-opening demonstration of just how much power the algorithm holds. Nothing went viral because there were few dances to begin with, preventing big accounts from leeching off small ones and using TikTok’s For You page to their advantage.
It’s promising to see that humans can still exert some control over machines, but doing so requires a coordinated movement and quick mobilization. The power of the algorithm can only be changed by the social media companies themselves. While most have said they would analyze their programs for bias, only time will tell whether those algorithms change for the better. As much as artificial intelligence has helped us progress as a society, we must also recognize its drawbacks and understand how to counter its fundamental flaws. When we give power to inanimate code and data, voices become unfairly suppressed.