Researchers at Oxford University have found that online detection models are less effective at spotting hateful comments made with emojis than those made with words.
Emojis are small pictorial symbols, or "picture letters", that can be used in text and on social media platforms.
The researchers carried out what they call a "Hatemoji check," and stated that "hateful content is complex and diverse, which makes it challenging for detection systems."
After England's defeat in the Euro 2020 final, there was an influx of online racial abuse directed at footballers Jadon Sancho, Marcus Rashford and Bukayo Saka. In their published study findings, Oxford University researchers said there was "widespread racist use of the monkey, banana and watermelon emoji."
In a statement, the social platform said: "Following the tournament, we undertook our own analysis of the Tweets removed and accounts suspended... Given the international nature of the Euro 2020 Final, it was no surprise to see that the Tweets we removed came from all over the world.
"However, while many have quite rightly highlighted the global nature of the conversation, it is also important to acknowledge that the UK was - by far - the largest country of origin for the abusive Tweets we removed on the night of the Final and in the days that followed."
The researchers at Oxford University looked at several examples of online hate comments, spotting where hate messages online were made up of emojis in place of threatening and hateful verbs, and where identity terms, such as "black people," were replaced with emojis that represented them.
They also stated, in their published findings, that there are "critical model weaknesses" in detecting emoji-based hate, and that current commercial and academic models "perform poorly" at identifying hate when the identity term is replaced with an emoji - even though they perform well at identifying hateful words.
They concluded that their findings indicate the current models "do not understand what the identity emoji represent."