Earlier this week, I started a search on Amazon and mistyped, entering just two letters: “iv.” Amazon then helpfully compiled a list of suggested searches, almost all of which were for the horse de-worming version of the drug ivermectin. That drug is at the center of one of the more recent and more mind-boggling chapters of the anti-vaccine misinformation story: a false cure touted by hucksters and others out to make a quick buck.
This evening, Amazon spokesperson Craig Andrews tells The Verge that “Amazon’s autocomplete responses are driven by customer activity. We are blocking certain autocomplete responses to address these concerns.”
Like Facebook, TikTok, and Reddit, Amazon has struggled to limit the spread of COVID-19 misinformation. Unlike those platforms and others, however, Amazon has seemingly done very little to try to stop it.
Amazon is not unique in using an algorithm to drive its autocomplete results. But as companies like Google have learned, there are “data voids” around previously unpopular search terms that can suddenly skew the algorithm when those terms get swept up in a new misinformation campaign. Google has gone to some lengths to address the data void problem, most recently by presenting warnings on search results it thinks might suffer from the issue. As of the publication of this article, Amazon’s autocomplete suggestions still include entries for ivermectin.
When you click through to one of those suggested search terms, Amazon simply lists many options for purchasing the drug meant to treat animals — without any further context about its dangers when ingested by humans. Although there are legitimate uses for ivermectin in humans, treating COVID-19 isn’t one of them. And though it should go without saying that taking the veterinary version is a very bad idea, that’s exactly what’s happening.
In a bulletin issued by the Mississippi State Department of Health, the agency says that “At least 70% of the recent calls [to poison control] have been related to ingestion of livestock or animal formulations of ivermectin.”
Ivermectin can cause side effects including “skin rash, nausea, vomiting, diarrhea, stomach pain, facial or limb swelling, neurologic adverse events (dizziness, seizures, confusion), sudden drop in blood pressure, severe skin rash potentially requiring hospitalization and liver injury (hepatitis),” according to the FDA.
Where other platforms have put effort into presenting information boxes that lead to reliable, trustworthy information about COVID-19 and its treatments, Amazon shows no such information either in its search results or on product pages for ivermectin. An Amazon spokesperson tells me that a search specifically for “ivermectin for covid” does display a link to the FDA’s warning page about the drug.
On at least some product listings, Amazon’s moderation teams do appear to be keeping an eye out for problems: some of the top search results for ivermectin have very few written reviews. Scroll down a bit, however, and there are plenty that are clearly meant to discuss the drug’s use in humans. One reviewer, for example, wrote that it “Worked good on my 200lb horse” and that “long CV already going away.” Another rated a product 5 stars, writing that “I love the taste […] Ivermectin too hard to get, but so effective at preventing something I cannot mention here.”
Over on Reddit, more than a hundred subreddits went dark today in protest of that platform’s refusal to ban communities that spread COVID-19 misinformation. TikTok appears to be playing whack-a-mole with videos on its platform, and Facebook has struggled to enforce its policies against misinformation as communities resort to euphemisms like “moo juice.”
On Amazon, veterinary ivermectin remains easy to find, with no science-based information to warn a consumer of the dangers of taking a drug meant for animals. It also remains easy to purchase, in some cases with free Amazon Prime delivery, fulfilled from an Amazon warehouse.
Hey Amazon this autocomplete result for just two letters is VERY BAD on so many levels. Not only does it reveal how easily your algorithm can be skewed by misinformation, it’s also possibly leading to genuine, human harm. pic.twitter.com/A2hfsKIcQr
— Dieter Bohn (@backlon) August 31, 2021