Fake reviews are a big problem: How AI can help, according to Trustpilot’s Chief Trust Officer



AI generated thumbs up

peepo/Getty Images

Trustpilot, founded in 2007, is a site that aggregates user reviews of companies and websites. The company hosts 238 million reviews covering nearly a million websites across some 50 countries.

Although Trustpilot offers reviews of US-based businesses, the few local shops I looked for weren’t listed. I had better luck on Yelp. Trustpilot seems to have a much stronger presence in Europe.


Anoop Joshi, Trustpilot’s Chief Trust Officer

For our purposes in this article, it doesn’t matter where the preponderance of profiled companies are located. This article focuses on a problem endemic to review sites: fake reviews.

Also: When’s the right time to invest in AI? 4 ways to help you decide

In 2023 alone, Trustpilot identified 3.3 million fake reviews on its site. That’s after eliminating 2.6 million just the year before. Worse, according to research documented in the Proceedings of the National Academy of Sciences of the United States of America (PNAS), only about half of consumers can distinguish between text written by artificial intelligence and text written by a real human being.

The rise of generative AI leaves consumers and companies like Trustpilot with an increasingly serious problem: filtering out fake reviews and identifying real opinions by real consumers.

Also: Generative AI is the technology that IT feels the most pressure to exploit

Trustpilot has made this challenge a key mission of the company. ZDNET spoke with Anoop Joshi, Trustpilot’s chief trust officer, to learn how the company is combating AI-generated fake reviews. It’s quite an interesting challenge.

And with that, let’s get started.

ZDNET: Can you share your journey to becoming Trustpilot’s Chief Trust Officer?

Anoop Joshi: As Trustpilot’s chief trust officer, I oversee our Trust and Safety and Legal and Privacy operations with a team of around 80, covering a wide range of activities across litigation, public affairs, global comms, commercial contracting, content moderation, brand protection, and fraud investigations.

I joined Trustpilot over four years ago. I was initially responsible for the company’s enforcement-related work, meaning the actions taken against misuse on the Trustpilot platform by businesses or consumers. This included overseeing and supporting our actions to tackle fake reviews and investigate forms of abuse and misuse. Litigation was also a part of this role, specifically relating to content posted on the platform and claims submitted by businesses attempting to have reviews removed or hidden on the platform.

Also: Generative AI can transform customer experiences. But only if you focus on other areas first

This team developed into the company’s first platform integrity team and became more involved with the operational side of trust and safety, leading to greater prominence of the work we were doing at an industry level. Our impact was recognized as Trustpilot became a founding member of the Coalition of Trusted Reviews, together with Amazon, TripAdvisor, Glassdoor, Booking.com, Expedia, and others, with the goal of further improving trust in online reviews.

I have a background as a lawyer and software engineer, and today that mixed background supports my chief trust officer role at Trustpilot. Critically, we’re at a place where law and technology intersect in multiple different ways, and this is particularly the case for Trustpilot when it comes to building and earning trust.

ZDNET: How do you define the role of a chief trust officer in today’s digital landscape?

AJ: At Trustpilot, our vision is to be the universal symbol of trust, and this role exists to ensure we’re delivering on that commitment. As the chief trust officer, I’m responsible for establishing what trust means at Trustpilot. A large part of that is our reviews, the content on our website, and the way we treat our customers, both consumers and businesses.

It’s also about driving the governance and processes that mitigate risk, enable compliance and ultimately, earn the trust and the loyalty of our stakeholders, which include consumers, employees, businesses that use Trustpilot, investors, policymakers, journalists, and more.

As technology becomes increasingly pervasive in the work of organizations across the world, and more and more engagement happens online, the question of trust will continue to surface, and I expect we’ll start to see more demand for this type of role in the C-suite.

ZDNET: What are the most common fake reviews you encounter on Trustpilot?

AJ: We define fake reviews as reviews that aren’t based on a genuine experience or have otherwise been left as an attempt to mislead the reader in some way. The types we commonly come across and remove are:

  • Spam reviews: Reviews that are ultimately some form of advertisement, or that masquerade as a promotion for another business
  • Conflict-of-interest reviews: An owner or employee of a business reviewing that business itself
  • Reviews left as an attempt to mislead: Reviews submitted by someone who has had no experience with the business at all
  • Incentive-based reviews: Reviews whose motivation is an incentive rather than a genuine experience, making the review itself misleading

ZDNET: How has the rise of AI-generated content impacted the authenticity of online reviews?

AJ: Generative AI in this space has reduced the cost for individuals to create content. As a platform, Trustpilot has designed its automated systems and engines to detect fake reviews by focusing on behaviors.

Our engines look at how a review got onto Trustpilot, examining the behavior of the user who submitted it and looking for patterns or suspicious markers. While the content of the review is absolutely something we look at, it’s a small part of the overall picture when it comes to detecting fake reviews.

Also: Agile development can unlock the power of generative AI – here’s how

Our systems are constantly looking at the behaviors leading up to the submission of a review, and the findings in our latest Transparency Report show relative consistency year over year in the proportion of fake reviews detected.

This shows that since the launch of generative AI tools like ChatGPT, we have not seen a surge in fake reviews; our findings as a company have remained consistent.

ZDNET: Can you explain how Trustpilot’s AI and machine-learning systems detect fake reviews?

AJ: Every review submitted to Trustpilot is analyzed by automated fake-review detection engines. These engines look at different features or facets of a review, such as prior user behavior — what other reviews this user has submitted to the platform — or even promotional statements, to detect suspicious activity. Some patterns are not immediately apparent and may take time to evolve before we take action.
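Trustpilot hasn’t published the internals of its detection engines, but the behavioral approach Joshi describes can be sketched in miniature. In the toy example below, every name, weight, and threshold is invented for illustration; a production system would learn these from labeled data rather than hard-code them.

```python
from dataclasses import dataclass

@dataclass
class Review:
    reviewer_id: str
    text: str
    prior_review_count: int   # how many reviews this user has submitted before
    reviews_last_24h: int     # burst activity from the same account
    contains_promo_link: bool # promotional statements or URLs in the body

def suspicion_score(review: Review) -> float:
    """Combine behavioral signals into a 0..1 suspicion score.

    Weights and thresholds here are illustrative only.
    """
    score = 0.0
    if review.prior_review_count == 0:
        score += 0.2  # brand-new accounts carry less established trust
    if review.reviews_last_24h > 5:
        score += 0.4  # bursts of reviews suggest automation
    if review.contains_promo_link:
        score += 0.4  # spam reviews often advertise something
    return min(score, 1.0)

# A burst-posting new account with a promo link scores as highly suspicious.
suspect = Review("u1", "Buy cheap followers at ...", 0, 12, True)
normal = Review("u2", "Delivery took two days, packaging was fine.", 8, 1, False)
print(suspicion_score(suspect))  # 1.0
print(suspicion_score(normal))   # 0.0
```

Note that the review text itself barely figures into the score — consistent with Joshi’s point that content is only a small part of the overall picture.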

In addition to our detection engines, we rely on our Trustpilot community of consumers and businesses, who can flag any review they deem suspicious or believe breaches our guidelines. These flags go to our human moderators (our “content integrity team”), who then assess the review and determine what action to take.

Whenever we remove a review, we contact the reviewer directly to let them know the reasons why, and to give them an opportunity to challenge the decision.

Our detection engines and our content integrity team work hand-in-hand to continually improve our approach to detecting and removing fake reviews.

ZDNET: What challenges does Trustpilot face in distinguishing between genuine and fake reviews?

AJ: One of our biggest challenges is that some patterns of behavior are not immediately apparent; it takes time for them to develop and for us to determine that a review is, in fact, fake or misleading. This will always be a challenge when distinguishing between genuine and fake reviews.

ZDNET: How do you deal with the issue of keeping genuine reviews where users legitimately used AIs to help write them?

AJ: We look at whether reviewers have had a genuine experience with a business, and whether that experience is reflected in their review. We analyze a variety of factors when determining if a review is suspicious, which can include whether a reviewer copied text from another source (such as content generated elsewhere, including by a generative AI model).

Where these factors amount to a high degree of suspicion, we’ll automatically remove the review and let the reviewer know we’ve taken action, giving them an opportunity to challenge our decision.

Also: Rote automation is so last year: AI pushes more intelligence into software development

We think that’s the right balance to take when it comes to this emerging technology, acknowledging there are use cases where reviewers may use generative AI-based tools to help frame genuine experiences or to support reviewer needs, such as accessibility or neurodiversity.

ZDNET: How does Trustpilot balance the need for automated detection with the importance of human oversight?

AJ: In thinking about the platform’s future, we always have ensured, and always will ensure, that humans are involved in the design and implementation of the automation software we develop.

We acknowledge that automation is impactful in supporting operations at scale, but the nature of the problems we’re solving is human. Those problems and challenges change over time, so automation needs to adapt, and that adaptation is often driven by what we learn from human behavior.

ZDNET: How has the percentage of fake reviews detected changed over the years, and what factors have contributed to this?

AJ: Total reviews written on Trustpilot continue to increase year on year, from 46 million (FY 2022) to 54 million (FY 2023), an increase of 17%. With that growth, more fake reviews were removed in FY 2023: a total of 3.3 million, compared to 2.6 million in FY 2022. Proportionally, however, our removal rate has remained consistent at roughly 6% of total reviews year over year.
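The figures quoted above check out with a quick back-of-the-envelope calculation — the exact removal rates land at about 5.7% (FY 2022) and 6.1% (FY 2023), which Trustpilot rounds to a consistent 6%:

```python
# Review volumes and fake-review removals, using the figures quoted above.
totals = {"FY2022": 46_000_000, "FY2023": 54_000_000}
removed = {"FY2022": 2_600_000, "FY2023": 3_300_000}

# Year-on-year growth in total reviews.
growth = (totals["FY2023"] - totals["FY2022"]) / totals["FY2022"]
print(f"Review growth: {growth:.0%}")          # ~17%

# Fake-review removal rate per year: both round to roughly 6%.
for year in totals:
    rate = removed[year] / totals[year]
    print(f"{year} removal rate: {rate:.1%}")  # ~5.7% and ~6.1%
```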

In 2023, 79% of fake reviews were detected and removed automatically by our detection systems, demonstrating that our continued investment in this technology is becoming increasingly effective. Meanwhile, as AI and machine learning continue to evolve rapidly, generative AI tools make it possible to produce written content quickly from a few simple prompts.

Also: 4 ways to help your organization overcome AI inertia

Recent research shows that participants in a study could distinguish between human and AI text with only 50-52% accuracy. Today, our investments in technology that focuses as much on how reviews get onto the platform as on their specific content mean we continue to identify and remove suspicious reviews, even where the content may have been generated using AI.

Additionally, the community on Trustpilot helps us to promote and protect trust on the platform. Our reviewer and business communities can flag a review to us at any time if they believe it breaches our guidelines. We refer to those reviews flagged to us as reported reviews.

By combining technologies like AI and machine learning with our community, we are able to continue providing a platform built on trust and transparency.

ZDNET: What are the long-term effects of fake reviews on consumer trust and business reputation?

AJ: Fake reviews can influence consumer decisions. A consumer who makes a purchase based on a fake review could ultimately have a bad experience, or at least not the experience they were expecting. Ultimately, this erodes their trust in online platforms.

And if platforms aren’t doing all that they can to reduce the likelihood of fake reviews, this will have long-term effects, as consumers will ultimately lose faith in the platforms that they rely on to make their buying decisions.

ZDNET: What ethical considerations guide Trustpilot’s use of AI in review moderation?

AJ: Ultimately it’s our commitment to transparency. Where we are using AI for automated decision-making, we are transparent about that fact. We design our platform for trust between consumers and businesses.

That transparency is at the core of the approach we take when using and developing AI tools for our platform, and it’s something consumers increasingly expect.

ZDNET: How do you educate consumers about distinguishing real reviews from fake ones?

AJ: We use Trust Signals to highlight verified reviews, plus reviewers have the ability to verify themselves. Our dedication to a high standard of verification ensures that consumers browsing Trustpilot are able to distinguish between the different types of reviews on our platform.

It’s another piece of our commitment to transparency throughout everything we do. Where we take enforcement actions against businesses for misuse of the platform, we display prominent banners (we call them Consumer Warnings) to help consumers make better-informed choices.

ZDNET: How do you foresee the future of AI in combating fake reviews evolving?

AJ: There are massive opportunities in using AI for platforms like ours. Generative AI specifically excels at pattern prediction and I’m interested to see how innovation develops using that technology to better identify fake reviews. We have been operating since 2007 and have a massive amount of data and experience in determining which reviews are fake and which are genuine to help us build better fake detection models.

Also: Want to work in AI? How to pivot your career in 5 steps

It’s also important to recognize that these technologies can be used to foster greater transparency, using the technology to support and guide people online, something we’re seeing a lot of when it comes to online chat. This technology is only going to improve over time, but with that level of sophistication comes a deep sense of responsibility.

ZDNET: What future developments do you envision in the landscape of online reviews?

AJ: Looking at the wider web, I expect the disparity between human-generated content and potentially AI-generated content will grow, impacting trust in online content. As a result, content created by real people, based on the experiences of real people, will become increasingly valuable in the future.

Platforms like Trustpilot, where we have invested in a combination of technology, people, community, and processes to highlight genuine, authentic voices and opinions, will provide more meaningful value to consumers and businesses.

Final thoughts

ZDNET’s editors and I would like to give a shoutout to Anoop Joshi for engaging in this in-depth interview. There’s a lot of food for thought here. Thank you, Anoop.

What do you think? Did these recommendations give you any insights into how to navigate the sea of online reviews? Let us know in the comments below.


You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.




