Please follow these steps when troubleshooting Rich Snippets for service reviews.
1) Make sure that the TrustBox code is implemented correctly
Check that both the TrustBox script and the TrustBox widget code snippet have been implemented correctly, and that the TrustBox loads normally on the page.
The SEO TrustBox must contain the data-schema-type="Organization" attribute:
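For reference, a typical TrustBox embed looks roughly like the sketch below. The template ID, business unit ID, and review URL are placeholders — replace them with the values from your Trustpilot Business account — and the exact attributes depend on which TrustBox you use:

```html
<!-- TrustBox script: usually placed once in the <head> of the page -->
<script type="text/javascript"
        src="//widget.trustpilot.com/bootstrap/v5/tp.widget.bootstrap.min.js"
        async></script>

<!-- TrustBox widget: placeholder IDs, replace with your own -->
<div class="trustpilot-widget"
     data-locale="en-US"
     data-template-id="YOUR-TEMPLATE-ID"
     data-businessunit-id="YOUR-BUSINESS-UNIT-ID"
     data-schema-type="Organization"> <!-- required for Rich Snippets -->
  <a href="https://www.trustpilot.com/review/example.com">Trustpilot</a>
</div>
```

If the data-schema-type="Organization" attribute is missing from the widget element, the TrustBox will still render, but Google will not find the review markup.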
Here’s a step-by-step video on how to check that the SEO attribute has been implemented correctly on a page:
2) Make sure that there aren’t any errors in the structured data
- Open Google’s Structured Data Testing Tool
- Paste the URL of the page where the SEO TrustBox is implemented
- Click Run Test
Here’s a step-by-step video on how to test the URL in Google’s Structured Data Testing Tool:
If Googlebot finds an error on the page, it may give up and decide not to crawl the page at all.
So, check that your page does not have any errors:
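When the test passes, the tool should detect an Organization item with an AggregateRating attached, roughly like the JSON-LD sketch below (the name and rating values here are purely illustrative):

```json
{
  "@context": "http://schema.org",
  "@type": "Organization",
  "name": "Example Company",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.5",
    "reviewCount": "1024"
  }
}
```

If the tool reports zero items, or the Organization item appears without the aggregateRating field, go back to step 1 and re-check the TrustBox implementation.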
3) Make sure that both actions in Google Search Console are performed: Fetch&Render and Request Indexing
- Open Google Search Console
- Go to Crawl > Fetch as Google
- Make sure that both actions were performed - Fetch&Render and Request Indexing
Here’s a step-by-step video on how to do Fetch&Render and Request Indexing:
4) Make sure that Googlebot and visitors see the TrustBox in the same way
To see how Google renders the TrustBox in Google Search Console, click a row to view the details of a fetch attempt.
It is important that both renderings look the same: there shouldn't be any difference between how Googlebot sees the page and how visitors see it.
If Googlebot can’t see the TrustBox, it might be because the page is too slow to load or perhaps something is blocking Googlebot from crawling the content.
At the bottom of the page, you'll see a list of resources that Googlebot couldn't fetch.
Here’s a step-by-step video on how to view renders:
Be patient when indexing
It is normal for Rich Snippet stars to appear shortly after an indexing request and then disappear until Google's normal indexing method picks up the page.
According to Google’s Webmaster Trends Analyst, John Mueller, there are essentially two ways to get indexed:
“...one is the normal indexing method where when pages get indexed they are more likely to stay in the index for longer periods of time. The second [is] "fast track" indexing where Google quickly puts it into the index, maybe because you used the submit to index feature in the fetch as Google in Google Search Console. Those pages may not stay in the index for long periods of time, not at least until the normal indexing method picks up the page.”
For more details, read Google Fast Track Indexing Vs. Normal Indexing.
What might block Googlebot from crawling and indexing?
There are many potential reasons why Googlebot may be blocked from crawling and indexing a page:
- Page speed is low
- Page code is bloated
- Sitemap is too complex
- The TrustBox has been placed too low on the page
- The TrustBox has been implemented on too many pages (Rich Snippets abuse)
- Duplicate content
- URL parameters are restricted from indexing to avoid duplicate content
- The robots.txt file is blocking Googlebot
- The .htaccess file is misconfigured
- Broken links, 404 errors, and incorrect redirects
- ...and many more
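To illustrate the robots.txt point above: a rule like the first block below would stop Googlebot from crawling the site at all, so the stars would never appear, while the second block permits crawling (the paths shown are examples only):

```
# Blocks Googlebot from the entire site - Rich Snippets cannot appear
User-agent: Googlebot
Disallow: /

# Allows Googlebot to crawl everything (an empty Disallow permits all paths)
User-agent: Googlebot
Disallow:
```

Also make sure robots.txt doesn't block the widget's own resources (for example the TrustBox script host), since Googlebot needs them to render the page the way visitors see it.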
Other things to keep in mind:
“Recrawling is not immediate or guaranteed. It typically takes several days for a successful request to be granted. Also, understand that we can't guarantee that Google will index all your changes, as Google relies on a complex algorithm to update indexed materials.” - From Ask Google to recrawl your URLs
“Relevancy is determined by over 200 factors, one of which is the PageRank for a given page. PageRank is the measure of the importance of a page based on the incoming links from other pages.” - From How Google Search Works
“Markup should not be used to hide content not visible to users in any form, since it might create a misleading or deceptive search experience.” - From Structured data quality guidelines