"Ghost" errors on blog structured data?
-
Hi,
I'm working on a blog, and its Search Console account is reporting a large number of errors in its structured data:
But when I run the pages through https://developers.google.com/structured-data/testing-tool/ it tells me everything is OK:
Any clue?
Thanks in advance,
-
Hi Everett,
Yes it seems that this is the way.
Thanks a lot.
-
Yes it is.
Well, it's a Magento site with a WordPress blog.
Thank you very much
-
Webicultors,
Read this thread on Google's Product Forums. Let us know if it answers your question. If not, at least you're not alone...
After reading several similar threads on various forums and Q&A sites, it appears this is a very common occurrence resulting from a disparity between what the two tools treat as an "error": the testing tool seems to be limited to errors in syntax/markup, while GSC may also count missing elements as errors.
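To make the distinction concrete, here is a minimal sketch in Python. The `EXPECTED` set is hypothetical — the exact properties Search Console wants depend on the schema.org type it is reporting on — but the two-level check mirrors the difference described above: a syntax validator only cares that the markup parses, while Search Console may still flag properties that are absent.

```python
import json

# Hypothetical property set for illustration; the real list depends on
# the schema.org type Search Console is reporting on.
EXPECTED = {"headline", "image", "datePublished", "author"}

def check_markup(raw_jsonld):
    """Return (syntax_ok, missing) for a JSON-LD blob.

    A pure syntax validator only cares that the first check passes;
    Search Console may still report the properties in `missing`.
    """
    try:
        data = json.loads(raw_jsonld)
    except ValueError:
        return False, set()
    return True, EXPECTED - set(data)

# Syntactically valid, so a markup tester says "all OK" --
# but two expected properties are missing:
ok, missing = check_markup(
    '{"@type": "Article", "headline": "Hi", "image": "x.png"}'
)
```

Both tools look at the same markup; they just stop at different checks.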
-
Hi,
I've just added a couple more screenshots to show that I've already checked the information you're referring to.
But the test tool keeps telling me: All OK
Thanks
-
Like Kristen mentions, you should be able to see an overview of the Schema.org implementations along with the number of errors each has individually. That's why you're not seeing any issues on the one URL you were testing. In the list you should easily be able to identify the pages with problems.
Related Questions
-
4XX client error
I am a bit confused... my recent site crawl told me I had one 4XX client error (high priority). This is the page: http://www.seadwellers.com/wp-content/uploads/2014/06/367679d2+0+277-SD.mp4
This link below is listed as the "linking page"... I guess that's where the link comes from? http://www.seadwellers.com/category/dive-travel/
I'm just not getting this... where did the page in the first link come from, and what is the deal with the /category/dive-travel/ page? And how do I fix it? Any guidance would be greatly appreciated.
Technical SEO | sdwellers
-
Handling "legitimate" duplicate content in an online shop.
The scenario: An online shop selling consumables for machinery. Consumable range A (CA) contains consumables w, x, y, z. The individual consumables are not a problem; it is the consumables groups I'm having trouble with.
The problem: Several machines use the same range of consumables, i.e. the Machine A (MA) consumables page contains the list (CA) with contents w, x, y, z, and the Machine B (MB) consumables page contains exactly the same list (CA) with contents w, x, y, z. So Machine A page = Machine B page = Consumables range A page.
Some people will search Google for the consumables by the range name (CA). Most people will search by individual machine (MA consumables, MB consumables, etc.). If I use canonical tags on the machine consumable pages (MA + MB) pointing to the consumables range page (CA), then I'm never going to rank for the machine pages, which would represent a huge potential loss of search traffic. However, if I don't use canonical tags, all the pages get slammed as duplicate content.
For somebody who owns Machine A, a page titled "Machine A consumables" with the list of consumables is exactly what they are looking for, and it makes sense to serve it to them in that format. However, for somebody who owns Machine B, it only makes sense for the page to be titled "Machine B consumables", even though the content is exactly the same.
The question: What is the best way to handle this from both a user and a search engine perspective?
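For reference, the cross-page canonical being weighed up above would look like this in the `<head>` of one of the machine pages (the URLs are hypothetical, purely to illustrate the mechanism):

```html
<!-- In the <head> of the hypothetical /machine-a-consumables page,
     pointing consolidation at the range page -->
<link rel="canonical" href="https://www.example.com/consumables-range-a" />
```

The trade-off described in the question is exactly this: the tag consolidates ranking signals onto the range page, so the machine page carrying it generally drops out of the results.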
Technical SEO | Serpstone
-
Acquiring a blog
Hello all, I've recently acquired somebody else's blog and have redirected every post to the relevant page of my website (madegood.org). The content is the same as on the original site, and I have used 301 redirects. The original blog didn't have a particularly high PageRank. I'm slightly worried that there are now thousands of links coming from one domain, which itself doesn't have much authority. Is there a way I can tell Google that I've acquired the blog, as opposed to just having lots of links from one domain? Thanks, Will
Technical SEO | madegood
-
Jigoshop "add to cart" producing 302 redirects
Hi, my site is throwing thousands of 302 redirect warnings on crawl for the add-to-cart process in my WordPress/Jigoshop online store. A sample URL the crawl references is: https://morrowsnuts.com/product/the-best-of-the-best-8-oz/?add-to-cart=6117&_n=9773652185
I have read several other threads here that are similar in nature but haven't discovered a way to eliminate this. I am a store owner with only partial technology skills, and I don't know what to try next. I posted the problem with Jigoshop but I am not sure if they will provide a solution, since this was the first time they had heard of it. The site is Morrow's Nut House, located at https://morrowsnuts.com. Thanks in advance for any direction or suggestions on next steps, John
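One common mitigation (a sketch, not Jigoshop-specific advice) is to keep crawlers away from the add-to-cart URLs altogether, so the 302s are never requested in the first place. The pattern rules below are an assumption about how these URLs are structured, based only on the sample URL in the question; verify them with a robots.txt tester before relying on them:

```
# robots.txt -- hypothetical rules blocking crawl of add-to-cart URLs,
# whether the parameter comes first (?) or later (&) in the query string
User-agent: *
Disallow: /*?add-to-cart=
Disallow: /*&add-to-cart=
```

This stops compliant crawlers from following the redirecting URLs; it does not remove the redirects themselves, which remain in place for real shoppers.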
Technical SEO | MorrowsCandyMan
-
"Products 1-20" text in the Serp Results
We have an e-commerce site (Zen Cart) and we use our category pages (which list the products) as landing pages. In the SERPs our listing shows up like this:
Our Page Title
www.link.com
Rich snippet stuff — Products 1 - 40 of 93 - Meta description text
I just wanted to know where it's getting the "Products 1 - 40 of 93" from, and whether it can be removed (if we wanted to). On the landing page it says "Displaying 1 to 40 (of 93 products)", but I looked in the source and it does not say "Products 1 - 40 of 93" anywhere, so Google must be generating that text. I have noticed other Zen Cart sites have the same text, and other e-commerce sites have something similar, like "20+ Products".
Technical SEO | eunaneunan
-
Use webmaster tools "change of address" when doing rel=canonical
We are doing a "soft migration" of a website (actually it is a merger of two websites). We are using cross-site rel=canonical tags instead of 301s for the first 60-90 days. These have been done on a page-by-page basis for the entire site. Google states that a "change of address" should be done in Webmaster Tools for a site migration with 301s. Should this also be done when we are doing this soft move?
Technical SEO | EugeneF
-
How many steps before a 301 redirect chain becomes a "bad thing"?
OK, so I am not going to worry now about being a purist with the .htaccess file; I can't seem to redirect the old pages without redirect errors (the project is an old WordPress site moving to a redesigned WP site). The new site has a new domain name, and none of the pages (except the blog posts) are the same. I installed the Simple 301 Redirects plugin on the old site and it's working (the Redirection plugin looks very promising too, but I got a warning it may not be compatible with the old unsupported theme and older version of WP).
Now my question, using one of the redirect examples (I need to know this for my client, who is an internet marketing consultant, so this is going to be very important to them): using Redirect Checker, I see that http://creativemindsearchmarketing.com/blog 301-redirects to http://www.creativemindsearchmarketing.com/blog, which then 301-redirects to the final permanent location of http://www.cmsearchmarketing.com/blog.
How is Google going to perceive this two-step process? And is there any way to get both the "non-www old address" and the "www old address" to redirect to the final permanent location without going through this two-stepper? Any help is much appreciated. _Cindy
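One way to collapse both hops into a single 301 is a host-agnostic rewrite on the old domain. The sketch below uses the domain names from the question, but the rule itself is an assumption about the server setup (Apache with mod_rewrite available) and would need testing on the actual host:

```apache
# .htaccess on the old domain -- match the host with or without "www."
# and send the request straight to the final domain in one 301,
# preserving the request path.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?creativemindsearchmarketing\.com$ [NC]
RewriteRule ^(.*)$ http://www.cmsearchmarketing.com/$1 [R=301,L]
```

Because the condition matches both host variants, neither the www nor the non-www old address has to pass through an intermediate redirect.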
Technical SEO | CeCeBar
-
Different version of site for "users" who don't accept cookies considered cloaking?
Hi, I've got a client with lots of content that is hidden behind a registration form; if you don't fill it out, you cannot proceed to the content. As a result it is not being indexed. No surprises there. They are only doing this because they feel it is the best way of capturing email addresses, rather than because they need to "protect" the content.
Currently, users arriving on the site are redirected to the form if they have not previously had a "this user is registered" cookie set. If the cookie is set, they aren't redirected and get to see the content.
I am considering changing this logic to redirect users to the form only if they accept cookies but haven't got the "this user is registered" cookie. The idea is that search engines would then not be redirected and would index the full site, not the dead-end form. From the client's perspective this would mean only very few non-registered visitors would "avoid" the form, yet search engines are arguably not being treated as a special case.
So my question is: would this be considered cloaking / put the site at risk in any way? (They would prefer not to go down the First Click Free route, as this would lower their email sign-ups.) Thank you!
Technical SEO | TimBarlow