Rich Snippets Publisher errors
-
Hi all.
I happened to be doing a bit of testing with some of our microformat and microdata markup when I noticed that our linked Google+ publisher markup has stopped working.
It definitely was working, and nothing's changed, but now we are flagging errors, and I've noticed some of our competitors also have the same problem.
publisher
linked Google+ page = https://plus.google.com/103929635387487847550
Error: This page does not include verified publisher markup. Learn more.
If I actually add a duplicate rel="publisher" then I get the following results:
Extracted Author/Publisher for this page
publisher
linked Google+ page = https://plus.google.com/103929635387487847550
Error: This page does not include verified publisher markup. Learn more.
publisher
linked Google+ page = https://plus.google.com/103929635387487847550/
The second line doesn't seem to flag an error?
I know this is still all pretty new, so is anyone else having problems or odd results, or is Google having some problems?
All our other rich snippets, such as reviews, are working fine; it just seems to be the publisher bit.
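For reference, the kind of markup being tested is just a link element in the page head pointing at the Google+ page. A minimal sketch (the exact markup on our site isn't shown here, so treat this as an illustration; the href is the Google+ page from the testing-tool output above):

```html
<!-- Hypothetical rel="publisher" markup in the page head -->
<head>
  <link rel="publisher" href="https://plus.google.com/103929635387487847550" />
</head>
```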
cheers
Steve
-
I'm trying to go the publisher route, i.e. business pages, not author.
Interesting: after sitting and doing nothing, it now reports as verified! All very odd...
Extracted Author/Publisher for this page
publisher
linked Google+ page = https://plus.google.com/103929635387487847550
Verified: Publisher markup is verified for this page. Learn more.
-
I think you need to do rel="author", not rel="publisher". You are trying to get your Google+ profile connected to your blog, right? So your pic appears in search results? Pretty sure your name must link to an about page with rel="author", and that the about page must link to your Google+ profile.
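The chain described here can be sketched as follows (a hypothetical illustration; the page URL and the rel="me" on the profile link are assumptions, not something from the thread):

```html
<!-- On the blog post: the author's name links to the about page -->
<a href="/about" rel="author">Steve</a>

<!-- On the about page: a link back to the Google+ profile -->
<a href="https://plus.google.com/103929635387487847550" rel="me">My Google+ profile</a>
```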
Related Questions
-
Productontology URLs are 404 erroring; are there alternatives to denote new schema categories?
Our team QA specialist recently noticed that the class-identifier URLs via productontology are 404ing, saying "There is no Wikipedia article for (particular property)". They are even 404ing for productontology URLs that are used as examples on the productontology.org website! Example: http://www.productontology.org/id/Apple The 404 page says that the wiki entry for "Apple" doesn't exist (lol). Does anybody know what is going on with this website? This service was extremely helpful for creating additionalType categories for schema categories that don't exist on schema.org. Are there any alternatives to productontology now that these class-identifier URLs are 404ing? Thanks
Intermediate & Advanced SEO | RosemaryB -
Rich Snippets For Q&A Forums?
As Google seems to have new interest in rel="author" for articles, is there anything comparable for Q&A comments, where you have many commenters? IMHO, Google is trying to get at the quality of the author's advice via authority of some kind, especially for YMYL content. Best... Mike
Intermediate & Advanced SEO | 945011 -
Best way to fix 404 crawl errors caused by Private blog posts in WordPress?
Going over the Moz crawl error report and WMT's crawl errors for a new client site, I found 44 high-priority crawl errors = 404 Not Found. I found that those 44 blog pages were set to Private mode (WordPress theme), causing the 404 issue.
I was reviewing the blog content for those 44 pages to see why those 2010 blog posts were set to Private mode. Well, I noticed that all 44 blog posts were pretty much copied from other external blog posts. So I'm thinking the previous agency placed those pages in Private mode to avoid getting hit for duplicate content issues. All other blog posts published after 2011 looked like unique content, not scraped. So my question to all is: what is the best way to fix the issue caused by these 44 pages?
A. Remove those 44 blog posts that used verbatim scraped content from other external blogs.
B. Update the content on each of those 44 blog posts, then set them to Public mode instead of Private.
C. ? (open to recommendations)
I didn't find any external links pointing to any of those 44 blog pages, so I was considering removing those blog posts. However, I'm not sure if that will affect the site in any way. Open to recommendations before making a decision...
Thanks
Intermediate & Advanced SEO | SEOEND -
Recovering from robots.txt error
Hello, a client of mine is going through a bit of a crisis. A developer (at their end) added Disallow: / to the robots.txt file. Luckily the SEOmoz crawl ran a couple of days after this happened and alerted me to the error. The robots.txt file was quickly updated, but the client has found that the vast majority of their rankings have gone. It took a further 5 days for GWMT to register that the robots.txt file had been updated, and since then we have "Fetched as Google" and "Submitted URL and linked pages" in GWMT. In GWMT it is still showing that the vast majority of pages are blocked in the "Blocked URLs" section, although the robots.txt file below it is now OK. I guess what I want to ask is: What else can we do to recover these rankings quickly? What timescales can we expect for recovery? More importantly, has anyone had any experience with this sort of situation, and is full recovery normal? Thanks in advance!
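For clarity, the difference between the accidental file and a corrected one is a single character. A sketch (assuming the site simply wants everything crawlable):

```text
# The accidental robots.txt -- "Disallow: /" blocks every URL for every crawler:
User-agent: *
Disallow: /

# A corrected robots.txt -- an empty Disallow value blocks nothing:
User-agent: *
Disallow:
```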
Intermediate & Advanced SEO | RikkiD22 -
GWT Crawl Error Report Not Updating?
GWT's crawl error report hasn't updated for me since April 25. Crawl stats are updating normally, as are robots.txt and sitemap accesses. Is anyone else experiencing this?
Intermediate & Advanced SEO | tonyperez -
Robots.txt error
I currently have this in my robots.txt file:
User-agent: *
Disallow: /authenticated/
Disallow: /css/
Disallow: /images/
Disallow: /js/
Disallow: /PayPal/
Disallow: /Reporting/
Disallow: /RegistrationComplete.aspx
WebMatrix 2.0
On Webmaster Tools > Health Check > Blocked URLs, I copy and paste the above code, then click on Test, and everything looks OK. But when I log out and log back in, I see the code below under Blocked URLs:
User-agent: *
Disallow: /
WebMatrix 2.0
Currently, Google doesn't index my domain and I don't understand why this is happening. Any ideas? Thanks
Seda
Intermediate & Advanced SEO | Rubix -
Duplicate Content Error because of passed-through variables
Hi everyone... When getting our weekly crawl of our site from SEOmoz, we are getting errors for duplicate content. We generate pages dynamically based on variables we carry through the URLs, like:
http://www.example123.com/fun/life/1084.php
http://www.example123.com/fun/life/1084.php?top=true
i.e., ?top=true is the variable being passed through. We are a large site (approx. 7,000 pages), so obviously we are getting many of these duplicate content errors in the SEOmoz report. Question: are the search engines also penalizing for duplicate content based on variables being passed through? Thanks!
Intermediate & Advanced SEO | CTSupp -
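One common way to handle parameter variants like this (a suggestion, not something stated in the thread) is a canonical link on both URLs pointing at the parameter-free version, so the engines consolidate them:

```html
<!-- Served on both /fun/life/1084.php and /fun/life/1084.php?top=true -->
<link rel="canonical" href="http://www.example123.com/fun/life/1084.php" />
```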
Crawling error or something else that makes my page invisible (simple problem, not solved yet)
Hi, my problem isn't solved and nobody has been able to answer my question: why isn't my page poltronafraubrescia.zenucchi.it indexed for the keyword "poltrona frau Brescia"? The same page on another domain was fourth in the ranking results... And now it redirects to the new one... Can you explain how to proceed? I trust you... Help me...
Intermediate & Advanced SEO | guidoboem