How do you remove Authorship photos from your homepage?
-
Suppose you have a website with a blog on it, and you show a few recent blog posts on the homepage. Google sees the headline plus a "by Author Name" byline and associates that author's Google+ profile with the snippet.
This is great for the actual blog posts, but how do you prevent it from happening on the homepage or other blog-roll pages?
-
I have a similar issue. For whatever reason, Google has decided our CEO (Glen Kelman) is the 'author' of some of our site pages. There is no author markup on the page anywhere. In fact, our CEO's name isn't anywhere on the page. Yet, in SERPs, he is the 'author' of our Seattle market page (you can likely see it by searching for 'seattle real estate' and looking for Redfin in the results).
Glen is a prolific blogger who not only posts to the Redfin blog but also guest blogs on high-profile sites around the web, so it stands to reason that Google is very 'familiar' with him as an author. Moreover, he lives in Seattle, so maybe Google is thinking, "Glen is from Seattle...he's the CEO of Redfin...he's a prolific author...Glen + Seattle + Redfin + Author = Glen is the author of the Seattle market page on Redfin!"
Any ideas on how to stop Google from making this mistake?
-
Hi Tom, thanks for the response but that doesn't work.
There is no link to a Google+ profile on this page - the author, though, is verified at the domain level, and the page includes a "by" byline, which seems to be what's causing this.
Any other thoughts?
-
Hi Stephen
Basically, all you need to do is make sure that the rel=author code is not in the <head> tag of that page.
The code will look something like <link rel="author" href="https://plus.google.com/112656687930780652496"/> but obviously with the G+ profile URL that you are talking about.
If that code isn't on the page, then Google will not verify the page as marked by an author.
If you've gone a different way and linked with an actual anchor on the page, like <a href="[your G+ profile URL]?rel=author">Name here</a> - again, all you need to do is make sure that this link isn't present on the page, and the authorship markup won't be attributed to that page.
Hope this helps.
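If you want to double-check which templates still carry either form of the markup described above, a quick script can flag them. This is just an illustrative sketch using Python's standard-library HTML parser (the class and function names here are my own, not from any Moz tool) - it looks for `<link rel="author">` tags and for anchors whose `rel` attribute or `?rel=author` query string marks them as author links:

```python
from html.parser import HTMLParser

class AuthorshipScanner(HTMLParser):
    """Collects hrefs from the two kinds of Google+ authorship markup:
    <link rel="author" href="..."> in the head, and <a> elements whose
    rel attribute contains "author" or whose href carries ?rel=author."""

    def __init__(self):
        super().__init__()
        self.author_links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        href = attrs.get("href") or ""
        rel = attrs.get("rel") or ""
        if tag in ("link", "a"):
            if "author" in rel.split() or "rel=author" in href:
                self.author_links.append(href)

def find_authorship_markup(html):
    """Return every authorship-style link found in the given HTML."""
    scanner = AuthorshipScanner()
    scanner.feed(html)
    return scanner.author_links

# Example: a page that still carries both forms of the markup
page = """
<html><head>
<link rel="author" href="https://plus.google.com/112656687930780652496"/>
</head><body>
<p>by <a href="https://plus.google.com/112656687930780652496?rel=author">Author Name</a></p>
</body></html>
"""
print(find_authorship_markup(page))
```

Run it against the rendered source of your homepage template; any page where the list comes back non-empty is one Google can still tie to a profile.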