Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies. More details here.
Warnings, Notices, and Errors - don't know how to correct these
-
I have been watching my Notices, Warnings and Errors increase since I added a blog to our WordPress site. Is this affecting our SEO? We now have the following:
2 4XX errors. One is for a page whose title and nav we changed in mid-March, and one is for a page we removed. The nav on the site is working as far as I can see. This seems like a cache issue, but who knows?
20 warnings for “missing meta description tag”. These are all blog archive and author pages. Some have resulted from pagination and are “Part 2, Part 3, Part 4” etc. Others are the first page for authors. And there is one called “new page” that I can’t locate in our Pages admin and have no idea what it is.
5 warnings for "title element too long". These are also archive pages whose titles include the blog name, plus the "Part 2" pages and so on, so I can't access them through the admin to control the page title.
71 Notices for "Rel Canonical". The rel canonicals are all being generated automatically and are for pages of all sorts. Some are content pages within the site, a bunch are blog posts, and the rest are archive pages for dates, blog categories, and pagination.
6 are 301s. These are split between blog pagination, author pages, and a couple of site content pages - contact and portfolio. Can't imagine why these are here.
8 are meta-robots nofollow. These are blog articles, but only some of the posts. Don't know why we are generating this for some and not all. And half of them are for the exact same page, so there are really only 4 originals on this list. The others are dupes.
8 are "Blocked by meta-robots", also for the same 4 blog posts, duplicated twice each.
We use All in One SEO. There is an option to noindex archives and categories, which I do not have enabled, and an option to autogenerate descriptions, which I also do not have enabled.
I wasn't concerned about these at first, but I read the questions/pages linked below yesterday, and now think I'd better do something as these are mounting up. I'm wondering if I should be asking our team for some code changes, but I'm not sure what exactly would be best.
http://www.seomoz.org/q/pages-i-dont-want-customers-to-see
http://www.robotstxt.org/meta.html
Our site is http://www.fateyes.com
Thanks so much for any assistance on this!
-
Thanks so much, Mike. Good to know I can let this go and I've done my due diligence with checking it all out.
I wish our WP would always create the 301s automatically when needed, but it doesn't seem to. I just installed the Redirection plugin today for a URL change I wanted to make.
-
You don't need to really worry or stress about the missing meta descriptions and long titles.
Meta descriptions do not impact your rankings and Google will automatically create a description for your page if it appears in the SERPs.
Title tags that are too long do not impact your rankings... at least not directly. If your title tag is over by 10 or even 20 characters, it will not impact whether your page ranks or not. The 70-character limit is a suggestion, as that was the number of characters that would display in the SERPs; however, truncation is now based on pixel width. The only other important thing to know about titles is to put your most important keywords towards the beginning of the title.
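The check above can be sketched in a few lines. This is a rough approximation (an assumption, not Moz's actual rule, and the cutoff is illustrative): it flags titles over a character limit, while Google actually truncates by pixel width, so character count is only a proxy.

```python
# Rough sketch: flag page titles likely to be truncated in the SERPs.
# The 70-character cutoff is an assumption used as a proxy; real
# truncation is by pixel width, not character count.

def flag_long_titles(titles, max_chars=70):
    """Return the titles longer than the character cutoff."""
    return [t for t in titles if len(t) > max_chars]

# Hypothetical example titles, not taken from the site in question.
titles = [
    "Web Design Santa Barbara | Fat Eyes",
    "Blog Archives Part 2 | A Very Long Automatically Generated Archive Page Title Here",
]
for t in flag_long_titles(titles):
    print(f"too long ({len(t)} chars): {t}")
```

Even when a title is flagged, the point stands that a few extra characters won't make or break the page's rankings.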
If you are unsure about how, or are unable, to edit these pages to add or edit the description and title, it isn't going to make or break your site from a ranking standpoint.
Some CMS will automatically generate 301s if you edit a URL's structure. It does this so that any old links pointing to the old URL will be brought to the edited URL. The CMS will not fix broken links that point to the old URL, but on the server side, if someone clicks on an old, broken link, they will be brought to the edited URL page - if that makes sense.
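The server-side behavior described above can be sketched like this. The paths are made-up examples, not URLs from the site in question; a real CMS does this inside its routing layer, but the logic is the same:

```python
# Sketch of CMS-generated 301s: when a URL's slug changes, the CMS
# keeps a map of old -> new paths and answers any request for the old
# path with a 301 pointing at the new one. Paths here are hypothetical.
REDIRECTS = {"/old-portfolio/": "/portfolio/"}

def respond(path):
    """Return (status, location) the way a CMS-generated 301 would."""
    if path in REDIRECTS:
        return 301, REDIRECTS[path]
    return 200, path

print(respond("/old-portfolio/"))  # (301, '/portfolio/')
print(respond("/portfolio/"))      # (200, '/portfolio/')
```

So an old bookmarked or linked URL still lands the visitor on the right page, even though the broken link itself never gets rewritten.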
I understand that you want to attack warnings and notices and get things perfect; however, sometimes it just isn't possible. Whether it is a CMS issue or knowing how to fix something complex - what does matter is that you investigate each warning and notice and make sure that it is not negatively impacting your site. From the sounds of it, the handful of warnings and notices you have are just fine.
Hope this helps answer your question.
Mike
-
I'm really sorry to be confusing! It's hard to find the precise language for stuff when you don't really understand it well enough. ;o) I really appreciate that you have stuck with this and are trying to understand my concerns.
Pasted here from my last comment: "I was saying that the metarobots/nofollows were for blog posts, but in looking again, I am realizing these are blog post Comments and Replies, so I understand why WP would automatically put the noindex/nofollow on those. I typo-ed and put "robots" instead of "index". Sorry!"
So, in other words, I found that the noindex/nofollows that SEOMoz is reporting are for the blog comments which means all is well on those. I don't want Google to index comments and my replies to comments.
I'm going to see if I can ask my other question more clearly:
What I am still trying to determine is how to cut down on the number of notices and warnings by fixing or changing the conditions that are causing them.
I do not know what to do programming-wise to either create the "missing" meta descriptions or fix the too-long title tags for the archive and author type pages that are generating those notices and warnings. I don't know whether to use noindex, nofollow, or block robots so that they won't matter.
I also don't know how/where the 301s were generated as we did not implement those manually or knowingly.
I hope this is better said and more understandable. Crossed fingers as I push "Post Reply".
-
I don't completely understand where you are saying the noindex/nofollow is located. If both are in the head, it applies to the whole page; however, "nofollow" can be used specifically for links (in most cases blog comments).
The easiest thing you can do is ask yourself, "Do I want this page to be indexed by Google?" If no, then you want to use the noindex directive; however, if you want the page indexed, you will want to make sure you are not using the noindex directive.
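That decision is exactly what a crawler reads out of the meta robots tag. Here is a simplified sketch of that check; real engines honor more directives (none, noarchive, etc.), and this only looks at noindex/nofollow:

```python
# Simplified sketch: parse the meta robots tag from a page's head and
# report whether the page is indexable and whether its links pass value.
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            content = a.get("content", "")
            self.directives |= {d.strip().lower() for d in content.split(",")}

# Hypothetical page snippet, not markup from the site in question.
page = '<html><head><meta name="robots" content="noindex, nofollow"></head></html>'
parser = RobotsMetaParser()
parser.feed(page)
print("indexable:", "noindex" not in parser.directives)      # indexable: False
print("links followed:", "nofollow" not in parser.directives)  # links followed: False
```

If you want the page indexed, just make sure no such noindex directive ends up in its head.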
As far as nofollows are concerned, those can/should be used for blog comments. Nofollow can be used in other instances, but it generally isn't a tag that you throw around much.
This Matt Cutts article talks about how the nofollow directive works in relation to link juice... it is worth a read.
Hope this answers your question Gina.
Mike
-
Thanks much, John! And Mike!
404s:
These are now fixed. Thanks, Mike, for finding them. I tried to subscribe to Screaming Frog awhile back and had a roadblock due to my system (older MacBook Pro and I can't update the OS any further).
Blog Archives:
I have wanted to use archive pages as alternate ways a user can find posts. I tend to like those on other blogs. But thank you for the article link. I look forward to reading that.
I am happy to hear the duplicated descriptions on archive pages are OK. I'm guessing you mean the post excerpt with the thumbnails? But I don't quite understand why SEOMoz is telling me that I am missing descriptions then, AND I don't know how to access archive pages to insert meta descriptions onto them. Or author pages, for that matter.
301's:
We did not implement 301s and I don't have a clue as to why they are there, except that I changed the name of the Gina Fiedel page. So I guess WP automatically created a 301?? That seems odd. And for the others, I have no idea. They are author pages generated from the User page in the admin, and one is our website contact page with an inquiry form.
Noindex/nofollow: "These are blog articles but only some of the posts. Don't know why we are generating this for some and not all. And half of them are for the exact same page so there are really only 4 originals on this list. The others are dupes."
What the heck did I mean by that? Just kidding- I figured it out. I was saying that the metarobots/nofollows were for blog posts, but in looking again, I am realizing these are blog post Comments and Replies, so I understand why WP would automatically put the noindex/nofollow on those. I typo-ed and put "robots" instead of "index". Sorry!
Mike- I am still wondering which tag(s) is/are recommended for the notices and warnings. I'm not sure what to request from our programming team on this.
Again! Thank you both for all the time you've spent on this. So grateful.
-
Screaming Frog - I usually wait for SEOmoz or Webmaster Tools to identify issues, then use Screaming Frog to verify that I have fixed them. It is a great tool and FREE if your site is under 500 pages.
Here are the SEOmoz definitions of the other warnings you are talking about:
"Meta Robots Nofollow - When the meta robots tag for a page includes 'nofollow', no link juice is passed on through the links on that page.
Blocked by Meta Robots - This page is being kept out of the search engine indexes by meta-robots."
I am guessing someone put a tag like this in the head of those blog posts:
<meta name="robots" content="noindex, nofollow">
It is just telling Google not to index the page and not to pass PageRank or anchor text for any links on that page.
Typically the "nofollow" is used in blog comments, so commenters cannot provide links back to their personal websites.
"noindex" shouldn't have any affect on rankings. It is just telling Google that certain pages are not worth putting in their index (copyright, terms of use, etc.).
"nofollow" links if not implemented correctly can look kind of spammy to Google, but in most cases you should be fine.
Does that help?
Mike
-
Hi Gina -
Great questions here. Some of these you should worry about, others are just notices and not necessarily an issue.
Fix the 4XX errors if those pages have links, or have a 404 page that redirects users. 404s are not always bad, but if the user isn't supposed to end up there (ie your product page is expired), then redirect.
Don't worry about the duplicated meta descriptions on archive pages, but do think about if these pages are needed. Ayima had a good post on pagination recently - http://www.ayima.com/seo-knowledge/conquering-pagination-guide.html
Same as above with the title tags on paginated archives.
Rel-canonicals are fine. Once again, just notices that they are there.
Did you implement those 301s? Moz notifies you of them because they might pass less link equity than straight links, but 301s are not bad.
What do you mean by "These are blog articles but only some of the posts. Don’t know why we are generating this for some and not all. And half of them are for the exact same page so there are really only 4 originals on this list. The others are dupes." It seems that this may have been implemented manually on your side, though I don't know how All In One SEO Pack handles it (I use Yoast).
-
Thanks, Mike.
I agree about 404s! Thank you for locating those. Interestingly, the 404s that SEOMoz is picking up are the ones I was guessing are cached, because those were fixed within minutes of being created. What I didn't realize is that there were additional internal links to these pages within blog posts. How'd you find those?
I would like to fix things so the warnings and notices stop being generated. Can you please explain meta robots vs. noindex and how I should set those?
Since there are 8 of these meta-robots notices, how will they affect rankings?
thanks again!
-
Hi Gina,
You should try to fix any errors. Errors can impact your users' experience, as well as interfere with web crawlers and even impact your rankings.
404 errors:
- /balancing-seo-with-your-website-design/ links to /on-target-web-design-santa-barbara/ using anchor text "It is best to get a custom design
- /5-steps-to-increase-traffic-to-your-website/ links to /we-create-websites-that-bring-you-more-business/ using anchor text "increasing traffic to your website,"
Warnings are more or less a "if you have time and can, you could fix these". They really do not impact your rankings, but if you are trying to be perfect, you could fix them.
Notices are just a "heads-up". They do not impact rankings, UNLESS you are blocking robots ; )
Long story short, fix Errors, work on Warnings when you have time, verify you already knew about the Notices.
Hope this helps.
Mike