Long title problem
-
I'm getting an incredible number of 4xx errors and long titles from a small website (northstarpad.com): over 13k 4xx errors and almost 20k "title element is too long" warnings. The numbers keep climbing, but the site shouldn't have more than a couple hundred pages. When I look at the 4xx errors, they're clearly being generated by some program, since the URLs contain multiple repeating keywords.
Here's an example:
I looked at the FTP files and plugins and couldn't see anything that could cause it, but I'm a beginner, so no surprise there. Any suggestions on where to look or how to fix this?
-
Thanks!
-
I can point you towards the best places online to find WordPress developers:
- https://clarity.fm/browse/technology/wordpress
- https://www.odesk.com/o/profiles/browse/skill/wordpress/
- http://premium.wpmudev.org/blog/find-wordpress-developer-designer/
Try those!
-
Hello Dan,
I've sent your message to a few web developers who apparently don't have a clue. Any suggestions on someone who could fix this?
Thanks!
-
Thanks a bunch Dan! I'm still clearly a neophyte, but am learning as I go so I appreciate your detailed responses. I'll get someone to work on the URLs right away.
It's also good to know Screaming Frog's limitations for the freebie.
-
I just wanted to clarify that the SEO plugin has nothing to do with this; turning All in One SEO on or off probably won't fix anything.
Either you're on the free version of Screaming Frog, which limits crawls to 500 URLs, or you need to adjust your crawl settings - my crawl was definitely heading towards the 57k URLs Moz reported.
-
The root of your issue is that some links are coded incorrectly
--> http://screencast.com/t/ndeKw3PL
which results in infinite crawling of pages that don't really exist, and thus the same duplicate/long title tags.
For example, this page is a good URL: http://northstarpad.com/category/business-portrait-metro-detroit/
But as shown in my screenshot, the "Pet Photography" image links to http://northstarpad.com/category/business-portrait-metro-detroit/pet-photography// which is a bad URL, and NOT to http://northstarpad.com/pet-photography/ which is where it should link.
Essentially, your links should be "absolute" URLs (which include the full path from the domain root), not "relative" ones
--> http://screencast.com/t/koL5QX9B
You'll need to pass this to a web dev who knows how to edit your WordPress theme files.
-
I crawled the site with Screaming Frog and it only showed 475 pages instead of the 57k+ from Moz, and none of them had 404 errors. Is it possible that Moz is crawling our WordPress pages? I looked in our settings for a way to exclude that URL, but couldn't see where to do it.
We'll also try deleting/reinstalling the All in One SEO pack to see if that helps.
-
Thanks a bunch Nakul! I'll let you know if any of your suggestions help me find the problem.
-
Looks like a problem with WordPress. Are you using any SEO plugins like Yoast? I'd suggest reaching out to your WordPress guy and having them look into it. It's clearly an issue, and needs to be fixed. You can also download Screaming Frog SEO Spider and crawl your website with it to see where Google is finding these links. It could also be an issue with your theme or permalinks. Those are the only areas I can think of. You are using the latest version of WordPress, so you are good from that point of view. So it has to be a plugin, theme, or permalink settings issue.
Related Questions
-
Why does my 301 show the old URLs with new descriptions and titles?
Hi all, we've just rebranded. The 301 appears to have worked well and moved the results and rankings onto the new domain. However, a site:olddomain.com search in Google brings up about a hundred pages that have the new titles and descriptions but show the old URLs. Does anyone have any idea how to make the old domain disappear from the SERPs? Many thanks, Richard
Technical SEO | panini
-
Google disavow tool (how long does it take?)
Hello, I disavowed some of my links about three months ago but still see them in my link profile using OSE. How long does it take for Google to make them nofollow? Thanks
Technical SEO | mezozcorp
-
Are W3C Validators too strict? Do errors create SEO problems?
I ran an HTML markup validation tool (http://validator.w3.org) on a website. There were 140+ errors and 40+ warnings. IT says "W3C Validators are overly strict and would deny many modern constructs that browsers and search engines understand." What a browser can understand and display to visitors is one thing, but what search engines can read has everything to do with the code. I ask this: if the search engine crawler is reading through the code and comes upon an error like this:

…ext/javascript" src="javaScript/mainNavMenuTime-ios.js"> </script>');}

The element named above was found in a context where it is not allowed. This could mean that you have incorrectly nested elements -- such as a "style" element in the "body" section instead of inside "head" -- or two elements that overlap (which is not allowed). One common cause for this error is the use of XHTML syntax in HTML documents. Due to HTML's rules of implicitly closed elements, this error can create cascading effects. For instance, using XHTML's "self-closing" tags for "meta" and "link" in the "head" section of an HTML document may cause the parser to infer the end of the "head" section and the beginning of the "body" section (where "link" and "meta" are not allowed; hence the reported error).

And this one, which triggers the same validator explanation:

…t("?");document.write('>');}

Does this mean that the crawlers don't know where the code ends and the body text begins, and what it should be focusing on and not?
Technical SEO | INCart
-
Could changing broken links and anchor text be a problem?
Hi, could changing broken links and anchor text be a problem? We have 80K pages with about 40K links created over the last few years. Since last month we have been updating pages whose content is old and whose links are broken, and changing the anchor text in those posts. Anchor text like "Click me", "Here", "Download", "link", etc. is being changed to meaningful words. In total it's close to 10K link replacements and 10K anchor-text changes. Since starting this last month, we have seen a slight decrease in daily traffic. Is this something Google would consider a kind of wrong webmaster activity, or is it just fine? Thanks
Technical SEO | mtthompsons
-
Thesis Theme (Nofollow, noindex) Problem
Hi, I'm using the Thesis theme for one of my WordPress websites. For some reason, some of my pages are "noindex" and "nofollow" even though I have those boxes unchecked. Does anybody know the solution to that? Thanks
Technical SEO | KentR
-
Roger has detected a problem
SEOmoz says Roger has detected a problem: "We have detected that the domain www.romancebookstore.com.au does not respond to web requests. Using this domain, we will be unable to crawl your site or present accurate SERP information." What is wrong with this domain?
Technical SEO | damientown
-
Understanding Duplicate Titles in Wordpress
I have duplicate title errors in WordPress and I cannot pinpoint the problem. I have my blog set up so that the home page of the blog shows the most recent posts. Somehow my campaign report is finding the page directory, and I can't find any links on my blog to those pages. The errors are on pages like the following: www.example.com/blog/page/13/ and www.example.com/blog/page/14/. I am using Yoast, and I thought I had it set up correctly. The other pages have the correct title and canonical tags, but the URLs ending with /page/ do not; the page directory is duplicating the home page title. How or where can I fix this issue?
Technical SEO | hfranz
-
Product ratings causing 302 redirect problem
I am working on an ecommerce site, and my crawl report came back with 7000+ 302 redirects, maxing out at 10,000 pages because of all the redirects. The site really only has maybe 1500 pages (dynamic content aside). Looking into it a little more, I see it is caused by the product rating system. They have a star rating system that looks a bit like Amazon's. The only problem is that each star is a link to a dynamic address that records the vote and then 302s back to the page the vote was cast from. So virtually every page on this site links out anywhere from 15 to 45 times and 302s back to itself, losing virtually all of its PR. Am I correct in that assumption, or am I missing something? I don't see the links being blocked by robots.txt or noindexed/nofollowed. It is also an anonymous rating system, where a rating can be cast from any category page displaying a product or from any product page. To make matters worse, every page links to a printable version, which duplicates the issue by repeating the whole thing over again.

So, assuming I am correct that this site has a major PR leak on virtually every page, what is the best way to fix it? 1. Block all of those links in robots.txt, 2. noindex/nofollow these links, 3. put the rating system behind a submit button or disallow anonymous ratings, or 4. something else?

Looking at the product ratings on the site, virtually everything is rated between 2 and 3 stars out of 5 with about the same number of votes (fewer votes on deeper pages). I don't believe this is real at all, since this site gets almost no traffic and maybe one sale a week; there is no way any product has been rated 50 times. I think the crawler is voting as it crawls, doing it 5 times for every product, which is why everything is rated about 2.5 out of 5. This is an X-Cart site, in case anyone cares. Any suggestions?
Technical SEO | BlinkWeb
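If the robots.txt route (option 1) is taken, the rule can be sanity-checked with Python's standard robot parser. The /vote.php path below is a hypothetical placeholder for whatever URL pattern the X-Cart rating links actually use:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rule blocking the dynamic vote-recording URLs.
rules = """\
User-agent: *
Disallow: /vote.php
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Rating links would no longer be fetched by well-behaved crawlers...
print(rp.can_fetch("*", "http://example.com/vote.php?product=42&stars=5"))  # False
# ...while normal product pages remain crawlable.
print(rp.can_fetch("*", "http://example.com/product/blue-widget.html"))  # True
```

Note that robots.txt only hides the links from crawlers; putting votes behind a submit button (option 3) removes the crawlable GET links entirely, which also stops a crawler from casting votes as it crawls.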