Big problem with my new crawl report
-
I am the owner of a small OpenCart online store. I installed http://www.opencart.com/index.php?route=extension/extension/info&extension_id=6182&filter_search=seo. Today my new crawl report is awful: errors are up to 520 (30 before), warnings are up to 1,000 (120 before), and notices are up to 8,000 (1,000 before). I noticed that the problem is with search; there is a lot of duplicate content in search results only. What should I do?
-
Thank you again, Alan.
Typo fixed.
-
I use the Bing search API.
By the way, you want to change from GET to POST, not the other way around.
-
Alan,
Thank you for the great advice. If one has enough control over the eCommerce system, or the internal site search product, to change from GET to POST so these pages act more like real dynamically generated "search pages" than an infinite number of "landing pages," I think that is a fantastic solution. It would keep merchandisers and others from linking to those pages, because we all know that they will continue to do it even if the SEO pleads on hands and knees for them to stop.
However, I have found it to be the case that most eCommerce businesses (from small mom-and-pop shops to Fortune 500 companies) do not have the ability to do this, because the internal site search functionality they use is out of their hands. Site search vendors like Endeca and Celebros serving enterprise eCommerce businesses don't typically hand over the keys to the client.
If you know of any site search vendors or solutions that allow one to do this, it would make a great contribution to this thread if you could share a few of them. I'd definitely look into recommending them in the future!
Thanks again!
-
The problem with PageRank leaks is that they scale. If you are losing 10%, then when you get some quality links, 10% of their value will be wasted, and every effort you make in the future will be discounted by 10%.
There are ways to fix all these problems. For example, I would make the search use POST instead of GET so that links to search pages cannot be made, and therefore search pages will not get indexed.
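As a rough sketch of what that change looks like in a generic HTML search form (the action path and field name here are placeholders, not OpenCart's actual ones):

<!-- Before: GET puts the query in the URL, so every search creates a linkable, indexable address like /search?q=widgets -->
<form action="/search" method="get">
  <input type="text" name="q">
  <button type="submit">Search</button>
</form>

<!-- After: POST sends the query in the request body, so there is no unique URL for anyone to link to or for crawlers to index -->
<form action="/search" method="post">
  <input type="text" name="q">
  <button type="submit">Search</button>
</form>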
We work so hard to get good links; why waste them once we have them?
-
I have tried different methods to fix this. First-hand experience tells me that oftentimes it is better to just block the paths from being crawled using robots.txt (assuming there is better navigation on the site) than to use a noindex,follow tag in order to save the PageRank you're sending via internal links. It is very easy for Google to get bogged down crawling around in the internal search results area.
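For reference, the two options being weighed look something like this; the /product/search path is borrowed from Anastas's setup, and the meta tag is generic HTML, not platform-specific:

# robots.txt: stops crawlers from fetching the search pages at all
User-agent: *
Disallow: /product/search

<!-- noindex,follow meta tag in the head of each search results page: lets crawlers in, keeps the pages out of the index, and passes link equity -->
<meta name="robots" content="noindex, follow">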
Unless there are lots of links to search pages from top pages on the site, or a big list of search page links on every page (a sitewide footer, for example), I really don't think the waste of internal PageRank is noticeable in the rankings, or worth salvaging if it risks sending spiders into a maze or a trap.
Yes, best practice is not to link to pages that you are blocking. In the real world, though, search pages can be very useful to visitors, and merchandisers who don't have the ability to create more targeted sub-sub-sub-categories will often use them, and link to them on the site, as landing pages for promotional purposes (emails, PPC, sales...).
Everyone has their own strategies, and all we can do is make recommendations based on our own experience and knowledge. Thanks for helping out with this question Alan. Feel free to elaborate so Anastas has more input to help guide his decision.
-
As long as no one is linking to the search pages, including internal links.
-
Hello Anastas,
I agree that you should block the search folder from being indexed. I'm going to assume that nobody is linking to your search pages and that you have other paths (e.g. SEO-friendly navigation, sitemaps...) for search engines to use to access your products.
I don't understand why you have formatted the disallow statement that way, however. Unless I'm missing something (and I could be, since I don't know what your site is), you only need to do this:
Disallow: /product/search*
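To illustrate how that rule matches (the example URLs below are hypothetical, based on the patterns you posted):

User-agent: *
Disallow: /product/search*

# Blocked:   /product/search&filter_tag=%D0%B1%D0%B8%D0%B6%D1%83%D1%82%D0%B0
# Blocked:   /product/search&filter_tag=anything-else
# Crawlable: /category1/PRODUCT-NAME

Note that robots.txt matching is prefix-based, so the trailing * is not strictly required; anything beginning with /product/search is covered either way.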
And of course after doing this you should test it in GWT to make sure that A: You are blocking the pages you want to block, such as search pages with lots of parameters, and B: You are NOT blocking other pages you don't want to block, such as product pages. Here is more info on where to find the testing tool in GWT if you don't know: http://productforums.google.com/forum/#!topic/webmasters/tbikAxJiIZ4
Let us know how it goes. Good luck.
-
Please, I need help.
-
I am using OpenCart. I don't know what to do. Before, I had 50 errors; now there are more than 500 after this plug-in. The plug-in removed the previous errors, but now there are many different ones. I have two options:
1. Remove the plug-in.
2. Do something about the new errors. The new errors come only from search: I have duplicate page content, because when you type PRODUCT NAME in the search box you get the same content as at www.mydomain.com/category1/PRODUCT NAME.
Maybe this plug-in removed the canonical URLs in search, or I don't know what.
In robots.txt there is this row:
Disallow: /*?route=product/search
The duplicate content is at mydomain.com/product/search&filter_tag=XXXXXX.
Instead of XXXXXX there are many different values.
I decided to add another row to robots.txt:
Disallow: /*?route=product/search&filter_tag=/
Do you think it is correct, or should I remove the plug-in?
I hope you understand what the problem is.
-
When you noindex a page, any links pointing to those pages pour link juice away from your indexed pages. You should never noindex pages, IMO.
I assume you are using a CMS or some sort of plug-in; this is a common cost when you do so. A CMS creates very untidy code, which is not good for SEO.
-
The URLs are: /product/search&filter_tag=%D0%B1%D0%B8%D0%B6%D1%83%D1%82%D0%B0
After the = there are a lot of combinations. Is it correct to put this in robots.txt?
Disallow: /*?route=product/search&filter_tag=/
-
Should I disallow search (in robots.txt)?