SEOmoz crawl error
-
Hi,
I'm getting a crawl error complaining about missing meta descriptions...
But the errors are all for non-existent index files, in directories that only contain PDF files and some thumbnails of the front page...
Just started trying to learn this stuff...!
Cheers
Rod
-
Thanks Keri, I'll do that...
-
Rod, I think the best thing here is to email help@seomoz.org with your username, outline your situation, and they can tell you what's up and whether you've found a bug.
-
Mysterio, your answer does not at all address the question. If you could reply only when you do know the answer, and then add a little information about the answer, that would be great. Thanks!
-
But they aren't pages... ??
And I don't understand why the crawl diagnostics is complaining about a file called thumbs.db...???
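For what it's worth, a common cause of this (an assumption here, since the crawl report itself isn't shown) is that the server has directory listings enabled, so the crawler discovers Thumbs.db and the PDF-only folders as crawlable URLs that naturally have no meta descriptions. A minimal .htaccess sketch, assuming Apache 2.4:

```apache
# Stop Apache from generating browsable directory listings,
# so PDF-only folders no longer appear as crawlable "pages"
Options -Indexes

# Deny access to Windows thumbnail-cache files outright
<Files "Thumbs.db">
    Require all denied
</Files>
```

On Apache 2.2 the equivalent of the last stanza is `Order allow,deny` plus `Deny from all`.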
Related Questions
-
Redirect and Redirect Error in Moz Crawl
Hello, We have a WordPress blog attached to our Magento website, located at domain.co.uk/blog/. Moz was reporting multiple versions of our pages (http and https), so I updated the .htaccess file to what is below. This has fixed most of the errors; however, the homepage is being a little tricky. Moz now says the page is redirecting, and then redirecting again:
http://www.domain.co.uk/blog to
http://www.domain.co.uk/blog/ to
https://www.domain.co.uk/blog/

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /blog/
RewriteCond %{HTTPS} !=on
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
RewriteRule ^index.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /blog/index.php [L]
</IfModule>
# END WordPress

Within the WordPress settings the URLs are set up as follows:
WordPress Address (URL): https://www.domain.co.uk/blog
Site Address (URL): https://www.domain.co.uk/blog
I tried to add a trailing / to these but it gets automatically removed, so I am assuming that WordPress is serving up https://www.domain.co.uk/blog, RewriteBase /blog/ is redirecting it to /, and then my HTTPS rewrite is redirecting it again. I am not sure where exactly to fix this; could anybody advise? Many thanks
On-Page Optimization | | ATP0 -
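A sketch of one thing worth trying (assumption: the second hop comes from the HTTPS rule echoing %{REQUEST_URI}, which preserves the slash-less /blog): build the redirect target explicitly as /blog/$1, so the HTTPS redirect lands on the canonical slashed URL in one hop. If a first hop persists, it may be Apache's mod_dir or WordPress's own canonical redirect adding the slash, and would need to be addressed there.

```apache
# /blog/.htaccess sketch; only the HTTPS rule changes: the target
# is built as /blog/$1 (with its slash) rather than echoing
# %{REQUEST_URI}, so http://host/blog can go straight to
# https://host/blog/ without a second redirect
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /blog/
RewriteCond %{HTTPS} !=on
RewriteRule ^(.*)$ https://%{HTTP_HOST}/blog/$1 [L,R=301]

RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /blog/index.php [L]
</IfModule>
```

After changing redirects like this, testing with `curl -I` on each URL variant shows exactly how many hops remain.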
Random /feed 404 errors from a WordPress site
My Moz Analytics report shows a 404 error on a page which I think should not exist at all. The URL is http://henryplumbingco.com/portfolio-item/butler-elementary/feed/. When I checked Webmaster Tools, it looks like there are a number of random /feed URLs throwing 404 errors. I am using WordPress and the Enfold theme. Does anyone know how to get rid of these errors? Thanks,
On-Page Optimization | | aj6130 -
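If those portfolio-item feeds shouldn't exist at all, one hedged option (assuming Apache; the /portfolio-item/ path comes from the URL in the question) is to 301 the feed URLs back to the parent page, so crawlers stop logging 404s for them:

```apache
# Redirect stray /portfolio-item/…/feed/ URLs to the item page itself
RewriteEngine On
RewriteRule ^portfolio-item/([^/]+)/feed/?$ /portfolio-item/$1/ [L,R=301]
```

WordPress typically advertises a feed for each post in the theme's head section, so the links may keep being rediscovered until the theme stops emitting them as well.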
Internal 404 Error
Hi, sorry for the newbie question. I have a few 404 pages in my Moz crawl report, for example this one: http://www.dwliverpoolphotography.co.uk/blog/www.coraclecomm.wordpress.com. How can I find the page that is linking to it, so I can fix the link or delete it? Best wishes, David.
On-Page Optimization | | WallerD0 -
Moz Crawl Shows Duplicate Content Which Doesn't Seem To Appear In Google?
Morning All, First post, be gentle! Moz crawled our website and reported 2,500 high-priority duplicate content issues; not good. However, if I just do a simple site:www.myurl.com in Google, I cannot see these duplicate pages... very odd. Here is an example:
http://goo.gl/GXTE0I
http://goo.gl/dcAqdU
So the same page has a different URL; Moz brings this up as an issue, and I would agree with that. However, if I google both URLs, they both bring up the same page, but with the original URL of http://goo.gl/zDzI7j. In other words, two different URLs bring up the same indexed page in Google... weird. I thought about using a wildcard in robots.txt to disallow these duplicate pages with poor URLs, something like:
Disallow: /*display.php?product_id
However, I read various posts suggesting it might not help our issues? I don't want to make things worse. On another note, my colleague paid for an "SEO service" and they just dumped thousands of back-links to our website; of course, that's come back to bite us in the behind. Does anyone have any recommendations for a good service to remove these back-links? Thanks in advance!!
On-Page Optimization | | scottiedog0 -
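On the robots.txt idea: a wildcard rule like the one quoted above would stop crawling of the parameterised URLs, but it does not de-index URLs Google already knows, and it hides any link value they carry. A rel=canonical tag on the display.php variants, pointing at the preferred URL, is generally the safer fix. A sketch of both, with hypothetical paths:

```
# robots.txt; blocks crawling (not indexing) of the duplicates
User-agent: *
Disallow: /*display.php?product_id
```

```html
<!-- In the <head> of each display.php?product_id=… page,
     pointing at the preferred URL (example path, not from the thread) -->
<link rel="canonical" href="https://www.myurl.com/products/example-product" />
```

With the canonical in place, Google consolidates the variants onto one URL without any crawl blocking at all.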
Is the HTML content inside an image slideshow of a website crawled by Google?
I am building a website for a client, and I am in a dilemma over whether to go for an image slideshow with HTML content on the slides or a static full-size image on the homepage. My concern is that HTML content in the slideshow may not get crawled by Google, and hence may not be SEO-friendly.
On-Page Optimization | | aravinn0 -
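Generally, content that is present in the page's HTML source is crawlable even if a script shows and hides it one slide at a time; what tends to be invisible to crawlers is text that only exists inside images, or is injected by scripts the crawler cannot render. A hedged sketch of the crawl-friendly pattern (class names are made up):

```html
<!-- Each slide's heading and copy live in the markup itself,
     so they are in the HTML source regardless of which slide
     the script is currently displaying -->
<div class="slideshow">
  <section class="slide">
    <h2>Slide one heading</h2>
    <p>Real text a crawler can read, not text baked into an image.</p>
  </section>
  <section class="slide">
    <h2>Slide two heading</h2>
    <p>More indexable copy here.</p>
  </section>
</div>
```

Viewing the page source and checking the slide text is actually there is a quick sanity test for any slideshow plugin.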
Big problem with my new crawl report
I am the owner of a small OpenCart online store. I installed http://www.opencart.com/index.php?route=extension/extension/info&extension_id=6182&filter_search=seo. Today my new crawl report is awful: errors are up to 520 (30 before), warnings up to 1,000 (120 before), and notices up to 8,000 (1,000 before). I noticed that the problem is with search; there is a lot of duplicate content in search results only. What should I do?
On-Page Optimization | | ankali0 -
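If the duplicates really are all internal search-result URLs, a hedged first step is to keep crawlers out of OpenCart's search route in robots.txt (the route below is OpenCart's standard search URL; verify it matches the links the store actually generates):

```
# robots.txt; stop crawling of internal search results
User-agent: *
Disallow: /index.php?route=product/search
Disallow: /*?search=
Disallow: /*&search=
```

Internal search pages are a classic duplicate-content source, since every query string produces a "new" URL over the same catalogue.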
Errors when validating HTML with the W3C after adding Google Custom Search
Hello, I have added Google Custom Search to my website, and when I check it with the W3C HTML validator it reports many errors, e.g.:
there is no attribute "enableHistory" (for <gcse:searchbox-only enablehistory="true" autocompletemaxcompletions="5" au…>)
or: there is no attribute "resultsUrl", and so on. Has anyone faced this problem? I don't know how to fix it. Please help!
On-Page Optimization | | JohnHuynh0 -
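The gcse: tags are Google's custom-element markup, which the W3C validator rejects because they are not part of (X)HTML. Google's Custom Search Element also accepts a div-based form with data- prefixed attributes, which validates as HTML5; a sketch, with the attribute names mirrored from the snippet in the question:

```html
<!-- HTML5-valid alternative to the <gcse:searchbox-only> tag -->
<div class="gcse-searchbox-only"
     data-enableHistory="true"
     data-autoCompleteMaxCompletions="5"></div>
```

The Custom Search script picks up either syntax, so swapping to the div form should not change the widget's behaviour.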
How To Prevent Crawling Shopping Carts, Wishlists, Login Pages
What's the best way to prevent engines from crawling your website's shopping cart, wishlist, and login pages, etc.? Obviously have them in robots.txt, but is there any other action that should be taken?
On-Page Optimization | | Romancing0
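A hedged robots.txt sketch (the paths are examples; match them to the site's real URLs). Beyond robots.txt, a meta robots noindex tag on those pages covers the case where a blocked URL still gets indexed from external links, since robots.txt stops crawling but not indexing:

```
# robots.txt; keep crawlers out of cart, wishlist, and account URLs
User-agent: *
Disallow: /cart/
Disallow: /checkout/
Disallow: /wishlist/
Disallow: /login/
```

For the noindex route the page itself must remain crawlable, as the crawler has to fetch the page to see the `<meta name="robots" content="noindex">` tag.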