Why are these m. results showing as blocked?
-
If you go to http://bit.ly/173gdWK, you'll see that m. results are showing as blocked by robots.txt, but we don't have anything in our robots.txt file that specifies to block m. results. Any ideas why these URLs show as blocked?
-
Yeah, I was testing exactly the same thing when you posted your response. I even tried crawling as Googlebot-Mobile and I still get the 301 redirect. Which, from everything I'm seeing, is correct: no matter what client I use (desktop browser, mobile browser, spider), I always get a 301 to the www. version.
@michelleh, are you sure there's a mobile version that isn't being redirected to the www. one?
-
(Using example.com instead of your domain in case you want anonymity later)
If you try to go to any of these m.example.com URLs on your desktop computer, you're redirected on the server to a www.example.com URL. I'm guessing Googlebot and Googlebot-Mobile cannot access m. pages (unless you're sniffing out Googlebot-Mobile specifically to serve it m.example.com pages). If you're looking at screen resolution for these redirects, you might not be catching Googlebot-Mobile, as I don't think Googlebot-Mobile gives a screen resolution in its user agent. I believe you want Googlebot indexing your www. content, and Googlebot-Mobile indexing your m. content, so you'll need to sniff out Googlebot-Mobile's user agent (see here), and redirect it to m. content.
Also of note: I think these should be 302 temporary redirects, not 301 permanent redirects, between your www. and m. versions, as they're not really permanent; they're just getting a given user to the right version of the site. Also, you don't let me switch from the mobile version to the desktop version, which drives me bananas! Let users choose after the initial redirect. If you allow people to switch but keep the 301 redirects, browsers may cache some of the redirects, which will lead to weird behavior when people hit a page that redirected before.
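A sketch of how the 302-plus-user-choice logic could fit together (hypothetical names again; `prefers_desktop` would come from a cookie you set when the user clicks a "view desktop site" link):

```python
def choose_redirect(host, is_mobile_ua, prefers_desktop):
    """
    Decide (status, target_host) for a request, or None to serve it in place.
    302 keeps browsers from caching the hop, so a later "view desktop site"
    choice still takes effect.
    """
    if host.startswith("www.") and is_mobile_ua and not prefers_desktop:
        return (302, "m.example.com")
    if host.startswith("m.") and prefers_desktop:
        return (302, "www.example.com")
    return None
```

With a cached 301, a user who later opts for the desktop site could get bounced straight back to m. by their own browser before your server ever sees the request; the 302 avoids that trap.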
You don't have a robots.txt file at m.example.com/robots.txt, as that redirects to www.example.com/robots.txt even on my phone. I don't think this is the root of the problem, but once you figure things out, you can set up a robots.txt file on your m. subdomain.
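For reference, a minimal allow-all robots.txt you could serve at m.example.com/robots.txt once the redirects are sorted out (each hostname gets its own file, and an empty Disallow line permits crawling of everything):

```
User-agent: *
Disallow:
```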