Robots.txt for Facet Results
-
Hi,
Does anyone know how to properly add facet URLs to robots.txt?
E.g. one of our facet URLs -
Everything after the # would need to be blocked on all pages with a facet.
Thank you
-
Great, thank you!
-
This is the right answer.
A great way to check is to see whether you have multiple versions of that URL indexed, which you don't: https://www.google.com/search?q=site:http://www.key.co.uk/en/key/platform-trolleys-trucks
-
Google ignores everything after the hash (#) to begin with, so you don't need to block it in robots.txt at all; the fragment never even reaches the server. It's a clever way to pass facet parameters without having to worry about Google crawling duplicate versions of the page.
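As an aside, if the facets were ever exposed as query-string parameters instead of # fragments (which crawlers do see), robots.txt would come into play. A minimal sketch, assuming hypothetical parameter names such as size and colour, not Key's actual URLs:

```
# Hypothetical example - block crawling of query-string facet URLs.
# Note: fragment (#) URLs can't be blocked here at all, because the
# part after the # is never sent to the server.
User-agent: *
Disallow: /*?*size=
Disallow: /*?*colour=
```

Google supports the * wildcard in Disallow rules, so patterns like these match a facet parameter wherever it appears in the query string.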
Related Questions
-
Strange site link on Google for a Facebook result
A Facebook page targeted to US Hispanics (with content in Spanish and English) is showing me a Hindi sitelink underneath the main Facebook link when I Google (in the US, in English) for the page [ page name facebook]. We don't have any content in Hindi, or targeted to that audience. If I click on the sitelink while logged out of Facebook, I can see it takes me to a Facebook subdomain of hi-in. When I'm logged in, it just redirects me to the same page. Any idea why this could be happening?
Intermediate & Advanced SEO | M_80 -
Indexation of internal search results from infinite scroll
Hello, I have an issue where we will have a website set up with dynamic (AJAX) result pages based on the selection of certain filters chosen by the user. The result page will show 12 results, and if the user scrolls down, the page will lazy load (infinite scroll) additional results. So, for example, with these filters:

Filter A: Size
Filter B: Color
Filter C: Location

We could potentially have a page for "Large, Blue, New York" results dynamically generated. My issue is that I want Google to crawl and index all these variations, so that I can have a page that ranks for "Large Blue New York", another page that ranks for "Small Orange Miami", etc. However, I do not need all the products indexed; just the page with the first set of dynamic results would be enough, since the additional products would just be more of the same. In other words, I am trying to get these pages with filters applied indexed, not necessarily every possible product. Can anyone comment on the best way to get Google to index all the dynamic variations, and on the proper way of paginating these pages? Thank you
Intermediate & Advanced SEO | Digi12340 -
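One common pattern for the question above is to give every filter combination a real, crawlable URL (plain anchor links in the HTML, not just AJAX handlers) and update it with the History API as the user filters, while the lazy-loaded results beyond the first set stay out of the index. A sketch only; the URL scheme and rendering stub are hypothetical:

```typescript
// Hypothetical sketch: each filter combination maps to a real URL
// (e.g. /results/large/blue/new-york) that the server can also render,
// so Googlebot can reach it from plain <a href> links in the page.
function renderResults(results: unknown[]): void {
  // Placeholder: insert the first page of results into the DOM here.
  console.log(`rendered ${results.length} results`);
}

function applyFilters(size: string, color: string, location: string): void {
  const url = `/results/${size}/${color}/${location}`;

  // Update the address bar without a full reload; crawlers that don't
  // execute the script still reach the same URL via the link's href.
  history.pushState({ size, color, location }, "", url);

  // Fetch and render only the first 12 results; anything beyond that is
  // lazy-loaded on scroll and never needs its own indexable URL.
  fetch(`${url}?offset=0&limit=12`)
    .then((res) => res.json())
    .then((results: unknown[]) => renderResults(results));
}
```

The key point is that each filtered page has a stable URL that returns full HTML for its first result set, while the infinite-scroll additions ride on top of it.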
Site: search showing funny results
Hi. When I do a site: search on my domain, the very last result it returns is a URL that is listed under my domain but does not exist on my website. When clicked, it redirects to a really spammy page. If I'm not being clear, just let me know; it's quite hard to explain the situation! Any thoughts on how to get rid of this?
Intermediate & Advanced SEO | TheZenAgency0 -
Mobile Search Results Include Pages Meant Only for Desktops/Laptops
When I put in site:www.qjamba.com on a mobile device, it comes back with some of my mobile-friendly pages for that site (same URL for mobile and desktop, just different formatting), and that's great. HOWEVER, it also shows a whole bunch of pages (not identified by Google as mobile-friendly) that are fine for desktop users but are not supposed to exist for mobile users, because they are too slow. Until a few days ago, those pages were being redirected to the home page for mobile users. I have since changed that to 404 Not Found responses. Do we know whether Google keeps a mobile index separate from the desktop index? If so, I would think that the 404 should work. How can I test whether the 404 Not Founds will remove a URL so it DOESN'T appear on a mobile device when I put in site:www.qjamba.com (or a user searches), but DOES appear on a desktop for the same command?
Intermediate & Advanced SEO | friendoffood0 -
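One quick way to test the 404 behaviour described above, without waiting for search results to update, is to request the page the way Google's smartphone crawler would and check the raw status code. A sketch, assuming Node 18+ with built-in fetch; the path is a placeholder, and the user-agent is one of Google's documented smartphone crawler strings (check their docs for the current one):

```typescript
// Hypothetical check: request a desktop-only page with a mobile
// Googlebot user-agent and confirm it now returns 404 rather than
// the old redirect to the home page.
const MOBILE_GOOGLEBOT_UA =
  "Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) " +
  "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 " +
  "Mobile Safari/537.36 (compatible; Googlebot/2.1; " +
  "+http://www.google.com/bot.html)";

async function checkMobileStatus(url: string): Promise<void> {
  const res = await fetch(url, {
    headers: { "User-Agent": MOBILE_GOOGLEBOT_UA },
    redirect: "manual", // surface 3xx responses instead of following them
  });
  console.log(`${url} -> ${res.status}`); // expect 404 for mobile-blocked pages
}

checkMobileStatus("http://www.qjamba.com/some-desktop-only-page");
```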
Huge increase in server errors and robots.txt
Hi Moz community! Wondering if someone can help? One of my clients (an online fashion retailer) has been receiving a huge increase in server errors (500s and 503s) over the last 6 weeks, and it has got to the point where people cannot access the site because of them. The client has recently changed hosting companies to deal with this, and they have just told us they removed the DNS records once the name servers were changed; they have now fixed this and are waiting for the name servers to propagate again. These errors also correlate with a huge decrease in pages blocked by the robots.txt file, which makes me think someone has perhaps changed it and not told anyone... Anyone have any ideas here? It would be greatly appreciated! 🙂 I've been chasing this up with the dev agency and the hosting company for weeks, to no avail. Massive thanks in advance 🙂
Intermediate & Advanced SEO | labelPR0 -
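A quick diagnostic for a situation like the one above is to poll the live robots.txt the way a crawler would: if it intermittently returns a 5xx, Googlebot backs off crawling the whole site, and if its contents changed during the host migration, that would explain the drop in blocked pages. A sketch along those lines (Node 18+ fetch; the origin is a placeholder):

```typescript
// Hypothetical spot-check: fetch robots.txt and report its status code
// and a fingerprint of its body, so unannounced changes stand out
// when the check is run repeatedly.
import { createHash } from "node:crypto";

async function checkRobots(origin: string): Promise<void> {
  const res = await fetch(`${origin}/robots.txt`);
  const body = await res.text();
  const fingerprint = createHash("sha256").update(body).digest("hex").slice(0, 12);
  // A 5xx status here makes Googlebot stop crawling the entire site.
  console.log(`status=${res.status} lines=${body.split("\n").length} hash=${fingerprint}`);
}

checkRobots("https://www.example-fashion-retailer.com");
```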
Our Robots.txt and Reconsideration Request Journey and Success
We have asked a few questions related to this process on Moz and wanted to give a breakdown of our journey, as it will likely be helpful to others!

A couple of months ago, we updated our robots.txt file with several pages that we did not want to be indexed. At the time, we weren't checking WMT as regularly as we should have been, and within a few weeks we found that one of the pages we were blocking was apparently a dynamic file that led to the blocking of over 950,000 of our pages according to Webmaster Tools. Which page was causing this is still a mystery, but we quickly removed all of the entries.

From research, most people say that things normalize in a few weeks, so we waited. A few weeks passed and things did not normalize. We searched, we asked, and the number of "blocked" pages in WMT, which had increased at a rate of a few hundred thousand a week, was decreasing at a rate of a thousand a week. At this rate it would be a year or more before the pages were unblocked, and this did not change. Two months later, we were still at 840,000 pages blocked. We posted on the Google Webmaster Forum, and one of the mods there said that it would just take a long time to normalize. Very frustrating indeed, considering how quickly the pages had been blocked.

We found a few places on the web suggesting that if you have an issue/mistake with robots.txt, you can submit a reconsideration request. This seemed to be our only hope, so we put together a detailed reconsideration request asking for help with our blocked-pages issue. A few days later, to our horror, we did not get a message offering help with our robots.txt problem. Instead, we received a message saying that we had received a penalty for inbound links that violate Google's terms of use. Major backfire.

We had used an SEO company years ago that posted a hundred or so blog posts for us. To our knowledge, the links didn't even exist anymore. They did... So we signed up for an account with removeem.com. We quickly found many of the links posted by the SEO firm, as they were easily recognizable via the anchor text, and began using Removeem to contact the owners of the blogs. To our surprise, we got a number of removals right away! Others we had to contact a second time, and many did not respond at all. For those we could not find an email address for, we tried posting comments on the blog. Once we felt we had removed as many as possible, we added the rest to a disavow list and uploaded it using the disavow tool in WMT. Then we waited...

A few days later, we had a response. DENIED. In our request, we had specifically asked that, if it were to be denied, Google provide some example links. When they denied our request, they sent us an email including a sample link. It was an interesting example: we actually already had this blog in Removeem, but our version was a domain name, i.e. www.domainname.com, and the version Google had was a WordPress subdomain, i.e. www.subdomain.wordpress.com.

So we went back to the drawing board. This time we signed up for Majestic SEO and tied it in with Removeem, which added a few more links. We also had records from the old SEO company and were able to go through them and locate a number of new links. We repeated the previous process, contacting site owners and keeping track of our progress. We also went through the "sample links" in WMT as best as we could (we have a lot of them) to try to pinpoint any other potentials. We removed what we could and, again, disavowed the rest.

A few days later, we had a message in WMT. DENIED AGAIN! This time it was very discouraging, as it just didn't seem there were any more links to remove. The difference was that this time there was NOT an email from Google, only a message in WMT. So, while we didn't know if we would receive a response, we replied to the original email asking for more example links so we could better understand what the issue was. Several days passed, and we received an email back saying that THE PENALTY HAD BEEN LIFTED! This was of course very good news, and it appeared that our email to Google had been reviewed and received well.

So, the final hurdle was the reason we originally contacted Google: our robots.txt issue. We had not received any information from Google about it and didn't know if it had simply been ignored, or if there was something that might be done about it. As a last-ditch effort, we responded to the email once again and requested help with the robots.txt issue. The weekend passed, and on Monday we checked WMT again. The number of blocked pages had dropped over the weekend from 840,000 to 440,000! Success! We are still waiting and hoping that number will continue downward back to zero.

So, some thoughts:

1. Was our site manually penalized from the beginning, yet without a message in WMT? Or, when we filed the reconsideration request, did the reviewer take a closer look at our site, see the old paid links, and add the penalty at that time? If the latter is the case, then...
2. Did our reconsideration request backfire? Or was it ultimately for the best?
3. When asking for reconsideration, make your requests known. If you want example links, ask for them. It never hurts to ask! If you want to be connected with Google via email, ask to be!
4. If you receive an email from Google, don't be afraid to respond to it. I wouldn't overdo this or spam them; keep it to the bare minimum and don't pester them, but if you have something pertinent to say that you have not already said, then don't be afraid to ask.

Hopefully our journey might help others who have similar issues, and feel free to ask any further questions. Thanks for reading!

TheCraig
Intermediate & Advanced SEO | TheCraig5 -
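For anyone following the same path, the disavow list mentioned above is just a plain-text file uploaded in WMT, one entry per line. A minimal sketch; the domains and URL here are hypothetical, not the actual links from this story:

```
# Paid blog links placed by the old SEO firm; owners unreachable.
# (hypothetical entries for illustration)
domain:spammy-blog-network.example
domain:oldseofirm-blog.wordpress.com
# A single page can also be disavowed by its full URL:
http://another-blog.example/2011/05/sponsored-post.html
```

The domain: form covers every URL on that host (including, per the story above, WordPress subdomain variants you might otherwise miss), while bare URLs disavow only the exact page.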
Can you appear in both Google local and universal results on the page?
I have a client that ranks 2nd for a Google local result, and the local results are the first set of results being displayed, making it in fact the 2nd result on the page for the keyword. There aren't any blended results on this page; it's straightforward local results, then universal results directly after. The client is concerned that they are not appearing high enough in the universal results (they're on page 5 with a different URL than the homepage). I'm not sure why their #2 local ranking isn't good enough, since those are consistently the same results, but I have to ask: can you appear in both local and universal results on the same page?
Intermediate & Advanced SEO | MichaelWeisbaum0 -
Undocumented anchor-text API result
Regarding the anchor-text API, there is no definition for *imr on the wiki:
http://apiwiki.seomoz.org/w/page/13991127/Anchor Text API
For example, http://lsapi.seomoz.com/linkscape/anchor-text/google.com?Scope=phrase_to_page&Cols=2048&Sort=domains_linking_page&Expires=1329770786.46868 returns "[{"apuimr":5.422834471373288e-12},{"apuimr":4.785130890652429e-13},{"apuimr":2.922901387480201e-09}]". What is *imr?
Intermediate & Advanced SEO | sycorr