Does Googlebot Read Session IDs?
-
I did a raw export from Ahrefs yesterday, and one of our sites has 18,000 backlinks coming from the same site. But they're all the same link, just with a different session ID. The structure of the URL is:
[website].com/resources.php?UserID=10031529
And we have 18,000 of these, each with a different ID.
Does Google read each of these as a unique backlink, or does it realize there's just one link and the session ID is throwing it off? I read differing opinions when researching this, so I'm hoping the Moz community can give some concrete answers.
-
Safest bet: set up canonicals that point to the page minus the parameter, so even if Google does read the session IDs, it will understand that they all relate to the canonical URL. Honestly, I'm not 100% sure whether Google treats those session IDs as separate URLs either; I've seen conflicting information. I do know it treats other parameters as separate URLs. I had a few issues with the way one of our sites handled products (sometimes it was ?model=, sometimes ?prod_id=, and some old products used ?sku=). Adding the canonicals will solve this problem if it exists, and if the problem doesn't exist, it won't hurt to have a self-referential canonical sitting in the code in case someone scrapes your site.
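For what it's worth, a minimal sketch of that self-referential canonical on a PHP page like the resources.php in the question (the hostname is a placeholder, and this assumes the UserID parameter never changes the page's content):

```php
<?php
// Minimal sketch: rebuild the canonical URL from the request path only,
// so ?UserID=... (and any other query string) is dropped. All 18,000
// parameterized variants then point at the same canonical.
$canonical = 'https://www.example.com'
    . parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
?>
<link rel="canonical" href="<?php echo htmlspecialchars($canonical); ?>">
```

Put that in the head of the template and every session-ID variant consolidates to one URL, whatever Google decides to do with the parameter.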
-
You have to keep yourself informed and really watch out for this kind of thing with search engine bots.
Related Questions
-
Disallowed "Search" results with robots.txt and Sessions dropped
Hi
Intermediate & Advanced SEO | | Frankie-BTDublin
I've started working on our website and I've found millions of "Search" URLs which I don't think should be getting crawled and indexed (e.g. .../search/?q=brown&prefn1=brand&prefv1=C.P. COMPANY|AERIN|NIKE|Vintage Playing Cards|BIALETTI|EMMA PAKE|QUILTS OF DENMARK|JOHN ATKINSON|STANCE|ISABEL MARANT ÉTOILE|AMIRI|CLOON KEEN|SAMSONITE|MCQ|DANSE LENTE|GAYNOR|EZCARAY|ARGOSY|BIANCA|CRAFTHOUSE|ETON). I tried to disallow them in the robots.txt file, but our sessions dropped about 10% and our average position in Search Console dropped 4-5 positions over one week. It looks like over 50 million URLs were blocked; they all follow the pattern of the example above and aren't getting any traffic to the site. I've allowed them again, and we're starting to recover. We've been fixing problems with getting the site crawled properly (sitemaps weren't added correctly, products were blocked from spiders on category pages, canonical pages were blocked from crawlers in robots.txt), and I'm thinking Google was doing us a favour and using these pages to crawl the product pages, as it was the best/only way of accessing them. Should I be blocking these "Search" URLs, or is there a better way of going about it? I can't see any value in these pages except Google using them to crawl the site.
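For anyone weighing the same trade-off: once the crawl paths are fixed (sitemaps, category links, unblocked canonicals), a narrow robots.txt rule along these lines could target just the faceted search URLs rather than all of /search/ (patterns are illustrative, built from the example URL above; test carefully before deploying, given the sessions drop described):

```
# Illustrative patterns only, based on the example URL in the question.
# Blocks parameterized internal-search results; plain pages stay crawlable.
User-agent: *
Disallow: /search/?q=
Disallow: /search/*?*prefn1=
```
-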
Could surfacing top blog posts with a "Read more" link boost traffic to the main domain?
Hi mozzers, Our blog is located on blog.example.com, powered by WordPress, and unfortunately we can't migrate it to the main domain at the moment. We would like to grow the main domain's organic traffic, and we want to test an option that could help us leverage the traffic of the top blog posts. There is a WordPress API that would allow us to pull 100-200 words (a snippet of the blog post) from the blog posts into the main domain, with a "Read more" link pointing back to the blog.
Intermediate & Advanced SEO | | Ty1986
Is this even a good idea, assuming we make sure the content is not identical?
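Since the question mentions the WordPress API, here's a rough sketch of what pulling those snippets onto a non-WordPress PHP page could look like (blog.example.com stands in for the real subdomain; the endpoint, per_page, and _fields are the standard WP REST API surface shipped with recent WordPress versions):

```php
<?php
// Rough sketch: fetch the latest post excerpts from the blog subdomain via
// the standard WP REST API and render them with "Read more" links back to
// the blog. Assumes allow_url_fopen; swap in cURL if it's disabled.
$json  = file_get_contents('https://blog.example.com/wp-json/wp/v2/posts?per_page=5&_fields=title,excerpt,link');
$posts = json_decode($json, true);

foreach ($posts as $post) {
    echo '<h3>' . $post['title']['rendered'] . '</h3>';
    echo $post['excerpt']['rendered']; // short excerpt, already HTML
    echo '<a href="' . htmlspecialchars($post['link']) . '">Read more</a>';
}
```

Rendering the excerpt server-side (rather than injecting it with JavaScript) keeps it visible to crawlers, and keeping it to a snippet with a link back fits the non-identical-content caveat in the question.
-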
Does Google Read URLs if They Include a # Tag? Re: SEO Value of Clean URLs
An ECWID rep stated, in regard to an inquiry about how the ECWID URLs are not customizable, that "an important thing is that it doesn't matter what these URLs look like, because search engines don't read anything after that # in URLs." Example: http://www.runningboards4less.com/general-motors#!/Classic-Pro-Series-Extruded-2/p/28043025/category=6593891 Basically all of this: #!/Classic-Pro-Series-Extruded-2/p/28043025/category=6593891 That is a snippet out of a conversation where ECWID said that dirty URLs don't matter beyond a hash... Is that true? I haven't found any rule saying that Google or other search engines (Google is really the most important) don't index, read, or place value on the part of the URL after a # tag.
Intermediate & Advanced SEO | | Atlanta-SMO
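One way to see why a plain fragment is invisible to search engines: the browser never sends anything after the # to the server at all. A quick sketch (the URL is the one from the question):

```php
<?php
// The fragment (everything after #) is a client-side-only URL component;
// it is never transmitted in the HTTP request, so the server -- and any
// crawler fetching the page -- only sees the path and query string.
$url = 'http://www.runningboards4less.com/general-motors#!/Classic-Pro-Series-Extruded-2/p/28043025/category=6593891';
$parts = parse_url($url);
echo $parts['path'] . "\n";     // /general-motors  (what the server sees)
echo $parts['fragment'] . "\n"; // !/Classic-Pro-Series-Extruded-2/...  (client-side only)
```

The one historical caveat is that #! URLs like this were part of Google's old AJAX crawling scheme, where the crawler rewrote #! into a ?_escaped_fragment_= request; that scheme has since been deprecated.
-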
GoogleBot Mobile & Depagination
I am building a new site for a client and we're discussing their inventory section. What I would like to accomplish is to have all their products load on scroll (or swipe on mobile). I have seen suggestions to load all content in the background at once and show it as they swipe, lazy loading the product images. This will work fine for the user, but what about how Googlebot Mobile crawls the page? Will it simulate swiping? Will it load every product at once, killing page load times b/c of all the images it must load at once? What are considered SEO best practices when loading inventory with this technique? I worry about this b/c it's possible for 2,000+ results to be returned, and I don't want Googlebot to try and load all those results at once (with their product thumbnail images). And I know you will say to break those products up into categories, etc. But I want the "swipe for more" experience. 99.9% of our users will click a category or filter the results, but if someone wants to swipe through all 2,000 items on the main inventory landing page, they can. I would rather have this option than "Page 1 of 350". I like option #4 in this question, but I'm not sure how Google will handle it. http://ux.stackexchange.com/questions/7268/iphone-mobile-web-pagination-vs-load-more-vs-scrolling?rq=1 I asked Matt Cutts to answer this, if you want to upvote this question. 🙂
Intermediate & Advanced SEO | | nbyloff
https://www.google.com/moderator/#11/e=adbf4&u=CAIQwYCMnI6opfkj
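One common pattern for squaring "swipe for more" with crawlability is progressive enhancement: back the infinite scroll with plain paginated URLs that the JavaScript intercepts, so users swipe while crawlers follow ordinary links. A rough sketch (inventory.php, the page parameter, and get_products() are all hypothetical names, not from the actual build):

```php
<?php
// Hypothetical sketch: a crawlable paginated fallback behind an
// infinite-scroll UI. JavaScript can intercept the "More" link and fetch
// the next page inline; a crawler that doesn't scroll just follows the href.
$page     = max(1, (int) ($_GET['page'] ?? 1));
$perPage  = 24;
$products = get_products($page, $perPage); // hypothetical data-access helper

foreach ($products as $p) {
    // loading="lazy" defers thumbnails, so neither users nor bots pull
    // thousands of images up front.
    echo '<div class="product"><img loading="lazy" src="'
        . htmlspecialchars($p['thumb']) . '" alt="">'
        . htmlspecialchars($p['name']) . '</div>';
}

// Plain link = crawl path; the swipe UI enhances it client-side.
echo '<a href="/inventory.php?page=' . ($page + 1) . '" class="load-more">More</a>';
```

This way Googlebot never has to "swipe"; it just discovers /inventory.php?page=2, ?page=3, and so on.
-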
Changing URLs to include a fixed identifier or ID
The Scenario: I have pages that I need to track, located in one domain within several folders. Adding a common identifier or ID (e.g. www.domain.com/folder/page-name-identifier.html) to those URLs would ease my work, as I'd be able to select, in Analytics, all traffic for URLs containing that specific identifier. The URLs that need tracking lack this identifier today. My Plan: add the identifier (7 letters, fixed and common for all URLs) to those existing pages and 301 redirect from the old to the new URLs. My Question: will this change of URLs and the redirects hurt me SEO-wise in any way?
Intermediate & Advanced SEO | | Tit
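For the redirect side, a minimal sketch assuming PHP handles the old URLs (the identifier and paths below are placeholders, not the real 7-letter ID):

```php
<?php
// Hypothetical sketch: map old URLs to their new identifier-bearing
// counterparts and issue a permanent (301) redirect so search engines
// transfer the old URLs' equity to the new ones.
$map = [
    '/folder/page-name.html' => '/folder/page-name-trackid.html',
];

$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
if (isset($map[$path])) {
    header('Location: https://www.domain.com' . $map[$path], true, 301);
    exit;
}
```

Done one-to-one like this (rather than through redirect chains), a 301 generally preserves the pages' standing, though a brief period of re-crawling and ranking fluctuation is normal.
-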
Received "Googlebot found an extremely high number of URLs on your site:" but most of the example URLs are noindexed.
An example URL can be found here: http://symptom.healthline.com/symptomsearch?addterm=Neck%20pain&addterm=Face&addterm=Fatigue&addterm=Shortness%20Of%20Breath A couple of questions: Why is Google reporting an issue with these URLs if they are marked as noindex? What is the best way to fix the issue? Thanks in advance.
Intermediate & Advanced SEO | | nicole.healthline
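Worth keeping in mind: Google has to crawl a URL before it can see a noindex, so noindexed URLs still count toward crawl volume and can still trigger this warning. The directive can also be sent as an HTTP header instead of a meta tag; a minimal sketch in PHP:

```php
<?php
// Sketch: noindex via HTTP response header. Equivalent to the meta robots
// tag, but works for any response type. Google must still fetch the URL to
// see it, which is why noindexed URLs can still trip the "extremely high
// number of URLs" warning.
header('X-Robots-Tag: noindex, follow');
```

If the goal is to stop the crawling itself (not just indexing), that's a robots.txt or URL-parameter-handling question instead, and the two shouldn't be combined on the same URLs, since a blocked URL's noindex can never be seen.
-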
Fetch as GoogleBot "Unreachable Page"
Hi, We are suddenly getting an "Unreachable Page" error when any page of our site is accessed as Googlebot from Webmaster Tools. There are no DNS errors shown in "Crawl Errors". We have two web servers, web1 and web2, behind a software load balancer (HAProxy). The same network configuration has been working for over a year, and we never had any Googlebot errors before the 21st of this month. We tried to rule out any error in the sitemap, .htaccess, or robots.txt by excluding the load balancer and pointing DNS to web1 and web2 directly, and Googlebot was able to access the pages properly with no error. But when the load balancer was made active again by pointing DNS back to it, the "unreachable page" error started appearing again. The website is properly accessible from a browser, and there are no DNS errors either, as shown by "Crawl Errors". Can you guide me on how to diagnose the issue? I've tried all sorts of combinations, even removed the firewall, but no success. Is there any way to get more details about the error instead of just the "Unreachable Page" message? Regards, shaz
Intermediate & Advanced SEO | | shaz_lhr
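One quick way to narrow this down is to request the site through the load balancer while presenting Googlebot's User-Agent, since a UA-matching rule in HAProxy or the firewall would explain why browsers work but Googlebot doesn't. A sketch with curl (the hostname is a placeholder):

```
# Fetch response headers through the load balancer, spoofing Googlebot's UA.
# If this fails while a normal browser UA succeeds, suspect a User-Agent
# (or IP-range) rule in the HAProxy config or firewall.
curl -I -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" https://www.example.com/
```

Comparing the response against the same request sent straight to web1/web2 should show which hop introduces the failure.
-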
Fetch as Googlebot
"With Fetch as Googlebot you can see exactly how a page appears to Google" I have verified the site and clicked on Fetch button. But how can i "see exactly how a page appears to Google" Thanks
Intermediate & Advanced SEO | | seoug_2005