Can Search Engines Read "Incorrect" URLs?
-
I know that ideally a URL should be something like domain.com/topic, but if the URL contains additional characters, for example domain.com/topic?keyword, can search engines still understand the complete words in the URL, even though there are additional "incorrect" characters? Or do they stop "reading" once they hit odd characters?
Thanks!
-
A few other things to note for having parameters in URLs:
- In Google Webmaster Tools and Bing Webmaster Tools, you can instruct the search engines to ignore certain parameters, so that they'll treat domain.com/topic?keyword and domain.com/topic as the same page (provided ?keyword doesn't change the page content).
- You can also place a rel=canonical link element on pages. For example, you could canonicalize domain.com/topic?keyword to domain.com/topic so that it passes its PageRank along.
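As a sketch, the canonical link element for the parameterized URL from the question would sit in that page's head like this (domain.com/topic is the question's placeholder domain, not a real site):

```html
<!-- Served on domain.com/topic?keyword — points engines at the parameter-free version -->
<link rel="canonical" href="https://domain.com/topic" />
```

With this in place, link equity earned by the ?keyword variant is consolidated onto the clean URL.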
-
Search engines will read all your parameters unless you tell Google, via Webmaster Tools, which parameters to ignore. This can cause an issue: with a URL like domain.com/topic?keyword&somefield, the pages that combine the keyword with other parameters will split the link juice between them. So if somefield has 10 possible values, each indexed page gets roughly 1/10 of the value.
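A minimal sketch of that dilution, and of collapsing the parameter variants back to one canonical URL with Python's standard library (the domain and the keyword/somefield parameter names are the hypothetical ones from the example above):

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

# Parameters that don't change page content; the engines should ignore these.
IGNORED_PARAMS = {"somefield"}

def canonicalize(url):
    """Strip ignored query parameters so all variants map to one URL."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in IGNORED_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))

# Ten somefield values produce ten indexable variants of the same content...
variants = [f"https://domain.com/topic?keyword=shoes&somefield={i}" for i in range(10)]
# ...but they all collapse to a single canonical URL.
canonical_urls = {canonicalize(u) for u in variants}
print(len(variants), "indexed variants ->", len(canonical_urls), "canonical URL")
```

Without that consolidation, each of the 10 variants would earn roughly a tenth of the link value on its own.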
So it is better to use URL rewrites to include your keyword in the URL path itself, and then mark the leftover parameters to be ignored in Google Webmaster Tools and the like.
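On Apache, a rewrite along these lines does that (the /topic/ path and topic.php script name are hypothetical, just to illustrate the pattern):

```apache
# Serve /topic/blue-widgets from topic.php?keyword=blue-widgets,
# so the keyword lives in the crawlable path instead of a query string.
RewriteEngine On
RewriteRule ^topic/([a-z0-9-]+)/?$ /topic.php?keyword=$1 [L,QSA]
```

The QSA flag keeps any extra query parameters intact, which you would then flag as ignorable in Webmaster Tools.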
-
Search engines can read most characters in a URL string, but & in particular generally separates variables in a query string, and those typically don't carry much information about what a page is about. Sometimes those variables do hold the topic or category of a shopping cart page, so I have to imagine that information can be taken into account, but for long URLs like the following it is hard to believe everything is factored into the URL's relevance to the keyword: http://www.google.com/search?q=long+url+string&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a
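Parsing that URL shows how little of it is topical; here's a quick sketch with Python's standard library:

```python
from urllib.parse import urlsplit, parse_qs

url = ("http://www.google.com/search?q=long+url+string&ie=utf-8&oe=utf-8"
       "&aq=t&rls=org.mozilla:en-US:official&client=firefox-a")
params = parse_qs(urlsplit(url).query)

# Only q carries topical keywords; ie, oe, aq, rls, and client are
# encoding and browser metadata with no relevance signal.
for name, values in params.items():
    print(name, "->", values[0])
```

Of the six parameters, one is keyword-bearing and five are noise, which is why it's hard to believe every character contributes to relevance.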
Search engines index the whole URL, and if there is keyword-rich content in it, that can definitely help, both from having the keyword bolded in the snippet (CTR win!) and from a possible bump in the page's relevance to the keyword.
-
In general, search engines are able to identify keywords in the URL even if they appear in, for example, a parameter that follows a "?" or another non-alphanumeric character. They may not treat that as strong a signal as a keyword in the file name, subdomain, or domain name, though. Hope that answers your question.