Internal file extension canonicalization
-
Ok, no doubt this is straightforward, but I'm finding it hard to find a simple answer: our website's internal pages use the .html extension, and trying to navigate to an internal URL without the .html extension results in a 404.
The question is: should a 301 be used to redirect to the extension-less URL to future-proof? And should internal links point to the extension-less URL for the same reason?
Hopefully that makes sense; apologies if the answer is obvious.
-
As above:
example/abc rewrites to example/abc.html
example/abc.html redirects to example/abc
and all internal links point to example/abc
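The scheme above can be sketched as Apache .htaccess rules (a sketch only, assuming Apache with mod_rewrite enabled; adapt for other web servers):

```apache
RewriteEngine On

# 301-redirect direct requests for /abc.html to /abc. Matching against
# THE_REQUEST (the raw client request line) means the internal rewrite
# below won't re-trigger this rule and cause a redirect loop.
RewriteCond %{THE_REQUEST} \s/([^.\s]+)\.html[\s?] [NC]
RewriteRule ^ /%1 [R=301,L]

# Internally rewrite /abc to /abc.html if that file exists; the visitor's
# address bar keeps the extension-less URL.
RewriteCond %{REQUEST_FILENAME}.html -f
RewriteRule ^(.+)$ $1.html [L]
```

The THE_REQUEST guard is the important part: without it, the redirect and the rewrite would bounce the request back and forth indefinitely.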
-
Thank you for the replies.
I will try to clarify what I am getting at; apologies in advance for any naivety.
I understand homepage canonicalization; the confusion is over how this applies to internal pages.
Logically, I am struggling to see how internal pages are any different from a homepage in terms of the need to avoid multiple URLs, and thus an extension-less URL seemed appropriate. Not to mention the benefit of cleaner URLs: easier to link to, remember, etc.
i.e.
example/abc
example/abc.html
example/abc/index.html
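Whichever variant you standardize on, a rel=canonical link in each page's head tells search engines which of those URLs is the preferred one (a sketch; the domain and path are illustrative):

```html
<!-- In the <head> of the page served at all three variants -->
<link rel="canonical" href="https://example.com/abc">
```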
-
As Nick said, you don't need to do this, but if you are:
1. REWRITE the new URL to the old URL, as your web server needs to know the extension
2. REDIRECT the old URL to the new one, in case you already have links to the old URLs; you don't want duplicate content
3. make sure that all internal links point to the new URL; you don't want unnecessary redirects, as they leak link juice
-
I'm about to make a whole lot of assumptions about your website to give this answer, so just be aware.
Your website is built static, using HTML, hence the .html file extension. If you're seeing websites that don't have file extensions, it's most likely they are using content management systems (or have some serious /folder/index.html stuff going on).
Having a file extension like .html or .aspx or .php is not a bad thing. On websites like yours, it is required (unless you do the above subfolder thing) because the browser is grabbing an actual file rather than something dynamically generated by a CMS. It has nothing to do with future-proofing.
As for 301'ing non-extension URLs to extension'd ones... I don't know why you'd need to do that for your type of site.
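For completeness: on Apache, content negotiation can also serve extension-less URLs for plain static files, with no CMS and no subfolders (a sketch, assuming mod_negotiation is available on the server):

```apache
# With MultiViews on, a request for /abc will serve abc.html automatically
Options +MultiViews
```

Note that MultiViews serves the file without redirecting, so both /abc and /abc.html would resolve; you'd still want one canonical form for links.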