Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
How important is the file extension in the URL for images?
-
I know that descriptive image file names are important for SEO. But how important is it to include .png, .jpg, .gif (or whatever file extension) in the URL path? e.g. https://example.com/images/golden-retriever vs. https://example.com/images/golden-retriever.jpg
Furthermore, since you can set the filename in the Content-Disposition response header, is there any need to include the descriptive filename in the URL path?
Since I'm pulling most of our images from a database, it'd be much simpler to not care about simulating a filename, and just reference an image id in my templates.
Example:
1. Browser requests GET /images/123456
2. Server responds with the image, setting both Content-Disposition and Link (canonical) headers:
Content-Disposition: inline; filename="golden-retriever"
Link: <https://example.com/images/123456>; rel="canonical"
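In code, a minimal sketch of what I have in mind, assuming a Flask app (get_image_from_db is a hypothetical stand-in for the real database lookup):

```python
from flask import Flask, Response

app = Flask(__name__)

def get_image_from_db(image_id):
    # Hypothetical database lookup; returns (image bytes, mime type).
    return b"...", "image/jpeg"

@app.route("/images/<image_id>")
def image(image_id):
    data, mime = get_image_from_db(image_id)
    resp = Response(data, mimetype=mime)
    # The descriptive filename lives in a header rather than the URL path.
    resp.headers["Content-Disposition"] = 'inline; filename="golden-retriever"'
    # Self-referential canonical for the id-based URL.
    resp.headers["Link"] = f'<https://example.com/images/{image_id}>; rel="canonical"'
    return resp
```
-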
In theory, there should be no difference - the canonical header should mean that Google treats the inclusion of /images/123456 as exactly the same as including /images/golden-retriever.
It is slightly messier, though, so if it were easy, I'd go down the route of only ever using the /golden-retriever version - but if that's difficult, the two are theoretically the same and it should be fine.
-
@Will Thank you so much for this response. Very helpful.
"If you can't always refer to the image by its keyword-rich filename"...
If I'm already including the canonical link header on the image, and am able to serve from both /images/123456 and /images/golden-retriever (canonical), is there any benefit to referencing the canonical over the other in my image tags?
-
Hi James. I've responded with what I believe is a correct answer to MarathonRunner's question. There are a few inaccuracies in your responses to this thread, as pointed out by others below. Please could you keep your future responses to areas where you are confident that you are correct and helpful? Many thanks.
-
@MarathonRunner - you are correct in your inline responses - it's totally valid to serve an image (or other filetype) without an extension, with its type identified by the Content-Type. Sorry that you've had a less-than-helpful experience here so far.
To answer your original questions:
- From an SEO perspective, there is no need that I know of for your images to have a file extension - the content type should be fine
- However, I have no reason to think that a filename in the Content-Disposition header will be recognised as a ranking signal - what you are describing is a rare use-case, and I haven't seen any evidence that search engines would treat it as the "real" filename
If you can't always refer to the image by its keyword-rich filename, then could you:
- Serve it as you propose (though without the Content-Disposition filename)
- Serve a rel="canonical" link to a keyword-rich filename (https://example.com/images/golden-retriever in your example)
- Also serve the image on that URL
This only helps if you are able to serve the image on the /images/golden-retriever path, but need to have it available at /images/123456 for inclusion in your own HTML templates.
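For illustration, a rough sketch of that setup, assuming a Flask app and a hypothetical id-to-slug mapping (in practice both lookups would be database queries):

```python
from flask import Flask, Response, abort

app = Flask(__name__)

# Hypothetical id-to-slug mapping; in practice a database query.
ID_TO_SLUG = {"123456": "golden-retriever"}
SLUG_TO_ID = {slug: i for i, slug in ID_TO_SLUG.items()}

def load_image(image_id):
    # Hypothetical lookup; returns (image bytes, mime type).
    return b"...", "image/jpeg"

@app.route("/images/<id_or_slug>")
def image(id_or_slug):
    # Accept either the numeric id or the keyword-rich slug.
    image_id = SLUG_TO_ID.get(id_or_slug, id_or_slug)
    if image_id not in ID_TO_SLUG:
        abort(404)
    data, mime = load_image(image_id)
    resp = Response(data, mimetype=mime)
    # Whichever URL was requested, the canonical points at the
    # keyword-rich version, so both consolidate to one resource.
    slug = ID_TO_SLUG[image_id]
    resp.headers["Link"] = f'<https://example.com/images/{slug}>; rel="canonical"'
    return resp
```

With something like that in place, your templates can keep referencing /images/123456 while search engines consolidate on /images/golden-retriever.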
I hope that helps.
-
If you had really done your research, you would have noticed that the header image is not using an extension.
-
Again, you're mistaken. The Content-Type response header tells the browser what type of file the resource is (its mime type). This is _completely different_ from the file extension in the URL path.
In fact, on the web all file extensions are simulated through the URL path. For example, this page's URL is:
https://moz.com/community/q/how-important-is-the-file-extension-in-the-url-for-images
It's not
https://moz.com/community/q/how-important-is-the-file-extension-in-the-url-for-images.html
How does the browser know the page is an HTML doc? Because of the Content-Type response header. The faked "extension" in the URL path is unnecessary.
You can view the HTTP response headers for any URL using a header inspection tool.
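You can also check them yourself with a few lines of Python - a quick sketch using only the standard library:

```python
import urllib.request

req = urllib.request.Request(
    "https://moz.com/community/q/how-important-is-the-file-extension-in-the-url-for-images",
    method="HEAD",  # headers only, no body
)
with urllib.request.urlopen(req) as resp:
    # The Content-Type header, not a URL "extension", declares the type.
    print(resp.headers.get("Content-Type"))  # e.g. text/html; charset=utf-8
```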
-
Do you need a new keyboard?
-
@James Wolff: I'm really hoping you're being sarcastic here, as it's totally fine to serve an image without the extension. There are many other ways for a crawler to understand what type a file is, including what @MarathonRunner is describing here.
-
This isn't accurate. The file extension (in the URL path) is not the same as the **Content-Type** response header. Browsers respect the Content-Type response header over whatever extension is used in the path.
Example: try serving a file at /golden-retriever.png with a content type of image/jpeg. Your browser will treat the file as a .jpg, and if you attempt to save it, your browser will correct the name to golden-retriever.jpg.
You can route URLs however you want.
Additionally, I'm not aware of any way browsers "leverage cache by content type". Browsers handle caching via the ETag/Expires headers.
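If anyone wants to reproduce the experiment, a quick sketch (assuming Flask; the JPEG bytes are a placeholder):

```python
from flask import Flask, Response

app = Flask(__name__)

JPEG_BYTES = b"..."  # placeholder for real JPEG data

@app.route("/golden-retriever.png")
def mislabelled():
    # A ".png" path served as image/jpeg: the browser trusts the header,
    # so "Save as..." offers golden-retriever.jpg.
    resp = Response(JPEG_BYTES, mimetype="image/jpeg")
    # Caching is likewise driven by headers, not by the extension.
    resp.headers["ETag"] = '"v1"'
    resp.headers["Cache-Control"] = "max-age=3600"
    return resp
```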