Does Google index XML files?
-
Do Google or other search engines include XML files in their indexes? More specifically, I am wondering how Google tells the difference between a plain XML file and an RSS feed.
-
Yes, Google indexes XML files. You can confirm this by trying a search using filetype:xml.
I am not an expert on RSS files, but I believe the XML versions use the <rss> tag as their root element. If I were to take a guess, I would say Google can easily examine the file (they read PDFs, for example) and determine whether it is an RSS feed.
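To illustrate the guess above: a crawler could distinguish a feed from a generic XML file just by inspecting the root element (RSS feeds use <rss>, Atom feeds use <feed>). This is a minimal sketch of that idea, not a description of how Google actually classifies files:

```python
import xml.etree.ElementTree as ET

def classify_xml(text):
    """Guess whether an XML document is a feed or generic XML
    by looking at its root element."""
    root = ET.fromstring(text)
    # Strip any namespace prefix, e.g.
    # "{http://www.w3.org/2005/Atom}feed" -> "feed"
    tag = root.tag.rsplit("}", 1)[-1]
    if tag == "rss":
        return "RSS feed"
    if tag == "feed":
        return "Atom feed"
    return "generic XML"

print(classify_xml('<rss version="2.0"><channel/></rss>'))  # RSS feed
print(classify_xml('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"/>'))  # generic XML
```

A real crawler would of course look at much more than the root tag (MIME type, structure, link elements), but the root element alone already separates the common cases.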
-
Google and other search engines know about XML files. I hesitate to say "indexed" because you don't really see them popping up on a SERP. Robots can tell the difference between a plain XML file and an RSS feed from the technical markup and structure of the file itself.
For example, a very common use of XML with Google is sitemaps: you can submit the sitemap in Webmaster Tools so that Google knows about the file. But if you did a search for www.rootdomain.com plus the sitemap filename, you would never see the .xml file actually appear in the results; you would need to visit the URL directly.
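For reference, a sitemap is ordinary XML following a fixed schema, which is exactly why robots can recognise it by structure rather than by how it surfaces in search. A minimal example (the URL and date are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2013-05-28</lastmod>
  </url>
</urlset>
```

The <urlset> root element (versus <rss> for a feed) is what signals to a crawler that this file is a sitemap rather than syndicated content.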
Related Questions
-
Ranking and Indexing Issue
We have an established site, www.getinspired365.com, that previously wasn't SEO optimised. We are currently testing some new pages to see if we can get them to rank in Google, but we are seeing huge fluctuations in where they rank. Within the first few days our page ranked on the first or second page of results, but it has now dropped out of the top 250. Have we made any mistakes with our optimisation?
Example page: keyword to target - "If you laugh, you think, and you cry, that's a full day. That's a heck of a day. You do that seven days a week, you're going to have something special." URL: http://www.getinspired365.com/if-you-laugh-you-think-and-you-cry-thats-a-full-day-thats-a-heck-of-a-day-you-do-that-seven-days-a-week-youre-going-to-have-something-special
We can see it has been indexed by Google but it is now not ranking in the top 250 search results. We have run the On-Page Grader from SEOMoz and it grades the page as an "A", so we suspect our on-page SEO is OK, but we can't work out why the page isn't ranking, despite appearing on the first or second page after a few days.
We have other pages that aren't SEO optimised yet rank better than our newly optimised pages, e.g. keyword - "THE BEST LOVE IS THE KIND THAT AWAKENS THE SOUL AND MAKES US REACH FOR MORE, THAT PLANTS A FIRE IN OUR HEARTS AND BRINGS PEACE TO OUR MINDS. AND THAT'S WHAT YOU'VE GIVEN ME. THAT'S WHAT I'D HOPED TO GIVE YOU FOREVER" URL: http://www.getinspired365.com/20130528
Any advice you could offer would be great. Thanks! Mike
Technical SEO | MichaelWhyley0
Fetch as Google - stylesheets and js files are temporarily unreachable
Fetch as Google often says that some of my stylesheets and JS files are temporarily unreachable. Is that a problem for SEO? These stylesheets and scripts aren't blocked, and Search Console shows that a normal user would see the page just fine.
Technical SEO | WebGain0
Google Indexing - what did I miss?
Hello, all SEOers! I relaunched my website about three weeks ago, and in order to preserve SEO value as much as possible I set up 301 redirects, an XML sitemap, and so on to minimise possible data loss. The problem is that about a week after the relaunch, my team somehow made a mistake and removed all the 301 redirects. Now my old site's URLs are all gone from Google's index, my new site is not getting indexed by Google, and my traffic and rankings are also gone... OMG.
I checked Google Webmaster Tools, but it didn't show any special message other than that Googlebot found an increase in 404 errors, which is obvious. I also used "Fetch as Googlebot" from Webmaster Tools to improve the chance of indexing, but it doesn't seem to be helping much. I am re-adding the 301 redirects today, but I am not sure it means anything anymore. Any advice or opinions? Thanks in advance!
Technical SEO | Yunhee.Choi0
Why is my site not being indexed by Google?
In Google Webmaster Tools I updated my sitemap on March 6th. There are around 22,000 links, but for a long time Google fetched only 5,300 of them.
I waited a month with no improvement in the Google index, so on April 6th we uploaded a new sitemap (1,200 links in total), but only 4 links were indexed by Google.
Why is Google not indexing my URLs? Does this affect our ranking in the SERPs? How many links is it advisable to submit in a sitemap for a website?
Technical SEO | Rajesh.Chandran
Duplicate pages in Google index despite canonical tag and URL Parameter in GWMT
Good morning Moz... This is a weird one. It seems to be a "bug" with Google, honest...
We migrated our site www.three-clearance.co.uk to a Drupal platform over the new year. The old site used URL-based tracking for heat-map purposes, so for instance www.three-clearance.co.uk/apple-phones.html could be reached via www.three-clearance.co.uk/apple-phones.html?ref=menu or www.three-clearance.co.uk/apple-phones.html?ref=sidebar and so on. GWMT was told about the ref parameter, and the canonical meta tag was used to indicate our preference. As expected we encountered no duplicate content issues and everything was good.
This is the chain of events:
- Site migrated to the new platform following best practice, as far as I can attest. The only known issue was that the verification for both Google Analytics (meta tag) and GWMT (HTML file) didn't transfer as expected, so between the relaunch on 22nd Dec and the fix on 2nd Jan we have no GA data, and presumably there was a period where GWMT became unverified.
- URL structure and URIs were maintained 100% (which may be the problem, now).
- Yesterday I discovered 200-ish 'duplicate meta titles' and 'duplicate meta descriptions' in GWMT. Uh oh, thought I. Expanding the report, the duplicates are in fact ?ref= versions of the same root URL. Double uh oh, thought I.
- Run, not walk, to Google and do some fu: http://is.gd/yJ3U24 (9 versions of the same page in the index, the only variation being the ?ref= URI). Checked Bing and it has indexed each root URL once, as it should.
Situation now:
- The site no longer uses the ?ref= parameter, although of course some external backlinks that use it still exist. This was intentional and happened when we migrated.
- I 'reset' the URL parameter in GWMT yesterday, given that there's no "delete" option. The "URLs monitored" count went from 900 to 0, but today it is at over 1,000 (another wtf moment).
- I also resubmitted the XML sitemap and fetched 5 'hub' pages as Google, including the homepage and the HTML sitemap page.
The ?ref= URLs in the index have the disadvantage of actually working, given that we transferred the URL structure and the webserver just ignores the nonsense arguments and serves the page. So I assume Google assumes the pages still exist, and won't drop them from the index but will instead apply a duplicate content penalty. Or maybe call us a spam farm. Who knows.
Options that occurred to me (other than maybe making our canonical tags bold, or locating a Google bug submission form 😄) include:
A) robots.txt-ing .?ref=. - but to me this says "you can't see these pages", not "these pages don't exist", so it isn't correct.
B) Hand-removing the URLs from the index through a page removal request per indexed URL.
C) Applying a 301 to each indexed URL (hello Bing dirty-sitemap penalty).
D) Posting on SEOMoz because I genuinely can't understand this.
Even if the gap in verification caused GWMT to forget that we had set ?ref= as a URL parameter, the parameter was no longer in use, because the verification only went missing when we relaunched the site without this tracking. Google is seemingly 100% ignoring our canonical tags as well as the GWMT URL setting. I have no idea why and can't think of the best way to correct the situation. Do you? 🙂
Edited to add: as of this morning the "edit/reset" buttons have disappeared from the GWMT URL Parameters page, along with the option to add a new one. There's no message explaining why, and of course the Google help page doesn't mention disappearing buttons (it doesn't even explain what 'reset' does, or why there's no 'remove' option).
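Option C above amounts to 301-redirecting every ?ref= URL to its clean equivalent. Server-side specifics will vary, but computing the redirect target is just a matter of dropping the tracking parameter from the query string; a minimal sketch (the URLs are the examples from the post):

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

def strip_ref(url):
    """Return the canonical 301 target by dropping the
    tracking 'ref' query parameter, keeping everything else."""
    parts = urlsplit(url)
    params = [(k, v) for k, v in parse_qsl(parts.query) if k != "ref"]
    return urlunsplit(
        (parts.scheme, parts.netloc, parts.path, urlencode(params), parts.fragment)
    )

print(strip_ref("http://www.three-clearance.co.uk/apple-phones.html?ref=menu"))
# http://www.three-clearance.co.uk/apple-phones.html
```

Redirecting like this tells crawlers the parameterised pages genuinely moved, which is a stronger signal than robots.txt blocking (option A), since a blocked page can stay in the index as a URL-only entry.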
Technical SEO | Tinhat0
Google Sitelinks
Is there any way to control the sitelinks under a listing in Google? I have a group of lawyers where one of them is showing up in the sitelinks. They want all of the lawyers to show up; right now it is showing one lawyer, the about page, the contact-us page, etc. Thanks!
Technical SEO | SixTwoInteractive
Do pages that are in Google's supplemental index pass link juice?
I was just wondering: if a page has been booted into the supplemental index, for example for being a duplicate (or for any other reason), does that page still pass link juice or not?
Technical SEO | FishEyeSEO
Continued Lack of Google Indexing
I run a baseball site (http://www.mopupduty.com) that is in a very good link neighbourhood: ESPN, The Score, USA Today, MSG Network, The Toronto Star, Baseball Prospectus, etc. New content has not been getting indexed on Google ever since the last update. The site has no duplicate content; it is 100% original. I can't think of any spammy links, and we get organic links day after day. In the past Google has indexed the site in minutes, and it currently has expanded sitelinks within Google search. Bing and Yahoo index the site in minutes. Are there any quick fixes I can make to increase my chances of getting indexed by Google, or should I just keep pumping out content and hope to see a change in the near future?
Technical SEO | mkoster1