Duplicate Content for index.html
-
In the Crawl Diagnostics Summary, it says that I have two pages with duplicate content, which are www.mywebsite.com/ and www.mywebsite.com/index.html.
I read in a Dreamweaver tutorial that you should name your home page "index.html" and then you can let www.mywebsite.com automatically direct the user to index.html. Is this a bug in SEOmoz's crawler, or is it a real problem with my site?
Thank you,
Dan
-
The code should definitely go into the .htaccess file in the website's root directory; however, .htaccess can be weird. A few days ago I ran into a similar issue with a client's website, and I was able to remedy it with a variation of the code:
# Index redirect
# THE_REQUEST is the raw request line from the browser, so this only fires on
# explicit requests for an index file and will not loop on internal rewrites.
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /([^/]+/)index\.(php|html|htm|asp)\ HTTP/
RewriteRule ^([^/]+/)index\.(php|html|htm|asp)$ http://yoursite.com/$1 [R=301,L]
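With that in place, a request for a URL like http://yoursite.com/somefolder/index.php (a hypothetical path, just for illustration) should come back as a 301 pointing to http://yoursite.com/somefolder/.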
If you give me the URL for the site I will take a look at it and let you know what would be feasible.
-
Hi Daniel, can you share with us the URL of your site? We can take a look at it and give you a more precise answer that way. Thanks!
-
I eventually figured out that your method was a 301 redirect, and I definitely broke my site trying to use the code you posted... haha. It's OK though; I just removed the code and it went back to normal. At first, I was editing the .htaccess file in the public_html folder, which wasn't working. Then I tried the root folder for the site (I created the .htaccess file since it did not exist). Neither of those worked. (I am using Bluehost, so I do not think that I have root access, and I am not sure if it is a Linux server or not.)
If there is an easy way to explain what I am doing wrong, please do so. Otherwise, I will use canonical.
Thanks for everything!
-
@Dan
Sorry about the delay of this response; I didn't realize right away that you were asking me a question. Placing the code I provided in my previous answer will cause a 301 permanent redirect to the original URL. That's actually what the
[R=301,L]
portion of the code is stating: R triggers a redirect, 301 is the HTTP status code for a permanent redirect, and L makes it the last rule processed. After reviewing the Matt Cutts video, I realize that I should have asked whether you were operating on a Linux server that you had root access to. We actually utilize both redirects and canonical tags, since that was recommended by the on-page optimization reports. Heck, Google uses them; I would assume because it's easier for the user to be referred to a single page URL. Obviously, though, if you don't have server access and are not familiar with .htaccess (you can accidentally break your site), then the canonical solution is appropriate.
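If you want to sanity-check the redirect once it is in place (assuming you have curl available; the URL below is just a placeholder), request the index URL directly and look at the response headers:
curl -I http://www.yoursite.com/index.html
You should see a 301 status line and a Location header pointing at http://www.yoursite.com/.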
-
Josh,
Thanks for your reply. It seems like there are lots of different ways to solve this problem. I just watched this video on Matt Cutts' blog where he discusses his preference for 301 redirects over the rel=canonical tag.
Where would you say your solution fits in?
Thanks,
Dan
-
I use the link rel="canonical" tag on all of my home pages, pointing to http://www.yoursite.com/.
-
Oddly enough, I just recently answered this question. The SEOmoz crawler is correct: without a redirect you will be able to access both versions of the page in your browser.
To resolve this issue, simply redirect index.html to the root URL by placing the following code into the .htaccess file in your root directory.
Options +FollowSymlinks
RewriteEngine on

# Index rewrite
RewriteRule ^index\.(htm|html|php)$ http://www.yoursite.com/ [R=301,L]
RewriteRule ^(.*)/index\.(htm|html|php)$ http://www.yoursite.com/$1/ [R=301,L]
You can also do the same with the index file in any subdirectories that you might create, by placing a .htaccess file into those subdirectories and using variations of the above code (one such sketch follows below). This is how you create nice, tight URLs like http://www.semclix.com/design/business/ without the duplicate content issue.
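For example, a minimal per-directory sketch (the /design/ folder name and the domain are placeholders here, so adjust them to your own site) would be a /design/.htaccess containing:
RewriteEngine on
# In a per-directory .htaccess the pattern is matched relative to that directory
RewriteRule ^index\.(htm|html|php)$ http://www.yoursite.com/design/ [R=301,L]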
-
It is a real problem, and the fix is to canonicalize your pages.
Those are all various URLs which most likely lead to the same web page. I say "most likely" because these URLs can actually lead to different pages.
You need to tell crawlers and search engines how you organize your site. There are several ways to achieve canonicalization. The method I prefer is to add the following line of code to each page:
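<link rel="canonical" href="http://www.mywebsite.com/" />
This line goes inside the <head> section of each page.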
The URL provided should be the preferred URL for your page.