I added an SSL certificate this morning and now I noticed duplicate content
-
Ok, so I'm a newbie, therefore I make mistakes! Lots of them.
I added an SSL certificate this morning because it was free and I read it can help my rankings. Now I just checked my site in Screaming Frog and saw two duplicate content pages due to the https.
So I'm panicking! What's the easiest way to fix this?? Can I undo an SSL certificate?
I guess I'm asking: what's the easiest fix that will also be best for rankings?
Thank you!!
Rena
-
Since you are on WordPress, install the "Really Simple SSL" plugin: https://really-simple-ssl.com/
You have a mixed content warning as well as the redirect problem. Really Simple SSL will fix that pretty painlessly. It's worth the $25 for the premium version, but the free version is also great.
Also, it looks like your host may be WP Engine? They can work with you to help as well.
I see the mixed content warning if I go directly to the page: https://intercallsystems.com/nurse-call-manufacturer/
-
Hi Brian, I'm sorry to bug you, but if you don't mind... I'm still confused and having a hard time wrapping my brain around this. This is what I do know: when I type in intercallsystems.com, it automatically goes to https://intercallsystems.com.
But if I type in any other page, like http://intercallsystems.com/nursecallsystems/the-equinoxlegend-systems/, it doesn't automatically redirect.
Only the homepage does.
Also, when I put my site through Screaming Frog I get duplicate title issues, duplicated H1 tags, and whatnot for some pages, like:
http://intercallsystems.com/nurse-call-manufacturer/
https://intercallsystems.com/nurse-call-manufacturer/
So do I need to redirect to https, or do I need to use rel canonical?
This is what my current .htaccess file looks like:
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
Redirect 301 /intercallsystems.com/intercall-nurse-call-systems/ http://intercallsystems.com/nursecallsystems
Redirect 301 /about-us http://intercallsystems.com/nurse-call-manufacturer
Redirect 301 /cat/avstas.html http://intercallsystems.com/
Redirect 301 /contact.html http://intercallsystems.com/contact-us/
Redirect 301 /cat/product.html http://intercallsystems.com/nursecallsystems
Redirect 301 /legal.html http://intercallsystems.com
Redirect 301 /8345spec.html http://intercallsystems.com
Redirect 301 /patsta.html http://intercallsystems.com
Redirect 301 /employment.html http://intercallsystems.com/about-us/employment/
Redirect 301 /is/index.html http://intercallsystems.com/nursecallsystems/
Redirect 301 /intercall-nurse-call-systems/the-equinoxlegend-systems/ http://intercallsystems.com/nursecallsystems/the-equinoxlegend-systems/
Redirect 301 /the-audio-visual-system/ http://intercallsystems.com/nursecallsystems/the-equinoxlegend-systems/
Redirect 301 /nurse-call-systems/the-ultra-series/ http://intercallsystems.com/nursecallsystems/ultra-system/
Redirect 301 /systems/ http://intercallsystems.com/nursecallsystems/
Redirect 301 /about-intercall-systems/ http://intercallsystems.com/nurse-call-manufacturer/
Redirect 301 /ultra-touch-screen-master/map-1550/ http://intercallsystems.com/nursecallsystems/ultra-system/
Redirect 301 /the-vista-series/ http://intercallsystems.com/nursecallsystems/vista-series/
Redirect 301 /intercall-systems/ http://intercallsystems.com/nursecallsystems/
Redirect 301 /nurse-call-systems/the-ultra-system/ http://intercallsystems.com/nursecallsystems/ultra-system/
Redirect 301 /nurse-call-systems/the-audio-visual-system http://intercallsystems.com/nursecallsystems/the-equinoxlegend-systems/
I really appreciate the help!
Rena
-
Thank you! Actually, I suspected something was up because it has more than the usual downtime. I wasn't sure what to do. Thanks!
-
My pleasure, glad you were able to get that fixed rather quickly!
Yes, I would set up a new property in Search Console with the https version and resubmit the new sitemap and all that fun stuff. Then you can delete the old property to keep things neat.
One thing I want to mention: I noticed your site is on a shared hosting server with BlueHost. You may want to see about moving onto a dedicated server with them to play it safe. You can run into malware issues, and the possibility of the server being slowed down when it's loaded with sites like that. Run your site through this tool and you will see that there are several other sites sharing the same IP address as yours. I'm not sharing this to make you panic, because there is no reason to; just so you are aware and can make an informed decision.
http://www.ipfingerprints.com/reverseip.php
Here is an article on the topic that can help shed more light on the risks. I am super picky about where my sites are hosted and about page speed, so I always steer clear of shared hosting environments.
-
Thank you Brian,
It looks like my hosting provider automatically did it. When I go to the homepage it goes directly to the https version, and when I look at it in the MozBar I see:
HTTP/1.1 301 Moved Permanently
http://intercallsystems.com/ HTTP/1.1 200 OK
So now my new question is: do I have to create an https property in Webmaster Tools, submit the sitemap, and do the Data Highlighter all over again?
Thank you for the help!
-
Sorry, I just went back and read that you were new to SEO! My apologies. Check out this article for more info on htaccess redirects.
-
The next task is the 301 redirection from http to https.
-
Hey Rena!
I would just redirect that duplicate page to the new https version and call it a day!
Keep the SSL; just go through like you are and check to make sure everything is redirecting properly. You should be good to go. Hope this helps! Cheers
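To make the whole site (not just the homepage) resolve to one version, a site-wide http-to-https rule is the usual fix. Here is a minimal sketch, assuming an Apache server with mod_rewrite like the pasted .htaccess suggests; exact rules can vary by host, so treat this as an illustration rather than a drop-in:

```apache
# Send every http request to the https version of the same URL
<IfModule mod_rewrite.c>
RewriteEngine On
# Only rewrite when the request did not arrive over HTTPS
RewriteCond %{HTTPS} !=on
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
</IfModule>
```

With a rule like this placed above the WordPress block, both the http and https versions of a page resolve to a single https URL, so crawlers stop reporting two copies of each page.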
Related Questions
-
How to solve this issue and avoid duplicated content?
My marketing team would like to serve up 3 pages of similar content: www.example.com/one, www.example.com/two, and www.example.com/three. However, the challenge is that they'd like to have only one page with three different titles and images based on the user's entry point (one, two, or three). To avoid duplicated pages, how would you suggest this best be handled?
Intermediate & Advanced SEO | JoelHer0 -
How can a website have multiple pages of duplicate content - still rank?
Can you have a website with multiple pages of the exact same copy (being different locations of a franchise business) and still be able to rank for each individual franchise? Is that possible?
Intermediate & Advanced SEO | OhYeahSteve0 -
Is This Considered Duplicate Content?
My site has entered SEO hell and I am not sure how to fix it. Up until 18 months ago I had tremendous success on Google and Bing, and now my website appears below my Facebook page for the term "Direct Mail Raleigh." What makes it even more frustrating is that my competitors have done no SEO and they are dominating this keyword. I thought that the issue was due to harmful inbound links, and two months ago I disavowed the ones that were clearly spam. Somehow my site has actually gone down! I have a blog that I have updated infrequently, and I do not know if I am getting punished for duplicate content. Google Webmaster Tools says I have 279 crawled and indexed pages. Yesterday, when I ran the Moz crawl check, I was amazed to find 1150 different webpages on my site. Despite the fact that they do not appear in Webmaster Tools, I have three different webpages due to the format in which the WordPress blog was created: "http://www.marketplace-solutions.com/report/part2leadershi/", "http://www.marketplace-solutions.com/report/page/91/" and "http://www.marketplace-solutions.com/report/category/competent-leadership/page/3/" What does not make sense to me is why Google only indexed 279 webpages AND why Moz did not identify these three webpages as duplicate content with the Crawl Test tool. Does anyone have any ideas? Would it be as easy as creating a massive robots.txt file and just putting 2 of the 3 URLs in that file? Thank you for your help.
Intermediate & Advanced SEO | DR700950 -
Site been plagiarised - duplicate content
Hi, I look after two websites: one sells commercial mortgages, the other sells residential mortgages. We recently redesigned both sites, and one was moved to a new domain name as we rebranded it from being a trading style of the other brand to being a brand in its own right. I have recently discovered that one of my most important pages on the residential mortgages site is not in Google's index. I did a bit of poking around with Copyscape and found another broker has copied our page almost word-for-word. I then used Copyscape to find all the other instances of plagiarism on the other broker's site, and there are a few! It now looks like they have copied pages from our commercial mortgages site as well. I think the reason our page has been removed from the index is that we relaunched both these sites with new navigation and consequently new URLs. Can anyone back me up on this theory? I am 100% sure that our page is the original version, because we write everything in-house and I check it with Copyscape before it gets published. Also, the fact that this other broker has copied from several different sites corroborates this view. Our legal team has written two letters (not sent yet): one to the broker and the other to the broker's web designer. These letters ask the recipient to remove the copied content within 14 days. If they do remove our content from their site, how do I get Google to reindex our pages, given that Google thinks OUR pages are the copied ones and not the other way around? Does anyone have any experience with this? Or will it just happen automatically? I have no experience of this scenario! In the past, where I've found duplicate content like this, I've just rewritten the page and chalked it up to experience, but I don't really want to in this case because, frankly, the copy on these pages is really good! And I don't think it's fair that someone else could potentially be getting customers who were persuaded by OUR copy.
Any advice would be greatly appreciated. Thanks, Amelia
Intermediate & Advanced SEO | CommT0 -
How do I best handle Duplicate Content on an IIS site using 301 redirects?
The crawl report for a site indicates the existence of both www and non-www content, which I am aware is duplicate. However, only the www pages are indexed**, which is throwing me off. There are no 'noindex' tags on the non-www pages, nothing in robots.txt, and I can't find a sitemap. I believe a 301 redirect from the non-www pages is what is in order. Is this accurate? I believe the site is built using ASP.NET on IIS, as the pages end in .asp (not very familiar to me). There are multiple versions of the homepage, including 'index.html' and 'default.asp'. Meta refresh tags are being used to point to 'default.asp'. What has been done: 1. I set the preferred domain to 'www' in Google's Webmaster Tools, as most links already point to www. 2. The WordPress blog, which sits in a /blog subdirectory, has been set with rel="canonical" to point to the www version. What I have asked the programmer to do: 1. Add 301 redirects from the non-www pages to the www pages. 2. Set all versions of the homepage to redirect to www.site.org using 301 redirects, as opposed to meta refresh tags. Have all bases been covered correctly? One more concern: I notice the canonical tags in the source code of the blog use a trailing slash - will this create a problem of inconsistency? (And why is rel="canonical" the standard for WordPress SEO plugins while 301 redirects are preferred for SEO?) Thanks a million! **To clarify regarding the indexation of non-www pages: a search for 'site:site.org -inurl:www' returns only 7 pages without www, which are all blog pages without content (code 200, not 404 - maybe deleted or moved - which is perhaps another 301 redirect issue).
Intermediate & Advanced SEO | kimmiedawn0 -
Duplicate content even with 301 redirects
I know this isn't a developer forum but I figure someone will know the answer to this. My site is http://www.stadriemblems.com and I have a 301 redirect in my .htaccess file to redirect all non-www to www and it works great. But SEOmoz seems to think this doesn't apply to my blog, which is located at http://www.stadriemblems.com/blog It doesn't seem to make sense that I'd need to place code in every .htaccess file of every sub-folder. If I do, what code can I use? The weirdest part about this is that the redirecting works just fine; it's just SEOmoz's crawler that doesn't seem to be with the program here. Does this happen to you?
Intermediate & Advanced SEO | UnderRugSwept0 -
Do you bother cleaning duplicate content from Google's Index?
Hi, I'm in the process of instructing developers to stop producing duplicate content, however a lot of duplicate content is already in Google's Index and I'm wondering if I should bother getting it removed... I'd appreciate it if you could let me know what you'd do... For example one 'type' of page is being crawled thousands of times, but it only has 7 instances in the index which don't rank for anything. For this example I'm thinking of just stopping Google from accessing that page 'type'. Do you think this is right? Do you normally meta NoIndex,follow the page, wait for the pages to be removed from Google's Index, and then stop the duplicate content from being crawled? Or do you just stop the pages from being crawled and let Google sort out its own Index in its own time? Thanks FashionLux
Intermediate & Advanced SEO | FashionLux0 -
Managing Large Regulated or Required Duplicate Content Blocks
We work with a number of pharmaceutical sites that, under FDA regulation, must include an "Important Safety Information" (ISI) content block on each page of the site. In many cases this duplicate content is not only provided on a specific ISI page, it is quite often longer than what would be considered the primary content of the page. At first blush a rel=canonical tag might appear to be a solution to signal search engines that there is a specific page for the ISI content and avoid being penalized, but the pages also contain original content that should be indexed, as it has user benefit beyond the information contained within the ISI. Has anyone else run into this challenge with regulated duplicate boilerplate and developed a workaround for handling duplicate content at the paragraph level and not the page level? One clever suggestion was to treat it as a graphic; however, for a pharma site this would be a huge graphic.
Intermediate & Advanced SEO | BlooFusion380