Why is my server's IP address showing up in Webmaster Tools?
-
In "Links to your site" in Google Webmaster Tools, I'm seeing over 28,000 links from an IP address. The IP address is the one my server is hosted on. For example, it shows 200.100.100.100/help, almost as if there are two copies of my site: one under the domain name and one under the IP address. Is this bad? Or is it just showing up there, and Google knows they're the same since the IP and domain point to the same server?
-
Hmmm, this is a weird one. My guess is that since Google originally found those links (maybe before your site launched, when the pages were linked to and live through the IP address?), it keeps returning to them and finding them. In that case, there's not much you can do, but keep those canonicals on.
Canonicals really can save you from duplicate-content problems: I've had clients with multiple versions of every page depending on the path you take to reach a page, and canonicals have allowed them to rank well and avoid penalties entirely. As long as you're doing everything else right, this hopefully shouldn't be too much of an issue.
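For reference, a canonical tag is just one line in each page's <head>; the domain below is a placeholder for yours:

```html
<!-- Served identically whether the page is reached via the IP or the domain; -->
<!-- pointing both copies at the domain URL tells Google which one to credit. -->
<link rel="canonical" href="https://www.example.com/help" />
```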
Sorry this ended up falling on you!
-
According to "latest links" in Webmaster Tools, the first time it happened was October 2012, which is before the site launched. It seems to have accelerated this year. It's a total of 16,341 links, but under linked pages it only says 27.
-
Hm, it could have, though. When did you first notice these backlinks from the IP address in GWT?
-
I'm unsure, to be honest. We had an organic traffic drop in 2012, the week of the Penguin release. We launched a new site last year, which killed organic traffic, so I'm trying to improve our rankings. I can say confidently that we've had nothing in Webmaster Tools, but maybe it has hurt traffic.
-
Well, from an SEO perspective, this hasn't led to any penalties or reduced rankings, right?
-
Recently we switched to HTTPS, so I started using a self-referential rel="canonical" on all my pages. I can't figure this out, and nobody else can either. I'm on all sorts of boards, forums, and groups, and nobody has ever heard of this. I just don't get it.
-
Did you at least add canonicals to make sure Google wouldn't flag duplicate content? That's what I'd be most worried about from an SEO perspective.
-
I never solved the problem. I made a new post to see if anything has changed. It seems strange that nobody else has ever had this problem. I've looked all over Google and found nothing. I just ran Screaming Frog and nothing showed up.
-
How is this going? Did you solve the problem?
One quick note: if you can't find a link to the IP address on your site (or a broken link, or a link to an old domain), run a Screaming Frog or Xenu crawl and look at all external links. There's probably a surprise footer link or something like that causing the problem, and it would be easy to miss manually. But tools find all!
Good luck.
-
Yeah, it's generally a DNS setup issue. If you're hosting with a company, the best thing to do is open a ticket and have them walk through it with you. Most providers have their own admin panels.
-
I've looked and can't find anything on the site that links to the IP. I've looked in Webmaster Tools and it doesn't show any duplicate content. We're on a Windows server; do you think it would be pretty easy to redirect the IP to the domain?
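For what it's worth, on a Windows/IIS server this kind of redirect can be sketched in web.config with the URL Rewrite module. This is an illustrative fragment, not a drop-in config: the domain is a placeholder, the IP is the one from the question, and it assumes the URL Rewrite module is installed:

```xml
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- Permanently redirect any request whose Host header is the bare IP -->
        <rule name="RedirectIPToDomain" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="^200\.100\.100\.100$" />
          </conditions>
          <action type="Redirect" url="http://www.example.com/{R:1}"
                  redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```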
-
There might be a link or something directing the crawlers to your site's IP address instead of the original domain. There is potential for getting flagged for duplicate content, but I feel it's fairly unlikely. You do want to fix this, though, since it would hamper your backlink efforts. These steps will correct the issue:
1. Set up canonical tags on all your pages. This lets Google know that one URL should be credited for each page, whether crawlers reach it via the IP or the domain.
2. Set up your host so that anything arriving at the IP is automatically 301-redirected to the domain. This can be done with your hosting company, through .htaccess, or through PHP. I suggest you do it with the hosting company.
3. Check through your site and make sure no links point to the IP address. If there are no links pointing to the IP, the crawler shouldn't follow it.
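As a sketch of step 2 on an Apache host (the IP is the one from the question; the domain is a placeholder, and it assumes mod_rewrite is enabled):

```
# .htaccess sketch: 301 any request that arrives via the bare IP to the domain
RewriteEngine On
RewriteCond %{HTTP_HOST} ^200\.100\.100\.100$
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

You can then verify the redirect with `curl -I http://200.100.100.100/help` and check for a `301` status with a `Location:` header pointing at the domain.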
Related Questions
-
Fetch as Google showing Tablet View, not Desktop View
Hi Mozers, Fetch as Google is showing Tablet view and not Desktop view. Does anyone know why this is? And does that mean Googlebot is reading the Tablet version instead of the Desktop version (same HTML, but different visualization)? Thanks!! Yael
Intermediate & Advanced SEO
-
Google Webmaster Tools - Fixing over 20,000+ crawl errors
Hi, I'm trying to gather all the 404 crawl errors on my website after a recent hacking that I've been trying to rectify and clean up. Webmaster Tools states that I have over 20,000+ crawl errors, but I can only download a sample of 1,000 errors. Is there any way to get the full list, instead of correcting 1,000 errors, marking them as fixed, and waiting for the next batch of 1,000 to be listed in Webmaster Tools? The current method is quite time-consuming, and I want to take care of all the errors in one shot instead of over the course of a month.
Intermediate & Advanced SEO
-
Multiple Google Webmaster Tools Configurations
Hello everyone, I just inherited a website, and two different users created GWT accounts for the same site and have configured different settings. Do you know how Google behaves when this happens? Thanks
Intermediate & Advanced SEO
-
Using IP to deliver different sidebar content on homepage
We have a site with a generic top-level domain, and we'd like to use a small portion of the homepage to tailor content based on the IP of the visiting user. The content is for product dealerships around different regions/states of the US, not international. The idea is that someone from Seattle would see dealerships for this product near their location in Seattle. The section on the homepage is relatively small and would churn out five links and images according to location. The rest of the homepage would be the same for everyone, including links to news, reviews, and fuller content. We have landing pages for regional/state content deeper in the site that don't use an IP to deliver content and that have unique URLs for the different regions/states. An example is a "Washington State Dealerships" landing page with links to all the dealerships there. We're wondering what kind of SEO impact there would be from having a section of the homepage deliver different content based on IP, and whether there's anything we should do about it (or if we should be doing it at all!). Thank you.
Intermediate & Advanced SEO
-
Block search bots on staging server
I want to block bots from all of our client sites on our staging server. Since robots.txt files can easily be copied over when moving a site to production, how can I block bots/crawlers from our staging server (at the server level), but still allow our clients to see/preview their sites before launch?
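One common server-level approach (a sketch, assuming an Apache staging vhost with mod_headers enabled; the htpasswd path is illustrative) is HTTP basic auth plus a noindex header, so crawlers are locked out but clients can log in with a shared password:

```
# In the staging vhost (or .htaccess): require a login for everything
AuthType Basic
AuthName "Staging preview"
AuthUserFile /etc/apache2/.htpasswd
Require valid-user

# Belt and braces: tell crawlers not to index anything that does get through
Header set X-Robots-Tag "noindex, nofollow"
```

Because the auth config lives in the server setup rather than the docroot, nothing gets copied over when the site moves to production.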
Intermediate & Advanced SEO
-
Can you be penalized by a development server with duplicate content?
I developed a site for another company late last year and after a few months of seo done by them they were getting good rankings for hundreds of keywords. When penguin hit they seemed to benefit and had many top 3 rankings. Then their rankings dropped one day early May. Site is still indexed and they still rank for their domain. After some digging they found the development server had a copy of the site (not 100% duplicate). We neglected to hide the site from the crawlers, although there were no links built and we hadn't done any optimization like meta descriptions etc. The company was justifiably upset. We contacted Google and let them know the site should not have been indexed, and asked they reconsider any penalties that may have been placed on the original site. We have not heard back from them as yet. I am wondering if this really was the cause of the penalty though. Here are a few more facts: Rankings built during late March / April on an aged domain with a site that went live in December. Between April 14-16 they lost about 250 links, mostly from one domain. They acquired those links about a month before. They went from 0 to 1130 links between Dec and April, then back to around 870 currently According to ahrefs.com they went from 5 ranked keywords in March to 200 in April to 800 in May, now down to 500 and dropping (I believe their data lags by at least a couple of weeks). So the bottom line is this site appeared to have suddenly ranked well for about a month then got hit with a penalty and are not in top 10 pages for most keywords anymore. I would love to hear any opinions on whether a duplicate site that had no links could be the cause of this penalty? I have read there is no such thing as a duplicate content penalty per se. I am of the (amateur) opinion that it may have had more to do with the quick sudden rise in the rankings triggering something. Thanks in advance.
Intermediate & Advanced SEO | | rmsmall0 -
Best tools for exploring links?
And not just every single link, but the ones you know Google is actually indexing. I find SEOmoz super easy, but there is no way to distinguish links that actually pass "juice" (or am I missing something?). What about MajesticSEO, or any other similar tools you use when trying to find linking sites that pass juice?
Intermediate & Advanced SEO
-
How does Google Webmasters decide what order to show external links?
In "links to your site" how does Google Webmasters determine the order of the URLs? By influence? Quality?
Intermediate & Advanced SEO