Why is my server's IP address showing up in Webmaster Tools?
-
In the links to my site in Google Webmaster Tools, I am showing over 28,000 links from an IP address. The IP address is the one my server is hosted on. For example, it shows 200.100.100.100/help, almost as if there are two copies of my site: one under the domain name and one under the IP address. Is this bad? Or is it just showing up there, and Google knows it is the same site since the IP and domain are on the same server?
-
Hmmm, this is a weird one. My guess is that Google originally found those links (maybe before your site launched, while the pages were already live and linked to via the IP address?) and keeps returning to them and re-crawling them. In that case, there's not much you can do except keep those canonicals in place.
Canonicals really can save you from duplicate content problems: I've had clients with multiple versions of every page depending on the path you take to reach it, and canonicals have allowed them to rank well and avoid penalties entirely. As long as you're doing everything else right, this hopefully shouldn't be too much of an issue.
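To illustrate, here's a minimal sketch of what that canonical tag would look like on the /help page from your example (www.example.com is just a stand-in here, since your real domain isn't shown in the thread):

```html
<!-- Placed in the <head> of the page, whether it's served at
     https://www.example.com/help or http://200.100.100.100/help.
     Either way, the tag tells Google the domain URL is the one to index. -->
<link rel="canonical" href="https://www.example.com/help" />
```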
Sorry this ended up falling on you!
-
According to my latest links in Webmaster Tools, the first time it happened was October 2012, which is before the site launch. It seems to have accelerated this year. There are 16,341 links in total, but under linked pages it only shows 27.
-
Hm, it could have hurt traffic, though. When did you first notice these backlinks from the IP address in GWT?
-
I am unsure, to be honest. We had an organic traffic drop in 2012 the week of the Penguin release. We launched a new site last year, which killed organic traffic, so I am trying to improve our rankings. I can say confidently that nothing has been flagged in Webmaster Tools, but maybe it has hurt traffic.
-
Well, from an SEO perspective, this hasn't led to any penalties or reduced rankings, right?
-
Recently we switched to HTTPS, so I started using a self-referential rel="canonical" on all my pages. I can't figure this out, and nobody else can either. I am on all sorts of boards, forums, and groups, and nobody has ever heard of this. I just don't get it.
-
Did you add canonicals, at least, to make sure that Google wouldn't find duplicate content? That's what I'd be most worried about, from an SEO perspective.
-
I never solved the problem. I made a new post to see if anything has changed. It seems strange that nobody else has ever had this problem; I looked all over Google and found nothing. I just ran Screaming Frog, and nothing showed up there either.
-
How is this going? Did you solve the problem?
One quick note: if you can't find a link to the IP address on your site (or a link to a broken URL or an old domain), run a Screaming Frog or Xenu crawl and look at all the external links. There's probably a surprise footer link or something like that causing the problem, and it'd be easy to miss manually. But the tools find everything!
Good luck.
-
Yeah, it's generally a DNS setup issue. If you're hosting with a company, the best thing to do is open a ticket and get them to walk through it with you. Most providers have their own admin panels.
-
I have looked and can't find anything on the site that links out via the IP. I have looked in Webmaster Tools, and it doesn't show any duplicate content. We are on a Windows server; do you think it would be pretty easy to redirect the IP to the domain?
-
There might be a link or something directing crawlers to your site's IP address instead of the original domain. There is potential for getting flagged for duplicate content, but I feel it's fairly unlikely. You do want to fix this, though, as it would hamper your backlink efforts. These steps will correct the issue:
1. Set up canonical tags on all your pages. This lets Google know that a single URL should get the credit for each page, whether it's reached via the IP or the domain.
2. Set up your host so that any request to the IP is automatically redirected to the domain. This can be done through your hosting company, through .htaccess (or web.config on a Windows server), or through PHP. I suggest you do it with the hosting company; there's a sketch of the server-level option after this list.
3. Check through your site and make sure no links point to the IP-based URLs. If there are no links pointing to the IP, the crawler shouldn't follow them.
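Since you mentioned you're on a Windows server, here's a rough sketch of what step 2 could look like in web.config using the IIS URL Rewrite module (this assumes the module is installed; 200.100.100.100 is the placeholder IP from the original question, and www.example.com stands in for your real domain):

```xml
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- 301-redirect any request whose Host header is the bare IP
             to the same path on the canonical domain. -->
        <rule name="Redirect IP to domain" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="^200\.100\.100\.100$" />
          </conditions>
          <action type="Redirect" url="https://www.example.com/{R:1}" redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```

Because the rule matches on the Host header rather than the resolved IP, requests that already use the domain are left untouched, so it shouldn't create a redirect loop. An equivalent RewriteCond/RewriteRule pair would do the same job in .htaccess on Apache.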