Best Way to Determine Age of Site
-
What's the best way to determine the age of a site?
By its beginning, I mean when it went through the Google Sandbox and has been a functioning site ever since.
Thanks!
-
I think archive.org may be my best bet. Thanks for the good advice!
-
Are you talking about versions of the site to see how old that particular website is, or the domain?
Obviously, WHOIS information is great for domains:
http://www.networksolutions.com/whois/index.jsp
There is also a way to see old versions of websites at archive.org (the Wayback Machine).
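For anyone who wants to script the WHOIS lookup, here's a minimal Python sketch that pulls the creation date out of raw WHOIS output. Field labels vary by registry, so the patterns below cover a few common conventions and may need extending for other TLDs:

```python
import re
from datetime import datetime

def parse_creation_date(whois_text):
    """Extract the registration date from raw WHOIS output.

    Registries label this field differently ("Creation Date",
    "created", "Registered on"), so we try a few common variants.
    """
    patterns = [
        r"Creation Date:\s*(\d{4}-\d{2}-\d{2})",   # Verisign (.com/.net)
        r"created:\s*(\d{4}-\d{2}-\d{2})",         # some ccTLD registries
        r"Registered on:\s*(\d{2}-\w{3}-\d{4})",   # Nominet (.uk)
    ]
    for pat in patterns:
        m = re.search(pat, whois_text, re.IGNORECASE)
        if m:
            raw = m.group(1)
            for fmt in ("%Y-%m-%d", "%d-%b-%Y"):
                try:
                    return datetime.strptime(raw, fmt).date()
                except ValueError:
                    continue
    return None

# Sample WHOIS output in the Verisign format:
sample = """Domain Name: EXAMPLE.COM
Registry Domain ID: 2336799_DOMAIN_COM-VRSN
Creation Date: 1995-08-14T04:00:00Z
"""
print(parse_creation_date(sample))  # 1995-08-14
```

You'd feed this the text from a `whois` command or a port-43 query; the parsing is the part worth keeping consistent.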
-
I've previously used Webconfs to research domain age - it's a pretty good resource. I don't think you'll be able to tell exactly when it made its way through the Google Sandbox, but you should at least be able to determine when it went online. Although, if that was any time after 1998-99, then it's almost guaranteed to have made a trip to the box.
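Archive.org also exposes this programmatically: the Wayback Machine's CDX API can return the oldest capture of a domain, which is a rough proxy for when the site went online. A Python sketch - the endpoint and parameters follow the documented CDX API, but the sample response values here are illustrative, not real data:

```python
from urllib.parse import urlencode

CDX_ENDPOINT = "https://web.archive.org/cdx/search/cdx"

def earliest_snapshot_url(domain):
    """Build a CDX API query for the oldest capture of a domain."""
    params = {
        "url": domain,
        "output": "json",
        "limit": "1",              # just the first (oldest) capture
        "fl": "timestamp,original",
    }
    return CDX_ENDPOINT + "?" + urlencode(params)

def parse_cdx_json(rows):
    """CDX JSON responses are a header row followed by data rows."""
    if len(rows) < 2:
        return None
    timestamp, original = rows[1]
    return {"first_seen": timestamp[:8], "url": original}

# A response shaped like what the API returns (values are illustrative):
sample = [["timestamp", "original"],
          ["19961231235959", "http://example.com/"]]
print(parse_cdx_json(sample))  # {'first_seen': '19961231', 'url': 'http://example.com/'}
```

Fetch `earliest_snapshot_url("yourdomain.com")` with any HTTP client and run the JSON through `parse_cdx_json` to get the first-seen date.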
Related Questions
-
Google Site Links question
Are Google sitelinks only ever shown for the top-ranked website? Or is it possible for certain queries that the site in position #2 or #3 has sitelinks while the #1 position does not? If there are any guides, tips, or write-ups regarding sitelinks and their behavior and optimization, please share! Thanks.
Algorithm Updates | IrvCo_Interactive -
How much link juice does a site's homepage pass to inner pages, and how much does it influence inner page rankings?
Hi, I have a question regarding the power of internal links: how much link juice do they pass, and how do they influence search engine ranking positions? Take the example of an ecommerce store that sells kites.

Scenario 1
It can be assumed that it is easier for the kite ecommerce store to earn links to its homepage by writing great content on its blog, as any blogger who links to the content will likely use the site name and homepage as anchor text. If we follow this through, there will eventually be a large number of high-quality backlinks pointing to the site's homepage from various high-authority blogs that love the content being posted on the site's blog. The question is: how much link juice does this homepage pass to the category pages, and from the category pages to the product pages, and what influence does this have on rankings? I ask because I have often seen ecommerce sites with very strong DA or domain PR, but with no backlinks to the product or category pages, ranking in the top 10 of search results for the respective category and product terms. It therefore leads me to assume that internal links must be a strong determiner of search rankings. Could it also be assumed that a site with a PR of 5 and no links to a specific product page would rank higher than a site with a PR of 1 but with 100 links pointing to that product page? Assume both are trying to rank for the same product keyword and all other factors are equal, i.e. neither of them built spammy links or over-optimised anchor text.

Scenario 2
Does internal linking work both ways? In my example above I spoke about the homepage carrying link juice downward to the inner category and product pages. Can a powerful inner page carry link juice upward to category pages and then to the homepage?
For example, say the blogger who liked the kite store's blog content piece linked directly to it from his site, and the piece is hosted at www.xxxxxxx.com/blog/blogcontentpiece. As authority links are built to this page by other bloggers linking to it, will it then pass link juice up to the main blog category page, and then to the kite site's homepage? And if a link with relevant anchor text is part of the content piece, will that make the link juice flowing upwards stronger? I know the above is quite long-winded, but I couldn't find anywhere that explains the power of internal linking on SERPs. Look forward to your replies on this.
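The "link juice" intuition in both scenarios is roughly what the original PageRank model describes: equity flows along every link, downward or upward in the site hierarchy alike. A toy power-iteration sketch on a hypothetical kite-store link graph - the graph, page names, and damping factor are illustrative assumptions, not real data or Google's actual algorithm:

```python
def pagerank(graph, damping=0.85, iterations=50):
    """Simple power-iteration PageRank over an adjacency dict."""
    pages = list(graph)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page keeps a small base share, then receives
        # a damped portion of each linking page's rank.
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in graph.items():
            if not outlinks:
                continue
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new[target] += share
        rank = new
    return rank

# Hypothetical kite store: the homepage links down to categories,
# categories down to products; a blog post links back up to its
# category and the homepage (scenario 2).
site = {
    "home":        ["kites", "accessories"],
    "kites":       ["stunt-kite", "box-kite"],
    "accessories": ["kite-line"],
    "stunt-kite":  [],
    "box-kite":    [],
    "kite-line":   [],
    "blog-post":   ["kites", "home"],
}
ranks = pagerank(site)
```

In this toy graph the product pages end up with rank even though nothing external points at them, because equity cascades down from the homepage; and the category page inherits equity from both the homepage above it and the blog post below it, which is exactly the two directions the question asks about.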
Algorithm Updates | sanj5050 -
Have name.org, want to get name.com: should .com redirect to .org, or the other way around?
It's a non-profit organization. name.org was acquired in 2006, and name.com will be acquired soon. In SEO terms it would make sense for me just to get the .com and redirect it to the original .org, but from the standpoint of the seven-year history of name.org, is keeping it worthwhile, irrelevant, not that important, or really important? I am in the process of rebuilding the site; other than the initial domain, home links to other pages do not matter at the moment. Thanks, Mozzies
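For the mechanics, the usual approach is a permanent (301) redirect from the newly acquired .com to the established .org, preserving paths so any links pointing at deep URLs carry over. A minimal nginx sketch - the server names and the HTTPS scheme are placeholders for the real setup:

```nginx
# Send all traffic for the new .com to the established .org,
# keeping the requested path and query string intact.
server {
    listen 80;
    server_name name.com www.name.com;
    return 301 https://name.org$request_uri;
}
```

The `$request_uri` variable is what preserves the full path and query string rather than dumping every visitor on the homepage.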
Algorithm Updates | vmialik -
Would 37,000 footer links from one site be the cause for our ranking drops?
Hey guys, after this week's Penguin update, I've noticed that one of our clients has seen a dip in rankings. Because of this, I've had a good look at the client's backlink profile in comparison to competitors and noticed that over 37,000 footer links have been generated from one website, giving us an unhealthy balance of anchor terms. Do you believe this may be the cause of our ranking drops? Would it be wise to contact the webmaster in question to remove the footer links? Thanks, Matt
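One way to quantify "unhealthy balance" before reaching out is to measure each anchor text's share of the total backlink count. A quick Python sketch over a hypothetical backlink export - the counts and anchor strings below are made up to mirror the 37,000-footer-link situation:

```python
from collections import Counter

def anchor_distribution(backlinks):
    """Return each anchor text's share of the total link count,
    most common first."""
    counts = Counter(anchor for _, anchor in backlinks)
    total = sum(counts.values())
    return {anchor: n / total for anchor, n in counts.most_common()}

# Hypothetical backlink export: (linking page, anchor text)
backlinks = (
    [("http://partner-site.com/page%d" % i, "cheap widgets") for i in range(37000)]
    + [("http://blog%d.example" % i, "Example Store") for i in range(300)]
    + [("http://forum%d.example" % i, "http://example.com") for i in range(200)]
)
dist = anchor_distribution(backlinks)
top_anchor, top_share = next(iter(dist.items()))
print(top_anchor, round(top_share, 3))  # cheap widgets 0.987
```

A single commercial anchor holding ~99% of the profile is the kind of skew that post-Penguin audits flag; brand-name and bare-URL anchors dominating is what a natural profile tends to look like.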
Algorithm Updates | Webrevolve -
How to get Yahoo visitors to my site
I get great traffic from Google, but Yahoo is at about a 20-to-1 ratio on visitors. Is there anything I should do to increase Yahoo traffic? I bought a Yahoo Directory listing about 3 months ago, but it did no good. Thanks, Boo
Algorithm Updates | Boodreaux -
Do links from unrelated sites dilute your rankings for your key phrases?
Do links from unrelated sites dilute your rankings for your key phrases? I've always heard "don't get links from unrelated sites," but if that mattered, then how would sites with totally diverse pages, such as newspaper sites, Sears, and other catalogue sites, rank for all the diverse subjects on their site? How does Facebook rank when it gets 100,000 links a day from sites that have nothing to do with a social media site? I'd love to hear everyone's opinion on this. Also, do links from unrelated sites give less push than related links? Take care,
Ron
Algorithm Updates | Ron10 -
Implications of removing all Google products from a site
Is there any data on the implications of removing everything Google from a site: Analytics, AdSense, Webmaster Tools, sitemaps, etc.? Obviously they still have their search data, and they say they don't use these other sources of data for ranking information, but has anyone actually tried this, or is there any existing data on it?
Algorithm Updates | jessefriedman -
Large site with faceted navigation using rel=canonical, but Google still has issues
First off, I just wanted to mention I did post this on one other forum, so I hope that is not completely against the rules here. Just trying to get an idea from some of the pros at both sources. Hope this is received well. Now for the question: "Googlebot found an extremely high number of URLs on your site." Gotta love these messages in GWT. Anyway, I wanted to get some other opinions here, so if anyone has experienced something similar or has any recommendations, I would love to hear them. The site is very large and utilizes faceted navigation to help visitors sift through results. For many months now I have had rel=canonical on each URL that the faceted-nav filters create, pointing back to the main category page. However, I still get these messages from Google every month or so saying that they found too many pages on the site. My main concern, obviously, is wasted crawl time on all these pages, when I am already doing what they ask and telling them to ignore these URLs and find the content on page x. So at this point I am thinking about handling these with the robots.txt file, but I wanted to see what others around here thought before I dive into this arduous task. Plus, I am a little ticked off that Google is not following a standard they helped bring to the table. Thanks to those who take the time to respond.
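If the robots.txt route wins out, the usual pattern is to block the filter parameters by wildcard rather than listing URLs - with the caveat that a URL blocked in robots.txt is never fetched, so Google can no longer see the rel=canonical on it. A sketch assuming the facets live in query parameters named `color`, `size`, and `sort` (hypothetical names for this site):

```
User-agent: *
# Keep crawlers out of faceted-filter URLs; the canonical
# category pages themselves remain crawlable.
Disallow: /*?*color=
Disallow: /*?*size=
Disallow: /*?*sort=
```

Because blocking and canonicalizing the same URLs work against each other, it's worth picking one mechanism per URL pattern rather than layering both.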
Algorithm Updates | PeteGregory