Best Way to Determine Age of Site
-
What's the best way to determine the age of a site?
By its beginning I mean the point at which it went through the Google Sandbox and has been a functioning site ever since.
Thanks!
-
I think archive.org may be my best bet. Thanks for the good advice
-
Are you talking about old versions of a site, to see how old that particular website is, or about the domain?
Obviously, whois information is great for domains:
http://www.networksolutions.com/whois/index.jsp
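If you want to script this rather than use the web form, a quick way is to pull the registration date out of raw whois output. This is only a sketch: registrars label the field differently, so the patterns below are assumptions covering a few common formats.

```python
import re
from datetime import datetime

def parse_creation_date(whois_text):
    """Extract a domain's registration date from raw whois output.

    Registrars label the field differently ("Creation Date:",
    "created:", "Registered on:", ...), so we try a few common
    variants; returns None if nothing matches.
    """
    patterns = [
        r"Creation Date:\s*(\d{4}-\d{2}-\d{2})",
        r"created:\s*(\d{4}-\d{2}-\d{2})",
        r"Registered on:\s*(\d{4}-\d{2}-\d{2})",
    ]
    for pat in patterns:
        match = re.search(pat, whois_text, re.IGNORECASE)
        if match:
            return datetime.strptime(match.group(1), "%Y-%m-%d").date()
    return None

sample = "Domain Name: EXAMPLE.COM\nCreation Date: 1995-08-14T04:00:00Z"
print(parse_creation_date(sample))  # 1995-08-14
```

Feed it the output of a plain `whois example.com` from the command line.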
There is also a way to see old versions of websites here:
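Archive.org's Wayback Machine is the usual tool for old versions, and its public CDX API can also report the earliest capture it holds for a domain, which is a decent proxy for when the site went online. A minimal sketch (the endpoint and parameter names follow archive.org's CDX API as I understand it, so double-check them before relying on this):

```python
import json
import urllib.request
from datetime import datetime

def wayback_date(timestamp):
    """Convert a 14-digit Wayback timestamp (YYYYMMDDhhmmss) to a date."""
    return datetime.strptime(timestamp[:8], "%Y%m%d").date()

def earliest_snapshot(domain):
    """Ask the Wayback Machine CDX API for the first archived capture
    of a domain; returns its date, or None if it was never archived."""
    url = (
        "http://web.archive.org/cdx/search/cdx"
        f"?url={domain}&output=json&fl=timestamp&limit=1"
    )
    with urllib.request.urlopen(url, timeout=10) as resp:
        rows = json.load(resp)  # e.g. [["timestamp"], ["19961022175439"]]
    if len(rows) < 2:
        return None
    return wayback_date(rows[1][0])

# earliest_snapshot("example.com")  # network call: date of the first capture
```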
-
I've previously used Webconfs to research domain age - it's a pretty good resource. I don't think you'll be able to tell exactly when it made its way through the Google Sandbox, but you should at least be able to determine when it went online. Although, if that was any time after 1998-99, then it's almost guaranteed to have made a trip to the box.
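Once you have the registration or first-crawl date from whois or Webconfs, the age itself is simple anniversary arithmetic; a quick sketch:

```python
from datetime import date

def domain_age_years(created, today=None):
    """Whole years elapsed since the domain's registration date."""
    today = today or date.today()
    years = today.year - created.year
    # If this year's anniversary hasn't arrived yet, subtract one.
    if (today.month, today.day) < (created.month, created.day):
        years -= 1
    return years

print(domain_age_years(date(1995, 8, 14), today=date(2012, 6, 1)))  # 16
```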
Related Questions
-
Which is the best way to rank a site?
Hi, I have been working on SEO for a long time. Recently I started a new site where I was aiming to rank for different niches, but I am stuck. First I covered some keywords related to sports, then I shifted the niche to hunting. My idea was to cover one niche fully and then move on to the second, so the authority of the site could also help rank the second niche, but the problem is I am unable to rank my site. Should I be considering only a very specific niche site, or should I continue doing all of this on the same site? Please check out my site ReviewsCase.com and let me know. And if anyone has done the same, please let me know.
-
What date tags are required/best to use for articles or other landing pages?
Does anyone have suggestions on which date tag(s) are most important to use and how to use them on the frontend? (i.e. dateModified, dateCreated, and datePublished). The Structured Data Testing Tool is coming up with errors for my article pages, but I'm a bit confused which ones should be in the code vs. showing on the frontend.
-
How can a site with two questionable inbound links outperform sites with 500-1000 good-PR links?
For years our site was performing at #1, but in the last 6 months it has been pushed down to about the #5 spot. Some of the domains above us have only a handful of links, and those aren't from good sources. We don't have a Google penalty, and we try to only have links from quality domains, yet we have been pushed down the SERPs. Any suggestions?
-
Does Site Size Influence Rank?
The Scenario:
Currently one of my clients has 7-8 products that they sell on their website. For each product they have two different pages: one with the product info and one with a video demo. The pages began to split their authority as they received new links. Since only one of the two pages for each product ranks, I suggested that we combine the two and redirect the video page to the product page to increase its authority and rank. The Client's Response:
After explaining my reasoning and next steps, the client mentioned that he thought a site's size was a ranking factor. I had never heard of this before, so I told them I would do some research to prove my point. After a little digging around I am now even more confused. http://www.seroundtable.com/google-size-ranking-17044.html http://www.webmasterworld.com/google/4591155.htm The Question:
Does a website's size/amount of content indexed in Google actually affect your site's ability to rank? I look forward to everyone's feedback, thanks Kyle
-
Why does Google say they have more URLs indexed for my site than they really do?
When I do a site search with Google (i.e. site:www.mysite.com), Google reports "About 7,500 results" -- but when I click through to the end of the results and choose to include omitted results, Google really has only 210 results for my site. I had an issue months back with a large number of URLs being indexed because of query strings and some other non-optimized technicalities - at that time I could see that Google really had indexed all of those URLs - but I've since implemented canonical URLs and fixed most (if not all) of my technical issues in order to get our index count down. At first I thought it would just be a matter of time for Google to reconcile this, perhaps it was looking at cached data or something, but it's been months and the "About 7,500 results" just won't change, even though the number of pages actually indexed keeps dropping! Does anyone know why Google would still be reporting a high index count that doesn't reflect what is currently indexed? Thanks!
-
ECommerce site being "filtered" by last Panda update, ideas and discussion
Hello fellow internet-goers! As a disclaimer: I have been following a number of discussions, articles, and posts trying to find a solution to this problem, but have yet to find anything conclusive, so I am reaching out to the community for help. Some background before I get into the questions: I help a team manage and improve a number of medium-to-large eCommerce websites. Traffic ranges anywhere from 2K to 12K+ per day, depending on the site. Back in March one of our larger sites was "filtered" from Google's search results. I say "filtered" because we didn't receive any warnings and our domain was/is still listed in the first search position. About 2-3 weeks later another site was "filtered", and then 1-2 weeks after that, a third. We have around ten niche sites in total, about seven of which share an identical code base (about an 80% match). This isn't that uncommon, since we use a CMS platform to manage all of our sites that holds hundreds of thousands of category and product pages. Needless to say, April was definitely a frantic month for us. Many meetings later, we attributed the "filter" to duplicate content stemming from our product database and written content (shared across all of our sites), and decided to use rel="canonical" to address the problem. Exactly 30 days after being filtered, our first site bounced back (as if it had never been "filtered"); however, the other two sites remain under Google's thumb. Now for some questions: Why would only 3 of our sites be affected by this "filter"/Panda if many of them share the same content? Is it a coincidence that it was an exact 30-day "filter"? Why has only one site recovered?
-
Is there a utility that can tell me what keywords my site already ranks high for?
Ok... so I'm looking for a way to understand what my site already ranks high for.. I don't necessarily want to have to manually type in keywords. The purpose of this exercise is to demonstrate to a client what keywords they're already ranking high for. Is there an easy way / tool to go about doing this? Thanks in advance, Gene
-
What determines rankings in a site: search?
When I perform a "site:" search on my domains (without specifying a keyword) the top ranked results seem to be a mixture of sensible top-level index pages plus some very random articles. Is there any significance to what Google ranks highly in a site: search? There is some really unrepresentative content returned on page 1, including articles that get virtually no traffic. Is this seriously what Google considers our best or most typical content?