Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not removing the content entirely (many posts will remain viewable), we have locked both new posts and new replies.
URL shows up in "inurl:" but not when using time parameters
-
Hey everybody,
I have been testing the inurl: feature of Google to try to gauge how long ago Google indexed our page. So, this brings me to my question.
If we run inurl:https://mysite.com, all of our domain's pages show up.
If we run inurl:https://mysite.com/specialpage, the page shows up as being indexed.
If I append the "&as_qdr=y15" parameter to the search URL, https://mysite.com/specialpage does not show up.
Does anybody have any experience with this? Also, on the same note: when I look at how many pages Google has indexed, it is about half of the pages we see in our backend/sitemap. Any thoughts would be appreciated.
TY!
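For reference, here's how the date-restricted query I'm testing is built. This is just a sketch in Python of a search URL with the as_qdr parameter appended; as_qdr is an undocumented Google URL parameter, so treat its exact behaviour as an assumption:

```python
from urllib.parse import urlencode

def date_restricted_search_url(query: str, qdr: str = "y15") -> str:
    """Build a Google search URL with the as_qdr (date-restrict) parameter.

    Values like "y15" (past 15 years) are widely used but undocumented,
    so this is an illustration, not a guaranteed API.
    """
    params = {"q": query, "as_qdr": qdr}
    return "https://www.google.com/search?" + urlencode(params)

url = date_restricted_search_url("inurl:https://mysite.com/specialpage")
print(url)
```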
-
There are several ways to do this; some are more accurate than others. If you have access in Google Analytics to the site which contains the web page, you could obviously filter your view down to a single page / landing page and see when the specified page first got traffic (sessions / users). Note that if a page existed for a long time before it saw much usage, this wouldn't be very accurate.
If it's a WordPress site which you have access to, edit the page and check the published date and/or revision history. If it's a post of some kind, it may display its publishing date on the front end without you even having to log in. Note that if some content was migrated from a previous WordPress site and the publishing dates were not updated, this may not be wholly accurate either.
You can see when the Wayback Machine first archived the specified URL. The Wayback Machine uses a crawler which is always discovering new pages, not necessarily on the date(s) they were created, so this method can't be trusted 100% either.
In reality, even using the "inurl:" operator with "&as_qdr=y15" will only tell you when Google first saw a web page; it won't tell you how old the page is. Web pages do not record their age in their code, so in a way your quest is impossible (if you want to be 100% accurate).
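That said, if you want a rough, scriptable signal of a page's age, one option is to look for a published-date meta tag in the page's markup; many CMSs (WordPress included) emit article:published_time. A minimal sketch using Python's stdlib HTML parser, assuming the page actually exposes that tag (many don't, so this is best-effort):

```python
from html.parser import HTMLParser

class PublishedDateFinder(HTMLParser):
    """Scan for <meta property="article:published_time" content="..."> tags."""
    def __init__(self):
        super().__init__()
        self.published = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("property") == "article:published_time":
            self.published = a.get("content")

# Hypothetical page markup, for illustration only
html = ('<html><head>'
        '<meta property="article:published_time" content="2016-03-02T09:00:00Z">'
        '</head></html>')
finder = PublishedDateFinder()
finder.feed(html)
print(finder.published)
```

Remember this only reflects what the CMS claims, which can be edited or lost in migrations, as noted above.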
-
So, then I will pose a different question to you. How would you determine the age of a page?
-
Oh, ty! I'll try that out!
-
Not sure about the date / time querying aspect, but instead of using "inurl:https://mysite.com" you might have better luck checking indexation via "site:mysite.com" (don't include subdomains, www, or the protocol, like HTTP / HTTPS).
Then be sure to tell Google to 'include' omitted results (if that notification shows up, sometimes it does - sometimes it doesn't!)
You can also use Google Search Console to check indexed pages:
- https://d.pr/i/oKcHzS.png (screenshot)
- https://d.pr/i/qvKhPa.png (screenshot)
You can only see the top 1,000, but it does give you a count of all the indexed pages. I am pretty sure you could get more than 1k pages out of it if you used the filter function repeatedly (taking fewer than 1k URLs from each site area at a time).
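The repeated-filter trick above boils down to partitioning your URL list by site section so each filtered view stays under the 1,000-row cap. A rough sketch of that bucketing, with hypothetical URLs, assuming you've exported the full URL list from your sitemap:

```python
from collections import defaultdict
from urllib.parse import urlparse

def bucket_by_section(urls, limit=1000):
    """Group URLs by their first path segment so each Search Console
    filter query stays under the 1,000-row export cap. Returns the
    buckets plus any sections still over the limit (split those further)."""
    buckets = defaultdict(list)
    for u in urls:
        path = urlparse(u).path.strip("/")
        section = path.split("/")[0] if path else "(root)"
        buckets[section].append(u)
    oversized = [s for s, v in buckets.items() if len(v) > limit]
    return dict(buckets), oversized

# Hypothetical sitemap export
urls = [
    "https://mysite.com/blog/post-1",
    "https://mysite.com/blog/post-2",
    "https://mysite.com/shop/item-9",
    "https://mysite.com/",
]
buckets, oversized = bucket_by_section(urls)
print(sorted(buckets))  # ['(root)', 'blog', 'shop']
```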
Related Questions
-
Content hidden behind a 'read all/more..' etc etc button
Hi, anyone know the latest thinking re 'hidden content', such as body copy behind a 'read more' type button/link, in light of John Mueller's comments toward the end of last year (that they discount hidden copy) and the follow-up posts on Search Engine Roundtable, Moz, etc.? Lots of people were testing it and finding such content was still being crawled and indexed, so it was presumed not a big deal after all. But if Google said they discount it, surely we now want to reveal/unhide such body copy if it contains text important to the page's SEO efforts. Do you think it could be the case that G is still crawling and indexing such content, BUT any contribution that copy may have had to the page's SEO efforts is now lost if hidden? So to get its contribution to SEO back, one needs to reveal it and have it fully displayed? OR is there no need to worry, and one can keep such copy behind a 'read more' button/link? All best, Dan
On-Page Optimization | Dan-Lawrence
-
Two URLs for the same page
Hi, on our site we have two separate URLs for a page that has the same content. So, for example, 'www.domain.co.uk/stuff' and 'www.domain.co.uk/things/stuff' both have the same content on the page. We currently rank high in search for 'www.domain.co.uk/things/stuff' for our targeted keyword, but there are numerous links on the site to www.domain.co.uk/stuff and also potentially inbound links to this page. Ideally we want just the www.domain.co.uk/things/stuff URL to be present on the site; what would be the best course of action to take? Would a simple canonical tag on the '/stuff' URL which points to the '/things/stuff' page be wise? If we were to scrap the '/stuff' URL totally, redirect it to the '/things/stuff' URL and change all our on-site links, would this be beneficial and not harm our current ranking for '/things/stuff'? We only want one URL for this page for numerous reasons (i.e., easier to track in Analytics), but I'm a bit cautious that changing the page that doesn't rank may have an effect on the page that does rank! Thanks.
On-Page Optimization | Jaybeamer
-
Will "internal 301s" have any effect on page rank or the way in which an SE sees our site interlinking?
We've been forced (for scalability) to completely restructure our website in terms of setting out a hierarchy. For example, the old structure was: country / city / city area, where we had about 3,500 nicely interlinked pages for relevant things like taxis, hotels, apartments etc. in each city. We needed to change the structure to be: country / region / area / city / city area. So as part of the change we put in place lots of 301s for the permanent movement of pages to the new structure, and then we tried to change the physical on-page links too. Unfortunately we have left a good 600 or 700 links that point to the old pages but are picked up by the 301 redirect, so we're slowly going through them to ensure the links go to the new location directly (not via the 301). So my question is (sorry for the long waffle): whilst it must surely be "best practice" for all on-page links to go directly to the 'right' page, are we harming our own interlinking and even 'page rank' by being tardy in working through them manually? Thanks for any help anyone can give.
On-Page Optimization | TinkyWinky
-
Duplicate Content for Men's and Women's Version of Site
So, we're a service where you can book different hairdressing services from a number of different salons (site being worked on). We're doing both a male and female version of the site on the same domain, which users can select between on the homepage. The differences are largely cosmetic (allowing the designers to be more creative and have a bit of fun, and to also have dedicated male grooming landing pages), but I was wondering about duplicate pages. While most of the pages on each version of the site will be unique (i.e. [male service] in [location] vs [female service] in [location], with the female taking precedence when there are duplicates), what should we do about the likes of the "About" page? Pages like this would both be unique in wording but essentially offer the same information, and does it make sense to index two different "About" pages, even if the titles vary? My question is whether, for these duplicate pages, you would set the more popular one as the preferred version canonically, leave them both to be indexed, or noindex the lesser version entirely? Hope this makes sense, thanks!
On-Page Optimization | LeahHutcheon
-
The word "in" between 2 keywords influence on SEO
Does anybody know whether having the word "in" between two keywords has a negative influence in Google? For example, "Holiday Home Germany" is the search term in Google. Should we use "Holiday Home in Germany" as the h1 on our website, or do we have to use "Holiday Home Germany"?
On-Page Optimization | Bram76
-
Is there a tool that will "grade" content?
Does anybody know of a tool that can "grade" content for Panda compliance? For example, it might look at:
- the total number of words on the page
- the average number of words in sentences
- grammar
- spelling
- repetitious words and/or phrases
- readability, using algorithms such as Flesch-Kincaid Reading Ease, Flesch-Kincaid Grade Level, the Gunning Fog Score, the Coleman-Liau Index, and the Automated Readability Index (ARI)

For the last 5 months I've been writing and rewriting literally 100s of catalog descriptions, adhering to the "no duplicate content" and "adding value" rubrics, but in an extremely informal style. I would like to know if I'm at least meeting Google Panda's minimum standards.
On-Page Optimization | RScime25
-
Rel="canonical" on home page?
I'm using WordPress and the All in One SEO Pack with the canonical option checked. As I understand it, the rel="canonical" tag should be added to pages that are duplicates or similar, to tell Google that another page (one without the rel="canonical" tag) is the correct one, as the URL in the tag points Google towards it. Why then does the All in One SEO Pack add rel="canonical" to every page on my site, including the home page? Isn't that confusing for Google?
On-Page Optimization | SamCUK
-
Does having a "+" in a URL hurt SEO? Would much value be gained changing it to a hyphen?
There's a site that contains "+" signs in the URL in order to call different information for the content on the page. Would it be better to change those to hyphens (-), or not that much value will be gained, so leave them as is? Thanks!
On-Page Optimization | MitchellStoker