Why isn't Google indexing our site?
-
Hi,
We have majorly redesigned our site. It is not a big site; it is a SaaS site, so it has the typical structure: Landing, Features, Pricing, Sign Up, Contact Us, etc.
The main part of the site is behind a login, so out of Google's reach.
Since the new release a month ago, Google has indexed some pages, mainly the blog, which is brand new. It has also reindexed a few of the original pages; I am guessing this because if I click Cached on a site: search it shows the new site.
All new pages (of which there are 2) are totally missed. One is HTTP and one HTTPS; does HTTPS make a difference?
I have submitted the site via Webmaster Tools and it says "URL and linked pages submitted to index", but a site: search doesn't bring up all the pages.
What is going on here please? What are we missing? We just want Google to recognise that the old site has gone and ALL the new site is here ready and waiting for it.
Thanks
Andrew
-
Well, links/shares are good. But of course that just raises the question of how you can get those.
Rand gave a great talk called "Inbound Marketing for Startups" at a Hackers & Founders meetup that was focused more on Inbound as a whole than SEO in particular, but it's full of valuable insights: http://vimeo.com/39473593 [video]
Ultimately it'll come down to some kind of a publishing/promotional strategy for your startup. Sometimes your startup is so unique/interesting that it has its own marketing baked right in - in which case you can get a lot of traction by simply doing old-school PR to get your startup in front of the right people.
Other times, you've got to build up links/authority on the back of remarkable marketing.
BufferApp is a great example of a startup that built traction off their blog. Of course, they weren't necessarily blogging as an SEO play - it was more in the aim of getting directly in front of the right audience for direct signups for their product. But they definitely built up some domain authority as a result.
I'd also take a look at the guides Mailchimp has created - they created the dual benefit of getting in front of the right audience in a positive/helpful way (which benefits the brand and drives sign-ups directly) as well as building a considerable number of inbound links, boosting their domain authority overall.
Unfortunately no quick/easy ways to build your domain authority, but things you do to build your authority can also get you immediately in front of the audience you're looking for - and SEO just becomes a lateral benefit to that.
-
Thank you all for your responses. It is strange. We are going to add a link to our G+ page and then add a post.
As a new site, what is the best way to get our domain authority up so we get crawled quicker?
Thanks again
Andrew
-
I disagree. Unless the old pages have inbound links from external sites, there's not much reason to 301 them (and not much benefit). If they're serving up 404 errors, they will fall out of the index.
Google absolutely does have a way to know these new pages exist - by crawling the home page and following the links discovered there. Both of the pages in question are linked to prominently, particularly the Features page which is part of the main navigation. A sitemap is just an aid for this process - it can help move things along and help Google find otherwise obscure/deep pages, but it by no means is a necessity for getting prominent pages indexed, particularly pages that are 1-2 levels down from the home page.
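For anyone unsure what that aid actually looks like: a sitemap is just a plain XML file listing the URLs you want crawled. A minimal sketch for a site like this one (the entries and dates are illustrative, not taken from the real sitemap):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page you want crawled; <lastmod> is optional -->
  <url>
    <loc>http://www.invoicestudio.com/</loc>
    <lastmod>2012-09-01</lastmod>
  </url>
  <url>
    <loc>http://www.invoicestudio.com/Features</loc>
  </url>
  <url>
    <loc>https://www.invoicestudio.com/Secure/InvoiceTemplate</loc>
  </url>
</urlset>
```

An empty or malformed file at the sitemap URL gives Google nothing to work with, which is why checking it in Webmaster Tools is worthwhile even though prominent pages should be found by crawling anyway.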
-
If you didn't redirect the old URLs to the new ones when the new site went live, this will absolutely be the cause of your problem, Studio33. That, combined with having no (or misdirected) sitemap means there was essentially no way for Google to even know your site's pages existed.
Good catch Billy.
-
Hi Andrew,
-
Google has been indexing HTTPS URLs for years now without a problem, so it's unlikely to be part of the issue.
-
Your domain authority on the whole may be slowing Google down in indexing new pages. Bottom line is crawl rate and depth are both functions of how authoritative/important you appear based on links/shares/etc.
-
That said, I don't see any indication as to why these two particular pages are not being indexed by Google. I'm a bit stumped here.
I see some duplication between your Features page and your Facebook timeline, but not with the invoice page.
As above, your domain authority (17) is a bit on the low side, so this could simply be a matter of Google not dedicating enough resources to crawl/index all of your pages yet. But why these two pages would be the only ones is perplexing, particularly after a full month. There are no problems with your robots.txt, no canonical tag issues, and the pages are linked to properly.
Wish I had an easy answer here. One idea, a bit of a long shot: we've seen Google index pages faster when they're linked to from Google+ posts. I see you have a Google+ business page for this website - you might try simply writing a (public) post there that includes a link over to the Features page.
As weak as that is, that's all I've got.
Best of Luck,
Mike
-
OK - I would get a list of all of your old pages and start 301 redirecting them to your new pages ASAP. This could be part of your issue.
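For example, if the site were served by Apache, those redirects could live in .htaccess. The old paths below are hypothetical, since we don't know the old URL structure (and if this is an IIS/ASP.NET site, as the capitalised paths suggest, the equivalent would be a web.config rewrite rule):

```apache
# 301 (permanent) redirects from hypothetical old URLs to their new equivalents
Redirect 301 /features.html http://www.invoicestudio.com/Features
Redirect 301 /invoice-template.html https://www.invoicestudio.com/Secure/InvoiceTemplate

# Old pages with no direct new equivalent can point at the closest relevant page
Redirect 301 /old-page.html http://www.invoicestudio.com/
```

A 301 tells Google the old URL has moved permanently, so it drops the old page and passes any link equity to the new one.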
-
Hi, I checked the XML; it's there if you view source, it just doesn't have a stylesheet.
-
Hi, thanks. About 1 month. The blog pages you are getting may be the old ones, as they are working at this end: http://www.invoicestudio.com/Blog . What you have mentioned re the blog is part of the problem: Google has the old site and not the new.
-
Getting this on your Blog pages:
The page cannot be displayed because an internal server error has occurred.
Were you aware?
Anyway - may I ask how old these pages are?
-
Thanks. I will look into the sitemap. That only went live about an hour ago, whilst this thread has been going on.
-
Yeah - with no path specified, the directive is ignored (you don't have a '/', so the Disallow directive is ignored).
However, you do point to your XML sitemap, which appears to be empty. You might want to fix that...
-
Hi, no, I think it's fine, as we do not have the forward slash after the Disallow. See
http://www.robotstxt.org/robotstxt.html
I wish it were as simple as that. Thanks for your help though, it's appreciated.
-
Hmmm. That link shows that the way you have it will block all robots.
-
Thanks, but I think the robots.txt is correct. Excerpt from http://www.robotstxt.org/robotstxt.html:
To exclude all robots from the entire server
User-agent: *
Disallow: /
To allow all robots complete access
User-agent: *
Disallow:
(or just create an empty "/robots.txt" file, or don't use one at all)
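If there is any doubt about which reading of those two forms is correct, Python's standard-library robot-exclusion parser can settle it. A quick sketch (the URL is just an example):

```python
from urllib.robotparser import RobotFileParser

def allowed(robots_lines, url, agent="Googlebot"):
    """Return True if robots.txt rules in robots_lines permit agent to fetch url."""
    rp = RobotFileParser()
    rp.parse(robots_lines)          # parse() accepts the file as a list of lines
    return rp.can_fetch(agent, url)

# "Disallow:" with an empty path allows everything:
print(allowed(["User-agent: *", "Disallow:"],
              "http://www.invoicestudio.com/Features"))   # True

# "Disallow: /" blocks everything:
print(allowed(["User-agent: *", "Disallow: /"],
              "http://www.invoicestudio.com/Features"))   # False
```

So the empty-path form really does allow all robots complete access, as the excerpt says.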
-
It looks like your robots.txt file is the problem. http://www.invoicestudio.com/robots.txt has:
User-agent: *
Disallow:
When it should be:
User-agent: *
Allow: /
-
Hi,
The specific pages are
https://www.invoicestudio.com/Secure/InvoiceTemplate
http://www.invoicestudio.com/Features
I'm not sure what other pages are not indexed.
New site has been live 1 month.
Thanks for your help
Andrew
-
Without seeing the specific pages I can't check for things such as noindex tags or robots.txt blocking access, so I would suggest you double-check these aspects. The pages will need to be accessible to search engines when they crawl your site, so if there are no links to those pages, Google will be unable to access them.
How long have they been live since the site re-launch? It may just be that they have not been crawled yet, particularly if they are deeper pages within your site hierarchy.
Here's a link to Google's resources on crawling and indexing sites in case you have not been able to check through them yet.
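As an aside, the noindex check is easy to script. A minimal sketch using Python's standard-library HTML parser, run here against an inline sample page rather than a live fetch:

```python
from html.parser import HTMLParser

class NoindexFinder(HTMLParser):
    """Sets .noindex when a <meta name="robots"> tag contains 'noindex'."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            # Attribute values may be None; normalise to lowercase strings
            a = {k.lower(): (v or "") for k, v in attrs}
            if a.get("name", "").lower() == "robots" and "noindex" in a.get("content", "").lower():
                self.noindex = True

# Sample page source standing in for the real page's HTML:
sample = '<html><head><meta name="robots" content="noindex, follow"></head><body></body></html>'
finder = NoindexFinder()
finder.feed(sample)
print(finder.noindex)  # True -> this page asks not to be indexed
```

Running the same check over the page source of each unindexed URL would rule this cause in or out in seconds.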