Possible problem with new site (GWT no queries/very low index vs. submitted)
-
Hi everyone,
I recently launched a new website for a small business loan company in the Dallas area. The site has been live for roughly a month and a half. I submitted everything to GWT as usual, including my sitemap. I'm not sure what's going on with the site, as there is no activity in GWT for impressions or queries. Submitted vs. indexed is 24/3 (and hasn't moved). Also, the queries graph on the overview stops at 3/18/2015... On another note, when I go to Crawl > Sitemaps, it shows that there were pages indexed during March, and then on April 3 the count drops from 17 to 2 and never increases.
Google says there are no errors or issues found, but I feel like something is wrong. When I do a site: search, my URLs do pop up, which makes me believe there's just a problem with my GWT. With that being said, I'm not happy THINKING there's something wrong; I need to actually know what the problem is.
The only change I can think of is that I purchased an SSL certificate for the site, but when I search which pages are indexed using the www. version, it shows all the HTTPS URLs, so that would suggest the site is getting indexed without a problem?
Does anyone have a clue as to what might be happening? I will attach some screenshots so that you can get a better idea...
-
Thanks Rick, you just answered my question on how long it will take to update!
-
Hi James, great question!
Ryan, you hit the nail on the head lol
I'm having this same issue currently! I have SSL on all 3 domains and realised I was adding the HTTP version and not the HTTPS version. I have since updated it; how long did it take for you to receive the updated information after you made the adjustments?
-
Just like you said: add the HTTPS version (with and without www) to GWT, change the WP main URL, use an SSL redirect plugin or manage the redirect via .htaccess, and finally wait a few days to see the changes.
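In case it saves anyone a step: here is a minimal sketch of the "change the WP main URL" part done in wp-config.php instead of the admin settings screen -- example.com is a placeholder for your own domain:

// In wp-config.php, above the "That's all, stop editing!" line.
// Forces WordPress to build all of its links on the HTTPS www version.
// example.com is a placeholder -- use your own domain.
define( 'WP_HOME',    'https://www.example.com' );
define( 'WP_SITEURL', 'https://www.example.com' );

Either way works; constants defined in wp-config.php simply override whatever URL is stored in the database settings.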
-
Good work James! And a nice outline of the steps involved. This will be an asset for people experiencing similar issues, I'm sure. Cheers!
-
Sorry to keep responding to my own problem, but I think I just fixed the issue... I believe the entire problem was stemming from the fact that my HTTPS was set up incorrectly.
1. I added the HTTPS version of my site to GWT.
2. I changed my URLs in WordPress to reflect HTTPS, not HTTP.
3. This created a problem with my redirection. Typing in a naked URL was no longer redirecting to https:// or www.
4. I changed my .htaccess file to force HTTPS (a generic sketch of that kind of rule is below).
Now I believe the entire problem is fixed. I just have to wait and see how my tools reflect the changes, and hopefully I did everything correctly.
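For anyone who lands here with the same setup, the kind of rule I mean in step 4 looks roughly like this -- a generic sketch only, with example.com standing in for your own domain, so test it before relying on it:

# Redirect every request to the https://www. version of the site
# (example.com is a placeholder -- replace it with your own domain)
RewriteEngine On
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]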
-
UPDATE:
Ooohkay, got an update. So I mentioned that I purchased SSL for the site. I just added the https://www. version to GWT and now it's giving me more up-to-date information. I didn't realize that GWT treats the HTTPS version as a separate site. Now my question is: how is this affecting the rest of my website? Does anyone have experience managing HTTPS vs. HTTP pages?
Also, does Moz automatically track all versions of the domain or do I need to create a new campaign with the HTTPS version?
-
Related Questions
-
Should I disallow all URL query strings/parameters in Robots.txt?
Intermediate & Advanced SEO | jmorehouse
Webmaster Tools correctly identifies the query strings/parameters used in my URLs, but still reports duplicate title tags and meta descriptions for the original URL and the versions with parameters. For example, Webmaster Tools would report duplicates for the following URLs, despite it correctly identifying the "cat_id" and "kw" parameters:
/Mulligan-Practitioner-CD-ROM
/Mulligan-Practitioner-CD-ROM?cat_id=87
/Mulligan-Practitioner-CD-ROM?kw=CROM
Additionally, these pages have self-referential canonical tags, so I would think I'd be covered, but I recently read that another Mozzer saw a great improvement after disallowing all query/parameter URLs, despite Webmaster Tools not reporting any errors. As I see it, I have two options:
1. Manually tell Google that these parameters have no effect on page content via the URL Parameters section in Webmaster Tools (in case Google is unable to automatically detect this, and I am being penalized as a result).
2. Add "Disallow: *?" to hide all query/parameter URLs from Google. My concern here is that most backlinks include the parameters, and in some cases these parameter URLs outrank the original.
Any thoughts?
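For readers weighing the two options above: option 2 is a one-line wildcard rule in robots.txt, roughly like the sketch below. Note that it blocks crawling of every URL containing a query string (for crawlers that honour wildcards), which is exactly why the backlink concern in the question matters.

User-agent: *
# Block crawling of any URL that contains a query string, e.g. ?cat_id=87 or ?kw=CROM
Disallow: /*?
-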
Pages are being dropped from index after a few days - AngularJS site serving "_escaped_fragment_"
Intermediate & Advanced SEO | emre.kazan
My URL is: https://plentific.com/
Hi guys,
About us: We are running an AngularJS SPA for property search. Being an SPA and an entirely JavaScript application has proven to be an SEO nightmare, as you can imagine. We are currently implementing the AJAX crawling approach and serving an "_escaped_fragment_" version using PhantomJS. Unfortunately, pre-rendering of the pages takes some time and, even worse, on separate occasions the pre-rendering fails and the page appears to be empty.
The problem: When I manually submit pages to Google using the Fetch as Google tool, they get indexed and actually rank quite well for a few days, and after that they just get dropped from the index. Not getting lower in the rankings, but totally dropped. Even the Google cache returns a 404.
The question:
1.) Could this be because of the whole serving an "_escaped_fragment_" version to the bots? (Bear in mind it is identical to the user-visible one.) or
2.) Could this be because the API we use to get our results leads to the pages being considered "duplicate content"? And shouldn't this just result in lowering the SERP position instead of a drop? and
3.) Could this be a technical problem with us serving the content, or does Google just not trust sites served this way?
Thank you very much!
Pavel Velinov
SEO at Plentific.com
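For context on the setup described above: for an SPA without hash-bang URLs, the usual opt-in to Google's AJAX crawling scheme is a single meta tag in the page head, which tells the crawler to request the ?_escaped_fragment_= snapshot instead of executing the JavaScript itself -- a sketch:

<!-- Opt the page into the AJAX crawling scheme: the crawler fetches
     the pre-rendered ?_escaped_fragment_= version of this URL -->
<meta name="fragment" content="!">
-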
Indexing/Sitemap - I must be wrong
Intermediate & Advanced SEO | fretts
Hi All, I would guess that a great number of us new to SEO (or not) share some simple beliefs in relation to Google indexing and sitemaps, and as such get confused by what Webmaster Tools shows us. It would be great if someone with experience/knowledge could clear this up once and for all 🙂
Common beliefs:
1. Google will crawl your site from the top down, following each link and recursively repeating the process until it bottoms out/becomes cyclic.
2. A sitemap can be provided that outlines the definitive structure of the site, and is especially useful for links that may not be easily discovered via crawling.
3. In Google's Webmaster Tools, in the sitemap section, the number of pages indexed shows the number of pages in your sitemap that Google considers to be worthwhile indexing.
4. If you place a rel="canonical" tag on every page pointing to the definitive version, you will avoid duplicate content and aid Google in its indexing endeavour.
These preconceptions seem fair, but must be flawed. Our site has 1,417 pages as listed in our sitemap. Google's tools tell us there are no issues with this sitemap, but a mere 44 are indexed! We submit 2,716 images (because we create all our own images for products) and a disappointing zero are indexed. Under Health > Index Status in WM Tools, we apparently have 4,169 pages indexed. I tend to assume these are old pages that now yield a 404 if they are visited. It could be that Google's indexed count of 44 could mean "Pages indexed by virtue of your sitemap, i.e. we didn't find them by crawling - so thanks for that", but despite trawling through Google's help, I don't really get that feeling. This is basic stuff, but I suspect a great number of us struggle to understand the disparity between our expectations and what WM Tools yields, and we go on to either ignore an important problem or waste time on non-issues. Can anyone shine a light on this once and for all? If you are interested, our map looks like this: http://www.1010direct.com/Sitemap.xml
Many thanks
Paul
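As a reference point for the image figures mentioned above, image URLs are usually submitted through the image sitemap extension; one entry looks roughly like this (the product and image paths here are made up for illustration):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>http://www.1010direct.com/example-product</loc>
    <!-- Submission makes the image eligible for indexing; it does not guarantee it -->
    <image:image>
      <image:loc>http://www.1010direct.com/images/example-product.jpg</image:loc>
    </image:image>
  </url>
</urlset>
-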
Large site rel=can or no-index?
Intermediate & Advanced SEO | cottamg
Hi, A large site with tens of thousands of pages, but lots of the pages are very similar. The site is about training courses, and the URL structure is something like: training-course/date/time
I only really want the search engines to index the actual training course pages. Which is the better option for me, and why?
a) rel=canonical
b) noindex, nofollow
Thanks, Gary.
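For reference, the two options in the question above each come down to one tag in the head of the date/time pages -- a sketch with placeholder URLs:

<!-- Option a) rel=canonical: consolidate each date/time variation onto the main course page -->
<link rel="canonical" href="https://www.example.com/training-course/">

<!-- Option b) noindex, nofollow: keep the variation out of the index entirely -->
<meta name="robots" content="noindex, nofollow">
-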
Panda/Penguin & Ecommerce Sites in similar niches
Intermediate & Advanced SEO | BobGW
Hello, We have a few online stores that are in similar niches. How do we make sure that we don't get penalized for this (Panda/Penguin)? We have the sites interlinked, but our newest one is not going to be linked to the others. Also, will rewriting descriptions help if the product is on more than one site? Thanks!
-
Sitemaps / Google Indexing / Submitted
Intermediate & Advanced SEO | TheSquareFoot
We just submitted a new sitemap to Google for our new Rails app - http://www.thesquarefoot.com/sitemap.xml - which has over 1,400 pages, but Google is only seeing 114. About 1,200 are in the listings folder, 250 are blog posts, and 15 are landing pages. Any help would be appreciated! Aron
-
Site views messy in a text browser, but can see all text, is that a problem?
Intermediate & Advanced SEO | nicole.healthline
In Google's webmaster guidelines, they mention viewing your site in a text browser to ensure all text is visible. All of our text is visible, but it is very messy and all jumbled on the page. I've noticed most sites' text-browser layout is clean. How important is it to SEO that the site views cleanly in a text browser? Does anyone know of any feedback from Google engineers about this point?
-
Index.php canonical/dup issues
Intermediate & Advanced SEO | MikeCoughlin
Hello my fellow SEOs! I would LOVE some additional insight/opinions on the following... I have a client who is an industry leader: big site, ranks for many competitive phrases, blah blah... you get the picture. However, they have a big dup content/canonical issue. Most pages resolve both with and without /index.php at the end of the URL. Obviously this is a dup content issue, but more importantly the SEs sometimes serve an "index.php" version of the page and sometimes they don't, it constantly changes which version is served, and the rank goes up and down. Now, I've instructed them that we are going to need to write a sitewide redirect to attempt a uniform structure. Most people would say redirect to the non-index.php version, buttttt:
1. The index.php pages consistently outperform the non-index.php versions, except the homepage.
2. The client really would prefer to have the "index.php" at the end of the URL.
The homepage performs extremely well for a lot of competitive phrases. I'd like to redirect all pages to the "index.php" version except the homepage, and I'm thinking that if I redirect all pages EXCEPT the homepage to the index.php version, it could cause some unforeseen issues. I can not use rel=canonical because they have many different versions of their pages with different country codes in the URL -- for example, if I make the US version canonical, it will hurt the pages trying to rank with a fr URL, de URL (where fr/de are country codes in the URL; depending where the user is, it serves the correct version). Any advice would be GREATLY appreciated. Thanks in advance! Mike
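For what it's worth, a sitewide rule of the kind described (everything redirected to the index.php version, homepage left bare) might be sketched in .htaccess roughly as below. This is heavily simplified, assumes directory-style URLs, and would need testing against the real URL structure before going anywhere near production:

RewriteEngine On

# Homepage: strip index.php so the bare root URL stays the ranking version
RewriteCond %{THE_REQUEST} ^[A-Z]+\s/index\.php[?\s] [NC]
RewriteRule ^index\.php$ / [R=301,L]

# Everything else: append index.php to directory-style URLs that lack it
RewriteCond %{REQUEST_URI} !/index\.php$ [NC]
RewriteCond %{REQUEST_URI} !^/$
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.+?)/?$ /$1/index.php [R=301,L]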