Why is this page de-indexed?
-
I have dropped out for all my first-page keywords for this page:
https://www.key.co.uk/en/key/dollies-load-movers-door-skates
Can anyone see an issue?
I am trying to find one...
We did just migrate to HTTPS, but other areas have no problems.
-
Hi
Yes, there was an issue with my rank tracking software - phew
Thank you
-
OK, great - thanks for your help.
I'll keep an eye on everything
-
You are 5th for 'Heavy Duty Dolly'.
I don't see what the problem is - the page is doing really well.
Regards
Nigel
-
This is not an issue.
It's totally normal to have some HTTP pages left in the index, and even more common if the migration is recent. Don't be afraid of this, Becky.
-
Yep, give Google a little time to re-crawl the whole new site. I'd give it nearly a month before considering that Google has seen the new version of the site completely, always checking the number of indexed pages in GSC and the results appearing for a site: search.
Being out of the top 100 is a clue that you are in the middle of the transition, and for the keyword 'Heavy Duty Dolly' I do see your page. Check the attached image.
Best of luck.
GR.
-
Hi Becky
I just searched in a normal browser, so it could be Google skewing the results for you.
To find indexed HTTP pages, try:
site:key.co.uk inurl:http:
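If you'd rather script the follow-up than eyeball SERPs, here's a minimal Python sketch (using the requests library, with a placeholder URL list) that checks leftover HTTP URLs 301-redirect to their HTTPS versions:

```python
# Minimal sketch: confirm old HTTP URLs 301-redirect to HTTPS after a
# migration. The list is a placeholder - paste in whatever the
# site: search above turns up.
import requests

old_urls = [
    "http://www.key.co.uk/en/key/dollies-load-movers-door-skates",
]

for url in old_urls:
    # allow_redirects=False lets us inspect the first hop directly
    resp = requests.get(url, allow_redirects=False, timeout=10)
    target = resp.headers.get("Location", "")
    ok = resp.status_code == 301 and target.startswith("https://")
    print(f"{url} -> {resp.status_code} {target or '(no redirect)'} "
          f"{'OK' if ok else 'CHECK THIS'}")
```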
Regards
Nigel
-
How did you find these HTTP pages?
I did a search in Incognito, but I couldn't see anything myself.
I'll try again, thanks!
-
Hi
Thanks for this. Yes, I've checked in Google Search Console; I can find the page in the indexed pages, but the indexed counts are a lot lower since migration:
HTTP: 13,013 indexed / 12,891 blocked
HTTPS: 2,814 indexed / 5,713 blocked by robots.txt
Do I just wait?
Keyword examples for that page would be 'Heavy Duty Dolly' and 'load moving dolly'.
We were position 1; now we're out of the top 100.
We're working on page speed/load time for the whole site, but why would it affect that one page so badly?
-
Hi Becky,
Without knowing those relevant search terms, there's almost no analysis to be done.
I've noticed that the page took a very long time to load; here's a GTmetrix report. Remember that migrating to HTTPS makes Google re-crawl all of your website's pages and re-evaluate all ranking factors.
My advice is to wait a little longer; it might take a few weeks. Also, keep monitoring the Google Search Console profile, as there could be a message there. Take a look at the indexed pages too; it may be that fewer pages are indexed now than before the migration.
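If you want a rough, scriptable stand-in for that load-time check, a minimal Python sketch like the one below works. It only times the raw HTML download, not images, JS or CSS, so treat it as a floor rather than a GTmetrix-style page load time:

```python
# Rough sketch: time how long the page's raw HTML takes to download.
# This ignores images/JS/CSS, so it understates the full load time.
import time
import requests

url = "https://www.key.co.uk/en/key/dollies-load-movers-door-skates"

start = time.perf_counter()
resp = requests.get(url, timeout=30)
elapsed = time.perf_counter() - start

print(f"HTTP {resp.status_code} in {elapsed:.2f}s, "
      f"{len(resp.content) / 1024:.0f} KB of HTML")
```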
Hope I've helped.
Best of luck.
GR.
-
Hi Becky
Load Movers - Pos 3
Wooden dollies - Pos 1
Maybe open an incognito browser with history cleared.
I don't see a problem
Regards
Nigel
PS: You still have 748 HTTP pages indexed, but that's only 10% of the total.
Related Questions
-
Do uncrawled but indexed pages affect SEO?
It's a well-known fact that too much thin content can hurt your SEO, but what about when you disallow Google from crawling some places and it indexes some of them anyway (no title, no description, just the link)? I am building a Shopify store and it's impossible to change the robots.txt using Shopify, and they disallow, for example, the cart: Disallow: /cart. But all my pages link there, so Google has the uncrawled cart in its index, along with many other uncrawled URLs. Can this hurt my SEO, or is trying to remove that from their index just a waste of time? I can't change anything in the robots.txt, but I could try to nofollow those internal links. What do you think?
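A quick aside for anyone wanting to verify which URLs the live robots.txt actually blocks: Python's built-in parser handles this in a few lines. A minimal sketch, with a placeholder store domain:

```python
# Sketch: check which URLs a live robots.txt disallows.
# The domain is a placeholder - swap in your own store's.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example-store.myshopify.com/robots.txt")
rp.read()

for url in ["https://example-store.myshopify.com/cart",
            "https://example-store.myshopify.com/products/widget"]:
    # can_fetch() mirrors what a compliant crawler like Googlebot decides
    verdict = "crawlable" if rp.can_fetch("Googlebot", url) else "blocked"
    print(f"{url} -> {verdict}")
```

Bear in mind a Disallow only blocks crawling, not indexing, which is exactly why those bare, title-less URLs can still show up in the index.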
Intermediate & Advanced SEO | cuarto715
-
Thinking about not indexing PDFs on a product page
Our product pages generate a PDF version of the page in a different layout. This is done for two reasons: it's been the standard across similar industries, and it helps customers print the pages when working with the product. So there is a use when it comes to the customer, but what about search? I've thought about this a lot, and my thinking is: why index the PDF at all? Only allow the HTML page to be indexed. The PDF files are on a subdomain, so I can easily noindex them. The way I see it, I'm reducing duplicate content. On the flip side, since the PDFs are hosted on a subdomain, a PDF appearing when the HTML page doesn't is another way of gaining real estate, and if it appears alongside the HTML page, that's more coverage. Anyone else done this? My instinct says this could be a good thing; it might even stop backlinks being generated to the PDF and lead to more HTML backlinks. Can PDFs exist solely as data accessible from the page and not relevant to search engines? I find them a bane when they are on a subdomain.
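For what it's worth, the usual mechanism for keeping PDFs out of the index is an X-Robots-Tag: noindex response header, and you can spot-check that it's actually being served with a few lines of Python. A minimal sketch - the PDF URL is hypothetical:

```python
# Sketch: confirm a PDF is served with an X-Robots-Tag: noindex header.
# The URL is hypothetical - point it at one of your real PDFs.
import requests

# Some servers mishandle HEAD requests; switch to requests.get if so.
resp = requests.head("https://docs.example.com/product-123.pdf", timeout=10)
tag = resp.headers.get("X-Robots-Tag")
print("X-Robots-Tag:", tag or "missing - this PDF can still be indexed")
```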
Intermediate & Advanced SEO | Bio-RadAbs
-
Date of page first indexed or age of a page?
Hi, does anyone know any ways or tools to find when a page was first indexed/cached by Google? I remember a while back, around 2009, I had a Firefox plugin which could check this and gave you an exact date. Maybe this has changed since; I don't remember the plugin. Any recommendations on finding the age of a page (not the domain) for a website? This is for competitor research, not my own website. Cheers, Paul
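One approach that still works: the Wayback Machine's CDX API returns a page's earliest capture, which is a rough proxy for page age. Note this is the first archive date, not the date Google first indexed it - no public API exposes that. A minimal Python sketch:

```python
# Sketch: fetch the earliest Wayback Machine capture of a page as a
# rough proxy for its age (first *archive* date, not Google's index date).
import requests

page = "moz.com/blog"  # placeholder - use the competitor page here
resp = requests.get(
    "https://web.archive.org/cdx/search/cdx",
    params={"url": page, "output": "json", "limit": "1"},
    timeout=30,
)
rows = resp.json()
if len(rows) > 1:            # row 0 is the header row
    ts = rows[1][1]          # timestamp, e.g. "20040112083456"
    print(f"First capture of {page}: {ts[:4]}-{ts[4:6]}-{ts[6:8]}")
else:
    print("No captures found.")
```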
Intermediate & Advanced SEO | MBASydney
-
Big discrepancies between pages in Google's index and pages in sitemap
Hi, I'm noticing a huge difference between the number of pages in Google's index (using a 'site:' search) and the number of pages indexed by Google in Webmaster Tools (i.e. 20,600 in the 'site:' search vs 5,100 submitted via the dynamic sitemap). Does anyone know possible causes for this and how I can fix it? It's an ecommerce site, but I can't see any issues with duplicate content; they employ a very good canonical tag strategy. Could it be that Google has decided to ignore the canonical tag? Any help appreciated, Karen
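For the sitemap side of that comparison, it's easy to count exactly what is being submitted. A minimal Python sketch with a placeholder sitemap URL - note it reads a single sitemap file, so extend it if the site uses a sitemap index:

```python
# Sketch: count the URLs listed in a sitemap, to compare against the
# site: estimate and GSC's indexed count. The sitemap URL is a placeholder.
import requests
import xml.etree.ElementTree as ET

resp = requests.get("https://www.example.com/sitemap.xml", timeout=30)
root = ET.fromstring(resp.content)

ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
urls = [loc.text for loc in root.findall(".//sm:loc", ns)]
print(f"{len(urls)} URLs in sitemap")
```

A 4x gap between site: and the sitemap usually means parameter or duplicate URLs are being picked up outside the sitemap, and keep in mind the site: count is only an estimate.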
Intermediate & Advanced SEO | Digirank
-
Does Google make continued attempts to crawl an old page once it has followed a 301 to the new page?
I am curious about this for a couple of reasons. We have all dealt with a site that switched platforms without planning properly and now has thousands of crawl errors. Many of the developers I have talked to have stated very clearly that the .htaccess file should not be used for thousands of single redirects. I figured if I only needed them in there temporarily, it wouldn't be an issue. I am curious: once Google follows a 301 from an old page to a new page, will it stop crawling the old page?
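You can answer the 'does Google keep crawling the old URLs' part empirically from your own server logs. A rough Python sketch - the log path, combined log format and example paths are all assumptions, so adjust for your setup:

```python
# Sketch: count Googlebot hits on old (redirected) URLs in an
# Apache/nginx combined-format access log. Path and URLs are assumptions.
old_paths = {"/old-page-1", "/old-page-2"}
hits = {}

with open("/var/log/apache2/access.log") as log:
    for line in log:
        if "Googlebot" not in line:
            continue
        # In combined log format the request path is the 7th field
        parts = line.split()
        if len(parts) > 6 and parts[6] in old_paths:
            hits[parts[6]] = hits.get(parts[6], 0) + 1

for path, count in sorted(hits.items()):
    print(f"{path}: {count} Googlebot hits")
```

In practice Google keeps re-checking 301'd URLs for a long while, just less and less often, so expect the hit counts to taper rather than stop outright.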
Intermediate & Advanced SEO | RossFruin
-
Can a home page penalty cause a drop in rankings for all pages?
All my main keywords have dropped out of the SERPs. Could it be that the home page (the strongest page) has been devalued, and therefore the 'link juice' that used to spread throughout the site is no longer doing so? Would this cause all other pages to drop? I just can't understand how all my pages have lost rankings. The site is still indexed, so there's no problem there.
Intermediate & Advanced SEO | SamCUK
-
Thousands of 404 Pages Indexed - Recommendations?
Background: I have a newly acquired client who has had a lot of issues over the past few months. He had a major problem with broken dynamic URLs that started infinite loops due to redirects and relative links. His previous SEO didn't pay attention to the sitemaps created by a backend generator, which caused hundreds of thousands of useless pages to be indexed. These useless pages all brought up a 404 page that didn't return a 404 server response (it returned a 200), which created a ton of duplicate content and bad links (relative linking). Now here I am, cleaning up this mess. I've fixed the 404 page so it returns a 404 server response, and Google Webmaster Tools is now reporting thousands of 'not found' errors - a great start. I fixed all site errors that caused infinite redirects, cleaned up the sitemap, and submitted it. When I search site:www.(domainname).com I am still getting an insane number of pages that no longer exist. My question: how does Google handle all of these 404s? My client wants all the bad pages removed now, but I don't have that much control over it; it's a slow process getting Google to remove pages that return a 404, and he is still dropping in rankings. Is there a way of speeding up the process? It's not reasonable to enter tens of thousands of pages into the URL Removal Tool. I want to clean house and have Google index only the pages in the sitemap.
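For anyone in the same spot, the crucial fix is exactly the one described above: making sure dead URLs return a real 404 or 410 status instead of a 200 'soft 404'. A quick Python sketch to spot-check a sample (the URLs are placeholders):

```python
# Sketch: spot-check that removed URLs return a real 404/410 rather
# than a "soft 404" (a 200 response carrying error-page content).
import requests

dead_urls = [
    "https://www.example.com/some-removed-page",     # placeholder
    "https://www.example.com/another-removed-page",  # placeholder
]

for url in dead_urls:
    code = requests.get(url, timeout=10).status_code
    verdict = "OK" if code in (404, 410) else "SOFT 404 - fix this"
    print(f"{code} {url} -> {verdict}")
```

Google drops confirmed 404s on its own schedule; a 410 is often treated as a slightly stronger removal signal, but there's no way to bulk-force the process beyond that.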
Intermediate & Advanced SEO | BeTheBoss
-
Removing pages from index
Hello, I run an e-commerce website. I just realized that Google has "pagination" pages in the index which should not be there. In fact, I have no idea how they got there. For example: www.mydomain.com/category-name.asp?page=3434532
There are hundreds of these pages in the index. There are no links to these pages on the website, so I am assuming someone is trying to ruin my rankings by linking to pages that do not exist. The page content displays category information with no products. I realize it's a flaw in the design, and I am working on fixing it (301ing the non-existent pages). Meanwhile, I am not sure if I should request removal of these pages. If so, what is the best way to request bulk removal? Also, should I 301, 404 or 410 these pages? Any help would be appreciated. Thanks, Alex
Intermediate & Advanced SEO | AlexGop