Search Console - Should I request to index redirected URL or Mark as fixed?
-
Hi all,
Many blog posts used to show 404s in crawl tests and in Search Console (despite loading fine when visited). I realized it was an issue with the URL structure, which used to be example.com/post-name.
I've fixed the issue by changing the URL structure in WordPress so that posts now follow the structure example.com/post-type/post-name.
According to the sitemap report, Google has now indexed all posts under /post-type/post-name.
My question is what to do with the crawl errors in Search Console that are still there for example.com/post-name. When I fetch, I get a redirect status (which is accurate). At this point, should I request indexing or mark as fixed?
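For context, the old URLs now 301 to the new structure. Conceptually the rule is something like the sketch below, although in practice WordPress's permalink settings and redirect handling generate the real rules, and "post-type" here is just a placeholder for the actual slug:
# Sketch only: old top-level post URLs 301 to the new /post-type/ structure.
# Illustrative; WordPress generates the real rules. "post-type" is a placeholder.
RewriteEngine on
RewriteCond %{REQUEST_URI} !^/post-type/
RewriteRule ^([^/]+)/?$ /post-type/$1 [R=301,L]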
Thank you!
-
You shouldn't have to. However, if you are concerned this has not been picked up, then go right ahead; it won't hurt matters. The reason I mentioned the sitemap in the last post is that if your sitemap includes the updated URLs, you're already submitting the index request that way, so there's no need to do it manually again.
You can also do a site: search (e.g. site:example.com/post-type/post-name) to see whether the page has been indexed/updated with the new URL yet.
-
Thanks Steve.
So for example.com/post-name, should I request indexing even if the page it redirects to (/post-type/post-name) is already indexed?
-
Marking as fixed is really a utility for you and has no bearing on rankings for that page. If the pages are in your sitemap and index health there is good, you've done all you need.
Related Questions
-
Resolving 301 Redirect Chains from Different URL Versions (http, https, www, non-www)
Hi all, Our website has undergone both a redesign (with new URLs) and a migration to HTTPS in recent years. I'm having difficulty ensuring all URLs redirect to the correct version while preventing redirect chains. Right now everything redirects to the correct version, but it usually takes up to two redirects to get there. For example (two redirects):
http://www.theyoungfirm.com/blog/2009/index.html
301 to: https://theyoungfirm.com/blog/2009/index.html
301 to: https://theyoungfirm.com/blog/
How do I go about addressing this, or is this not even something I should concern myself with? The code below is what we added to our .htaccess file. Prior to adding this, the various hostname versions (www, non-www, http, etc.) were not redirecting properly. But ever since we added it, it has created these additional URLs (the middle URL above) as an intermediate step before resolving to the correct URL.
RewriteEngine on
RewriteCond %{HTTP_HOST} ^www.(.*)$ [NC]
RewriteRule ^(.*)$ https://%1/$1 [R=301,L]
RewriteCond %{HTTPS} !on
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
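From what I've read, the fix may be to replace the two separate rules with a single rule that corrects scheme and host in one hop. A sketch of what I have in mind is below (assuming Apache, with https://theyoungfirm.com as the canonical version); would this be the right direction?
# Sketch: canonicalize scheme and host in a single 301.
# Assumes https://theyoungfirm.com is the canonical origin.
RewriteEngine on
RewriteCond %{HTTPS} !on [OR]
RewriteCond %{HTTP_HOST} ^www\. [NC]
RewriteRule ^(.*)$ https://theyoungfirm.com/$1 [R=301,L]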
Your feedback is much appreciated. Thanks in advance for your help. Sincerely, Bethany
Technical SEO | theyoungfirm
-
301 redirecting a previously abused URL
A client previously had their most important landing page at domain.com/example.htm. They carried out the sort of link building that was commonplace a few years back (exact-match anchors, paid blog links, etc.) targeting this URL, but they also got a bunch of legitimate, decent-quality links here. I believe they may have had a number of issues when link-quality algorithm updates were rolled out, so rather than try to get links removed and go through the disavow process, they instead decided to abandon this URL, let it 404 and start afresh at domain.com/example.html, updating all internal navigation, XML sitemaps, etc. So fast forward to today: what is the best practice for this URL these days? Is it now possible to 301 domain.com/example.htm to domain.com/example.html and recover whatever value may be left here? The argument for not doing so may be that you could pass over the negative metrics associated with the old URL, but would this not be handled by the real-time Penguin update, with the poor links just devalued rather than actually harming? And could this just be tested, i.e. add in the 301, monitor the impact, and if things don't go the way we'd want then just remove the 301 again?
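For reference, the redirect itself would presumably just be a one-line rule; a sketch, assuming Apache and the placeholder domain.com paths above:
# Sketch: permanently redirect the abandoned URL to its replacement.
Redirect 301 /example.htm /example.html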
Would be keen to get a few opinions on this. TIA
Technical SEO | Salience_Search_Marketing
-
Search Console rejecting XML sitemap files as HTML files, despite them being XML
Hi Moz folks, We have launched an international site that uses subdirectories for regions and have had trouble getting pages outside of the USA and Canada indexed. Google Search Console accounts have finally been verified, so we can submit the correct regional sitemap to the relevant Search Console account. However, when submitting non-US and non-CA sitemap files (e.g. AU, NZ, UK), we receive a submission error that states, "Your Sitemap appears to be an HTML page," despite them being .xml files, e.g. http://www.t2tea.com/en/au/sitemap1_en_AU.xml. Searches on this suggest it's a W3 Total Cache plugin problem, but we aren't using WordPress; the site runs on Demandware. Can anyone guide us on why Google Search Console is rejecting these sitemap files? Page indexation is a real issue.
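One thing worth checking is the response Googlebot actually gets for those sitemap URLs: if the server returns an HTML error page, or serves the file with a text/html Content-Type, Search Console can report exactly this error. Purely as an illustration (this site runs on Demandware, so the real fix would live in its content-type configuration; the Apache equivalent would be):
# Sketch, Apache only: make sure .xml files are served as XML, not HTML.
<FilesMatch "\.xml$">
    ForceType application/xml
</FilesMatch>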
Many thanks in advance!
Technical SEO | SearchDeploy
-
Many errors in Search Console (strange parameters)
Hello, I have many strange parameters in my Search Console that produce many 404 pages, for example:
mywebsite.com/article-name/&ct=ga&cd=CAIyGjk4YjY4ZDExNTYxOTgzZTk6Y29tOmVuOlVT&usg=AFQjCNFvpYpYpYf9DoyRBBu8jbiQB8JcIQ
mywebsite.com/article-name/&sa=U&ved=0ahUKEwj1zMLR0JbLAhUGM5oKHejjBJAQqQIILSgAMAk&usg=AFQjCNEBNFx3dG5B0-16X6eXTS7k-Srm6Q
Can someone tell me how to solve this?
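These look like Google redirect parameters appended to the path (note the stray & with no ?). The only workaround I've seen suggested is a rule that 301s anything after the stray ampersand back to the clean URL; a sketch, assuming Apache, is below. Is this the right approach?
# Sketch: strip appended tracking segments like /&ct=ga... back to the clean path.
RewriteEngine on
RewriteRule ^(.+)/&.*$ /$1/ [R=301,L]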
Technical SEO | JohnPalmer
-
Google Search Console Site Map Anomalies (HTTP vs HTTPS)
Hi, I've just done my usual Monday morning review of clients' Google Search Console (previously Webmaster Tools) dashboards and was disturbed to see that for one client the Sitemaps section is reporting 95 pages submitted yet only 2 indexed (when I looked last week it was reporting an expected level of indexed pages). It says the sitemap was submitted on the 10th of March and processed yesterday. However, the 'Index Status' section shows a graph of growing indexed pages up to and including yesterday, when they numbered 112 (so it looks like all pages are indexed after all). Also, the 'Crawl Stats' section shows 186 pages crawled on the 26th.
It then lists the sub-sitemaps, all of which are non-HTTPS (http), which seems very strange since the site is HTTPS and has been for a few months now, and the main sitemap index URL is HTTPS: https://www.domain.com/sitemap_index.xml. The sub-sitemaps are:
http://www.domain.com/marketing-sitemap.xml
http://www.domain.com/page-sitemap.xml
http://www.domain.com/post-sitemap.xml
There are no 'Sitemap Errors' reported, but there are 'Index Error' warnings for the above post-sitemap, copied below:
"When we tested a sample of the URLs from your Sitemap, we found that some of the URLs were unreachable. Please check your webserver for possible misconfiguration, as these errors may be caused by a server error (such as a 5xx error) or a network error between Googlebot and your server. All reachable URLs will still be submitted."
Also, there is the warning "Some URLs listed in this Sitemap have a high response time. This may indicate a problem with your server or with the content of the page" for the following sitemap URLs:
http://domain.com/en/post-sitemap.xml
https://www.domain.com/page-sitemap.xml
https://www.domain.com/post-sitemap.xml
I take it from all the above that the HTTPS sitemap is mainly fine, that despite the reported 0 pages indexed in the GSC sitemap section the pages are in fact indexed (as per the main 'Index Status' graph), and that somehow some HTTP sitemap elements have been accidentally attached to the main HTTPS sitemap and are causing these problems.
What's the best way forward to clean up this mess? Resubmitting the HTTPS sitemap sounds like the right option, but seeing as the main indexed URL is an HTTPS URL, I can't see it making any difference until the HTTP aspects are deleted/removed; how do you do that, or even check that that's what's needed? Or should Google just sort this out eventually? I see the graph in 'Crawl > Sitemaps > Web Pages' shows a consistent blue line of submitted pages, but the red line of indexed pages drops to 0 for 3 to 5 days every 5 days or so. So fully indexed pages are reported for 5-day stretches, then zero for a few days, then indexed again for another 5 days, and so on!?
Many thanks
Dan
Technical SEO | Dan-Lawrence
-
Trackback Redirects
My WordPress blog/theme displays a Trackback URL link in the comments area of any page that has received a comment, e.g. http://guitarkitbuilder.com/build-your-own-clone-digital-echo-ping-pong-kit/#comment-2408. My crawl diagnostics report shows these links (basically domain.com/post-name/trackback) as Temporary Redirect (302) warnings, with the stock advice "Using HTTP header refreshes, 302, 303 or 307 redirects will cause search engine crawlers to treat the redirect as temporary and not pass any link juice (ranking power). We highly recommend that you replace temporary redirects with 301 redirects." Before I take more action on this I want to make sure it is a real problem. My initial effort to fix it was to turn off trackbacks in the WordPress Settings > Discussion area and also on specific posts, but the Trackback URL link still shows for any post with a comment. Any advice?
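One option I'm considering, if the theme keeps printing the links, is to 301 the trackback URLs back to their parent posts at the server level; a sketch, assuming Apache, is below, though I'm not sure it's the right fix:
# Sketch: 301 WordPress trackback URLs back to the parent post.
RewriteEngine on
RewriteRule ^(.*)/trackback/?$ /$1/ [R=301,L]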
Technical SEO | jeff_amm
-
What tools produce a complete list of all URLs for 301 redirects?
I am project managing the rebuild of a major corporate website and need to set up 301 redirects from the old pages to the new ones. The problem is that the old site sits on multiple CMS platforms so there is no way I can get a list of pages from the old CMS. Is there a good tool out there that will crawl through all the sites and produce a nice spreadsheet with all the URLs on it? Somebody mentioned Xenu but I have never used it. Any recommendations? Thanks -Adrian
Technical SEO | Adrian_Kingwell
-
Site-wide search vs catalogue search
I have a client building a new website who has agreed that a site search function is a good thing, in order to get a view of how customers are using the site, the search terms they are using as a source of keywords, etc. The problem is that the developer has implemented a catalogue/product search which only queries the products in the database. On the one hand this is fine, in that the search directs users to products and not to other areas of the site. But the client is disappointed that the search is not site-wide. Are there any solutions whereby a third-party search utility could be implemented within the site which would search both? The ecommerce platform is Magento. Any views would be very helpful!
Technical SEO | k3nn3dy3