Index Problem
-
Hi guys,
I have a critical problem with the Google crawler.
This is my website: https://1stquest.com
I can't create a sitemap with online sitemap creator tools such as XML-sitemap.org.
The Fetch as Google tool usually marks the result as Partial.
The Moz crawl test found both the HTTP and HTTPS versions of the site!
And Google can't index several pages on the site.
Is the problem related to "unsafe URLs", or something else?
-
Hi Peter,
Just curious: did you have to do anything specific on the SEO side of things? I'm working with a developer who assures me that Angular won't impact SEO, but looking at their past efforts, I'm not so sure.
-
Hello all,
Thought I'd drop in my $0.02 about Google and AngularJS. We switched over to it two weeks ago at FixedPriceCarService.com.au. Existing pages were fine, with no transition issues, and the same goes for all the new pages we added. All were indexed quite quickly on Google.
Google Search Console has no issue with finding things like Page Title and Meta Description that have their home now in the DOM.
I'm curious about when Moz might also be able to crawl the DOM, and stop reporting page titles as missing when they aren't truly missing, just relocated.
Scott - Fixed Price Car Service
-
Hi Hamid!
Did Peter or Martijn answer your question? If so, please mark one or both as a Good Answer.
If not, what are you still looking for?
-
Peter is indeed correct: it doesn't seem you have to worry about unsafe URLs, but more about the technical way your site is built. AngularJS is relatively new, and as it's JS-based, it's rather hard for Google to retrieve all the content on these pages in order to 'get' the whole page. His suggestion is also spot on and will make it easier for search engines to crawl, read, and index your site.
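To see the problem for yourself, compare what a crawler receives (the raw HTML, before any JavaScript runs) with what a browser ultimately renders. Here's a rough sketch of that check; the Angular marker strings and the 200-character threshold are my own illustrative assumptions, not an official heuristic:

```javascript
// Rough sketch: decide whether raw (pre-JavaScript) HTML looks like an
// empty AngularJS shell or already contains real content. The directive
// markers and length threshold below are illustrative assumptions.
function looksLikeEmptyAngularShell(html) {
  const hasAngularMarkers = /ng-app|ng-view|ng-controller/.test(html);
  // Strip scripts and tags to estimate how much visible text is
  // actually present in the initial HTML payload.
  const visibleText = html
    .replace(/<script[\s\S]*?<\/script>/gi, '')
    .replace(/<[^>]+>/g, '')
    .trim();
  return hasAngularMarkers && visibleText.length < 200;
}

// An empty shell: Angular directives but almost no server-rendered text.
const shell = '<html><body ng-app="app"><div ng-view></div></body></html>';
// A rendered page: plenty of text already in the initial HTML.
const rendered = '<html><body><h1>Tours</h1><p>' + 'x'.repeat(300) + '</p></body></html>';

console.log(looksLikeEmptyAngularShell(shell));    // true
console.log(looksLikeEmptyAngularShell(rendered)); // false
```

If your homepage's raw HTML comes back looking like the `shell` case, that's a strong hint the crawler has nothing to index until the JS executes.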
-
If you can't create an XML sitemap, you can have your devs build a plugin for your site to generate one. Google can't fully render your website because of the technology it's built on: I've seen that your site uses AngularJS, and the initial HTML it serves is tiny.
However, there is a solution for you, described here:
https://www.distilled.net/resources/prerender-and-you-a-case-study-in-ajax-crawlability/
If you provide fully rendered HTML to search engine bots, your site can finally be indexed properly.
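The core of that approach is routing: detect known crawler user agents and serve them a prerendered HTML snapshot, while normal visitors get the regular Angular app. A minimal Express-style sketch is below; the bot list is partial, and the snapshot-serving call is a placeholder for whatever prerender service or cache you set up, not a real API:

```javascript
// Sketch of user-agent-based prerender routing. The bot patterns are a
// partial, illustrative list; check your own server logs for others.
const BOT_PATTERNS = [/googlebot/i, /bingbot/i, /yandex/i, /baiduspider/i];

function isSearchBot(userAgent) {
  return BOT_PATTERNS.some((pattern) => pattern.test(userAgent || ''));
}

// Express-style middleware (hypothetical wiring): route bots to the
// prerendered snapshot, everyone else to the normal Angular app.
function prerenderMiddleware(req, res, next) {
  if (isSearchBot(req.headers['user-agent'])) {
    // e.g. proxy the request to your prerender service or snapshot cache
    res.serveSnapshot(req.url); // placeholder for your snapshot logic
    return;
  }
  next();
}

console.log(isSearchBot('Mozilla/5.0 (compatible; Googlebot/2.1)')); // true
console.log(isSearchBot('Mozilla/5.0 (Windows NT 10.0) Chrome/90')); // false
```

One caveat: the snapshot you serve to bots must match what users see, otherwise you risk it being treated as cloaking.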