Does this page crawl well?
-
I just put up a page that uses an image map to illustrate a national currency note.
http://www.antiquebanknotes.com/NationalCurrency/National-Bank-Note-Information.aspx
My goal with this page is to get results for "National Bank Note". But I know image maps are weird creatures and not good for linking. My question is, will Google index my tooltips and find this page useful and therefore worthy?
I think the content is useful for my users, but I just don't know if the implementation will work well. This screen will eventually have 5 or 6 notes on it, and I don't want to build all that out if the consensus here is negative...
Thanks for any advice.
-
Of course Google will index the page — and it has!
As far as whether it will find the page useful or worthy, that is not up to Google but to your visitors. To make the page rank, you need to include links from other pages that will eventually land your visitors on this page, including call-to-action buttons, etc. I would also suggest adding more content to give the page some density.
Hope this helped.
-
Regarding #2, I understand your target keyword is the singular version. You hit it in the URL, page title, and H1 tag. You do not need to use the singular version every time. You should use the grammatically appropriate form (singular or plural) in the content. If you wish to use the singular version, find a grammatically correct way of doing so.
Regarding item #4, the note is not working as an image map for me. I am using FF8 as a browser. The note is simply a static image.
-
Hi Ryan,
First, thank you so much for the interesting feedback... I will learn plenty with this thread I am sure.
#1 Interesting tool for validating the HTML... I am kind of surprised by the errors since the HTML is very basic. I wonder if it's finding issues with my master page. I will dig in and see if I can clean those up. I have already fixed my issue with the div in the head, which was hurting every page on the site... very nice.
The last 20 errors seem to be a complaint about Microsoft's image map control. Or at least, I need to set noHref somewhere on the image map, but I don't see the property. I will work on that.
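If I'm reading the validator right, the fix is an alt attribute on every area, plus nohref (or simply no href at all) on any hotspot that shouldn't link anywhere — something roughly like this, with made-up names and coordinates:

```html
<img src="national-bank-note.jpg" alt="National Bank Note anatomy"
     usemap="#notemap" />
<map name="notemap" id="notemap">
  <!-- every area needs an alt to validate -->
  <area shape="rect" coords="10,10,120,60"
        href="#charter-number" alt="Charter number" title="Charter number" />
  <!-- a non-linking hotspot: nohref in HTML 4 / XHTML, or just omit href -->
  <area shape="rect" coords="130,10,240,60" nohref="nohref"
        alt="Treasury seal" title="Treasury seal" />
</map>
```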
#2 I purposely used the singular because that is the exact keyword I want to hit. I figured taking the slight hit on grammar was better than missing the exact keyword. I'd be curious to hear your thoughts on that trade-off, or whether there really is one.
#3 Yeah, I will be adding some content but not much... the star of this page will be the notes and the breakdown of their anatomy
#4 The note is an image map and I have defined 12 clickable areas on the image where a tooltip will show on click or hover. I thought describing the click to a site user was a little more intuitive than having them hover over. My users will tend to be older.
#5 Fixed.
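On #4, in case it matters for indexing: the tooltip text itself sits in the page as plain markup rather than being injected by script, so the crawler should see it even if it ignores the hover/click behavior. Roughly like this (ids and wording are just placeholders):

```html
<area shape="rect" coords="130,10,240,60"
      href="#treasury-seal" alt="Treasury seal" />
...
<!-- tooltip content is real text in the document, shown on click/hover -->
<div id="treasury-seal" class="tooltip">
  Description of the Treasury seal goes here.
</div>
```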
-
The web page could be optimized in numerous areas to improve its crawlability and performance.
1. The page does not use valid HTML. There are 25 coding errors. Most if not all should be corrected. http://validator.w3.org/check?uri=http%3A%2F%2Fwww.antiquebanknotes.com%2FNationalCurrency%2FNational-Bank-Note-Information.aspx&charset=%28detect+automatically%29&doctype=Inline&group=0
2. For best SEO performance, you should improve the page's grammar usage. For example "Below are the major forms of National Bank Note." should end in "notes" (plural).
3. There is almost no textual content on the page. While it is not required, you may wish to add additional content.
4. The page says "Click on any element that you want to know about and read the description." There is no clickable part of the image.
5. The page's meta description is the same as the title.
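For #5, the title and meta description should each be distinct. A pair might look something like this (the wording is only an illustration):

```html
<title>National Bank Note Anatomy | Antique Banknotes</title>
<meta name="description"
      content="An illustrated guide to the parts of a National Bank Note:
               charter number, Treasury seal, signatures, and more." />
```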
The page is crawlable, but there are many steps which can be taken to improve its performance in search results.