Is using an href in a div OK?
-
Hi,
I was just wondering what your thoughts are on using an href on a div that contains anchor text. We currently put the href on the div, rather than on the anchor text, because I want the whole div to be clickable as opposed to just the anchor text.
So currently I have:
Keyword 1
Keyword 2
Is it perfectly fine to do it like this, as opposed to using <a> tags? I suppose there are various alternatives if you must use the <a> tag. However, I would assume a search engine is smart enough to know it's the same thing?
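To illustrate the two patterns being compared - this is just a sketch with made-up URLs, not the site's actual markup:

<!-- Current approach: the href sits on the div itself (not valid HTML, as the replies below point out) -->
<div href="/keyword-1">Keyword 1</div>
<div href="/keyword-2">Keyword 2</div>

<!-- The usual alternative: a normal anchor tag -->
<a href="/keyword-1">Keyword 1</a>
<a href="/keyword-2">Keyword 2</a>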
Thanks
-
Many thanks - I'm going to have a serious word with my coders!!
-
As Dave shared, the W3C is the official organization for determining the validity of code. Generally speaking, you want your code to be valid.
If you want an area to be a clickable link, try Maximise's suggestion.
-
Jesus - Thanks.
So you're telling me that it's totally invalid to:
1/. Wrap an <a href> tag around a div tag.
2/. Put an href as an attribute on a div tag.
If that is the case I'm going to have some serious words with my coders!!
-
Any time I have a question about valid HTML I go to the source. The W3C has an invaluable validation tool for HTML. Becoming familiar with it will help you always create clean code.
-
I would try and stay clear of doing this, to be honest. It won't validate, older browsers may have problems with it, and I'm not sure how search engines would treat it (they may not follow it). There are better ways to do it.
If you set your anchor tag to 'display:block' you can then set the height and width too. This way the whole div would be the link, not just the text.
-
Neither option is valid HTML. Browser behavior and crawler behavior would be unpredictable.
The code would be something like:
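Along the lines of Maximise's display:block suggestion above - a sketch only, with a placeholder class name, URL and dimensions:

<a href="/keyword-1" class="block-link">Keyword 1</a>

<style>
  /* Making the anchor a block-level element lets it take a width and height,
     so the whole area becomes clickable rather than just the text. */
  .block-link {
    display: block;
    width: 200px;  /* placeholder dimensions */
    height: 50px;
  }
</style>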
Related Questions
-
Using GeoDNS across 3 server locations
Hi, I have multiple servers across the UK and USA. I have a web site that serves both areas and was looking at cloning my sites and using GeoDNS to route visitors to the closest server, to improve speed and experience. So UK visitors would connect to the UK dedicated server, North American visitors to the New York server, and so on. Is this a good way to do it, or would it affect SEO negatively? Cheers, Keith
Technical SEO | | Keith-007
-
How to track my actual traffic source using Google Analytics when it is now showing as referral traffic?
Hi Mozzers, I went through many Q&As in the community this morning. I found a solution where I could just remove the referral site in Analytics > Admin > Property > Tracking Info > Referral Exclusion List, so I removed paypal.com, which was the main referral traffic. I thought the problem was solved. Later today I got another order, and now the referral traffic is from eway.com - now what? Yes, I know I will add this to the exclusion list too, but there will be many more referral sites. My main concern is that I am not able to track the actual traffic source. How do I do that?
1. Do I need to use Google URL tracking for all my pages? (See the example below.)
2. Do I need to add tracking code to each page of the site?
3. Is there a way to track the actual source of this traffic, now that the transaction is already made but reflects as referral traffic in Google Analytics?
Technical SEO | | DebashishB
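For reference, the "Google URL tracking" in question 1 means appending campaign (UTM) parameters to URLs you control; a tagged link looks something like this (the domain and parameter values are placeholders, not from the thread):

<!-- A link tagged with campaign parameters so Analytics records the intended source and medium -->
<a href="https://www.example.com/products/?utm_source=newsletter&utm_medium=email&utm_campaign=spring_sale">
  View the spring sale
</a>
-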
Custom hreflang tags in WP & using with Yoast
Hi, my client's dev has added custom fields for adding hreflang tags to the head of pages, such as "Rel Type", "The URL", and "Language Code". Am I right in thinking that until a different language/country version of the site is created these can remain empty? Or should they still be populated once added, say with some sort of global reference, or are they best left blank, since that will leave the head content global by default? Also, how important is it to add the charset to the language code, since it seems optional? This setup is on a WP multisite with Yoast, and the dev asked me the below:
"One thing to note is that Yoast generates its own canonical tags - so if you are going to use hreflang tags and canonical tags then you don't need to add a canonical using the custom fields I have set up - Yoast has that sorted. But if you are going down the route of NOT having any canonical tags - and using an x-default for the hreflang tags, I will need to try and suppress the Yoast canonical tag so you can do this. Much depends on your approach and what you think is best."
So how do I know whether to use canonicals or x-default? I take it it's simplest to leverage Yoast and hence not add canonicals to the custom fields? Isn't x-default just for indicating language selectors/redirectors, not specific to one region? So long as I haven't got those, am I good to proceed with Yoast's generated canonicals? Cheers, Dan
Technical SEO | | Dan-Lawrence
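For reference, a head section combining a canonical with hreflang tags (including an x-default) would look roughly like this; the domains and language codes below are placeholders, not the client's actual setup:

<!-- On the default/UK version of a page -->
<link rel="canonical" href="https://www.example.com/page/" />
<link rel="alternate" hreflang="en-gb" href="https://www.example.com/page/" />
<link rel="alternate" hreflang="en-us" href="https://www.example.com/us/page/" />
<!-- x-default marks the fallback for visitors who match none of the listed languages/regions -->
<link rel="alternate" hreflang="x-default" href="https://www.example.com/page/" />
-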
302 redirect used, submit old sitemap?
The website of a partner of mine was recently migrated to a new platform. Even though the content on the pages mostly stayed the same, both the HTML source (divs, meta data, headers, etc.) and URLs (removed index.php, removed capitalization, etc.) changed heavily.
Unfortunately, the URLs of ALL forum posts (150K+) were redirected using a 302 redirect, which was only recently discovered and swiftly changed to a 301 after the discovery. Several other important content pages (150+) weren't redirected at all at first, but most now have a 301 redirect as well. The 302 redirects and 404 content pages had been live for over 2 weeks at that point, and judging by the consistent day/day drop in organic traffic, I'm guessing Google didn't like the way this migration went.
My best guess would be that Google is currently treating all these content pages as 'new' (after all, the source code changed 50%+, most of the meta data changed, the URL changed, and a 302 redirect was used). On top of that, the large number of 404s they've encountered (40K+) probably also fueled their belief of a now non-worthy-of-traffic website. Given that some of these pages had been online for almost a decade, I would love Google to see that these pages are actually new versions of the old page, and therefore pass on any link juice & authority.
I had the idea of submitting a sitemap containing the most important URLs of the old website (as harvested from the Top Visited Pages from Google Analytics, because no old sitemap was ever generated...), thereby re-pointing Google to all these old pages, but presenting them with a nice 301 redirect this time instead, hopefully causing them to regain their rankings. To your best knowledge, would that help the problems I've outlined above? Could it hurt? Any other tips are welcome as well.
Technical SEO | | Theo-NL
-
Does image domain name matter when using a CDN?
Has anyone done studies on using a different CDN domain name for images on a site? Here is an example: http://cdn.mydomain.com/image.jpg. mydomain.com ranks highly and many images show up in Google/Bing image searches. Is there any actual data that says that using your real domain name for the CDN has benefits versus the default domain name provided by the CDN provider? On the surface, it feels like it would, but I haven't experimented with it.
Technical SEO | | findwell
-
OK to block /js/ folder using robots.txt?
I know Matt Cutts suggests we allow bots to crawl CSS and JavaScript folders (http://www.youtube.com/watch?v=PNEipHjsEPU), but what if you have lots and lots of JS and you don't want to waste precious crawl resources? Also, as we update and improve the JavaScript on our site, we iterate the version number ?v=1.1... 1.2... 1.3... etc., and the legacy versions show up in Google Webmaster Tools as 404s. For example:
http://www.discoverafrica.com/js/global_functions.js?v=1.1
http://www.discoverafrica.com/js/jquery.cookie.js?v=1.1
http://www.discoverafrica.com/js/global.js?v=1.2
http://www.discoverafrica.com/js/jquery.validate.min.js?v=1.1
http://www.discoverafrica.com/js/json2.js?v=1.1
Wouldn't it just be easier to prevent Googlebot from crawling the js folder altogether? Isn't that what robots.txt was made for? Just to be clear: we are NOT doing any sneaky redirects or other dodgy JavaScript hacks. We're just trying to power our content and UX elegantly with JavaScript. What do you guys say: obey Matt, or run the JavaScript gauntlet?
Technical SEO | | AndreVanKets
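For reference, the blocking being debated would just be a couple of lines in robots.txt, something like the sketch below (the Matt Cutts video cited in the question recommends letting bots crawl JS and CSS, so this shows what the option looks like rather than endorsing it):

# Sketch only: keep Googlebot out of the /js/ folder
User-agent: Googlebot
Disallow: /js/
-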
Use of Meta Tag - MSSmartTagsPreventParsing
We've inherited some sites from another developer that had the following tag in place. All references I can find to it are from 2004. What is its purpose, and is it worth including in pages/sites we build?
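Going by the title, the tag in question was presumably the old Internet Explorer "Smart Tags" opt-out, which looks like this:

<meta name="MSSmartTagsPreventParsing" content="true" />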
Technical SEO | | wcksmith
-
How to handle sitemap with pages using query strings?
Hi, I'm working to optimize a site that currently has about 5K pages listed in the sitemap. There are not in fact this many pages. Part of the problem is that one of the pages is a tool where each sort and filter button produces a query-string URL. It seems to me inefficient to have so many items listed that are all really the same page, not to mention wanting to avoid any duplicate content or low-quality issues. How have you found it best to handle this? Should I just noindex each of the links? Canonical links? Should I manually remove the pages from the sitemap? Should I continue as is? Thanks a ton for any input you have!
Technical SEO | | 5225Marketing
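For reference, the canonical-link option mentioned above would mean each filtered or sorted URL declares the base tool page as its canonical, roughly like this (the URLs are placeholders):

<!-- In the head of a filtered URL such as /tool/?sort=price&filter=blue -->
<link rel="canonical" href="https://www.example.com/tool/" />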