What coding works for SEO and what coding doesn't?
-
Hi:
I recently learned about inline styles and heard that Google started penalizing sites for them in October. Then I was told that Wix and Flash don't work well for SEO either, because the engines won't crawl them (I think). Does anyone know of a blog that covers everything that doesn't work, so that I could recognize the problems when I look at someone's code? Anyone know of such a resource?
Cheers,
Wes
-
Hi Mark:
Thanks very much for your note. I tried something similar, the SEO Browser, which shows you what the search engines see. Very useful.
Cheers, Wes.
-
The general rule of using HTML for everything is one that I would follow. If you're unsure whether something is crawlable, try downloading the Web Developer extension for Chrome: http://chrispederick.com/work/web-developer/. Then disable JavaScript and plugins and refresh the page. Any content you can't see at that point probably won't be seen by search engines either.
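To see roughly what a non-JavaScript crawler gets from a page's raw HTML, here's a small sketch using only Python's standard library. The sample page is made up for illustration; the point is that text which only appears after a script runs never shows up.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects only the text a non-JS crawler would find in the raw HTML."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0  # depth inside <script>/<style> blocks

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def crawler_text(html):
    """Return the visible text of the static HTML, ignoring scripts/styles."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.parts)

page = """
<html><body>
  <h1>Static heading</h1>
  <div id="app"></div>
  <script>document.getElementById('app').textContent = 'Injected by JS';</script>
</body></html>
"""
print(crawler_text(page))  # → Static heading
```

The "Injected by JS" text exists only inside the script, so it never reaches the extracted text — the same blind spot the disable-JavaScript test in the browser reveals.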
-
Hi Wesley,
I would recommend going over a couple of the technical SEO audits you can find on the web. They give clear insight into what SEOs look for when doing a technical audit. It usually covers far more than just checking for Flash or Wix sites that hide or don't display any content in their HTML.
-
Thanks very much, I'll definitely try the AdWords tool and the cached versions.
Cheers,
wes
-
Basically, a web page has to be crawlable: if an engine can't crawl it, then you don't exist. SEO 101: use HTML for all text. Google has made strides in understanding what sites are about, but to keep things easy for crawlers, use HTML.
As for a tool, you can use Google's AdWords tool and add the URL as the "landing page" to see how a simple bot crawls the site. You can also look at Google's cached versions of pages and use the "text only" option; that should give you a good view of what the crawlers see.
Hope that helps.
Related Questions
-
Redirects and sitemap aren't showing
We had a malware hack and spent 3 days trying to get Bluehost to fix things. Since they made changes, 2 things are happening: 1. Our .xml sitemap cannot be created (https://www.caffeinemarketing.co.uk/sitmap.xml); we have tried external tools. 2. We had 301 redirects from the http (www and non-www versions) and the https (non-www version) throughout the whole website to https://www.caffeinemarketing.co.uk/ and subsequent pages. While the redirects seem to be happening, when you go into tools such as https://httpstatus.io, every version of every page returns a 200 code only, whereas before they were showing the 301 redirects. Have Bluehost messed things up? Hope you can help, thanks
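One way to double-check what a third-party tool reports is to request each URL variant yourself without following redirects. This is a rough sketch using Python's standard library; the live request is commented out and shown only as an example, and the expected-target URL is taken from the question.

```python
import urllib.request
import urllib.error

def fetch_status(url, timeout=10):
    """Return (status_code, Location header) WITHOUT following redirects."""
    class NoRedirect(urllib.request.HTTPRedirectHandler):
        def redirect_request(self, req, fp, code, msg, headers, newurl):
            return None  # make urllib surface the 3xx instead of following it

    opener = urllib.request.build_opener(NoRedirect)
    try:
        with opener.open(url, timeout=timeout) as resp:
            return resp.status, resp.headers.get("Location")
    except urllib.error.HTTPError as e:
        # With NoRedirect, a 301/302 arrives here as an HTTPError
        return e.code, e.headers.get("Location")

def redirects_correctly(status, location, expected_target):
    """A permanent redirect should be a 301 (or 308) pointing at the canonical URL."""
    return status in (301, 308) and location == expected_target

# Live check (network call, uncomment to run):
# status, loc = fetch_status("http://caffeinemarketing.co.uk/")
# print(redirects_correctly(status, loc, "https://www.caffeinemarketing.co.uk/"))
```

If every variant really returns 200 here too, the redirects are gone server-side; if this shows 301s while httpstatus.io shows 200s, something (a CDN, a security layer Bluehost added after the hack) is answering differently depending on the client.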
Technical SEO | Caffeine_Marketing
-
Badges & SEO
Hello, Moz Community! We're working on creating an affiliate badge for events that make our best-of list and we're wondering: if every event website embedded the badge (could be as many as 70), would having the same image hosting URL for each one raise concerns with Google? Thanks!
Technical SEO | EveryActionHQ
-
Strange URLs for client's site
We just picked up a new client and I've been doing some digging around on their site. They have quite a wide variety of URLs that make for a rather confusing experience. One of the milder examples is their "About" page. Normally I would expect something along the lines of: www.website.com/about Instead I see: www.website.com/default.asp?Page=About I'm typically a graphic designer and know basically nothing about code, but I assume this has something to do with how their website was constructed. I'm assuming this isn't particularly SEO-friendly, but it doesn't seem too bad. Until I got to another section of their site. It's a section that logically should look like: www.website.com/training/public-seminars Instead it's: www.website.com/default.asp?Page=MT&Area=Seminars&Sub=MRM Now that's nonsensical to me! Normally if a client has terrible URLs, I'd say let's do some redirects, but I guess I'm a little intimidated by these. Do the URLs have to be structured like this for some reason? Am I missing some important area of coding here? However, the most bizarre example is a link back to their website from yellowpages.com. Where normally I would expect it to lead to their homepage, I get this bizarre-looking thing: http://website1-px.rtrk.com/?utm_source=ReachLocal&utm_medium=PPC&utm_campaign=AssetManagement&reference_id=15&publisher=yellowpages&placement=ypwebsitemip&action_target=listing_website And as you browse through the site, that strange domain stays. For example, the About page is now: http://website1-px.rtrk.com/default.asp?Page=About I would try to Google this but I have no idea where to even start! What is going on with these links? Will we be able to fix them to something presentable without breaking their website?
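For what it's worth, those default.asp?Page=... URLs are usually just how an older classic-ASP CMS routes pages, and they can be 301-redirected to clean paths like any other URL. A toy Python sketch of how a redirect map might be derived — the parameter names Page/Area/Sub are taken from the examples in the question, and a real map should of course come from the CMS itself:

```python
from urllib.parse import urlparse, parse_qs

def clean_path(old_url):
    """Turn a default.asp?Page=...&Area=...&Sub=... URL into a readable path.

    The Page/Area/Sub parameter names and their ordering are guesses based on
    the example URLs above; this is a starting point for a redirect map, not
    a drop-in fix.
    """
    qs = parse_qs(urlparse(old_url).query)
    parts = [qs[key][0].lower() for key in ("Page", "Area", "Sub") if key in qs]
    return "/" + "/".join(parts)

print(clean_path("/default.asp?Page=About"))                     # → /about
print(clean_path("/default.asp?Page=MT&Area=Seminars&Sub=MRM"))  # → /mt/seminars/mrm
```

A script like this can generate the old-to-new URL pairs for the server's redirect rules; nothing about the ?Page= structure forces the site to stay that way. (The rtrk.com domain is a separate issue: it's ReachLocal's call-tracking proxy of the site, not the site itself.)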
Technical SEO | everestagency
-
Google insists robots.txt is blocking... but it isn't.
I recently launched a new website. During development, I'd enabled the option in WordPress to prevent search engines from indexing the site. When the site went public (over 24 hours ago), I cleared that option. At that point, I added a specific robots.txt file that only disallowed a couple directories of files. You can view the robots.txt at http://photogeardeals.com/robots.txt Google (via Webmaster tools) is insisting that my robots.txt file contains a "Disallow: /" on line 2 and that it's preventing Google from indexing the site and preventing me from submitting a sitemap. These errors are showing both in the sitemap section of Webmaster tools as well as the Blocked URLs section. Bing's webmaster tools are able to read the site and sitemap just fine. Any idea why Google insists I'm disallowing everything even after telling it to re-fetch?
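A quick offline sanity check is to feed the exact robots.txt contents to Python's built-in parser. The rules below are illustrative, not the actual photogeardeals.com file. Two common causes of a phantom "Disallow: /" worth ruling out: Google can keep serving a cached copy of the old development robots.txt for a while after launch, and an invisible UTF-8 BOM at the start of the file can make the first line unparseable.

```python
from urllib.robotparser import RobotFileParser

# Illustrative rules standing in for "only disallow a couple of directories"
robots_txt = """\
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A correctly parsed file should allow the homepage and block only the
# listed directories:
print(rp.can_fetch("Googlebot", "http://photogeardeals.com/"))            # → True
print(rp.can_fetch("Googlebot", "http://photogeardeals.com/wp-admin/x"))  # → False
```

If the file passes a check like this but Google still reports Disallow: / on line 2, the copy Google fetched differs from the copy being served now — re-fetching (or just waiting out the cache) usually resolves it.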
Technical SEO | ahockley
-
I am cleaning up a client's link profile and am coming across a lot of directories (no surprise). My question is: if an obvious free-for-all generic directory doesn't look to have been hit by any updates, is it a wise move recommending it for removal?
I am cleaning up a client's link profile and am coming across a lot of directories (no surprise). My question is, if an obvious free-for-all generic directory doesn't look to have been hit by any updates, is it a wise move recommending it for removal on the basis that it is a free-for-all directory and could be hit in the future?
Technical SEO | fazza47
-
Sitemap for pages that aren't on menus
I have a site with a large number of pages, about 3,000, that have static URLs but no internal links and are not connected to the menu. The pages are pulled up through a user-initiated selection process that builds the URL as the user makes selections, but, as I said, the pages already exist with static URLs. The question: should the sitemap for this site include these 3,000 static URLs? There is very little opportunity to optimize the pages in any serious way, if you feel that makes a difference. There is also no chance that a crawler will find its way to these pages through the natural flow of the site. There isn't a single link to any of these pages anywhere on the site. Help?
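If you do include them, generating the file is mechanical once you have the URL list. A minimal sketch in Python's standard library (the example.com URLs are placeholders); the sitemap protocol allows up to 50,000 URLs per file, so 3,000 fit comfortably in one sitemap.xml.

```python
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    """Build a minimal sitemap.xml document for a flat list of page URLs."""
    urlset = ET.Element(
        "urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
    )
    for loc in urls:
        # Each URL gets a <url><loc>...</loc></url> entry
        ET.SubElement(ET.SubElement(urlset, "url"), "loc").text = loc
    return ET.tostring(urlset, encoding="unicode")

# Placeholder URLs standing in for the 3,000 selection-built pages
urls = ["https://example.com/widget-%d" % i for i in range(1, 4)]
xml = build_sitemap(urls)
print(xml.count("<loc>"))  # → 3
```

Since no crawler can reach these pages through the site's own links, the sitemap is effectively the only discovery path, which is exactly the situation sitemaps were designed for.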
Technical SEO | RockitSEO
-
Homepage display for SEO
Hi, which is best, SEO-wise, for the homepage display with a CMS (Joomla here)? 1. Display articles as a "blog", so the articles shown are always recent but never the same (chronological). 2. Always display the same single article, updated from time to time. 3. Are both fine? Thanks a lot in advance.
Technical SEO | mozllo
-
What are SEO factors in re-doing a website?
Most of my work now involves converting older websites to CMS-based sites (in WordPress) and I'm wondering about best practices here. If I create a "dev" or "sandbox" directory for my development work, how do I keep the pages from being indexed while I am working on the new site? Can I "noindex" a directory? What do I do with the old HTML files when the new site goes live? I'm assuming I will do a 301 redirect from domain.com/index.html to the new domain.com/, and also on all of the inner pages that have equivalent pages in the new site. But there will be a lot of old files left that have no equivalent in the new site. Do I just delete these, or noindex/nofollow them?
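On the noindex question: robots.txt can block crawling of a /dev/ directory, but a blocked URL can still end up indexed if something links to it. If the host runs Apache (typical for WordPress hosting), an X-Robots-Tag response header applied per directory is a stronger option. A sketch, assuming Apache with mod_headers enabled and with purely illustrative paths:

```apache
# /dev/.htaccess — keep everything in the sandbox out of the index
# (assumes Apache with mod_headers; adjust for your host)
<IfModule mod_headers.c>
    Header set X-Robots-Tag "noindex, nofollow"
</IfModule>

# After launch, in the site root's .htaccess: map old static files to
# their new WordPress equivalents (example paths only)
Redirect 301 /about.html /about/
Redirect 301 /services.html /services/
```

For the leftover files with no equivalent page, a common choice is to 301 them to the closest relevant new page (or the homepage) rather than delete them outright, so any inbound links they've collected aren't wasted.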
Technical SEO | bvalentine