Should I noindex 'scripted' files in our portfolio?
-
Hello Moz community,
For our portfolio, we upload PowerPoint exports that are converted into HTML5 to preserve their interactivity and animations. This works pretty nicely! We link to these exported files from our product pages. (We are a presentation design company, so they're quite relevant.)
For example: https://www.bentopresentaties.nl/wp-content/portfolio/ecar/index.html
However, they keep coming up in the crawl warnings, as the exported HTML files contain no text (just code), so we get errors for:
- thin content
- no H1
- missing meta description
- missing canonical tag
I could manually add the last two, but the first two warnings are simply unsolvable.
Therefore I figured we'd probably be better off noindexing all these files… They don't contain any searchable content, and even if they did, our clients' work isn't relevant to our search terms. They're merely examples that happen to be in the form of HTML files.
Am I missing something, or should I noindex these files?
(And if so: is there a way to noindex a whole directory automatically, so I don't have to manually 'fix' every HTML export with a noindex tag in the future? I read that using Disallow in robots.txt wouldn't work, since we will still link to these files as portfolio examples.)
-
Hey Arnout,
I would do a few things:
1. You should be able to edit the HTML of each presentation to add a meta title to the page (see the <head> sketch after this list). That should be a solid starting point. I know that is tedious, but it will help you cover your bases. It looks like you are doing that already.
2. I would still disallow the directory via robots.txt (example below).
3. #2 should work, but as an extra you could add a rel="nofollow" attribute to the links pointing at that section (example below). Here's a link for more details on that: https://support.google.com/webmasters/answer/96569?hl=en
4. To speed the process up, I would use Search Console and request removal of the directory.
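For point 1, here is a minimal sketch of what each export's <head> could contain. The title and description values are hypothetical placeholders; the canonical URL is taken from the example in the question:

```html
<!-- Hypothetical head for one export; adjust title/description per presentation -->
<head>
  <meta charset="utf-8">
  <title>eCar presentation | Bento Presentaties portfolio</title>
  <meta name="description" content="Interactive HTML5 export of the eCar presentation.">
  <link rel="canonical" href="https://www.bentopresentaties.nl/wp-content/portfolio/ecar/index.html">
</head>
```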
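For point 2, a robots.txt rule covering the whole exports directory might look like this (the path is assumed from the example URL in the question):

```
User-agent: *
Disallow: /wp-content/portfolio/
```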
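For point 3, the nofollow hint is just an extra attribute on the existing anchor; the markup here is illustrative:

```html
<!-- Example product-page link to an export, with rel="nofollow" -->
<a href="/wp-content/portfolio/ecar/index.html" rel="nofollow">View the eCar presentation</a>
```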
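On the follow-up question about noindexing an entire directory without editing every file: one approach, not mentioned above, is to send an X-Robots-Tag response header for everything in that folder. This is a sketch that assumes the site runs on Apache with mod_headers available. One caveat: a crawler blocked by robots.txt never fetches these files, so it will not see this header (or any on-page noindex tag); choose either blocking or noindexing per URL.

```apache
# Sketch: .htaccess placed inside /wp-content/portfolio/
# (assumes Apache with mod_headers enabled)
<IfModule mod_headers.c>
  # Ask search engines not to index any HTML export in this directory
  <FilesMatch "\.html$">
    Header set X-Robots-Tag "noindex"
  </FilesMatch>
</IfModule>
```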
Hopefully that helps!
Related Questions
-
Syntax: 'canonical' vs. "canonical" (apostrophes or quotes): does it matter?
I have been working on a site, and all the tools I've used (Screaming Frog & Moz Bar) recognize the canonical, but does Google? This is the only site I've worked on that uses apostrophes: <link rel='canonical' href='https://www.example.com'/> It's apostrophes vs. quotes. Could this error in syntax be causing the canonical not to be recognized? <link rel="canonical" href="https://www.example.com"/>
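For reference, HTML allows attribute values to be enclosed in either single or double quotes, so both of the following are syntactically valid (example.com kept as the placeholder from the question); what matters is that the element is well formed, with a space between attributes:

```html
<!-- Both quoting styles are valid HTML -->
<link rel='canonical' href='https://www.example.com/' />
<link rel="canonical" href="https://www.example.com/" />
```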
Intermediate & Advanced SEO | ccox1
-
Help me to understand why this page doesn't rank
Hello everyone. I am trying to understand why most of my website's category pages don't show up in the first 50 organic results on Google, despite my high website DA and the high PA of those pages. We used to rank high a few years ago; it's not clear why most of those pages have almost completely disappeared. So, just to take one as an example, please help me understand why this page doesn't show up in the first 50 organic search results for the keyword "cello sheet music": http://www.virtualsheetmusic.com/downloads/Indici/Cello.html I really can't explain why, unless we are under some sort of "penalization" or similar (a curse?!)... I have analyzed every possible metric and can't find a logical explanation. Looking forward to your thoughts, guys! All the best, Fab.
Intermediate & Advanced SEO | fablau
-
Can't diagnose this 404 error
Hi Moz community, I have started receiving a load of 404 errors that look like this: this page, http://paulminors.com/blog/page/5/, is linking to http://paulminors.com/category/podcast/paulminors.com, which is a broken link. This is happening with a load of other pages as well. It seems that "paulminors.com" is being appended to the linking page's URL. I'm using WordPress and the SEO by Yoast plugin. I have searched for this link in the source of the linking page but can't find it, so I'm struggling to diagnose the problem. Does anyone have any ideas on what could be causing this? Thanks in advance, Paul
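A symptom like this often comes from an href that is missing its protocol, which is then resolved relative to the current page; the markup below is a hypothetical illustration of that pattern, not the actual source of this site:

```html
<!-- A protocol-less href is treated as a relative path... -->
<a href="paulminors.com">Visit my site</a>
<!-- ...so from /category/podcast/ it resolves to
     /category/podcast/paulminors.com. An absolute URL avoids this: -->
<a href="http://paulminors.com">Visit my site</a>
```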
Intermediate & Advanced SEO | kevinliao
-
Why isn't my site being indexed by Google?
Our domain was originally pointing to a Squarespace site that went live in March. In June, the site was rebuilt in WordPress and is currently hosted with WPEngine. Oddly, the site is being indexed by Bing and Yahoo, but is not indexed at all in Google, i.e. site:example.com yields nothing. As far as I know, the site has never been indexed by Google, neither before nor after the switch. What gives? A few things to note:
- I am not "discouraging search engines" in WordPress
- Robots.txt is fine; I'm not blocking anything that shouldn't be blocked
- A sitemap has been submitted via Google Webmaster Tools, and I have "fetched as Google" and submitted for indexing with no errors
- I've entered both the www and non-www versions in WMT and chosen a preferred one
- There are several incoming links to the site, some from popular domains
- The content on the site is pretty standard and crawlable, including several blog posts
- I have linked up the account to a Google+ page
Intermediate & Advanced SEO | jtollaMOT
-
Organic 'not provided' data: strip out brand?
I cannot strip out brand data from the 'not provided' keywords in Google Analytics. Is this no longer possible? I understand we cannot get the specific keywords anymore, but can we really no longer strip brand out of organic traffic in Google Analytics for keywords that are 'not provided'?
Intermediate & Advanced SEO | pauledwards
-
What is the recommended way to save image files in WP?
Hi, Is there a recommended setting (or even plugin) to use when saving image files on my WordPress blog? Which folders, etc.? Thanks
Intermediate & Advanced SEO | BeytzNet
-
Is 301 redirecting your index page to the root '/' safe to do or do you end up in an endless loop?
Hi, I need to tidy up my home page a little. I have some links to our index.html page, but I just want them to go to the root '/', so I thought I could 301 redirect it. However, is this safe to do? I'm getting duplicate-page notifications about the home page in my analytics reporting tools and need a quick way to fix this issue. Many thanks in advance, David
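A loop is only a risk if the rule also matches the server's internal rewrite of '/' to the index file; matching against the raw request line avoids that. A minimal Apache sketch, assuming mod_rewrite and the index.html name from the question:

```apache
# Redirect external requests for /index.html to the root without looping
RewriteEngine On
# THE_REQUEST is the raw request line ("GET /index.html HTTP/1.1"),
# so this condition does not match internal rewrites of "/"
RewriteCond %{THE_REQUEST} ^[A-Z]+\s/index\.html[\s?]
RewriteRule ^index\.html$ / [R=301,L]
```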
Intermediate & Advanced SEO | David-E-Carey
-
Meta NoIndex tag and Robots Disallow
Hi all, I hope you can spend some time answering the first of a few questions from me 🙂 We are running a Magento site, and the layered/faceted navigation nightmare has created thousands of duplicate URLs! Anyway, during my process of tackling the issue, I disallowed in robots.txt anything in the query string that was not a p (allowed for pagination). After checking some pages in Google, I did a site:www.mydomain.com/specificpage.html search, and a few duplicates came up along with the original, showing "There is no information about this page because it is blocked by robots.txt". I had also added meta noindex, follow on all these duplicates, but I guess it wasn't being read because of robots.txt. So, coming to my questions: Did robots.txt block access to these pages? If so, were they already in the index, and after disallowing them with robots, could Googlebot no longer read the meta noindex? Does meta noindex, follow on pages actually help Googlebot decide to remove those pages from the index? I thought robots.txt would stop and prevent indexation? But I've read this: "Noindex is a funny thing, it actually doesn't mean 'You can't index this', it means 'You can't show this in search results'. Robots.txt disallow means 'You can't index this' but it doesn't mean 'You can't show it in the search results'." I'm a bit confused about how to use these, both to prevent duplicate content in the first place and to address dupe content once it's already in the index. Thanks! B
Intermediate & Advanced SEO | bjs2010