Image and Content Management
-
My boss has decided that on the new website we are building, he wants all content and images protected by preventing users from copying text or saving images.
Some of the information and images are proprietary, but most is available for public viewing; nevertheless, he wants it protected from copying and saving. We would still want to keep the content indexable and use appropriate alt tags, etc.
I wanted to find out whether there is any SEO reason, backed by facts, why this would be a bad idea. Would implementing code that prohibits (or at least hinders) saving images and copying content penalize us?
-
If your boss really believes there is a site that offers such protection, please share the URL. There are sites that disable right-clicking, but that only removes a single method of copying content.
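For context, the usual tricks those sites rely on look something like the sketch below. This is illustrative only, not a recommendation — every line of it runs in the visitor's own browser, where it can simply be switched off:

```html
<!-- Typical client-side "copy protection" (illustrative sketch only).
     None of this survives View Source, DevTools, disabling JavaScript,
     or a screenshot. -->
<style>
  /* Blocks mouse/keyboard text selection */
  body { -webkit-user-select: none; user-select: none; }
</style>
<script>
  // Suppress the right-click context menu and clipboard copy events.
  document.addEventListener('contextmenu', e => e.preventDefault());
  document.addEventListener('copy', e => e.preventDefault());
</script>
```

Note that search engine crawlers ignore all of this and read the raw HTML regardless, which is why such scripts neither protect the content nor, by themselves, block indexing — they mostly just inconvenience legitimate visitors.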
-
I totally concur. You can't stop anyone from jacking your content or graphics.
-
That was what I was thinking.
I knew it would be nearly impossible, but I wanted to make sure I wasn't out of the loop on something. I'm pretty sure he saw a website that had text selection disabled or something of that nature, but there are always workarounds. Thanks!
-
Would implementing code to prohibit (or at least make it difficult) to save images and copy content, penalize us?
It is not possible to prevent content theft. Anyone who can read your content can copy it. The same applies to images. It is a mistake to try. At best, you will prevent a relatively small percentage of people with little understanding from being able to copy your content, and that is not the group people normally wish to impede.
Let's say you did create a new technology that actually prevented most forms of copying. If a person can see your webpage, they can take a screenshot of it at the push of a single button, then paste that image into Photoshop. From there they can crop out your images and save them. They could also run OCR over it to capture all the text. This can be done in 60 seconds.
There is great interest in achieving the goal you seek, and I am not aware of any organization that has come close to achieving it on any level. If it somehow were achieved, the page would not be readable by search engines. Search engines have to be able to read every character of text through their web crawlers; in effect, search engines scrape your content. Any technology that prevented copying text would prevent search engines from crawling it as well.
The best your boss can do is register a copyright on the website and/or images. It is highly likely your boss's fears will be realized at some point, and his content or images will be stolen if they are of high quality. A copyright registration will provide him with the maximum amount of legal protection. You can also employ the services of companies that embed a tracking code of sorts into images; these companies then crawl the web each month looking for your protected content on other sites.