How to deal with DMCA takedown notices
-
How do you deal with DMCA takedown notices related to product descriptions? With Google, it is simple enough for anyone to submit a DMCA takedown notice, regardless of whether the submitter actually holds rights to the content.
One such example is this: http://www.chillingeffects.org/notice.cgi?sID=1012391. Although Google handled that particular case properly (and did not remove the content), we find that nowadays more and more competitors use DMCA takedowns as an easy way to de-index competing content.
Since the person filing the DMCA takedown is not required to provide any proof of copyright, de-indexing can happen quite quickly.
Try this URL: http://www.google.com/transparencyreport/removals/copyright/domains/mydomain.com/ (replacing mydomain.com with your own domain) to see if you have been affected.
I would like to hear your opinion if you have been affected by takedowns on product descriptions. In my mind, if product descriptions are informative and relate to the characteristics of the product, then takedowns should be denied.
-
Luckily, I have never had that problem. It also seems that you will be alerted in your Webmaster Tools account if someone requests a takedown.
Still, it's an interesting "black hat" technique for removing competitors from the SERPs, particularly those who might not have Webmaster Tools accounts.
Related Questions
-
SEO on Jobs sites: how to deal with expired listings with "Google for Jobs" around
Dear community, when dealing with expired job offers on jobs sites from an SEO perspective, most practitioners recommend implementing 301 redirects to category pages in order to keep the positive ranking signals of incoming links. Is it necessary to rethink this recommendation now that "Google for Jobs" is around? Google's recommendations on how to handle expired job postings do not include 301 redirects: "To remove a job posting that is no longer available: remove the job posting from your sitemap, and do one of the following: remove the JobPosting markup from the page; remove the page entirely (so that requesting it returns a 404 status code); or add a noindex meta tag to the page. Note: do NOT just add a message to the page indicating that the job has expired without also taking one of these actions." Will implementing 301 redirects hurt the chances of appearing in "Google for Jobs"? What do you think?
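Of the three options Google lists, the noindex route is the only one that keeps the page live for visitors. A minimal sketch of what that might look like on a hypothetical expired-posting page (the title and markup here are illustrative, not from any real site):

```html
<!-- Hypothetical expired job page: stays live for visitors, but is   -->
<!-- removed from the sitemap, has its JobPosting JSON-LD stripped    -->
<!-- out, and is de-indexed via the robots meta tag.                  -->
<head>
  <title>Senior Widget Engineer (position filled)</title>
  <meta name="robots" content="noindex">
</head>
```

A 301 to the category page would still pass link signals, but Google's own guidance quoted above lists only these removal options.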
Intermediate & Advanced SEO | grnjbs07175 -
Product search URLs with parameters and pagination issues - how should I deal with them?
Hello Mozzers - I am looking at a site that deals with URLs that generate parameters (sadly unavoidable in the case of this website, with the resources they have available - none for redevelopment). They handle the URLs that include parameters via robots.txt - e.g. Disallow: /red-wines/? Beyond that, they use rel=canonical on every paginated parameter page (such as https://wine****.com/red-wines/?region=rhone&minprice=10&pIndex=2) in search results. I have never used this method on paginated "product results" pages. Surely this is an incorrect use of canonical, because these parameter pages are not simply duplicates of the main /red-wines/ page? Perhaps they are using it in case the robots.txt directive isn't followed, as sometimes it isn't - to guard against the indexing of some of the parameter pages? I note that Rand Fishkin has warned against "a rel=canonical directive on paginated results pointing back to the top page in an attempt to flow link juice to that URL," because "you'll either misdirect the engines into thinking you have only a single page of results or convince them that your directives aren't worth following (as they find clearly unique content on those pages)." Yet I see this time and again on ecommerce sites, on paginated results - any idea why? Now, the way I'd deal with this is: meta robots tags on the parameter pages I don't want indexed (noindex, nofollow - though this is not duplicate content, so I would nofollow, but perhaps I should follow?)
Intermediate & Advanced SEO | McTaggart
Use rel="next" and rel="prev" links on paginated pages - that should be enough. I look forward to feedback, and thanks in advance, Luke0 -
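For reference, the prev/next markup described above might look like this on a hypothetical second page of results (the domain and parameter values are illustrative):

```html
<!-- Hypothetical page 2 of a paginated product list -->
<head>
  <link rel="prev" href="https://example.com/red-wines/?pIndex=1">
  <link rel="next" href="https://example.com/red-wines/?pIndex=3">
  <!-- No rel=canonical back to /red-wines/ here: these pages are not
       duplicates of the top page, which is exactly Rand's objection. -->
</head>
```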
How to deal with ever-scrolling pages?
A website keeps showing more articles when the user presses a "load more" button. This loads additional category pages with a page parameter (e.g. ...?page=1, ...?page=2, etc.), as suggested by Google to get all pages indexed. The problem is that this creates thousands of additional, duplicate pages with duplicate titles, duplicate headers, and very unfocused content. They also show up as duplicate content in Moz. The pages are indexed by Google, but none of them is ranking. What do you guys think: add a nofollow to the load-more button, so search engines will never see them? Thanks for your input!
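One common alternative to nofollowing the button is to render it as a real link that JavaScript enhances, so crawlers see ordinary paginated URLs while users get the load-more behaviour. A hypothetical sketch (the path and class name are made up):

```html
<!-- Crawlable fallback: a plain link to the next page parameter.      -->
<!-- A script intercepts the click and appends the articles in place;  -->
<!-- crawlers that ignore the script simply follow the href.           -->
<a href="/category?page=2" class="load-more">Load more</a>
```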
Intermediate & Advanced SEO | corusent1 -
Pagination and "View All" pages: we currently don't have a canonical tag pointing to View All, as I don't believe it's a good user experience - how best should we deal with this?
Hello All, I have an eCommerce site and have implemented rel="prev" and rel="next" for page pagination. However, we also have a View All page which shows all the products, but we currently don't have a canonical tag pointing to it, as I don't believe showing the user a page with shed loads of products on it is a good user experience, so we haven't done anything with this page. I have a sample URL from one of our categories which may help - http://goo.gl/9LPDOZ. This is obviously causing me duplication issues as well. Also, the main category pages have historically ranked better than Page 2, Page 3, etc. I am wondering what I should do about the View All page - has anyone else had this same issue, and how did they deal with it? Do we just get rid of the View All page, even though Google says it prefers you to have one? I also want to concentrate my link juice on the main category pages, as opposed to having it diluted across all my paginated pages. Does anyone have any tips on how best to do this, and have you seen any ranking improvement from it? Any ideas greatly appreciated. Thanks, Peter
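For context, Google's suggested View All pattern is for each component page to canonicalise to the View All page; dropping View All instead means relying on prev/next with each page canonicalising to itself. A hypothetical sketch of the first option (the URLs are illustrative):

```html
<!-- On /category?page=2, if a View All page is kept: -->
<link rel="canonical" href="https://example.com/category/view-all">
```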
Intermediate & Advanced SEO | PeteC120 -
What's the best way to deal with deleted .php files showing as 404s in WMT?
Disclaimer: I am not a developer. During a recent site migration I have seen a bit of an increase in 404 errors in WMT on pages ending in .php. Clicking the link in WMT just shows "File Not Found" - no 404 page. There are about 20 in total showing in Webmaster Tools, and I want to advise the IT department what to do. What is the best way to deal with this for on-page best practice? Thanks
Intermediate & Advanced SEO | Blaze-Communication0 -
Site Structure: How do I deal with a great user experience that's not the best for Google's spiders?
We have ~3,000 photos that have all been tagged. We have a wonderful AJAXy interface for users where they can toggle all of these tags to find the exact set of photos they're looking for very quickly. We've also optimized a site structure for Google's benefit that gives each category a page. Each category page links to applicable album pages. Each album page links to individual photo pages. All pages have a good chunk of unique text. Now, for Google, the domain.com/photos index page should be a directory of sorts that links to each category page. Alternatively, the user would probably prefer the AJAXy interface. What is the best way to execute this?
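One way to get both is progressive enhancement: serve the category/album/photo structure as ordinary links for crawlers, and let the AJAX interface take over in the browser. A hypothetical sketch (the path and data attribute are made up):

```html
<!-- A real, crawlable link to a category page; the script layer
     intercepts the click and drives the tag-toggle UI instead. -->
<a href="/photos/landscapes" data-tag="landscapes">Landscapes</a>
```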
Intermediate & Advanced SEO | tatermarketing0 -
Canonical tag: how to deal with product variations in the music industry?
Hello here. I own a music publishing company: http://www.virtualsheetmusic.com/ We have several similar items whose only difference is the instrument they have been written for. For example, look at the two item pages below: http://www.virtualsheetmusic.com/score/Canon2Vl.html http://www.virtualsheetmusic.com/score/Canon2Vla.html They are the exact same piece of music, written in two different ways to target two different instrumental combinations. If it weren't for the user reviews that can make those two similar pages different, Google could see them as duplicate content. Am I correct? And if so, how do you suggest tackling such a possible problem? Via canonical tags? How? To get a better idea of the magnitude of the problem, have a look at these search results on our site, which give you product variations of basically the same piece of music - the only difference is in the targeted instruments: www.virtualsheetmusic.com/s.php?k=Canon+in+D www.virtualsheetmusic.com/s.php?k=Meditation www.virtualsheetmusic.com/s.php?k=Flight And, similarly, we have collections of pieces targeting different instruments: www.virtualsheetmusic.com/s.php?k=Wedding+Collection www.virtualsheetmusic.com/s.php?k=Christmas+Collection www.virtualsheetmusic.com/s.php?k=Halloween+Collection Any thoughts and suggestions to tackle this potential page duplication issue are very welcome! Thank you to anyone in advance.
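If the decision were to consolidate the instrument variations, the mechanics would be a canonical tag on each variation pointing at whichever page is chosen as the primary. Note that this would drop the variations from the index, so it only makes sense if they aren't expected to rank on their own. A hypothetical sketch using the two Canon pages above (the choice of primary here is arbitrary):

```html
<!-- On /score/Canon2Vla.html, if Canon2Vl.html is picked as primary: -->
<link rel="canonical" href="http://www.virtualsheetmusic.com/score/Canon2Vl.html">
```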
Intermediate & Advanced SEO | fablau0 -
How to deal with old, indexed hashbang URLs?
I inherited a site that used to be in Flash and used hashbang URLs (i.e. www.example.com/#!page-name-here). We're now off of Flash and have a "normal" URL structure that looks something like this: www.example.com/page-name-here. Here's the problem: Google still has thousands of the old hashbang (#!) URLs in its index. These URLs still work, because the web server doesn't actually read anything that comes after the hash. So, when the web server sees this URL, www.example.com/#!page-name-here, it basically renders this page, www.example.com/#, while keeping the full URL structure intact (www.example.com/#!page-name-here). Hopefully that makes sense. So, in Google you'll see this URL indexed (www.example.com/#!page-name-here), but if you click it you are essentially taken to our homepage content (even though the URL isn't exactly the canonical homepage URL, which should be www.example.com/). My big fear here is a duplicate content penalty for our homepage. Essentially, I'm afraid that Google is seeing thousands of versions of our homepage. Even though the hashbang URLs are different, the content (i.e. title, meta description, page content) is exactly the same for all of them. Obviously, this is a typical SEO no-no. And I've recently seen the homepage drop like a rock for a search of our brand name, which has ranked #1 for months. Now, admittedly, we've made a bunch of changes during this whole site migration, but this #! URL problem just bothers me. I think it could be a major cause of our homepage tanking for brand queries. So, why not just 301 redirect all of the #! URLs? Well, the server won't accept traditional 301s for the #! URLs, because the # seems to screw everything up (the server doesn't acknowledge what comes after the #). I "think" our only option here is to try and add some 301 redirects via JavaScript. Yeah, I know that spiders have a love/hate (well, mostly hate) relationship with JavaScript, but I think that's our only resort... unless someone here has a better way?
If you've dealt with hashbang URLs before, I'd LOVE to hear your advice on how to deal with this issue. Best, -G
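Since the server never sees the fragment, the redirect logic has to run client-side. A minimal sketch of what that script might look like (the function name and URLs are hypothetical; note this is a JavaScript redirect, so Google treats it as a hint rather than a true 301):

```javascript
// Compute the clean URL for a legacy #! URL, or null if there is none.
function hashbangTarget(href) {
  var i = href.indexOf('#!');
  if (i === -1) return null;                      // not a hashbang URL
  var base = href.slice(0, i).replace(/\/$/, ''); // drop trailing slash
  var path = href.slice(i + 2);                   // text after "#!"
  return path ? base + '/' + path : null;
}

// In the page <head>, before anything else renders:
// var target = hashbangTarget(window.location.href);
// if (target) window.location.replace(target);   // no history entry
```

Using location.replace rather than assigning location.href keeps the old hashbang URL out of the browser history, so the back button behaves sensibly.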
Intermediate & Advanced SEO | Celts180