Is there a way to prevent Google Alerts from picking up old press releases?
-
I have a client who wants a lot of old press releases (PDFs) added to their news page, but they don't want these to show up in Google Alerts. Is there a way for me to prevent this?
-
Thanks for the post, Keri.
Yep, the OCR option would still make the image approach for hiding them moot.
-
Harder, but certainly not impossible. I had Google Alerts come up on scanned PDF copies of newsletters from the 1980s and 1990s that were images.
The files recently moved and aren't showing up for the query, but I did see something else interesting. When I went to view one of the newsletters (https://docs.google.com/file/d/0B2S0WP3ixBdTVWg3RmFadF91ek0/edit?pli=1), it said "extracting text" for a few moments, then showed a search box where I could search the document. Google was doing the OCR work on the fly, and it seemed decently accurate in the couple of tests I did. There's a whole bunch of these newsletters at http://www.modelwarshipcombat.com/howto.shtml#hullbusters if you want to mess around with it at all.
-
Well, that is how to exclude them from an alert that they set up, but I think they are talking about anyone who might set up an alert that would find the PDFs.
One other idea I had that I think may help: if you set up the PDFs as images rather than text, it would be harder for Google to "read" the PDFs and catalog them properly for an alert, but that would have much the same net effect as not having the PDFs in the index at all.
Danielle, my other question would be: why do they give a crap about Google Alerts specifically? There have been all kinds of issues with the service, and if someone is really interested in finding out info on the company, there are other ways to monitor a website than Google Alerts. I used to use services that simply monitor a page (say, the news release page) and let me know when it is updated; this was often faster than Google Alerts, and I would find things on a page before people who only used Google Alerts. I think they are being kind of myopic about the whole approach, and blocking for Google Alerts may not help them as much as they think. Way more people simply search on Google than use Alerts.
-
The easiest thing to do in this situation would be to add negative keywords or advanced operators to your Google Alert so the new pages don't trigger it. You can do this by adding operators that exclude an exact-match phrase, a file type, the client's domain, or just a specific directory. If all the new PDF files will be in the same directory or share a common URL structure, you can exclude them with the "-inurl:" operator.
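For example, alert queries along these lines would keep matching mentions of the client while excluding the archive (the client name, domain, and directory below are placeholders; Google web search supports all of these operators, and Alerts honors most of them, though it's worth testing in your own alert):

```
"Acme Corp" -filetype:pdf
"Acme Corp" -inurl:press-archive
"Acme Corp" -site:acmecorp.com
```

The first excludes all PDF results, the second only results whose URL contains the archive directory, and the third the client's entire domain.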
-
That also presumes Google Alerts is anywhere near accurate. I've had it turn up things that have been on the web for years that, for whatever reason, Google thinks are new.
-
That was what I was thinking would have to be done... It's a little complicated as to why they don't want them showing up in Alerts: they do want them showing up on the web, just not as an Alert. I'll let them know they can't have it both ways!
-
Use robots.txt to exclude those files. Note that this takes them out of the web index in general, so they will not show up in searches either.
You need to ask your client why they are putting things on the web if they do not want them to be found. If they do not want them found, don't put them up on the web.
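As a sketch, a robots.txt rule along these lines would do it, assuming the old press releases all live under a hypothetical /press-releases/ directory. Python's standard-library parser is a quick way to sanity-check the rule before deploying it:

```python
import urllib.robotparser

# Hypothetical robots.txt: block all crawlers from the press-release archive.
# Anything under /press-releases/ drops out of the web index, and therefore
# out of Google Alerts as well.
rules = """
User-agent: *
Disallow: /press-releases/
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# Old press-release PDFs under the disallowed path are blocked...
print(parser.can_fetch("*", "https://example.com/press-releases/2004-launch.pdf"))  # False
# ...while the rest of the site stays crawlable.
print(parser.can_fetch("*", "https://example.com/news.shtml"))  # True
```

Keep in mind robots.txt is all-or-nothing here: there is no way to stay in the regular index while hiding only from Alerts.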