RSS Hacking Issue
-
Hi
Checked our original RSS feed - added it to Google Reader, and all the links go to the correct pages. I have also set up the feed in FeedBurner; however, when I click the links in FeedBurner (which should go to my own website's pages), they all go to spam sites, even though the link titles and excerpts are correct.
This isn't a WordPress blog RSS feed either, and we are on a very secure server.
Any ideas whatsoever? There is no info online anywhere and our developers haven't seen this before.
Thanks
-
Thanks so much for your help - I think this should fix it. You've saved me hours of time. It's our own CMS, so I should be able to fix it today.
-
I don't think you're being linked to spam, specifically. What you're seeing is the FeedBurner page linking your post titles to feeds.feedburner.com/[whatever the guid of the post is] -- URLs of entirely different feeds from entirely different sites.
I believe this is the problem referenced in the FeedBurner FAQ - http://www.google.com/support/feedburner/bin/answer.py?hl=en&answer=79014&topic=13190 - "Why don't my feed content item links work?"
In which case, the isPermaLink attribute on the feed guids should be false. I'd post about this on the support forum for your CMS.
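For illustration, here's a minimal sketch of that fix - a script that finds guid elements claiming isPermaLink="true" whose value isn't an absolute URL, and flips them to false. This isn't FeedBurner's or any particular CMS's code; the sample feed and guid value are made up to mirror the snippet from the validator.

```python
# Illustrative sketch only: mark non-URL RSS guids as isPermaLink="false".
# Per RSS 2.0, isPermaLink defaults to "true" when absent, so a bare
# database ID like "129" gets treated as a link target unless it's flagged.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<rss version="2.0"><channel>
  <title>Example blog</title>
  <item>
    <title>First post</title>
    <link>http://example.com/first-post</link>
    <guid isPermaLink="true">129</guid>
  </item>
</channel></rss>"""

def fix_guids(feed_xml: str) -> str:
    root = ET.fromstring(feed_xml)
    for guid in root.iter("guid"):
        is_permalink = guid.get("isPermaLink", "true").lower() == "true"
        value = (guid.text or "").strip()
        if is_permalink and not value.startswith(("http://", "https://")):
            # A bare ID is not a URL, so it must not be advertised
            # as a permalink.
            guid.set("isPermaLink", "false")
    return ET.tostring(root, encoding="unicode")

fixed = fix_guids(SAMPLE_FEED)
print(fixed)
```

The alternative fix, if the CMS can do it, is to emit the post's full permalink URL as the guid instead of the internal ID.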
-
Mmm, actually maybe if I change that guid entry that came up in the validator to false that will fix it?
-
Some answers to your checks:
- Feed is correct - still my feed
- No FeedMedic reports - says everything is fine
- FeedBurner URL and the URL people are directed to from the blog are the same
- No malware reports
- Ran tool on blog article page, rss, feedburner page, and feedburner article link page - doesn't pick up any malware
- Validity check brings up one issue: guid must be a full URL, unless the isPermaLink attribute is false - the flagged value is "129"
- Current guid entry for that article is <guid isPermaLink="true">129</guid>
Sure, here's the feed: http://feeds.feedburner.com/EnjoyTravelBlog (check in Chrome or IE, as for some reason someone looking in Firefox didn't see them)
Here are screencasts of what I see if I click on any of the article titles:
- http://screencast.com/t/PNvrItea3ky - see articles 1 & 2
- http://screencast.com/t/bZI8qlg74 - what I see if I click on article 1; clicking the link goes to a spam site
- http://screencast.com/t/cER9Fm9RTunm - what I see if I click on article 2
It's like this for every single article - there are even links to Baidu, eBay, and all sorts in there.
Would welcome suggestions on other forums to post on if this goes beyond technical SEO!
-
A few avenues to check out:
- Log into your FeedBurner account and make sure the feed it's processing is still your blog's actual feed.
- Under FeedBurner's "Troubleshootize" tab, check if there are any FeedMedic reports, and under Tips and Tools run the feed validity checks.
- Check and make sure the FeedBurner URL shown in your account is the same one people are being directed to on the blog.
- Go to Google Webmaster Tools. Under Diagnostics, check and see if there are any malware reports.
- Run a malware scan on the site URL and the Feedburner URL through a tool like http://sitecheck.sucuri.net/scanner/
Can you provide us with more information? Screenshots showing the links and the URLs they direct you to?