[Very Urgent] More than 100 "/search/adult-site-keywords" Crawl Errors in Search Console
-
I just opened Google Search Console and was shocked to see more than 150 Not Found errors under Crawl Errors. Mine is a WordPress site (and it's updated consistently).
Here's how they show up:
Example 1:
- URL: www.example.com/search/adult-site-keyword/page2.html/feed/rss2
- Linked From: http://an-adult-image-hosting.com/search/adult-site-keyword/page2.html

Example 2 (this surprised me the most when I looked at the Linked From data):
- URL: www.example.com/search/adult-site-keyword-2.html/page/3/
- Linked From:
  - www.example.com/search/adult-site-keyword-2.html/page/2/ (this is showing as if it's from our own site)
  - http://a-spammy-adult-site.com/search/adult-site-keyword-2.html

Example 3:
- URL: www.example.com/search/adult-site-keyword-3.html
- Linked From: http://an-adult-image-hosting.com/search/adult-site-keyword-3.html
How do I address this issue?
-
Here is what I would do:
- Disavow the adult domain(s) that are linking to you.
- The fact that Google Search Console shows an internal page linking this way as well makes me want to know: a) have you always owned this domain, and could a previous owner have linked internally like this, or b) have you been (or are you still) hacked?
In the case of b), this can be really tricky. I once had a site whose crawl report showed sitewide links to various external sites we should not have been linking to. When I looked at the internal pages in my browser, there was no link as far as I could see, even though it showed up in the crawler report.
Here was the trick: the hacker had set up a script to show the link only when a bot was viewing the page. On top of that, we were running mirrored servers, and they had hacked only one of them, so the links appeared only when you spidered that specific mirrored instance as a bot.
So thanks to the hack, not only were we showing bad links to bad sites, we were doing it through cloaking. Two strikes against us. Luckily we caught it quickly and fixed it immediately.
Use a spidering tool or a browser extension that lets you set your user agent to Googlebot, then visit the pages that are supposedly linking internally. You might be surprised.
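If you'd rather script that check, here's a minimal sketch (not a definitive implementation) that fetches the same page with a normal browser user agent and again as Googlebot, then diffs the outbound links. The URL is a placeholder, and it assumes the third-party `requests` library:

```python
# Fetch a page twice -- once as a browser, once as Googlebot -- and
# report links served only to the bot. A cloaked hack shows the spam
# links exclusively to crawlers. www.example.com is a placeholder.
import re
import requests

URL = "http://www.example.com/"  # swap in a page flagged in Search Console

USER_AGENTS = {
    "browser": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; "
                 "+http://www.google.com/bot.html)",
}

def outbound_links(html):
    """Return the set of absolute URLs found in href attributes."""
    return set(re.findall(r'href=["\'](https?://[^"\']+)["\']', html))

pages = {
    name: requests.get(URL, headers={"User-Agent": ua}, timeout=10).text
    for name, ua in USER_AGENTS.items()
}

bot_only = outbound_links(pages["googlebot"]) - outbound_links(pages["browser"])
if bot_only:
    print("Links served ONLY to Googlebot (possible cloaked hack):")
    for link in sorted(bot_only):
        print("  " + link)
else:
    print("No bot-only links found on this page.")
```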
Summary
Googlebot has a very long memory. This may be an old issue that was fixed long ago. If that's the case, just return 404s for the pages that do not exist, disavow the bad domain, and move on. Also make sure you have not been hacked, as that would equally explain what you're seeing.
Regardless, since Google did find these URLs at some point, you need to make sure the issue is fully resolved. Pull all the URLs into a spreadsheet and run Screaming Frog in list mode against them to confirm every one is fixed; if you'd rather script that check, see the sketch below.
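A rough stand-in for Screaming Frog's list mode, assuming a hypothetical urls.txt exported from the Crawl Errors report (one URL per line) and the `requests` library:

```python
# Read URLs from a file and print the HTTP status each one returns.
import requests

with open("urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

for url in urls:
    try:
        # HEAD gets the status code without downloading the whole body.
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
    except requests.RequestException as exc:
        status = "error ({})".format(type(exc).__name__)
    print("{}\t{}".format(status, url))

# Spam URLs that never existed should come back 404 (or 410); anything
# returning 200 deserves a closer look.
```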
-
Yep... still looking to see if anyone can help with this.
-
Oh yeah, I missed that. That's very strange; not sure how to explain that one!
-
Thanks for the response, Logan. What you are saying definitely makes sense. But it makes me wonder why I see something like Example 2 under Crawl Errors. Why does Google Search Console show two Linked From URLs, one from the spammy site and the other from my own website? How is that even possible?
-
I've seen similar situations, but never in bulk and never with adult sites. Basically, one or more domains are linking to your site with URLs that don't exist on it. When bots crawling those sites follow the links pointing to yours, they hit a 404 page, which triggers the error in Search Console.
Unfortunately, there's not much you can do about this, since people (or automated spam programs) can create a link to any site at any time. You could disavow links from those sites, which might help from an SEO perspective, but it won't prevent the errors from showing up in your Crawl Errors report.
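For reference, a disavow file is just a plain-text list you upload through Search Console's Disavow Tool, one `domain:` rule or URL per line. A minimal sketch using the placeholder domains from the examples above:

```
# disavow.txt -- one rule per line; "#" lines are comments.
# Placeholder domains from the examples above.
domain:an-adult-image-hosting.com
domain:a-spammy-adult-site.com
```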