Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
Should I remove the ?replytocom variables in WordPress?
-
I'm using Yoast's WordPress plugin and there is an option to remove the ?replytocom variables. I'm curious what everyone's thoughts are on that, and whether I should do it.
Here's the site if you need to see it.
Thanks!
-
Hey guys, this is a very old post, but because it might be useful to people in the future I thought I would update the URL that Ryan did a great job posting but which no longer goes to the correct place. No one can control when a third-party site changes its URL structure, right?
You can use a plugin that replaces ?replytocom= with #replytocom=.
-
Mine is not indexed in Google Search, but Google Webmaster Tools and SEOmoz both show the error!
Should I remove those links via URL parameters?
They are not indexed. Someone here told me some plugins can help; I have the All in One SEO plugin and removed the ComLuv plugin (it was causing more spam on my blog). The only other plugins are social-network sharing ones. That's it.
Any help for me and my blog would be much appreciated!
-
Thanks Ryan! Have a great 4th of July!
-
What do people not using this plugin do? I'm assuming not many people do this, right?
I presume they accept the WP default options. Our practice and understanding of SEO is what allows us to analyze and make decisions regarding tidbits such as the one you mentioned.
Do you think there is any benefit to doing it, or is it just one of those "hey, why not" sort of things?
I do think there is a benefit. You are impacting a LOT of links - every comment on your site. It may be a tiny 1% benefit type of thing, but the change applies site-wide and will presumably be in place for years.
-
I'll try to find the link where he talks about using pages instead of posts and share it. Curious to hear your thoughts on it.
I'll go ahead and select that option, thanks for your help. (On a side note, what do people not using this plugin do? I'm assuming not many people do this, right?)
Do you think there is any benefit to doing it, or is it just one of those "hey, why not" sort of things?
-
Regarding the new pages instead of posts idea, do you have a link to share?
Regarding the comment url, the page with the comment should be fully indexed either way. By changing the link, you are helping search engines better understand your site. The comment links do not represent a new page or new information.
Google clearly understands WP sites exceptionally well. I am confident you can choose various options and they will still understand those links represent comments. With that said, I would still go with Yoast on this one.
Actually, SEOmoz does it too. Take a look at their blog comments.
-
Thanks for taking the time to check into it. One thing I'm concerned about is how this will affect long-tail SEO / indexing of the comments. How will this affect my organic traffic? (Will it hurt it?)
I don't see these sorts of pages coming up in Google now, so I'm not sure what selecting that option does (and how it affects the site).
Yoast does a few things differently with his site, and I don't always follow his lead. For example, he suggests making new pages instead of new posts for your blog posts. He's the only one I've ever heard say this, or do this.
-
I just took a look at Yoast's site and I now better understand the option to remove the variables. I recommend selecting that option. From the Yoast site:
method remove_reply_to_com [line 939]
string remove_reply_to_com( string $link)
Removes the ?replytocom variable from the link, replacing it with a #comment-<number> anchor. Tags: access: public. Parameters: string $link - the comment link as a string.
Example: http://yoast.com/user-contact-fields-wordpress/#comment-110294
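For anyone curious what that kind of filter looks like in practice, here is a minimal, illustrative sketch - this is not Yoast's actual code, and the function name is made up - that hooks WordPress's comment_reply_link filter and rewrites the reply link's ?replytocom href into a #comment- anchor:

<?php
// Illustrative sketch only - not the plugin's real implementation.
// The 'comment_reply_link' filter passes the full <a> element for the
// reply link; this rewrites href="...?replytocom=123#respond" into
// href="#comment-123".
function example_remove_reply_to_com( $link ) {
    return preg_replace(
        '/href=([\'"])[^\'"]*[?&]replytocom=(\d+)#respond\1/',
        'href=$1#comment-$2$1',
        $link
    );
}
add_filter( 'comment_reply_link', 'example_remove_reply_to_com' );

Because the rewritten href is just a fragment on the page's own URL, search engines treat it as a link to the already-indexed post rather than as a separate page.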
-
Hmm, thanks for the feedback. So do you suggest not blocking those? (And I'll message Yoast also and see what his thoughts are.)
Thanks.
-
I understand the logic behind blocking or removing the variables. They add a lot of extra links to the page, which some webmasters might prefer to manage.
What I would prefer is to reform the link so it looks something like: http://noahsdad.com/treadmill-training-progress#replytocom=22729
I am guessing the "respond" portion of the URL acts as if someone pressed the reply button, which seems unnecessary. If someone clicks the link, whether in search results or otherwise, and is taken directly to the comment, they should be quite happy. If they wish to reply they can hit the reply button.
Google ignores anything after the # character in a URL. Therefore Google would see these as simply a link to the page which should already be indexed.
Perhaps you can ask Yoast about his thoughts.
-
Thanks for the kind words, I agree, he is a cutie.
Will blocking those cause the comments not to be indexed though?
-
Yup - removing those will save you the trouble of duplicate content, since Google by default crawls those as different URLs. If you have comments enabled, there's a link at the bottom of posts with that parameter in the URL (the same as the blog post URL - see here ---> http://noahsdad.com/treadmill-training-progress/?replytocom=22729#respond).
Noah is cute!
Related Questions
-
Will a critical error in WordPress about the memory limit affect SEO rankings?
Will a critical error in WordPress telling me to increase the memory limit affect SEO rankings?
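For context, the usual fix is to raise WordPress's memory limit in wp-config.php with something like the lines below (example values only - what your host permits may differ):

<?php
// In wp-config.php, above the "That's all, stop editing!" line.
// Example values only - use whatever your hosting plan actually allows.
define( 'WP_MEMORY_LIMIT', '256M' );     // memory for front-end requests
define( 'WP_MAX_MEMORY_LIMIT', '512M' ); // memory for admin-side requests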
-
Robots.txt blocked internal resources in WordPress
Hi all, We've recently migrated a WordPress website from staging to live, but the robots.txt was deleted. I've created the following new one:
User-agent: *
Allow: /
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /wp-content/plugins/
Disallow: /wp-content/cache/
Disallow: /wp-content/themes/
Allow: /wp-admin/admin-ajax.php
However, in the site audit on SemRush, I now get the mention that a lot of pages have issues with blocked internal resources in the robots.txt file. These blocked internal resources are all cached and minified CSS elements: links, images and scripts. Does this mean that Google won't crawl some parts of these pages with blocked resources correctly and thus won't be able to follow these links and index the images? In other words, is this any cause for concern regarding SEO? Of course I can change the robots.txt again, but will URLs like https://example.com/wp-content/cache/minify/df983.js end up in the index? Thanks for your thoughts!
-
Removing the Trailing Slash in Magento
Hi guys, We have noticed trailing slash vs non-trailing slash duplication on one of our sites. Example:
Duplicate: https://www.example.com.au/living/
Preferred: https://www.example.com.au/living
So, SEO-wise, we suggested placing a canonical tag on all trailing-slash URLs pointing to the non-trailing-slash versions. However, devs have advised against removing the trailing slash from some URLs with a blanket rule, as this may break functionality in Magento that depends on the trailing slash. The full site would need to be tested after implementing a blanket rewrite rule. Is there any other way to address this trailing-slash duplication issue without breaking anything in Magento? Keen to hear from you guys. Cheers,
-
Removing duplicate content
Due to URL changes and parameters on our ecommerce sites, we have a massive amount of duplicate pages indexed by Google, sometimes up to 5 duplicate pages with different URLs.
1. We've instituted canonical tags site-wide.
2. We are using the parameters function in Webmaster Tools.
3. We are using 301 redirects on all of the obsolete URLs.
4. I have had many of the pages fetched so that Google can see and index the 301s and canonicals.
5. I created HTML sitemaps with the duplicate URLs, and had Google fetch and index the sitemap so that the dupes would get crawled and deindexed.
None of these seems to be terribly effective. Google is indexing pages with parameters in spite of the parameter (clicksource) being called out in GWT. Pages with obsolete URLs are indexed in spite of them having 301 redirects. Google also appears to be ignoring many of our canonical tags as well, despite the pages being identical. Any ideas on how to clean up the mess?
-
I have a lot of spammy links coming to my 404 page (the URLs have been removed now). Should I redirect to home?
I have a lot of spammy links pointing at my website according to Moz. Thankfully all of them were for some URLs that we've long since removed, so they're hitting my 404. Should I replace the 404 with a 301 and redirect that juice to my home page or some other page, or will that hurt my ranking?
-
Best way to remove full demo (staging server) website from Google index
I've recently taken over an in-house role at a property auction company; they have a main site on the top-level domain (TLD) and 400+ agency subdomains!
company.com
agency1.company.com
agency2.company.com...
I recently found that the web development team have a demo domain per site, which is found on a subdomain of the original domain - mirroring the site. The problem is that they have all been found and indexed by Google:
demo.company.com
demo.agency1.company.com
demo.agency2.company.com...
Obviously this is a problem as it is duplicate content and so on, so my question is... what is the best way to remove the demo domains / subdomains from Google's index? We are taking action to add a noindex tag into the header (of all pages) on the individual domains, but this isn't going to get them removed any time soon! Or is it? I was also going to add a robots.txt file into the root of each domain, just as a precaution. Within this file I had intended to disallow all. The final course of action (which I'm holding off on in the hope someone comes up with a better solution) is to add each demo domain / subdomain into Google Webmaster Tools and remove the URLs individually. Or would it be better to go down the canonical route?
-
Best way to permanently remove URLs from the Google index?
We have several subdomains we use for testing applications. Even if we block them with robots.txt, these subdomains still appear to get indexed (though they show as blocked by robots.txt). I've claimed these subdomains and requested permanent removal, but it appears that after a certain time period (6 months?) Google will re-index them (and mark them as blocked by robots.txt). What is the best way to permanently remove these from the index? We can't use a login to block access because our clients want to be able to view these applications without needing to log in. What is the next best solution?
-
Removing Content 301 vs 410 question
Hello, I was hoping to get the SEOmoz community's advice on how to remove content most effectively from a large website. I just read a very thought-provoking thread in which Dr. Pete and Kerry22 answered a question about how to cut content in order to recover from Panda (http://www.seomoz.org/q/panda-recovery-what-is-the-best-way-to-shrink-your-index-and-make-google-aware). Kerry22 mentioned a process in which 410s would be totally visible to Googlebot so that it would easily recognize the removal of content. The conversation implied that it is not just important to remove the content, but also to give Google the ability to recrawl that content to confirm the content was indeed removed (as opposed to just recrawling the site and not finding the content anywhere). This really made lots of sense to me and also struck a personal chord... Our website was hit by a later Panda refresh back in March 2012, and ever since then we have been aggressive about cutting content and doing what we can to improve user experience. When we cut pages, though, we used a different approach, doing all of the below steps:
1. We cut the pages.
2. We set up permanent 301 redirects for all of them immediately.
3. At the same time, we would always remove from our site all links pointing to these pages (to make sure users didn't stumble upon the removed pages).
When we cut the content pages, we would either delete them or unpublish them, causing them to 404 or 401, but this is probably a moot point since we gave them 301 redirects every time anyway. We thought we could signal to Google that we removed the content while avoiding generating lots of errors that way... I see that this is basically the exact opposite of Dr. Pete's advice and the opposite of what Kerry22 used in order to get a recovery, and meanwhile here we are still trying to help our site recover. We've been feeling that our site should no longer be under the shadow of Panda. So here is what I'm wondering, and I'd be very appreciative of advice or answers for the following questions:
1. Is it possible that Google still thinks we have this content on our site, and we continue to suffer from Panda because of this? Could there be a residual taint caused by the way we removed it, or is it all water under the bridge at this point because Google would have figured out we removed it (albeit not in a preferred way)?
2. If there's a possibility our former cutting process has caused lasting issues and affected how Google sees us, what can we do now (if anything) to correct the damage we did?
Thank you in advance for your help,
Eric