Question about Robots.txt
-
I just started my own e-commerce website and hosted it on one of the popular e-commerce platforms, Pinnacle Cart. It has a lot of functions like page sorting, a mobile website, etc. I adjusted the URL parameters in Google Webmaster Tools three weeks ago, but I still get the same duplicate errors on meta titles and descriptions from both the Google crawl and the SEOmoz crawl. I'm not sure if I made a mistake choosing Pinnacle Cart, because it is not very flexible when it comes to editing the core website pages. There is no way to adjust the canonical tag or insert a robots meta tag on every page; however, it does let me submit a single robots.txt file and edit the .htaccess. The website pages are in PHP.
For example, this URL:
www.mycompany.com has a duplicate title and description with www.mycompany.com/site-map.html (there is no way to edit the title and description of my sitemap page).
Another error:
www.mycompany.com has a duplicate title and description with http://www.mycompany.com/brands?url=brands
Is it possible to exclude those URLs containing "url=" and my "sitemap.html" in robots.txt? Or are the URL parameter settings in Google enough, and it just takes a lot of time?
Can somebody help me with the format of robots.txt, please? Thanks.
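For instance, would a pattern like this work for the parameterized URLs? (From what I've read, Googlebot supports the * wildcard, although that is an extension to the original robots.txt standard, so other bots may ignore it.)

```
# Hypothetical rule: block any URL whose query string starts with url=
User-agent: *
Disallow: /*?url=
```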
-
Thank you for your reply; this surely helps. I will probably edit the .htaccess.
-
That's the problem with most sitebuilder-type programs: they are very limited.
Perhaps look at your site title and page titles. Usually the site title is included on all of your web pages, followed by the page title, so you could simply name your site www.yourcompany.com and then add an individual page title to each page.
A robots.txt file is not added to every page; it is a single file at the root of your site that only tells the bots what to crawl and what not to.
If you can edit the .htaccess, you should be able to get to the individual pages and insert/change the code for titles. Just be aware that doing it manually can work, but sometimes when you go back to make an edit in the builder, it may undo all of your manual changes. If that's the case, get your site perfect first, then do the individual code changes as the last step.
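If the host allows mod_rewrite, something along these lines in .htaccess could also 301-redirect the ?url= duplicates back to the clean path. This is only a sketch under the assumption that Pinnacle Cart permits custom rewrite rules; test carefully, since the builder may restrict or overwrite .htaccess:

```apacheconf
# Sketch: permanently redirect any URL carrying a url= query
# parameter to the same path with the query string stripped.
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|&)url= [NC]
RewriteRule ^(.*)$ /$1? [R=301,L]
```

The trailing `?` in the substitution is what drops the query string on older Apache versions (Apache 2.4 can use the QSD flag instead).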
Hope this helps.
-
I have no way of adding those either. Oops, thanks for the warning. I guess I'll have to wait for Google to filter out the parameters.
Thanks for your answer.
-
You certainly don't want to block your sitemap file in robots.txt. It takes some time for Google to filter out the parameters, and that is the right approach. If there is no way to change the titles, I wouldn't be too concerned about a few pages with duplicate titles. Do you have the ability to add a noindex,follow meta tag to these pages?
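If so, a tag like this in the head of those pages would keep them out of the index while still letting the links on them pass equity:

```html
<!-- Keeps the page out of the index but lets bots follow its links -->
<meta name="robots" content="noindex,follow">
```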