Problem indexing a website developed with Ruby on Rails
-
Hi there!
Here we are again. We are having problems indexing one of our clients, whose website was developed with Ruby on Rails.
Google doesn't get the titles right for almost all of our pages... Has anyone had the same problem? Any feedback would help a lot.
Thanks!
-
Hi Eduardo,
For the titles, this is probably due to Google rewriting page titles based on brand searches. They have been experimenting with various ways of displaying titles in the SERPs for branded searches, and if you are searching for 'jobsandtalent' with no spaces then this is a pretty specific search, so Google is rewriting your title based on it. If you search for your whole page title plus the brand, you will see the normal title as expected. It has nothing to do with Ruby on Rails.
As for the PageRank, this is not a number I place much importance on. I can't remember offhand how often it is updated, but it is not all the time. It is more to the point to be looking at Moz domain and page metrics, if you ask me. That being said, I see your PR as 5 for the root domain www.jobandtalent.com.
I noticed you seem to be using cookie-based redirects from the main domain to the language folder, so that if you have entered /es once, going to the .com main page automatically pushes you to .com/es. This can potentially be problematic in terms of Google properly indexing your site. I cannot say whether it is responsible for your difficulties in rankings, but in a competitive sector like job postings I would certainly look at changing it so that Google (and users) can view all pages of the site in whichever language they choose, without being pushed into a language based on cookies.
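The difference between the two redirect policies can be sketched in plain Ruby. This is only an illustration, not the site's actual Rails code; the method and parameter names are hypothetical, and a real app would do this in a controller before_action:

```ruby
# Hypothetical sketch of homepage redirect logic. With cookie-based
# redirects, a crawler that once received an "es" cookie is always pushed
# to /es and can no longer fetch the default homepage. Safer: redirect
# only when the visitor explicitly picked a language on this request
# (e.g. via a language-switcher link), and ignore the stored cookie.
def redirect_for(path, cookie_lang: nil, explicit_lang: nil)
  return nil unless path == "/"
  return "/#{explicit_lang}" if explicit_lang
  nil # cookie alone does not force a redirect, so "/" stays crawlable
end

redirect_for("/", cookie_lang: "es")   # => nil ("/" stays reachable)
redirect_for("/", explicit_lang: "es") # => "/es"
```

With this policy Googlebot (which carries no language cookie and clicks no switcher) can reach every language version directly.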
Hope that helps!
-
Lynn is correct. If you share an example, we can see if we spot anything.
When you say they don't get the titles right: Google often changes the titles depending on the search term, but a site:domain.com search should bring up the correct titles.
-
Hi Eduardo,
There is no reason why the language the site is developed in would have this effect, since the page titles etc. that the search engines read are in the final HTML produced; if it looks right in the HTML, it should look right to the crawlers. The same goes for the indexing of pages, although in that case there are more potential issues, but again none specific to Ruby on Rails. Care to give an example so we can have a look?
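A quick way to check what the crawlers will read is to inspect the final HTML itself. Here is a minimal Ruby sketch; a static HTML string stands in for a real response, which you would normally fetch with net/http, and the title text is made up for the example:

```ruby
# Extract the <title> from rendered HTML -- this is what search engines
# see, regardless of which framework generated the page.
def extract_title(html)
  match = html.match(%r{<title>(.*?)</title>}im)
  match ? match[1].strip : nil
end

html = "<html><head><title>Jobs in Madrid | Jobandtalent</title></head><body></body></html>"
extract_title(html) # => "Jobs in Madrid | Jobandtalent"
```

If the title extracted from the served HTML is correct, any oddity in the SERPs is down to Google's rewriting, not to Rails.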
Related Questions
-
Is my page being indexed?
To put you all in context, here is the situation: I have pages that are only accessible via an internal search tool that shows the best results for the request. Let's say I want to see the results on page 2; page 2 will have a request in the URL like this: ?p=2&s=12&lang=1&seed=3688 The situation is that we've disallowed every URL that contains a "?" in the robots.txt file, which means that Google doesn't crawl pages 2, 3, 4 and so on. If a page is only accessible via page 2, do you think Google will be able to access it? The URL of the page is included in the sitemap. Thank you in advance for the help!
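The actual robots.txt file isn't shown in the question, but a rule with the effect described (blocking every URL containing a "?") typically takes this shape:

```
User-agent: *
Disallow: /*?
```

With such a rule, Googlebot will not crawl the paginated search result URLs at all, though URLs submitted in the sitemap can still end up indexed without being crawled.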
Technical SEO | alexrbrg
-
How to know how many pages are indexed on Google?
I have a big site. Is there a way to know which pages are not indexed? I know that you can use site:, but with a big site it is a mess to check page by page. Is there a tool or a system to check an entire site and automatically find non-indexed pages?
Technical SEO | markovald
-
Duplicate content problem
Hi there, I have a couple of related questions about the crawl report finding duplicate content: We have a number of pages that feature mostly media - just a picture or just a slideshow - with very little text. These pages are rarely viewed, and they are identified as duplicate content even though the pages are indeed unique to the user. Does anyone have an opinion about whether we'd be better off just removing them, since we do not have the time to add enough text at this point to make them unique to the bots? The other question: we have a redirect for any 404 on our site that follows the pattern immigroup.com/news/* - the redirect merely sends the user back to immigroup.com/news. However, Moz's crawl seems to be reading this as duplicate content as well. I'm not sure why that is, but is there anything we can do about it? These pages do not exist; they just come from someone typing in the wrong URL or from someone clicking on a bad link. But we want the traffic - after all, the users land on a page that has a lot of content. Any help would be great! Thanks very much! George
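One way to see why a crawler flags this: a blanket redirect makes every bad /news/... URL resolve to the same content, so many distinct URLs appear to serve one page. A common alternative is to serve a real 404 (with a helpful body) for URLs that do not exist. A minimal sketch, with illustrative paths that are not from the actual site:

```ruby
# Known paths are hypothetical examples; a real app would consult its
# routing table or database instead of a hard-coded list.
KNOWN_NEWS_PATHS = ["/news", "/news/visa-updates"].freeze

def news_response(path)
  if KNOWN_NEWS_PATHS.include?(path)
    [200, path]              # serve the real page
  else
    [404, "/news-not-found"] # real 404 status instead of redirecting to /news
  end
end

news_response("/news/visa-updates") # => [200, "/news/visa-updates"]
news_response("/news/tyop")         # => [404, "/news-not-found"]
```

A 404 page can still link prominently to /news to keep the traffic, while telling crawlers the URL genuinely does not exist.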
Technical SEO | canadageorge
-
No index on subdomains
Hi, We have a subdomain that is appearing in the search results - I want to hide this as it looks really bad. If I were to add the noindex tag to the subdomain's URLs, would this affect the whole domain or just that subdomain? The main domain is vitally important - it is just that subdomain I need to hide. Many thanks
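For reference, the noindex directive applies per page, so adding it only to pages served from the subdomain would not affect the main domain. The standard meta-tag form looks like this:

```html
<!-- placed in the <head> of each page served from the subdomain -->
<meta name="robots" content="noindex">
```

Where you control the server configuration, the equivalent `X-Robots-Tag: noindex` HTTP response header on the subdomain achieves the same result without editing the pages.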
Technical SEO | Creditsafe
-
AJAX and High Number Of URLS Indexed
I recently took over as the SEO for a large ecommerce site. Every month or so our Webmaster Tools account is hit with a warning for a high number of URLs. In each message they send there is a sample of problematic URLs. 98% of each sample is not an actual URL on our site but is an AJAX request URL that users are making. This is a server-side request, so the URL does not change when users make narrowing selections for items like size, color, etc. Here is an example of what one of those looks like: Tire?0-1.IBehaviorListener.0-border-border_body-VehicleFilter-VehicleSelectPanel-VehicleAttrsForm-Makes We have over 3 million indexed URLs according to Google because of this. We are not submitting these URLs in our sitemaps; Googlebot is making lots of AJAX selections according to our server data. I have used the URL Parameter Handling tool to target some of the parameters that were set to "let Google decide" and set them to "no URLs" with those parameters to be indexed. I still need more time to see how effective that will be, but it does seem to have slowed the number of URLs being indexed. Other notes: 1. Overall traffic to the site has been steady and even increasing. 2. Googlebot crawls an average of 241,000 URLs each day according to our crawl stats. We are a large ecommerce site that sells parts, accessories and apparel in the powersports industry. 3. We are using the Wicket framework for our website. Thanks for your time.
Technical SEO | RMATVMC
-
Pages to be indexed in Google
Hi, We have 70K posts on our site, but Google has scanned 500K pages, and these extra pages are category pages or user profile pages. Each category has a page and each user has a page. Since we have 90K users, Google has indexed 90K pages for users alone. My question is: should we leave them as they are, or should we block them from being indexed? We get unwanted landings on these pages and a huge bounce rate. If we need to remove them, what needs to be done? A robots block, or noindex/nofollow? Regards
Technical SEO | mtthompsons
-
Does Switching Web Hosts Hurt SEO?
A few months ago, my site was shut down by BlueHost because of performance issues, so I moved it to WP Engine and cleaned up most of the plug-ins. Since then, my search engine traffic has decreased by over 50%. Does switching web hosts hurt SEO? Thanks!
Technical SEO | JodiFTM
-
How do I eliminate indexed products?
Please help! We got clobbered by Penguin and are at risk of having to close down after 10 years. We have been trying to figure out why, and we now believe it might be because of duplicate content. We added 2" inserts in March (over 500): http://www.trophycentral.com/inserts1.html Even though each is a different product, SEOmoz is saying they are considered duplicate content. Given the timing, we think this might be the cause, even though it is totally legitimate. Question: since these are now indexed and since we can't easily add content quickly, what is the best way to handle this situation? A noindex tag? Is there a way to let Google know that their algorithm is destroying legitimate businesses??
Technical SEO | trophycentraltrophiesandawards