Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
Questions created by danatanseo
-
Are you looking for an SEO job? National Pen (Pens.com) is hiring!
Hi all, We have an opening for a Senior SEO Associate. Would love to hire someone in the Moz Community. Here are the details: Sr SEO Associate https://g.co/kgs/Ucwzp7 Cheers, Dana
Jobs and Opportunities | | danatanseo0 -
Bingbot appears to be crawling a large site extremely frequently?
Hi All! What constitutes a normal crawl rate for daily bingbot server requests for large sites? Are any of you noticing spikes in Bingbot crawl activity? I did find a "mildly" useful thread at Black Hat World containing this quote: "The reason BingBot seems to be terrorizing your site is because of your site's architecture; it has to be misaligned. If you are like most people, you paid no attention to setting up your website to avoid this glitch. In the article referenced by Oxonbeef, the author's issue was that he was engaging in dynamic linking, which pretty much put the BingBot in a constant loop. You may have the same type or similar issue particularly if you set up a WP blog without setting the parameters for noindex from the get go." However, my gut instinct says this isn't it and that it's more likely that someone or something is spoofing bingbot. I'd love to hear what you guys think! Dana
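If spoofing is the suspicion, Bing's documented check is a reverse-DNS lookup followed by a forward confirmation. A minimal Python sketch of that check (the function name and usage are illustrative, not from the original post):

```python
import socket

def is_genuine_bingbot(ip):
    """Reverse-DNS check for Bingbot: the PTR record of a genuine
    Bingbot IP resolves under search.msn.com, and the forward lookup
    of that hostname must return the original IP."""
    try:
        host = socket.gethostbyaddr(ip)[0]
    except OSError:
        return False
    if not (host == "search.msn.com" or host.endswith(".search.msn.com")):
        return False
    try:
        return ip in socket.gethostbyname_ex(host)[2]
    except OSError:
        return False
```

Running this against the IPs making the suspicious requests would quickly separate real Bingbot traffic from impostors borrowing its user-agent string.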
Technical SEO | | danatanseo1 -
Why would our server return a 301 status code when Googlebot visits from one IP, but a 200 from a different IP?
I have begun a daily process of analyzing a site's Web server log files and have noticed something that seems odd. There are several IP addresses from which Googlebot crawls for which our server returns a 301 status code on every request, consistently, day after day. In nearly all cases, these are not URLs that should 301. When Googlebot visits from other IP addresses, the exact same pages are returned with a 200 status code. Is this normal? If so, why? If not, why not? I'm concerned that these inaccurate status codes are keeping the site from being crawled as quickly and as often as it otherwise would be. Thanks guys!
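One way to put numbers on this from the log files is to tally response codes per claimed-Googlebot IP. A rough Python sketch, assuming Apache/Nginx combined log format (the regex and function name are illustrative, not from the original post):

```python
import re
from collections import Counter, defaultdict

# Matches a combined-format log line, capturing client IP,
# HTTP status code, and user-agent string.
LINE = re.compile(
    r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" (\d{3}) \S+ "[^"]*" "([^"]*)"'
)

def googlebot_status_by_ip(lines):
    """Return {ip: Counter({status: count})} for requests whose
    user-agent claims to be Googlebot."""
    tallies = defaultdict(Counter)
    for line in lines:
        m = LINE.match(line)
        if m and "Googlebot" in m.group(3):
            tallies[m.group(1)][m.group(2)] += 1
    return tallies
```

IPs that return 301s exclusively would stand out immediately in the output, and those IPs can then be checked (reverse DNS to *.googlebot.com) to confirm they are really Google.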
Intermediate & Advanced SEO | | danatanseo0 -
Can anyone recommend a tool that will identify unused and duplicate CSS across an entire site?
Hi all, So far I have found this one: http://unused-css.com/ It looks like it identifies unused CSS, but perhaps not duplicates? It also has a 5,000-page limit and our site is 8,000+ pages, so we really need something that can handle a site larger than their limit. I do have Screaming Frog. Is there a way to use Screaming Frog to locate unused and duplicate CSS? Any recommendations and/or tips would be great. I am also aware of the Firefox extensions, but to my knowledge they will only do one page at a time? Thanks!
Web Design | | danatanseo0 -
How complicated would it be to optimize our current site for the Safari browser?
Hi all! Okay, here's the scoop. 33% of our site visitors use Safari. 18% of our visitors are on either an iPad or iPhone. According to Google Analytics, our average page load time for visitors using Safari is 411% higher than our site average of 3.8 seconds. So yes, average page load time in Safari is over 20 seconds...totally unacceptable, especially considering the large percentage of traffic using it. While I understand that there are some parameters beyond our control, it is in our own best interest to try to optimize our site for Safari. We've got to do better than 20 seconds. As you might have guessed, it's also killing conversion rates on visits from that browser. While every other browser posted double-digit improvements in conversion rates over the last several months, the conversion rate for Safari visitors is down 36%...translating into tens of thousands in lost revenue. Question for anyone out there gifted in Web design and particularly Web Dev: Do you think it's possible/reasonable to attempt to "fix" our current site, which sits on an ancient platform with ancient code, or is this just not realistic? Would a complete redesign/replatform be the more realistic (and financially sound) way to go? Any insights, experiences and recommendations would be greatly appreciated. If you're someone interested in spec'ing out the project and giving us a cost estimate, please private message me. Thanks so much!
Conversion Rate Optimization | | danatanseo1 -
How is Google crawling and indexing this directory listing?
We have three Directory Listing pages that are being indexed by Google: http://www.ccisolutions.com/StoreFront/jsp/ http://www.ccisolutions.com/StoreFront/jsp/html/ http://www.ccisolutions.com/StoreFront/jsp/pdf/ How and why is Googlebot crawling and indexing these pages? Nothing else links to them (although /jsp/html/ and /jsp/pdf/ both link back to /jsp/). They aren't disallowed in our robots.txt file and I understand that this could be why. If we add them to our robots.txt file and disallow them, will this prevent Googlebot from crawling and indexing those Directory Listing pages without prohibiting it from crawling and indexing the content that resides there, which is used to populate pages on our site? Having these pages indexed in Google is causing a myriad of issues, not the least of which is duplicate content. For example, this file CCI-SALES-STAFF.HTML (which appears on the Directory Listing referenced above - http://www.ccisolutions.com/StoreFront/jsp/html/) clicks through to this Web page: http://www.ccisolutions.com/StoreFront/jsp/html/CCI-SALES-STAFF.HTML This page is indexed in Google and we don't want it to be. But so is the actual page where we intended the content contained in that file to display: http://www.ccisolutions.com/StoreFront/category/meet-our-sales-staff As you can see, this results in duplicate content problems. Is there a way to disallow Googlebot from crawling that Directory Listing page and, provided that we have this URL in our sitemap: http://www.ccisolutions.com/StoreFront/category/meet-our-sales-staff, solve the duplicate content issue as a result? For example: Disallow: /StoreFront/jsp/ Disallow: /StoreFront/jsp/html/ Disallow: /StoreFront/jsp/pdf/ Can we do this without risking blocking Googlebot from content we do want crawled and indexed? Many thanks in advance for any and all help on this one!
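For what it's worth, robots.txt matching works by path prefix, so a single rule on the parent directory would cover all three listings. A minimal sketch of the rules described above (hypothetical - verify against your own URL structure before deploying):

```text
User-agent: *
# Blocks /StoreFront/jsp/ and everything beneath it, which
# includes /StoreFront/jsp/html/ and /StoreFront/jsp/pdf/
Disallow: /StoreFront/jsp/
```

The /StoreFront/category/... pages live outside that path, so they would remain crawlable. One caveat worth knowing: Disallow stops crawling, but URLs Google has already indexed can linger in the index until they drop out or are removed another way.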
Intermediate & Advanced SEO | | danatanseo0 -
How reliable is the link depth info from Xenu?
Hi everyone! I searched existing Q & A and couldn't find an answer to this question. Here is the scenario: The site is: http://www.ccisolutions.com I am seeing instances of category pages being identified as 8 levels deep. For example, this one: http://www.ccisolutions.com/StoreFront/category/B8I This URL redirects to http://www.ccisolutions.com/StoreFront/category/headphones - which Xenu identifies as being only 1 level deep. Xenu does not seem to be recognizing that the first URL 301-redirects to the second. Is this normal for the way Xenu typically reports? If so, why is the first URL indicated to be so much further down in the structure? Is this an indication of site architecture problems? Or is it an indication of problems with how our 301-redirects are being handled? Both? Thanks in advance for your thoughts!
Intermediate & Advanced SEO | | danatanseo0 -
301-Redirects, PageRank, Matt Cutts, Eric Enge & Barry Schwartz - Fact or Myth?
I've been trying to wrap my head around this for the last hour or so and thought it might make a good discussion. There's been a ton about this in the Q & A here. Eric Enge's interview with Matt Cutts from 2010 (http://www.stonetemple.com/articles/interview-matt-cutts-012510.shtml) said one thing and Barry Schwartz seemed to say another: http://searchengineland.com/google-pagerank-dilution-through-a-301-redirect-is-a-myth-149656 Is this all just semantics? Are all of these people really saying the same thing, and have they been saying the same thing ever since 2010? Cyrus Shepherd shed a little light on things when he pointed out that people seemed to be confusing links and 301-redirects, viewing them as the same thing when they really aren't. He wrote, "There's a huge difference between redirecting a page and linking to a page." I think he is the only writer getting down to the heart of the matter. But I'm still in a fog. In this video from April 2011, Matt Cutts states very clearly that "There is a little bit of pagerank that doesn't pass through a 301-redirect," continuing on to say that if this weren't the case, there would be a temptation to 301-redirect from one page to another instead of just linking. VIDEO - http://youtu.be/zW5UL3lzBOA So it seems to me, it is not a myth that 301-redirects result in loss of pagerank. In this video from February 2013, Matt Cutts states that "The amount of pagerank that dissipates through a 301 is currently identical to the amount of pagerank that dissipates through a link." VIDEO - http://youtu.be/Filv4pP-1nw Again, Matt Cutts is clearly stating that yes, a 301-redirect dissipates pagerank. Now for the "myth" part. Apparently the "myth" was about how much pagerank dissipates via a 301-redirect versus a link.
Here's where my head starts to hurt: Does this mean that when Page A links to Page B it looks like this?

A -----> (reduces pagerank by about 15%) -------> B (inherits about 85% of Page A's pagerank if no other links are on the page)

But say the "link" that exists on Page A is no longer good, but it's still the original URL, which, when clicked, now redirects to Page B via a URL rewrite (301 redirect)... based on what Matt Cutts said, does the pagerank scenario now look like this?

A (with an old URL to Page B) -----> (reduces pagerank by about 15%) -------> URL rewrite (301 redirect - reduces pagerank by another 15%) --------> B (inherits about 72% of Page A's pagerank if no other links are on the page)

Forgive me, I'm not a mathematician, so I'm not sure if that 72% is right? It seems to me, from what Matt is saying, the only way to avoid this scenario would be to make sure that Page A was updated with the new URL, thereby avoiding the 301 rewrite. I recently had to rewrite 18 product page URLs on a site and do 301 redirects. This was brought about by our hosting company initiating rules in the back end that broke all of our custom URLs. The redirects were to exactly the same product pages (so, highly relevant). PageRank tanked on all 18 of them, hard. Perhaps this is why I am diving into this question more deeply. I am really interested to hear your point of view.
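The arithmetic in the second scenario checks out: if each hop keeps about 85%, two hops keep 0.85 × 0.85 ≈ 72.25%, so the 72% figure above is roughly right under that assumption. A quick sanity check (the 15%-per-hop figure is the post's working assumption, not a confirmed Google number):

```python
# Hypothetical per-hop retention of 85% (i.e., ~15% dissipation per hop)
retention = 0.85

one_hop = retention                # A links directly to B
two_hops = retention * retention   # A -> old URL -> 301 redirect -> B

print(f"direct link: {one_hop:.2%}")   # 85.00%
print(f"via 301:     {two_hops:.2%}")  # 72.25%
```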
Algorithm Updates | | danatanseo0 -
How does a search engine bot navigate past a .PDF link?
We have a large number of product pages that contain links to a .pdf of the technical specs for that product. These are all set up to open in a new window when the end user clicks. If these pages are being crawled, and a bot follows the link for the .pdf, is there any way for that bot to continue to crawl the site, or does it get stuck on that dangling page because it doesn't contain any links back to the site (it's a .pdf) and the "back" button doesn't work because the page opened in a new window? If this situation effectively stops the bot in its tracks and it can't crawl any further, what's the best way to fix it? 1. Add a rel="nofollow" attribute 2. Don't open the link in a new window so the back button remains functional 3. Both 1 and 2 or 4. Create specs on the page instead of relying on a .pdf Here's an example page: http://www.ccisolutions.com/StoreFront/product/mackie-cfx12-mkii-compact-mixer - The technical spec .pdf is located under the "Downloads" tab [the content is all on one page in the source code - the tabs are just a design element] Thoughts and suggestions would be greatly appreciated. Dana
Technical SEO | | danatanseo0 -
What is the best way to include video transcripts on your pages?
I just posted a question as a comment on this blog post (http://www.audiotranscription.org/6-reasons-you-need-to-have-your-sites-videos-transcribed/comment-page-1/#comment-4311) and realized I probably should have asked the question here because it is really SEO-related. We are in the process of transcribing all of our videos. Can someone recommend how best to post the transcript file on our product pages? For example, here's a product page that has a video review on it: http://www.ccisolutions.com/StoreFront/product/behringer-x32-digital-mixing-system. Would it be best to post a link to the transcript file, or insert the iframe transcript file from DotSub.com (in that case wouldn't DotSub.com be getting credit for our original content?), or should we just let our captions stream from YouTube (which is what's happening right now)? Will Google still crawl the transcript if the only place it's available is on YouTube? If so, doesn't YouTube get credited for that content, and not our site? Even if we do post a transcript on the page, if it appeared at YouTube first, wouldn't YouTube get credit for the original version? We want credit for that content... just not sure how to get it. I understand that moving to Wistia.com would probably solve most of these issues. What I'm trying to figure out is how best to handle new transcripts for existing videos over at YouTube.
Image & Video Optimization | | danatanseo0 -
We have set up 301 redirects for pages from an old domain, but they aren't working and we are having duplicate content problems - Can you help?
We have several old domains. One is http://www.ccisound.com - Our "real" site is http://www.ccisolutions.com The 301 redirect from the old domain to the new domain works. However, the 301-redirects for interior pages, like: http://www.ccisound.com/StoreFront/category/cd-duplicators do not work. This URL should redirect to http://www.ccisolutions.com/StoreFront/category/cd-duplicators but as you can see it does not. Our IT director supplied me with this code from the .htaccess file in hopes that someone can help point us in the right direction and suggest how we might fix the problem:

RewriteCond %{HTTP_HOST} ccisound.com$ [NC]
RewriteRule ^(.*)$ http://www.ccisolutions.com/$1 [R=301,L]

Any ideas on why the 301 redirect isn't happening? Thanks all!
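For reference, a hedged sketch of how that rule pair is commonly written - with the rewrite engine enabled, the dot escaped, and the host match anchored (assumptions: the rules live in the old domain's .htaccess at the document root, and no earlier rule intercepts the request):

```apache
RewriteEngine On
# Match ccisound.com and any subdomain (e.g. www), case-insensitively
RewriteCond %{HTTP_HOST} (^|\.)ccisound\.com$ [NC]
# Preserve the requested path in a single 301 and stop processing
RewriteRule ^(.*)$ http://www.ccisolutions.com/$1 [R=301,L]
```

When the homepage redirect works but deep URLs don't, a common culprit is an earlier rule in the same file (often the storefront's own URL rewriting) matching first and halting with [L] before the domain redirect runs, so rule order is worth checking.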
Technical SEO | | danatanseo0