URL Encoding
-
Hi,
SEOmoz has finished crawling the site and surprised me with nearly 4,000 301s, all of them on my deal pages.
Example of the 301:
As you can see from the URL above, it returns a 404, but the URL is actually sent as below.
For some reason the SEOmoz crawler is converting the = to %3D and reporting it as a 301, even though it returns a 404.
Is this an error on SEOmoz's part, or is there an error on my site?
When I do a fetch as Googlebot, everything comes back fine with the = sign, and every other tool I have tried is OK too,
so I'm not sure why SEOmoz is seeing it differently and then listing the URL as a 301.
I'm hoping this is just a glitch on the report tool's part, as I've been struggling since a recent site 301.
-
Kind of answered my own question, to a point.
The = sign encodes to %3D (I have no idea why SEOmoz is picking this up on its crawl). The 301 is being done by my .htaccess: because the encoding contains an uppercase D, the rule 301s the uppercase to lowercase, and that then lands on a 404 page, which is not good. I have fixed this, but I still wonder why SEOmoz is seeing %3D instead of an = sign.
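For what it's worth, the encoding itself is easy to verify: per RFC 3986, = percent-encodes to %3D, and the hex digits in a percent-encoding are case-insensitive, so %3D and %3d identify the same character. A minimal sketch in Python (the query string here is a made-up placeholder):

```python
from urllib.parse import quote, unquote

# '=' percent-encodes to '%3D' when it is not treated as a safe character
assert quote("page=deals", safe="") == "page%3Ddeals"

# RFC 3986 makes the hex digits case-insensitive, so %3D and %3d decode
# to the same '='. A rule that blindly lowercases the whole URL turns
# %3D into %3d without changing what the URL means -- but it still fires
# a 301 that a crawler will follow and report.
assert unquote("page%3Ddeals") == unquote("page%3ddeals") == "page=deals"
```

If that matches what the .htaccess rule is doing, excluding percent-encoded sequences from the lowercasing redirect should remove the 301-to-404 chain.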
Related Questions
-
URL Length Issue
Moz is telling me the URLs are too long. I did a little research and found out that the length of the URLs is not really a serious problem; in fact, others recommend ignoring the situation. Even on their blog I found this explanation: "Shorter URLs are generally preferable. You do not need to take this to the extreme, and if your URL is already less than 50-60 characters, do not worry about it at all. But if you have URLs pushing 100+ characters, there's probably an opportunity to rewrite them and gain value. This is not a direct problem with Google or Bing - the search engines can process long URLs without much trouble. The issue, instead, lies with usability and user experience. Shorter URLs are easier to parse, copy and paste, share on social media, and embed, and while these may all add up to a fractional improvement in sharing or amplification, every tweet, like, share, pin, email, and link matters (either directly or, often, indirectly)." And yet, I have these questions: in this case, why do I get this error telling me that the URLs are too long, and what are the best practices to resolve it? Thank you
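The flag is usually nothing more than a length threshold. A quick way to see which URLs actually cross the ~100-character mark the quote describes is to filter a crawl export yourself; a minimal sketch (the file name and "URL" column are assumptions about your export format):

```python
import csv

# Assumed: a crawler CSV export with a "URL" column -- adjust to your file.
with open("crawl_export.csv", newline="") as f:
    long_urls = [row["URL"] for row in csv.DictReader(f) if len(row["URL"]) > 100]

# Worst offenders first; these are the rewrite candidates worth looking at.
for url in sorted(long_urls, key=len, reverse=True):
    print(len(url), url)
```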
Moz Pro | Cart_generation
-
WWW used in research URL, or not to WWW
Long-time user, infrequent poster... thanks for taking my question. When I go to gather a series of data elements on a company's URL, the data changes (sometimes dramatically) depending on whether the 'www.' is added to the URL, and it seems related more to Page data than Domain data. My question is about which data I should be using to assess the real strength of the site/page. Is there a 'best practice' here, is it personal preference, or is there an actual difference in the performance of the www vs the non-www version?
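One quick diagnostic before deciding which numbers to trust: check whether one hostname 301s to the other. If it does, the redirect target is the canonical version, and its page-level metrics are the ones to use; if both answer 200 independently, the two hosts really are separate pages to the tools. A minimal sketch (example.com stands in for the domain in question):

```python
import urllib.request

for start in ("http://example.com/", "http://www.example.com/"):
    resp = urllib.request.urlopen(start)            # follows redirects
    print(start, "->", resp.geturl(), resp.status)  # same final URL = one canonical host
```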
Moz Pro | SWGroves
-
Looking For URL Anchor Text Metrics Definitions
Running some keyword difficulty reports that are showing some interesting data around URL Anchor Text Metrics. But to fully understand them, I need some definitions, which I cannot find anywhere. Can someone point me to definitions of these terms:
Exact Anchor Text Links
% Links w/ Exact Anchor Text
Linking Root Domains w/ Exact Anchor Text
% Linking Root Domains w/ Exact Anchor Text
Partial Anchor Text Links
% Links w/ Partial Anchor Text
Partial Anchor Text Root Doms.
% Linking Root Domains w/ Partial Anchor Text
Also, if, say, Exact Anchor Text Links is bolded purple, does that mean that URL has more Exact Anchor Text Links than any other URL in the report? Thanks, David
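No official definitions to hand, but under the usual reading - "exact" meaning the anchor text is identical to the target keyword, "partial" meaning it merely contains it - the metrics reduce to simple counts over the link list. A sketch of that assumed interpretation (the sample data is made up, and note that "partial" includes "exact" under this definition):

```python
# Assumed definitions: exact = anchor text equals the keyword (case-insensitive),
# partial = anchor text contains the keyword. Each link is (anchor_text, root_domain).
links = [
    ("blue widgets", "siteA.com"),
    ("best blue widgets 2012", "siteB.com"),
    ("click here", "siteA.com"),
]
keyword = "blue widgets"

exact = [l for l in links if l[0].lower() == keyword]
partial = [l for l in links if keyword in l[0].lower()]

print("Exact Anchor Text Links:", len(exact))
print("% Links w/ Exact Anchor Text:", 100 * len(exact) / len(links))
print("Linking Root Domains w/ Exact Anchor Text:", len({d for _, d in exact}))
print("Partial Anchor Text Links:", len(partial))
```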
Moz Pro | BraveheartDesign
-
Does SEOmoz recognize duplicated URLs blocked in robots.txt?
Hi there. Just a newbie question... I found some duplicated URLs in the SEOmoz Crawl Diagnostics reports that should not be there; they are intended to be blocked by the site's robots.txt file. Here is an example URL (Joomla + VirtueMart structure): http://www.domain.com/component/users/?view=registration and here is the blocking content in the robots.txt file:
User-agent: *
Disallow: /components/
Question is: will this kind of duplicated URL error be removed from the error list automatically in the future? Should I keep track of which errors should not really be in the error list? What is the best way to handle this kind of error? Thanks and best regards, Franky
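One thing worth checking before blaming the crawler: that Disallow rule may not actually match the example URL, since the rule says /components/ (plural) while the URL path is /component/ (singular). Robots rules are simple path prefixes, which is easy to verify with Python's standard-library parser (domain.com copied from the question above):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /components/",
])

# /component/users/... does not start with /components/, so the rule does
# not block it, and a crawler is allowed to fetch (and report) the URL.
print(rp.can_fetch("*", "http://www.domain.com/component/users/?view=registration"))
# -> True
```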
Moz Pro | Viada
-
Tool for tracking actions taken on problem URLs
I am looking for tool suggestions that assist in keeping track of problem URLs and the actions taken on them, and that help deal with tracking and testing a large number of errors gathered from many sources. What I want is to be able to export lists of URLs and their problems from my current set of tools (SEOmoz campaigns, Google Webmaster Tools, Bing Webmaster Tools, Screaming Frog) and import them into a kind of centralized DB that shows all of the actions that need to be taken on each URL, while at the same time removing duplicates, since each tool finds a significant amount of the same issues.
Example case: SEOmoz and Google identify URLs with duplicate title tags (example.com/url1 and example.com/url2), while Screaming Frog sees that example.com/url1 contains a link that is no longer valid (so it terminates in a 404). When I import the three reports into the tool, I would like to see that example.com/url1 has two issues pending - a duplicated title and a broken link - without duplicating the entry that both SEOmoz and Google found. I would also like to see historical information on the URL: whether I have written redirects to it (to fix a previous problem), or whether it used to be a broken page (i.e. a 4XX or 5XX error) and is now fixed. Finally, I would like to not be bothered with the same issue twice. As Google is incredibly slow at updating their issues summary, I would like to not import duplicate issues (so the tool should recognize that the URL is already in the DB and has been resolved).
Bonus for any tool that uses the Google and SEOmoz APIs to gather this info for me. Bonus bonus for any tool smart enough to check and mark issues as resolved as they come in (for instance, if a URL has a 403 error, it would check on import whether it still resolves as a 403; if it did, it would add it to the issue queue, and if not, it would be marked as fixed).
Does anything like this exist? How do you deal with tracking and fixing thousands of URLs and their problems, and the duplicates created by using multiple tools? Thanks!
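Nothing exactly like this comes to mind, but the core merge logic is small enough to prototype: key every imported issue by (URL, issue type), union the sources, and skip rows already marked fixed. A rough sketch of that idea (field names and statuses are invented, using the example case from the question):

```python
from collections import defaultdict

# issues[url][issue_type] -> {"sources": set, "status": "open" or "fixed"}
issues = defaultdict(dict)

def import_report(source, rows):
    """rows: iterable of (url, issue_type) pairs from one tool's export."""
    for url, issue_type in rows:
        entry = issues[url].setdefault(issue_type, {"sources": set(), "status": "open"})
        if entry["status"] == "fixed":
            continue                  # don't re-open an already-resolved issue
        entry["sources"].add(source)  # same issue from two tools = one entry

import_report("seomoz", [("example.com/url1", "duplicate title"),
                         ("example.com/url2", "duplicate title")])
import_report("google", [("example.com/url1", "duplicate title")])
import_report("screaming_frog", [("example.com/url1", "broken link")])

print(issues["example.com/url1"])
# -> two pending issues, each listed once no matter how many tools found it
```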
Moz Pro | prima-253509
-
Crawl reports URLs with duplicate content, but it's not the case
Hi guys!
Some hours ago I received my crawl report, and I noticed several records flagging URLs with duplicate content, so I went and opened those URLs one by one. None of them really has duplicate content, but I have a concern: the website is a product showcase, and many articles are just images with an href behind them. Many of those articles use the same images, so maybe that's why the SEOmoz crawler's duplicate content flag is raised - and I wonder if Google has a problem with that too. See for yourself how it looks: http://by.vg/NJ97y and http://by.vg/BQypE. Those two URLs are flagged under "URLs with Duplicate Page Content (up to 5)"; please mind the language (Greek) and try to focus on the URLs and content. PS: my example is simplified just for the purpose of my question.
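For what it's worth, crawlers compare the visible text of pages, and image-only pages carry almost none of it, so two pages sharing a template can look near-identical to a bot even when they show different products. A rough way to see your pages the way the crawler does (the URLs are placeholders for your two flagged pages, and the tag-stripping regex is deliberately naive):

```python
import re
import urllib.request
from difflib import SequenceMatcher

def visible_text(url):
    html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    html = re.sub(r"(?si)<(script|style).*?</\1>", " ", html)  # drop script/style
    return re.sub(r"(?s)<[^>]+>", " ", html)                   # strip remaining tags

a = visible_text("http://example.com/flagged-page-1")
b = visible_text("http://example.com/flagged-page-2")
print("similarity:", SequenceMatcher(None, a, b).ratio())      # ~1.0 = near-duplicate
```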
Moz Pro | MakMour
-
Can overly dynamic URLs be overcome with canonical meta tags?
I tried searching for questions regarding dynamic URLs and canonical tags, but I couldn't find anything, so hopefully this hasn't been covered. There are a large number of overly dynamic URLs reported in our site crawl (>7,000). I haven't looked at each of these, but most either have a canonical meta tag or are flagged as FOLLOW, NOINDEX pages. Will these be enough to overcome any negative SEO impact that may come from overly dynamic URLs? We are down to almost 0 critical errors, and this is now the biggest problem reported by the site crawl after "too many on-page links".
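One way to sanity-check the bulk of them without opening 7,000 pages is to confirm that each dynamic URL declares a canonical pointing at a clean URL. A quick-and-dirty sketch (the regex is naive about attribute order, and the URL is a placeholder):

```python
import re
import urllib.request

def canonical_of(url):
    """Return the rel=canonical href of a page, or None if absent."""
    html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    m = re.search(r'<link[^>]*rel=["\']canonical["\'][^>]*href=["\']([^"\']+)',
                  html, re.I)
    return m.group(1) if m else None

print(canonical_of("http://example.com/page?sort=price&color=red&sessionid=123"))
```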
Moz Pro | afmaury
-
We were unable to grade that page. We received a response code of 301. URL content not parseable
I am using the SEOmoz web app for SEO on my site and have run into this issue; please see the attached file for a screenshot of the error. I am running an on-page scan for the following URL: http://www.racquetsource.com/squash-racquets-s/95.htm. When I run the scan I receive the following error: "We were unable to grade that page. We received a response code of 301. URL content not parseable." This page had worked previously. I have tried to verify my 301 redirects and am unable to resolve this error, and I have checked to make sure I don't have a redirect defined for this URL. I can perform other on-page scans and they work fine. Is this a known problem with the tool? Any help would be appreciated.
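When a grader reports a 301 that a browser doesn't show, tracing the hop-by-hop redirect chain usually reveals the culprit (e.g., a trailing-slash hop or a rule that only fires for certain requests). A small sketch that prints each hop, using the URL from the question:

```python
import urllib.error
import urllib.parse
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    # Returning None makes redirects surface as HTTPError instead of being followed.
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

def trace(url, max_hops=10):
    opener = urllib.request.build_opener(NoRedirect)
    for _ in range(max_hops):
        try:
            resp = opener.open(url)
            print(resp.status, url)  # final destination
            return
        except urllib.error.HTTPError as e:
            loc = e.headers.get("Location")
            print(e.code, url, "->", loc)
            if loc is None:
                return               # a real error (404 etc.), not a redirect
            url = urllib.parse.urljoin(url, loc)

trace("http://www.racquetsource.com/squash-racquets-s/95.htm")
```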
Moz Pro | GeoffBatterham