Why does Moz Pro detect nonexistent links?
-
I have a Moz Pro campaign for my personal web page, for testing purposes and a bit of learning. But I have a question:
Under Links -> Link Analysis I can see this:
http://maqui.darkbolt.net/project/chat/index.php 404
http://maqui.darkbolt.net/project/docs/index.php 404
http://maqui.darkbolt.net/project/down/index.php 404
http://maqui.darkbolt.net/project/foto/index.php 404
http://maqui.darkbolt.net/project/news/index.php?news=1 404
http://maqui.darkbolt.net/project/project/index.php 404
http://maqui.darkbolt.net/project/ro/index.php 404
http://maqui.darkbolt.net/project/who/index.php 404
Obviously, none of these addresses exist. There are links on the page project/index.php linking to, for example, /chat/index.php. How can I resolve this problem in the stats? Is there really something wrong with the page? As far as I can see, all the links on the page work properly.
-
Hi there,
-
I reviewed several pages that we're reporting as duplicates, and none of them are canonicalized. If you don't want to change the content in the source code so that it's no longer 95% similar to the other pages, you'll need to add canonical tags. The Help Hub has some good info on how to do this. You can also run a search in the community.
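For reference, a canonical tag is a single line in the `<head>` of the duplicate page pointing at the preferred URL. A minimal sketch of the pattern (the URLs here are placeholders, not your actual pages):

```html
<!-- In the <head> of the duplicate page; href is the preferred (canonical) URL -->
<link rel="canonical" href="http://example.com/project/index.php">
```

With this in place, search engines are told to consolidate ranking signals onto the canonical URL instead of treating the near-identical pages as competing duplicates.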
-
If we reported 404s in the initial crawl, it's because they existed at the time. The most recent crawl isn't showing any 404s, so this shouldn't be an issue anymore.
-
Again, there are no 404s reported in this week's crawl for your campaign, so there will be no 404s in the crawl diagnostics CSV. That is where you'll want to look if this comes up again, though.
-
If a page on your site is linked to from anywhere on your site, we will crawl and report on it, up to the page crawl limit set for the campaign. We're not going to report data for nonexistent links, as that isn't physically possible. I hope this helps clear things up.
-
As I said in my first post and the others:
1. I had a duplicate content issue, detected in the first crawl. The duplicate content came from a parameter present on all pages (?login). I solved it by making that parameter a link to the main page. This change resolved the duplicate content across the whole site.
2. In the first crawl the system itself detected 404s. They do not exist, and I haven't changed anything to resolve them. If you look at the URLs in my screenshots, they are strange URLs: the 404s are reported on URLs of the form index.php/project.php. This site doesn't use mod_rewrite or anything similar, so these URLs are impossible.
3. I've tried to download the crawl diagnostics. One file doesn't have the referrer URL; another doesn't include the 404 entries.
I'm trying to find out why the system detects these pages when they don't exist and aren't linked from any other site. If something is wrong, then it's something VERY bad and I need to fix it right now. If not, I think the system is detecting something incorrectly on my page, but I can't understand why.
This worries me a lot, because this page and campaign are a test. It's my personal website, with only a few pages and a few links. If I can't understand the results for this site, how will I understand the results for my main site, which has more than 10,000 pages, multiple domains, social media, and more?
-
Gotcha.
We're not actually reporting 404s in this case. We're reporting that one page is a duplicate of another, which happens when the content in the source code is 95% similar or greater. The pages we're reporting as duplicates did exist at the time of the crawl, which is why they're showing up. If you made any changes after the crawl, there is a chance that the pages no longer exist, in which case the next crawl will not show them as duplicates. They will be reported as 404s, though, so you'll still want to resolve that problem.
Outside of that, you can download the crawl diagnostics CSV to get a list of referrer URLs. This is handy if you're ever unsure how we got to a specific page. Hope this helps clear things up!
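If it helps, here's a minimal sketch of filtering such a CSV for 404 rows and their referrers. The column names ("URL", "Status Code", "Referrer") are assumptions for illustration; check the header row of your actual export and adjust:

```python
import csv
import io

def referrers_of_404s(csv_text):
    """Return (url, referrer) pairs for rows whose status code is 404.

    Column names are hypothetical; match them to the real export header.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    return [(row["URL"], row["Referrer"])
            for row in reader
            if row["Status Code"] == "404"]

# Usage with a made-up two-row export:
sample = (
    "URL,Status Code,Referrer\n"
    "http://example.com/chat/index.php,404,http://example.com/project/index.php\n"
    "http://example.com/index.php,200,\n"
)
print(referrers_of_404s(sample))
```

The referrer column is the key piece here: it tells you which page the crawler was on when it found the broken URL.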
-
Yes, that's correct.
-
Hi there,
We're a bit lost as to what you're trying to ask here. Based on what I've read, it sounds like you're saying that the weekly campaign crawl (not the Link Analysis data) is reporting 404s and you're not sure how or why that is happening. Is that correct?
-
Yes. Link Analysis shows pages such as comusys.php/comusys.php (which, obviously, doesn't exist).
In the crawl analysis CSV I cannot find these pages. You can also run the analysis from your own Open Site Explorer and see that these pages don't appear.
And now Link Analysis doesn't show any 404s.
I've attached some examples from my campaign. I cannot understand why it is detecting these.
Any help will be appreciated.
Thanks,
Screenshot 455.png, Screenshot 456.png, Screenshot 457.png
-
So the link analysis is showing you that there are sites linking to pages that don't exist on your site?
-
The crawl test doesn't include the 404 errors.
-
There is a column to the far right that should show the referring URL. Be sure to scroll until you find that, and then you will see where we found those URLs.
Clarification: this is in the crawl test report. I'm not sure why you're seeing 404s in the link analysis page.
-
I've requested the CSV and downloaded it, but I cannot see the page pointing to the error; I can only see the error itself.
This is the report:
| URL | Status |
| --- | --- |
| http://maqui.darkbolt.net/project/chat/index.php | 404 |
| http://maqui.darkbolt.net/project/docs/index.php | 404 |
| http://maqui.darkbolt.net/project/down/index.php | 404 |
| http://maqui.darkbolt.net/project/foto/index.php | 404 |
| http://maqui.darkbolt.net/project/news/index.php?news=1 | 404 |
| http://maqui.darkbolt.net/project/project/index.php | 404 |
| http://maqui.darkbolt.net/project/ro/index.php | 404 |
| http://maqui.darkbolt.net/project/who/index.php | 404 |
Obviously these are all erroneous; the "who" section isn't inside the "project" one. All the links are valid without the /project part.
I cannot understand why the system is reading these links; on the page itself, the links work fine.
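One common cause of phantom URLs like these is relative link resolution: an href written as chat/index.php (no leading slash) on /project/index.php resolves relative to the /project/ directory, while /chat/index.php (leading slash) resolves from the domain root. A quick sketch of how a crawler resolves both forms, using the domain from the report above:

```python
from urllib.parse import urljoin

# The page the crawler is on when it finds the link.
base = "http://maqui.darkbolt.net/project/index.php"

# Relative href (no leading slash): resolved against the /project/ directory.
print(urljoin(base, "chat/index.php"))   # http://maqui.darkbolt.net/project/chat/index.php
# Root-relative href (leading slash): resolved against the domain root.
print(urljoin(base, "/chat/index.php"))  # http://maqui.darkbolt.net/chat/index.php
```

If any page under /project/ contains the slash-less form, a crawler will legitimately request /project/chat/index.php and hit a 404, even though the link "works" in a browser when the page it appears on happens to sit at the root. Checking the href attributes in the source of project/index.php would confirm or rule this out.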
-
Have you downloaded the CSV of your crawl report? You can look at the column for the referring URL and see what page is pointing to the 404 error.
-
There's no particular reason to keep index.php in the URIs; I have simply defined all links with their full name. For example, for the root page of a section, the URI is defined as /project/index.php, not just /project/.
I can clean them up or leave them; that isn't the problem. My problem is the Moz stats returning non-linked, nonexistent addresses, and I don't know why.
Also, I've found a mistake that made Moz detect duplicate pages (an erroneous link with only a parameter), and I've already corrected it. But I cannot find the reason for the nonexistent pages.
-
Is there a reason you leave index.php in your URLs? It might be easier to strip it off using .htaccess so you can see more clearly what you're dealing with.
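As a sketch of that idea (assuming an Apache server with mod_rewrite enabled; this rule is hypothetical, so test it on a staging copy first), the following 301-redirects any .../index.php request to its bare directory:

```apache
# .htaccess — redirect /project/index.php to /project/ (hypothetical rule)
RewriteEngine On
# Only act on the original browser request, not on internal
# DirectoryIndex subrequests, to avoid a redirect loop.
RewriteCond %{THE_REQUEST} ^[A-Z]+\ /([^?\s]*/)?index\.php[?\s]
RewriteRule ^(.*/)?index\.php$ /$1 [R=301,L]
```

Internal links would then point at the directory URLs (e.g. /project/), and any crawler reports would show one clean URL per section.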