Posts made by MickEdwards
-
RE: Since implementing GDPR, has anyone seen website traffic plummet?
I have had this problem on 2 sites. Can't remember the plugin, but it was a PITA. The problem was the default configuration, where the visitor had to opt in before any data was stored, so no cookies and no tracking. The plugin settings had to be reversed so that the visitor opts out instead, meaning cookies are kept by default and everything is tracked.
-
RE: What is the difference between 301 redirects and backlinks?
A 301 redirect is the header response sent when a page no longer exists or is no longer required, and the redirected page is loaded instead. Typically a 301 redirect is created when a page is taken down. A 301 redirect is not bad for a site. Redirect chains can be bad - so 301 to 301 to 301 etc.
What you want to look at is if you have 301s in your menus, homepage or main content linking to main pages. These links should be either removed or updated to the new URL. There should be no 301s here, it's good housekeeping.
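For reference, the 301 header response itself looks something like this on the wire (URL illustrative):
HTTP/1.1 301 Moved Permanently
Location: https://www.example.com/new-page/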
A backlink is a link from an external site pointing to a page on your site.
-
RE: What is the best SEO way to categorize products on an ecommerce site
.../category-green/some-product.html
.../category-brand/some-product.html
.../category-widget/some-product.html
A clean .../some-product.html URL should exist, and the canonical tag within the top three URLs should point to this clean product URL. The 3 above can be crawled and indexed, but the canonical is assisting Google in understanding which is the correct product URL. A product can rightly exist in multiple categories.
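So, for example, within the head of each of the three category URLs above (markup illustrative):
<link rel="canonical" href="https://www.example.com/some-product.html">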
If the platform is such that a clean product URL is not possible (urghh!), then a strategy needs to be developed to choose one of the category/product URLs as the canonical.
My preference though is to have a category structure and all product links coming from those category pages are clean product URLs in the first place, with self referencing canonicals.
-
RE: Changing Url Removes Backlink
If it is an image URL that is being proliferated with spammy backlinks, you can do one of the following, based on the links actually being harmful:
- Rename the image and replace it in the post. But only if the existing image has no SEO value, real traffic coming in to it etc.
- Set the image URL to noindex, only if the image is not organically useful.
- Add the culprits to a disavow file.
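For reference, the disavow file itself is one entry per line, something like this (entries illustrative):
# low quality directories
domain:spammy-directory.example
http://low-quality-site.example/page-with-link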
-
RE: What is the best SEO way to categorize products on an ecommerce site
The way I prefer is to have multiple optimised categories that a product rightly sits in, but the actual product URL is clean with no category in the URL. The page canonical is then self referencing.
If that is not possible and the product URL/s contains the category path, like on Shopify for example, then it is ok to have the product URL canonical set to the clean product URL without category structure. But that only works if that clean URL actually exists (in Shopify it does).
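On Shopify, for example, the collection-scoped product URL would carry something like (paths illustrative):
<!-- on /collections/widgets/products/some-product -->
<link rel="canonical" href="https://www.example.com/products/some-product">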
-
RE: Using hreflang="en" instead of hreflang="en-gb"
The .co.uk domain is already geo-targeted to the UK, so unless you are targeting other countries/languages you shouldn't need hreflang at all.
-
RE: Correct use of schema for online store and physical stores
The head office happens to be the e-commerce store. Then there are actual physical stores that sell the same products physically.
So we do want visibility for 'HQ' as the main 'entity'. Yes if anyone has a problem they contact the shop or HQ/e-commerce store. So with that in mind I still need clarification of the schema to use.
-
RE: How Many Links to Disavow at Once When Link Profile is Very Spammy?
Hi Alan,
"Most 503 error links are from low quality directories, so I would disavow anyway. " Yes if they are low quality non-human edited then yes i'd disavow.
"We would disavow the majority of our links in one shot. Any risk of doing this?" If ranking is impacted by a toxic link profile then disavowing only 75% of them will not recover you 75%, probably nothing.
"Is there a reasonable chance that our ranking would improve significantly by disavowing these links? How long does it take Google to process the disavow? Is there a way of checking if Google has actually processed the disavow?" How long is a piece of string. The timeframe depends on how long it takes Google to crawl the toxic links.
Will this improve your rankings? I don't know is the simple answer. The best bet is to take the links on merit and disavow the ones you know are clearly toxic, manipulated etc. But as soon as you mention improvement it makes me wonder if you have had a hit on organic traffic. If that is the case, and it was around September onwards, you may be looking at a broader E-A-T issue, so disavowing would not resolve the bigger problem. That's pure guesswork but you get my point.
I don't know anyone who has any significant success with requesting links to be removed, other than sharks trying to charge to do so. You could argue that the 'good' sites will help, the poor sites ignore/charge, but it's a bit too much time and effort to use that signal in any way.
Mick
-
RE: How To SEO Sinhala Teledramas Keyword In Google
The first way is not to spam your keyword on here.
-
RE: How Many Links to Disavow at Once When Link Profile is Very Spammy?
I think the most important aspect of your question is to not trust a tool. The tool might flag domains/URLs as spam or manipulated links but the most important thing is to manually inspect each domain. I have had reports from tools where the domain in question is actually not a problem at all when inspected.
If you are getting 404, 403 or 503 error messages the links are gone. You wouldn't be penalised by Google for these because they no longer exist. There is no need to disavow because they don't exist, but you wouldn't be causing a problem if you did. The potential issue is that those header responses 'could' change back to a 200 (OK). I'd be inclined to monitor them at this stage and add them to the disavow if the status changes. A 503 is a temporary/maintenance response, so those links may come back, and you would want to check what you'd be disavowing, as the link may be good.
With regard to disavowing all the links. If you have a toxic link profile you have an issue you need to address and resolve as quickly as you can, so if you determine there are 100 toxic links/domains you will want to add them to the disavow in one hit and hope that you have captured them all.
But please be aware that if some of the links are just a bit spammy/low quality then Google looks like it takes the view to ignore those links anyway.
Some things you need to manually check are:
- the relevance of the link
- the quality of the content
- the anchor text (e.g. have you got exact match, close match anchor on multiple dubious quality posts)
- the ranking of the page/domain
- the placement of the link on the page (e.g. is it a site-wide footer link)
- the quality throughout the domain
- is the link paid for but dofollow (e.g. are there signs on the site that content can be somehow 'purchased', advertorial)
-
Correct use of schema for online store and physical stores
I have been getting conflicting advice on the best way to implement schema for the following scenario.
There is a central e-commerce store that is registered to its own unique address, which is "head office". There are a few physical shops, each of which has its own location and address. Each shop has its own landing page within /our-stores/.
So each page on the website has the Organisation schema for the central 'organisation', something like:
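<!-- sketch only: names, URLs and numbers are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Company",
  "url": "https://www.example.com/",
  "logo": "https://www.example.com/logo.png",
  "contactPoint": {
    "@type": "ContactPoint",
    "telephone": "+44-1234-567890",
    "contactType": "customer service"
  }
}
</script>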
Then on each physical store landing page is something like the following as well as the Organisation schema:
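<!-- sketch only: all details are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Company - Townsville",
  "url": "https://www.example.com/our-stores/townsville/",
  "telephone": "+44-1234-567891",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "1 High Street",
    "addressLocality": "Townsville",
    "postalCode": "AB1 2CD",
    "addressCountry": "GB"
  }
}
</script>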
Is this correct? If it is should I extend LocalBusiness with store URL and sameAs for GMB listing and maybe Companies House registration?
It's also been suggested that we should use LocalBusiness for the head office of the company, then Department with the type Store. But I'm not sure about that?
-
Truncated product names
Due to the constraints of the category page layout, many of the products in certain categories have their product titles truncated, in some cases missing 2-5 words depending on the product in question. The product name which displays on the category page is lifted straight from the product page itself, so it's not possible to do something like "product name including spec..." with a trailing ... to indicate there is a bit more.
I'm assuming not, but just wanted to check that Google will not frown on this. Text is not being hidden, it just does not render fully in the restricted space. So there is a scenario of 'bits of' text in the source not displaying on the rendered page.
-
Breadcrumbs with JSON-LD
Just a quick question re implementation of JSON-LD breadcrumbs.
You are here: Acme Company → Electronics → Computers → Laptops
So in this example Laptops is my current page, shown without a link in the visible on-page breadcrumb.
When implementing JSON-LD BreadcrumbList, should Laptops be included in the schema snippet, or should it commence from Computers back to home?
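For illustration (URLs invented), here is the kind of snippet I mean, with Laptops included as the final, link-less item - the variant I'm asking about:
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Acme Company", "item": "https://www.example.com/" },
    { "@type": "ListItem", "position": 2, "name": "Electronics", "item": "https://www.example.com/electronics/" },
    { "@type": "ListItem", "position": 3, "name": "Computers", "item": "https://www.example.com/electronics/computers/" },
    { "@type": "ListItem", "position": 4, "name": "Laptops" }
  ]
}
</script>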
-
Search function rendering cached pages incorrectly
On a category page the products are listed via/in connection with the search function on the site. Page source and front-end match as they should.
However, when viewing a browser-rendered version of a Google cached page, the URL for the product has changed from, as an example -
https://www.example.com/products/some-product
to
https://www.example.com/search/products/some-product
The source is a relative URL in the correct format, so /search/ is being added at browser render time.
The developer insists this is OK: the query string in the Google cache page result URL is triggering the behaviour and confusing the search function - all locally. I can see this, but I just wanted feedback on whether Google internally will only ever see the true source, or whether its internal rendering mechanism could possibly trigger similar behaviour?
-
RE: Main menu duplication
A couple of other issues were uncovered with the browser rendering of certain collections. Cleaned up the menu duplication and these. Monitoring.
-
RE: Will switching my domain cause SEO suicide?
If it is a new domain name there will be no domain authority and you will have to build up the site again from scratch. The 301s will help kick start the process but you will need to put in time and effort to restore your rankings.
-
Main menu duplication
I am working on a site that has just gone through a migration to Shopify at the very same time as Google did an update in October. So problems from day 1.
All main menu categories have subsequently over the past 6 weeks fallen off a cliff. All aspects of the site have been reviewed in terms of technical, link profile and on-page, with the site in better shape than several ranking competitors.
One issue that i'd like some feedback on is the main menu which has 4 iterations in the source.
- desktop
- desktop (sticky)
- mobile
- mobile (sticky - appears as a second desktop sticky but I assume for mobile)
These "duplicated" menus contain the top level menu items only. The rest of the nested menu items are included within the last mobile menu option.
So
- desktop menu in source doesn't include any of the sub-menu items, the mobile version carries all these
- there are 4 versions of the top level main menu items in source
Should I be concerned? Considering we have significant issues should this be cleaned up?
-
RE: Robots.txt: how to exclude sub-directories correctly?
I mentioned both. You add a meta robots to noindex and remove from the sitemap.
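i.e. something like this in the head of each page you want kept out of the index:
<meta name="robots" content="noindex, follow">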
-
RE: Robots.txt: how to exclude sub-directories correctly?
Install the Yoast SEO plugin for WordPress and use that to restrict what is indexed and what is allowed in the sitemap.
-
404s clinging on in Search Console
What is a reasonable length of time to expect 404s to be resolved in Search Console? A mass of 404s had built up from directory changes and filtering URLs. These have all been fixed, but of course there are some that slipped the net. How long is it reasonable to expect the old 404s that don't have any links to drop away from Search Console? New 404s are still being reported over 4 months later, and 'First detected' is always showing a date later than the date the 404s were fixed.
Is this reasonable? I've never seen 404s be this resilient and fail to clean up like this. We manually fix these 404s and, like popcorn, more turn up.
Just to add, the bulk of the 404s came into existence around a year ago and were left for around 8 months.
-
RE: GMB departments - what is the setup
Hi Joy,
there are several aspects to the business and there are indeed identifiable departments. They could be chosen from Marine Engineer, Yacht Broker, Marina, Shipyard and so on.
-
RE: GMB departments - what is the setup
Hi Miriam, thanks for the reply.
There is a main reception where the customer is directed to the department they require. Typically the services required cross over, but the departments are really distinct from each other and have clear working areas - the customer goes to a specific location for their requirements. The problem is that, in the name of excellent customer service, the business will bend over backwards and may serve customers from the start position if they can, rather than sending them to the different offices.
The problem I have then is that I don't know how Google will perceive a central switchboard that all the direct telephone numbers redirect back to? Although I see Google says "...whenever possible."
-
GMB departments - what is the setup
Without duplicating, I'm looking at a similar setup to this thread.
There is a marine business that has within it 3 distinct departments (sales & brokerage, marina and shipyard), all within the same location but with unique telephone numbers, grouped into sections on the website with those unique numbers. I'm clarifying whether there are genuine, distinct customer-facing locations for each department. We want to create department pages because each one has unique opening hours.
However, looking at this I have some questions I'm unsure about.
- Each department has a unique landline number that routes through to the main switchboard when called - does that matter?
- The departments kind of make the idea of a core G+ page/map page redundant, because there are only 3 areas of the business and the departments cover them. So can you have just departments, or should we set up the main 'hub' page as the brand page, linking to the 3 departments?
- Where can I find information on how to correctly set up a department so the connection/hierarchy is in place? Looking around I can't find any instructions.
-
RE: Central Index anchor text
To make matters worse I can't remove any listings, because I can't log in to Central Index while the whole process is being managed by Moz Local. I think I'll also talk to support about this.
-
Central Index anchor text
I'm working on a site that has had some really bad technical issues over a period of time. We have carefully resolved all of them, rinsed through, rechecked for issues and so forth.
Organic traffic started to move slightly up but has now taken a real backward step.
Taking a look at the link profile which has not really been worked on at all we have a mass of Central Index derived links coming through from sites such as gethampshire, heathrowpages and so on. Within each of these directories the business is listed under pages for areas it doesn't belong in, so for example in gethampshire it is listed under printers in Warwick or printers in Surbiton.
The end result is that 65% (and quickly growing) of the anchor text is 'website', 90% of it dofollow. They are now coming through like popcorn.
My instinct is to remove these listings from the profile. Has anyone else had this kind of issue with Central Index?
-
RE: Query string category pagination
Hi Andy,
thanks for the reply. Yes, each p=* is identical to the base category URL; the only difference is the small handful of products on each p=*, which aren't really offering those pages any uniqueness at all in the way they are presented. So from that point of view the canonical makes sense. However, I don't want to take Google's focus away from cleanly crawling all the products within p=*.
So rel=next & prev for me opens up duplication issues, as there are no "parts" of content - it's going to be effectively the same category textual content on each page.
However, if I implement &view-all and set the canonical to that version, I'm worried Google may not play ball.
-
Query string category pagination
I've been reading some posts on the merits and pitfalls of using rel=prev, rel=next and canonical, but I just wanted to double check the right solution.
example.com/birth-announcements
example.com/birth-announcements?p=2
example.com/birth-announcements?p=3
With a small selection of products on each variation.
So at the moment there is a canonical on all of them to the base example.com/birth-announcements. The problem is we are having difficulty getting the products within p=* indexed. From all I've read, I don't think rel=prev/rel=next is the way to go. Would the solution (or best way to go) be to create a "view-all" filter and set that as the canonical URL, so all product URLs are in clear focus for Google? The volume of products won't (shouldn't) have too much of an impact on page load. Or am I wrong and rel=prev/rel=next is a feasible solution?
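To illustrate the two options (the ?view-all parameter is hypothetical):
<!-- option A: on every ?p=* URL, canonical to a view-all version -->
<link rel="canonical" href="https://example.com/birth-announcements?view-all=1">
<!-- option B: rel=prev/next, shown here as it would appear on ?p=2 -->
<link rel="prev" href="https://example.com/birth-announcements">
<link rel="next" href="https://example.com/birth-announcements?p=3">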
-
Hiding H1
I'm working on a site that has hidden H1 content. So for example:
Video/Film Production
with the page using that hidden "Video/Film Production" markup as the title. There are no other H1 tags on the page.
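To be clear, the pattern is along these lines (the exact hiding mechanism on the site may differ):
<h1 style="display:none;">Video/Film Production</h1>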
I have taken this up with their dev and they have suggested this has to be implemented this way due to some issues with displaying in iOS. They are digging their heels in and suggesting it stays as is.
How much of a risk would you say this is? Well, I'm actually looking for a bit of back-up here.
-
RE: No content using Fetch
I got there in the end. They have a Wistia video loading on the homepage, but Wistia's robots.txt blocks this resource. When the resource is blocked, the CSS loads a holding image, which is configured to fill the whole page. So when Googlebot crawls, it cannot render anything beyond this image and the area defined in the CSS. Dev is fixing.
-
RE: Anyone seen anything from Penguin yet?
Not yet, not a sniff.
-
Staff??
Apparently on my last question my profile status says Staff? Is there something I should know??
-
No content using Fetch
Wooah, this one makes me feel a bit nervous.
The cache version of the site homepage shows all the text, but I understand that is the html code constructed by the browser. So I get that.
If I Google some of the content it is there in the index and the cache version is yesterday.
If I Fetch and Render in GWT then none of the content is available in the preview - neither the Googlebot nor the visitor view. The whole preview is just the menu, a holding image for a video and a tag line for it. There are no reports of blocked resources apart from a Wistia URL. How can I decipher what is blocking Google if it does not report any problems?
The CSS is visible for reference, for example:
<section class="text-within-lines big-text narrow">
...class="data"> some content...
Ranking is a real issue, in part due to a poorly functioning main menu. But I'm really concerned with what is happening with the render.
-
RE: Can lazy loading of images affect indexing?
Although Google can now process JS, I would still be nervous about choosing a theme/CMS that uses lazy loading.
According to John Mueller from Google:
“Is Googlebot able to trigger lazy loading scripts- lazy loading images for below the fold content” – “This is a tricky thing.”
On lazy loading images John says “test this with Fetch as Google in Webmaster Tools” and “imagine those are things that Googlebot might miss out on.”
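A common belt-and-braces approach (my suggestion, not from John's quotes) is a noscript fallback, so the image is present in the raw HTML even if the lazy loading script never fires:
<img class="lazy" data-src="/images/product.jpg" alt="Product">
<noscript><img src="/images/product.jpg" alt="Product"></noscript>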
-
RE: SEO website migration gone wrong - noticed too late?
In my experience, if you fix all the technical issues, make sure all redirects are properly in place and do some link building, the site should recover well, even 2-3 months later. I rinse through and through on the technical side.
The issues start coming to the fore when content is killed, keywords are changed, directory structure is changed. You know how it goes.
-
RE: Issues with Magento layered navigation
"Magento is churning out tons of 404 error pages like this https://www.tidy-books.co.uk/childrens-bookcases-shelves/show/12/l/colour:24-4-9/letters:6-7 which google is indexing"
That page is returning a 404 header response so it does not exist. Therefore Google cannot index it.
Without seeing Magento it's difficult to be certain what settings you have and/if you have a bug.
What you can do (maybe you have) is to add the attributes into Webmaster Tools > Crawl > URL Parameters and set to no URLs. You could also add the directory /sort-by/ to robots.txt to disallow.
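For example, assuming the path really is /sort-by/, the robots.txt entry would be:
User-agent: *
Disallow: /sort-by/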
Using your example of https://www.tidy-books.co.uk/childrens-bookcases-shelves/colour/natural-finish-with-letters/letters/lowercase, well this has an internal rewrite to https://www.tidy-books.co.uk/childrens-bookcases-shelves/letters/lowercase?colour=20 which is not indexed.
It looks like not only do you need to resolve any MANAdev issues but you need to do an audit on the site as I think you have several issues.
-
RE: Removing massive number of no index follow page that are not crawled
Personally I don't agree with setting internal filter URLs to nofollow. I set noindex as you have done and add the filter attributes to the Search Console > Crawl > URL Parameters.
For the option "Which URLs with this parameter should Googlebot crawl?" you can set "No URLs" (if the filters are uniform throughout the site).
"No URLs: Googlebot won't crawl any URLs containing this parameter. This is useful if your site uses many parameters to filter content. For example, telling Googlebot not to crawl URLs with less significant parameters such as
pricefrom
andpriceto
(likehttp://www.examples.com/search?category=shoe&brand=nike&color=red&size=5&pricefrom=10&priceto=1000
) can prevent the unnecessary crawling of content already available from a page without those parameters (likehttp://www.examples.com/search?category=shoe&brand=nike&color=red&size=5)"
-
RE: I just uncovered a massive 500+ domain PBN linking to competitors
Each domain in a PBN should leave no footprint and if you have found it then Google will find it/them.
For the 4 sites have you got any proof of relationship - contact details, whois, duplicate content, same plugins/layout, a very good reason to believe they are from the same source?
It would be very foolish of them to leave such a huge footprint as to link to all the sites.
Have you checked the PBN domains to see if they share the same C class? e.g. http://smallseotools.com/class-c-ip-checker/ or http://www.authoritydomains.com/bulk-ip-checker.php
If they were totally lazy then there is probably only a handful of hosting accounts. If that was proven then I wouldn't have any difficulty talking to Google if the SERPs were being distorted by more than one site dominating but basically from the same source. If it was a single site I'd tend to sit back and watch, while building a strong link profile.
-
RE: Wrong redirect used
It sounds like you have done all the right things. I agree with Vettyy that you should use something like Screaming Frog to crawl all the old URLs, just to double check there are no hanging 404 pages or missed http pages. Switching to 301 will take a few days to filter through, so you could run cache:domain.com in Google on your most important pages to monitor when they are being crawled. Also, do you have a mix of http and https in Google at present? It may very well be something to just wait and monitor.
A good tool for sniffing URL headers is Fiddler.
-
RE: Are Directories Dead?
Name, Address, Phone - you should provide comprehensive contact details. Part of your service could/should be to double check their details on their site and make sure they identically match your listing. Businesses are still lax in this, or try to extend their address to capture more than one county, as if it will somehow do them good.
They also need to give you fresh new content about themselves, not some default directory description.
-
RE: Are Directories Dead?
IMHO directories in the main are dead. Where they do succeed is when they are very niche and actually human managed. If you have the skills and passion in your niche then there is a good possibility you will make it work and become a valuable resource. Using NAP in the detail will provide valuable citation for the business listing. SEO impact will be minimal, but for the business it is still a movement in the right direction.
You'll need to heavily market the directory to the business customer base as an invaluable one stop shop for their needs. That will then drive referral traffic to the listings and allow you to charge a reasonable fee for inclusion.
-
Breadcrumb issue
The site has 2 main categories for scooters. One category is the Type of Scooter menu item with nested types, and the second is the Manufacturer menu item with nested makes. So all the scooters can be found in either of these categories, depending on how you search.
The Manufacturer category is mainly thin content and set as noindex, as well as the nested makes categories.
However when searching for products Google is invariably using the breadcrumb for the Manufacturer category rather than the Type of Scooter category, which is indexed.
Should it be of concern that Google is using the breadcrumbs of non-indexed URLs, even if they are followed and the site is therefore navigable?
-
RE: What is the "UPDATE" indicate in the Google Search Console Query Reports?
If you hover over the line you'll get a "Learn More" link. After much fiddling with the mouse and buttons you can get to it and it will detail all the recent updates.
https://support.google.com/webmasters/answer/6211453#search_analytics
-
RE: Location based IP Redirect causing Google Search Issue
Have you implemented hreflang to all domains and URLs to indicate to Google your intentions with the domains/URLs?
Also, have you checked Analytics >> Audience >> Geo >> Location for www.example.com.au for a period prior to your implementation, to see if you have excluded a section of traffic? I know you said ranking, but maybe you mean traffic.
-
RE: Menu Structure
Yep, without giving the site it's difficult to describe exactly. No, it's not breadcrumbs or on-page links. It's a secondary menu option to the same page they have historically had problems with. That I understand, as there are valid ways to lead to the same page and arguably show importance to that page. But in this case it seems to be a secondary menu link to the same page just for the sake of another sitewide menu link, which isn't really helpful to the visitor. But I might have a little "anti-over-optimising fever"!
-
Menu Structure
I'm working on a site where there is a top level menu with a dropdown off a couple of the main headings and subsequent dropdown from one or two of those dropdowns. Usual stuff.
The main problem we are having is the ranking of one of the main menu pages, some of which is down to historical stuff we have cleaned up, and we are now waiting on Penguin.
My question is whether the following is a prudent step. The main menu option/page and keyword is something like "Green Widgets" but this activates a dropdown where there is a link to 'Types of Green Widget', then again there is a dropdown with several pages to different types of Green widget. The two menu items "Green Widgets" and "Types of Green Widgets" both link to the "Green Widget" page.
As the "Types of Green Widgets" link is sitewide and not really in the right flavour for the "Green Widget" page would it be prudent to remove the link element of that menu item or set it to /#
-
RE: Using hreflang="en" instead of hreflang="en-gb"
From my understanding, if you have hreflang="en-gb" then that page/those pages are targeted at the UK. If you wish to target any English-speaking country then you add hreflang="en". But if you wish to target specific English-speaking countries then you'd use hreflang="en-ie", hreflang="en-gg" etc.
What you are doing is giving Google information, not a directive, as to which pages are targeted where. Google could ignore it, and it's not a ranking solution. You are just giving Google the heads up on your intentions.
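As an illustrative set on each page (URLs invented):
<link rel="alternate" hreflang="en-gb" href="https://www.example.co.uk/page/">
<link rel="alternate" hreflang="en" href="https://www.example.com/page/">
<link rel="alternate" hreflang="en-ie" href="https://www.example.com/ie/page/">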