Discrepancy between Search Console & Lighthouse - CLS shift
-
Curious if anyone else is having this problem. I have, for example, a page listed in Search Console as having a CLS of 0.44 - it's flagged as a "CLS issue." The same page rendered in Lighthouse shows 0 for field-data CLS and 0.02 for lab data (both in the "green"). It has been over a month since I made updates to the page to improve CLS. I tried to submit a validation in Search Console, but it came back "validation failed." I'm not sure what else to fix on the page when the Lighthouse data shows it in the green! I have the same issue with other pages as well.
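The two tools measure different things, which is usually the root of this kind of mismatch: Search Console's Core Web Vitals report uses field data (real Chrome users, aggregated over roughly the previous 28 days at the 75th percentile), while Lighthouse's lab number comes from one synthetic load that stops shortly after the page renders. Field CLS keeps accumulating for the whole page lifetime, taking the worst "session window" of layout shifts. A minimal sketch of that session-window logic (entry shape and values are hypothetical, mimicking `PerformanceObserver` `layout-shift` entries):

```javascript
// Sketch: how a page's CLS is aggregated from individual layout shifts.
// Shifts less than 1s apart (capped at a 5s window) have their scores
// summed; the page's CLS is the largest window score seen.
function computeCls(entries) {
  let maxScore = 0;
  let windowScore = 0;
  let windowStart = 0;
  let prevTime = -Infinity;
  for (const { startTime, value } of entries) {
    const newWindow =
      startTime - prevTime > 1000 || startTime - windowStart > 5000;
    if (newWindow) {
      windowScore = value;   // start a fresh session window
      windowStart = startTime;
    } else {
      windowScore += value;  // extend the current window
    }
    prevTime = startTime;
    maxScore = Math.max(maxScore, windowScore);
  }
  return maxScore;
}

// A clean initial load (what a lab run sees) vs. a real visit where a
// late burst of shifts happens long after load — hypothetical values:
const labLikeLoad = [{ startTime: 300, value: 0.02 }];
const fieldLikeVisit = [
  { startTime: 300, value: 0.02 },
  { startTime: 20000, value: 0.2 },  // e.g. a banner injected 20s in
  { startTime: 20400, value: 0.24 },
];
console.log(computeCls(labLikeLoad));    // ≈ 0.02
console.log(computeCls(fieldLikeVisit)); // ≈ 0.44
```

Because the field bucket trails by up to 28 days, a genuine fix can look green in the lab well before Search Console's validation is able to succeed.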
Related Questions
-
Google Search Console 'Change of Address' Just 301s on source domain?
Hi all. New here, so please be gentle. 🙂 I've developed a new site, and my client also wanted to rebrand from .co.nz to .nz. On the source (.co.nz) domain, I've set up a load of 301 redirects to the relevant new pages on the new domain (the URL structure is changing as well).
Technical SEO | WebGuyNZ
E.g., on the old domain: https://www.mysite.co.nz/myonlinestore/t-shirt.html
In the .htaccess on the old/source domain, I've set up 301s (using RewriteRule),
so that when https://www.mysite.co.nz/myonlinestore/t-shirt.html is accessed, it does a 301 to:
https://mysite.nz/shop/clothes/t-shirt All these 301s are working fine. I've checked in dev tools and a 301 is being returned. My question is: are the 301s on the source domain alone enough to start a 'Change of Address' in Google's Search Console? Their wording indicates it's enough, but I'm concerned I may also need redirects on the target domain. I.e., does the Search Console Change of Address process work this way?
It looks at the source domain URL (that's already in Google's index), sees the 301, then updates the index (and hopefully passes the link juice) to the new URL. Also, I've set up both source and target Search Console properties as Domain properties. Does that mean I no longer need to specify whether the source and target properties are HTTP or HTTPS? I couldn't see that option when I created the properties. Thanks!
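As I understand Google's documentation, redirects on the source domain are what the Change of Address tool checks: Google follows the 301 from each old URL and transfers signals to the target, and nothing special is required on the destination side beyond the new pages resolving normally. A sketch of the kind of rule described above (paths are hypothetical, mirroring the example URLs):

```apache
# .htaccess on the OLD (source) domain only — a sketch, not the
# poster's actual rules. Requires mod_rewrite.
RewriteEngine On
# Map an old store URL to its new home with a permanent (301) redirect:
RewriteRule ^myonlinestore/t-shirt\.html$ https://mysite.nz/shop/clothes/t-shirt [R=301,L]
```

Page-to-page mappings like this (rather than a blanket redirect to the new homepage) are what lets the index swap URLs one-for-one.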
Google Search Results Flip-Flop
For a site we manage, Google can't seem to decide which of two pages to present for a search for "skid steer attachments." Almost weekly, it flip-flops between the home page and an interior page (a shopping cart category page that we have not actually optimized for the phrase). The site is berlon.com. Have any of you had a similar experience and, if so, how did you address it? I've attached a Moz screenshot that shows the changes.
Technical SEO | PKI_Niles
HTTP & HTTPS
What is recommended when some of the pages on a site go from HTTP to HTTPS: a 301 redirect or a 302 redirect?
Technical SEO | JonsonSwartz
And why? Thank you. I was asked to elaborate, so: on my website I have open-account pages where users are asked to fill in their details. Those pages are secured and are HTTPS. The problem is that the whole website had turned to HTTPS, so most of the pages were redirected from HTTPS back to HTTP.
The secured pages are redirected from HTTP to HTTPS. I wanted to check whether this setup is correct, and which redirect is best (301 or 302).
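Since a move to HTTPS is permanent, a 301 is the usual answer: a 302 signals the old URL may come back, so search engines consolidate signals to the HTTPS version more slowly, if at all. If the whole site is meant to stay on HTTPS, a minimal Apache sketch (assumes mod_rewrite; adapt to the mixed HTTP/HTTPS setup described above):

```apache
# Force HTTPS site-wide with a permanent redirect — a minimal sketch.
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```

Redirecting secure pages back to HTTP, as described above, is generally the pattern to avoid rather than formalize.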
How does a search engine bot navigate past a .PDF link?
We have a large number of product pages that contain links to a .pdf of the technical specs for that product. These are all set up to open in a new window when the end user clicks. If these pages are being crawled, and a bot follows the link to the .pdf, is there any way for that bot to continue to crawl the site, or does it get stuck on that dangling page because it doesn't contain any links back to the site (it's a .pdf) and the "back" button doesn't work because the page opened in a new window? If this situation effectively stops the bot in its tracks and it can't crawl any further, what's the best way to fix it? 1. Add a rel="nofollow" attribute, 2. Don't open the link in a new window so the back button remains functional, 3. Both 1 and 2, or 4. Put the specs on the page instead of relying on a .pdf. Here's an example page: http://www.ccisolutions.com/StoreFront/product/mackie-cfx12-mkii-compact-mixer - the technical spec .pdf is located under the "Downloads" tab [the content is all on one page in the source code - the tabs are just a design element]. Thoughts and suggestions would be greatly appreciated. Dana
Technical SEO | danatanseo
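Worth noting: crawlers don't use a back button (or windows at all) - they queue every discovered URL and fetch each one independently, so a linkless PDF can't trap them. If the goal is simply to keep the PDFs themselves out of the index, the usual route is an HTTP header, because a PDF can't carry a robots meta tag. An Apache sketch, assuming mod_headers is enabled:

```apache
# Keep PDFs out of the search index via a response header — a sketch.
# PDFs can't carry a <meta name="robots"> tag, but the X-Robots-Tag
# header is honoured by Google for any file type.
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>
```

Of option 1-4 above, putting the specs on the page itself is still the strongest for rankings, since that content then counts toward the product page.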
WordPress & Page Numbers
Hi, I am working on a large WP site for a client and have an issue with duplicate content and page numbers. I am using the Yoast SEO plugin but can't seem to resolve it. Let me give an example: if I go to a popular category, for example F1, there are over 10 pages of content for the category, and although the URL changes, the title and meta description stay the same. Now, if I were using a template for the title and description I could add the page-number variable, but as I am overwriting the template with SEO-specific category information I can't use variables - hence the problem! This is such a common problem I know somebody will have an answer! Thanks
Technical SEO | JonathanSmith
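One hedged suggestion: as far as I recall, Yoast's snippet variables (e.g. `%%page%%`, which expands to something like "Page 2 of 10" on paginated archives and to nothing on page 1) can be mixed into the per-category SEO fields as well as the global template, so hand-written category text and a page counter aren't mutually exclusive. A sketch of what the category's Yoast fields might look like (wording hypothetical):

```text
SEO title:        Latest F1 News & Results %%page%% %%sep%% %%sitename%%
Meta description: Hand-written F1 category description goes here. %%page%%
```

If that works, each paginated URL gets a distinct title/description without giving up the custom category copy.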
Why is there such a big discrepancy between OSE and GWT regarding # backlinks?
Hello, We have been doing some analysis of the backlink profiles for our sites and have been seeing a massive discrepancy between the number of C-class linking domains reported in OSE and the information returned in Google Webmaster Tools. For a variety of sites, OSE reports < 10 C-class linking domains while GWT shows > 100 unique linking domains (we confirmed that the majority of these links are in different C classes). Is this simply a matter of the limited index size of OSE, or could there be another explanation? It is interesting that the links that do show up in OSE are nearly exclusively sites that we own. /T
Technical SEO | tomypro
On-Page Report Card & Rel Canonical
Hello, I ran one of our pages through the On-Page Report Card. Among the results we are getting a lower grade due to the following "critical factor" : Appropriate Use of Rel Canonical Explanation If the canonical tag is pointing to a different URL, engines will not count this page as the reference resource and thus, it won't have an opportunity to rank. Make sure you're targeting the right page (if this isn't it, you can reset the target above) and then change the canonical tag to reference that URL. Recommendation We check to make sure that IF you use canonical URL tags, it points to the right page. If the canonical tag points to a different URL, engines will not count this page as the reference resource and thus, it won't have an opportunity to rank. If you've not made this page the rel=canonical target, change the reference to this URL. NOTE: For pages not employing canonical URL tags, this factor does not apply. This is for an e-commerce site, and the canonical links are inserted automatically by the cart software. The cart is also creating the canonical url as a relative link, not an absolute URL. In this particular case it's a self-referential link. I've read a ton on this and it seems that this should be okay (I also read that Bing might have an issue with this). Is this really an issue? If so, what is the best practice to pass this critical factor? Thanks, Paul
Technical SEO | rwilson-seo
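On the relative-URL point: relative canonicals are technically valid, but an absolute, self-referential URL removes any protocol/host ambiguity and is what Google recommends - and it is the form the report card's check expects. A sketch (domain and path hypothetical):

```html
<!-- Self-referential canonical with an absolute URL. Relative URLs
     are allowed by the spec, but an absolute URL avoids ambiguity
     between http/https and www/non-www variants of the same page: -->
<link rel="canonical" href="https://www.example-store.com/widgets/blue-widget" />
```

If the cart software can only emit relative canonicals, checking that the rendered href resolves to the page's own preferred URL is the key test.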
Search Engine Blocked by Robot Txt warnings for Filter Search result pages--Why?
Hi, We're getting yellow "Search Engine Blocked by robots.txt" warnings for URLs that are, in effect, product search filter result pages (see link below) on our Magento ecommerce shop. Our robots.txt file, to my mind, is correctly set up - i.e. we would not want Google to index these pages. So why does SEOmoz flag this type of page as a warning? Is there any implication for our ranking? Is there anything we need to do about this? Thanks. Here is an example URL that SEOmoz thinks the search engines can't see: http://www.site.com/audio-books/audio-books-in-english?audiobook_genre=132 Below are the current entries for the robots.txt file.
Technical SEO | languedoc
User-agent: Googlebot
Disallow: /index.php/
Disallow: /?
Disallow: /.js$
Disallow: /.css$
Disallow: /checkout/
Disallow: /tag/
Disallow: /catalogsearch/
Disallow: /review/
Disallow: /app/
Disallow: /downloader/
Disallow: /js/
Disallow: /lib/
Disallow: /media/
Disallow: /.php$
Disallow: /pkginfo/
Disallow: /report/
Disallow: /skin/
Disallow: /utm
Disallow: /var/
Disallow: /catalog/
Disallow: /customer/
Sitemap:
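That yellow notice is informational rather than an error: it flags URLs the crawler discovered but was not allowed to fetch, so if the blocking is deliberate there is nothing to fix, and there is no ranking downside for pages you never wanted indexed anyway. If you'd rather make the filter block explicit, Googlebot honours `*` wildcards in robots.txt, so the parameter itself can be targeted (parameter name taken from the example URL above):

```text
User-agent: Googlebot
# Block any URL carrying the layered-navigation filter parameter:
Disallow: /*?audiobook_genre=
```

A rule like this is easier to audit later than relying on broader path patterns to catch filter URLs incidentally.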