I came across this SERP Feature in a search today on a mobile device. It does not show for the same search query on desktop. What do we know about this "Shops" SERP feature?
seoelevated
@seoelevated
Job Title: E-Commerce Director
Company: n/a
Favorite Thing about SEO
huge community of smart people with all kinds of theories and learnings to share
Latest posts made by seoelevated
- What do we know about the "Shops" SERP Feature?
- RE: What happens to crawled URLs subsequently blocked by robots.txt?
@aspenfasteners To my understanding, disallowing a page or folder in robots.txt does not remove pages from Google's index. It merely gives a directive not to crawl those pages/folders. In fact, when pages are accidentally indexed and one wants to remove them from the index, it is important to actually NOT disallow them in robots.txt, so that Google can crawl those pages and discover the meta NOINDEX tags on them. The meta NOINDEX tag is the directive to remove a page from the index, or to not index it in the first place. This is different from a robots.txt directive, which is intended to allow or disallow crawling. Crawling does not equal indexing.
So, you could keep the pages indexable, and simply block them in your robots.txt file, if you want. If they've already been indexed, they should not disappear quickly (they might, over time though). BUT if they haven't been indexed yet, this would prevent them from being discovered.
All of that said, from reading your notes, I don't think any of this is warranted. The speed at which Google discovers pages on a website is very fast. And existing indexed pages shouldn't really get in the way of new discovery. In fact, they might help the category pages be discovered, if they contain links to the categories.
I would create a categories sitemap XML file, reference it in your robots.txt, and let that do the work of prioritizing the categories for crawling/discovery and indexation.
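For reference, the sitemap reference in robots.txt is a single directive; here is a minimal sketch, with a hypothetical sitemap filename:
# robots.txt - the Sitemap directive points crawlers at the category sitemap (filename is hypothetical)
Sitemap: https://www.example.com/sitemap-categories.xml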
- RE: Multiple H1s and Header Tags in Hero/Banner Images
While there is some uncertainty about the impact of multiple H1 tags, there are several issues with the structure you describe. On the "sub-pages", if you put an H1 tag on the site name, the same H1 is repeated across a bunch of pages, which you want to avoid. Instead, develop a strategy for which pages you would like to rank for which search queries, and then use each page's primary query in its H1 tag.
The other issue I see in your current structure is that your heading tags are potentially out of sequence. Accessibility checker tools will flag this as an issue, and it can cause real frustration for people with vision disabilities accessing your pages with screen readers. You want to preserve a hierarchy where the H1 sits above the H2s, which sit above the H3s, and so on.
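As a minimal illustration of the kind of structure I mean (the page topic and headings here are hypothetical):
<!-- One unique, query-targeted H1 per page, with subheadings kept in sequence -->
<h1>Blue Widgets for Small Workshops</h1>
<h2>Choosing the Right Size</h2>
<h3>Measuring Your Bench Space</h3>
<h2>Installation and Care</h2>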
- RE: Is using a subheading to introduce a section before the main heading bad for SEO?
You will also find that you fail some accessibility standards (WCAG) if your heading structure tags are out of sequence. As GPainter pointed out, you really want to avoid styling your heading structure tags explicitly in your CSS if you want to be able to style them differently in different usage scenarios.
Of course, for your pre-headings, you can just omit the structure tag entirely. You don't need all your important keywords to be contained in structure tags.
You'll want, ideally, just one H1 tag on the page, with your most important keyword (or semantically related keywords) in that tag. If you can organize the structure of your page with lower-level heading tags after that, great. It helps accessibility too; just note that you shouldn't break the hierarchy by going out of sequence. But it's not a necessity to have multiple levels of heading tags after the H1.
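For example, a pre-heading can be plain styled text rather than a heading tag; a minimal sketch, with a hypothetical class name and copy:
<!-- The pre-heading carries no heading tag, so the H1 still leads the hierarchy -->
<p class="pre-heading">Our Services</p>
<h1>Custom Web Design for Local Businesses</h1>
<h2>What's Included</h2>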
- RE: How important is Lighthouse page speed measurement?
My understanding is that "Page Experience" signals (including the new "Core Web Vitals") will be combined with existing signals like mobile-friendliness and HTTPS security in May 2021. This is according to announcements by Google.
https://developers.google.com/search/blog/2020/05/evaluating-page-experience
https://developers.google.com/search/blog/2020/11/timing-for-page-experience
So, these will be search signals, but there are lots of other very important search signals which can outweigh them. Even if a page on John Deere's site doesn't pass the Core Web Vitals criteria, it is still likely to rank highly for "garden tractors".
If you are looking at Lighthouse, I would point out a few things:
- The Lighthouse audits on your own local machine are going to differ from those run on hosted servers like PageSpeed Insights. And those will differ from the "field data" in the Chrome UX Report.
- In the end, it's the "field data" that will be used for the Page Experience validation, according to Google. But, lab-based tools are very helpful to get immediate feedback, rather than waiting 28 days or more for field data.
- If your concern is solely about the impact on search rankings, then it makes sense to pay attention specifically to the 3 scores being considered as part of CWV (CLS, FID, LCP)
- But also realize that while you are improving scores for criteria which will be validated as search signals, you're also likely improving the user experience. Taking CLS as an example, users are certainly frustrated when they attempt to click a button and end up clicking something else because of a layout shift. And frustrated users generally mean lower conversion rates. So, by focusing on improvements in measures like these (I realize your question about large images doesn't necessarily pertain specifically to CLS), you are optimizing both for search ranking and for conversions.
- Reducing cumulative layout shift for responsive images - core web vitals
In preparation for Core Web Vitals becoming a ranking factor in May 2021, we are making efforts to reduce our Cumulative Layout Shift (CLS) on pages where the shift is being caused by images loading. The general recommendation is to specify both height and width attributes in the HTML, in addition to the CSS formatting which is applied when the images load. However, this is problematic where responsive images are used with different aspect ratios for mobile vs. desktop, and where a CMS is used to manage the pages, so that the width, height, and the mobile and desktop aspect ratios may all change each time new images are used.
So, I'm posting this inquiry here to see what kinds of approaches others are taking to reduce CLS in these situations (where responsive images are used, with differing aspect ratios for desktop and mobile, and where a CMS allows the business users to utilize any dimension of images they desire).
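As a reference point for the kind of markup involved, here is a minimal sketch using the picture element, with width and height attributes so the browser can reserve space before each image loads (newer browsers also honor these attributes on the source elements; the breakpoints, dimensions, and paths are hypothetical):
<!-- width/height let the browser compute an aspect ratio and reserve space, reducing CLS -->
<picture>
  <source media="(max-width: 767px)" srcset="/images/hero-mobile.jpg" width="750" height="1000">
  <source media="(min-width: 768px)" srcset="/images/hero-desktop.jpg" width="1600" height="600">
  <img src="/images/hero-desktop.jpg" width="1600" height="600" alt="Hero banner">
</picture>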
- RE: Is a page with links to all posts okay?
Depending on how many pages you have, you may eventually hit a limit to the number of links Google will crawl from one page. The usual recommendation is to have no more than 150 links, if you want all of them to be followed. That also includes links in your site navigation, header, footer, etc. (even if those are the same on every page). So, at that point, you might want to make that main index page into an index of indices, where it links to a few sub-pages, perhaps by topic or by date range.
- RE: Web Core Vitals and Page Speed Insights Not Matching Scores
To my understanding, GSC is reporting based on "field data" (meaning the aggregate score of visitors to a specific page over a 28-day period). When you run PageSpeed Insights, you can see both field data and "lab data". The lab data is your specific run. There are quite a few reasons why field data and lab data may not match. One reason is that changes have been made to the page which are reflected in the lab data but will not show in the field data until the next period's set is available. Another is that the lab environment doesn't run with the exact same specs as the real users' devices behind the field data.
The way I look at it is that I use the lab data (and I screenshot my results over time, or use other Lighthouse-based tools like GTmetrix, with an account) to assess incremental changes. But the goal is to eventually get the field data (representative of the actual visitors) improved, especially since, as best I can tell, that's what will be used for the ranking signals.
- RE: Should I canonicalize URLs with no query params even though query params are always automatically appended?
I would recommend canonicalizing these to a version of the page without query strings, IF you are not trying to optimize different versions of the page for different keyword searches, and/or if the content doesn't change in a way which is significant for purposes of SERP targeting. From what you described, I think that's the case, so I would canonicalize to a version without the query strings.
An example where you would NOT want to do that would be on an ecommerce site where you have a URL like www.example.com/product-detail.jsp?pid=1234. Here, the query string is highly relevant and each variation should be indexed uniquely for different keywords, assuming the values of "pid" each represent unique products. Another example would be a site of state-by-state info pages like www.example.com/locations?state=WA. Once again, this is an example where the query strings are relevant, and should be part of the canonical.
But, in any case a canonical should still be used, to remove extraneous query strings, even in the cases above. For example, in addition to the "pid" or "state" query strings, you might also find links which add tracking data like "utm_source", etc. And you want to make sure to canonicalize just to the level of the page which you want in the search engine's index.
You wrote that the query strings and page content vary based on years and quarters. If we assume that you aren't trying to target search terms with the year and quarter in them, then I would canonicalize to the URL without those strings (or to a default set). But if you are trying to target searches for different years and quarters in the user's search phrase, then not only would you include those in the canonical URL, but you would also need to vary enough page content (meta data, title, and on-page content) to avoid being flagged as duplicates.
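For illustration, the canonical tag on the parameterized versions would point at the clean URL; a minimal sketch with a hypothetical URL:
<!-- Placed on /report?year=2020&quarter=Q3 and similar variants, canonicalizing to the base page -->
<link rel="canonical" href="https://www.example.com/report">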
- RE: Inconsistency between content and structured data markup
This is what Google says explicitly: https://developers.google.com/search/docs/guides/sd-policies. Specifically, see the "Quality Guidelines > Content" section.
In terms of actual penalties, ranking influence, or marking pages as spam, I can't say from experience, as I've never knowingly used markup inconsistent with the information visible on the page.
Best posts made by seoelevated
- RE: Redirect to http to https - Pros and Cons
If your current pages can be accessed by http and by https, and if you don't have canonicals or redirects pointing everything to one version or the other, then one very significant "con" for that approach is that you are splitting your link equity. So, if the http page has 50 inbound links, and the https has another 50, you would do better to have one page with 100 inbound links.
Another difference is how browsers display and warn about non-secure pages, as well as any ranking factor search engines may associate with HTTPS. Again, both favor redirecting HTTP to HTTPS. The visual handling can also impact conversion rates and bounce rates, which can in turn impact ranking.
As far as cons to redirecting, one is that you might expect a temporary disruption to rankings; there will likely be a bit of a dip in the short term. Another is that you will need to remove any non-secure resources (like images) from the HTTPS pages, and then be careful not to accidentally add new ones, since mixed content triggers warnings to visitors and can possibly impact rankings. There is also some consensus that redirects (and canonical links) leak a very small amount of link equity for each hop they take, so that's another "con". But my recent experience doing this with two sites has been that after the temporary dip of a couple of months, if done properly, the pros outweigh the cons.
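If you do go the redirect route on Apache, a minimal .htaccess sketch might look like the following (assuming mod_rewrite is enabled; adjust for your server):
# Force HTTPS with a permanent (301) redirect
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]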
- Reducing cumulative layout shift for responsive images - core web vitals
In preparation for Core Web Vitals becoming a ranking factor in May 2021, we are making efforts to reduce our Cumulative Layout Shift (CLS) on pages where the shift is being caused by images loading. The general recommendation is to specify both height and width attributes in the HTML, in addition to the CSS formatting which is applied when the images load. However, this is problematic where responsive images are used with different aspect ratios for mobile vs. desktop, and where a CMS is used to manage the pages, so that the width, height, and the mobile and desktop aspect ratios may all change each time new images are used.
So, I'm posting this inquiry here to see what kinds of approaches others are taking to reduce CLS in these situations (where responsive images are used, with differing aspect ratios for desktop and mobile, and where a CMS allows the business users to utilize any dimension of images they desire).
- RE: Is there a way to get a list of urls on the website?
If all of the pages you are interested in are linked internally and can be reached through navigation or page links, you can run a simulated crawl with a tool like Screaming Frog, which will discover all the "discoverable" pages.
The site you referenced is built with a platform called "Good Gallery", which generates a sitemap. This is at www.laskeimages.com/sitemap.xml. I'm not sure what criteria it might use to include/exclude pages, but that would likely be a good list. You will need to view the page source of that sitemap to see the data in a structured way and extract it.
Another method is to use Google Analytics. Assuming that each page of your site has been viewed at least once in its history, you could extract the list from Google Analytics, especially from an unfiltered view which includes visits by bots.
- RE: Should Hreflang x-default be on every page of every country for an International company?
Yes, your understanding of x-default is correct. The purpose of including it everywhere you have alternate hreflang links is to handle any locales you don't explicitly include (to tell the search engine which is the default version of the page for other, non-specified locales). And it should be included on each version of the page, along with the other specified alternate links for each locale. Alternatively, you could collect all of these centrally in the sitemap file rather than inserting them into each page. Both types of implementation are valid (but anecdotally I've had better luck with on-page tags than with the sitemap implementation).
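As an on-page sketch (the locales and URLs here are hypothetical), each version of the page would carry the full set of alternates plus x-default:
<!-- The same block appears on every alternate version of the page -->
<link rel="alternate" hreflang="en-us" href="https://www.example.com/en-us/page">
<link rel="alternate" hreflang="de-de" href="https://www.example.com/de-de/page">
<link rel="alternate" hreflang="x-default" href="https://www.example.com/page">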
- RE: Google SERP shows wrong (and inappropriate) thumbnail for Facebook videos?
This is very interesting, and I see from the threads you linked that multiple businesses are having the same problem and the same difficulty navigating both the Google and Facebook support communities. Out of curiosity, are you able to inspect one of your Facebook pages which still has the video, and see if any schema of the type "VideoObject" is included in the page? If so, paste the markup here (redacted as necessary). I don't think I'll be able to help much on this, but perhaps something in the schema data might give the community here some clues to work with.
- RE: How to Localise per Region (Europe, America, APAC, EMEI) and not per country as best SEO practise?
I currently manage a site which is localized per region, as opposed to country. For some regions, like US and Australia, it is 1:1 with country, so we do not have issues there. But for Europe, that is where we do have some issues currently. We took the following approach (below), but I have to first say that it is quite problematic and has not performed very well so far (implemented about 1 year ago).
The approach we took was to implement HREFLANG within our sitemap, and for Europe, we generate specific alternate locations for each of the countries where we do business in that region, all with the same URL. Here (below) is a redacted version of one page's LOC node in our sitemap (I've only included a partial list, and only showing English, as the full list of alternate URLs for this one LOC has 150 alternate links to cover every EU country x 5 languages we support). But, the general approach is that for Europe, we create one alternate link for each EU country, in each of our supported languages (we support 5 languages). So, we don't assume, for example, that German speakers are only in Germany, or that English speakers are only in the UK. We cover every country/language combination and point many of these to the exact same alternate link.
Again, as I mentioned, this hasn't achieved all we had hoped. But sharing the approach for a reference point here, as an option, and open to any other ideas from the community. We also struggle with EU in terms of Google Search Console geographic targeting. Unfortunately, Google does not allow a property to be targeted to "Europe". And they only allow one single country per property. In our case, we really need to target a single domain to "Europe", not to a specific country. But we can't, and that is a problem currently.
Here is the example from our sitemap (a partial cut-and-paste of the first few entries from one URL node):
<loc>https://www.example.com/example-page-path</loc>
<priority>1</priority>
... remainder of alternate links removed to shorten list here
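Based on the description above, each of the removed alternate entries looks roughly like this (the hreflang values here are illustrative; in practice the list repeats for every supported country and language combination, all pointing at the same regional URL):
<!-- Requires xmlns:xhtml="http://www.w3.org/1999/xhtml" declared on the urlset element -->
<xhtml:link rel="alternate" hreflang="en-DE" href="https://www.example.com/example-page-path"/>
<xhtml:link rel="alternate" hreflang="en-FR" href="https://www.example.com/example-page-path"/>
<xhtml:link rel="alternate" hreflang="en-IT" href="https://www.example.com/example-page-path"/>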
- RE: Traffic drop after hreflang tags added
Yes, that looks correct now. And in your specific case, x-default might indeed handle the rest since Europe is your default, and that's where the unspecified combinations are most likely to be for you.
I wouldn't be too concerned about site speed. These are just links; they don't load any resources or execute any scripts. For most intents and purposes, it's similar to plain text. The main difference is that they are links that may be followed by the bots. But even though you'll have many lines, you really only have two actual links among them. So, I wouldn't be too concerned about this part.
Good luck.
- RE: Cant find source of redirect
A few thoughts:
- Install the browser extension Ayima, which will let you see whether this is actually the result of multiple redirects. There are other ways to see this same info, but the Ayima extension makes it really simple to see the multiple hops when there are any.
- You might try moving the existing redirect in your .htaccess file (A to B) all the way up, or all the way down, in the file. Rules are processed in sequence, so order them from most specific to most general (see the sketch below).
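A generic illustration of that ordering, with hypothetical paths (the exact directives depend on how your redirects are written):
# More specific rules first, broader catch-alls last
Redirect 301 /old-section/specific-page https://www.example.com/new-specific-page
RedirectMatch 301 ^/old-section/.*$ https://www.example.com/new-section/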
- RE: H1 text and Header Image Overlap?
The common solution is to overlay the text on the image, rather than producing the image with text in it. The overlay text can then be given an H1 element.
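A minimal sketch of that pattern (the image path, heading text, and inline styles are just for illustration):
<!-- The heading is real HTML layered over the image, so it stays crawlable and can be the H1 -->
<div style="position: relative;">
  <img src="/images/hero-banner.jpg" alt="" width="1600" height="600">
  <h1 style="position: absolute; top: 40%; left: 0; right: 0; text-align: center; color: #fff;">Handcrafted Oak Furniture</h1>
</div>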
- RE: Hide sitelinks from Google search results
Ah. So, then I might try one of the following:
- My preferred approach would be to set up a redirect for that URL to a valid new URL. That way, you would make the best use of the traffic coming from the Sitelink, for whatever time it might remain there. After a while, I suspect Google will either update the sitelink title and description with those from the new redirected page, or perhaps drop that sitelink eventually in favor of another page.
- If you can't do the above (maybe you are not able to set up redirects from the old URL), then I might go the route of using Search Console (old version) to request removal of the old URL (Google Index > Remove URLs). If it really does give a proper 404 response code, then this should work. It doesn't do the job on its own if the URL still gives a valid response code, but a 404 plus a removal should get rid of it. That said, you are then rolling the dice with whatever Google decides to promote as a replacement sitelink. So, I would prefer the first approach, if I thought I could make the best of the traffic coming from that link.
E-Commerce Director with both agency and brand-side experience.