Portfolio Image Landing Page Question/Issue
-
Hello,
We have a client with a very image-heavy website. Their Portfolio pages contain a large number of images. We are currently working on adding more copy to the site, but we wanted to confirm we are taking the right approach for the images.
Under the current structure, each image has its own landing page (with no copy) and is fed into (or generated onto) a Portfolio page. We know this is not ideal; it would be best to have the images on the Portfolio page directly, or even to fill out the landing pages with copy. But given the number of images, and the fact that these are only images (not 'targeted' pages), that isn't really feasible.
Aside from the thin-content concern, these individual landing pages were being indexed, so the site shows hundreds of pages in its sitemap.xml and in Google Search Console even though it only has a few actual pages.
In the meantime, we went into each image page and placed a canonical tag pointing back to the main Portfolio page (with the hope of adding content to that page and making it the 'overarching' page). Would this be the right approach? We considered 'noindex, follow' tags, but we want the images to be crawled. The issue is that because the images are not on the Portfolio page itself, are we canonicalizing these image pages to nothing?
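For reference, a canonical tag like the one described would be a single line in the `<head>` of each image landing page (the URL here is a placeholder):

```html
<!-- In the <head> of each thin image landing page; URL is a placeholder -->
<link rel="canonical" href="https://www.example.com/portfolio/" />
```

Worth noting: Google treats rel=canonical as a hint, not a directive, and tends to ignore it when the tagged page is not a near-duplicate of the canonical target. A bare image page pointing at a multi-image Portfolio page may well fall into that category, which is the crux of the question above.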
Any insight would really be appreciated.
Thank you in advance.
-
Standard operating procedure for us is to nofollow anything we noindex. The theory is that PageRank will still flow through a link to a noindexed page if you don't nofollow it, but since that page is noindexed, the PageRank is trapped there and can't move on to another page through any links on it.
I don't know how true that theory is, but erring on the side of caution has never cost me a dime, and nofollowing links to a noindexed page won't have a negative impact.
-
Thanks again. We noticed you mentioned 'nofollowing' the pages, but Steve said 'noindex, follow', so we just wanted to confirm which you would recommend for these image landing pages?
Best,
- B.
-
Thank you both for your answers and help.
I’m not sure why they set it up that way; it was done before I came on board. The site actually has a ‘gallery-style’ page (with an enlarge function), but I believe they uploaded the images to the Portfolio through those landing-page URLs. We're still trying to figure out the how and why.
For now we have placed ‘noindex, follow’ tags on each image page, and we will make sure each image is listed in the sitemap and has the right file names and alt text, which should help. Thank you.
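For anyone following along, the 'noindex, follow' tag described here is a standard robots meta tag in the `<head>` of each thin page:

```html
<!-- In the <head> of each thin image landing page -->
<!-- "follow" is the default and can be omitted, but stating it makes the intent explicit -->
<meta name="robots" content="noindex, follow" />
```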
It’s not how I would have set it up, but there are already so many images in this structure. The help is much appreciated, and if there is anything I might have missed, please let me know.
Thanks again.
Best,
-
Is there some reason you need to have individual landing pages for each image? That's a lot of bloat. Hundreds (or thousands) of unique landing pages with nothing but an image isn't going to convey either relevance or authority to search engines, nor does it sound particularly useful to users.
Since Googlebot can't "see" the images, I would follow Steve's advice and add the images to your sitemap, noindexing and nofollowing the pages that lack content. Taking it a step further, I would also make sure the file names are readable by search engines and that alt text is entered for each image. Doing so will help Google understand what the images are and may help you appear for relevant queries.
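On the file-name and alt-text point, the idea is simply descriptive naming; the names below are illustrative, not from the client's site:

```html
<!-- Descriptive file name and alt text instead of e.g. IMG_0423.jpg with no alt -->
<img src="/images/modern-kitchen-remodel-oak-cabinets.jpg"
     alt="Modern kitchen remodel with oak cabinets and a marble island" />
```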
You may want to consider just doing gallery-style pages for your portfolios, with the option to enlarge a photo to full resolution by clicking it, if the goal is simply to let users see a larger image. That will probably help with UX on top of SEO.
-
Maybe think about a 'noindex, follow' tag on the thin pages and creating an image sitemap to make sure your images are crawled.
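An image sitemap uses Google's image extension namespace on a standard sitemap. A minimal sketch, with placeholder URLs, listing the Portfolio page and the images that appear on it:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Image sitemap sketch; all URLs are placeholders -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>https://www.example.com/portfolio/</loc>
    <image:image>
      <image:loc>https://www.example.com/images/modern-kitchen-remodel.jpg</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://www.example.com/images/master-bath-renovation.jpg</image:loc>
    </image:image>
  </url>
</urlset>
```

Listing the images under the Portfolio page's `<loc>` entry, rather than under the thin landing pages, sidesteps the noindexed pages entirely: Google can discover and index the images while the pages hosting them stay out of the index.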