WordPress Config Thoughts: Multisite vs. Parent/Child Themes vs. InfiniteWP?
We publish four local food and drink magazines, each with its own website and related web content. Even though the content across all four titles shares a common mission, there is little overlap in actual stories. That is, each site has its own story content, events calendar and business listing guide.
Still, since we share an editorial staff and a common look across all four, we are debating a few different WordPress and SEO configurations, and would welcome the community's input on the pros and cons.
Here is what we are considering for the WordPress configuration:
- WordPress Multisite - concerns about a 10-15% performance hit, incompatibility with certain plugins, and the need for more 'expert' development (see the config sketch after this list)
- InfiniteWP - concerns that adding a third-party plugin to the mix might complicate things
- Parent/child themes - keep four separate installs but share a common parent theme, with a child theme per title
- A single WordPress site with different content subfolders for each locale - simplifies events, guide listings, and SEO, but is it too much in one place?
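For reference, here is a minimal sketch of what the multisite option involves at the configuration level. These are the standard wp-config.php constants for a subdirectory-based network; the Network Setup screen generates the exact values, and ediblemag.com is just a placeholder for whatever parent domain we might choose:

```php
<?php
// wp-config.php - sketch of the constants for a subfolder-based multisite
// network. "ediblemag.com" is a placeholder parent domain, not a decision.

// Step 1: exposes the Network Setup screen (Tools > Network Setup).
define( 'WP_ALLOW_MULTISITE', true );

// Step 2: after running Network Setup, WordPress asks you to add these.
define( 'MULTISITE', true );
define( 'SUBDOMAIN_INSTALL', false );             // false = subfolder sites (/brooklyn/)
define( 'DOMAIN_CURRENT_SITE', 'ediblemag.com' ); // placeholder parent domain
define( 'PATH_CURRENT_SITE', '/' );
define( 'SITE_ID_CURRENT_SITE', 1 );
define( 'BLOG_ID_CURRENT_SITE', 1 );
```

Network Setup also prescribes matching .htaccess rewrites. Once enabled, plugins, themes, and core updates for all four sites are managed from one network dashboard, which would address the four-installs maintenance problem below.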
Problems with the current config (four separate WordPress installs across four base domains: ediblemanhattan.com, ediblebrooklyn.com, ediblelongisland.com, etc.):
- SEO value is currently spread across four base domains (see the redirect sketch after this list)
- Four separate WordPress installs, upgrades, templates, and plugins must be managed separately
- Four separate namespaces for registered users make cross-domain registration more difficult and less usable
- The independent-site approach is potentially problematic if we decide to combine certain site features (for example, guide and event listings) into a single experience filterable by ZIP code or location
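To make the consolidation path concrete: if the four titles moved under one parent domain as subfolders, each legacy domain would need permanent redirects that preserve the requested path. In practice this is usually done at the server or CDN level; the mu-plugin sketch below is only to illustrate the mapping. ediblemag.com and the subfolder slugs are placeholders, and the fourth domain (elided above) is left as a comment:

```php
<?php
/**
 * Sketch only: 301-redirect legacy magazine domains into locale subfolders
 * on one parent domain, preserving the requested path. Assumes the old
 * domains' DNS points at this install; server-level rewrites are the more
 * usual tool. ediblemag.com and the slugs are placeholders.
 */
add_action( 'init', function () {
	$map = array(
		'ediblemanhattan.com'  => '/manhattan',
		'ediblebrooklyn.com'   => '/brooklyn',
		'ediblelongisland.com' => '/longisland',
		// ...plus the fourth title's domain.
	);
	$host = strtolower( preg_replace( '/^www\./', '', $_SERVER['HTTP_HOST'] ?? '' ) );
	if ( isset( $map[ $host ] ) ) {
		wp_redirect( 'https://ediblemag.com' . $map[ $host ] . ( $_SERVER['REQUEST_URI'] ?? '/' ), 301 );
		exit;
	}
} );
```

Path-preserving 301s like this consolidate link equity onto one domain, which is the main SEO argument for subfolders over subdomains.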
Our questions:
- WP config: independent sites vs. multisite vs. parent/child themes vs. something else?
- SEO config: should we move to a shared parent domain? If so, should we use locale-based subfolders or subdomains (brooklyn.ediblemag.com vs. ediblemag.com/brooklyn)?
- Operations: we think there are SEO advantages to having all four sites share the same base domain (e.g., ediblemagazine.com), but are there operational disadvantages we are not considering?
- Ability for local site editors to work within their locale section only (see the sketch after this list)
- Ability for ad sales to target a single locale - for example, run-of-site display ads for a specific locale
- Ability to segment users by locale - e.g., enroll users in the email list for Edible Brooklyn only
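On the editor-scoping and user-segmentation points, here is one illustrative approach for the single-install option. It assumes a hypothetical "locale" taxonomy on posts and a hypothetical "edible_locale" user-meta key holding each editor's locale slug; neither exists out of the box. (On multisite, per-site roles give this behavior natively.)

```php
<?php
/**
 * Sketch only: on a single install, limit what an editor sees in wp-admin
 * to posts in their own locale. Assumes a hypothetical "locale" taxonomy
 * and a hypothetical "edible_locale" user-meta key holding a term slug.
 */
add_action( 'pre_get_posts', function ( $query ) {
	if ( ! is_admin() || ! $query->is_main_query() ) {
		return; // only touch admin list screens
	}
	if ( current_user_can( 'manage_options' ) ) {
		return; // administrators still see everything
	}
	$locale = get_user_meta( get_current_user_id(), 'edible_locale', true );
	if ( $locale ) {
		$query->set( 'tax_query', array(
			array(
				'taxonomy' => 'locale',
				'field'    => 'slug',
				'terms'    => $locale,
			),
		) );
	}
} );
```

Note this filters the admin view rather than enforcing a hard permission boundary. The same user-meta field could also drive email segmentation, e.g., syncing only users whose edible_locale is "brooklyn" to the Edible Brooklyn list.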