Custom Permalinks (aka aliases) - do they look spammy to Googlebot?
-
I am moving my whole site over to WordPress (150+ pages). In the process, I assigned pages to appropriate parent pages via "Page Attributes".
I was really excited about this. I like how it organizes everything in the Pages dashboard. I also think that the sitemap that comes with my theme can create something really great for visitors with this info.
What I realized after doing that is that it changed my URLs to include the parent page. Basically, the URL is now "domain.com/parent-page/child-page.html". This is rather disastrous, because the URLs of these newly created child pages on my old site are simply "domain.com/child-page". Not that they're defined as parent or child pages on my existing Dreamweaver/HTML site... but you know what I mean - right?!
I got a plugin called "Permalink Editor" to let me customize the URLs. So I went through all of the child pages and removed the parent page from the URL.
Then, when I woke up this morning, I realized that what I've created is a "permalink alias". That sounds a little bit scary to me, as if Google could consider it spam and think I'm trying to "sculpt link flow".
I'm not... I'm just trying to recreate my site as it is in WordPress. I want the site to be exactly the same in terms of the URLs, but I want the many benefits of WordPress's CMS.
Should I go and unassign all of the parent/child pages in "Page Attributes"? Or am I being paranoid, and should I leave it as is?
FYI - this is the first page that came up when I searched for "permalink alias". It looks kind of black-hatty to me?!
- http://www.seodesignsolutions.com/blog/wordpress-seo/seo-ultimate-4-7/
Thanks so much. I look forward to a response!
-
Hi there,
Here's what I would do:
1. Set up the new WordPress site exactly how you want it to appear. Use the URL structure that makes the most sense - don't worry about what it was on the old site. In most cases, the way WordPress handles parent pages is totally fine.
2. In Excel, make a column listing all the pages on your old website - you can use Screaming Frog to crawl the site and build this list. Then, in the next column, match each old page with the corresponding new page from the WordPress site. The URLs are going to be different, but that's OK.
3. Last step - when you make the new WordPress site live, 301 redirect each old Dreamweaver URL to its new WordPress equivalent. A 301 is a permanent redirect that sends users (and search engines) to the new, updated page. You can set up 301 redirects with the Redirection plugin for WordPress.
What you end up with is a new site with new URLs for each page, with every old page redirected to the correct new one.
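If you'd rather handle the redirects at the server level instead of with a plugin, the same mapping can live near the top of the new site's .htaccess file, above the standard WordPress rewrite block. A minimal sketch, assuming Apache with mod_alias; the page names here are hypothetical stand-ins for the real old/new pairs from your spreadsheet:

# One line per old Dreamweaver page - hypothetical examples.
# "301" marks the move as permanent, so search engines update their
# index and pass the old page's link equity to the new URL.
Redirect 301 /child-page.html /parent-page/child-page/
Redirect 301 /contact.html /about/contact/

Either route (plugin or .htaccess) sends Google the same signal; for 150+ pages, the plugin's interface is usually easier to manage.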
Hopefully that makes sense? And very sorry this question was not picked up sooner!
-Dan
-
Anyone??
Related Questions
-
Google Search Console 'Change of Address': just 301s on the source domain?
Hi all. New here, so please be gentle. 🙂 I've developed a new site, where my client also wanted to rebrand from .co.nz to .nz. On the source (.co.nz) domain, I've set up a load of 301 redirects to the relevant new pages on the new domain (the URL structure is changing as well).
Technical SEO | WebGuyNZ
E.g., on the old domain: https://www.mysite.co.nz/myonlinestore/t-shirt.html
In the .htaccess on the old/source domain, I've set up 301s (using RewriteRule).
So when https://www.mysite.co.nz/myonlinestore/t-shirt.html is accessed, it does a 301 to:
https://mysite.nz/shop/clothes/t-shirt
All these 301s are working fine. I've checked in dev tools and a 301 is being returned. My question is: are 301s on the source domain alone enough to start a 'Change of Address' in Google's Search Console? Their wording indicates they are, but I'm concerned that maybe I also need redirects on the target domain. I.e., does the Search Console Change of Address process work this way? It looks at the source domain URL (that's already in Google's index), sees the 301, then updates the index (and hopefully passes the link juice) to the new URL. Also, I've set up both the source and target Search Console properties as Domain properties. Does that mean I no longer need to specify whether the source and target properties are HTTP or HTTPS? I couldn't see that option when I created the properties. Thanks!
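For reference, a source-domain rule of the kind described above might look like this (a minimal sketch using the question's example URLs; the single-page mapping shown is an assumption, as the real rules presumably cover the full URL structure):

# .htaccess on the old .co.nz domain - illustrative only
RewriteEngine On
# Permanently (301) redirect one old store URL to its new home on the .nz domain
RewriteRule ^myonlinestore/t-shirt\.html$ https://mysite.nz/shop/clothes/t-shirt [R=301,L]

-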
URL with query string being indexed over its parent page?
I noticed earlier this week that this page - https://www.ihasco.co.uk/courses/detail/bomb-threats-and-suspicious-packages?channel=care - was being indexed instead of this page - https://www.ihasco.co.uk/courses/detail/bomb-threats-and-suspicious-packages - for its various keywords. We have rel=canonical tags correctly set up, and all internal links to these pages with query strings are nofollow, so why is the query-string version being indexed? Any help would be appreciated 🙂
Technical SEO | iHasco
-
GWT false reporting, or does Googlebot have a weird crawling ability?
Hi, I hope someone can help me. I have launched a new website and am trying hard to make everything perfect. I have been using Google Webmaster Tools (GWT) to ensure everything is as it should be, but the crawl errors being reported do not match my site. I mark them as fixed, then check again the next day, and it reports the same or similar errors. Example: http://www.mydomain.com/category/article/ (this would be a correct structure for the site). GWT reports http://www.mydomain.com/category/article/category/article/ as a 404 (it does not exist, never has, and never will). I have visited the pages listed as linking to this URL, and they do not link to it in this manner. I have checked the page source code, and all links from the given pages have the correct structure, so I am not able to replicate this type of crawl. This happens across most of the site; I have a few hundred pages, all ending in a trailing slash, and most pages of the site are reported in this manner, making it look like I have close to 1,000 404 errors when I am not able to replicate the crawl using many different methods. The site is using an .htaccess file with redirects and a rewrite condition. The rewrite condition (needed to redirect when there is no trailing slash) is:

RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !\.(html|shtml)$
RewriteCond %{REQUEST_URI} !(.*)/$
RewriteRule ^(.*)$ /$1/ [L,R=301]

The above condition forces the trailing slash on folders. Then we are using redirects in this manner:

Redirect 301 /article.html http://www.domain.com/article/

In addition to the above, we had a development site at http://dev.slimandsave.co.uk whilst I was building the new site, and it had been spidered without my knowledge until it was too late. So when I put the site live, I left the development domain in place (http://dev.domain.com) and redirected it like so:

<IfModule mod_rewrite.c>
RewriteEngine on
RewriteRule ^ - [E=protossl]
RewriteCond %{HTTPS} on
RewriteRule ^ - [E=protossl:s]
RewriteRule ^ http%{ENV:protossl}://www.domain.com%{REQUEST_URI} [L,R=301]
</IfModule>

Is there anything I have done that would cause this type of redirect 'loop'? Any help greatly appreciated.
Technical SEO | baldnut
-
Looking for a technical solution for duplicate content
Hello, are there any technical solutions for duplicate content, similar to the nofollow tag? A tag which could indicate to Google that we know this is duplicate content, but we want it there because it makes sense for the user. Thank you.
Technical SEO | FusionMediaLimited
-
Google insists robots.txt is blocking... but it isn't.
I recently launched a new website. During development, I'd enabled the option in WordPress to prevent search engines from indexing the site. When the site went public (over 24 hours ago), I cleared that option. At that point, I added a specific robots.txt file that only disallowed a couple of directories of files. You can view the robots.txt at http://photogeardeals.com/robots.txt. Google (via Webmaster Tools) is insisting that my robots.txt file contains a "Disallow: /" on line 2 and that it's preventing Google from indexing the site and preventing me from submitting a sitemap. These errors are showing in both the sitemap section of Webmaster Tools and the Blocked URLs section. Bing's webmaster tools are able to read the site and sitemap just fine. Any idea why Google insists I'm disallowing everything, even after telling it to re-fetch?
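For reference, a robots.txt that disallows only a couple of directories, as described above, would look something like this (the directory names are hypothetical, since the question doesn't list the real ones):

# Hypothetical robots.txt - blocks two directories, leaves the rest crawlable
User-agent: *
Disallow: /drafts/
Disallow: /temp/

A stray "Disallow: /" line, as Google reported, would instead block the entire site.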
Technical SEO | ahockley
-
Building URLs: is there a difference between = and -?
I have a product-based search site where the URLs are built dynamically based on the user's input parameters. Currently I use '=' to build the URL from the search parameters, e.g. /condition=New/keywords=Ford+Focus/category=Exterior, etc. Is there any value in using hyphens instead of '='? Could you please share any general guidelines to follow?
Technical SEO | Chaits
-
OpenSite Explorer doesn't show Twitter
When I type competitors' sites into Open Site Explorer, it shows their Twitter page as one of the top links back. However, for my site, our Twitter page isn't even among the links back. Does Twitter, even though it's nofollow, hold any value? Why can't Open Site Explorer see it? And if OSE can't see it, do you think Google can't either?
Technical SEO | PhotoGazza
-
Images on page appear as 404s to Googlebot
When I fetch my website as Googlebot, it returns 404s for all the images on the page, despite the fact that each image is hyperlinked! What could be causing this issue? Thanks!
Technical SEO | Netpace