Accessing the Wednesday webinar series
-
Is there a place where I can access the Wednesday webinar recordings? I missed the one today and was wondering if they are available at any time, or if they are live only.
Thanks in advance
-
They aren't available yet, but they should be this week. I've assigned this question to Abe, our Helpster in charge of uploading them. He'll let you know when they are up.
-
Thanks, Erica.
Any idea when they will be uploaded?
-
Yes, that one was recorded (we plan on recording all of them, but had some technical difficulties with the first few). The recording will go up on our webinars page: http://www.seomoz.org/webinars
Enjoy!
Related Questions
-
Unsolved: Ooops. Our crawlers are unable to access that URL
Hello,
I have entered my site faroush.com but I got this error:
"Ooops. Our crawlers are unable to access that URL - please check to make sure it is correct"
What is the problem?
Moz Pro | ssblawton2533
Allow only Rogerbot, not Googlebot nor undesired access
I'm in the middle of site development and want to start crawling my site with Rogerbot while preventing Googlebot and similar crawlers from reaching it. My site is currently protected with a login (a basic Joomla offline site, username and password required), so I thought a good solution would be to remove that limitation and use .htaccess to password-protect it for all users except Rogerbot.
Reading here and there, it seems that practice is not recommended, as it could lead to security holes: any other user could see the allowed agents and emulate them. OK, maybe you'd need to be a hacker/cracker, or an experienced developer, to get that information, but I was not able to find clear guidance on how to proceed securely. The other option was to keep using Joomla's access limitation for everyone except Rogerbot; I'm still not sure how feasible that would be.
Mostly, my question is: how do you work on your site before you want it indexed by Google or similar, whether or not you use a CMS? Is there some other way to do this? I would love to have my site ready and crawled before launching it, to avoid fixing issues afterwards. Thanks in advance.
Moz Pro | MilosMilcom
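The .htaccess approach described above can be sketched as follows. This is a hedged example using Apache 2.2-style directives: the password file path and realm name are hypothetical placeholders, and, as the question itself notes, User-Agent matching only keeps out well-behaved crawlers, since the string can be spoofed.

```apache
# Let requests whose User-Agent contains "rogerbot" (Moz's crawler) through
# without a password; everyone else gets HTTP Basic auth.
# NOTE: /home/example/.htpasswd and the realm name are placeholders.
SetEnvIfNoCase User-Agent "rogerbot" allow_moz

AuthType Basic
AuthName "Staging site"
AuthUserFile /home/example/.htpasswd
Require valid-user

Order Deny,Allow
Deny from all
Allow from env=allow_moz
Satisfy Any
```

With `Satisfy Any`, a request passes if either condition holds: the environment variable set by the User-Agent match, or valid Basic-auth credentials. On Apache 2.4 the equivalent would use `Require env allow_moz` inside a `<RequireAny>` block instead of the `Order`/`Satisfy` directives.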
Did Moz stop doing webinars?
The last recorded webinar is from April. Did Moz stop doing these? Luckily I have all the MozCon videos to go through (which are awesome, by the way; thanks!).
Moz Pro | DavidKonigsberg
When's the next Webinar?
I really love the webinars! I listen to them on long walks occasionally, but I haven't seen one in May or June. When will the next one be? Thanks! I apologize if this should have gone to the help desk...
Moz Pro | WilliamBay
Can I give other accounts access?
I would like to be able to give limited access to members of our team so they can see SEO campaign results and print off reports without being able to edit the campaigns. Is this possible?
Moz Pro | wouldBseoKING
Critical factor: Accessible to Engines
Hello, I don't understand the "Accessible to Engines" critical factor, which reports:

Crawl status
Status Code: 200
meta-robots: None
meta-refresh: 0; URL=/shop/searchresult.seam
X-Robots: None

Explanation
Pages that can't be crawled or indexed have no opportunity to rank in the results. Before tweaking keyword targeting or leveraging other optimization techniques, it's essential to make sure this page is accessible.

Recommendation
Ensure the URL returns the HTTP code 200 and is not blocked with robots.txt, meta robots or x-robots protocol (and does not meta refresh to another URL).

My data
This is the content of my index and home page:

and this is my robots.txt file content:

User-agent: *
Disallow: /shop/debug.seam
Disallow: /bhimg/
Disallow: /shop/cart/
Disallow: /shop/G10/
Disallow: /shop/help/
Disallow: /shop/img/
Disallow: /shop/jQueryUI/
Disallow: /shop/js/
Disallow: /shop/layout/
Disallow: /shop/myShop/
Disallow: /shop/newUser/
Disallow: /shop/shop/
Disallow: /shop/staticPages/
Disallow: /shop/stylesheet/
Disallow: /shop/error.seam
Disallow: /shop/login.seam
Disallow: /shop/login.seam
Disallow: /shop/test/
Disallow: /shop/utility/
Disallow: /shop/zoomifyer/

Thanks for any reply.
Moz Pro | lbecarelli
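A side note on the data above: none of the quoted Disallow rules match the reported page, so the robots.txt itself is probably not what blocks crawling; the meta-refresh to /shop/searchresult.seam shown in the crawl status is the more likely cause, exactly as the Recommendation suggests. Here is a minimal sketch, using only Python's standard library, of how a crawler evaluates rules like these (the rules and tested paths are a subset taken from the quoted file):

```python
# Evaluate a robots.txt body against candidate paths, the way a polite
# crawler would. The rules below are a subset of the file quoted above.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /shop/debug.seam
Disallow: /shop/cart/
Disallow: /shop/error.seam
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("*", "/shop/cart/item1"))         # False: under a Disallow prefix
print(rp.can_fetch("*", "/shop/searchresult.seam"))  # True: no rule matches it
```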
I'd like to hook up an SEOmoz campaign to one of my clients that I access through the AdWords MCC.
I can't seem to figure out how to go about it. Any help would be appreciated. Thanks, JT
Moz Pro | Johnthy32
Confounding "Accessible to Engines" error?
Most of the pages on our site fail the "Accessible to Engines" test in the SEOmoz reports. We cannot find any problem with the code, and it's largely identical to the few pages that come up with an "A" score. One item that may be the reason is that we use meta http-equiv="refresh" content="600; for example on www.weatherzone.com.au/nsw/sydney/sydney. We use this to refresh dynamic content on our site. Do search engines penalise pages that use this form of page refresh? Alternatively, is there a known bug in the SEOmoz "Accessible to Engines" report? Many thanks
Moz Pro | weatherzone
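A meta refresh that points to a different URL (as in the /shop question earlier in this list) can stop a page from being indexed, whereas a timed same-page refresh such as content="600" is generally treated differently, though audit tools may still flag it. Below is a hedged, standard-library sketch of how a checker could extract the refresh delay and target from page HTML; the class name and parsing details are illustrative, not SEOmoz's actual implementation:

```python
# Extract the delay and optional target URL from a <meta http-equiv="refresh">
# tag. A target pointing at another URL is the case that typically blocks
# indexing; a bare delay (e.g. content="600") is a same-page refresh.
from html.parser import HTMLParser

class MetaRefreshFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.refresh = None  # (delay, target URL or None) once found

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)  # HTMLParser lowercases tag and attribute names
        if tag == "meta" and a.get("http-equiv", "").lower() == "refresh":
            content = a.get("content", "")
            delay, _, rest = content.partition(";")
            target = rest.split("=", 1)[1].strip() if "=" in rest else None
            self.refresh = (delay.strip(), target)

html = ('<html><head><meta http-equiv="refresh" '
        'content="0; URL=/shop/searchresult.seam"></head></html>')
finder = MetaRefreshFinder()
finder.feed(html)
print(finder.refresh)  # ('0', '/shop/searchresult.seam')
```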