How can I find my Webmaster Tools HTML file?
-
So, totally amateur hour here, but I can't for the life of me find our HTML verification file for Webmaster Tools. I see nowhere to view it in the Google Webmaster Tools console; I tried a site: search and googled it, but all the info out there is about how to verify a site. Ours is already verified, but I need the verification file code to sync up with the Google API, and no one seems to have it. Any thoughts?
-
It was your second answer that did it. Here's a link to the documentation on it as well, in case anyone else runs across this thread: https://support.google.com/webmasters/bin/answer.py?hl=en&answer=140369&topic=2370564&ctx=topic
Once you say you want to change it, it will give you the option to view the details of your current verification file.
-
"Go into google webmaster tools and click add a site. Then go to the home dashboard (the main screen) > click manage site and then click verify site > click on the alternate methods in the verification tab and there is the html file option"
This was my orignial repsonse but only works if not verified sorry > if verified click manage site > add or remove users > manage site owners > verify using a different method.
-
Awesome, thank you. I knew it was in there somewhere!
-
Your HTML file will be in the root folder of your domain. Do you have FTP access?
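And for anyone who just needs to confirm the file is still being served: the verification file is a small HTML file at the domain root, named after your token, whose body repeats the token. A minimal sketch that checks it over HTTP (the token and domain below are placeholders, substitute your own):

```typescript
// Check that a Google site-verification file is live at the domain root.
// Run with Node 18+ (global fetch). Token and origin are placeholders.
const TOKEN = "google1234567890abcdef"; // hypothetical -- use your own token

async function checkVerificationFile(origin: string): Promise<void> {
  const url = `${origin}/${TOKEN}.html`;
  const res = await fetch(url);
  const body = await res.text();
  // The file body Google expects has the form:
  //   google-site-verification: <token>.html
  const ok =
    res.status === 200 &&
    body.includes(`google-site-verification: ${TOKEN}.html`);
  console.log(`${url} -> HTTP ${res.status} ${ok ? "(intact)" : "(missing or malformed)"}`);
}

checkVerificationFile("https://www.example.com").catch(console.error);
```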
Related Questions
-
Robots File
For some reason, the robots file on this site (http://rushhour.net.au/robots.txt) is producing this result in Google: "www.rushhour.net.au/bootcamp.html: A description for this result is not available because of this site's robots.txt." Can anyone tell me why, please? Thanks.
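For anyone else landing here: that "description is not available" snippet is exactly what Google shows when the page is blocked by robots.txt, so the file almost certainly has a rule matching /bootcamp.html or the whole site. A naive check for whether a path is disallowed for all user agents (sketch only: it ignores Allow rules, wildcards, and per-bot groups):

```typescript
// Fetch a site's robots.txt and do a naive Disallow check for "User-agent: *".
// Simplified sketch: real robots.txt parsing also handles Allow, wildcards,
// and per-bot groups. Run with Node 18+.
async function isDisallowed(origin: string, path: string): Promise<boolean> {
  const res = await fetch(`${origin}/robots.txt`);
  const lines = (await res.text()).split("\n").map((l) => l.trim());

  let inStarGroup = false;
  for (const line of lines) {
    const [field, ...rest] = line.split(":");
    const value = rest.join(":").trim();
    if (/^user-agent$/i.test(field)) {
      inStarGroup = value === "*";
    } else if (inStarGroup && /^disallow$/i.test(field) && value) {
      if (path.startsWith(value)) return true; // prefix match, as crawlers do
    }
  }
  return false;
}

isDisallowed("http://rushhour.net.au", "/bootcamp.html")
  .then((blocked) => console.log(blocked ? "Blocked by robots.txt" : "Crawlable"))
  .catch(console.error);
```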
Technical SEO | SuitsAdmin
-
Webmaster tools reporting spurious errors?
For the past 3 or so months, Webmaster Tools has been reporting 404 errors on my pages, and the odd thing is that I can't figure out what they are seeing. Here is an example of a link they claim is a 404: antiquebanknotes/nationalcurrency/rare/1895-Ten-Dollar-Bill.aspx. This is strange because it's a malformed URL. It says it's linked from this page: http://www.antiquebanknotes.com/antiquebanknotes/rare/1882-twenty-dollar-bill.aspx, which is a URL that doesn't exist; the "antiquebanknotes/" path segment shouldn't be there. Can anyone give me an idea what is happening here? Kind regards, Greg
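Not the original poster, but for readers seeing the same symptom: doubled-path URLs like this are very often produced by a relative href that is missing its leading slash, which crawlers resolve against the directory of the page the link sits on. A quick illustration of the mechanics (this site's paths used purely as an example, not a diagnosis):

```typescript
// How a relative href (no leading slash) resolves against the directory of
// the page it appears on -- a common source of phantom 404 URLs in crawl
// error reports. Paths below are illustrative.
const page = "http://www.antiquebanknotes.com/rare/1882-twenty-dollar-bill.aspx";

// href="/rare/1895-Ten-Dollar-Bill.aspx" -- leading slash, resolves from the root:
console.log(new URL("/rare/1895-Ten-Dollar-Bill.aspx", page).href);
// -> http://www.antiquebanknotes.com/rare/1895-Ten-Dollar-Bill.aspx

// href="rare/1895-Ten-Dollar-Bill.aspx" -- no leading slash, resolves from /rare/,
// yielding a doubled, nonexistent path:
console.log(new URL("rare/1895-Ten-Dollar-Bill.aspx", page).href);
// -> http://www.antiquebanknotes.com/rare/rare/1895-Ten-Dollar-Bill.aspx
```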
Technical SEO | Banknotes
-
How can I best handle parameters?
Thank you for your help in advance! I've read a ton of posts on this forum on this subject, and while they've been super helpful, I still don't feel entirely confident about the right approach to take. Forgive my very obvious noob questions; I'm still learning!

The problem: I am launching a site (coursereport.com) which will feature a directory of schools. The URL for the directory will be coursereport.com/schools, and it can be filtered by a handful of fields:

- Focus (ex: "Data Science")
- Cost (ex: "$<5000")
- City (ex: "Chicago")
- State/Province (ex: "Illinois")
- Country (ex: "Canada")

When a filter is applied to the directory page, the CMS produces a new page with URLs like these:

- coursereport.com/schools?focus=datascience&cost=$<5000&city=chicago
- coursereport.com/schools?cost=$>5000&city=buffalo&state=newyork

My questions:

1) Is the above parameter-based approach appropriate? I've seen other directory sites take a different approach that would transform my examples into more "normal" URLs:

- coursereport.com/schools?focus=datascience&cost=$<5000&city=chicago
- VERSUS coursereport.com/schools/focus/datascience/cost/$<5000/city/chicago (no params at all)

2) Assuming I use either approach above, isn't it likely that I will have duplicate content issues? Each filter does change on-page content, but there could be instances where two different URLs with different filters applied produce identical content (ex: focus=datascience&city=chicago OR focus=datascience&state=illinois). Do I need to specify a canonical URL to solve that case? I understand at a high level how rel=canonical works, but I am having a hard time wrapping my head around which versions of the filtered results ought to be specified as the preferred ones. For example, would I just take all of the /schools?focus=X combinations and call that the canonical version within any filtered page that contained additional parameters like cost or city? Should I be changing page titles for the unique filtered URLs?

I read through a few Google resources to try to better understand how to configure URL params via Webmaster Tools. Is my best bet just to follow the advice in the article below, defining the rules for each parameter there, and not worry about rel=canonical? https://support.google.com/webmasters/answer/1235687

An assortment of the other stuff I've read, for reference:
- http://www.wordtracker.com/academy/seo-clean-urls
- http://www.practicalecommerce.com/articles/3857-SEO-When-Product-Facets-and-Filters-Fail
- http://www.searchenginejournal.com/five-steps-to-seo-friendly-site-url-structure/59813/
- http://googlewebmastercentral.blogspot.com/2011/07/improved-handling-of-urls-with.html
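Not a full answer, but one way to make the canonical decision concrete in code: whitelist the filter parameters you want indexable, emit them in a fixed order, drop everything else, and use the result as the rel=canonical href on every filtered view. A minimal sketch using the five filter names from the question (whether a given filtered combination deserves to be its own canonical is still an editorial call):

```typescript
// Build a canonical URL for a filtered directory page by keeping only
// whitelisted filter parameters in a fixed order. Everything else
// (tracking params, unknown filters) is dropped. Illustrative sketch.
const CANONICAL_PARAMS = ["focus", "cost", "city", "state", "country"] as const;

function canonicalFor(requestUrl: string): string {
  const url = new URL(requestUrl);
  const canonical = new URL(url.origin + url.pathname);
  for (const name of CANONICAL_PARAMS) {
    const value = url.searchParams.get(name);
    if (value) canonical.searchParams.set(name, value); // fixed order, one value each
  }
  return canonical.href;
}

// Two request variants with shuffled/extra params normalize to one canonical URL:
console.log(canonicalFor("https://coursereport.com/schools?city=chicago&focus=datascience&utm_source=x"));
console.log(canonicalFor("https://coursereport.com/schools?focus=datascience&city=chicago"));
// Both -> https://coursereport.com/schools?focus=datascience&city=chicago
```

You would then render that string in the <link rel="canonical"> tag of every filtered view; pages that differ only in dropped or reordered parameters automatically point at the same canonical.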
Technical SEO | alovallo
-
GWT and HTML improvements
Hi all, I am dealing with duplicate content issues in Webmaster Tools, but I still don't understand what's happening, as the number of issues keeps changing. Last week the duplicate meta descriptions were 232, then went down to 170, and now they are back to 218. Same story for duplicate meta titles: 110, then 70, now 114. These ups and downs have been going on for a while, and in the past two weeks I stopped changing things to see what would happen. Also, the issues reported in GWT are different from the ones shown in the Crawl Diagnostics in Moz. Furthermore, most URLs were changed (more than a year ago) and 301 redirects have been implemented, but Google doesn't seem to recognize them. Could anyone help me with this? Also, can you suggest a tool to check redirects? Cheers, Oscar
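No answer on why GWT's counts wobble (those reports lag recrawls, so some drift is normal), but since GWT and Moz disagree, it can help to measure the duplicates yourself against a URL list you trust. A rough sketch that fetches pages and groups them by <title>; the same pattern works for the meta description tag (regex extraction, so treat it as a quick check, not a real parser):

```typescript
// Fetch a set of URLs and group them by <title> to cross-check duplicate
// counts reported by GWT / Moz. Node 18+. Regex extraction is a shortcut;
// a thorough check would use an HTML parser.
async function findDuplicateTitles(urls: string[]): Promise<void> {
  const byTitle = new Map<string, string[]>();
  for (const url of urls) {
    const html = await fetch(url).then((r) => r.text());
    const title =
      /<title[^>]*>([\s\S]*?)<\/title>/i.exec(html)?.[1]?.trim() ?? "(none)";
    byTitle.set(title, [...(byTitle.get(title) ?? []), url]);
  }
  for (const [title, pages] of byTitle) {
    if (pages.length > 1) {
      console.log(`Duplicate title "${title}":\n  ${pages.join("\n  ")}`);
    }
  }
}

findDuplicateTitles([
  "https://www.example.com/",         // placeholder URLs --
  "https://www.example.com/old-page", // substitute your own list
]).catch(console.error);
```

Because fetch follows redirects by default, old URLs that 301 correctly will group under their target's title here, which doubles as a quick redirect check.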
Technical SEO | PremioOscar
-
How to find an internal link that is generating a duplicate
Hello Mozzers, can anybody help me? It's a bit OCD, but I really want to find the internal links within a client's site that are generating duplicate URLs. I did start looking page by page using search, but got a bit stir-crazy! I'm sure one of you smart SEOs will have a simple, clever solution. :) Thanks, Catherine
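One programmatic take on it, rather than searching page by page: crawl some pages, collect every internal href, and group the raw hrefs that normalize to the same page; those groups are exactly the internal links generating duplicate URLs. A minimal single-level sketch (the origin and the normalization rules here, case and trailing slash, are assumptions to adjust for the client's site):

```typescript
// Scan a few pages and report internal hrefs that resolve to the "same"
// page under normalization -- i.e. the links generating duplicate URLs.
// Sketch only: regex link extraction, no crawl delays, one level deep.
const ORIGIN = "https://www.example.com"; // placeholder -- the client's site

function normalize(href: string): string {
  const u = new URL(href);
  // Treat case and trailing-slash variants as the same page.
  return (u.origin + u.pathname).toLowerCase().replace(/\/$/, "");
}

async function findDuplicateLinks(pages: string[]): Promise<void> {
  const variants = new Map<string, Set<string>>(); // normalized -> raw hrefs seen
  for (const page of pages) {
    const html = await fetch(page).then((r) => r.text());
    for (const m of html.matchAll(/href="([^"#]+)"/g)) {
      let abs: string;
      try {
        abs = new URL(m[1], page).href; // resolve relative hrefs
      } catch {
        continue; // skip malformed hrefs
      }
      if (!abs.startsWith(ORIGIN)) continue; // internal links only
      const key = normalize(abs);
      const seen = variants.get(key) ?? new Set<string>();
      seen.add(abs);
      variants.set(key, seen);
    }
  }
  for (const [key, raws] of variants) {
    if (raws.size > 1) console.log(`${key} reached via:\n  ${[...raws].join("\n  ")}`);
  }
}

findDuplicateLinks([`${ORIGIN}/`]).catch(console.error);
```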
Technical SEO | catherine-279388
-
Help with Webmaster Tools "Not Followed" Errors
I have been doing a bunch of 301 redirects on my site to address 404 pages, and in each case I check the redirect to make sure it works. I have also been using tools like Xenu to make sure that I'm not linking to 404 or 301 content from my site. However, on Friday I started getting "Not Followed" errors in GWT. When I check the URL that they tell me produced the error, it seems to redirect correctly. One example is this: http://www.mybinding.com/.sc/ms/dd/ee/48738/Astrobrights-Pulsar-Pink-10-x-13-65lb-Cover-50pk

I tried a redirect tracer and it reports the redirect correctly. Fetch as Googlebot returns the correct page. Fetch as Bingbot in the new Bing Webmaster Tools shows that it redirects to the correct page, but there is a small note that says "Status: Redirection limit reached". I see this on all of the redirects that I check in the Bing webmaster portal.

Do I have something misconfigured? Can anyone give me a hint on how to troubleshoot this type of issue? Thanks, Jeff
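For others troubleshooting the same note: "Redirection limit reached" usually means the bot hit more redirect hops than it will follow, which a browser check or a tracer that auto-follows can hide. A sketch of a hop-by-hop tracer that does not auto-follow, so the whole chain and its length are visible (Node 18+; the hop limit of 5 is an arbitrary choice, not either engine's documented limit):

```typescript
// Follow a redirect chain one hop at a time and print each step.
// fetch(..., { redirect: "manual" }) stops auto-following, so every
// 301/302 hop is visible.
async function traceRedirects(start: string, maxHops = 5): Promise<void> {
  let url = start;
  for (let hop = 0; hop <= maxHops; hop++) {
    const res = await fetch(url, { redirect: "manual" });
    console.log(`${res.status} ${url}`);
    const location = res.headers.get("location");
    if (res.status < 300 || res.status >= 400 || !location) return; // chain ended
    url = new URL(location, url).href; // Location may be relative
  }
  console.log(`Stopped: more than ${maxHops} hops (possible loop or long chain)`);
}

traceRedirects(
  "http://www.mybinding.com/.sc/ms/dd/ee/48738/Astrobrights-Pulsar-Pink-10-x-13-65lb-Cover-50pk"
).catch(console.error);
```

If the trace shows more than one 301 hop, collapsing the chain to a single hop usually clears this kind of limit warning.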
Technical SEO | mybinding1
-
Track PDF files downloaded from my site
I came across some code for tracking PDF files, applied to a link like http://www.example.com/files/map.pdf. A few questions:
1. map.pdf is the name of the PDF file, and "files" is the folder name. Am I right?
2. What will I be able to track using that code: the number of clicks on the links, or how many people downloaded the PDF files?
3. Where in Google will this report be visible?
Thanks a lot.
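The snippet in the question got mangled, but the pattern it describes is attaching an Analytics event to clicks on PDF links. On the three questions, roughly: yes, map.pdf is the file name and files the folder; the event records clicks on the link, not completed downloads; and the counts show up under the Events reports in Google Analytics. A browser-side sketch using the current gtag.js event call (an assumption here, since the original code was likely the older ga.js _trackEvent API; the idea is the same):

```typescript
// Browser-side: report a Google Analytics event whenever a visitor clicks
// a link to a PDF. This counts clicks on the link, not completed downloads.
// Assumes the gtag.js snippet is already installed on the page.
declare function gtag(command: "event", name: string, params: Record<string, string>): void;

document.addEventListener("click", (e) => {
  const link = (e.target as HTMLElement).closest?.('a[href$=".pdf"]');
  if (!(link instanceof HTMLAnchorElement)) return;
  gtag("event", "file_download", {
    file_name: link.pathname.split("/").pop() ?? "", // e.g. "map.pdf"
    link_url: link.href,                             // e.g. ".../files/map.pdf"
  });
});
```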
Technical SEO | seoug_2005
-
Leaving Comments on blogs when HTML is removed
I found the following blog. It is PageRank 5, dofollow: http://www.unssc.org/web1/programmes/rcs/cca_undaf_training_material/teamrcs/forumdetail.asp?ID=32

If you attempt to leave a comment with HTML, the HTML is removed. There is a button which allows you to leave a comment, but if you use it, you get redirected to the domain of the blog, not your site. However, there are still people leaving links with the URL of their intended site, as recently as today. Look at this comment:

Comment posted by: Alex on 09/09/2011: "I love to se percorsi on this site very often"

How is this done, if anyone knows? I got the code down to this: a link whose anchor text is "your keywords", with the important part being the mce_real_href attribute.
Technical SEO | mickey1