Setting A Custom User Agent in Screaming Frog
-
Hi all,
Probably a dumb question, but I wanted to make sure I get this right.
How do we set a custom user agent in Screaming Frog? I know it's in the configuration settings, but what do I have to do to create a custom user agent specifically for a website?
Thanks much!
- Malika
-
The user agent you crawl with can determine things like whether HTTP/2 comes into play, so there can be a big difference if you change it to something that doesn't take advantage of HTTP/2.
Apparently, HTTP/2 support is coming to Pingdom very soon, just as it is to Googlebot:
http://royal.pingdom.com/2015/06/11/http2-new-protocol/
This is an excellent example of how the user agent can change the way your site is crawled, as well as how efficient the crawl is:
https://www.keycdn.com/blog/https-performance-overhead/
It is important to note that we didn’t use Pingdom in any of our tests because they use Chrome 39, which doesn’t support the new HTTP/2 protocol. HTTP/2 in Chrome isn’t supported until Chrome 43. You can tell this by looking at the User-Agent in the request headers of your test results.
Note: WebPageTest uses Chrome 47, which does support HTTP/2.
Hope that clears things up,
Tom
-
Hi Malika,
Think about Screaming Frog and what it has to detect: to do that correctly it needs the correct user agent syntax, or it will not be able to produce a crawl that satisfies people.
Using proper syntax for a user agent is essential. I have tried to keep this explanation non-technical; I hope it works.
The reason Screaming Frog needs the user agent is that the User-Agent header was added to HTTP to help web application developers deliver a better user experience. By respecting the syntax and semantics of the header, we make it easier and faster for header parsers to extract useful information from the headers that we can then act on.
Browser vendors are motivated to make web sites work no matter what specification violations are made. When the developers building web applications don’t care about following the rules, the browser vendors work to accommodate that. It is only by us application developers developing a healthy respect for the standards of the web that the browser vendors will be able to start tightening up their codebases, knowing that they don’t need to account for non-conformances.
With client libraries that do not enforce the syntax rules, you run the risk of using invalid characters that many server-side frameworks will not detect. It is possible that only certain users, in particular environments, would trigger the syntax violation, which can lead to difficult-to-track-down bugs.
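To make that risk concrete, here is a minimal sketch (assuming the Python requests library; the user agent value is made up) showing that a common client library will happily accept a User-Agent that breaks the header grammar:

```python
import requests

# "my crawler, v2 = test" breaks the RFC 7231 User-Agent grammar: "," and "="
# are not token characters and they are not wrapped in a (comment). The client
# library still accepts it - its own checks do little more than block CR/LF
# injection and leading whitespace - so the violation would reach the server
# unnoticed, and only a stricter parser somewhere downstream would choke on it.
bad_ua = "my crawler, v2 = test"

req = requests.Request("GET", "https://example.com/", headers={"User-Agent": bad_ua})
prepared = req.prepare()               # the library's header validation runs here
print(prepared.headers["User-Agent"])  # my crawler, v2 = test
```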
I hope this is a good explanation; I've tried to keep it to the point.
Respectfully,
Thomas
-
Hi Thomas,
Would you have a simpler tutorial for me to follow? I am struggling a bit.
Thanks heaps in advance
-
I think I want something that is dumbed down to my level for me to understand. The above tutorials are great, but not being a full-time coder, I get lost while reading them.
-
Hi Matt,
I haven't had any luck with this one yet.
-
Hi Malika! How'd it go? Did everything work out?
-
Happy I could be of help. Let me know if there's any issue and I will try to help with it. All the best
-
Hi Thomas,
That's a lot of useful information there. I will have a go at it and let you know how it goes.
Thanks heaps!
-
Please let me know if I did not answer the question or if you have any other questions.
-
This gives you a very clear breakdown of user agents and their syntax rules; please read it: http://www.bizcoder.com/the-much-maligned-user-agent-header
The following is a valid example of a user agent that is full of special characters:
user-agent: foo&bar-product!/1.0a$*+ (a;comment,full=of/delimiters
More references, but pay particular attention to the first URL:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Gecko_user_agent_string_reference
Mozilla/5.0 (X11; Linux i686; rv:10.0) Gecko/20100101 Firefox/10.0
http://stackoverflow.com/questions/15069533/http-request-header-useragent-variable
-
If you format it correctly, following the grammar below,
User-Agent = product *( RWS ( product / comment ) )
and it is received in your headers, then yes, you can fill in the blanks and test it (a rough sketch of such a check follows the links below).
https://mobiforge.com/research-analysis/webviews-and-user-agent-strings
http://mobiforge.com/news-comment/standards-and-browser-compatibility
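As a hedged illustration of "fill in the blanks and test it", the sketch below uses a deliberately simplified regex approximation of that product/comment grammar (nested comments and quoted pairs are ignored, and the function name is my own):

```python
import re

# Rough approximation of the grammar quoted above:
#   User-Agent = product *( RWS ( product / comment ) )
#   product    = token ["/" product-version]
#   comment    = "(" ... ")"   (nesting and quoted-pairs ignored here)
TOKEN = r"[!#$%&'*+\-.^_`|~0-9A-Za-z]+"
PRODUCT = rf"{TOKEN}(?:/{TOKEN})?"
COMMENT = r"\([^()]*\)"
UA_RE = re.compile(rf"^{PRODUCT}(?:[ \t]+(?:{PRODUCT}|{COMMENT}))*$")

def looks_like_valid_user_agent(ua: str) -> bool:
    """Return True if the string roughly matches the product/comment grammar."""
    return bool(UA_RE.match(ua))

print(looks_like_valid_user_agent("Mozilla/5.0 (X11; Linux i686; rv:10.0) Gecko/20100101 Firefox/10.0"))  # True
print(looks_like_valid_user_agent("my crawler, v2 = test"))  # False: "," and "=" sit outside a comment
```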
-
No, you cannot just put anything in there; the site has to be able to recognize it, so think about why you are doing this.
I have listed how to build one, some already-built examples, and what your browser will create, using useragentstring.com (a rough sketch of sending and verifying a hand-built user agent follows the links below).
It must be formatted correctly and work as a header; it is not as easy as it sometimes seems, but not that hard either.
You can use this to make your own from your Mac or PC:
http://www.useragentstring.com/
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2747.0 Safari/537.36
How to build a user agent:
- https://developer.mozilla.org/en-US/docs/Web/HTTP/Gecko_user_agent_string_reference
- https://developer.mozilla.org/en-US/docs/Setting_HTTP_request_headers
- https://msdn.microsoft.com/en-us/library/ms537503(VS.85).aspx
Lists of user agents:
https://support.google.com/webmasters/answer/1061943?hl=en
https://msdn.microsoft.com/en-us/library/ms537503(v=vs.85).aspx
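Here is the "build one and test it" idea as a minimal sketch (the crawler name and info URL are invented placeholders, and httpbin.org is just a convenient echo service, not something mentioned above): send the hand-built user agent and confirm the server receives it exactly as written.

```python
import requests

# Hypothetical custom user agent following the product/comment pattern:
# a product token with a version, plus a comment pointing at a "who is
# crawling me" page. Both the name and the URL are placeholders.
custom_ua = "MyCompanyCrawler/1.0 (+https://example.com/crawler-info)"

# httpbin.org/user-agent simply echoes back the User-Agent it received,
# which is a quick way to confirm the header goes out exactly as written.
resp = requests.get(
    "https://httpbin.org/user-agent",
    headers={"User-Agent": custom_ua},
    timeout=10,
)
print(resp.json())  # {'user-agent': 'MyCompanyCrawler/1.0 (+https://example.com/crawler-info)'}
```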
-
Hi Thomas,
Thanks for responding, much appreciated!
Does that mean, if I type in something like:
HTTP request user agent: Crawler access V2
&
Robots user agent: Crawler access V2
this will work too?
-
To crawl using a different user agent, select ‘User Agent’ in the ‘Configuration’ menu, then select a search bot from the drop-down or type in your desired user agent strings (a quick way to verify the crawl afterwards is sketched after the links below).
http://i.imgur.com/qPbmxnk.png
&
Video http://cl.ly/gH7p/Screen Recording 2016-05-25 at 08.27 PM.mov
Also see:
http://www.seerinteractive.com/blog/screaming-frog-guide/
https://www.screamingfrog.co.uk/seo-spider/user-guide/general/#user-agent
https://www.screamingfrog.co.uk/seo-spider/user-guide/
https://www.screamingfrog.co.uk/seo-spider/faq/
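And as a hedged sketch of verifying the crawl afterwards (assuming you have access to a standard combined-format access log; the log path and the user agent string below are placeholders), count how many logged requests carry the custom user agent you configured:

```python
# Placeholders: adjust the log path and the user agent string you entered in
# Screaming Frog's Configuration > User Agent dialog before the crawl.
CUSTOM_UA = "Crawler access V2"
LOG_PATH = "/var/log/nginx/access.log"

# Combined-format logs include the User-Agent as a quoted field, so a simple
# substring match is enough for a sanity check.
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    hits = sum(1 for line in log if CUSTOM_UA in line)

print(f"{hits} logged requests carry the custom user agent")
```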