Setting A Custom User Agent in Screaming Frog
-
Hi all,
Probably a dumb question, but I wanted to make sure I get this right.
How do we set a custom user agent in Screaming Frog? I know it's in the configuration settings, but what do I have to do to create a custom user agent specifically for a website?
Thanks much!
- Malika
-
The user agent you set determines things like whether HTTP/2 comes into play, so there can be a big difference if you change it to something that doesn't take advantage of HTTP/2.
Apparently, HTTP/2 support is coming to Pingdom very soon, just as it is to Googlebot.
http://royal.pingdom.com/2015/06/11/http2-new-protocol/
This is an excellent example of how a user agent can change the way your site is crawled, as well as how efficient that crawl is.
https://www.keycdn.com/blog/https-performance-overhead/
It is important to note that we didn't use Pingdom in any of our tests because they use Chrome 39, which doesn't support the new HTTP/2 protocol. HTTP/2 in Chrome isn't supported until Chrome 43. You can tell this by looking at the User-Agent in the request headers of your test results.
Note: WebPageTest uses Chrome 47, which does support HTTP/2.
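To make that version check concrete, here is a minimal sketch in Python (my own choice of language; the "HTTP/2 from Chrome 43 onwards" rule comes from the quoted passage, and the example UA strings are illustrative, not real build numbers):

import re

# Rough sketch: read the Chrome major version out of a User-Agent string and
# apply the "HTTP/2 from Chrome 43 onwards" rule quoted above.
CHROME_RE = re.compile(r"Chrome/(\d+)")

def chrome_supports_http2(user_agent: str) -> bool:
    match = CHROME_RE.search(user_agent)
    if not match:
        return False  # not Chrome, or the version is not visible in the UA string
    return int(match.group(1)) >= 43

# Version numbers below are illustrative placeholders.
print(chrome_supports_http2("Mozilla/5.0 ... Chrome/39.0.0.0 Safari/537.36"))  # False (Chrome 39, like Pingdom)
print(chrome_supports_http2("Mozilla/5.0 ... Chrome/47.0.0.0 Safari/537.36"))  # True (Chrome 47, like WebPageTest)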
Hope that clears things up,
Tom
-
Hi Malika,
Think about Screaming Frog and what it has to detect: to do that correctly it needs the correct user agent syntax, or it will not be able to produce a crawl that satisfies anyone.
Using proper syntax for a user agent is essential. I have tried to keep this explanation non-technical; I hope it works.
The reason Screaming Frog needs the user agent is that the User-Agent header was added to HTTP to help web application developers deliver a better user experience. By respecting the syntax and semantics of the header, we make it easier and faster for header parsers to extract useful information from the headers that we can then act on.
Browser vendors are motivated to make web sites work no matter what specification violations are made. When the developers building web applications don't care about following the rules, the browser vendors work to accommodate that. It is only by application developers developing a healthy respect for the standards of the web that browser vendors will be able to start tightening up their codebases, knowing that they don't need to account for non-conformances.
With client libraries that do not enforce the syntax rules, you run the risk of using invalid characters that many server-side frameworks will not detect. It is possible that only certain users, in particular environments, would trip over the syntax violation, which can lead to difficult-to-track-down bugs.
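As a concrete illustration of that last point, here is a minimal sketch using Python's requests library (my choice of library, not something from this thread). It sends a custom User-Agent, and as far as I know requests does not check the value against the User-Agent grammar, so keeping it syntactically valid is entirely up to you. The crawler name and URLs below are placeholders.

import requests  # third-party: pip install requests

# Minimal sketch: send a request with a custom User-Agent header.
# The library passes the value through without validating it against the
# HTTP grammar, so an invalid string would be sent silently.
custom_ua = "MyCrawler/1.0 (+https://example.com/bot-info)"  # hypothetical crawler name

response = requests.get(
    "https://example.com/",  # placeholder URL
    headers={"User-Agent": custom_ua},
    timeout=10,
)
print(response.status_code)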
I hope this is a good explanation; I've tried to keep it to the point.
Respectfully,
Thomas
-
Hi Thomas,
Would you have a simpler tutorial for me to follow? I am struggling a bit.
Thanks heaps in advance
-
I think I need something that is dumbed down to my level. The tutorials above are great, but not being a full-time coder, I get lost while reading them.
-
Hi Matt,
I haven't had any luck with this one yet.
-
Hi Malika! How'd it go? Did everything work out?
-
Happy I could be of help. Let me know if there's any issue and I will try to help with it. All the best
-
Hi Thomas,
That's a lot of useful information there. I will give it a go and let you know how it went.
Thanks heaps!
-
Please let me know if I did not answer the question or if you have any other questions.
-
Please read this first: http://www.bizcoder.com/the-much-maligned-user-agent-header. It gives you a very clear breakdown of user agents and their syntax rules. The following is a valid example of a user agent that is full of special characters:
user-agent: foo&bar-product!/1.0a$*+ (a;comment,full=of/delimiters
More references (pay particular attention to the first URL):
https://developer.mozilla.org/en-US/docs/Web/HTTP/Gecko_user_agent_string_reference
Mozilla/5.0 (X11; Linux i686; rv:10.0) Gecko/20100101 Firefox/10.0
http://stackoverflow.com/questions/15069533/http-request-header-useragent-variable
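If you want to see that a string like the special-character example above really does travel intact, here is a small sketch that sends it and reads back what the server received. Using httpbin.org/user-agent as the echo endpoint is my suggestion, not something from the links above.

import requests

# Sketch: send the special-character example and read back what the server saw.
weird_ua = "foo&bar-product!/1.0a$*+ (a;comment,full=of/delimiters"

echoed = requests.get(
    "https://httpbin.org/user-agent",  # simple echo service for request headers
    headers={"User-Agent": weird_ua},
    timeout=10,
).json()

print(echoed["user-agent"])  # prints the string exactly as it was sent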
-
If you format it correctly, following the grammar below, and it is received in your request headers, then yes, you could fill in the blanks and test it.
User-Agent = product *( RWS ( product / comment ) )
https://mobiforge.com/research-analysis/webviews-and-user-agent-strings
http://mobiforge.com/news-comment/standards-and-browser-compatibility
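If you want a quick, rough check of a string against that grammar before pasting it into a crawler, here is a sketch with a simplified regex. It is only an approximation of the RFC rules (it ignores nested comments and a few token edge cases), so treat it as a hint rather than a validator.

import re

# Rough approximation of:  User-Agent = product *( RWS ( product / comment ) )
#                          product    = token ["/" product-version]
TOKEN = r"[!#$%&'*+.^_`|~0-9A-Za-z-]+"
PRODUCT = rf"{TOKEN}(?:/{TOKEN})?"
COMMENT = r"\([^()]*\)"  # flat comments only, no nesting
UA_RE = re.compile(rf"^{PRODUCT}(?:\s+(?:{PRODUCT}|{COMMENT}))*$")

print(bool(UA_RE.match("Mozilla/5.0 (X11; Linux i686; rv:10.0) Gecko/20100101 Firefox/10.0")))  # True
print(bool(UA_RE.match("my crawler/1.0 (unterminated")))  # False: the comment is never closed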
-
No, you cannot just put anything in there. The site has to recognize it, and may well ask why you are doing this.
Below I have listed how to build a user agent, some already-built ones, and what your own browser creates, using useragentstring.com.
It must be formatted correctly and work as a header; it is not as easy as it sometimes seems, but not that hard either.
You can use this to make your own from your Mac or PC:
http://www.useragentstring.com/
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2747.0 Safari/537.36
How to build a user agent:
- https://developer.mozilla.org/en-US/docs/Web/HTTP/Gecko_user_agent_string_reference
- https://developer.mozilla.org/en-US/docs/Setting_HTTP_request_headers
- https://msdn.microsoft.com/en-us/library/ms537503(VS.85).aspx
Lists of user agents:
https://support.google.com/webmasters/answer/1061943?hl=en
https://msdn.microsoft.com/en-us/library/ms537503(v=vs.85).aspx
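One common convention (my description of general practice, not something required by the links above) is to start from a browser string like the one from useragentstring.com and append your own product token with a contact URL, so site owners can tell who is crawling them. A minimal sketch, with a made-up crawler name and URL:

# Sketch: append an identifying product token to a browser User-Agent string.
browser_ua = (
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2747.0 Safari/537.36"
)
crawler_token = "MySiteAudit/1.0 (+https://example.com/crawler-info)"  # hypothetical name and URL

custom_ua = f"{browser_ua} {crawler_token}"
print(custom_ua)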
-
Hi Thomas,
Thanks for responding, much appreciated!
Does that mean, if I type in something like this, it will work too?
HTTP request user agent: Crawler access V2
&
Robots user agent: Crawler access V2
-
To crawl using a different user agent, select ‘User Agent’ in the ‘Configuration’ menu, then select a search bot from the drop-down or type in your desired user agent strings.
http://i.imgur.com/qPbmxnk.png
&
Video http://cl.ly/gH7p/Screen Recording 2016-05-25 at 08.27 PM.mov
Also see:
http://www.seerinteractive.com/blog/screaming-frog-guide/
https://www.screamingfrog.co.uk/seo-spider/user-guide/general/#user-agent
https://www.screamingfrog.co.uk/seo-spider/user-guide/
https://www.screamingfrog.co.uk/seo-spider/faq/
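Once you have changed the setting, one way to confirm what the spider actually sends is to point it at a tiny local page that logs the header. Below is a minimal sketch using Python's standard library; the port and the idea of crawling http://localhost:8000/ are my assumptions, not part of the Screaming Frog documentation linked above.

from http.server import BaseHTTPRequestHandler, HTTPServer

# Sketch: a tiny local server that prints the User-Agent of every request.
# Point Screaming Frog at http://localhost:8000/ to see the header it sends.
class EchoUserAgent(BaseHTTPRequestHandler):
    def do_GET(self):
        print("User-Agent received:", self.headers.get("User-Agent"))
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>ok</body></html>")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), EchoUserAgent).serve_forever()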