Fetch as Google Desktop Render Width?
-
What is Google's minimum desktop responsive webpage width?
Fetch as Google for desktop is showing a skinnier version of our responsive page.
-
Clever PhD hit the nail on the head; his answer is excellent.
-
Howdy!
TLDR - I would estimate the Googlebot desktop renderer runs at about 980 pixels, but there is an easy way to test: adjust the width of your browser window and see if you can duplicate what you see in Google fetch and render.
According to http://www.w3schools.com/browsers/browsers_display.asp, 97% of browsers have a display width of 1024 pixels or greater. Therefore, if you design to that 1024-pixel minimum, your width would be appropriate for pretty much everyone. That said, you might want to go with 980 pixels to account for things like scroll bars and the fact that most people do not browse in full screen. This is a pretty standard starting point for width.
When you use fetch and render, Google uses one of its bots depending on the type of page: https://support.google.com/webmasters/answer/6066468?hl=en
When Google talks about responsive design (https://developers.google.com/webmasters/mobile-sites/mobile-seo/responsive-design), it notes, "When the meta viewport element is absent, mobile browsers default to rendering the page at a desktop screen width (usually about 980px, though this varies across devices)." In other words, Google's own documentation gives a nod to 980 pixels being a "standard desktop width."
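For reference, the meta viewport element that quote refers to is the standard one-liner below (generic boilerplate, not anything pulled from your site); when it is missing, renderers that apply the fallback lay the page out at roughly that 980px desktop width.

<!-- present: layout tracks the device width; absent: ~980px desktop-width fallback -->
<meta name="viewport" content="width=device-width, initial-scale=1">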
With that in mind, I would look at your site and see whether this checks out. If you have set up the page to look "normal" at widths greater than 980 pixels, say 1200 pixels, set your browser width to 1200 pixels. Then play with the width of the browser and see if you can get it to match what you see in Google fetch and render. If your site looks the same as fetch and render when your browser is at 980 pixels, then you have confirmed the Googlebot desktop viewport size.
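As a hedged illustration of why a responsive page can come back looking skinny (the 1200px/980px breakpoint below is made up for the example, not taken from your stylesheet): if the stylesheet only serves the full-width layout above a breakpoint that is wider than Googlebot's viewport, fetch and render will show the narrower variant.

<style>
  /* full-width layout for wide viewports (illustrative values) */
  .content { width: 1200px; margin: 0 auto; }

  /* anything narrower than the breakpoint, including a ~980px Googlebot
     viewport, falls back to the fluid "skinny" layout */
  @media (max-width: 1199px) {
    .content { width: auto; padding: 0 20px; }
  }
</style>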
You could also set up a simple test page with several images on separate rows that are 950px, 980px, 1000px, 1200px, etc. wide (a sketch follows below). Run fetch and render on it and see which images get clipped, though I like my first suggestion better.
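If you do try the test-page route, a minimal sketch might look like the following (file names, heights, and alt text are placeholders; any fixed-width images will do). Whichever bars get clipped in fetch and render bracket the rendering width.

<!DOCTYPE html>
<html>
<head><title>Googlebot width test</title></head>
<body>
  <!-- one fixed-width image per row; compare which ones overflow in fetch and render -->
  <div><img src="bar-950.png" width="950" height="20" alt="950px bar"></div>
  <div><img src="bar-980.png" width="980" height="20" alt="980px bar"></div>
  <div><img src="bar-1000.png" width="1000" height="20" alt="1000px bar"></div>
  <div><img src="bar-1200.png" width="1200" height="20" alt="1200px bar"></div>
</body>
</html>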
Have fun!
Related Questions
-
Favicon not showing in Google SERPs
Hi, I have a website where the favicon is not showing in the Google mobile SERPs. The default (world) icon appears instead. This is the tag I have placed in the head section of the website: <link rel="shortcut icon" href="/favicon.ico" /> The size of the favicon is 48x48 and it appears correctly in the browser tab. I've checked that the Google robot can crawl it, and in the server logs I can see requests from the "Google Favicon" user agent. Has anyone had this same problem? Any advice?
Technical SEO | dMaLasp
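On the favicon question above, a hedged aside rather than anything from the original thread: a commonly recommended form of the markup is a plain rel="icon" declaration on the homepage with an explicit square size (the path and PNG format below are illustrative), since rel="shortcut icon" is a legacy value.

<!-- in the homepage <head>; 48x48 satisfies the "multiple of 48px" guideline -->
<link rel="icon" href="/favicon-48.png" sizes="48x48" type="image/png">

Even with markup Google accepts, the mobile SERP icon can take a while to refresh after the Google Favicon user agent re-fetches it.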
Page disappears from Google search results
Hi, I recently encountered a very strange problem. One of the pages I published on my website ranked very well for a couple of days in the top 5; then, after a couple of days, the page completely vanished. No matter how directly I search for it, it does not appear in the results. I checked GSC and everything seems normal, but when checking Google Analytics I find it strange that there has been no data for the page since it disappeared, and it also does not show up in the 'active pages' section no matter how many different computers I keep it open on. I have checked up to page 9 and used a couple of keyword tools, and it appears nowhere! It didn't have any backlinks, but it was unique and high quality. I have checked that the page does still exist and is still readable. Has this happened to anyone before? Any thoughts would be gratefully received.
Technical SEO | JoelssonMedia
Image Indexing Issue by Google
Hello All, my URL is www.thesalebox.com. I submitted my image sitemap in Google Webmaster Tools on 10th Oct 2013, but Google still has not indexed any of my web images. Please refer to my sitemaps, www.thesalebox.com/AppliancesHomeEntertainment.xml and www.thesalebox.com/Hardware.xml; my webmaster status and image indexing status are below. Can you please help me understand why my images are not indexed in Google yet? Is there any issue? Please give me suggestions. Thanks!
Technical SEO | CommercePundit
Pages removed from Google index?
Hi All, I had around 2,300 pages in the Google index until a week ago. The index removed a load and left me with 152 submitted, 152 indexed? I have just re-submitted my sitemap and will wait to see what happens. Any idea why it has done this? I have seen a drop in my rankings since. Thanks
Technical SEO | TomLondon
CDN Being Crawled and Indexed by Google
I'm doing an SEO site audit, and I've discovered that the site uses a Content Delivery Network (CDN) that's being crawled and indexed by Google. Two sub-domains from the CDN are being crawled and indexed, and a small number of organic search visitors have come through them, so in a few cases the CDN-based content is out-ranking the root domain. It's a huge duplicate content issue (tens of thousands of URLs being crawled). What's the best way to prevent the crawling and indexing of a CDN like this? Exclude via robots.txt? Additionally, the use of relative canonical tags (instead of absolute) appears to be contributing to the problem; as I understand it, these canonical tags are telling the SEs that each sub-domain is the "home" of the content/URL. Thanks! Scott
Technical SEO | Scott-Thomas
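For the CDN question above, a hedged sketch of the absolute-canonical part of the fix (the domains are placeholders, not the audited site): if the identical canonical tag is served on the root-domain page and on its CDN copies, an absolute URL makes every copy point back at the root domain instead of at itself.

<!-- served unchanged on www.example.com and on cdn1.example.com -->
<link rel="canonical" href="https://www.example.com/some-page/">

Blocking the CDN sub-domains in their own robots.txt is the other common lever, though URLs blocked that way can linger in the index, so the canonical (or a noindex header on the CDN responses) tends to be the cleaner signal.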
How does Google Crawl Multi-Regional Sites?
I've been reading up on this on Webmaster Tools but just wanted to see if anyone could explain it a bit better. I have a website which is going live soon which is going to be set up to redirect to a localised URL based on the IP address i.e. NZ IP ranges will go to .co.nz, Aus IP addresses would go to .com.au and then USA or other non-specified IP addresses will go to the .com address. There is a single CMS installation for the website. Does this impact the way in which Google is able to search the site? Will all domains be crawled or just one? Any help would be great - thanks!
Technical SEO | lemonz
Tags showing up in Google
Yesterday a user pointed out to me that tags were being indexed in Google search results and that this was not a good idea. I went into my Yoast settings and checked the "nofollow, index" setting in my Taxonomies, but when checking the source code for nofollow, I found nothing. So instead, I went into robots.txt and disallowed /tag/. Is that OK, or is that a bad idea? The site is The Tech Block for anyone interested in looking.
Technical SEO | ttb
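On the tag-archive question above, a hedged aside: the markup usually used to keep tag archives out of the index is a meta robots noindex like the sketch below (a "nofollow, index" combination leaves the pages indexable). A robots.txt Disallow on /tag/ stops crawling but does not by itself remove pages that are already indexed, so the noindex route assumes the tag pages stay crawlable.

<!-- in the <head> of each /tag/ archive page; "follow" keeps internal links passing signals -->
<meta name="robots" content="noindex, follow">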
Why does Google index my IP URL?
Hi guys, a question please. If you search site:112.65.247.14, you can see Google has indexed our website's IP address, which could create duplicate content with our darwinmarketing.com pages. I am not quite sure why Google indexes my IP pages alongside the domain pages; I understand this could be because of backlinks, internal links, etc., but I don't see any obvious issues there. I have also submitted a request to the Google team to remove the IP address from the index, but with no luck so far. Do you have any other suggestions? I was trying to use the change of address setting in Google Webmaster Tools, but it wasn't allowed, as it said "Restricted to root level domains only". Any ideas? Thank you! boson
Technical SEO | DarwinChinaSEO