For starters, congratulations on the improved metrics you're seeing! I'd also have to agree with EGOL on this one: it's important to look at how those numbers are being pulled, because it's very easy to get false positives from small amounts of data right after a large change.
Some tips:
- You may want to dig into your numbers a little deeper and isolate performance by location and user type. Meaning: are your numbers being skewed by internal members of the organization? Your tech team is notorious for generating false numbers if their IPs are not filtered. An easy way to check is to compare % new users before and after launch. If the % of new users is substantially down, drill down into location to see if there is anything fishy going on. Which brings me to the next tip...
- Drill down into city to ensure that all of the traffic isn't coming from one place. You would be surprised at how much traffic is actually bots. If you see a disproportionate number of sessions coming from one city, take a look at the % of new users from that city. If it's in the single digits, you likely have a bot, a developer, or a spammer. (There's a rough sketch of both of these checks just after this list.) I wrote a blog post on how to identify bots (and whether they are creating false positives).
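To make those two checks concrete, here's a minimal sketch. It assumes you've exported session-level data from your analytics tool into a CSV; the column names (`date`, `city`, `user_type`, `sessions`), file name, and launch date are all placeholders, not anything from your actual setup:

```python
import pandas as pd

# Rough sketch only. Assumes a CSV export with hypothetical columns:
# date, city, user_type ("New Visitor" / "Returning Visitor"), and sessions.
# Adjust the names and the launch date to match your own export.
LAUNCH_DATE = "2018-06-01"  # placeholder

df = pd.read_csv("sessions.csv", parse_dates=["date"])

def pct_new_users(frame: pd.DataFrame) -> float:
    """Share of sessions (as a %) that came from new visitors."""
    new = frame.loc[frame["user_type"] == "New Visitor", "sessions"].sum()
    total = frame["sessions"].sum()
    return 100 * new / total if total else 0.0

# Tip 1: did the share of new users change noticeably at launch?
before = df[df["date"] < LAUNCH_DATE]
after = df[df["date"] >= LAUNCH_DATE]
print(f"% new users before launch: {pct_new_users(before):.1f}")
print(f"% new users after launch:  {pct_new_users(after):.1f}")

# Tip 2: drill down by city. A city sending lots of sessions with a
# single-digit % of new users is a candidate bot, internal team, or spammer.
total_by_city = after.groupby("city")["sessions"].sum()
new_by_city = (
    after[after["user_type"] == "New Visitor"]
    .groupby("city")["sessions"].sum()
    .reindex(total_by_city.index, fill_value=0)
)
by_city = pd.DataFrame({
    "sessions": total_by_city,
    "pct_new": 100 * new_by_city / total_by_city,
}).sort_values("sessions", ascending=False)

print(by_city[by_city["pct_new"] < 10].head(10))
```

Any city sending heavy traffic but a single-digit % of new users is worth investigating before you trust the overall averages.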
**As for rankings...**
- Lots of controversy over this one, but more SEOs than not seem to agree that dwell time (the time a visitor spends on your site between leaving the SERPs and returning to them) is an important factor for RankBrain. (There's a toy example of the calculation just after this list.)
- Look up "pogo-sticking" (bouncing straight back to the SERPs and clicking a different result) and its relationship with bounce rate. This is also likely a RankBrain factor.
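You can't measure dwell time yourself, since only the search engine sees both sides of the click, but to make the definition concrete, here's a toy sketch with made-up timestamps. The 10-second pogo-sticking threshold is an arbitrary assumption for illustration, not a known RankBrain value:

```python
from datetime import datetime

# Toy illustration of the dwell-time definition above. The timestamps are
# made up; you won't find this metric in your analytics reports.
serp_click = datetime(2018, 6, 5, 10, 0, 0)    # visitor leaves the SERPs for your site
serp_return = datetime(2018, 6, 5, 10, 0, 20)  # visitor comes back to the SERPs

dwell_seconds = (serp_return - serp_click).total_seconds()
print(f"Dwell time: {dwell_seconds:.0f}s")

# A very short dwell followed by a click on another result is "pogo-sticking".
# The threshold below is an assumption, purely for illustration.
POGO_THRESHOLD_SECONDS = 10
if dwell_seconds < POGO_THRESHOLD_SECONDS:
    print("Looks like pogo-sticking")
else:
    print("The visitor stayed a while before heading back")
```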
In my opinion, if the numbers are real, then even on a cursory look it seems that you have created a better experience for visitors. I would imagine that this **may** result in better rankings; at the very least, better rankings are more likely than not.
Apologies; SEOs never seem to give clear-cut answers, and we qualify every statement.
Jeff