I made a report in Workspace and noticed hits from Linux. When I broke it down by User Agent, most of the traffic was from Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.3. It’s also interesting that the Bounce Rate for these visits is very high, at 99%. Are these from bots? If so, is the best way to handle it to create a Processing Rule (in Admin)?
Best answer by Jennifer_Dungan
It can be hard to tell from the User Agent alone….
Those are Linux machines running various versions of Chrome (the one you mentioned is specifically Chrome 130), which is older, but not so old that it raises immediate flags… this might be coming from an IT-controlled computer that doesn’t allow users to update their own browser 🤷
That said, if the behaviour seems suspicious, I would also look at the Geo Location data and at what referrers are potentially driving the traffic… I would also try pulling the data out of Data Warehouse so that you can look at the IP addresses.
Also, what type of content/page is being viewed… is it something that makes sense to come in, read, then leave the site?
There are multiple ways to deal with this if you determine this traffic to be a “bad bot”.
You could create an actual Bot Rule, which will collect basic info about this traffic as a bot (so you can monitor it in your bot report).
If you care about monitoring what bots are doing, this would probably be your best option.
Another method would be to use a Processing Rule (i.e. set a flag in a dedicated dimension to indicate a potential bot), then create a segment to remove “bot” traffic from your reports (but you need to make sure that segment is used everywhere, or build it into a Virtual Report Suite and ensure that all reports use that VRS).
Or you could add logic to your tracking code so it doesn’t trigger for certain User Agents.
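As a rough sketch of that last option, you could check the User Agent before firing the beacon. This is a hypothetical example, not a tested implementation: the regex pattern and the `s.t()` call are assumptions based on a typical AppMeasurement setup, and you would need to tune the pattern to the traffic you actually see.

```javascript
// Hypothetical pattern matching the suspicious UA from the question
// (Linux x86_64 on Chrome 130). Adjust to your own findings.
const SUSPECT_UA = /Linux x86_64.*Chrome\/130\.0\.0\.0/;

function shouldTrack(userAgent) {
  // Returns false when the User Agent matches the suspected-bot pattern
  return !SUSPECT_UA.test(userAgent);
}

// In the browser, you would wrap the page-view call with this check,
// e.g.: if (shouldTrack(navigator.userAgent)) { s.t(); }
```

Keep in mind this only suppresses your own tracking call; unlike Bot Rules, it leaves no record of the bot’s activity to monitor.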
All of these require some level of management… but as I said, using the actual “Bot Rules” is the traditional way to handle this…