Are hits from Linux most likely generated from bots? | Community
Level 4
February 24, 2026
Solved

Are hits from Linux most likely generated from bots?

  • February 24, 2026
  • 1 reply
  • 12 views

I made a report in Workspace and noticed hits from Linux. I broke it down by User Agent, and most of the traffic was from Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.3. It’s also interesting that the Bounce Rate from these visits is very high, at 99%. Are these from bots? If so, is the best way to handle it to make a Processing Rule (in Admin)?

    Best answer by Jennifer_Dungan

    It can be hard to tell from the User Agent alone…

    Those are Linux machines running various versions of Chrome (the one you mentioned is specifically Chrome 130), which is older, but not so old that it raises immediate flags… this might be coming from an IT-controlled computer that doesn’t allow users to update their own browser 🤷

    That said, if the behaviour seems suspicious, I would also look at the Geo Location data and which referrers are potentially driving the traffic… I would also try pulling the data out via Data Warehouse so that you can look at the IP addresses.

    Also, what type of content/page is being viewed… is it something that makes sense for a visitor to come in, read, then leave the site?

    There are multiple ways to deal with this if you determine the traffic is from a “bad bot”.

    You could create an actual Bot Rule, which will collect basic info about this traffic as a bot (so you can monitor it in your bot report).

    If you care about monitoring what bots are doing, this would probably be your best option.
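    For context, the user-agent condition such a Bot Rule expresses boils down to a pattern match on the UA string. A minimal sketch in JavaScript (the actual rule is configured in the Admin UI, not in code, and the pattern below is only an illustration targeting the UA from the question):

    ```javascript
    // The suspicious user agent reported in the question.
    const suspiciousUA =
      "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 " +
      "(KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.3";

    // A "user agent contains" style condition: X11 Linux desktop on Chrome 130.
    // Tighten or broaden this before basing a real Bot Rule on it.
    const botPattern = /\(X11; Linux x86_64\).*Chrome\/130\./;

    console.log(botPattern.test(suspiciousUA)); // true for the UA above
    ```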

     

     

    Other methods would be to do something with a Processing Rule (e.g. set a flag in a dedicated dimension to indicate a potential bot), then create a segment to remove “bot” traffic from your reports (but you need to make sure that segment is used everywhere, or build it into a Virtual Report Suite and ensure all reports use that VRS).

    Or you could add rules to your tracking so that it doesn’t trigger for certain User Agents.
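    If you go the tracking route, that could look like a small guard around the beacon call. A hypothetical sketch (the blocklist and helper name are made up for illustration, and the commented usage assumes the AppMeasurement library’s s.t() page-view call):

    ```javascript
    // Hypothetical guard: skip sending the Analytics beacon when the
    // browser's user agent matches a known-bad pattern. This blocklist
    // is illustrative, not an Adobe-maintained list.
    const BLOCKED_UA_PATTERNS = [
      /\(X11; Linux x86_64\).*Chrome\/130\./, // the UA from the question
    ];

    function isBlockedUserAgent(ua) {
      return BLOCKED_UA_PATTERNS.some((pattern) => pattern.test(ua));
    }

    // Usage (in the browser, around the AppMeasurement call):
    // if (!isBlockedUserAgent(navigator.userAgent)) {
    //   s.t(); // send the page-view beacon
    // }
    ```

    The downside of this approach is that the filtered hits are gone entirely: unlike Bot Rules, there is no report where you can still monitor them.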

     

     

    All of these require some level of management… but as I said, using the actual “Bot Rules” is the traditional way to handle this… 


    Level 4
    February 26, 2026

    I will check the things you mentioned. Thanks!

    Jennifer_Dungan
    Community Advisor and Adobe Champion
    February 26, 2026

    Good Luck!

    Bots are a pain to deal with… and new varieties pop up all the time… but at the same time, we don’t want to throw out legitimate traffic…