
SOLVED

Public IP Address vs ISP Address


Level 2

Hello. I am trying to clean up some of our data, and looking at it from the raw data field perspective it appears that, particularly in non-US countries (though it happens here as well), most of the IP addresses we are capturing (and copying into an eVar with a processing rule) belong to the ISP rather than the visitor's public IP, and show thousands of visits per day to a single IP. The problem is that a lot of this behavior is not consistent with actual customer behavior and appears to be bots. The only options for manually blocking bots are IP or User Agent. These visitors also delete their cookies after each visit, creating more unique visitors than there are customers in those regions.

How can we block these, or how can we get the public IP instead of the ISP's? The biggest problem is that SOME of the visitors from the ISP address are legitimate customers buying products, so we do not want to block those. Thanks in advance.

5 Replies


Correct answer by
Community Advisor and Adobe Champion

The ISP IP is the public IP; there is no other IP address that you will have access to. This isn't specifically an "Adobe" thing... this is the IP address that is seen by your web server. The ISP is most likely using a shared IP for outgoing traffic, so there's not much you can do on that front.

 

Can you draw any patterns based on User Agent? A lot of spammy or bot traffic tends to use older designations... for the longest time many bots identified as IE6 (so it was really easy to identify them and create compensations).
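
Not from the thread, but a minimal sketch of that idea, assuming custom code in a Launch rule with the AppMeasurement tracker exposed globally as `s`: test the user agent against a few known-stale patterns and record a flag you can segment on later. The patterns and eVar50 are placeholders, not anything Adobe ships.

```javascript
// Launch rule > custom code (illustrative sketch, not a drop-in fix).
// Assumes the AppMeasurement tracker is globally accessible as `s`
// (e.g. "Make tracker globally accessible" in the extension settings).
var ua = navigator.userAgent || '';

// Placeholder patterns -- replace with whatever stale or self-identified
// agents actually show up in your User Agent reports.
var stalePatterns = [
  /MSIE [1-8]\./i,          // very old Internet Explorer (IE6-IE8)
  /bot|crawler|spider/i     // self-declared crawlers that slip past bot rules
];

var looksStale = stalePatterns.some(function (re) {
  return re.test(ua);
});

if (looksStale) {
  // eVar50 is a placeholder data-quality flag; segment on it later.
  s.eVar50 = 'suspect-user-agent';
}
```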

 

The other option, and I know it's not ideal, is to try to identify the behaviours of the traffic you want to exclude with a segment. One of the things I have done (which can be used for aberrant bot behaviour, or for issues on our side that cause inflated stats due to bad code deployments) is to create a "Clean Data" segment and a virtual report suite that uses this segment. All my reports and user access are based on this "Clean Data" virtual suite, and all my virtual suites off of the global suite apply the Clean Data segment along with whatever other segments are needed.

 

This way, if I see odd inflation, I can add the logic to my Clean Data segment and all my virtual suites will get the update... and since the segment is part of the suite, no one has to remember to apply it manually...

 

This is a lot of work though, as you need to update all your reports to use the new suite.


Level 2

Thank you. I kinda figured this was the answer, but was hoping it was not. Unfortunately, User Agent is not a very good indicator these days, and the behavior may be consistent for a handful of bots but not all of them. I'm just looking for a better whack-a-mole solution. I like the "Clean Data" virtual suite, but I fear the upkeep would be a full-time job. The "we stop bots that identify themselves as bots" setting is less than useful.


Community Advisor and Adobe Champion

I hear you! The Clean Data suite is really helpful for the oddities where a deployment breaks something, as opposed to managing bots that make it through the filters...

 

I wonder if you could create some sort of detection code in your implementation that identifies "odd behaviour" and sets some value (a dimension or event) once that visit has been flagged. You could then build a visit exclusion based on that flag value to exclude those visits in a "Clean Data" suite?
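
A minimal sketch of what such detection code could look like, assuming a Launch page-view rule with the tracker available as `s`: count page views in the current browser session and flag the visit once it crosses a rate no real customer is likely to hit. The sessionStorage key, the threshold, and eVar51 are all placeholders.

```javascript
// Launch rule > custom code on each page view (illustrative sketch).
// Counts page views in the current browser session and flags the visit
// once it passes a threshold unlikely for a real customer.
var KEY = 'aa_pv_count';     // placeholder sessionStorage key
var THRESHOLD = 100;         // placeholder threshold -- tune to your traffic

try {
  var count = parseInt(sessionStorage.getItem(KEY) || '0', 10) + 1;
  sessionStorage.setItem(KEY, String(count));

  if (count > THRESHOLD) {
    // eVar51 is a placeholder; a "Clean Data" segment could exclude
    // visits where this flag exists.
    s.eVar51 = 'suspect-high-frequency';
  }
} catch (e) {
  // sessionStorage can be unavailable (e.g. blocked); fail open.
}
```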


Level 2

I was discussing things with my devs, and as a start I think we are going to try using Launch to flag (in an eVar) visitors that do not send the expected headers (many bots do not). That way, we can flag them, watch them for a while (in case any real customers slip through), and then pull them out with a segment, or leave the logic in Launch and just not let the tags go through. It's a start.
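
One caveat on the header idea: request headers are not readable from client-side JavaScript, so a sketch of this has to assume the web server (or something else on the page) exposes a "missing headers" flag that Launch can read and copy into an eVar. The `window._botSignals` object, eVar52, and the extra `navigator.webdriver` check below are all assumptions for illustration, not part of the poster's actual setup.

```javascript
// Launch rule > custom code (illustrative sketch).
// Assumes server-side code writes window._botSignals.missingHeaders = true
// when the request lacked headers a normal browser would send
// (e.g. Accept-Language); the object name is made up for this example.
var signals = window._botSignals || {};

var suspect =
  signals.missingHeaders === true ||
  navigator.webdriver === true;   // extra signal: automation-driven browsers

if (suspect) {
  // eVar52 is a placeholder flag: segment it out, watch it for a while,
  // or use it as a rule condition so the Analytics beacon never fires.
  s.eVar52 = 'suspect-missing-headers';
}
```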


Community Advisor and Adobe Champion

Ok, good luck! Let us know how you make out.