I am having difficulty creating a segment that holds User IDs.
For example, I have a list of over 1,000 User IDs that registered on our website in the last week.
I want to create a segment that contains all of these User IDs, but when I try to do so the app freezes.
How can I create a segment containing these users? I then want to use the segment in Workspace to learn more about their online behavior: where they are from, device type, page views, etc.
The segment I created used the following criteria:
User ID > contains > (pasted-in User IDs)
How should the IDs be pasted in: x,y,z, or xyz, or x|y|z, or some other way?
Create a classification for the user ID custom variable.
In the classification template, find the 1,000 user IDs you want to focus on, and give them all the same classification value.
Upload that file back into the classification importer, then wait for it to process.
Create a segment where "eVar classification" equals "classified value".
Revel in this ultra-simple and well-performing segment, as you now have all your reporting needs met. If you need to change the user IDs in the segment, you can do so by uploading another classification file.
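A minimal sketch of generating that classification file from your ID list (the key and column headers here, like "Segment Group", are assumptions; match them to the headers in the classification template you downloaded for your report suite):

```python
import csv

# Hypothetical sample of user IDs; replace with your ~1,000 registered IDs.
user_ids = ["u1001", "u1002", "u1003"]

# Assumed header names; copy the exact headers from your downloaded template.
key_column = "Key"
classification_column = "Segment Group"
group_value = "Registered Last Week"

# Classification files are tab-delimited.
with open("user_id_classifications.tab", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow([key_column, classification_column])
    for uid in user_ids:
        writer.writerow([uid, group_value])
```

Every ID gets the same classification value, so the segment only needs one rule: "Segment Group equals Registered Last Week".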
It may be possible to use a slightly different approach. If you are putting the IDs into an eVar, there is a metric created for that eVar called the "eVar instance". For example, if the eVar is Cust ID, then there is a metric called Cust ID Instance. You can create a segment that simply says "Cust ID Instance exists", and that will capture all of the instances where there is a customer ID.
Similarly, if you have a classification file, you can select those items that are not "Unspecified".
If you are trying to get a subset of the user IDs, the classification approach could work by having one column that indicates whether or not to include each ID in the segment.
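That include/exclude column could be built like this (a sketch; the "Include In Segment" header and flag values are assumptions, not Adobe-defined names):

```python
import csv

# Hypothetical data: all known user IDs, and the subset you want to segment on.
all_ids = ["u1001", "u1002", "u1003", "u1004"]
ids_to_include = {"u1002", "u1004"}

# Tab-delimited classification file with an include/exclude flag column.
with open("user_id_subset.tab", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["Key", "Include In Segment"])
    for uid in all_ids:
        writer.writerow([uid, "yes" if uid in ids_to_include else "no"])
```

The segment rule then becomes "Include In Segment equals yes", and you can swap the subset later by re-uploading the file with different flags.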
Don't waste your time using Workspace with large volumes of data; it is terrible for that, and freezing is typical on large data sets.
I would use Workspace to prototype the summary-style data pull you want, then run the actual pull in Data Warehouse. One caveat: if the output file is going to be large, deliver it via FTP (try a normal request first; you will be warned if the file is too large).