Hi. I am attempting to step through the code in a notebook used to create a service within AEP DSW. The service completes weekly and the output looks correct, which implies the code works. However, when I try to run it piece by piece (in a totally separate notebook), the very first step throws an error saying the .utils package does not exist (see error below). I have tried running this a few ways, and all of them give me errors. I checked the pip list and do not see .utils listed, so I am wondering if it is some kind of integration? This code is also provided in the Adobe documentation: https://experienceleague.adobe.com/docs/experience-platform/data-science-workspace/authoring/python....)
Any ideas how to get this to work?
Code:
from .utils import get_client_context
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-14-7b37d326493a> in <module>
----> 1 from .utils import get_client_context

ImportError: attempted relative import with no known parent package
Hi! I am having the same issue and realised you posted this last year... did you ever manage to find a solution?
I never did find a solution to this issue. I ended up rewriting my code so it does not use this method for loading a dataset. I am still not sure why the code works inside a recipe but not elsewhere.
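If I had to guess, the relative import only works in a recipe because Recipe Builder writes the cells out into a package folder that also contains utils.py, so "from .utils import ..." has somewhere to look; a standalone notebook isn't part of any package, so it fails. A rough, untested sketch of how you might get around that (the folder path and config_properties are placeholders you'd fill in yourself):

import sys
sys.path.append("/home/jovyan/my-recipe")    # placeholder: folder where you copied the recipe's utils.py

from utils import get_client_context         # absolute import, no leading dot

config_properties = {}                        # placeholder: the configuration values the recipe normally receives
client_context = get_client_context(config_properties)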
It's been over a year, so I'm having a hard time remembering exactly what I rewrote to avoid this issue, but I generally read in datasets using the code below:
Cell 1 (create the SQL connection):
qs_connect()

Cell 2 (read in the dataset using the SQL connection; %%read_sql is a cell magic, so it has to be the very first line of its own cell):
%%read_sql dataset -c QS_CONNECTION
SELECT *
FROM aep_dataset
LIMIT 5000000
Where:
'dataset' is the name you want the loaded data to have within your notebook
* in the SELECT statement can be replaced by specific fields (if you don't want to include all dataset fields)
'aep_dataset' is the dataset name from the Datasets section of AEP (this method is also nice because you don't need the long dataset ID, you can just use the actual dataset name)
LIMIT 5000000 can be adjusted or removed, depending on dataset size
Running this code puts the loaded data into a pandas dataframe automatically.
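For example, once that cell has run you can check the result with standard pandas methods:

print(dataset.shape)    # (rows, columns)
print(dataset.dtypes)   # column types
dataset.head()          # preview the first few rows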
Hopefully this helps in some way!
Thanks @dellenc
Although when I use SQL in Recipe Builder to read a dataset, I get a syntax error.
I assumed that "%%writefile" and "%%read_sql" in one cell conflict with each other... I tried separating them, but then I get a "dataset is not defined" error.
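For reference, this is roughly the split I tried (just a sketch, with a placeholder file name). My current guess is that %%writefile only writes the cell body to a file without executing it, so the code the recipe actually runs never sees the 'dataset' created in the notebook:

Cell 1 (creates 'dataset' in the notebook session only):
%%read_sql dataset -c QS_CONNECTION
SELECT *
FROM aep_dataset
LIMIT 5000000

Cell 2 (written to the file but never executed here):
%%writefile my_recipe_file.py
# ...recipe code that expects 'dataset' to already exist...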
Oh...well...I guess I will keep trying different things. Many thanks for your response though!