Has anybody tried something like this before? I have a custom external data source that I was able to connect to Adobe Campaign, and it works fine if I just pull some data and immediately change the dimension to use the Adobe internal database.
However, I have use cases where I cannot do that at the beginning, and any activity in Adobe Campaign that uses the internal templates to build the query will fail, as the syntax is slightly different.
The first time, I got the error "Unable to load document of identifier 'xtk:crdb_<mydb>.xsl' and type 'xtk:xslt'." So what I did was create the following .xsl files for this case.
The crdb.xsl file is a reference that loads and calls tools_.xsl, which contains templates for creating tables and inserting into them. My problem is that no matter what I change in these .xsl files, the query submitted by Adobe Campaign doesn't change.
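For reference, the wiring between the two files can be as simple as an xsl:include. This is only a hypothetical sketch based on the error message above — the exact `xtk:` URI form and which templates Adobe Campaign expects to find are assumptions, not confirmed behavior:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- crdb_<mydb>.xsl — hypothetical sketch; the href scheme and file
     naming are assumptions inferred from the loader error above. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Pull in the shared templates (CREATE TABLE / INSERT helpers)
       so they are visible when this stylesheet is loaded. -->
  <xsl:include href="xtk:tools_<mydb>.xsl"/>
</xsl:stylesheet>
```

If changes to these files have no effect, one thing worth checking is whether the server caches compiled stylesheets, in which case a cache clear or restart might be needed before edits are picked up.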
Is there some other layer where these templates are stored? And what does updb do? It isn't called anywhere, but the name would imply "update database".
The connection I am trying to establish is standard ODBC, but there are some syntax differences compared to what Adobe Campaign generates.
Yeah, I could. However, the difference here is that Spark SQL doesn't support the column-list insert syntax that most "standard" SQL dialects use, e.g. INSERT INTO <tablename> (column_list) SELECT stuff FROM <tablename>, so I need a custom template to construct the query here.
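To make the syntax gap concrete, here is a sketch of the two forms (table and column names are made up for illustration):

```sql
-- Standard SQL form, as Adobe Campaign's default templates emit it.
-- Spark SQL versions lacking column-list support reject this:
INSERT INTO target_table (id, name)
SELECT id, name FROM staging_table;

-- Spark SQL-compatible form: no column list, so the SELECT must
-- project every column of target_table in its declared order:
INSERT INTO target_table
SELECT id, name FROM staging_table;
```

Because the second form depends on column order rather than column names, the custom template has to emit the SELECT projection in exactly the target table's declared column order.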