Hi, as jantzen_belliston-Adobe asked me to provide more information: I'm still having issues connecting both services (Data Warehouse and Data Feeds) to my SFTP server. What I tried is to upload the key file to /home/myuser/.ssh/authorized_keys. I've changed the permissions on the file and on the path so that the user is the owner (chown myuser:userg /home/myuser/.ssh/authorized_keys), and in the config file (sshd_config) I've changed the following: RSAAuthentication yes
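In case it helps, this is roughly how I'm double-checking the setup: as far as I know RSAAuthentication only applies to the old protocol 1 (recent OpenSSH versions drop it), so PubkeyAuthentication yes is probably the option that matters for SSH-2 keys, and sshd's default StrictModes will ignore authorized_keys if the ownership or permissions are too open. A small script like this (just a sketch; the user name and paths are the ones from my setup) is what I'm using to verify them:

```python
#!/usr/bin/env python3
"""Permission check for the key setup described above (a sketch, not a fix).

sshd with the default StrictModes ignores authorized_keys when the home
directory, ~/.ssh or the file itself is writable by group/other, so the key
can look installed and still be rejected. User and paths are from my setup.
"""
import os
import pwd
import stat

HOME = "/home/myuser"
CHECKS = {
    HOME: 0o755,                                            # at most 755
    os.path.join(HOME, ".ssh"): 0o700,                      # at most 700
    os.path.join(HOME, ".ssh", "authorized_keys"): 0o600,   # at most 600
}

for path, max_mode in CHECKS.items():
    st = os.stat(path)
    owner = pwd.getpwuid(st.st_uid).pw_name
    mode = stat.S_IMODE(st.st_mode)
    ok = owner == "myuser" and (mode & ~max_mode) == 0
    print(f"{path}: owner={owner} mode={oct(mode)} -> {'OK' if ok else 'too open / wrong owner'}")
```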
Ok, I'll check it. By the way, I'm not sure if the RSA key provided in Data Feeds offers the same level of access to Data Warehouse or if it is a completely different thing. Does the same apply to the Azure Blob storage?
In case I want to create two or more DWH requests in the future and have these reports stored in the blob storage, do I need to contact the Client Care team to configure it? Does this operation have any extra cost or limit?
If there is also the option to configure a blob storage folder, what procedure/settings do I have to follow? I mean on the side of the blob service: do I have to provide SSH keys, does the blob need to be public, or something else?
I'm not using Azure Blob storage; the service is an SFTP server that is currently working, and I can upload files to it (so at least I know we can access it with SSH-2). I also want to know why the service is sending notifications about failed reports even a few days later, when they are not scheduled and are supposed to be sent immediately. On the other hand, is it possible for Adobe to access the SFTP service using the user/password credentials provided, or is it mandatory to use a key (Adob...
Thank you for your quick reply. I've tested the request also writing sftp:220.127.116.11, but it fails as well. If I try to connect with a tool like FileZilla or WinSCP, it works perfectly and I can upload files to the machine. Is there any configuration I should try, such as connecting to the service on another port like 21? Could it be that the path is not correctly set? --> /load
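For reference, this is roughly how I'm testing the endpoint outside FileZilla/WinSCP (just a sketch with paramiko; the host, user, key file and the /load folder are placeholders from my setup):

```python
#!/usr/bin/env python3
"""Sanity-check the SFTP endpoint: port reachable, login works, /load writable."""
import paramiko

HOST = "sftp.example.com"   # placeholder: the Azure VM's hostname or IP
PORT = 22                   # SFTP runs over SSH (22); port 21 is plain FTP
USER = "myuser"
KEY_FILE = "/home/myuser/.ssh/id_rsa"  # or pass password=... instead

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, port=PORT, username=USER, key_filename=KEY_FILE, timeout=15)

sftp = client.open_sftp()
sftp.chdir("/load")                              # fails loudly if the path doesn't exist
f = sftp.open("dwh_connectivity_test.txt", "w")  # confirm the folder is writable
f.write("connectivity test\n")
f.close()
print("Connected, /load exists and is writable")

sftp.close()
client.close()
```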
Hi, I'm trying to send a basic report using Data Warehouse to an FTP server (basically a server in Azure) that receives files from other sources. I included the list of IPs in the firewall, but the report seems to fail. Error description: "Request failed to send. Please check send parameters and destination connectivity." I've ensured that the login parameters are right...
Hi, I'm new in the community. My problem is: I've tried to perform two large extractions from different machines over the same VRS. I'm using different project credentials (to be clear, let's say I have project1 --> machine 1 and project2 --> machine 2). I've tried to launch the two queries over different dimensions at the same time, but I can see from the logs that performance drops and everything runs dead slow (only 5 requests per minute). I'm using the 2.0 API to extract the data with the Python API.
Hi ishans52004352, I've tried to run two different queries from two different machines (with the same credentials), but I've noticed some sort of performance decrease. I'm sure that I'm not reaching the query limit (given the retrieval speed of the API). Is there any configuration that allows me to run the two extractions at the highest throughput, so that the performance of neither extraction is compromised?
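For context, this is roughly the shape of the loop each machine runs against the 2.0 /reports endpoint (a simplified sketch: the company id, API key, token and payload are placeholders, and the 429/Retry-After back-off is my assumption about how the throttle answers when both machines hit it at once):

```python
#!/usr/bin/env python3
"""Simplified per-machine extraction loop against the Analytics 2.0 /reports API."""
import time
import requests

COMPANY_ID = "mycompany"   # global company id (placeholder)
API_KEY = "my-client-id"   # placeholder
TOKEN = "eyJ..."           # access token (placeholder)

URL = f"https://analytics.adobe.io/api/{COMPANY_ID}/reports"
HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "x-api-key": API_KEY,
    "x-proxy-global-company-id": COMPANY_ID,
    "Content-Type": "application/json",
}

def run_report(payload, max_retries=5):
    """POST one /reports query (payload = the usual rsid/dimension/metrics body),
    backing off whenever the throttle answers 429."""
    for attempt in range(max_retries):
        resp = requests.post(URL, headers=HEADERS, json=payload, timeout=60)
        if resp.status_code == 429:  # throttled: wait as instructed, then retry
            wait = int(resp.headers.get("Retry-After", 2 ** attempt))
            time.sleep(wait)
            continue
        resp.raise_for_status()
        return resp.json()
    raise RuntimeError("still throttled after retries")
```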