
SOLVED

Voice Search data in Adobe


Level 2

Hi all,

Is there a way that I can create a Workspace in Adobe Analytics and see the actual voice search terms from Google, Alexa, etc.?

Is there any documentation on this somewhere?

Regards,
Rasmus Aasted


3 Replies


Level 5

Yes, although it will take a bit of implementation work. You can integrate Adobe Analytics with your apps for digital assistants (Alexa Skills, Google Assistant Actions). The voice search terms would go into the intents and parameters piece. Note that these wouldn't be the specific voice terms the user spoke, but the intents and parameters that the digital assistant's algorithms interpreted from the user's request.

As an example, say a user says, "Siri, send John $20 for dinner last night from my banking app." The intent would be something like sendMoney, and there would be parameters like Who = John; Amount = 20; Why = Dinner. You could put these into different eVars/props or concatenate them together, as in the sketch below.
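To make the eVar/prop mapping concrete, here's a minimal sketch in Python. The eVar numbers and the helper name are illustrative assumptions, not Adobe's API; the actual mapping depends on how your report suite is configured.

```python
# Minimal sketch: mapping a parsed voice request to Analytics variables.
# The eVar numbers are illustrative assumptions, not a fixed Adobe API;
# the actual mapping depends on your report suite configuration.

def map_to_analytics_vars(intent: str, params: dict) -> dict:
    """Assign the intent and its parameters to hypothetical eVars."""
    return {
        "v1": intent,                    # eVar1 = intent (e.g. "sendMoney")
        "v2": params.get("Who", ""),     # eVar2 = recipient
        "v3": params.get("Amount", ""),  # eVar3 = amount
        "v4": params.get("Why", ""),     # eVar4 = reason
    }

# The example from this thread:
hit_vars = map_to_analytics_vars(
    "sendMoney", {"Who": "John", "Amount": "20", "Why": "Dinner"}
)
print(hit_vars)  # {'v1': 'sendMoney', 'v2': 'John', 'v3': '20', 'v4': 'Dinner'}
```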

Once it's implemented and users have installed your voice app, you'll see data populate in Analysis Workspace.

Analytics for Digital Assistants


Level 2

Hi Nikitarama,

I'm not sure I understand correctly. So there is no built-in capability in Adobe Analytics to see voice search terms?
Is there any documentation from Adobe or vendors that describes in more detail how this can be implemented?


Correct answer by Nikitarama
Level 5

Out of the box? No. Currently there is no SDK or anything like that, so it will take some development work to integrate the Adobe Analytics code into your voice app. You could implement it with your own team, or have a vendor implement it for you. This link has the documentation on how to implement: Analytics for Digital Assistants

By voice search term, do you mean the exact voice request spoken by a user? That is not available, nor would it be from Adobe or any other vendor. You'd be able to collect how the digital assistant interpreted the user's voice request, but you won't be able to collect the raw voice request itself.

Let's say a user says, "Alexa, send John $20 for dinner last night from my banking app." The device (e.g. an Amazon Echo Dot) sends the voice request to the digital assistant in the cloud (e.g. Alexa). The digital assistant then parses the voice request into machine-understandable intents and details/parameters. In our example this would be something like Intent = sendMoney; Who = John; Amount = 20; Why = Dinner. The intent and the details are then sent to the voice app, which processes the request.
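To make that concrete, here's roughly what the voice app receives, modeled loosely on Alexa's IntentRequest format (exact field names vary by platform and are assumptions here). Note that the raw audio or transcript is simply not part of the payload:

```python
# Rough shape of what the digital assistant hands to the voice app after
# parsing the spoken request (modeled loosely on Alexa's IntentRequest;
# exact field names vary by platform). The raw voice request is absent.
parsed_request = {
    "type": "IntentRequest",
    "intent": {
        "name": "sendMoney",
        "slots": {
            "Who": {"name": "Who", "value": "John"},
            "Amount": {"name": "Amount", "value": "20"},
            "Why": {"name": "Why", "value": "Dinner"},
        },
    },
}

intent_name = parsed_request["intent"]["name"]
details = {k: s["value"] for k, s in parsed_request["intent"]["slots"].items()}
print(intent_name, details)  # sendMoney {'Who': 'John', 'Amount': '20', 'Why': 'Dinner'}
```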

This means that the full voice request is not available to any system or software beyond the digital assistant. You can definitely collect things like Intent = sendMoney; Who = John; Amount = 20; Why = Dinner, or put them together into something like "sendMoney 20 dollars to John for dinner" (i.e. concatenating the fields with descriptors), as in the sketch below.
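Here's a minimal sketch of that concatenation, plus a hypothetical server-side hit. The endpoint pattern follows Adobe's Data Insertion API, but the namespace, report suite ID, visitor ID, and eVar number are placeholders; see the Analytics for Digital Assistants documentation for your actual collection setup.

```python
import urllib.parse
import urllib.request

def concat_with_descriptors(intent: str, details: dict) -> str:
    """Concatenate the parsed fields with descriptors into one string."""
    return (f"{intent} {details['Amount']} dollars to "
            f"{details['Who']} for {details['Why'].lower()}")

action = concat_with_descriptors(
    "sendMoney", {"Who": "John", "Amount": "20", "Why": "Dinner"}
)
# action == "sendMoney 20 dollars to John for dinner"

# Hypothetical hit via the Data Insertion API (GET form); "mycompany",
# "myrsid", and eVar5 are placeholders for your own implementation.
query = urllib.parse.urlencode({
    "vid": "user-12345",           # visitor ID for the voice user
    "pageName": "voice:sendMoney",
    "v5": action,                  # eVar5 = concatenated voice action
})
url = f"https://mycompany.sc.omtrdc.net/b/ss/myrsid/0?{query}"
# urllib.request.urlopen(url)  # uncomment to actually send the hit
```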

As another example, if you have a music app, you'll be able to report on the songs, artists, playlists, etc. that users ask the assistant to play.