We have a concern that a malicious author could enter data that would expose the site to an XSS attack. Is there a way to sanitise author data entered via dialogs before it is persisted in CRX? We are not rendering HTML from AEM via Sightly; instead we use Sling Models and a custom Sling servlet to build a JSON view of the data in CRX, which the React.js frontend then uses to render the page. As a result, we do not get any of the contextual output escaping that Sightly would normally afford us: we simply read what is in CRX and output it. So if an author has entered a malicious string via a text field on a component's dialog, we will end up emitting it verbatim in our JSON.
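One common mitigation in this situation is to escape HTML-significant characters in stored strings before writing them into the JSON view, so that even if a malicious value reaches the frontend it renders as inert text. Below is a minimal, hedged sketch of such an escaper in plain Java; the class and method names are hypothetical, and in a real project you would more likely reuse an established encoder rather than hand-roll one.

```java
// Illustrative sketch only: escape HTML-significant characters in a stored
// string before emitting it in a JSON payload. Class/method names are
// hypothetical, not part of any AEM API.
public final class XssEscaper {

    private XssEscaper() {}

    public static String escapeHtml(String input) {
        if (input == null) {
            return "";
        }
        StringBuilder sb = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '&':  sb.append("&amp;");  break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&#39;");  break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }
}
```

With this approach the escaping happens at output time (in the Sling Model getter or the servlet), so the raw authored value stays intact in CRX and the protection applies no matter how the value got there.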
As you are not using HTL (or AEM's client-side libraries), which provide built-in XSS protection, there is no out-of-the-box escaping applied to your output. One option is to implement some sort of denylist specifying patterns or locations that are not allowed; if an author's input matches a prohibited pattern, the request is blocked before the value is persisted. This would be a custom feature, as AEM does not support it out of the box.
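The denylist idea above could be sketched as a small validation helper that is invoked before persisting dialog input (for example from a custom Sling POST filter or servlet). This is only an illustrative sketch under assumed patterns; the class name and the specific regexes are hypothetical, and denylists are inherently incomplete compared to output escaping.

```java
import java.util.List;
import java.util.regex.Pattern;

// Illustrative denylist check for authored strings. The patterns below are
// examples, not an exhaustive XSS filter: denylisting can always be bypassed
// by inputs the list does not anticipate.
public final class MarkupDenylist {

    private static final List<Pattern> DENIED = List.of(
        Pattern.compile("(?i)<\\s*script"),   // inline <script> tags
        Pattern.compile("(?i)javascript\\s*:"), // javascript: URLs
        Pattern.compile("(?i)on\\w+\\s*=")    // inline event handlers, e.g. onclick=
    );

    private MarkupDenylist() {}

    public static boolean isAllowed(String input) {
        if (input == null) {
            return true;
        }
        return DENIED.stream().noneMatch(p -> p.matcher(input).find());
    }
}
```

A filter using this helper would reject the POST (e.g. with a 400 status) when `isAllowed` returns false, so the offending value never reaches CRX.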