We are getting the following error when rendering a guide on a clustered install (2 x WAS).
It appears to happen when the HTML wrapper calls back to the LC server to pick up the .js and .swf files and the request is directed to the other instance of LC (I assume they have sticky sessions off).
When each server is hit directly, the guide renders fine.
Has anyone seen this before, and should this work with sticky sessions off?
0000003a LocalExceptio E CNTR0020E: EJB threw an unexpected (non-declared) exception during invocation of method "doSupports" on bean "BeanId(sx7tstsoss_LiveCycleES2#adobe-dscf.jar#EjbTransactionCMTAdapter, null)". Exception data: java.lang.RuntimeException: Collateral path authentication failed for "/Applications/NAB_POC/1.0/Runtime/AC_OETags:1263965319878:WIo3N9e_:vNtpiPro2Sw26Zmsiuo4LMDBuk8.:.js".
at com.adobe.livecycle.guides.services.utility.SecurePath.
Response from Mark Bartel
It should work without sticky sessions.
First I'll describe how this is supposed to work, and then talk about what the issue might be.
When the Guide is delivered to the client, there is some content that is specific to that particular delivery (primarily the data), but most of the required collateral is fixed: the Guide runtime code only changes when LiveCycle patches are applied, the Guide definition only changes when the Guide is edited, etc. So the goal in Guide delivery is to ensure that all of that fixed collateral only needs to be delivered once and will be cached by the browser. The basic idea is that the initial delivery to the client contains everything that is specific to this particular instance, and everything else is fixed collateral retrieved from the repository (which requires no state).

However, the end user probably doesn't have permissions to access the files in the repository, since the direct security on the repository is for design time. The access control at runtime is effectively that process code needs to enable access. A concrete example of this is XDP rendering: end users can't retrieve the XDP from the repository, but they can run processes that pull the XDP out of the repository, and that process might choose to deliver the XDP to the client unchanged.
So what actually happens is this:
0. When the first Guide is delivered from a server, a random key and key ID are generated for that server. This key/ID pair is placed into the clustered cache so that other servers in the cluster can see it. Note that only servers ever see these keys: clients never need to access them. There are also recycling rules for these keys (they aren't valid indefinitely), but I won't get into that.
1. When a Guide is "rendered", any references to collateral in the repository are converted into /guides URLs. These URLs are generally of the form http://server:port/guides/static/<repository path>:<etag>:<keyid>:<mac>.<extension>. The <mac> (Message Authentication Code) value is generated from the server key and the repository path (the default algorithm is HMAC-SHA1, but that can be configured).
2. When the /guides servlets receive one of these URLs, they look up the key with the <keyid> from the URL and compute a new MAC from that key and the repository path. The new MAC is then compared to the <mac> piece of the URL; if they don't match, access is denied. If they do match, the appropriate content from the repository is delivered to the client with headers set so that the browser will cache that URL indefinitely. There is no risk of stale content, because if the content changes in the repository, the <etag> piece of the URL changes.
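As a concrete illustration, the three steps above can be sketched in Python. Everything here is an assumption for illustration only: the function names are invented, a plain dict stands in for the clustered cache, and the real LiveCycle implementation is Java and not shown in this thread.

```python
import hashlib
import hmac
import secrets

# Hypothetical stand-in for the clustered cache that shares
# key/ID pairs between the servers in the cluster.
clustered_cache = {}

def generate_server_key():
    """Step 0: generate a random key and key ID, publish them to the cache."""
    key_id = secrets.token_urlsafe(8)
    key = secrets.token_bytes(32)
    clustered_cache[key_id] = key
    return key_id, key

def sign_collateral_url(server, repo_path, etag, key_id, extension):
    """Step 1: build a /guides/static URL carrying an HMAC-SHA1 of the path.

    repo_path is assumed to start with "/" (e.g. "/Applications/...").
    """
    key = clustered_cache[key_id]
    mac = hmac.new(key, repo_path.encode(), hashlib.sha1).hexdigest()
    return f"http://{server}/guides/static{repo_path}:{etag}:{key_id}:{mac}.{extension}"

def verify_collateral_url(repo_path, key_id, mac):
    """Step 2: recompute the MAC with the cached key and compare."""
    key = clustered_cache.get(key_id)
    if key is None:
        # No such key on this server: authentication cannot succeed.
        raise RuntimeError(
            f'Collateral path authentication failed for "{repo_path}"')
    expected = hmac.new(key, repo_path.encode(), hashlib.sha1).hexdigest()
    return hmac.compare_digest(expected, mac)
```

Note that the signature covers the repository path, not the session: any server holding the key can validate the URL, which is why the scheme should work without sticky sessions.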
Note that this protocol means that you can take one of these URLs and just paste it in a browser and it should work, no session required. This is by design: the goal is to prevent access to arbitrary repository content, not to provide fine-grained access control to specific pieces of repository content.
It sounds like step 2 is failing because the second server doesn't have the appropriate key, which implies the clustered cache failed to share it. I'm not sure why that would be so: are other aspects of clustering failing?
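The suspected failure mode can be illustrated with a small sketch: if cache replication breaks, the second server never sees the key/ID pair, so the key lookup fails and access is denied, which matches the "Collateral path authentication failed" error above. The two dicts and the function below are assumptions standing in for the real cluster cache, not LiveCycle code.

```python
import hashlib
import hmac
import secrets

# Two independent dicts model a cluster whose cache replication is broken.
server_a_cache = {}
server_b_cache = {}  # never receives the key/ID pair

key_id, key = secrets.token_urlsafe(8), secrets.token_bytes(32)
server_a_cache[key_id] = key  # replication to server B silently fails

repo_path = "/Applications/NAB_POC/1.0/Runtime/AC_OETags"
mac = hmac.new(key, repo_path.encode(), hashlib.sha1).hexdigest()

def verify(cache, repo_path, key_id, mac):
    k = cache.get(key_id)
    if k is None:
        # The missing key is what surfaces as
        # "Collateral path authentication failed".
        return False
    expected = hmac.new(k, repo_path.encode(), hashlib.sha1).hexdigest()
    return hmac.compare_digest(expected, mac)

print(verify(server_a_cache, repo_path, key_id, mac))  # True: the signing server
print(verify(server_b_cache, repo_path, key_id, mac))  # False: key never replicated
```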
Note that there are several other valid URL forms (in particular certain repository paths are configured to be public so that they don't require the MAC), but from the error this would seem to be what you are encountering.