I need help understanding the underlying LCDS mechanisms when a complex object hierarchy is managed in LCDS. I have a custom Assembler because I have specialized persistence requirements. My object hierarchy is basically the following:
Document is the [Managed] class. When a new Document is created, it is initialized with a Chapter; Pages and Text are created when the document is edited. I create a new instance of Document and initialize it with an instance of Chapter. On the client, I invoke the DataService using the createItem() method. My custom Assembler is called, I do the necessary persistence operation and return. Back on the client I receive the ItemReference containing the AS Document. This all works OK. I am now faced with the runtime operations when the user starts creating Chapters, Pages and entering Text.
Given that I start the editing session with a single ItemReference, I don't understand how to handle the Document sub-tree. The LCDS documentation says the purpose of the [Managed] class tag is so the entire object tree does not need to be transmitted when a property changes on a child object. It's the responsibility of the subclass to keep the remote object in sync. But I don't know the best way to go about doing this.
The [Managed] annotation makes the properties of the managed class bindable. I can add an event listener to the ItemReference to handle property changes on the Document, but what about the rest of the object tree? Do I explicitly make the properties of the child objects bindable? Do I make each parent object an event listener for its child object properties and propagate the event up the tree?
Any suggestions or patterns to make this a little more understandable would be greatly appreciated.
Hi Pa. You don't need to do all of that manual work. In your DataService object on the client, the autoCommit and autoSyncEnabled properties are set to true by default.
When autoCommit is true, each change is sent to the remote destination as soon as it is detected. You can set autoCommit to false and explicitly call commit() if desired.
When autoSyncEnabled is true, managed instances returned by fill(), getItem(), and createItem() calls listen for changes made by other clients. All clients are synced with the changes automatically.
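Put together, the two settings can be sketched in a few lines of ActionScript. This is only a sketch: the "document" destination id is hypothetical, and the fill criteria are omitted.

```actionscript
import mx.data.DataService;
import mx.collections.ArrayCollection;

// "document" is a hypothetical destination id; substitute your own.
var ds:DataService = new DataService("document");

// Defaults: autoCommit == true (each change is pushed as it happens)
// and autoSyncEnabled == true (other clients' changes are pushed to us).

// To batch changes instead, turn autoCommit off:
ds.autoCommit = false;

var docs:ArrayCollection = new ArrayCollection();
ds.fill(docs);

// ...edit the managed items locally, then push everything at once:
ds.commit();
```

With autoCommit left at its default of true, the commit() call is unnecessary; each property change goes to the server on its own.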
Here's the ASDoc on the DataService class:
Here's some doc on working with Data Management clients. It discusses the various settings for getting data and updating data:
Hope this helps.
I have made each object a Managed class with a destination / assembler for each one. I configured one to many relations for Document -> Chapter, Chapter -> Page. The create operation works as expected. When the Document assembler is called, the Chapter and Page assemblers have done their thing.
I am now working on getting Documents with a fill operation. It's working, but I am still not clear on how to sync changes across multiple clients. In an earlier small prototype, the way I found to sync across clients was for each client to do a getItem(), which returns an ItemReference. When the editing client makes a change, it does an updateItem() and the other clients get the update. It seems beyond tedious to turn around and call getItem() for each editable item so updates can be sync'd across each client. Presumably each client has to do the same thing to see any changes made by the "editing" client. There must be a better way to do this. I would think LCDS can do this automatically, but it's not obvious to me how.
Would be nice for the links to the source to be in a readme like some of the other examples. There is source for a couple of the other examples in the readme tab, but those are for single-destination apps. The config would be similar for the multi-destination app with associations.
In the lcds-samples web app that ships with LCDS, the Salesbuilder AIR app is a good example of an app with managed associations using relationship metadata (one-to-many, etc) in the data-management config file.
The Java source code for the Salesbuilder app is in the WEB-INF\src\com\salesbuilder directory of the lcds-samples web app.
The CRM app is a good example of an app that establishes the relationship between entities using the "Query Approach". The following doc describes that approach with code snippets from the CRM app:
The Java source code for the CRM app is in the WEB-INF\src\flex\samples\crm directory of the lcds-samples web app.
You can run lcds-samples locally at:
http://localhost:8400/lcds-samples (use whatever your port is)
We have also created an end-to-end sample application that demonstrates the use of LiveCycle Data Services, Model Driven Development, and Data Management.
You can also download the tutorial files here: http://www.adobe.com/go/learn_lcds31_createapp_collateral
We are in the process of creating a devnet article and some video to back it up. We are about a couple of weeks away.
Looking at Tour de Flex - Managing Associations and Lazy Loading... The source code for Product, Account, Contact, etc. would be nice, so I can see how they are set up with regard to handling the data services. I assume these are all [Managed]? Is the whole example downloadable from somewhere?
Hi Pa. Here is the Tour de Flex:
In the Tree go to:
Flex Data Access > Data Management Service
You might also find some info you need in this section of the LCDS doc:
Ah. That certainly illuminates things a little better.
I currently have a single Assembler for Document and I have hit a bit of a roadblock. I can create the full object tree: Document has a Chapter and the Chapter has a Page. But I want to edit the Page and sync any changes across multiple clients, so I need an ItemReference for the Page. I turn around and perform a getItem() on the Page, but I get an error stating the item must be [Managed]. The reason I don't have it managed is that LCDS skips any class defined as [Managed] when building the object tree (on the server) prior to calling my Assembler. I figured out that when you have a single Assembler, only the top object can have [Managed]. To get started, I was playing around with singletons and not collections of Chapters, etc. I hope that made sense.
Looking at your explanation, using multiple Assemblers and managed collections might be the way to go. When a new Chapter or Page is created and added to the collection, is the sync-ing then done automatically? I assume each class corresponding to an Assembler is [Managed]. I think this is what the documentation says.
I looked for some online examples, including Tour De LiveCycle, but didn't find any. I will look again.
Thanks again for your help.
If Hibernate cannot read/write your persistence layer (i.e., it's not a database), then you probably won't be able to deploy a model and have the server side 'just work'. You can specify the assembler class in the model annotations and we will configure a destination of that type for each entity (you can specify a custom assembler for each different entity). This may not be a road that you want to go down, as manually configuring each assembler for each association will give you more transparency and control.
But you can still use the model in FlashBuilder to generate all of your client side value objects and you may be able to use the generated service wrappers.
Note that for each association, you will need an assembler. So there is the Document assembler, the Chapter assembler and the Page assembler. Each one is responsible for managing the persistence of each item type. You would then define the <one-to-many> relationships in the destination config, along with the destination that manages that type of item:
<one-to-many property="chapters" destination="ChapterAssembler" lazy="true" paged="true" page-size="10" />
<one-to-many property="pages" destination="PageAssembler" lazy="true" paged="true" page-size="10" />
And so on down the tree. This is how the system can manage each item class. I made the associations lazy and paged, but you don't have to do this if you don't need it.
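Put together, the destination entries in the data management config might look something like the following sketch. The destination ids match the ones above, but the source class names, identity property, and surrounding structure are illustrative, not taken from Pa's app:

```xml
<!-- Sketch of data-management-config.xml entries; class names are illustrative. -->
<destination id="DocumentAssembler">
    <properties>
        <source>com.example.DocumentAssembler</source>
        <metadata>
            <identity property="id"/>
            <one-to-many property="chapters" destination="ChapterAssembler"
                         lazy="true" paged="true" page-size="10"/>
        </metadata>
    </properties>
</destination>

<destination id="ChapterAssembler">
    <properties>
        <source>com.example.ChapterAssembler</source>
        <metadata>
            <identity property="id"/>
            <one-to-many property="pages" destination="PageAssembler"
                         lazy="true" paged="true" page-size="10"/>
        </metadata>
    </properties>
</destination>

<destination id="PageAssembler">
    <properties>
        <source>com.example.PageAssembler</source>
        <metadata>
            <identity property="id"/>
        </metadata>
    </properties>
</destination>
```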
On the client, each of the managed collections (Documents, Chapters, Pages) is monitored for changes, and the appropriate create/update/delete in the assembler is performed when commit() is called. You perform a DataService.fill() on the "DocumentAssembler" destination to start things off, get back a managed collection, and just go to town, modifying the whole object tree as you see fit. Then you call DataService.commit() on the Document, and all of the nested 'stuff' that you did will be persisted through each assembler for each type of collection (documents, chapters, pages). It is pretty powerful.
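The fill-modify-commit flow described above might look like this on the client. It's a sketch under the assumption that Document, Chapter, and Page value objects exist with `chapters` and `pages` collection properties matching the association config:

```actionscript
import mx.data.DataService;
import mx.collections.ArrayCollection;

var ds:DataService = new DataService("DocumentAssembler");
ds.autoCommit = false;   // batch the whole editing session into one commit

var documents:ArrayCollection = new ArrayCollection();
ds.fill(documents);      // start with the top-level managed collection

function onEdit():void {
    var doc:Document = Document(documents.getItemAt(0));
    doc.title = "Revised title";            // tracked as an update on DocumentAssembler

    var chapter:Chapter = Chapter(doc.chapters.getItemAt(0));
    chapter.pages.addItem(new Page());      // tracked as a create on PageAssembler

    // One commit persists changes across all three assemblers.
    ds.commit();
}
```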
To help reduce the work, you can use a model to generate code once, then never generate it again. Or just define the AS value objects manually, using the generated code as a guide. The trick is to make sure the classes that hold collection properties like "chapters" and "pages" carry the [Managed] metadata.
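A hand-written value object might look like the sketch below. The package, alias, and property names are made up for illustration; in a real app they would mirror the server-side entity:

```actionscript
package com.example
{
    import mx.collections.ArrayCollection;

    [Managed]
    [RemoteClass(alias="com.example.Document")]
    public class Document
    {
        public var id:Number;
        public var title:String;

        // Filled by the "chapters" one-to-many association;
        // with lazy="true" the items are fetched on demand.
        public var chapters:ArrayCollection = new ArrayCollection();
    }
}
```

The [Managed] class-level tag makes every public property bindable and lets the Data Management service track changes, so there is no need to wire up property-change listeners by hand.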
There are plenty of two-level association examples on the DevCenter and out on the web (check out Tour De LiveCycle, for instance). You are just going down one more level.
All this being said, you can skip most of this and just have a single destination that handles Documents and takes a full object graph each time. This will be pretty 'blunt force', as the whole Document will be handed to the updateItem() function and you have to figure out how to persist it and all its sub-properties. I am not familiar with Jackrabbit, so I don't know how fine-grained your persistence is.
Anyway, let us know what you come up with!
Hmm. I am not using Model Driven. I hand-crafted everything, including my custom assembler. When I create a Document, what is returned, of course, is an ItemReference containing the created Document (the [Managed] class) along with some sub-parts. What is puzzling is how to get ItemReferences for all the sub-parts I want to change and have sync'd. I know I have to invoke createItem() when something doesn't exist, but there are a lot of getItem() calls for any client interested in changes to any one part.
Either it's just a lot of unavoidable work, or I am missing something really, really easy.
Like I said, I didn't look at the Model Driven approach. I recall something I read made me think it wasn't right for me. Perhaps it's because my persistence store is a content repository à la Jackrabbit.
Sorry, hit post too soon.
I wouldn't get too worried about the [Managed] metadata, as the Data Model code generation will apply it in the right places. Basically, it just allows the Data Management system to monitor the modifications (create/update/delete) to a collection or individual item in the way it needs to. For instance, if you have a collection of Page entities and you add several and modify others, the Data Management system keeps track of the original items and the changed properties of the updated pages. When the changes are committed to the server (DataService.commit()), this additional data is sent along with the changes so the server has access to the original item, the updated item, and the list of changed properties. Additions and removals from the collection generate createItem() and deleteItem() calls to the Assembler, which is pretty straightforward. Again, when using model driven development, the assembler (the FiberAssembler) is configured when the model gets deployed on the server.
If you are using Model Driven Development, the Fiber model for this would look something like
<entity name="Document" persistent="true">
    <id name="id" type="long"/>
    <property name="title" type="string"/>
    <property name="chapters" type="Chapter"/>
</entity>

<entity name="Chapter" persistent="true">
    <id name="id" type="long"/>
    <property name="heading" type="string"/>
    <property name="pages" type="Page"/>
</entity>

<entity name="Page" persistent="true">
    <id name="id" type="long"/>
    <property name="number" type="integer"/>
    <property name="text" type="string"/>
</entity>
This would set up the associations in the hierarchy that you want. The code generator would create the proper ActionScript classes for this, and when deploying this model to the LCDS server, a destination for each persistent entity would be created.