Hello everyone - my name is Paul McMahon and I am one of the lead
architects for AEM at Accenture (I started my AEM career at Acquity
Group). My love for the product goes back to my first implementation in
2002 on CQ 3.2 (before the days of the parsys or CRX :-)). The growth of
the community over the years has been great to watch.
My thoughts below:

Internationalization - the tool will support as many languages as you
want. If you are talking about adding additional languages for the sites
you manage in the tool, see the translation and multi-site manager
sections of the documentation.
If you mean adding additional languages to the administration tool
itself - either in the consoles or in your cust...
You wouldn't normally be deploying templates and components via OSGi -
how did your template and component scripts get deployed to your author
server? Bottom line: whatever steps you used to deploy your code to
your author server, you need to repeat on your publish server.
Have you deployed your code to the publish server? You mention using the
Eclipse archetype, so I assume you have created your own templates and
components. In addition to deploying your code to the author server, you
also have to deploy it to the publish server.
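The "repeat your author deployment on publish" advice can be sketched with the CRX Package Manager HTTP API. This is a sketch, not your exact setup: the host, credentials, and package name are placeholders, and the helper echoes the curl command for review rather than firing it at a live instance.

```shell
# Hypothetical helper: upload and install a content package on a publish
# instance via the Package Manager HTTP API (/crx/packmgr/service.jsp).
# It echoes the curl command so you can review it before running it.
deploy_package() {
  local host="$1" pkg="$2"
  # -F install=true uploads and installs the package in one call
  echo curl -u admin:admin \
    -F "file=@${pkg}" \
    -F force=true \
    -F install=true \
    "${host}/crx/packmgr/service.jsp"
}

deploy_package "http://publish-host:4503" "mysite-content-1.0.zip"
```

In practice you would script this per environment (author and each publish instance) so the same package lands everywhere, rather than installing by hand.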
Are the photos that are uploaded by this script isolated from other DAM
assets? In your example, /path/to/our/photos - are there other assets
there managed by users, or is that path just photos uploaded by your
script? If they are just photos uploaded by your script, you could
create a copy of the standard DAM asset workflow and add an activation
step at the end. This approach assumes you can create the right patterns
in your launchers to achieve this. Another option would be if there was
some ...
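The launcher-pattern idea could look something like this via the Sling POST servlet. Everything here is an assumption for illustration: the copied model name dam_update_asset_activate, the launcher node name, and the glob (which reuses the placeholder path from the question). The helper echoes the command rather than executing it.

```shell
# Hypothetical helper: create a workflow launcher that fires the copied
# model (the one with the activation step) only for nodes under the
# script's upload path. Echoes the curl command for review.
add_photo_launcher() {
  local host="$1"
  echo curl -u admin:admin \
    -F "jcr:primaryType=cq:WorkflowLauncher" \
    -F "eventType=1" \
    -F "nodetype=nt:file" \
    -F "glob=/path/to/our/photos(/.*)" \
    -F "enabled=true" \
    -F "workflow=/etc/workflow/models/dam_update_asset_activate/jcr:content/model" \
    "${host}/etc/workflow/launcher/config/photo_upload_activate"
}

add_photo_launcher "http://author-host:4502"
```

The key design point is the glob: as long as it only matches the script's upload path, user-managed assets elsewhere in the DAM keep the stock workflow.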
Almost all your storage is in the data store (binary files), not the tar
files. Are you sure someone didn't upload some really large files to
either the DAM or the Package Manager? If something in the repository is
holding onto a reference to a binary file, the data store garbage
collection isn't going to remove it. I'd take a look at what's in your
Package Manager and your DAM to see if you can find any large files.
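A quick way to do that large-file hunt is to scan the data store directory on disk. A sketch, assuming the default CRX 2 data store location (crx-quickstart/repository/repository/datastore); adjust the path and size threshold for your install.

```shell
# List data store binaries larger than the given threshold (default 100 MB).
find_large_files() {
  local dir="$1" size="${2:-+100M}"
  find "$dir" -type f -size "$size" -exec ls -lh {} \;
}

# Example invocation against the default data store path:
find_large_files "crx-quickstart/repository/repository/datastore" +100M 2>/dev/null || true
```

The data store names files by content hash, so this won't tell you the repository path - but it confirms whether oversized binaries exist before you go digging through the Package Manager and DAM.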
Have you considered that you may have optimized the repository size as
much as it's going to be optimized? How large is the repository on
disk - what's the size of the tar files and what's the size of the data
store? Why do you think the repository should be smaller?
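To answer those sizing questions, a du over the tar files and the data store gives you the split. A sketch assuming default CRX 2 paths (workspaces/ for the tar files, repository/datastore for the binaries); adjust for your layout.

```shell
# Report the on-disk size of the tar files vs. the data store.
repo_sizes() {
  local repo="${1:-crx-quickstart/repository}"
  du -sh "${repo}/workspaces" "${repo}/repository/datastore" 2>/dev/null
}

# No output unless run from the AEM install directory:
repo_sizes || true
```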
I didn't see Tar PM optimization in your list. Are you sure the tar file
optimization is running to completion? If it doesn't finish within its
scheduled window it may not be catching everything. You can start tar
file optimization from the console, and then it runs to completion. This
may take many hours - I have seen repositories take 24 hours to
completely optimize after large content loads.
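If you want to kick off a manual run outside the console, my recollection of the documented CRX 2.x mechanism is a marker file: creating an empty file named optimize.tar in the workspace directory starts optimization, and deleting it stops the run. Verify this against the CRX documentation for your version before relying on it; the workspace path below is the default and is an assumption.

```shell
# Create the optimize.tar marker to start a manual Tar PM optimization run.
start_tar_optimization() {
  local workspace_dir="${1:-crx-quickstart/repository/workspaces/crx.default}"
  touch "${workspace_dir}/optimize.tar"
}
```

Started this way, the run is not bound to the maintenance window, which is how you let a large repository grind through the full 24 hours if it needs them.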
Are you actually sure that the process is still running? It's possible
that the process either never launched or ended abnormally but left the
workflow in a stalled state. Have you checked the logs around the time
the workflow kicked off for error messages that might indicate a problem
in the process? Are you seeing log output that indicates the job is
still running? Or have you looked at a thread dump and seen the thread
running? Short of a restart I am not sure there is any way to ensure
that the thread. ...
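The log check suggested above can be scripted. A sketch, with the default error.log path as an assumption (the example call is commented out so nothing runs against a path that may not exist on your machine):

```shell
# Show recent workflow-related messages from an AEM error log.
check_workflow_errors() {
  local log="$1"
  grep -i "workflow" "$log" | tail -n 20
}

# check_workflow_errors "crx-quickstart/logs/error.log"
```

If the grep around the kickoff time shows an exception and then silence, the workflow almost certainly died and is just sitting in a stalled state rather than running.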