Hello everyone - my name is Paul McMahon and I am one of the lead
architects for AEM at Accenture (I started my AEM career at Acquity
Group). My love for the product goes back to my first implementation in
2002 on CQ 3.2 (before the days of the parsys or CRX :-)). The growth of
the community over the years has been great to watch.
My thoughts below:

Internationalization - the tool will support as many languages as you
want. If you are talking about adding additional languages for the sites
you manage in the tool, see the translation and multi-site manager
sections of the documentation.
If you mean adding additional languages to the administration tool -
either in the consoles or in your cust...
You wouldn't normally be deploying templates and components via OSGi -
how did your template and component scripts get deployed to your author
server? Bottom line: whatever steps you used to deploy your code to
your author server, you need to repeat on your publish server.
Have you deployed your code to the publish server? You mention using the
Eclipse archetype, so I assume you have created your own templates and
components. In addition to deploying your code to the author server, you
also have to deploy it to the publish server.
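If the project is a content package, the same package you installed on author can be installed on publish through the CRX Package Manager HTTP service. A minimal sketch - the host, port, credentials, and package name are placeholders for your environment:

```java
// Sketch only: builds the Package Manager service URL used to upload and
// install a content package on a publish instance. The actual upload is the
// same multipart POST you would send with curl, e.g.:
//   curl -u admin:admin -F file=@mysite-1.0.zip -F force=true -F install=true \
//        http://localhost:4503/crx/packmgr/service.jsp
public class PublishDeploy {

    /** Builds the CRX Package Manager service URL for a given host. */
    public static String serviceUrl(String publishHost) {
        return publishHost + "/crx/packmgr/service.jsp";
    }

    public static void main(String[] args) {
        // 4503 is the default publish port; adjust for your environment.
        System.out.println(serviceUrl("http://localhost:4503"));
    }
}
```

Whatever mechanism you use - Package Manager UI, curl, or your Maven build pointed at the publish port - the key point is that the deployment has to happen on every publish instance, not just author.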
Are the photos that are uploaded by this script isolated from other DAM
assets? So in your example in /path/to/our/photos - are there other
assets managed by users, or is that path just photos uploaded by your
script? If they are just photos uploaded by your script you could create
a copy of the standard DAM asset workflow and add an activation step at
the end. This approach assumes you can create the right patterns in your
launchers to achieve this. Another option would be if there was some ...
Almost all your storage is in the data store (binary files), not the tar
files. Are you sure someone didn't upload some really large files to
either the DAM or the Package Manager? If something in the repository is
holding onto a reference to a binary file, the data store garbage
collection isn't going to remove it. I'd take a look at what's in your
Package Manager and your DAM to see if you can find any large files.
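To confirm that theory you can look for oversized binaries directly in the data store on disk. A small sketch - the data store path is the default for a TarPM install and the 100 MB threshold is arbitrary; adjust both for your environment:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class LargeFiles {

    /** Lists regular files under root larger than minBytes; empty if root is missing. */
    public static List<Path> largerThan(Path root, long minBytes) {
        if (!Files.isDirectory(root)) {
            return Collections.emptyList();
        }
        try (Stream<Path> walk = Files.walk(root)) {
            return walk.filter(Files::isRegularFile)
                       .filter(p -> {
                           try {
                               return Files.size(p) > minBytes;
                           } catch (IOException e) {
                               return false; // file vanished mid-walk; skip it
                           }
                       })
                       .collect(Collectors.toList());
        } catch (IOException e) {
            return Collections.emptyList();
        }
    }

    public static void main(String[] args) {
        // Assumed default data store location for a TarPM install; adjust as needed.
        Path datastore = Paths.get("crx-quickstart/repository/repository/datastore");
        for (Path p : largerThan(datastore, 100L * 1024 * 1024)) {
            System.out.println(p);
        }
    }
}
```

Each large file found here corresponds to one large binary somewhere in the repository; once you know the size you can hunt for the matching asset in the DAM or Package Manager.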
Have you considered that you may have already optimized the repository
size as much as it's going to be optimized? How large is the repository
on disk - what's the size of the tar files and what's the size of the
data store? Why do you think the repository should be smaller?
I didn't see Tar PM optimization in your list. Are you sure the tar file
optimization is running to completion? If it doesn't finish within its
scheduled window it may not be catching everything. You can start tar
file optimization from the console, and then it runs to completion. This
may take many hours - I have seen repositories take 24 hours to
completely optimize after large content loads.
Are you actually sure that the process is still running? It's possible
that the process either never launched or ended abnormally but left the
workflow in a stalled state. Have you checked the logs around the time
the workflow kicked off for error messages that might indicate a problem
in the process? Are you seeing log output that indicates the job is
still running? Or have you looked at a thread dump and seen the thread
running? Short of a restart I am not sure there is any way to ensure
that the thread. ...
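Besides taking a dump with jstack, you can scan live thread names from inside the JVM with the standard ThreadMXBean. A sketch - the "workflow" name fragment is an assumption; check an actual dump for the thread names your version uses:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;

public class ThreadCheck {

    /** Returns true if any live thread's name contains the given fragment. */
    public static boolean hasThreadMatching(String fragment) {
        for (ThreadInfo info : ManagementFactory.getThreadMXBean()
                                                .dumpAllThreads(false, false)) {
            if (info.getThreadName().toLowerCase()
                    .contains(fragment.toLowerCase())) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // "workflow" is a guess at the thread name; inspect a real dump to confirm.
        System.out.println("workflow thread running: " + hasThreadMatching("workflow"));
    }
}
```

Note this only sees threads in the JVM it runs in, so it would have to execute inside the instance; from outside, `jstack <pid>` on the AEM process is the simpler route.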
50 concurrent users is close to the top end of the performance profile
for a single authoring instance. Which version of the software are you
on? What sort of hardware are you running on - how many CPUs and how
much memory? How is your storage connected - do you have a fully
optimized IO solution? Have you considered that you may be at the point
where you need to implement a cluster or offloading strategy? Do you
have a performance testing environment? Are you able to duplicate these ...
Have you looked not just at the bundles but also the services? Check the
following:
- Is bundle A active?
- Go to the services console - check to see if Service A is active and
running.
- Go to the services console - check to see if Service B is active. If
it isn't, why - are there unsatisfied dependencies? Try to activate the
service, then check error.log for message details.
Some potential reasons why you might be having trouble: Service A's
package isn't exported by bundle A (verify this in Felix in the package
in...
I am not sure I fully understand your use case, but it sounds to me like
what you are describing is something that falls under the Multi-Site
Manager functionality. In your use case you have the same page in two
locations, one is the master and the other is a copy of the master,
and you want to be able to keep the copy in sync with the master? If so,
you wouldn't do a traditional copy - you would look at using the Live
Copy functionality in Multi-Site Manager and then do a rollout. See ...
As another user pointed out, using the query string means you won't be
caching the results in the dispatcher. Can you cache the results in your
other layer? Or will the result of /shop/categoryA be cached in a CDN,
for example? You will want to think through the performance impact of
this model and be sure you have the capacity in your publish instances
to deal with any increased traffic. One possible approach that would be
similar but still leverage the dispatcher would be to switch from a
query string to a sel...
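For illustration, moving the category out of the query string and into a selector gives the dispatcher something it will cache, since by default it does not cache URLs that carry query strings. A minimal sketch - the paths and naming scheme are made up for the example:

```java
public class SelectorUrl {

    /**
     * Hypothetical mapping from a query-string URL to a selector-based one:
     *   /shop?category=A  ->  /shop.categoryA.html
     * The selector URL is cacheable by the dispatcher, while by default a URL
     * with a query string is passed through to the publish instance every time.
     */
    public static String toSelectorUrl(String pagePath, String category) {
        return pagePath + "." + category + ".html";
    }

    public static void main(String[] args) {
        System.out.println(toSelectorUrl("/shop", "categoryA"));
    }
}
```

Your component would then read the selector from the request instead of the query parameter; the rendering logic stays the same, only the URL shape changes.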
The short answer to your question is that I believe language copy
actually kills the live copy relationship, so no rollouts will impact
the copy. In general, language copy is intended to be used to create
language copies when you want to do something like manual translation
and you are simply creating the initial copy of the site. Language copy
does not create a relationship between source and target the way that
Live Copy does. So in the normal scenario where you are creating a
language copy of your mast...
Your first constraint shouldn't impact the proposed concept. A
replication event listener can listen for either send or receive events.
So the replication event listener would sit on the publish servers and
listen for replication received events - which are happening in your
design. A replication event listener does not require a replication
agent. Now if your security restriction is that you can't have any
outgoing network traffic from your publish servers then you would have
an issue, but turni...
There are probably a bunch of ways to handle it. The simplest way -
assuming you have stored all the information that Relay needs in the
content - would be to write a replication event listener that runs on
the publish instances, and whenever it receives something it sends the
response to Relay. That assumes that Relay will ignore all the duplicate
responses, since it will get 6 responses. You could set up the
replication event listener to only run on one publish server - but then
if that one wa...
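If Relay can't be changed to ignore duplicates itself, the dedup is simple to sketch in plain Java: key each notification on something stable (the path plus the replication time, say) and process each key only once. The class and key scheme here are hypothetical; the real listener would be wired up through the AEM replication event APIs:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class RelayDedup {

    // Keys already handled; a ConcurrentHashMap-backed set so concurrent
    // notifications arriving from several publish instances are safe.
    private final Set<String> seen = ConcurrentHashMap.newKeySet();

    /** Returns true only for the first notification with this key. */
    public boolean shouldProcess(String path, long replicationTime) {
        return seen.add(path + "@" + replicationTime);
    }

    public static void main(String[] args) {
        RelayDedup relay = new RelayDedup();
        int processed = 0;
        // Simulate six publish instances reporting the same replication event.
        for (int i = 0; i < 6; i++) {
            if (relay.shouldProcess("/content/site/page", 1700000000L)) {
                processed++;
            }
        }
        System.out.println("processed " + processed + " of 6 notifications");
    }
}
```

The trade-off is where the dedup lives: on the Relay side it survives a publish instance going down; pinning the listener to one publish server avoids duplicates entirely but creates the single point of failure described above.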
The stale status is indicative of an error state. Essentially it means
that the workflow instance is not going to be able to continue to run.
One common reason for the stale state is that the payload of the
workflow has been deleted or moved. Depending on the version you are
running, it may also indicate that the Sling events or jobs tied to the
workflow instance no longer exist. Generally, if the workflow instance
has gone stale it means that it's dead and you can't restart it, so
you need to ...
My guess is that's probably the expected behavior. I don't know this for
sure, but my guess is that there is a modification event listener that
listens for changes to the vanity URL property and updates the resource
resolver configuration whenever changes are made to the content.
Generally these kinds of listeners react quickly, but sometimes if there
are lots of events happening they can be delayed, which is probably what
happened in your case. Again, that's a total guess - someone from Adobe ...
You should always be configuring the dispatcher at the publisher level -
a dispatcher in front of publisher instances is the standard approach -
almost all the dispatcher documentation is geared towards how to
configure the dispatcher for a publisher instance. In addition to
configuring a dispatcher in front of your publishers, you can optionally
configure one in front of your author instances. This is not a
replacement for the dispatcher in front of the publishers, but an
additional step to take to improve the performa...
What would best practice be, however? I assume best practice would be to
use a JSP or ECMA script to generate a dynamic JS file. Sightly is
defined as an HTML templating system, right? So if you aren't generating
HTML, shouldn't you be using a different templating language?
The standard solution for this in CQ is to use Closed User Groups -
depending on your exact use case this may be the right approach. If you
are using the dispatcher you will have to make some changes to the
dispatcher configuration to make that work. There are some pros and cons
to how you go about doing this (and some issues with the instructions).
You can check out this prese...
Other asset types could work pretty much the same way - you'd have to
write your own servlet and figure out how to map selectors to the
page-specific renditions. Unlike with images, you would not find as many
helper APIs, since most people reference their PDFs directly these days.
Still, the underlying concept isn't any different. If you follow
/libs/foundation/components/parbase/img.GET.java as an example, you
remove all the image-specific stuff and add the code to map selectors
to renditions (and c...
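The heart of such a servlet is just the selector-to-rendition mapping; everything else (resolving the asset, streaming the rendition) comes from the Sling/DAM APIs. A pure-Java sketch of the mapping step - the selector names and table are hypothetical, though cq5dam.* is the naming pattern the standard DAM workflows use for renditions:

```java
import java.util.Map;

public class RenditionSelector {

    // Hypothetical selector-to-rendition table; the real rendition names
    // depend on what your DAM update workflow generates.
    private static final Map<String, String> RENDITIONS = Map.of(
            "thumb", "cq5dam.thumbnail.140.100.png",
            "web", "cq5dam.web.1280.1280.png");

    /** Maps a request selector to a rendition name, falling back to the original. */
    public static String renditionFor(String selector) {
        return RENDITIONS.getOrDefault(selector, "original");
    }

    public static void main(String[] args) {
        // e.g. /content/page.thumb.pdf would resolve the "thumb" rendition
        System.out.println(renditionFor("thumb"));
        System.out.println(renditionFor("unknown"));
    }
}
```

In the actual servlet you would read the selector from the Sling request, look up the rendition on the asset, and stream it with the right content type, falling back to the original binary when no rendition matches.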
If you can get the regular expression right you could probably get the
rewrites working. Personally I have always found complex rewrites in
/etc/map challenging to get working as expected. If you do get it
working you should be aware that your cache flushing may not work as
expected, since the path in the cache won't match the path in the
repository.
The best solution would be to not refer to the renditions directly. As a
general rule, the recommended best practice is to refer to DAM assets
through a servlet rather than directly. This gives you better control of
the URL and allows you better cache control in general. So for example,
consider how the image rendering servlet in parbase works. Its image
URLs look like ...
The base servlet always returns the or...
I don't believe that there are grunt plugins for the full spectrum of
tasks needed to build and deploy all aspects of an AEM project. If you
read the excellent blog post Scott pointed to from Citytech, you'll see
that they cover a lot of the front-end tasks (including integration to
running author instances). Where you'll still need to look to Maven is
the more Java-related tasks, like building and compiling an OSGi bundle,
for example. There are a lot of tasks in between those two item...
If you want a guarantee that your configuration is going to be globally
effective, you really can't use design mode to manage the configuration.
There are strategies you can take to reduce duplication across templates,
but those are workarounds - design mode isn't intended to hold global
configurations; it's intended to hold design configurations specific to
a particular location in the page or site. With a requirement like this
you'll have to create some central location in your repository wher...
If you are referring to server virtualization, there aren't any
AEM-related implementation details - the software is installed the same
on a virtualized server as it is on a physical server. The other place
in AEM where you see the term virtual is around virtual components, like
the column control component. This is a component that doesn't actually
have any code - it exists only to allow the author to drag a particular
configuration onto the page. Not a lot of documentation about that - you
need to dig...