SOLVED

Workflow to Move Nodes and their Children to another Path [AEM 6.5]


Level 7

Hi Team,
I have a requirement to create a workflow that, when executed on a given payload, first unpublishes the node and its child nodes, and then moves the node and its children to another path while retaining all node properties.

For example, if I run the workflow on the path /content/project/en-us/test, it should unpublish the node (including its child nodes) and then move those nodes (including the children) to a specific path, i.e. /content/project-archive/en-us/test.
I have tried multiple approaches, but I have not been able to find the right way to unpublish the node and its child nodes and then move them while retaining their properties.

            String payloadPath = workflowData.getPayload().toString();
            Node payloadNode = session.getNode(payloadPath);
            String parentPath = payloadNode.getParent().getPath();
            
            // Check if the payload node is published - for a cq:Page the
            // replication status is stored on its jcr:content node
            boolean isPublished = true; // assume non-page nodes are published
            if (payloadNode.isNodeType("cq:Page") && payloadNode.hasNode("jcr:content")) {
                Node contentNode = payloadNode.getNode("jcr:content");
                isPublished = contentNode.hasProperty("cq:lastReplicationAction")
                        && "Activate".equals(contentNode.getProperty("cq:lastReplicationAction").getString());
            }
            
            if (isPublished) {
               // unpublish the node and its children
            }
            
            // Get the archive node and its session
            String archivePath = "/content/project-archive" + payloadPath.substring("/content".length());
            Node archiveNode = null;
            if (session.nodeExists(archivePath)) {
                archiveNode = session.getNode(archivePath);
            } else {
                // If the archive node doesn't exist, create it along with its parent nodes
                String[] pathParts = archivePath.split("/");
                Node currentNode = session.getRootNode();
                for (String part : pathParts) {
                    if (part.isEmpty()) {
                        continue; // skip the empty segment produced by the leading "/"
                    }
                    if (!currentNode.hasNode(part)) {
                        currentNode = currentNode.addNode(part);
                    } else {
                        currentNode = currentNode.getNode(part);
                    }
                }
                archiveNode = currentNode;
                session.save();
            }
            
            // Move the payload node and its child nodes to the archive location
            Workspace workspace = session.getWorkspace();
            workspace.move(payloadPath, archiveNode.getPath() + "/" + payloadNode.getName());
            session.save();

 

Could you please guide me on the correct and optimized approach for the above requirement?

 

Thanks

 

@arunpatidar  @kautuk_sahni  @BrianKasingli  @lukasz-m @Anudeep_Garnepudi 


3 Replies


Correct answer by
Community Advisor

Hi @tushaar_srivastava,

I have created a very simple workflow step that does what you need. It uses the Sling API instead of JCR, as this is what I prefer. I have checked the code on my local environment and it works correctly. The step:

  1. Gets the payload value as the root path
  2. Gets all child, grandchild, etc. pages, but only those that are published
  3. Unpublishes the root path and all child, grandchild, etc. pages
  4. Moves the root path and all child, grandchild, etc. pages to the archive

The code requires some changes: the logic could be divided into smaller methods, and proper logging and exception handling need to be added. It also assumes that /content/project-archive exists and that the user who runs the workflow has enough permissions to deactivate pages and run the move operation. It operates only on pages. If any of the above assumptions is not met, it will simply crash.
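
If you cannot rely on /content/project-archive already existing, one option is to create it up front. This is only a minimal sketch on top of the step below; it assumes the Sling ResourceUtil API (org.apache.sling.api.resource.Resource / ResourceUtil imports) and that sling:Folder is an acceptable node type for the archive root - adjust both to your content model:

            try {
                // creates /content/project-archive (and any missing intermediate resources)
                // if it does not exist yet; returns the existing resource otherwise
                Resource archiveRoot = ResourceUtil.getOrCreateResource(
                        resourceResolver,
                        "/content/project-archive",
                        "sling:Folder",   // type of the archive root - assumption, adjust as needed
                        "sling:Folder",   // type of intermediate resources
                        false);           // do not auto-commit; commit together with the move
            } catch (PersistenceException e) {
                // same remark as above - add proper logging/exception handling
                e.printStackTrace();
            }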

Nevertheless, I did some simple testing and it worked for the happy-path scenarios.

package com.mysite.core.workflow;

import com.adobe.granite.workflow.WorkflowException;
import com.adobe.granite.workflow.WorkflowSession;
import com.adobe.granite.workflow.exec.WorkItem;
import com.adobe.granite.workflow.exec.WorkflowProcess;
import com.adobe.granite.workflow.metadata.MetaDataMap;
import com.day.cq.commons.Filter;
import com.day.cq.replication.ReplicationActionType;
import com.day.cq.replication.ReplicationException;
import com.day.cq.replication.ReplicationStatus;
import com.day.cq.replication.Replicator;
import com.day.cq.wcm.api.Page;
import org.apache.sling.api.resource.PersistenceException;
import org.apache.sling.api.resource.ResourceResolver;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

import javax.jcr.Session;
import java.util.*;

@Component(
        service = WorkflowProcess.class,
        property = "process.label=Archive Pages"
)
public class ArchivePagesWorkflowStep implements WorkflowProcess {

    @Reference
    private Replicator replicator;

    @Override
    public void execute(WorkItem workItem, WorkflowSession workflowSession, MetaDataMap metaDataMap) throws WorkflowException {
        String payload = workItem.getWorkflowData().getPayload().toString();
        ResourceResolver resourceResolver = workflowSession.adaptTo(ResourceResolver.class);
        List<String> list = new ArrayList<String>();

        // adding root path to the list of pages that will be unpublished and archived
        list.add(payload);

        if (resourceResolver != null) {
            Page rootPage = resourceResolver.getResource(payload).adaptTo(Page.class);
            // getting all published pages - change to false in case only direct should be returned
            Iterator<Page> childPages = rootPage.listChildren(new PublishedPageFilter(), true);
            while (childPages.hasNext()) {
                Page page = childPages.next();
                list.add(page.getPath());
            }

            // converting to array as this is format accepted by replicator
            String [] paths = list.toArray(new String[list.size()]);

            // deactivating all the pages
            try {
                replicator.replicate(
                        resourceResolver.adaptTo(Session.class),
                        ReplicationActionType.DEACTIVATE, paths, null);
            } catch (ReplicationException e) {
                e.printStackTrace();
            }

            try {
                // moving pages to archive
                resourceResolver.move(payload, "/content/project-archive");
                resourceResolver.commit();
            } catch (PersistenceException e) {
                e.printStackTrace();
            }
        }
    }

    // inner class that defines simple filter that checks if specific page is published
    private class PublishedPageFilter implements Filter<Page> {

        @Override
        public boolean includes(Page page) {
            return (page != null && page.adaptTo(ReplicationStatus.class).isActivated());
        }
    }
}


Level 7

Hi @lukasz-m,
Thank you for your response, this really helped. I am facing one challenge here: if I run the workflow on, let's say, /content/project/fr-fr/test, then before moving the node it should check for and create the same node hierarchy under /content/project-archive, e.g. /content/project-archive/fr-fr/test.
Similarly, if I run the workflow on the payload /content/project/pl-pl/test, it should check whether the node exists and, if not, create the same hierarchy under /content/project-archive/pl-pl/test.

I tried using:

            // Get the archive node and its session
            String archivePath = "/content/project-archive" + payloadPath.substring("/content/project".length());
            Node archiveNode = null;
            if (session.nodeExists(archivePath)) {
                archiveNode = session.getNode(archivePath);
            } else {
                // If the archive node doesn't exist, create it along with its parent nodes
                String[] pathParts = archivePath.split("/");
                Node currentNode = session.getRootNode();
                for (String part : pathParts) {
                    if (part.isEmpty()) {
                        continue; // skip the empty segment produced by the leading "/"
                    }
                    if (!currentNode.hasNode(part)) {
                        currentNode = currentNode.addNode(part);
                    } else {
                        currentNode = currentNode.getNode(part);
                    }
                }
                archiveNode = currentNode;
                session.save();
            }


But this is not working for my node. Could you please guide me on how I should proceed for this scenario?


Community Advisor

Do you see anything in the logs, e.g. an exception? I think

workspace.move

and

resourceResolver.move

do the same thing. It might be failing because the destination might already contain a node with the same name.
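
If that is the case, one way to handle it (a rough sketch only, not tested; it reuses the payload and resourceResolver variables from the workflow step above and assumes the org.apache.sling.api.resource.ResourceUtil import and sling:Folder intermediates - adjust to your content model) is to mirror the source hierarchy under the archive root and to check the destination before moving:

            // e.g. payload = /content/project/fr-fr/test
            String relativePath = payload.substring("/content/project".length());             // /fr-fr/test
            String archiveParentPath = "/content/project-archive"
                    + relativePath.substring(0, relativePath.lastIndexOf('/'));               // /content/project-archive/fr-fr
            String destinationPath = archiveParentPath + "/" + ResourceUtil.getName(payload); // .../fr-fr/test

            try {
                // create the missing part of the hierarchy under the archive root
                ResourceUtil.getOrCreateResource(resourceResolver, archiveParentPath,
                        "sling:Folder", "sling:Folder", false); // node types are an assumption - adjust as needed

                // move only if the destination does not exist yet, otherwise move() fails
                if (resourceResolver.getResource(destinationPath) == null) {
                    resourceResolver.move(payload, archiveParentPath);
                    resourceResolver.commit();
                }
            } catch (PersistenceException e) {
                // add proper logging/exception handling here
                e.printStackTrace();
            }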