I have an AEM instance deployed on a WebSphere stack. Everything works as expected on the AEM side, but in the WAS console I see the status of all 4 EARs as down.
I have verified in the logs that all 4 EARs start successfully during startup of the JVMs.
Complete application stack details:
-AEM On-Premise 6.5
-WebSphere Base 22.214.171.124
Application servers are:
AEM is installed on the cluster1 JVM, while admin is WebSphere's default server.
Note: This is a standalone setup, not a clustered one.
If I try starting or stopping any application from the WAS console, it throws an error like the following:
I am not aware of any console running on cluster1. This makes any action from the WAS console useless, such as creating a heap dump, uninstalling an application, or monitoring logs.
Is there any config missing on the WAS side, or is there some trick involved when deploying these EARs from Configuration Manager?
Any insight is appreciated.
Usually, standalone WAS setups are accessed through the server console itself (instead of the DMGR console used in clustered ones), and that's what the error is indicating.
While setting up WAS, which profile did you select: Application server or cluster?
Which WAS IP/FQDN did you select while setting up AEM Forms using LCM?
You can try accessing the console from the cluster1 URL; you need to check which port is open, usually 9060 onwards.
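One place those ports are recorded is the node's serverindex.xml, which lists every named endpoint per server (WC_adminhost is the admin console's HTTP port). A minimal sketch, using a trimmed, hypothetical fragment of that file for illustration; on a real box it sits under `<profile_root>/config/cells/<cell>/nodes/<node>/serverindex.xml`:

```shell
# Hypothetical serverindex.xml fragment -- real values vary per profile.
cat > /tmp/serverindex.xml <<'EOF'
<specialEndpoints endPointName="WC_adminhost">
  <endPoint host="*" port="9060"/>
</specialEndpoints>
<specialEndpoints endPointName="WC_defaulthost">
  <endPoint host="*" port="9080"/>
</specialEndpoints>
EOF

# Grab the port that follows the WC_adminhost endpoint entry:
grep -A1 'WC_adminhost"' /tmp/serverindex.xml | grep -o 'port="[0-9]*"'
# -> port="9060"
```

If that endpoint only exists for the admin server and not for cluster1, it would explain why no console URL answers on cluster1.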
As this is a standalone WAS setup, there is no deployment manager involved, and I am accessing the console and the installed applications directly from the WAS admin console itself.
I am not sure which profile was selected while setting up WAS, but I think it's the Application server one, as I can see only one under the profiles:
While setting up AEM Forms, I tried both the JVMs, i.e., admin and cluster1:
Now, the curious thing is that when the EARs are deployed on admin, all 4 applications behave as expected and I can start/stop them directly from the console, but they give the mentioned error when deployed on cluster1.
The hostname provided was the direct hostname of the Unix box/server.
It seems there is no cluster1 URL for accessing the WAS console, though I have tried all the default ports, mostly 9060/9080. If there is a config file for checking whether a console is running on cluster1, either from the Unix side or from the WAS console directly, please suggest it.
The directory structure for profiles differs from the one I have in-house: /opt/IBM/WebSphere/AppServer/profiles/AppSrv01
Ideally, when you create an App server profile in WAS, you create a standalone individual App server, so I am not sure how the admin JVM is set up. Can you check the list of profiles? [was_install_dir/bin] ./manageprofiles.sh -listProfiles
I want to check where the admin JVM fits in the topology. FYI, you can check was.properties under the App server profile for the actual ports (check Administrative Console Port / WC_adminhost).
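Another file worth checking is properties/portdef.props under the profile root, which records the ports assigned when the profile was created. A sketch over a hypothetical copy of that file (the path and values are illustrative; on the real box look under `<profile_root>/properties/portdef.props`):

```shell
# Hypothetical portdef.props fragment -- real values depend on the profile.
cat > /tmp/portdef.props <<'EOF'
WC_defaulthost=9080
WC_adminhost=9060
WC_defaulthost_secure=9443
WC_adminhost_secure=9043
EOF

# Admin console ports (HTTP and HTTPS) for the profile:
grep adminhost /tmp/portdef.props
# -> WC_adminhost=9060
# -> WC_adminhost_secure=9043
```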
Hi @Pulkit_Jain_ ,
I understand the differences in directory structure between WAS setups. By default it comes under /opt/IBM/WebSphere/AppServer/profiles/AppSrv01, but given the scalability constraints Unix has under the /opt folder (sometimes, increasing/decreasing disk space for /opt may result in disk corruption, which can affect all applications installed under /opt), I have this under a separate disk volume, /app.
Yes, I have only one profile, which has been used to set up the individual App server.
The admin JVM is solely there to support the WAS admin console. Any third-party application is recommended to be deployed on cluster1 to avoid downtime and conflicts when updates are being pushed to, say, WAS itself.
When I check the serverIndex.xml file, I can get information about the various ports available to run my application, e.g. AEM via wc_defaulthost on port 30140 (excluding traffic from the HTTPS web server, which obviously eliminates the port number requirement).
But is there any config file that gives access to an admin console running specifically to manage my 4 EARs on cluster1?
Say I have a requirement to clean up the 4 EARs: is there any interface to do so, short of manually cleaning up the installedApps folder of WAS?
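One interface that can do this without the web console is wsadmin, the scripting client shipped with WAS, driving AdminApp from a Jython script. A minimal sketch, assuming a standalone profile; 'MyApp' is a placeholder application name, so substitute whatever AdminApp.list() actually prints, and adjust the profile path to the /app layout:

```shell
# Minimal Jython script for wsadmin; 'MyApp' is a placeholder name --
# substitute one of the names that AdminApp.list() prints.
cat > /tmp/uninstall_ear.py <<'EOF'
# Show every installed application, one per line:
print AdminApp.list()
# Uninstall one EAR and persist the configuration change:
AdminApp.uninstall('MyApp')
AdminConfig.save()
EOF

# Run it against the running standalone server (the SOAP connector port is
# 8880 by default; check serverindex.xml for the actual value):
# /app/IBM/WebSphere/AppServer/profiles/AppSrv01/bin/wsadmin.sh \
#     -lang jython -conntype SOAP -port 8880 -f /tmp/uninstall_ear.py
```

This removes the EAR from the configuration cleanly, rather than deleting files from installedApps by hand.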