Hi all
So far we have been using simple bash scripts to start our WebLogic servers, which run on Solaris zones. I am now in the process of setting up proper Ansible playbooks for the entire server setup and start. So far this works great, but now I have hit a wall:
I wrote a task which calls the management script for a WLS domain and starts a WebLogic admin server. (The script is essentially just a wrapper around the WebLogic scripting tools.)
```yaml
# Start admin server
- name: start admin server
  shell: managementscript startAdminServer some-domain-name chdir=/
  environment: env
  register: dump

# Print stdout
- name: check
  debug: var=dump.stdout_lines
```
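For context, the `env` value referenced by the task's `environment:` keyword is an ordinary variable holding a dictionary of environment variables. A minimal sketch of how it is defined (the PATH value is the one from the manual ssh test further below; anything beyond that would be an assumption):

```yaml
# Hypothetical definition of the `env` variable used by the task above.
# PATH matches the value used in the manual ssh statement below.
env:
  PATH: /usr/bin:/usr/sbin:/data/bin
```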
I can see that this results in `managementscript startAdminServer some-domain-name` being executed as a shell command, and according to the logs (as printed by the second task) everything works fine. The last lines of stdout read:

```
"<Jan 8, 2015 9:39:21 AM MET> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to RUNNING.> ",
"<Jan 8, 2015 9:39:21 AM MET> <Notice> <WebLogicServer> <BEA-000360> <The server started in RUNNING mode.> "
```
This is exactly what a successful admin server start looks like on the console when I do it manually…
However, if I connect to the remote host and check for running servers, the admin server we supposedly just started is nowhere to be found!
This is especially confusing, as I can simulate the Ansible behavior manually by executing

```shell
ssh root@my-remote-host "PATH=/usr/bin:/usr/sbin:/data/bin managementscript startAdminServer some-domain-name"
```

on the command line (note that the `env` variable used above sets exactly the PATH used in this ssh statement).
This goes through, shows the exact same stdout output, BUT also successfully starts the server.
- How can Ansible show me logs of a successful server start, but fail to actually start the server?
- How does the execution of my task in a playbook differ from the manually composed ssh statement?