Ansible task seems to start WebLogic but doesn't

Hi all
So far we have been using simple bash scripts to facilitate starting our WebLogic servers, which run on Solaris zones. I am now in the process of trying to set up proper Ansible playbooks for the entire server setup and start. So far this works great, but now I hit a wall:

I wrote a task which calls the management script for a WLS domain and starts a Weblogic admin server. (The script is essentially just a wrapper around Weblogic scripting tools.)

```yaml
# Start admin server
- name: start admin server
  shell: managementscript startAdminServer some-domain-name chdir=/
  environment: env
  register: dump

# Print stdout
- name: check
  debug: var=dump.stdout_lines
```

I can see that this results in `managementscript startAdminServer some-domain-name` being executed as a shell command, and according to the logs (as printed by the second task) everything works fine. The last lines of stdout are:

"<Jan 8, 2015 9:39:21 AM MET> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to RUNNING.> ", "<Jan 8, 2015 9:39:21 AM MET> <Notice> <WebLogicServer> <BEA-000360> <The server started in RUNNING mode.> "

This is exactly what a successful admin server start looks like on the console when I do it manually…

However, if I connect to the remote host and check for running servers, the admin server we supposedly just started is nowhere to be found!
This is especially confusing because I can manually simulate the Ansible behavior by executing

```
ssh root@my-remote-host "PATH=/usr/bin:/usr/sbin:/data/bin managementscript startAdminServer some-domain-name"
```

… on the command line (note that the env variable used above sets exactly the PATH used in the ssh statement).
This goes through, shows the exact same stdout output, BUT also successfully starts the server.

  • How can Ansible show me logs of a successful server start, but fail to actually start the server?

  • How does the execution of my task in a playbook differ from the manually composed ssh statement?

Hi
This is a problem I am also facing. It would appear that the command runs within a process group that is killed when Ansible finishes the task. I have tried using nohup and & to run the command, which works, but because these are background processes I cannot tell whether they are successful.

Another approach is async, but I don't know how to ensure the command stays alive after the task has finished.
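
Something along these lines is what I have been experimenting with (a sketch of the fire-and-forget async pattern with poll: 0; I have not yet verified that it keeps the process alive in every case):

```yaml
# Fire-and-forget: hand the command to an async job and stop polling,
# so the task returns immediately instead of waiting for the script.
- name: start admin server in the background
  shell: managementscript startAdminServer some-domain-name
  async: 3600   # allow the job up to an hour
  poll: 0       # do not wait for it
  register: start_job
```

The job id registered in `start_job` can later be handed to the `async_status` module to find out whether the start actually succeeded.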

This is a shortfall that makes Ansible a little harder to develop for, as background tasks that would normally detach and reparent to the root process under a plain shell do not do so under Ansible.

I wish there were a more supported way to make this work.

Nicholas Irving

If these are services they should really use a service manager like init, systemd, upstart, daemontools, supervisor, monit, runit, etc. 'Manually' starting services is unreliable, pollutes their environment with yours, and makes checking them and/or stopping them unreliable as well.
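
For instance, once the start script is wrapped as a proper service, the playbook side collapses to a single task (a sketch; `weblogic-admin` is a hypothetical service name that would first have to be registered with the init system of the zone):

```yaml
- name: ensure the admin server service is running
  service: name=weblogic-admin state=started
```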

Hello,

A common problem I’ve hit a number of times. You can do an async, yes, with a massive timeout (9999999999 or such like). I did have some success with this once though:

```yaml
- name: Ensure admin service is started
  shell: >
    running=$( netstat -ant | grep -cP '7001.+LISTEN' ) ;
    [ $running -eq 0 ] && /opt/oracle/domains/domain_name/startWebLogic.sh &
  register: startup
  changed_when: startup.stdout | search('Starting WebLogic Server')
  sudo: yes
  sudo_user: "{{ weblogic_username }}"
```

Note the backgrounding of the process.
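
If you also want the play to confirm that the server actually came up, a follow-up task along these lines can block until the port answers (a sketch; it assumes the admin server listens on 7001, matching the netstat check above):

```yaml
- name: wait for the admin server port to open
  wait_for: port=7001 timeout=300
```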

–Mark

Hi,
I will have a look at how we can do this with the scripts we have developed so far, as currently a daemon service (nodemanager in this case, which does not use any daemon tools other than itself) starts and attaches to the root process.

It seems I am coding to solve an issue that does not exist when the script is run directly on the host.

Regards
Nicholas Irving

Thanks, I will try this with nodemanager, as that is all I need to be running in the background.

Regards
Nicholas Irving