How to retain the remote session until the script is fully executed

Dear Ansible Team,

I have a customized JBoss start script, "jboss-start-all.sh", which starts my domain/host controllers and the server groups. When I execute the script locally it starts the instances successfully, but when I trigger it from Ansible the command completes successfully and yet the instances do not start.

  - name: Start Jboss EAP Server
    shell: "nohup /opt/jboss-eap-6.4/bin/jboss-start-all.sh &"
    tags: start

Is it possible to keep the Ansible session active until the script has started all the instances? What I suspect is that Ansible fires the command on the remote server and then closes the session; as soon as the session is closed my script also stops, so it fails to start all my instances.

Please help fix this issue.

I think you might just need to use async - http://docs.ansible.com/ansible/playbooks_async.html to get this working.
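Something along these lines might do it (just a sketch - the async/poll values are guesses you would want to tune for your environment, and with async you can drop the nohup/& and let Ansible track the job):

  - name: Start Jboss EAP Server
    shell: /opt/jboss-eap-6.4/bin/jboss-start-all.sh
    async: 300   # give the script up to 300 seconds to finish
    poll: 10     # check on it every 10 seconds
    tags: start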

However, depending on what you are trying to achieve, it might be worth re-organising things so that Ansible starts the cluster nodes individually. This would let you do useful things like rolling restarts of your cluster members (this is very easy to do using the 'serial' directive - see http://docs.ansible.com/ansible/playbooks_delegation.html#rolling-update-batch-size).
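To illustrate, a rolling restart play could look roughly like this (the 'jboss_servers' group and the 'jboss' service name are placeholders, and it assumes the start script has been wrapped as a service, as mentioned in the next paragraph):

  - hosts: jboss_servers
    serial: 1                 # only touch one cluster member at a time
    tasks:
      - name: Restart jboss on this node
        service:
          name: jboss
          state: restarted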

It might also be worth considering wrapping the scripts that you use to start your nodes as services. That way, once a host has booted, it can bring the service (jboss) up straight away.
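For example, once the start script is wrapped as a service, the Ansible side could be as simple as something like this (the 'jboss' service name is just a placeholder):

  - name: Ensure jboss is running and starts on boot
    service:
      name: jboss
      state: started
      enabled: yes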

We use Tomcat, which we have wrapped as a service, and we use Ansible to do rolling upgrades of our Tomcat apps without downtime. Originally we just waited between nodes, but once we had figured out how to poll our apps with the uri module to check that they had finished deploying and were serving requests, we were able to do relatively smooth rolling updates. Here's the uri task we use to poll:

  - name: check every 3 seconds for 40 attempts if tomcat is up and ready to serve the healthcheck json
    uri:
      url: 'http://{{ inventory_hostname }}/application/healthcheck.json'
      return_content: yes
      timeout: 2
    delegate_to: localhost
    register: tomcat_result
    until: tomcat_result['status']|default(0) == 200
    retries: 40
    delay: 3

Hope this helps,

Jon

Since you're backgrounding the script, the shell will finish immediately. I'm not sure why this would stop your script from running.

If your script finishes when all of the instances are started, just remove the & at the end and run it in the foreground. Ansible will wait for it to return.
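In other words, something like this (your task from above, just without the nohup and &):

  - name: Start Jboss EAP Server
    shell: /opt/jboss-eap-6.4/bin/jboss-start-all.sh
    tags: start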

Thanks, team!

I have resolved the issue by adding "sleep 200" at the end of my startup script, so Ansible waits 200 seconds before terminating the SSH session. I can see my JBoss instances are started and running fine.

Thanks for your time and suggestions! I appreciate it.