strange issue with shell or command modules never returning

Hi everyone,

I am stumped on an issue: running a command (/sbin/service jetty start) via any combination of the ansible command line or a playbook, with either the shell or command module, never exits. If I ask ansible to start my jetty instance via any of these methods, the command does its work but ansible never returns. I am using ansible 0.4. For overall context, this work is intended as part of a web application deployment process. If anyone has any input on this I’d be much obliged.

I’ll walk through a command module and shell module example as they are easier to explain than the playbook.

Here is an example command line:

ansible test2 -D --module-name=shell -a "/sbin/service jetty start"

or

ansible test2 -D -a "/sbin/service jetty start"

Executing either of these commands will actually do the “work” of starting jetty on the remote server, but ansible never exits. Running the same command directly on the target host returns right away with an exit code of 0 (i.e. it works as expected when done manually).

The test2 group contains a single server; hosts file content below:

I’d be suspicious of the init script and something it is doing with the console (for some reason, it seems common for Java application server init scripts to misbehave... I am looking at you too, WebLogic and JBoss).

You could invoke it in async mode with a time limit; ansible should kill the service start process after that limit expires.
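A sketch of what that looks like on the command line (the 60-second limit and 2-second poll interval are arbitrary values; -B sets the async time limit and -P the poll interval):

ansible test2 -B 60 -P 2 -a "/sbin/service jetty start"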

Jonathan,

I ran into the exact same problems you are running into now on my RHEL boxes. The jetty init scripts as shipped with recent versions of Jetty are somewhat broken: they don’t present any real exit status codes that Ansible can read, since they basically hand things off to java and the JVM, which never returns status after the shell detaches. I made things work by rewriting portions of the init scripts, getting them to source the Red Hat functions library script (like most daemons on RHEL) and compiling the daemonize wrapper, which launches the java process, detaches, and properly provides exit status to the system and Ansible. If you want, I could share my init scripts after I sanitize them; you could pull down the daemonize source and compile it into an RPM. You can get it at http://software.clapper.org/daemonize/
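The start stanza ends up looking roughly like this (a sanitized sketch, not my exact script; the jetty user, paths, and log locations are illustrative):

#!/bin/sh
# excerpt: rewritten start() using the RHEL functions library and daemonize
. /etc/rc.d/init.d/functions

start() {
    echo -n "Starting jetty: "
    # daemonize forks, detaches from the console, writes a pidfile,
    # and returns a real exit status for the system (and ansible) to read
    daemonize -u jetty -p /var/run/jetty.pid -l /var/lock/subsys/jetty \
        -o /var/log/jetty/stdout.log -e /var/log/jetty/stderr.log \
        /usr/bin/java -jar /opt/jetty/start.jar
    RETVAL=$?
    [ $RETVAL -eq 0 ] && success || failure
    echo
    return $RETVAL
}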

I tried the async mode Michael suggested when I had the same problems, but it caused more issues at the time, since it always fired off and reported success even if jetty bailed somewhere down the line. If you are on RHEL or a derivative, let me know if you’d like the help. Even if you aren’t, you could probably tweak the scripts and make them work. The key piece for me was the daemonize wrapper. Thanks!

Dave

Hi Michael and Dave,

So as far as I can tell, our init scripts (from jetty-hightide-server-8.1.3) were working properly in terms of providing exit codes. Actually stopping all the java processes spawned by jetty is another matter entirely. That said, I’d be happy to hoover up a sanitized copy of your init scripts if you are willing, but it’s low priority.

In terms of the “not exiting” issue, I’ve refactored my approach. Restarting jetty was one step in a long list of handlers for an RPM installation (i.e. restart the app on yum update), so I’ve moved the somewhat complicated logic (take the machine out of the load balancer, stop/start, test the app / pre-populate caches, re-attach to the LB) into a script that ships in the package being updated. That single script is now the sole handler for the yum play, roughly as sketched below.
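Something like this (a sketch in the action: syntax of that era; the package name foo-app is hypothetical, the script path is the real one):

tasks:
  - name: update the application package
    action: yum name=foo-app state=latest
    notify:
      - restart application

handlers:
  - name: restart application
    action: command /opt/foo/restart_application.pl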

After moving the jetty start into a separate script, I was still running into the issue of ansible not exiting (with either command or shell as the module), until a co-worker told me that what I was facing had something to do with ssh waiting for output.

I changed my command line from:

/opt/foo/restart_application.pl

to

/opt/foo/restart_application.pl >& /dev/null

and that got things working.

I went back to the init script to see if the same fix would work …

ansible test2 -m command -a "/sbin/service jetty start >& /dev/null"

and it didn’t… so I’m really not sure what the story is, but I suspect this same issue could bite others relying on ssh for transport.

cheers,
Jonathan

You will want the 'shell' module if you are going to be doing shell things like redirection. The command module does not use the shell, so the ">& /dev/null" in your last example is handed to the service command as literal arguments rather than being interpreted as a redirect.
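In other words (a sketch; note that ">&" is a bash-ism, so the first form assumes /bin/sh is bash on the target):

ansible test2 -m shell -a "/sbin/service jetty start >& /dev/null"
ansible test2 -m command -a "/sbin/service jetty start >& /dev/null"

With the first, the redirect is interpreted by a shell on the remote host; with the second, ">&" and "/dev/null" are just extra arguments passed to the service command.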

I’m still having very similar issues using pm2, a Node.js process manager installed from npm. Tasks like

shell: pm2 start pm2-start.json chdir=~/ executable=/bin/bash

will correctly run the script but won’t move on afterwards. Adding >& /dev/null lets ansible correctly move forward, but it is not ideal.
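For reference, the workaround looks something like this (a sketch; it assumes the redirect can sit in the free-form command ahead of the chdir= and executable= arguments, which ansible parses out separately):

shell: pm2 start pm2-start.json >& /dev/null chdir=~/ executable=/bin/bash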

Hi James,

I can definitely see how that would be confusing.

I’m not familiar with pm2, but does it normally return when you run it from the shell?

It may be that it is not daemonizing properly, from what you said. The redirect may be a good option; the other might be to just launch it in async mode in Ansible (fire and forget) and not poll, something like the sketch below.
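An untested sketch (the 45-second async timeout is an arbitrary value; poll: 0 means fire and forget, so ansible moves on without waiting):

- name: start pm2 without waiting on it
  shell: pm2 start pm2-start.json chdir=~/ executable=/bin/bash
  async: 45
  poll: 0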

It does return when run from the shell; the module’s purpose is to run and maintain node servers in the background. I’ll give the async option a go. Thanks.