Trying to create ansible playbook to reboot servers and getting an error

After much googling I haven’t found a solution to this. I want to reboot a single server from a group of servers in my hosts file. This is being done on OSX, with ansible 1.8.

Playbook:

```
- name: restart server
  command: /sbin/reboot

- name: wait for the server to restart
  local_action:
    module: wait_for
    host: "{{ inventory_hostname }}"
    port: 22
    delay: 1
    timeout: 300
  sudo: false
```

The command being run:

ansible-playbook -l -i ~/ansible_hosts ~/ans-reboot.yml --ask-sudo-pass

Here is the result:

ERROR: command is not a legal parameter at this level in an Ansible Playbook

I have tried substituting the `command:` with `shell:`, with no improvement.

I have also looked at the following site, which seems to suggest I’m on the right track:

https://support.ansible.com/hc/en-us/articles/201958037-Reboot-a-server-and-wait-for-it-to-come-back

So any pointers and tips gratefully received.

Nick

You are writing only *tasks* here, and you have put them at the playbook level.

You need to wrap them in a playbook declaration.
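A minimal sketch of the wrapper Serge describes — the host group, remote user, and sudo settings here are assumptions, so adjust them to match your inventory (this uses the Ansible 1.8-era `sudo` keyword, not the later `become`):

```yaml
---
- hosts: all            # assumed group; use your target group or host
  sudo: true            # reboot needs root; 1.8-era syntax
  tasks:
    - name: restart server
      command: /sbin/reboot

    - name: wait for the server to restart
      local_action:
        module: wait_for
        host: "{{ inventory_hostname }}"
        port: 22
        delay: 1
        timeout: 300
      sudo: false       # the wait runs locally, no sudo needed
```

The key point is the `- hosts:` play declaration: tasks can only live inside a play's `tasks:` list, which is why bare tasks at the top level produce the "not a legal parameter" error.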

Thanks for the hint Serge - you put me on the right track to the solution.

As you rightly said, there was more needed in the YAML file.

Here is the working one.

You say it’s working but do you continue playbook execution after the restart?

I’m also trying to use a task to restart my servers but I would like to wait_for them to come back and continue executing other tasks.

I’ve tried all variations of the above task (changing async, poll, ignore_errors, switch command/shell modules, etc) but I always get the following error:

fatal: [server] => SSH Error: Shared connection to 1.2.3.4 closed.
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.

It seems that the connection is closed too fast and Ansible doesn’t understand what is happening.

Can anyone confirm this is the right way to reboot a server and continue playbook execution in Ansible 1.9.0.1?

```
- name: Restart server
  shell: "shutdown -r now"
  async: 0
  poll: 0
  ignore_errors: true
```

It doesn’t even reach the local_action tasks with the wait_for module. This is on CentOS 7.1 x86_64 running on a Digital Ocean VM.

Thanks,
Giovanni

That is because systemd in CentOS 7 is usually quite fast to stop the system. In my experience, modules don’t get a chance to complete and return their results (even with async) in this case. As a couple of minutes’ delay wasn’t a big deal in that particular scenario, I ended up adding a delay to the shutdown for CentOS 7, like this (replace ‘osid’ with whatever the proper Ansible variable is):

```
# note - CentOS 7 shutdown does scheduling and returns immediately;
# CentOS 5/6 shutdown blocks until the actual reboot time.
# Use a 1 min delay for CentOS 7, or else systemd stops the host
# before the task returns.

- name: Reboot system because of kernel update
  raw: /sbin/shutdown -r "{{ '1' if osid == 'centos7' else 'now' }}"
  changed_when: True

- name: Wait for system to complete reboot (5 min max / 90 sec delay)
  wait_for: host={{ ansible_default_ipv4.address }} port=22 timeout={{ 5 * 60 }} delay=90 state=started
  delegate_to: 127.0.0.1
```

Regards,
Mikhail

Thank you Mikhail, this works on Debian 8 too (same issue with reboot — the playbook stalls on server reboot).

On Monday, April 13, 2015 at 18:20:49 UTC+2, Mikhail Koshelev wrote:

try this:

```
- name: Restart server
  shell: "sleep 3; shutdown -r now"
  async: 1
  poll: 0
  ignore_errors: true
```
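If you also need the playbook to continue after the reboot, as asked earlier in the thread, the usual pairing is a local `wait_for` task immediately after the restart task. This is a sketch; the 30-second delay is an assumption to give the host time to actually go down before polling, so tune it for your machines:

```yaml
- name: Wait for the server to come back
  local_action:
    module: wait_for
    host: "{{ inventory_hostname }}"
    port: 22
    delay: 30        # assumed grace period before polling starts
    timeout: 300
    state: started
  sudo: false        # runs on the control machine, no sudo needed
```

The `sleep 3` plus `async: 1` / `poll: 0` in the restart task lets the SSH connection close cleanly before the shutdown fires, and the `wait_for` then blocks until sshd is listening again so subsequent tasks can run.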

Is there a simple way to use ‘shutdown -r 22:00’ in a playbook to schedule a reboot in the future? Or is it necessary to wrap the command in nohup and &?

Thanks!