create variable from value gathered on remote

I am trying to automate the creation of PostgreSQL services on a shared Postgres server, which is my target.

We create an .env file for each service on the target, which contains the port number used for that particular instance. Based on that I detect the largest currently reserved Postgres port.

I can detect that number with this command on that host (I am not sure whether this is the most scientific way to do it, but it works).

   grep -r 'PGPORT=' /opt/db/postgres/bin/ | cut -d: -f2 | cut -d= -f2 | tail -1

It is worth mentioning that port scanning with e.g. netstat cannot be used, because a service may not be running at the time of the play, which could return a false result.

Now I want to create a variable pg_service_port for my playbook with the next larger number detected (if the above returns 5433, the new value 5434 (5433 + 1) shall be applied for the current run).

Can somebody kindly point me to the right approach for such an endeavour? I could imagine debug may play a role in this, but I don't really have much of a clue about it.

Instead of fragile grepping for port numbers in files that someone (you?) put somewhere and then incrementing those - why not just configure this yourself in the playbook?
You then have a single source of truth.
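
For illustration, such a single source of truth could be as small as a vars file mapping each service to its port (the file location, variable name, and values below are made up):

   # group_vars/postgres.yml -- hypothetical location and names
   pg_service_ports:
     pg-service10: 5433
     pg-service11: 5434
   # the playbook then looks up the port for the service it is creating, e.g.
   # pg_service_port: "{{ pg_service_ports[pg_service_name] }}"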

Dick above gave a better answer, but just to have the original
question literally answered:

- shell: "grep -r 'PGPORT=' /opt/db/postgres/bin/ | cut -d: -f2 | cut -d= -f2 | tail -1"
  register: myport

  myport['stdout'] will have the value you want
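
To get from there to the requested pg_service_port, a follow-up set_fact with the int filter is one way to do it (a minimal sketch; changed_when is only added so the read-only check never reports a change):

   - name: detect the highest reserved PGPORT
     ansible.builtin.shell: "grep -r 'PGPORT=' /opt/db/postgres/bin/ | cut -d: -f2 | cut -d= -f2 | tail -1"
     register: myport
     changed_when: false    # read-only check, so never report a change

   - name: reserve the next free port for this run
     ansible.builtin.set_fact:
       pg_service_port: "{{ (myport.stdout | int) + 1 }}"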

For example, given the files

ssh admin@test_11 cat /tmp/.env

SMTPPORT: 5432
POPPORT: 5431
PGPORT: 5433

ssh admin@test_12 cat /tmp/.env

SMTPPORT: 4432
POPPORT: 4431
PGPORT: 4433

ssh admin@test_13 cat /tmp/.env

SMTPPORT: 3432
POPPORT: 3431
PGPORT: 3433

Fetch the files and declare the variables you need

cat pb.yml

- hosts: test_11,test_12,test_13
  vars:
    env_file: /tmp/.env
    env_dest: "{{ playbook_dir }}/env_dest"
    # fetch stores each copy under <env_dest>/<inventory_hostname>/<src path>
    my_env_file: "{{ [env_dest, inventory_hostname, env_file]|
                     join('/') }}"
    # read the fetched copy on the controller and parse it as YAML
    my_ports: "{{ lookup('file', my_env_file)|from_yaml }}"
    pg_service_port: "{{ my_ports.PGPORT + 1 }}"
  tasks:
    - fetch:
        src: "{{ env_file }}"
        dest: "{{ env_dest }}"
    - debug:
        var: pg_service_port

gives (abridged)

ok: [test_11] =>
  pg_service_port: '5434'
ok: [test_12] =>
  pg_service_port: '4434'
ok: [test_13] =>
  pg_service_port: '3434'

Thanks Vladimir ... that looks pretty sophisticated (almost out of sight-ish) ... but could be a nice challenge to even understand what is going on :slight_smile:

So far I have run into one challenge. Maybe my initial question was not sufficiently explicit about this.

I run this against a single host that has a couple of parallel Postgres services running.
The pg-somename.env files defining the port for each instance, which I want to use to detect the next free port, are located in the same folder, and the only thing I can tell about the filenames is that they follow a pg*.env naming pattern.

As far as my experiments with fetch go, it is not able to interpret such a pattern the way regular bash tools (find, cat, less, ...) do. So the task below does not locate the files I am looking for.

   - name: fetch all .env files to fetched
     ansible.builtin.fetch:
       src: /opt/db/postgres/bin/.pg*env
       dest: fetched/
       flat: true
     become: yes

Is there a trick to make this work? A colleague mentioned that using the find module to create a list for a loop preceding the fetch might be possible, but that seems to be pretty hairy as well.

Find the files first. See
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/find_module.html
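
For reference, a minimal sketch of that first step, reusing the path from the task above; hidden: true is needed because the filenames start with a dot and find skips hidden files by default (the exact pattern and the register name are assumptions):

   - name: find all .pg*.env files on the target
     ansible.builtin.find:
       paths: /opt/db/postgres/bin
       patterns: '.pg*.env'
       hidden: true         # the files start with a dot
     register: found_files
     become: yes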

I managed to get something like this from a set_fact following the find task:

TASK [creating a list with the filenames] *************************************************************
task path: /home/gwagner/repos/ansible/open_source/postgres_create_service/tasks/fetchsomething.yml:20
ok: [vm-414001-0227.step.zrz.dvz.cn-mv.de] => {
    "ansible_facts": {
        "env_files": [
            "/opt/db/postgres/bin/.pg-service10.env",
            "/opt/db/postgres/bin/.pg-service11.env",
            "/opt/db/postgres/bin/.pg-service12.env",
            "/opt/db/postgres/bin/.pg-service13.env",
            "/opt/db/postgres/bin/.pg-service14.env",
            "/opt/db/postgres/bin/.pg-service15.env",
            "/opt/db/postgres/bin/.pg-service16.env",
            "/opt/db/postgres/bin/.pg-service17.env",
            "/opt/db/postgres/bin/.pg-service18.env"
        ]
    },
    "changed": false
}
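
For reference, a set_fact along these lines would produce that list, assuming the find result was registered as found_files (the name used in the follow-up below):

   - name: creating a list with the filenames
     ansible.builtin.set_fact:
       env_files: "{{ found_files.files | map(attribute='path') | list }}"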

You're so close. You're passing in a list as a single item. Instead of

   loop:
     - "{{ found_files.files | map(attribute='path') | map('basename') | list }}"

do this:

   loop: "{{ found_files.files | map(attribute='path') | map('basename') | list }}"

great, that works. So one part of the puzzle is solved, thx

This is how I managed it in the end. Probably the solution suggested by @Vladimir Botka is more scientific, but I could not incorporate the find operation required with the rest of the suggestion.