Deploy *.war to Wildfly with Ansible

Hello everyone

I have a GitLab repository and I compile it with Maven using Jenkins. I want to deploy the resulting war to Wildfly using Ansible. The Wildfly server is a Docker container on another machine, while GitLab and Jenkins are on the same virtual machine. How can I do this?

Best regards.

Hi,

how can I do this?

For starters, where do you store your artifacts? If you use a Maven repo, you could use this module to retrieve them. If you use something like Artifactory, you’d have to look for a module or custom role to achieve that. If your files are on your filesystem, just use the copy, fetch or synchronize modules, depending on where you run this task from.
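
For instance, here is a minimal sketch of what pulling a war from a Maven repo could look like with the community.general.maven_artifact module; the coordinates, repository URL and paths are placeholders to adapt, and the task runs wherever you want the war downloaded to:

- hosts: localhost        # or whichever node should receive the war
  gather_facts: false
  tasks:
    - name: Fetch war from the Maven repository (sketch)
      community.general.maven_artifact:
        group_id: com.example                 # placeholder coordinates
        artifact_id: example-app              # placeholder coordinates
        version: "1.0.0"                      # placeholder version
        extension: war
        repository_url: https://repo.example.internal/repository/releases   # placeholder URL
        dest: /tmp/example-app.war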

Your app server runs in a container, so you could either build an image including the freshly built wars, keep the wars outside and copy them into a volume / bind mount, or copy them directly into your running container with docker cp; your choice!
You might want to use a handler to restart or recreate your container after copying the wars if Wildfly needs it, though I don’t think it’s necessary (not a Java guy here).
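
As a rough sketch of the “copy to a bind mount” variant: the container name, paths and the need for a restart are all assumptions to adapt, and the handler only fires if the copy actually changed something:

- hosts: wildfly          # placeholder group
  gather_facts: false
  tasks:
    - name: Copy war to the mounted deployments directory
      ansible.builtin.copy:
        src: target/example.war                       # placeholder: your built artifact
        dest: /srv/wildfly/deployments/example.war    # placeholder: host folder bind-mounted into the container
      notify: Restart wildfly container
  handlers:
    - name: Restart wildfly container                 # only needed if Wildfly doesn't hot-deploy the war
      ansible.builtin.command: docker restart wildfly # placeholder container name

(The community.docker modules would be the more idiomatic way to handle the restart; the command above just keeps the sketch short.)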

Some caveats:

  • Pure Ansible doesn’t come with a scheduler, nor a listener you could receive webhook notifications on. Depending on what you need, you could create a cron job (see the sketch after this list) or a systemd timer, use Event-Driven Ansible as a listener, or install AWX for its API (among other things). Another decent option would be to wrap your deploy job in a Jenkins pipeline, as you’re already using it (I did exactly that for a project recently, though I’m not a Jenkins fan at all)
  • You’d probably like to log / audit your deploy jobs, so running your playbook from another tool (AWX, Jenkins, Gitlab-CI, …) that does that is a good idea. Other suggestions would be to configure a callback plugin to write to log files, a DB, …, or to use a project like ARA
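
To illustrate the cron option, a minimal sketch (schedule, paths and log file are placeholders) installing the job on whatever node runs your playbooks:

- hosts: localhost        # or whichever node should run the scheduled deploy
  gather_facts: false
  tasks:
    - name: Schedule the deploy playbook every night at 02:00 (sketch)
      ansible.builtin.cron:
        name: deploy wars to wildfly
        minute: "0"
        hour: "2"
        job: "cd /opt/deploy && ansible-playbook -i inventories/hosts deploy.yml >> /var/log/deploy-wars.log 2>&1"   # placeholder paths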

Hope this helps; don’t hesitate to ping me if you’d like some more clarification.


Hello Mister Pierre Touron

First, thanks a lot for the response, and wow! It is a very complex response for me.
My artifacts are in the Jenkins workspace on the filesystem.
Yes, I am looking at the copy, fetch and synchronize modules of Ansible, to run them in an Ansible playbook from a Jenkins freestyle project configuration.
I have 4 steps, but one of them I can’t do, so really there are 3:
-First step: copy the Git repository to the Jenkins workspace. This is all OK.
-Second step: compile with Maven. This is all OK.
-Third step: deploy the artifact with Ansible.
I am using the Jenkins assistant, but it gives me several options. I understand that one would be to execute the shell command line with the ansible program and the modules you have mentioned, and another would be to execute an Ansible playbook file; which is better?
To execute commands from the terminal I am looking at this tutorial, sorry it is in Spanish: Simplificando los despliegues en Jboss con Ansible – El array de Jota
In the tutorial it shows me how to use the copy, synchronize and fetch commands, but I don’t understand the use of the main.yml file. Is it the name of the module to use when using a playbook? Is that right?
The biggest problem is Docker; I am also not familiar with using Docker, and the idea is to keep the Wildfly container and, if possible, not touch it.
By the way, I have created an empty deployment in Wildfly but I do not see the folder it generates in the container bash. The tutorial I sent you also says that there are two possible modes, one being domain and the other standalone; from what I understand, the Wildfly container is set up as standalone. In practice, what does that mean?

Thank you very much for your help. It’s nice to find kind people who help selflessly.
I will keep on fighting…


Hey,

Hello Mister Pierre Touron

You can call me Pierre, really :stuck_out_tongue:

I can give you some guidance on this, but I’m really busy these days. I’ll try to get back to you this weekend, if not before.


Ok perfect Pierre.
This is not my only war either, hahahahahaha.

Best regards Pierre.


So, I’ll split my answer into multiple posts as it’s going to be a long one. I’m also not guaranteed to address every point in one sitting, so I might get back to you for the rest a bit later.

My artifacts are in the Jenkins workspace on the filesystem.

Ok, so we either deploy them from the same job, or get them out of the workspace (which would by default be deleted afterwards, IIRC) and then run another pipeline to deploy. I’m more used to having build pipelines separated from deploy ones, though there’s nothing wrong here.

I am using the Jenkins assistant, but it gives me several options. I understand that one would be to execute the shell command line with the ansible program and the modules you have mentioned, and another would be to execute an Ansible playbook file; which is better?

Ok, let me explain. First, Ansible has two execution ‘modes’: ad-hoc commands (the ansible binary) and playbooks (the ansible-playbook binary). Ad-hoc commands let you run a single task at a time on targets, with all task args on the command line, while playbooks are YAML files containing tasks written in a declarative fashion and executed procedurally (well, tasks are arranged in one or multiple ‘play’ blocks, though that’s of no importance for understanding the concepts).

In some cases, adhoc commands are fine, but you’ll usually want to use playbooks instead for multiple reasons I won’t expand on as this post will already be long enough :p.

Now, whether you want to use ad-hoc commands or playbooks, Jenkins lets you either use a plugin, like this one, or run your commands from a script block. I personally prefer not to be bothered with another plugin, especially since all the logic is in your playbook, but the linked plugin should work just fine if that’s more your thing.

Here is an example step (declarative pipeline) from the project I mentioned earlier; you can see how I used ansible-playbook in an inline script block:

steps {
                withCredentials ([
                    string(credentialsId: 'ansible_become_pass_zzzic-jenkins', variable: 'ANSIBLE_BECOME_PASS_ZZZICJENKINS'),
                    file(credentialsId: 'ansible_vault_pwd_ec', variable: 'ANSIBLE_VAULT_PWD_EC')]
                ) { 
                    sshagent(credentials: ['gitlab-ec_zzzIC-Jenkins', 'psgservers_zzzic-jenkins']) {
                        sh label:'Deploy PSG stack', script:'''
                            unset GIT_SSH # Because the envvar points by default to the path of a Windows binary -_-; related to the plugin config
            
                            cd ${_sourceFolder}
            
                            # Check host keys
                            for host in <redacted> $(ansible -i ./inventories/ all --list-hosts | tail -n+2 | sed -e 's/\\s\\+//g'); do
                                $(which ssh-keygen) -t rsa -F "${host}" || $(which ssh-keyscan) -t rsa "${host}" >> ~/.ssh/known_hosts
                            done
            
                            ansible --version
            
                            # Install requirements and their dependencies (roles / collections)
                            if '''+params.FORCE_INSTALL_REQS+'''; then
                                GALAXY_CMD_ARGS="${GALAXY_CMD_ARGS} --force"
                            fi
                            ansible-galaxy install -r requirements.yml ${GALAXY_CMD_ARGS:-}
            
                            # Set optional params on the ansible command
                            if [ -n "'''+params.FORCE_RESTART_SVCS.replace('\n', ',')+'''" ]; then
                                ANSIBLE_CMD_ARGS="${ANSIBLE_CMD_ARGS} --extra-vars gbt_psg_force_restart_containers=$(echo -n '''+params.FORCE_RESTART_SVCS.replace('\n', ',')+''')"
                            fi
                            if [ -n "'''+params.FORCE_RECREATE_SVCS.replace('\n', ',')+'''" ]; then
                                ANSIBLE_CMD_ARGS="${ANSIBLE_CMD_ARGS} --extra-vars gbt_psg_force_recreate_containers=$(echo -n '''+params.FORCE_RECREATE_SVCS.replace('\n', ',')+''')"
                            fi
                            if '''+params.CONTAINERS_TASK_ONLY+'''; then
                                ANSIBLE_CMD_ARGS="${ANSIBLE_CMD_ARGS} --tags t_gbt_psg_create_containers"
                            fi
                            if '''+params.DRYRUN_MODE+'''; then
                                ANSIBLE_CMD_ARGS="${ANSIBLE_CMD_ARGS} --check"
                            fi
                            if '''+params.SKIP_EXEC_DEPS+'''; then
                                ANSIBLE_CMD_ARGS="${ANSIBLE_CMD_ARGS} --extra-vars skip_deps=true"
                            fi
            
                            ansible-playbook \
                                --inventory-file ./inventories/hosts_psg_'''+params.TARGET_ENV+''' \
                                --extra-vars "ansible_become_pass=${ANSIBLE_BECOME_PASS_ZZZICJENKINS}" \
                                --vault-id "ec@${ANSIBLE_VAULT_PWD_EC}" \
                                --extra-vars "gbt_psg_backend_image_version='''+params.TAG_BACKEND+'''" \
                                --extra-vars "gbt_psg_frontend_image_version='''+params.TAG_FRONTEND+'''" \
                                ${ANSIBLE_CMD_ARGS:-} \
                                main.yml
                        '''
                    }
                }
            }

In the tutorial it shows me how to use the copy, synchronize and fetch commands, but I don’t understand the use of the main.yml file. Is it the name of the module to use when using a playbook? Is that right?

No :D. The article’s first section shows you how to 1) copy the war file with the copy module, and 2) deploy the JBoss app with the jboss module, both of them using Ansible ad-hoc commands.

The second part does the same thing using a playbook instead, but the writer did something needlessly complicated there; I’ll explain a bit below, but just know you can put both tasks (written in declarative form) in a simple YAML file (named whatever you want), inside a ‘play’ block. This file is then a playbook that you can run with the ansible-playbook command. Here is what it would look like:

---
- hosts: jboss
  remote_user: jota
  tasks:
    - name: copy package to remote server
      ansible.builtin.copy:
        src: /repository/apps/example.war
        dest: /tmp
     
    - name: deploy application
      community.general.jboss:
        deploy_path: /middleware/jboss-eap-7.1/node1/deployments
        src: /tmp/example.war
        deployment: example.war
        state: present

Now, for a quick explanation: the writer used a role, which you can see as a library containing scattered parts of a playbook (tasks, handlers, vars, …), each of them stored in a specific folder, so tasks in the ‘tasks’ folder, vars in the ‘vars’ folder (also the ‘defaults’ folder, but anyway), etc.
The tasks file is called ‘main.yml’ by convention, as roles only evaluate ‘main.yml’ files under their folders. The same goes for every folder (except the ‘files’ and ‘templates’ ones, which act as stores).
There are no playbooks in a role, only parts of one. And roles can’t function on their own; they need to be called from a playbook (here ‘deploy-example.yml’, which is not in the role, but calls (aka reuses) the role named ‘common’).

Anyway, roles are super useful, though they only bring extra complexity in this case, so I suggest you don’t bother with them. I hope my explanations are clear enough.

Stay tuned for the next post !


The biggest problem is Docker; I am also not familiar with using Docker

First, a few questions you can ask yourself:

  • It seems to me you already have a Wildfly container running; do you use the official image or a custom one?
  • How do you currently manage your container (also its network and volumes)? By ‘manage’, I mostly mean ‘deploy’.
  • Does this container host other apps? It would be an antipattern, but if that’s the case, rebuilding and redeploying the image would be more complicated
  • How often do you need to redeploy your wars?
  • Do your wars have to be deployed on multiple Wildfly instances, or just this one?
  • How much downtime is acceptable during a redeployment?

Depending on your answers, some options would be less appealing; here are a few of them:

  • Rebuild your container image including your wars and push it to a registry, so you can deploy it later as needed without rebuilding → I like this one, though you then have to either rebuild the image every time you want to redeploy, or store it somewhere (including your Jenkins worker’s local image store) and manage tags and retention. In this case, you won’t need to copy wars, but you’ll have to build and redeploy an image instead
  • Don’t manage the container in any way from this pipeline, and just deploy wars as needed → Faster than rebuilding and redeploying an image in most cases, but then the Wildfly and war versions are decoupled, so you’ll have to manage this aspect elsewhere, as well as others. Also, if you need to redeploy wars later on, you’d then have to either re-run the whole build / deploy job or store them outside the Jenkins workspace (unless you keep older workspaces, though I’d argue it’s not the best way to store and manage packages)

the idea is to keep the Wildfly container and, if possible, not touch it

So likely option B then :). That means no container redeploy nor restart, ideally. I’m not sure about Wildfly, though I think you can just copy wars and jars into the ‘deployments’ folder (in the Wildfly install path) and they will be hot-deployed. If not, you’ll have to look into how to either restart / reload the Wildfly process within your container, or restart / recreate the container for your files to be deployed.

Here are a few options to copy your wars:

  • Copy the files with the copy or synchronize modules into the infamous ‘deployments’ folder, then perhaps restart / reload Wildfly (a docker cp sketch follows after this list). The destination folder might be exposed through a named volume or bind mount, in which case you copy to the host machine’s filesystem, or you can copy directly to its path inside the container using docker cp; but you’ll lose those files if you have to recreate the container and the path is not exposed as a volume (I recall wars / jars being “ingested” by Tomcat on deploy, so this might be a moot point)
  • Use the copy or synchronize module to get those files onto the remote filesystem, then the community.general.jboss module to deploy them on Wildfly, as hinted in your linked article. Honestly, I don’t know if this module does more than just copying files from A to B; perhaps it notifies or reloads Wildfly, I don’t know. The notes state it does some verification at the app level, so there’s that
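
To illustrate the docker cp variant of the first option, here is a minimal sketch; the host group, paths, container name and the deployments path inside the image are all placeholders to adapt:

- hosts: wildfly
  gather_facts: false
  tasks:
    - name: Copy war to a staging path on the Docker host
      ansible.builtin.copy:
        src: target/example.war      # placeholder: your built artifact
        dest: /tmp/example.war

    - name: Copy war into the running container's deployments folder
      ansible.builtin.command: >
        docker cp /tmp/example.war
        wildfly:/opt/jboss/wildfly/standalone/deployments/example.war
      changed_when: true             # the command module can't detect changes by itself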

Personally, I’d start simply by just copying the files over, onto a volume or directly into the container, and check that Wildfly can deploy my files, then compare with the JBoss module and see if it is useful in any way.
If you need some help on the Docker volume part, just tell me how you deploy your container and perhaps link me your Compose file or whatever other tool’s manifest you might be using, and I’ll point out what changes you can make.

I forgot to point it out earlier, but Ansible needs to be runnable on your Jenkins worker for you to use it in your pipelines. So either install it from PyPI or your distribution repos, or, if you have a container runtime (Docker, Podman, pure containerd, …) running on your worker, run Ansible in a container. You could build an Execution Environment, though I never tried it, or build a custom image yourself; here is the Dockerfile for an image I’m using, as an example: Ansible_CI Dockerfile · GitHub.

Also think about access management: users and their permissions on the remote system, as well as keys or certificates to authenticate with, but you might already have taken care of these aspects.
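
For reference, here is a minimal inventory sketch covering that part; the hostname, user and key path are placeholders:

all:
  hosts:
    wildfly:
      ansible_host: wildfly.example.internal                    # placeholder
      ansible_user: ci-deploy                                   # placeholder CI user
      ansible_ssh_private_key_file: ~/.ssh/ci-deploy_ed25519    # placeholder key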

By the way, I have created an empty deployment in Wildfly but I do not see the folder it generates in the container bash.

I don’t know for sure what you mean there, as I’m not familiar with Wildfly. I vaguely remember doing some integration on Tomcat several years back, though I’m not sure JBoss / Wildfly works exactly the same.
Could you paste here the commands and config you used, as well as the outputs, if you want me to take a look?

The tutorial I sent you also says that there are two possible modes, one being domain and the other standalone; from what I understand, the Wildfly container is set up as standalone. In practice, what does that mean?

As stated before, I don’t know much about the Java ecosystem, but a quick look online tells me that a managed domain deployment is a multi-server topology, aka a cluster, and standalone is a single-server one.
If you don’t use and don’t plan to have a Wildfly cluster, leave it at that. The article mentions this specificity because the Ansible jboss module only supports standalone deployments.

Thank you very much for your help. It’s nice to find kind people who help selflessly.

Don’t mention it ! :wink:

Please tell me if some point needs clarification, or if illustrated examples would help. I’ll wait for you to fill in the blanks and confirm the direction you want to take, and then we’ll go from there!

Have a nice Sunday !


Hello Mr Pierre
Wow! Great explanation. My level is not that high and I am only trying to deploy in the simplest way possible; for this reason I am solving problems as they appear.
I am trying to connect with Ansible, but it does so over SSH, and SSH doesn’t work with username and password; it needs a username and a key fingerprint. The problem is that the Wildfly server is in Docker on a virtual machine with SUSE 15 and I do not have control of that machine (I can’t be root). I have a rented username and password and that is how I am working, horrible. What can I say? And that SUSE 15 machine is used by many people and cannot go down, for evident reasons.
These are the errors Jenkins shows me in the final step; I hope you can help me. When it all works I will have to study a lot: Docker, YAML, Ansible, scripts, Aramaic, German, Chinese and UFO language… hahahahaha.

Have a nice day and best regards.

There is a lot to say here :smiley:

I can´t be root

That’s an important distinction here, though you might not need root if your CI user is in the local ‘docker’ group on the remote node and doesn’t need to write to root-owned parts of the filesystem.

I’m thinking you might as well copy your wars directly into your Wildfly container and avoid using volumes / mounts altogether.

From your screenshot:

  • You use the ‘command’ module to execute Ansible locally on what seems to be your Wildfly remote host, in order to copy local files to your Wildfly server. Please don’t do that :slight_smile:. I don’t know where you run your command from, but it’s becoming a bit confusing, and it will surely also induce weird behaviors, which will be harder to troubleshoot.
    I’m guessing you do that because you don’t have your war files locally, though in the end the command will be run from your Jenkins pipeline, so on the same node, in the same workspace.
    That’s still not the way to go, but it could at least be more efficient this way: ansible -i /etc/ansible/hosts.yml wildfly -m command -a 'ansible -c local localhost -m copy -a...', or better: ansible -i /etc/ansible/hosts.yml wildfly -m command -a 'mv <srcFiles> <dstFolder>/', or use a playbook like this instead:

    - hosts: wildfly
      gather_facts: false
      tasks:
        - name: Synchronize folders locally
          ansible.posix.synchronize:
            src: <srcPathOnRemoteHost>
            dest: <destPathOnRemoteHost>
          delegate_to: wildfly
    
    # Then perhaps delete the files in the src directory; the synchronize module might even have a parameter for that, I don't know
    

    And run it with: ansible-playbook -i /etc/ansible/hosts.yml -l wildfly <yourPlaybookPath>

  • I don’t understand how you’re running a second command without a command separator or operator (still in your first ansible -m command invocation). And I realize now there is a second task I haven’t put in my playbook example above. Oh well…

  • I see you can run sudo docker cp, so it seems you have sudo permission to run docker, or full root access through sudo, though you said you didn’t. No biggie, just wondering.

  • The error message about the SSH password is misleading; I think you just need to either connect to this host manually through SSH beforehand, add its host keys manually, or disable host key checking in your Ansible config altogether (not best practice, but hey!); a minimal sketch follows just below. See a post I made on another thread earlier tonight on this: Headless Ansible - #5 by ptn
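
Here is what the “relax host key checking” route could look like, scoped to that one host in your inventory rather than changed globally; the group and host names are placeholders, and accept-new is a bit safer than fully disabling the check:

wildfly:
  hosts:
    mywildflyhost:                  # placeholder
      ansible_ssh_common_args: '-o StrictHostKeyChecking=accept-new'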

I will have to study a lot: Docker, YAML, Ansible, scripts, Aramaic, German, Chinese and UFO language… hahahahaha.

I love your quirkiness haha !

It’s getting late here (and I’m getting really hungry), so please allow me a day or two (or more, depending on what’s on my plate) and I’ll get back to you with an example playbook you could adapt to your needs.

Have a nice evening !

You could try something like this:

---

- name: Deploy app on Wildfly (containerized)
  hosts: wildflyHost
  gather_facts: false

  vars:
    war_files: # This list could also be built dynamically with find module, read from file or else
      - a.war
      - b.war
      - c.war

  tasks:
    - name: Deploy war files on Wildfly
      block:
        - name: Copy war files to remote server
          ansible.builtin.copy:
            src: "/some/local/path/{{ item }}" # Adapt with your Jenkins workspace artifacts path
            dest: "/tmp/{{ item }}"
          loop: "{{ war_files }}"
          register: _copy_out

        - name: Copy war files into the Wildfly container
          when: 
            - item.changed|bool
            - item.dest is defined
          community.docker.docker_container_copy_into:
            container: wildfly_poc
            path: "{{ item.dest }}"
            container_path: "/opt/jboss/wildfly/standalone/deployments/{{ item.item }}" # Perhaps not the right path; adapt to your needs
          loop: "{{ _copy_out.results }}"

      rescue:
        - name: Continue on error
          ansible.builtin.meta: noop

      always:
        - name: Cleanup
          when: not ansible_check_mode|bool
          ansible.builtin.file:
            path: "{{ item.dest }}"
            state: absent
          loop: "{{ _copy_out.results }}"
          changed_when: false

This playbook is intended to be run from your Jenkins worker, inside your build workspace (or with access to it, if you run it from a container for instance): ansible-playbook -i <yourInventoryFile> <playbook>.

Important note: the user you run the playbook with must have permission to access the Docker API (run docker commands, see containers, etc.).
You could also add the become: true directive to either the whole play or a specific task, though:

  1. Your user must have permission to run Ansible modules through sudo (or specify another non-root user with the become_user directive)
  2. If sudo is not passwordless, you’d also have to provide the password, either with the -K flag or through other means. Since you’re running this command from your Jenkins pipeline, you should probably add the become (= sudo) password to the Jenkins credentials store and pass it as a variable to your command line. See one of my previous posts with a Jenkins pipeline stage for an example.

Note that you could also directly copy local files into a container on a remote server, but you’d have to configure the Docker socket to be remotely accessible, then define the docker_host task parameter to point to your Docker socket (or set the DOCKER_HOST envvar to the same effect). Depending on how you exposed the socket, you might also have to use other parameters (e.g. SSL params if you exposed the socket over HTTPS).
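
As a sketch, the copy task from the playbook above with a remote socket would look something like this; the SSH URL, file paths and container name are placeholders, and see the caveat just below:

- name: Copy war into the container over a remote Docker socket (sketch)
  community.docker.docker_container_copy_into:
    docker_host: "ssh://ci-deploy@wildfly.example.internal"    # placeholder remote socket
    container: wildfly_poc
    path: /tmp/example.war                                     # placeholder: local file on the machine running Ansible
    container_path: /opt/jboss/wildfly/standalone/deployments/example.war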

I tried on nodes with the Docker socket exposed through SSH, but didn’t manage to actually copy files into them (broken pipe and a weird API error). I won’t try to troubleshoot it, but I thought you might be interested to look into it at some point. Or not.

You’d probably also want to set appropriate ownership on the copied wars inside the Wildfly deployments folder. I recall wars not being ingested when using the wrong owner:group for Liferay deployments back in the day.
The docker_container_copy_into module doesn’t have params for that, so I guess the copied files are owned by the user running PID 1 in your container. Have a few tries and adapt to your needs.

I don’t think you need to do anything else on the Wildfly service; if I’m correct, your wars will be deployed as is and that’s it! A nice thing to do would be to run some tests afterwards, like polling your webapp URL, or perhaps sending a webhook notification to another service, etc.
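
For instance, a simple post-deploy check could poll the webapp URL until it answers; the URL, expected status and timings below are placeholders, and the task would go at the end of the playbook above:

- name: Wait for the webapp to answer after deployment
  ansible.builtin.uri:
    url: "http://wildfly.example.internal:8080/example/"   # placeholder URL
    status_code: 200
  register: _webapp_check
  until: _webapp_check.status == 200
  retries: 12        # placeholder: ~2 minutes with a 10s delay
  delay: 10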

Keep me posted !

Hello Mr Pierre
Thanks a lot. I am not going to do anything until next Monday. My boss told me that I have to use the Docker infrastructure that the company has implemented, so now Jenkins, GitLab, Nexus and Wildfly are Docker containers on one Linux machine.

Best regards.


Hello Mr Pierre

I have problems deploying my .war artifact to a Wildfly server, can you help me?
The compile goes OK, but when I try to deploy to Wildfly it shows me “FORBIDDEN”.

Best regards.

Hi Jose,

Can you post the whole trace ? Also the playbook you’re using. We’ll take it from here.

Hello Pierre

I am using a script, but I have a problem downloading the libraries from Maven to compile the .war file, because the Linux machine is behind a proxy and I am trying to connect to the Maven repository with no success. Can you help me?
I have modified the file /etc/systemd/system/docker.service.d/http-proxy.conf and the http_proxy and https_proxy environment variables, and nothing.

Can you help me?

Best regards.

Ah right ! You said earlier that all your CI is now running in containers. Is the proxy a new component in your setup ? I mean, if it was there before, you should be able to get it working by reapplying the same config you used before migrating to containers.

Anyway, you could probably just configure Maven to use an HTTP proxy. Here is a more guided article: https://www.baeldung.com/maven-behind-proxy, which also shows how to pass these settings through the MAVEN_OPTS envvar, which is golden for containerized apps IMO.

You could also configure either your whole container (not limited to docker run; you could also pass those envvars through the compose environment: key if you’re deploying the container this way) or the Docker daemon (so each new container would inherit this configuration) to use a proxy.

I feel some examples would help:

---
# docker-compose.yml for a simple maven container; adapt for the one you're using

version: "3.8"

services:
  maven:
    image: maven:3.9.4-eclipse-temurin-21-alpine
    environment:
      MAVEN_OPTS: "-Dhttp.proxyHost=10.10.0.100 -Dhttp.proxyPort=8080" # Example 1, for maven only
      ALL_PROXY: "http://10.10.0.100:8080" # Example 2: For the whole container
...
# /etc/docker/daemon.json (Example 3: for every container created through this Docker daemon); don't forget to restart dockerd after saving this file for these settings to take effect. Note that JSON doesn't allow comments, so keep them out of the real file.

{
  "proxies": {
    "allProxy": "http://10.10.0.100:8080"
  }
...
}

I haven’t tried any of them specifically as I’m writing this, though I’m pretty sure I had to configure Docker to use a proxy at some point.

Please tell me if you feel some clarification is needed, or if you encounter errors.

Edit: And yes, you could also configure the daemon through systemd units. You mentioned it and I didn’t cover it, for no specific reason. I usually prefer not to meddle with systemd for application-specific config, but it surely is a valid way to do it as well.

Please ensure you removed the files and vars you created in your previous attempts; I don’t think I need to explain why :wink:

Good Morning Mr Pierre
Wow, amazing answer. My level with Dockerfiles and scripting is not good; I am a hardware professional, but I am not afraid of software, and software professionals get paid better (DevOps).
I am going to show you my new error: Jenkins looks for a file called script.sh, but this file is generated by Jenkins itself, it is not mine.
Best regards and thanks again a lot.
Remember: GitLab, Jenkins and Wildfly run in Docker on the same Linux machine.

Hello Pierre
The team is making a project in Spring Boot with Maven and deploying it to Tomcat, but the client has a Wildfly server and the generated war file doesn’t deploy correctly on Wildfly (we use Visual Studio Code as the IDE to develop the project), so what changes do we have to make? Maybe changes in pom.xml, maybe changes in the properties…

Best regards again and thanks a lot.

Hey,

Yeah, you try to run a shell command: 127.0.0.1... from this stage of your pipeline; that won’t work indeed! I’m thinking you expanded the wrong variable, or your command got split over two lines, something like that.
Could you share the corresponding stage from your Jenkinsfile, so I can see what commands you’re running from the script block?

Also, how did you configure your tools to use the proxy in the end? I see you added parameters directly to the mvn command; is that it? Something else? (Just making sure you won’t have issues down the line from setting your parameters all over the place :stuck_out_tongue: ).

The team is making a project in Spring Boot with Maven and deploying it to Tomcat, but the client has a Wildfly server and the generated war file doesn’t deploy correctly on Wildfly (we use Visual Studio Code as the IDE to develop the project), so what changes do we have to make? Maybe changes in pom.xml, maybe changes in the properties…

Dude, I will need a lot more info here haha ! How do you deploy ? What errors or erroneous behavior do you get ? Where ?

Help me help you lol

Good Morning Mr Pierre
Well, I have two infrastructures. The first is a Linux machine with Docker and containers for Jenkins, GitLab and Wildfly; I think it is on ESXi, but I am not sure. The second is a virtual machine in VirtualBox with Debian 11, with GitLab, Jenkins and Wildfly installed; VirtualBox runs on a Windows 10 machine. Why two? Because my username and password are needed for everything, and Docker complicates the whole deployment, in my opinion.
In the first one, only the clone from GitLab works; the Maven build doesn’t work and neither does the deploy to Wildfly.
In the second one, the clone from GitLab and the Maven build work, but the Wildfly deployment doesn’t.
The next screenshot is the script in the Jenkins of the first infrastructure, the one in Docker. The script of the second infrastructure is very similar, without the proxy parameters for Maven, and it works.
Showing the Wildfly error will be difficult because there is a lot of information about my work in it and my colleagues are working on it, so let them work and we will see.

Thanks a lot for your help and your time.

Look again your mvn command, http.nonProxyHosts paramater value shoud be either ‘localhost’ or ‘127.0.0.1’. The pipe in there is interpreted by sh plugin, effectively running these as two commands.