OK, some updates on this, but first some information:
Domain controller: 172.16.10.6
Ansible controller: 172.16.19.1
Server that works (STS03): 172.16.19.41
Server that DOESN'T work (STS01): 172.16.1.114
Now, if I use a domain username to connect from Ansible to STS03 (the working one), everything is fine.
If I use a domain username to connect from Ansible to STS01 (the one that doesn't work), I get "server not found in Kerberos database" and "username is incorrect" errors.
Now, if I take the server that doesn't work and move it onto the same network as the working server (giving it 172.16.19.42), everything works on both servers.
As soon as it is back in the other VLAN, the domain username stops working (a local username on the machine works from anywhere, presumably because local accounts don't go through Kerberos at all).
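To rule Ansible itself out, the same Kerberos/WinRM path can be exercised straight from the controller with pywinrm, which is the library Ansible's winrm connection plugin uses underneath. This is only a sketch: the FQDNs below are made up (the post only lists IPs), and it assumes pywinrm[kerberos] is installed and that a domain ticket already exists in the controller's credential cache (kinit user@REALM).

    import winrm

    # Hypothetical FQDNs -- replace with the real DNS names of the two servers.
    for host in ("sts03.example.local", "sts01.example.local"):
        # transport="kerberos" makes pywinrm ask the DC for an HTTP/<host>
        # service ticket, the same thing the Ansible winrm plugin does.
        session = winrm.Session(
            f"http://{host}:5985/wsman",
            auth=("", ""),            # unused for kerberos; the kinit ticket cache is used
            transport="kerberos",
        )
        try:
            result = session.run_cmd("hostname")
            print(host, "OK:", result.std_out.decode().strip())
        except Exception as exc:      # e.g. "Server not found in Kerberos database"
            print(host, "FAILED:", exc)

If this standalone script fails against STS01 in exactly the same way, the problem is in the Kerberos exchange itself rather than in anything Ansible does.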
So I suspected it might be something on the DC (the firewall has ANY-to-ANY rules between all four servers: DC, Ansible controller, STS01 and STS03).
I ran Wireshark on the DC while running the WinRM connection against both servers:
When Ansible runs against the server INSIDE the same network (STS03), I see this:
172.16.10.6 172.16.19.41 TCP 66 kerberos > 55200 [SYN, ACK] Seq=0 Ack=1 Win=8192 Len=0 MSS=1460 WS=256 SACK_PERM=1
172.16.10.6 172.16.19.41 TCP 54 kerberos > 55200 [RST, ACK] Seq=1441 Ack=1419 Win=0 Len=0
So it seems the DC is exchanging Kerberos traffic directly with the destination server.
BUT if I run the same WinRM connection against the server in the other VLAN, I see this:
172.16.10.6 172.16.12.71 KRB5 176 KRB Error: KRB5KDC_ERR_S_PRINCIPAL_UNKNOWN
172.16.10.6 172.16.12.71 TCP 54 kerberos > 60772 [RST, ACK] Seq=111 Ack=1441 Win=0 Len=0
It seems that when the destination server is in another VLAN, the Kerberos principal is checked against the controller machine rather than against the destination server.
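If I understand the Kerberos flow correctly, KRB5KDC_ERR_S_PRINCIPAL_UNKNOWN means the KDC was asked for a service principal (normally HTTP/<fqdn>, built from whatever name the controller resolves for the target) that doesn't exist in AD. Assuming the Ansible controller is Linux, MIT Kerberos by default canonicalizes that name via reverse DNS (the rdns setting in krb5.conf), so a missing or wrong PTR record for the cross-VLAN subnet could make it ask for the wrong principal. Here is a quick sketch to see what each address resolves back to from the controller (the labels and IPs are the ones listed above, adjust as needed):

    import socket

    # IPs as listed in the post; whether both subnets have reverse zones is an assumption.
    targets = {
        "STS03 (works)":  "172.16.19.41",
        "STS01 (broken)": "172.16.1.114",
    }

    for label, ip in targets.items():
        try:
            # gethostbyaddr() does the same PTR lookup Kerberos relies on
            # when rdns canonicalization is enabled.
            fqdn, aliases, _ = socket.gethostbyaddr(ip)
            print(f"{label}: {ip} -> {fqdn} (aliases: {aliases})")
        except socket.herror as exc:
            print(f"{label}: {ip} has no PTR record ({exc})")

If the failing subnet has no reverse zone, or its PTR record points at some other machine, that would line up with the error only appearing when the server sits in the other VLAN.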
Could I be on to something?