michael: yes, this is regarding the ansible-redhat-openstack playbook
darragh: I’ve tried a few things. I was hoping to configure my Linux-OpenStack-Admin node with a POSTROUTING rule instead of changing the upstream router, which requires bugging the tech team.
We did try setting the upstream router to forward from the equivalent of 1.1.1.0/24 in our setup.
iptables -t nat -A POSTROUTING --source 10.0.11.74/28 --output-interface eth0 -j MASQUERADE
corporate gateway is 10.0.10.1
ip route add 10.0.11.74/28 via 10.0.10.1 dev eth0
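For reference, here is a fuller sketch of what I believe the NAT-on-the-admin-node approach needs. The sysctl line and the normalized 10.0.11.64/28 network (the /28 that contains .74) are my additions, and eth0 is assumed to be the corporate-facing interface:

```shell
# Enable kernel IP forwarding -- off by default on most installs
sysctl -w net.ipv4.ip_forward=1

# Masquerade tenant traffic out of the corporate-facing interface
# (10.0.11.64/28 is the /28 network containing 10.0.11.74)
iptables -t nat -A POSTROUTING -s 10.0.11.64/28 -o eth0 -j MASQUERADE

# Permit forwarding in both directions
iptables -A FORWARD -s 10.0.11.64/28 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
```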
We also tried having the quantum gateway forward to eth0 on Linux-OpenStack-Admin, like so:
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i qg-d39ac7f4-3f -s 10.0.11.66 -o eth0 -j ACCEPT
After a fresh install, the compute node can ping 8.8.8.8, but the admin node can’t.
The problem is the default gateway assigned to the quantum gateway interface qg-*** below:
[root@patch-test-admin ~]# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
1.1.1.0 * 255.255.255.0 U 0 0 0 qg-7586114d-c1
4.4.4.0 * 255.255.255.0 U 0 0 0 qr-4869ec23-73
10.0.8.0 * 255.255.252.0 U 0 0 0 eth0
link-local * 255.255.0.0 U 1002 0 0 eth1
link-local * 255.255.0.0 U 1003 0 0 eth0
default 1.1.1.1 0.0.0.0 UG 0 0 0 qg-7586114d-c1
default vpn.21technolog 0.0.0.0 UG 0 0 0 eth0
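One observation on the table above: there are two default routes at the same metric (0), which makes the tie-break ambiguous. If the qg-* default has to exist for quantum's sake but shouldn't win on the host, an alternative to deleting it outright might be re-adding it at a higher metric so the eth0 default is preferred (a sketch, not something I've tested here):

```shell
# Demote the quantum gateway's default route instead of removing it,
# so the eth0/vpn default (metric 0) is preferred by the kernel
ip route del default via 1.1.1.1 dev qg-7586114d-c1
ip route add default via 1.1.1.1 dev qg-7586114d-c1 metric 100
```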
This route lookup looks correct, but pinging out doesn’t work:
root@patch-test-admin:~$ ip route get 8.8.8.8
8.8.8.8 via 10.0.10.1 dev eth0 src 10.0.9.176
cache mtu 1500 advmss 1460 hoplimit 64
If I delete the default 1.1.1.1 route and then do a service network restart, I can ping out.
root@patch-test-admin:~$ ip route del default via 1.1.1.1 dev qg-7586114d-c1
root@patch-test-admin:~$ route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
10.0.11.64 * 255.255.255.240 U 0 0 0 qg-d39ac7f4-3f
10.0.8.0 * 255.255.252.0 U 0 0 0 eth0
link-local * 255.255.0.0 U 1002 0 0 eth1
link-local * 255.255.0.0 U 1003 0 0 eth0
default vpn.21technolog 0.0.0.0 UG 0 0 0 eth0
root@patch-test-admin:~$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
^C
--- 8.8.8.8 ping statistics ---
7 packets transmitted, 0 received, 100% packet loss, time 6680ms
Flushing the route cache is also not sufficient:
ip route flush cache
Still no ping.
root@patch-test-admin:~$ service network restart
root@patch-test-admin:~$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=51 time=13.6 ms
However, I think the quantum gateway is essential for the VM instances on the compute node to reach the outside.
Just to be sure I’ve got it straight, here are the group_vars/all again with a few modifications:
# This interface device should not have an IP assigned
quantum_external_interfaces: eth1
# This interface device should have an IP assigned
iface: eth0
external_subnet_cidr: 10.0.11.74/28  # our vpn hands out ips on 10.0.8.0/255.255.252.0; chose this to be a subset this time
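As a sanity check on that CIDR choice (note that 10.0.11.74/28 normalizes to the network 10.0.11.64/28), Python's ipaddress module confirms it sits inside the VPN's 10.0.8.0/22 range:

```python
import ipaddress

corp = ipaddress.ip_network("10.0.8.0/22")                  # VPN range (255.255.252.0)
ext = ipaddress.ip_network("10.0.11.74/28", strict=False)   # normalizes host bits away

print(ext)                  # 10.0.11.64/28
print(ext.subnet_of(corp))  # True
```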
Any more troubleshooting tips would be appreciated.
kesten