This is close to TL;DR but maybe someone can help. I've got a fairly sizeable hub-and-spoke network, with each of the 15 or so edges connected to the centre (our datacentre) by a series of IPsec VPNs. For legacy reasons the edges use 10.25.x.0/24 subnets, but the centre is 10.1.0.0/20.

I have a separate server connected to the centre using strongSwan (IKEv1), with a local VM-based subnet of 10.1.14.0/24. The centre and each of the edge offices need to be reachable from the VM subnet. The gateway for the VM cluster can do this, but the VMs can't. I'm bridging the VMs; they're KVM-based and managed via libvirt, if that matters. The firewall on the VM cluster gateway is Shorewall.

If I disable the IPsec routes on the VM cluster gateway, it can see its VMs. When I bring the IPsec routes up, it can no longer see its VMs on the local 10.1.14.0/24 network, but it can see the larger 10.1.0.0/20 subnet at the far end of the IPsec link, and also the edge offices on their 10.25.x.0/24 subnets. Likewise, with the link up, the centre and the edges can see the VM cluster gateway on its 10.1.14.254 address, but not the VMs themselves.

I assume this is because the routing for the 10.1.0.0/20 IPsec tunnel overrides the local 10.1.14.0/24 network. I really don't want to have to apply NETMAP (1:1 NAT) for the subnet, even supposing that's a viable solution, and I'd prefer to avoid moving the VM subnet out of the 10.1.0.0/20 range, because that would mean running up extra tunnels from the edge devices to the centre to support an additional route, and given the kit I've got, that's not necessarily as straightforward as it could be.

Is there any way around this? I've looked at the iptables rules generated by Shorewall until I'm blue in the face. The local 10.1.14.0/24 rule has "policy match dir in pol none", whereas the remainder of the 10.1.0.0/20 network has "policy match dir in pol ipsec", with corresponding outbound policy rules.
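For reference, the strongSwan end on the VM cluster gateway is set up roughly along these lines (a sketch only: the connection name and the centre's address below are placeholders, the subnets are the real ones):

    # /etc/ipsec.conf on the VM cluster gateway (sketch; conn name and
    # right= address are placeholders, subnets as described above)
    conn to-centre
        keyexchange=ikev1
        authby=secret
        left=%defaultroute
        leftsubnet=10.1.14.0/24
        right=198.51.100.1        # datacentre gateway (placeholder)
        rightsubnet=10.1.0.0/20
        auto=start

Since 10.1.14.0/24 sits inside the 10.1.0.0/20 traffic selector, I'm assuming the kernel policies installed for that conn also grab traffic from the gateway to its own VMs; dumping them should show whether that's what is happening:

    ip xfrm policy              # SPD entries installed by charon
    ip route show table 220     # strongSwan's own routing table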
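The Shorewall-generated rules I mean are along these lines (paraphrased from iptables-save output; the real chain names and targets are whatever Shorewall emits, trimmed here to the policy matches):

    # paraphrased; chain names and targets trimmed
    -A FORWARD -s 10.1.14.0/24 -m policy --dir in --pol none  ...
    -A FORWARD -s 10.1.0.0/20  -m policy --dir in --pol ipsec ...
    # plus corresponding --dir out rules

Cheers, Chris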