We have several MDS iNET radios linking remote locations. The data at these locations is sensitive and I need to keep it off the corporate network. These radios only have 256 kbps of bandwidth at best. I'm running two IPsec VPNs to backhaul traffic to the servers these sites need to talk to. The locations a couple of blocks away work just fine; however, the locations more than 3 miles out are locking up. Each location has a Cradlepoint that connects back to a Palo Alto. The Cradlepoints have an active LTE modem. Failover is set up with the iNET radio as the primary (since it's cheaper than paying for cellular data), and we'd like LTE as backup internet for these locations.
There are two VPNs, one for each subnet where the two sets of devices live.
Here's the thing: the VPNs never disconnect. After about 20 minutes, traffic stops flowing through ONE of the VPNs. I can restart the Cradlepoint and traffic flows again, or I can shut down the Ethernet interface connected to the iNET radio and data flows just fine over LTE once the VPNs are re-established.
I thought it was a routing table problem, with routes not being updated properly at failover, but then I noticed that the VPN never actually goes down, so failover is never triggered. This led me to think that IPsec is generating more overhead than the iNET radios can handle.
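As a sanity check on that theory, here's a rough sketch of the per-packet size an ESP tunnel produces. The cipher suite (AES-CBC with HMAC-SHA1-96) and the packet sizes are assumptions for illustration, not our actual config:

```python
def esp_tunnel_size(inner_packet_len, block=16, iv=16, icv=12, nat_t=False):
    """Estimate the on-the-wire size of an IPv4 ESP tunnel-mode packet.

    Assumes AES-CBC (16-byte blocks, 16-byte IV) with HMAC-SHA1-96
    (12-byte ICV); adjust for the actual cipher suite in use.
    """
    # ESP trailer adds pad-length + next-header bytes, then the whole
    # plaintext is padded up to the cipher block size.
    plaintext = inner_packet_len + 2
    padded = -(-plaintext // block) * block   # round up to block size
    outer_ip = 20                             # outer IPv4 header
    esp_hdr = 8                               # SPI + sequence number
    udp_encap = 8 if nat_t else 0             # NAT traversal adds a UDP header
    return outer_ip + udp_encap + esp_hdr + iv + padded + icv

# A small DNP3 poll: ~20-byte payload over TCP inside IPv4
inner = 20 + 20 + 20                # inner IP + TCP headers + payload = 60 bytes
wire = esp_tunnel_size(inner)
print(wire, f"({(wire - inner) / inner:.0%} overhead)")  # 120 (100% overhead)
```

So for the tiny packets DNP3 generates, the encapsulation can roughly double the bytes on the wire, though that still shouldn't saturate 256 kbps by itself.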
Traffic from the remote locations is light; we don't need much bandwidth to these sites. Only DNP3 traffic is being pushed (think SNMP, but for industrial equipment).
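For a ballpark on how light the load is, here's the math with made-up but representative numbers; the device count, poll rate, and encapsulated packet size are all assumptions, not measurements:

```python
# Rough polling-load estimate: N devices polled every P seconds, each poll
# being roughly a request + response pair of small packets, each around
# 120 bytes on the wire after IPsec encapsulation. Illustrative numbers only.
devices = 10
poll_interval_s = 5
wire_bytes_per_poll = 2 * 120                 # request + response, encapsulated
bps = devices * wire_bytes_per_poll * 8 / poll_interval_s
print(f"{bps:.0f} bps of 256000 available")   # well under the radio's capacity
```

Even with pessimistic assumptions this comes out to a few kbps, nowhere near the radio's 256 kbps.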
Also, I can reach the Cradlepoint's web GUI over the WAN and see that the VPNs are still connected.
I looked through the logs and nothing stands out. Traffic just dies: the server can't ping the devices, yet the local network at the site can still ping the server.
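In case it helps, a simple watcher like this could timestamp exactly when traffic dies after a Cradlepoint reboot (the target address is a placeholder, and it assumes Linux iputils `ping` flags):

```python
import subprocess
import time
from datetime import datetime

# Hypothetical device address behind the failing tunnel; substitute a
# real one from the affected subnet.
TARGET = "10.0.1.50"

def ping_once(host, timeout_s=2):
    """Return True if a single ICMP echo succeeds (Linux iputils flags)."""
    try:
        result = subprocess.run(
            ["ping", "-c", "1", "-W", str(timeout_s), host],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
    except FileNotFoundError:        # no ping binary available
        return False
    return result.returncode == 0

def watch(host, interval_s=10, max_checks=None):
    """Print a timestamped line every time the host changes state."""
    was_up = None
    checks = 0
    while max_checks is None or checks < max_checks:
        up = ping_once(host)
        if up != was_up:
            state = "UP" if up else "DOWN"
            print(f"{datetime.now().isoformat()} {host} is {state}")
            was_up = up
        checks += 1
        time.sleep(interval_s)

# Example: run watch(TARGET) from the server side right after a Cradlepoint
# restart to pin down exactly when the ~20-minute failure hits.
```

Correlating that timestamp against the tunnel's rekey/SA lifetime settings might show whether the failure lines up with a rekey event.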
Does IPsec add enough overhead that there's effectively a minimum bandwidth requirement?