Simply put, what the title says.

The network consists of a central location and a bunch of satellite locations around the world. The satellites connect back to the central location via IPsec VPN so we can service the production systems.

In the past these have been built on FortiGate 101 firewalls (101D for the older ones, 101E for the newer ones) together with Aruba 2930M switches, and for the most part this worked well. The only issue was that it was hard to manage at scale.

To make it more manageable we moved over to a Cisco Meraki setup: MX85s as routers and MS225s as switches. This made management a lot easier, but with some significant drawbacks:

  • ONLY cloud managed
  • At our satellite locations the bandwidth is often low or completely gone. Meraki doesn’t like this at all.
  • Our satellite locations are mostly onboard ships, and Meraki simply doesn’t handle the harsh operating environment as well as FortiGate+Aruba did.
  • Meraki doesn’t provide much information about why it is unable to connect to its cloud platform. It’s pretty adaptive and tries a lot of configurations before giving up, but in some cases it’d be nice to set it up manually according to the WAN connection available. Some sort of local diagnostics would be nice (a rough sketch of what I mean follows after this list).

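To illustrate that last point, here’s a minimal sketch (plain Python, stdlib only) of the kind of on-box check I’d want the device to run and report on itself. The hostnames and ports are placeholders I made up, not actual Meraki endpoints:

```python
#!/usr/bin/env python3
# Minimal local-diagnostics sketch: test DNS resolution and raw TCP
# reachability separately, so a failure points at a specific layer.
# Hostnames/ports below are placeholders, not real Meraki endpoints.

import socket
import time

CHECKS = [
    ("dns", "dashboard.example.com", None),  # does DNS work at all?
    ("tcp", "dashboard.example.com", 443),   # can we reach the mgmt port?
    ("tcp", "1.1.1.1", 53),                  # raw IP path, independent of DNS
]

def check_dns(host):
    try:
        return f"OK ({socket.gethostbyname(host)})"
    except OSError as e:
        return f"FAIL ({e})"

def check_tcp(host, port, timeout=5.0):
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return f"OK ({(time.monotonic() - start) * 1000:.0f} ms)"
    except OSError as e:
        return f"FAIL ({e})"

if __name__ == "__main__":
    for kind, host, port in CHECKS:
        result = check_dns(host) if kind == "dns" else check_tcp(host, port)
        print(f"{kind:3} {host}{':' + str(port) if port else ''} -> {result}")
```

Even just that much output, per WAN port, on a local status page would cover most of what we’re missing today.
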
So, any recommendations for hardware that is:

  • Cloud managed
  • Allows local configuration when cloud is unreachable
  • Durable
  • Preferably with load balancing between up to four WANs (a toy sketch of the behavior I mean is below)
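
On that last point, here’s a toy sketch of the selection logic I have in mind: probe each uplink, then spread new flows over the healthy ones by weight. Everything here (interface names, weights, probe targets) is invented for illustration, and real gear does this in the dataplane; a real probe would also have to be bound to its specific uplink interface:

```python
import random
import socket

# Hypothetical uplinks with relative weights (e.g. VSAT vs 4G vs shore line).
WANS = {
    "wan1": {"weight": 4, "probe": ("1.1.1.1", 53)},
    "wan2": {"weight": 2, "probe": ("8.8.8.8", 53)},
    "wan3": {"weight": 1, "probe": ("9.9.9.9", 53)},
    "wan4": {"weight": 1, "probe": ("1.0.0.1", 53)},
}

def is_healthy(host, port, timeout=3.0):
    # NB: purely illustrative -- a real probe must be bound to its own
    # uplink (SO_BINDTODEVICE or policy routing), otherwise every probe
    # just follows the default route.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_wan():
    """Weighted random pick among uplinks whose probe succeeds."""
    healthy = {n: c for n, c in WANS.items() if is_healthy(*c["probe"])}
    if not healthy:
        return None  # all four down: the ship scenario Meraki hates
    names = list(healthy)
    weights = [healthy[n]["weight"] for n in names]
    return random.choices(names, weights=weights, k=1)[0]

if __name__ == "__main__":
    print("next new flow egresses via:", pick_wan())
```
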
  • Your Huckleberry@lemmy.world · 1 year ago

    Palo Alto would do what you want; a PA-410 or 420 would probably be enough for your ships. They’re not at all rated for harsh conditions, but they’re about as robust as you’ll find for basic network gear. If you get a PA for the home office as well, you can use their SD-WAN to connect everything.

    For switching… how many ports do you need on each ship? I’m using UniFi industrial switches in our manufacturing plants; they stand up to the Texas summers in a highly alkaline environment. They’re only ten ports though (8 PoE).

    • vettnerk@lemmy.ml (OP) · 1 year ago

      In the same rack as the router we usually have two 48-port PoE switches in a stacking configuration. Depending on the scale of the setup, we often have the same type of switches elsewhere, trunked in via 10gig fiber. On rare occasions we have a bunch of extra fibers, for which we use Aruba 3810 switches with only SFP ports.

      We also have 100gig in use, but that’s only for a few closed-off networks between the servers, with a dedicated Mellanox switch. While those do connect to the 10gig network via a breakout cable, the 100gig bandwidth is only needed internally in the cluster.