Operations grimoire/Network

Network ranges and topology

172.27.27.0/24

Nasqueron servers are managed through Drake Network private IPs.

This /24 is divided into 16 subnets of 16 addresses each (/28).

✱ denotes a pseudo-subnet: it contains isolated bare metal servers, not linked to any private network except through tunnels, with IPs assigned as /32 (netmask 255.255.255.255 / 0xFFFFFFFF).
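As a quick check of that arithmetic, the sixteen /28 blocks can be enumerated from a shell with Python's standard ipaddress module:

  python3 -c 'import ipaddress; print(*ipaddress.ip_network("172.27.27.0/24").subnets(new_prefix=28), sep="\n")'

This prints 172.27.27.0/28, 172.27.27.16/28, 172.27.27.32/28, and so on up to 172.27.27.240/28: 16 blocks of 16 addresses each.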

172.27.27.0/28

IntraNought, VMs hosted on Dreadnought

Netmask: 255.255.255.240 / 0xFFFFFFF0

IP           | Server     | Reverse DNS                | OS         | Purpose                           | AUP
172.27.27.1  | router-001 | router-001.nasqueron.drake | FreeBSD 12 | Router                            | Infrastructure server
172.27.27.2  |            |                            |            | Reserved for DNS server           |
172.27.27.3  |            |                            |            | Reserved for mail server          |
172.27.27.4  | Dwellers   | dwellers.nasqueron.drake   | CentOS 8   | Docker development server hosting | Open for Docker images building
172.27.27.5  | Equatower  | equatower.nasqueron.drake  | CentOS 8   | Docker engine                     | Infrastructure server
172.27.27.6  | docker-001 | docker-001.nasqueron.drake | CentOS 8   | Docker engine                     | Infrastructure server
172.27.27.7  | Free       |                            |            |                                   |
...          | Free       |                            |            |                                   |
172.27.27.14 | Free       |                            |            |                                   |

172.27.27.16/28

Servers for the production service mesh, running Kubernetes.

Netmask could be:

  • if you need to target the service mesh as a whole: 255.255.255.240 / 0xFFFFFFF0
  • if you need to address a specific server: 255.255.255.255 / 0xFFFFFFFF - these servers are currently bare metal machines, not attached to any private network Ethernet card
IP           | Server      | Reverse DNS                 | OS        | Purpose    | AUP
172.27.27.28 | CloudHugger | cloudhugger.nasqueron.drake | Debian 10 | Kubernetes | Infrastructure server
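As a sketch of the difference between the two netmask choices (Linux syntax; $GW is the relevant gateway or tunnel IP, see the Troubleshoot section below):

  # one route for the whole service mesh block
  ip route add 172.27.27.16/28 via $GW
  # host route for CloudHugger only
  ip route add 172.27.27.28/32 via $GW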

172.27.27.32/28

Development and management servers. Work by humans should always be done from these servers.

Netmask could be:

  • if you need to target the servers humans use to manage the infrastructure and deploy applications: 255.255.255.240 / 0xFFFFFFF0
  • if you need to address a specific server: 255.255.255.255 / 0xFFFFFFFF - these servers are currently bare metal machines, not attached to any private network Ethernet card
IP           | Server    | Reverse DNS               | OS           | Purpose                      | AUP
172.27.27.33 | Ysul      | ysul.nasqueron.drake      | FreeBSD 12.1 | Nasqueron development server | Access for any Nasqueron or Wolfplex project
172.27.27.34 | Free      |                           |              |                              |
172.27.27.35 | WindRiver | windriver.nasqueron.drake | FreeBSD 12.1 | Nasqueron development server | Access for any Nasqueron project

172.27.27.240/28

IP range for tunnels from router-001.nasqueron.org

Netmask: 255.255.255.240 / 0xFFFFFFF0

IP            | Server     | Reverse DNS | OS | Purpose                                              | AUP
172.27.27.252 | router-001 | -           | -  | Reserved for tunnel with Ysul                        | -
172.27.27.253 | router-001 | -           | -  | Reserved for tunnel with CloudHugger                 | -
172.27.27.254 | router-001 | -           | -  | Tinc tunnel with WindRiver (and perhaps all others?) | -

DNS entries

Domain                   | IP           | Description
k8s.prod.nasqueron.drake | 172.27.27.28 | Advertise address for the k8s cluster
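This page doesn't document which software serves the .drake zone (172.27.27.2 is only reserved for a DNS server), but whatever serves it, the entry above amounts to a standard zone-file A record:

  k8s.prod.nasqueron.drake. IN A 172.27.27.28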

Other network ranges

Kubernetes clusters use the following network ranges:

Cluster name            | IP range      | DNS domain               | Use
nasqueron-k8s-prod      | 10.92.0.0/12  | k8s.prod.nasqueron.local | Kubernetes services
nasqueron-k8s-prod-pods | 10.192.0.0/12 | None                     | Pods for nasqueron-k8s-prod
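How the cluster was bootstrapped isn't documented here. Purely as an illustration, with kubeadm (an assumption, not stated on this page) these ranges would be supplied at init time:

  # hypothetical: assumes a kubeadm-bootstrapped cluster
  kubeadm init \
    --control-plane-endpoint k8s.prod.nasqueron.drake \
    --service-cidr 10.92.0.0/12 \
    --pod-network-cidr 10.192.0.0/12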

Network manual

Build the network

This private network isn't trivial to build: machines are located in different datacenter cabinets and don't share a common private physical network.

We use the following techniques to recreate those connections:

  • On a hypervisor, each VM gets a second network card with a Drake IP assigned (see the sketch after this list)
  • Tunnels over ICANNnet act as pipelines connecting parts of Drake together, using software like tinc
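For the first technique, the guest side amounts to an extra interface carrying the Drake address. A minimal sketch (eth1 and the dwellers address are illustrative; in practice the Salt role described below configures this):

  # Linux VM guest, second NIC attached to IntraNought
  ip addr add 172.27.27.4/28 dev eth1
  ip link set eth1 up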

Configure private network card

In rOPS: pillar/nodes/nodes.sls, define a private_interface block with the Drake network information for this machine.
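The authoritative schema lives in rOPS; purely as a hypothetical illustration of the idea (the key names here are invented), such a block could look like:

  # pillar/nodes/nodes.sls - hypothetical keys, check rOPS for the real schema
  dwellers:
    private_interface:
      interface: eth1
      ipv4: 172.27.27.4
      netmask: 255.255.255.240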

The network unit in the core role (rOPS: roles/core/network/private.sls) should pick it up and configure the interface, at least for CentOS/RHEL/Rocky and FreeBSD.

Tinc

Tinc allows creating a mesh network and bridging the network segments.

In router mode, it only forwards IPv4 and IPv6 unicast traffic. In switch and hub modes, it forwards all Ethernet packets between daemons; switch mode additionally behaves like a proper Ethernet bridge, delivering frames directly by MAC address.
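The mode is set with the Mode directive in tinc.conf. A minimal sketch, assuming a netname of drake and illustrative node names:

  # /etc/tinc/drake/tinc.conf (Linux) or /usr/local/etc/tinc/drake/tinc.conf (FreeBSD)
  Name = windriver
  # Mode can be router, switch or hub
  Mode = switch
  ConnectTo = router001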

Troubleshoot

A non IP packet doesn't pass

If the connection is managed by tinc, ensure it's configured in switch mode: in router mode, it only forwards IPv4 and IPv6 unicast packets. For Linux, check the tinc bridging reference guide: https://www.tinc-vpn.org/examples/bridging/

A route is missing

  • Linux: ip route add 172.27.27.0/24 via $GW
  • FreeBSD: route add -net 172.27.27.0/24 $GW

$GW is:

  • 172.27.27.1 for IntraNought (dwellers, docker-001) and Tinc tunnels
  • the tunnel IP, for example 172.27.27.27 for GRE tunnels
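
For completeness, a GRE tunnel of that kind could be built like this on Linux (a hypothetical sketch: the public endpoints use documentation address space, the local tunnel address is illustrative, and 172.27.27.27 follows the $GW example above as the remote end):

  # placeholder public endpoints (203.0.113.0/24 is reserved for documentation)
  ip tunnel add gre-drake mode gre local 203.0.113.10 remote 203.0.113.20 ttl 255
  ip link set gre-drake up
  # illustrative local tunnel address, peering with 172.27.27.27
  ip addr add 172.27.27.26/32 peer 172.27.27.27/32 dev gre-drake
  # $GW is then the tunnel IP, as described above
  ip route add 172.27.27.0/24 via 172.27.27.27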