Master node | Minimum |
---|---|
CPU | 16 cores |
RAM | 64GB |
Disk space in /opt/anaconda | 500GB |
Disk space in /var/lib/gravity | 300GB |
Disk space in /tmp or $TMPDIR | 50GB |
/var/lib/gravity is utilized as additional space to accommodate upgrades. Anaconda recommends having this available during installation. The /var/lib/gravity volume must be mounted on local storage. Core components of Kubernetes run from this directory, some of which are extremely intolerant of disk latency. Therefore, Network-Attached Storage (NAS) and Storage Area Network (SAN) solutions are not supported for this volume.

/opt/anaconda is utilized for project and package storage (including mirrored packages).

Anaconda recommends setting up the /opt/anaconda and /var/lib/gravity partitions using Logical Volume Management (LVM) to provide the flexibility needed to accommodate easier future expansion.

/opt and /opt/anaconda must be an ext4 or xfs filesystem, and cannot be an NFS mountpoint. Subdirectories of /opt/anaconda may be mounted through NFS. For more information, see Mounting an external file share.

The xfs filesystem must support d_type file labeling to work properly. To support d_type file labeling, set ftype=1 by running the following command prior to installing Workbench.

This command will erase all data on the specified device! Make sure you are targeting the correct device and that you have backed up any important data from it before proceeding.
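A typical invocation looks like the following, where /dev/<device> is a placeholder for the block device backing the volume:

```bash
# Format the device as XFS with d_type support enabled (ftype=1). This destroys any existing data.
sudo mkfs.xfs -n ftype=1 /dev/<device>
```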
Worker node | Minimum |
---|---|
CPU | 16 cores |
RAM | 64GB |
Disk space in /var/lib/gravity | 300GB |
Disk space in /tmp or $TMPDIR | 50GB |
If you are installing on AWS, the recommended EC2 instance type is m4.4xlarge for both master and worker nodes. You must have a minimum of 3000 IOPS.

Before installing, also verify the following:

- The maximum number of systemd tasks is not capped (set DefaultTasksMax=infinity in /etc/systemd/system.conf).
- Some systems ship with a preconfigured ip rule and the networkmanager service that interfere with installation. Remove the bad rule and disable the networkmanager service prior to installation.
- If you run security scanning software, exclude the /var/lib/gravity volume from its security scans.
- The user performing the installation must have sudo access.
- SELinux must be in disabled or permissive mode in the /etc/selinux/config file.

Configuring SELinux
1. Open the /etc/selinux/config file using your preferred file editor.
2. Find the line beginning with SELINUX= and set it to either disabled or permissive.
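The same change can also be made from the command line, for example:

```bash
# Switch SELinux to permissive mode for the running system.
sudo setenforce 0

# Persist the setting across reboots.
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config

# Confirm the current mode.
getenforce
```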
The following kernel modules are required, depending on your Linux distribution:

Linux Distribution | Version | Required Modules |
---|---|---|
CentOS | 7.2 | bridge, ebtable_filter, ebtables, iptable_filter, iptable_nat, overlay |
CentOS | 7.3-7.7, 8.0 | br_netfilter, ebtable_filter, ebtables, iptable_filter, iptable_nat, overlay |
RedHat Linux | 7.2 | bridge, ebtable_filter, ebtables, iptable_filter, iptable_nat |
RedHat Linux | 7.3-7.7, 8.0 | br_netfilter, ebtable_filter, ebtables, iptable_filter, iptable_nat, overlay |
Ubuntu | 16.04 | br_netfilter, ebtable_filter, ebtables, iptable_filter, iptable_nat, overlay |
Suse | 12 SP2, 12 SP3, 12 SP5 | br_netfilter, ebtable_filter, ebtables, iptable_filter, iptable_nat, overlay |
Module Name | Purpose |
---|---|
bridge | Enables Kubernetes iptables-based proxy to operate |
br_netfilter | Enables Kubernetes iptables-based proxy to operate |
overlay | Enables the use of the overlay or overlay2 Docker storage driver |
ebtable_filter | Allows a service to communicate back to itself via internal load balancing when necessary |
ebtables | Allows a service to communicate back to itself via internal load balancing when necessary |
iptable_filter | Ensures the firewall rules set up by Kubernetes function properly |
iptable_nat | Ensures the firewall rules set up by Kubernetes function properly |
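For example, the required modules can be loaded immediately and persisted across reboots as follows. Trim the list to match your distribution's row above; the file name workbench.conf is arbitrary.

```bash
# Load the modules now.
for m in br_netfilter ebtable_filter ebtables iptable_filter iptable_nat overlay; do
  sudo modprobe "$m"
done

# Persist them across reboots.
cat <<'EOF' | sudo tee /etc/modules-load.d/workbench.conf
br_netfilter
ebtable_filter
ebtables
iptable_filter
iptable_nat
overlay
EOF

# Verify that they are loaded.
lsmod | grep -E 'br_netfilter|ebtable|iptable|overlay'
```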
Workbench requires the following sysctl settings to function properly:
sysctl setting | Purpose |
---|---|
net.bridge.bridge-nf-call-iptables | Communicates with bridge kernel module to ensure Kubernetes iptables-based proxy operates |
net.bridge.bridge-nf-call-ip6tables | Communicates with bridge kernel module to ensure Kubernetes iptables-based proxy operates |
fs.may_detach_mounts | Allows the unmount operation to complete even if there are active references to the filesystem remaining |
net.ipv4.ip_forward | Required for internal load balancing between servers to work properly |
fs.inotify.max_user_watches | Set to 1048576 to improve cluster longevity |
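For example, the settings can be persisted in a drop-in file and applied without a reboot. The file name 99-workbench.conf is arbitrary, and br_netfilter must be loaded before the net.bridge.* keys exist:

```bash
cat <<'EOF' | sudo tee /etc/sysctl.d/99-workbench.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
net.ipv4.ip_forward = 1
fs.inotify.max_user_watches = 1048576
EOF

# Apply the settings without rebooting.
sudo sysctl --system
```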
If you plan to use GPUs with Workbench, the following CUDA versions are supported:

- 10.2
- 11.2
- 11.4
- 11.6

When downloading the NVIDIA installer, choose the rpm (local) or rpm (network) package type on RPM-based distributions, or deb (local) or deb (network) on Ubuntu. For example, to install CUDA 11.6:
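As a rough sketch for a network install on an RPM-based system, following NVIDIA's published repository layout (verify the repository URL and package names against NVIDIA's installation guide for your distribution and CUDA version):

```bash
# Add NVIDIA's CUDA network repository (requires yum-utils).
sudo yum-config-manager --add-repo \
  https://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/cuda-rhel7.repo
sudo yum clean all

# Install the CUDA 11.6 toolkit.
sudo yum -y install cuda-11-6

# After installing the driver and rebooting, confirm the driver and CUDA version.
nvidia-smi
```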
External ports
Port | Protocol | Description |
---|---|---|
80 | TCP | Workbench UI (plaintext) |
443 | TCP | Workbench UI (encrypted) |
32009 | TCP | Operations Center Admin UI |
Install ports
Port | Protocol | Description |
---|---|---|
4242 | TCP | Bandwidth checker utility |
61009 | TCP | Install wizard UI access required during cluster installation |
61008, 61010, 61022-61024 | TCP | Installer agent ports |
Cluster communication ports
Port | Protocol | Description |
---|---|---|
53 | TCP and UDP | Internal cluster DNS |
2379, 2380, 4001, 7001 | TCP | Etcd server communication |
3008-3012 | TCP | Internal Workbench service |
3022-3025 | TCP | Teleport internal SSH control panel |
3080 | TCP | Teleport Web UI |
5000 | TCP | Docker registry |
6443 | TCP | Kubernetes API Server |
6990 | TCP | Internal Workbench service |
7496, 7373 | TCP | Peer-to-peer health check |
7575 | TCP | Cluster status gRPC API |
8081, 8086-8091, 8095 | TCP | Internal Workbench service |
8472 | UDP | Overlay network |
9080, 9090, 9091 | TCP | Internal Workbench service |
10248-10250, 10255 | TCP | Kubernetes components |
30000-32767 | TCP | Kubernetes internal services range |
There are a number of firewall solutions available, such as iptables, firewall-cmd, susefirewall2, and more! Whichever you use, configure it so that the 10.244.0.0/16 pod subnet and 10.100.0.0/16 service subnet are accessible to every node in the cluster, and grant all nodes the ability to communicate via their primary interface.

For example, if you’re using iptables:
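The rules below are an illustrative sketch only; adapt them to your existing policy and persist them with your distribution's tooling:

```bash
# Accept traffic to and from the pod and service subnets on every node.
sudo iptables -A INPUT   -s 10.244.0.0/16 -j ACCEPT
sudo iptables -A INPUT   -s 10.100.0.0/16 -j ACCEPT
sudo iptables -A FORWARD -s 10.244.0.0/16 -j ACCEPT
sudo iptables -A FORWARD -d 10.244.0.0/16 -j ACCEPT
sudo iptables -A FORWARD -s 10.100.0.0/16 -j ACCEPT
sudo iptables -A FORWARD -d 10.100.0.0/16 -j ACCEPT
```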
Workbench serves deployments and user sessions from dynamically generated URLs such as https://uuid001.anaconda.yourdomain.com. This requires the use of wildcard DNS entries that apply to a set of domain names such as *.anaconda.yourdomain.com.

For example, if you are using the domain name anaconda.yourdomain.com with a master node IP address of 12.34.56.78, the DNS entries would be as follows:

Domain | Record type | Value |
---|---|---|
anaconda.yourdomain.com | A | 12.34.56.78 |
*.anaconda.yourdomain.com | A | 12.34.56.78 |
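Once the records are in place, you can confirm that both the base name and an arbitrary subdomain resolve to the master node:

```bash
dig +short anaconda.yourdomain.com
dig +short uuid001.anaconda.yourdomain.com   # any subdomain should return the same address
```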
Alternatively, you can add /etc/hosts entries to the gravity environment.
If dnsmasq is installed on the master node or any worker nodes, you’ll need to remove it from all nodes prior to installing Workbench.

Verify dnsmasq is disabled by running the following command:
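For example, on a systemd-based distribution:

```bash
sudo systemctl status dnsmasq
```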
To remove dnsmasq, run the following commands:
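For example, on a systemd host using yum (substitute apt or zypper as appropriate):

```bash
# Stop and disable the service, then remove the package.
sudo systemctl stop dnsmasq
sudo systemctl disable dnsmasq
sudo yum remove -y dnsmasq
```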
Once the installer has been extracted to ~/anaconda-enterprise-<VERSION>, run the following on your intended master and worker nodes:
To perform system checks on the master node, run the following command as a sudo or root user:
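One possible invocation, assuming the extracted archive contains the gravity binary and that the master node profile in the bundled manifest (app.yaml) is named ae-master; check the manifest for the actual file and profile names:

```bash
cd ~/anaconda-enterprise-<VERSION>
sudo ./gravity check --profile ae-master app.yaml
```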
To perform system checks on a worker node, run the following command as a sudo or root user:
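Under the same assumptions, with ae-worker standing in for the worker profile name:

```bash
cd ~/anaconda-enterprise-<VERSION>
sudo ./gravity check --profile ae-worker app.yaml
```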
Gravity pre-installation checklist

Before running the installer, confirm the following:

- The user performing the installation has sudo access on all nodes and is not a root user.
- The TLS/SSL certificates to be installed with Workbench have been obtained, including the private keys.
- An A or CNAME domain record is fully operational, and points to the IP address of the master node.
- The /etc/resolv.conf file on all the nodes does not include the rotate option.
- Docker (dockerd), dnsmasq, and lxd have been removed from all nodes, as they will conflict with Workbench.
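A few of these items can be spot-checked from the shell, for example:

```bash
# The rotate option must not appear in resolv.conf on any node.
grep -w rotate /etc/resolv.conf || echo "no rotate option found"

# Conflicting services should be absent or inactive on every node.
for svc in docker dnsmasq lxd; do
  printf '%s: %s\n' "$svc" "$(systemctl is-active "$svc" 2>/dev/null)"
done
```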