Master node | Minimum |
---|---|
CPU | 16 cores |
RAM | 64GB |
Disk space in /opt/anaconda | 500GB* |
Disk space in /var/lib/gravity | 300GB** |
Disk space in /tmp or $TMPDIR | 50GB |
Worker nodes | Minimum |
---|---|
CPU | 16 cores |
RAM | 64GB |
Disk space in /var/lib/gravity | 300GB |
Disk space in /tmp or $TMPDIR | 50GB |
\* /opt/anaconda: /opt and /opt/anaconda must be an ext4 or xfs filesystem, and cannot be an NFS mountpoint. Subdirectories of /opt/anaconda may be mounted through NFS. See Mounting an external file share for more information.
If you use an xfs filesystem, it needs to support d_type to work properly. If your XFS filesystem has been formatted with the -n ftype=0 option, it won't support d_type, and will therefore need to be recreated using a command similar to the following before installing Anaconda Enterprise:
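(A sketch; /dev/sdX is a placeholder for your block device, and reformatting destroys any existing data on it.)

```
sudo mkfs.xfs -f -n ftype=1 /dev/sdX
```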
\*\* /var/lib/gravity: Anaconda recommends that you set up the /opt/anaconda and /var/lib/gravity partitions using Logical Volume Management (LVM), to provide the flexibility needed to accommodate easier future expansion.

To verify the number of CPU cores available on a node, run nproc.
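A minimal LVM sketch, assuming a spare disk at /dev/sdb (hypothetical device and volume names; size the volumes to your needs):

```
# Create a physical volume and a volume group on the spare disk
sudo pvcreate /dev/sdb
sudo vgcreate ae_vg /dev/sdb

# Carve out logical volumes matching the minimums above
sudo lvcreate -L 500G -n opt_anaconda ae_vg
sudo lvcreate -L 300G -n var_lib_gravity ae_vg

# Format (ext4 shown; xfs with ftype=1 also works) and mount
sudo mkfs.ext4 /dev/ae_vg/opt_anaconda
sudo mkfs.ext4 /dev/ae_vg/var_lib_gravity
sudo mkdir -p /opt/anaconda /var/lib/gravity
sudo mount /dev/ae_vg/opt_anaconda /opt/anaconda
sudo mount /dev/ae_vg/var_lib_gravity /var/lib/gravity
```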
The installer uses the /tmp directory during the installation process.
If adequate free space is not available in the /tmp
directory, you can specify the location of the temporary directory to be used during installation by setting the TMPDIR
environment variable to a different location.
EXAMPLE:
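(A sketch; the gravity installer name and the alternate directory are placeholders.)

```
sudo TMPDIR=/data/tmp ./gravity install
```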
If you use sudo to install, the temporary directory must be set explicitly on the command line, as shown above, to preserve TMPDIR. The master node and each worker node all require a temporary directory of the same size, and each should set the TMPDIR variable as needed.

To verify the available disk space, use the df utility with the -h parameter for human-readable format:
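(The paths are the directories from the requirements tables above.)

```
df -h /opt/anaconda /var/lib/gravity /tmp
```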
Set DefaultTasksMax=infinity in /etc/systemd/system.conf.

There is a known issue involving a bad ip rule and the networkmanager service: you will need to remove the bad rule and disable the networkmanager service prior to installing.

To check which Linux distribution and version you are running, use cat /etc/*release* or lsb-release -a.
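A sketch of these preparation steps (the exact bad rule varies, so inspect the ip rule output before deleting anything; the service unit is typically named NetworkManager):

```
# Raise the systemd task limit
echo 'DefaultTasksMax=infinity' | sudo tee -a /etc/systemd/system.conf
sudo systemctl daemon-reexec

# List routing policy rules, then delete the offending one
ip rule list
# sudo ip rule del <bad-rule-selector>

# Disable the networkmanager service before installing
sudo systemctl disable --now NetworkManager

# Confirm the distribution and version
cat /etc/*release*
```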
User 1000 (or the UID for the service account) needs to be able to write to the directory specified by TMPDIR. This means they must be able to read, write, and execute on the $TMPDIR.

For example, to give write access to UID 1000, run the following command:
sudo chown 1000 $TMPDIR
To ensure the installation has the access it requires, disable any firewall software running on the nodes, such as iptables, firewall-cmd, susefirewall2, and others.

Ensure SELinux is not in enforcing mode, by either disabling it or putting it in permissive mode in the /etc/selinux/config file. The SELinux status should report Disabled or Permissive.
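A sketch of these steps on a RHEL-family node (assuming firewalld; use the equivalent tooling on other distributions):

```
# Stop and disable the firewall
sudo systemctl disable --now firewalld

# Put SELinux in permissive mode now and across reboots
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# Verify the status reports Disabled or Permissive
sestatus
```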
Anaconda Enterprise requires the following kernel modules, by Linux distribution and version:

Linux Distribution | Version | Modules |
---|---|---|
CentOS | 7.2 | bridge, ebtable_filter, ebtables, iptable_filter, overlay |
RedHat Linux | 7.2 | bridge, ebtable_filter, ebtables, iptable_filter |
CentOS | 7.3, 7.4, 7.5, 7.6, 7.7, 8.0 | br_netfilter, ebtable_filter, ebtables, iptable_filter, overlay |
RedHat Linux | 7.3, 7.4, 7.5, 7.6, 7.7, 8.0 | br_netfilter, ebtable_filter, ebtables, iptable_filter, overlay |
Ubuntu | 16.04 | br_netfilter, ebtable_filter, ebtables, iptable_filter, overlay |
Suse | 12 SP2, 12 SP3 | br_netfilter, ebtable_filter, ebtables, iptable_filter, overlay |
Module name | Purpose |
---|---|
bridge | Required for Kubernetes iptables-based proxy to work correctly |
br_netfilter | Required for Kubernetes iptables-based proxy to work correctly |
overlay | Required to use overlay or overlay2 Docker storage driver |
ebtable_filter | Required to allow a service to communicate back to itself via internal load balancing when necessary |
ebtables | Required to allow a service to communicate back to itself via internal load balancing when necessary |
iptable_filter | Required to make sure that the firewall rules that Kubernetes sets up function properly |
iptable_nat | Required to make sure that the firewall rules that Kubernetes sets up function properly |
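To load and verify the modules ahead of installation, a sketch (adjust the list to match your distribution's row above):

```
# Load each required module
for mod in br_netfilter overlay ebtable_filter ebtables iptable_filter iptable_nat; do
  sudo modprobe "$mod"
done

# Confirm they are present
lsmod | grep -E 'br_netfilter|overlay|ebtable|iptable'
```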
Anaconda Enterprise requires the following sysctl settings to function properly:
System setting | Purpose |
---|---|
net.bridge.bridge-nf-call-iptables | Works with bridge kernel module to ensure Kubernetes iptables-based proxy works correctly |
net.bridge.bridge-nf-call-ip6tables | Works with bridge kernel module to ensure Kubernetes iptables-based proxy works correctly |
fs.may_detach_mounts | If not enabled, can cause conflicts with the docker daemon and leave pods in a stuck state |
net.ipv4.ip_forward | Required for internal load balancing between servers to work properly |
fs.inotify.max_user_watches | Set to 1048576 to improve cluster longevity |
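A sketch that applies these settings persistently via a sysctl drop-in (the filename is arbitrary; the net.bridge keys require the br_netfilter module to be loaded):

```
sudo tee /etc/sysctl.d/90-anaconda-enterprise.conf <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
net.ipv4.ip_forward = 1
fs.inotify.max_user_watches = 1048576
EOF
sudo sysctl --system
```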
The installer files are extracted to ~/anaconda-enterprise-<installer-version>.

Select rpm (local) or rpm (network) for SLES, CentOS, and RHEL, and deb (local) or deb (network) for Ubuntu.
Current supported CUDA Driver versions:
The following ports must be accessible to users of Anaconda Enterprise:

Port | Protocol | Description |
---|---|---|
80 | TCP | Anaconda Enterprise UI (plaintext) |
443 | TCP | Anaconda Enterprise UI (encrypted) |
32009 | TCP | Operations Center Admin UI |
The following ports are required during cluster installation:

Port | Protocol | Description |
---|---|---|
4242 | TCP | Bandwidth checker utility |
61009 | TCP | Install wizard UI access required during cluster installation |
61008, 61010, 61022-61024 | TCP | Installer agent ports |
The following ports are used for internal communication between cluster nodes:

Port | Protocol | Description |
---|---|---|
53 | TCP and UDP | Internal cluster DNS |
2379, 2380, 4001, 7001 | TCP | Etcd server communication |
3008-3012 | TCP | Internal Anaconda Enterprise service |
3022-3025 | TCP | Teleport internal SSH control panel |
3080 | TCP | Teleport Web UI |
5000 | TCP | Docker registry |
6443 | TCP | Kubernetes API Server |
6990 | TCP | Internal Anaconda Enterprise service |
7496, 7373 | TCP | Peer-to-peer health check |
7575 | TCP | Cluster status gRPC API |
8081, 8086-8091, 8095 | TCP | Internal Anaconda Enterprise service |
8472 | UDP | Overlay network |
9080, 9090, 9091 | TCP | Internal Anaconda Enterprise service |
10248-10250, 10255 | TCP | Kubernetes components |
30000-32767 | TCP | Kubernetes internal services range |
Ensure that the 10.244.0.0/16 pod subnet and the 10.100.0.0/16 service subnet are accessible to every node in the cluster, and grant all nodes the ability to communicate via their primary interface.
For example, if you’re using iptables
:
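(A sketch; confirm chain and interface details against your firewall policy.)

```
sudo iptables -A INPUT -s 10.244.0.0/16 -j ACCEPT
sudo iptables -A INPUT -s 10.100.0.0/16 -j ACCEPT
sudo iptables -A INPUT -s <node_ip> -j ACCEPT
```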
where <node_ip> specifies the internal IP address(es) used by all nodes in the cluster to connect to the AE5 master.
If you plan to use online package mirroring, you’ll need to allowlist the following domains:
If deployments were hosted at addresses such as https://anaconda.yourdomain.com/apps/001 and https://anaconda.yourdomain.com/apps/002, one app could access the cookies of the other, and JavaScript in one app could access the other app.
To prevent this potential security risk, Anaconda assigns deployments unique addresses such as
https://uuid001.anaconda.yourdomain.com
and
https://uuid002.anaconda.yourdomain.com
, where yourdomain.com
is replaced with your organization’s domain name, and uuid001
and uuid002
are replaced with dynamically generated universally unique identifiers (UUIDs).
To facilitate this, Anaconda Enterprise requires the use of wildcard DNS entries that apply to a set of domain names such as *.anaconda.yourdomain.com
.
For example, if you are using the fully qualified domain name (FQDN) anaconda.yourdomain.com
with a master node IP address of 12.34.56.78
, the DNS entries would be as follows:
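(Zone-file notation shown; adapt to your DNS provider's format.)

```
anaconda.yourdomain.com.    IN A    12.34.56.78
*.anaconda.yourdomain.com.  IN A    12.34.56.78
```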
Any /etc/hosts entries used must be propagated to the gravity environment.
Existing installations of dnsmasq
will conflict with Anaconda Enterprise. If dnsmasq
is installed on the master node or any worker nodes, you’ll need to remove it from all nodes before installing Anaconda Enterprise.
Run the following commands to ensure dnsmasq
is stopped and disabled:
- Stop dnsmasq: sudo systemctl stop dnsmasq
- Disable dnsmasq: sudo systemctl disable dnsmasq
- Verify that dnsmasq is disabled: sudo systemctl status dnsmasq