- Edit `anaconda-enterprise-env-var-config` and restart the workspace pod. This will enable the session to start successfully.
- Changed `kubernetes.run_as_root` from false to true in the configmap, so that sessions, deployments, and jobs will run as UID 0 from the start. This enables features like authenticated NFS and the sparkconfig script. This also resolved the issue with sessions taking a long time to open.
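A minimal sketch of what that configmap fragment could look like, assuming a plain YAML layout; only the `kubernetes.run_as_root` key and its new value come from the note above:

```yaml
# Illustrative only: the surrounding structure is an assumption,
# not the actual AE configmap schema.
kubernetes:
  run_as_root: true  # previously false; sessions, deployments, and jobs start as UID 0
```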
- `conda.other_variables` entries appear in the log list and are available in sessions, deployments, and jobs.
- `sudo dnf install --installroot=/opt vim`
- `sudo dnf list`
- Changed `/opt/continuum/project` to persist, allowing the existing volume mount support to apply to `/opt/continuum/project` without losing uncommitted data. This ensures that any changes the user has made to the conda environment are properly encapsulated as changes to `anaconda-project.yml`.
- The `.env-var-config.yml`, `.lab_launch`, and environment `.condarc` files could be overwritten if the file was placed in a directory of "lower priority" than the user's home directory.
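The "lower priority" behavior follows conda's layered config model, where files read later override files read earlier. A self-contained sketch of that resolution order, using throwaway paths rather than conda's real search list:

```shell
# Simulate layered config resolution: a later (higher-priority) file,
# standing in for the user's home directory, wins over an earlier one.
tmp=$(mktemp -d)
mkdir -p "$tmp/etc" "$tmp/home"
echo "ssl_verify: false" > "$tmp/etc/condarc"   # lower-priority location
echo "ssl_verify: true"  > "$tmp/home/condarc"  # user home, higher priority

# Read the files in priority order; the last value seen wins.
value=""
for f in "$tmp/etc/condarc" "$tmp/home/condarc"; do
  value=$(sed -n 's/^ssl_verify: //p' "$f")
done
echo "effective ssl_verify: $value"   # prints: effective ssl_verify: true
rm -rf "$tmp"
```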
- Added `noarch` to the default platforms in the `anaconda.yaml` mirror config file.
- Updated the `anaconda-project.yml` file to make the Hadoop-Spark environment template work properly.
- Enabled `sudo yum` operations in sessions across the platform.
- Updated the `default_channels`, `channel_alias`, and `ssl_verify` settings in the `conda` section of the configmap to be consistent with conda configuration settings.
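As an illustration, a `conda` section carrying those three settings might look like this; the setting names come from the note, while the values and repository URLs are placeholders:

```yaml
conda:
  channel_alias: https://repo.example.com/conda    # placeholder URL
  default_channels:
    - https://repo.example.com/conda/main          # placeholder URL
  ssl_verify: true
```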
- Mirroring of `noarch` packages in the package mirroring tool `anaconda-enterprise-cli`.
- An issue with `kube-router` and a `CrashLoopBackOff` error.
- Pods remain in the `ContainerCreating` state for 5 to 10 minutes while all AE images are being pre-pulled, after which the AE user interface will become available.
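To watch that pre-pull complete, standard Kubernetes tooling suffices; the namespace below is an assumption, so substitute the one AE is installed into:

```shell
# Watch pods move from ContainerCreating to Running as images finish pulling.
kubectl get pods -n anaconda-enterprise --watch
```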
- The `verify_ssl` setting with `anaconda-enterprise-cli` (`ae-admin`).
- `sudo yum install` system packages from within project sessions.
- `sparklyr` in Spark Project.
- Renamed `verify_ssl` to `ssl_verify` throughout the AE CLI for consistency with conda.
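If the CLI exposes a `config set` subcommand for this (an assumption here, not confirmed by the note), the renamed setting would be written with the conda-style name:

```shell
# Hypothetical invocation: the subcommand shape is an assumption.
# Note the conda-style name ssl_verify, not the old verify_ssl.
anaconda-enterprise-cli config set ssl_verify true
```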
- `DiskNodeUnderPressure` and cluster stability issues.