Data Science & AI Workbench enables you to connect to the Amazon Simple Storage Service (S3) object storage service to access data stored there. Before you can do so, however, you'll need to install the s3fs package, which provides the Python filesystem interface required to connect to S3:
conda install --channel anaconda s3fs
Any packages you install from the command line are available during the current session only. If you want them to persist, add them to the project’s anaconda-project.yml file. For more information, see Project configurations.
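For example, you might list the package under the packages section of anaconda-project.yml so it is installed in every session. This is a sketch; your project file will contain other entries, and your channel list may differ:

```yaml
# anaconda-project.yml (excerpt)
channels:
  - anaconda
packages:
  - s3fs
```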
You can then use code such as this to access a specific S3 bucket from within a notebook session:
from s3fs.core import S3FileSystem
import configparser

# Configparser is in the standard library, and can be used to read the
# .ini file so that you can use it to set up the S3 object below.
config = configparser.ConfigParser()
config.read('/var/run/secrets/user_credentials/aws_credentials')

# Set up the object using the credentials from the file
fs = S3FileSystem(
    anon=False,
    key=config.get('default', 'aws_access_key_id'),
    secret=config.get('default', 'aws_secret_access_key'),
)

# Provide the bucket and file name
bucket = 'test_bucket'
file_name = 'testing.txt'

# To list what is in the specified bucket
all_files = fs.ls(f'{bucket}')
print(all_files)

# To read the specified file in the named bucket
with fs.open(f'{bucket}/{file_name}', mode='rb') as f:
    print(f.read())

Your credentials file must be in .ini format and look like the following (placeholder values shown):

[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>
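Because the credentials file is plain .ini text, you can check the parsing step without touching S3 at all. A minimal sketch using an in-memory string with placeholder values (not real credentials):

```python
import configparser

# Placeholder credentials in the same .ini layout the code above expects
sample = """
[default]
aws_access_key_id = AKIAEXAMPLE
aws_secret_access_key = examplesecretkey
"""

config = configparser.ConfigParser()
config.read_string(sample)

# The 'default' section name matches the config.get() calls used above
print(config.get('default', 'aws_access_key_id'))  # -> AKIAEXAMPLE
```

If this prints your placeholder key, the same ConfigParser calls will work against the mounted credentials file.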
See Secrets for information about adding credentials to the platform, to make them available in your projects. Any secrets you add will be available across all sessions and deployments associated with your user account.