# Data Hub File Service - a service enabling file inspection and re-encryption at Data Hubs
We recommend using the provided Docker container.

A pre-built version is available on Docker Hub:

```bash
docker pull ghga/datahub-file-service:1.0.2
```

Or you can build the container yourself from the `./Dockerfile`:
```bash
# Execute in the repo's root dir:
docker build -t ghga/datahub-file-service:1.0.2 .
```

For production-ready deployment, we recommend using Kubernetes. However, for simple use cases, you could run the service with Docker on a single server:
```bash
# The entrypoint is pre-configured:
docker run -p 8080:8080 ghga/datahub-file-service:1.0.2 --help
```

If you prefer not to use containers, you may install the service from source:
```bash
# Execute in the repo's root dir:
pip install .

# To run the service:
dhfs --help
```

The service requires the following configuration parameters:
- `client_cache_capacity` (integer): Maximum number of entries to store in the cache. Older entries are evicted once this limit is reached. Exclusive minimum: `0`. Default: `128`.
- `client_cache_ttl` (integer): Number of seconds after which a stored response is considered stale. Minimum: `0`. Default: `60`.
- `client_cacheable_methods` (array): HTTP methods for which responses are allowed to be cached. Default: `["POST", "GET"]`.
- `client_exponential_backoff_max` (integer): Maximum number of seconds to wait between retries when using exponential backoff retry strategies. The client timeout might need to be adjusted accordingly. Minimum: `0`. Default: `60`.
- `client_num_retries` (integer): Number of times to retry failed API calls. Minimum: `0`. Default: `3`.
- `client_retry_status_codes` (array): List of status codes that should trigger retrying a request. Default: `[408, 429, 500, 502, 503, 504]`.
- `client_reraise_from_retry_error` (boolean): Specifies whether the exception wrapped in the final `RetryError` is reraised or the `RetryError` is returned as is. Default: `true`.
- `per_request_jitter` (number): Maximum amount of jitter (in seconds) to add to each request. Minimum: `0`. Default: `0.0`.
- `retry_after_applicable_for_num_requests` (integer): Number of requests after which the stored delay from a 429 response is ignored again. Can be useful to adjust if concurrent requests are fired in quick succession. Exclusive minimum: `0`. Default: `1`.
- `http_request_timeout_seconds` (number): Request timeout setting in seconds. Default: `60.0`.
- `data_hub_crypt4gh_private_key_path` (string, format: path, required): Path to the Data Hub's Crypt4GH private key file. Examples: `"./key.sec"`.
- `crypt4gh_private_key_passphrase`: Passphrase needed to read the content of the private key file. Only needed if the private key is encrypted. Default: `null`.
- `central_api_crypt4gh_public_key` (string, required): The Crypt4GH public key used by the Central API. This is used to encrypt new file encryption secrets.
- `central_api_url` (string, format: uri, required): The base URL used to connect to the GHGA Central API. Length must be between 1 and 2083 (inclusive).
- `data_hub_signing_key` (string, format: password, required and write-only): The Data Hub's private JWK for signing JWT auth tokens. Examples: `"{\"crv\": \"P-256\", \"kty\": \"EC\", \"x\": \"...\", \"y\": \"...\", \"d\": \"...\"}"`.
- `storage_alias` (string, required): An alias identifying the Data Hub at which this instance of DHFS is running. This value should be set in coordination with GHGA Central. Examples: `"HD"`, `"TUE"`, `"B"`.
- `s3_endpoint_url` (string, required): URL to the S3 API. Examples: `"http://localhost:4566"`.
- `s3_access_key_id` (string, required): Part of the credentials for logging into the S3 service. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html. Examples: `"my-access-key-id"`.
- `s3_secret_access_key` (string, format: password, required and write-only): Part of the credentials for logging into the S3 service. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html. Examples: `"my-secret-access-key"`.
- `s3_session_token`: Part of the credentials for logging into the S3 service. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html. Default: `null`. Examples: `"my-session-token"`.
- `aws_config_ini`: Path to a config file for specifying more advanced S3 parameters. This should follow the format described here: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html#using-a-configuration-file. Default: `null`. Examples: `"~/.aws/config"`.
- `log_level` (string): The minimum log level to capture. Must be one of: `"CRITICAL"`, `"ERROR"`, `"WARNING"`, `"INFO"`, `"DEBUG"`, or `"TRACE"`. Default: `"INFO"`.
- `service_name` (string): Short name of this service. Default: `"dhfs"`.
- `service_instance_id` (string, required): A string that uniquely identifies this instance across all instances of this service. This is included in log messages. Examples: `"germany-bw-instance-001"`.
- `log_format`: If set, will replace JSON formatting with the specified string format. If not set, has no effect. In addition to the standard attributes, the following can also be specified: `timestamp`, `service`, `instance`, `level`, `correlation_id`, and `details`. Default: `null`. Examples: `"%(timestamp)s - %(service)s - %(level)s - %(message)s"`, `"%(asctime)s - Severity: %(levelno)s - %(msg)s"`.
- `log_traceback` (boolean): Whether to include exception tracebacks in log messages. Default: `true`.
- `min_run_interval_seconds` (integer): The minimum number of seconds to wait before asking the Central API about new files for interrogation. Default: `60`.
- `interrogation_bucket_id` (string): The name of the S3 "interrogation" bucket. Default: `"interrogation"`.
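For illustration, a minimal config covering only the required parameters might look like the following sketch. All values are placeholders taken from the examples above; adapt them to your deployment (optional parameters fall back to their defaults):

```yaml
# Placeholder values - adapt to your deployment.
data_hub_crypt4gh_private_key_path: "./key.sec"
central_api_crypt4gh_public_key: "..."  # the Central API's Crypt4GH public key
central_api_url: "https://api.example.org"  # hypothetical URL
data_hub_signing_key: '{"crv": "P-256", "kty": "EC", "x": "...", "y": "...", "d": "..."}'
storage_alias: "HD"
s3_endpoint_url: "http://localhost:4566"
s3_access_key_id: "my-access-key-id"
s3_secret_access_key: "my-secret-access-key"
service_instance_id: "germany-bw-instance-001"
```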
A template YAML file for configuring the service can be found at `./example_config.yaml`. Please adapt it, rename it to `.dhfs.yaml`, and place it in one of the following locations:

- the current working directory from which you execute the service (on Linux: `./.dhfs.yaml`)
- your home directory (on Linux: `~/.dhfs.yaml`)
The config YAML file will be automatically parsed by the service.
Important: If you are using containers, the locations refer to paths within the container.
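For example, a host-side config file can be bind-mounted to one of these in-container locations. The following is a sketch; the in-container working directory depends on the image and is assumed here to be `/service`:

```bash
# Sketch: mount a host-side config file into the container.
# "/service" is an assumed working directory - check the image if unsure.
docker run -p 8080:8080 \
  -v /path/on/host/.dhfs.yaml:/service/.dhfs.yaml \
  ghga/datahub-file-service:1.0.2
```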
All parameters mentioned in the `./example_config.yaml`
can also be set using environment variables or file secrets.

To name the environment variables, simply prefix the parameter name with `dhfs_`,
e.g. to set the host, define an environment variable named `dhfs_host`
(you may use either upper or lower case; however, it is standard to define all env
variables in upper case).
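As a sketch of this convention, the following sets a few of the parameters listed above via environment variables (values are placeholders):

```shell
# Each variable is the parameter name prefixed with "dhfs_",
# written in upper case by convention. Values are placeholders.
export DHFS_SERVICE_INSTANCE_ID="germany-bw-instance-001"
export DHFS_LOG_LEVEL="DEBUG"
export DHFS_STORAGE_ALIAS="HD"
```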
To use file secrets, please refer to the corresponding section of the pydantic documentation.
An OpenAPI specification for this service can be found here.
This is a Python-based service following the Triple Hexagonal Architecture pattern. It uses protocol/provider pairs and dependency injection mechanisms provided by the hexkit library.
For setting up the development environment, we rely on the devcontainer feature of VS Code in combination with Docker Compose.

To use it, you need to have Docker Compose as well as VS Code with its "Remote - Containers"
extension (`ms-vscode-remote.remote-containers`) installed.
Then open this repository in VS Code and run the command
`Remote-Containers: Reopen in Container` from the VS Code "Command Palette".
This will give you a full-fledged, pre-configured development environment including:
- infrastructural dependencies of the service (databases, etc.)
- all relevant VS Code extensions pre-installed
- pre-configured linting and auto-formatting
- a pre-configured debugger
- automatic license-header insertion
Inside the devcontainer, a command `dev_install` is available for convenience.
It installs the service with all development dependencies, and it installs pre-commit.

The installation is performed automatically when you build the devcontainer. However,
if you update dependencies in `./pyproject.toml` or
`lock/requirements-dev.txt`, run it again.
This repository is free to use and modify according to the Apache 2.0 License.
This README file is auto-generated; please see `.readme_generation/README.md` for details.