# ClusterInfo

## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
num_workers | Option<i32> | If set, the number of worker nodes that this cluster must have. A cluster has one Spark driver and num_workers executors for a total of num_workers + 1 Spark nodes. Note: when reading the properties of a cluster, this field reflects the desired number of workers rather than the actual number of workers. For instance, if a cluster is resized from 5 to 10 workers, this field is immediately updated to the target size of 10 workers, whereas the workers listed in executors gradually increase from 5 to 10 as the new nodes are provisioned. (See the deserialization sketch after this table.) | [optional]
autoscale |  |  | [optional]
cluster_id | Option<String> | Canonical identifier for the cluster. This ID is retained during cluster restarts and resizes, while each new cluster has a globally unique ID. | [optional]
creator_user_name | Option<String> | Creator user name. The field is not included in the response if the user has already been deleted. | [optional]
driver |  |  | [optional]
executors |  | Nodes on which the Spark executors reside. | [optional]
spark_context_id | Option<i64> | A canonical SparkContext identifier. This value does change when the Spark driver restarts. The pair (cluster_id, spark_context_id) is a globally unique identifier over all Spark contexts. | [optional]
jdbc_port | Option<i32> | Port on which the Spark JDBC server listens on the driver node. No service listens on this port on executor nodes. | [optional]
cluster_name | Option<String> | Cluster name requested by the user. This does not have to be unique. If not specified at creation, the cluster name is an empty string. | [optional]
spark_version | Option<String> |  | [optional]
spark_conf |  | An arbitrary object where the object key is a configuration property name and the value is a configuration property value. (See the key/value sketch after this table.) | [optional]
aws_attributes |  |  | [optional]
node_type_id | Option<String> |  | [optional]
driver_node_type_id | Option<String> | The node type of the Spark driver. This field is optional; if unset, the driver node type is set to the same value as node_type_id defined above. | [optional]
ssh_public_keys | Option<Vec<String>> | SSH public key contents that are added to each Spark node in this cluster. The corresponding private keys can be used to log in with the user name ubuntu on port 2200. Up to 10 keys can be specified. | [optional]
custom_tags |  |  | [optional]
cluster_log_conf |  |  | [optional]
init_scripts |  | The configuration for storing init scripts. Any number of destinations can be specified. The scripts are executed sequentially in the order provided. If cluster_log_conf is specified, init script logs are sent to <destination>/<cluster-ID>/init_scripts. | [optional]
docker_image |  |  | [optional]
spark_env_vars |  | An arbitrary object where the object key is an environment variable name and the value is an environment variable value. (See the key/value sketch after this table.) | [optional]
autotermination_minutes | Option<i32> | Automatically terminates the cluster after it is inactive for this time in minutes. If not set, this cluster is not automatically terminated. If specified, the threshold must be between 10 and 10000 minutes. You can also set this value to 0 to explicitly disable automatic termination. | [optional]
enable_elastic_disk | Option<bool> |  | [optional]
instance_pool_id | Option<String> |  | [optional]
cluster_source |  |  | [optional]
state |  |  | [optional]
state_message | Option<String> | A message associated with the most recent state transition (for example, the reason why the cluster entered a TERMINATED state). This field is unstructured, and its exact format is subject to change. | [optional]
start_time | Option<i64> | Time (in epoch milliseconds) when the cluster creation request was received (when the cluster entered a PENDING state). | [optional]
terminated_time | Option<i64> | Time (in epoch milliseconds) when the cluster was terminated, if applicable. | [optional]
last_state_loss_time | Option<i64> | Time when the cluster driver last lost its state (due to a restart or driver failure). | [optional]
last_activity_time | Option<i64> |  | [optional]
cluster_memory_mb | Option<i64> | Total amount of cluster memory, in megabytes. | [optional]
cluster_cores | Option<f32> | Number of CPU cores available for this cluster. This can be fractional since certain node types are configured to share cores between Spark nodes on the same instance. | [optional]
default_tags |  |  | [optional]
cluster_log_status |  |  | [optional]
termination_reason |  |  | [optional]
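For orientation, here is a minimal sketch of reading a ClusterInfo-style JSON response in Rust with serde and serde_json. The struct name `ClusterInfoSketch`, the field subset, and the JSON values are illustrative assumptions, not the generated model shipped by this crate; they only mirror a few rows of the table above, including the num_workers target size.

```rust
// Sketch only: requires serde (with the "derive" feature) and serde_json.
use std::collections::HashMap;

use serde::Deserialize;

// Illustrative subset of the ClusterInfo properties above; the generated
// model in this crate may use different types and additional fields.
#[derive(Debug, Deserialize)]
struct ClusterInfoSketch {
    num_workers: Option<i32>,
    cluster_id: Option<String>,
    cluster_name: Option<String>,
    spark_conf: Option<HashMap<String, String>>,
    autotermination_minutes: Option<i32>,
    state_message: Option<String>,
    cluster_memory_mb: Option<i64>,
    cluster_cores: Option<f32>,
}

fn main() -> Result<(), serde_json::Error> {
    // Hypothetical response body; all values are made up for illustration.
    let body = r#"{
        "num_workers": 10,
        "cluster_id": "1234-567890-abcde123",
        "cluster_name": "example-cluster",
        "spark_conf": { "spark.speculation": "true" },
        "autotermination_minutes": 120,
        "cluster_memory_mb": 122880,
        "cluster_cores": 32.0
    }"#;

    let info: ClusterInfoSketch = serde_json::from_str(body)?;

    // num_workers is the *target* size: one driver plus num_workers executors.
    if let Some(workers) = info.num_workers {
        println!("target Spark nodes: {}", workers + 1);
    }
    Ok(())
}
```

Because every field is an Option, keys absent from the response simply deserialize to None, which matches the [optional] notes in the table.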
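spark_conf and spark_env_vars are described above as arbitrary key/value objects. The sketch below builds such maps assuming a plain HashMap<String, String> representation (an assumption; the generated model's exact types may differ). The Spark configuration keys are ordinary Spark properties, and MY_ENV_VAR is a hypothetical variable name.

```rust
// Sketch only: requires serde (for the map's Serialize impl) and serde_json.
use std::collections::HashMap;

fn main() {
    // Free-form configuration properties: key is the property name,
    // value is the property value, both strings.
    let mut spark_conf: HashMap<String, String> = HashMap::new();
    spark_conf.insert("spark.speculation".into(), "true".into());
    spark_conf.insert("spark.sql.shuffle.partitions".into(), "200".into());

    // Free-form environment variables: key is the variable name,
    // value is the variable value. MY_ENV_VAR is hypothetical.
    let mut spark_env_vars: HashMap<String, String> = HashMap::new();
    spark_env_vars.insert("MY_ENV_VAR".into(), "some-value".into());

    // Serialized, each map becomes an arbitrary JSON object, e.g.
    // {"spark.speculation":"true","spark.sql.shuffle.partitions":"200"}.
    println!("{}", serde_json::to_string(&spark_conf).unwrap());
    println!("{}", serde_json::to_string(&spark_env_vars).unwrap());
}
```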