# JobsRunNowRequest

## Properties

Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**job_id** | Option<**i64**> | The ID of the job to be executed. | [optional]
**idempotency_token** | Option<**String**> | An optional token to guarantee the idempotency of job run requests. If a run with the provided token already exists, the request does not create a new run but returns the ID of the existing run instead. If a run with the provided token is deleted, an error is returned. If you specify the idempotency token, upon failure you can retry until the request succeeds. Databricks guarantees that exactly one run is launched with that idempotency token. This token must have at most 64 characters. For more information, see How to ensure idempotency for jobs. | [optional]
**jar_params** | Option<**Vec<String>**> | A list of parameters for jobs with Spark JAR tasks, for example `"jar_params": ["john doe", "35"]`. The parameters are used to invoke the main function of the main class specified in the Spark JAR task. If not specified upon `run-now`, it defaults to an empty list. jar_params cannot be specified in conjunction with notebook_params. The JSON representation of this field (for example `{"jar_params":["john doe","35"]}`) cannot exceed 10,000 bytes. Use task parameter variables to set parameters containing information about job runs. | [optional]
**notebook_params** | Option<**::std::collections::HashMap<String, String>**> | A map from keys to values for jobs with notebook tasks, for example `"notebook_params": {"name": "john doe", "age": "35"}`. The map is passed to the notebook and is accessible through the `dbutils.widgets.get` function. If not specified upon `run-now`, the triggered run uses the job’s base parameters. notebook_params cannot be specified in conjunction with jar_params. Use task parameter variables to set parameters containing information about job runs. The JSON representation of this field (for example `{"notebook_params":{"name":"john doe","age":"35"}}`) cannot exceed 10,000 bytes. | [optional]
**python_params** | Option<**Vec<String>**> | A list of parameters for jobs with Python tasks, for example `"python_params": ["john doe", "35"]`. The parameters are passed to the Python file as command-line parameters. If specified upon `run-now`, they overwrite the parameters specified in the job setting. The JSON representation of this field (for example `{"python_params":["john doe","35"]}`) cannot exceed 10,000 bytes. Use task parameter variables to set parameters containing information about job runs. **Important:** these parameters accept only Latin characters (the ASCII character set); using non-ASCII characters returns an error. Examples of invalid, non-ASCII characters are Chinese, Japanese kanji, and emojis. | [optional]
**spark_submit_params** | Option<**Vec<String>**> | A list of parameters for jobs with spark submit tasks, for example `"spark_submit_params": ["--class", "org.apache.spark.examples.SparkPi"]`. The parameters are passed to the spark-submit script as command-line parameters. If specified upon `run-now`, they overwrite the parameters specified in the job setting. The JSON representation of this field (for example `{"spark_submit_params":["--class","org.apache.spark.examples.SparkPi"]}`) cannot exceed 10,000 bytes. Use task parameter variables to set parameters containing information about job runs. **Important:** these parameters accept only Latin characters (the ASCII character set); using non-ASCII characters returns an error. Examples of invalid, non-ASCII characters are Chinese, Japanese kanji, and emojis. | [optional]
**python_named_params** | Option<**::std::collections::HashMap<String, String>**> | A map from keys to values for jobs with Python wheel tasks, for example `"python_named_params": {"name": "task", "data": "dbfs:/path/to/data.json"}`. | [optional]
**pipeline_params** | Option<> |  | [optional]
**sql_params** | Option<**::std::collections::HashMap<String, String>**> | A map from keys to values for SQL tasks, for example `"sql_params": {"name": "john doe", "age": "35"}`. The SQL alert task does not support custom parameters. | [optional]
**dbt_commands** | Option<**Vec<String>**> | An array of commands to execute for jobs with the dbt task, for example `"dbt_commands": ["dbt deps", "dbt seed", "dbt run"]`. | [optional]
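
## Example

As a sketch of how these fields fit together, the snippet below builds a run-now payload with an idempotency token and notebook parameters, then prints its JSON representation. The struct here mirrors only a subset of the fields documented above so the example compiles on its own; the field subset, sample values, and serde attributes are illustrative assumptions, and in a real project you would construct the generated `JobsRunNowRequest` model from this crate instead.

```rust
// Minimal, self-contained sketch; assumes `serde` (with the `derive`
// feature) and `serde_json` as dependencies. Not the generated model,
// just a stand-in that mirrors some of the fields documented above.
use std::collections::HashMap;

use serde::Serialize;

#[derive(Serialize)]
struct JobsRunNowRequest {
    #[serde(skip_serializing_if = "Option::is_none")]
    job_id: Option<i64>,
    #[serde(skip_serializing_if = "Option::is_none")]
    idempotency_token: Option<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    notebook_params: Option<HashMap<String, String>>,
    #[serde(skip_serializing_if = "Option::is_none")]
    jar_params: Option<Vec<String>>,
}

fn main() {
    let request = JobsRunNowRequest {
        job_id: Some(11223344), // hypothetical job ID
        // Reusing the same token on retries guarantees that Databricks
        // launches exactly one run, even if the request is sent twice.
        idempotency_token: Some("8f018174-4792-40d5-bcbc-3e6a527352c8".into()),
        // Passed to the notebook and read via dbutils.widgets.get.
        notebook_params: Some(HashMap::from([
            ("name".into(), "john doe".into()),
            ("age".into(), "35".into()),
        ])),
        // jar_params cannot be combined with notebook_params, so it
        // stays unset (None fields are omitted during serialization).
        jar_params: None,
    };

    // The serialized form is what the 10,000-byte limits above apply to.
    println!("{}", serde_json::to_string_pretty(&request).unwrap());
}
```

The `skip_serializing_if` attributes mirror the `[optional]` notes in the table: unset fields are dropped from the payload rather than sent as `null`.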