AutoML Forecasting¶
Experimental AutoML forecasting components.
Components:

- ForecastingEnsembleOp: Ensembles AutoML Forecasting models.
- ForecastingStage1TunerOp: Searches AutoML Forecasting architectures and selects the top trials.
- ForecastingStage2TunerOp: Tunes AutoML Forecasting models and selects top trials.

-
preview.automl.forecasting.ForecastingEnsembleOp(project: str, location: str, root_dir: str, transform_output: dsl.Input[system.Artifact], metadata: dsl.Input[system.Artifact], tuning_result_input: dsl.Input[system.Artifact], instance_baseline: dsl.Input[system.Artifact], instance_schema_path: dsl.Input[system.Artifact], prediction_image_uri: str, gcp_resources: dsl.OutputPath(str), model_architecture: dsl.Output[system.Artifact], example_instance: dsl.Output[system.Artifact], unmanaged_container_model: dsl.Output[google.UnmanagedContainerModel], explanation_metadata: dsl.OutputPath(dict), explanation_metadata_artifact: dsl.Output[system.Artifact], explanation_parameters: dsl.OutputPath(dict), encryption_spec_key_name: str | None = '')¶
Ensembles AutoML Forecasting models.
- Parameters¶
- project: str¶
Project to run the job in.
- location: str¶
Region to run the job in.
- root_dir: str¶
The Cloud Storage path to store the output.
- transform_output: dsl.Input[system.Artifact]¶
The transform output artifact.
- metadata: dsl.Input[system.Artifact]¶
The tabular example gen metadata.
- tuning_result_input: dsl.Input[system.Artifact]¶
AutoML Tabular tuning result.
- instance_baseline: dsl.Input[system.Artifact]¶
The instance baseline used to calculate explanations.
- instance_schema_path: dsl.Input[system.Artifact]¶
The path to the instance schema, describing the input data for the tf_model at serving time.
- encryption_spec_key_name: str | None = ''¶
Customer-managed encryption key.
- prediction_image_uri: str¶
URI of the Docker image to be used as the container for serving predictions. This URI must identify an image in Artifact Registry or Container Registry.
- Returns¶
gcp_resources: dsl.OutputPath(str)
GCP resources created by this component. For more details, see https://github.com/kubeflow/pipelines/blob/master/components/google-cloud/google_cloud_pipeline_components/proto/README.md.
model_architecture: dsl.Output[system.Artifact]
The architecture of the output model.
unmanaged_container_model: dsl.Output[google.UnmanagedContainerModel]
Model information needed to perform batch prediction.
explanation_metadata: dsl.OutputPath(dict)
The explanation metadata used by Vertex online and batch explanations.
explanation_metadata_artifact: dsl.Output[system.Artifact]
The explanation metadata used by Vertex online and batch explanations in the format of a KFP Artifact.
explanation_parameters: dsl.OutputPath(dict)
The explanation parameters used by Vertex online and batch explanations.
example_instance: dsl.Output[system.Artifact]
An example instance which may be used as an input for predictions.
-
preview.automl.forecasting.ForecastingStage1TunerOp(project: str, location: str, root_dir: str, num_selected_trials: int, deadline_hours: float, num_parallel_trials: int, single_run_max_secs: int, metadata: dsl.Input[system.Artifact], transform_output: dsl.Input[system.Artifact], materialized_train_split: dsl.Input[system.Artifact], materialized_eval_split: dsl.Input[system.Artifact], gcp_resources: dsl.OutputPath(str), tuning_result_output: dsl.Output[system.Artifact], study_spec_parameters_override: list | None = [], worker_pool_specs_override_json: list | None = [], reduce_search_space_mode: str | None = 'regular', encryption_spec_key_name: str | None = '')¶
Searches AutoML Forecasting architectures and selects the top trials.
- Parameters¶
- project: str¶
Project to run hyperparameter tuning.
- location: str¶
Location for running the hyperparameter tuning.
- root_dir: str¶
The Cloud Storage location to store the output.
- study_spec_parameters_override: list | None = []¶
JSON study spec. E.g., [{"parameter_id": "activation", "categorical_value_spec": {"values": ["tanh"]}}]
- worker_pool_specs_override_json: list | None = []¶
JSON worker pool specs. E.g., [{"machine_spec": {"machine_type": "n1-standard-16"}}, {}, {}, {"machine_spec": {"machine_type": "n1-standard-16"}}]
- reduce_search_space_mode: str | None = 'regular'¶
The reduce search space mode. Possible values: "regular" (default), "minimal", "full".
- num_selected_trials: int¶
Number of selected trials. The number of weak learners in the final model is 5 * num_selected_trials.
- deadline_hours: float¶
Number of hours the hyperparameter tuning should run.
- num_parallel_trials: int¶
Number of parallel training trials.
- single_run_max_secs: int¶
Max number of seconds each training trial runs.
- metadata: dsl.Input[system.Artifact]¶
The tabular example gen metadata.
- transform_output: dsl.Input[system.Artifact]¶
The transform output artifact.
- materialized_train_split: dsl.Input[system.Artifact]¶
The materialized train split.
- materialized_eval_split: dsl.Input[system.Artifact]¶
The materialized eval split.
- encryption_spec_key_name: str | None = ''¶
Customer-managed encryption key.
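The two override parameters above are plain JSON-serializable Python lists. A minimal sketch of building them (the specific parameter ID and machine types are illustrative values taken from the examples above, not an exhaustive schema):

```python
import json

# Override the search space for one hyperparameter: restrict the
# "activation" categorical parameter to a single value.
study_spec_parameters_override = [
    {
        "parameter_id": "activation",
        "categorical_value_spec": {"values": ["tanh"]},
    }
]

# Override machine types for the first and fourth worker pools;
# the empty dicts leave the intervening pools at their defaults.
worker_pool_specs_override_json = [
    {"machine_spec": {"machine_type": "n1-standard-16"}},
    {},
    {},
    {"machine_spec": {"machine_type": "n1-standard-16"}},
]

# Both structures must round-trip through JSON cleanly.
assert json.loads(json.dumps(study_spec_parameters_override)) == study_spec_parameters_override
assert json.loads(json.dumps(worker_pool_specs_override_json)) == worker_pool_specs_override_json
```

These lists are passed directly as the `study_spec_parameters_override` and `worker_pool_specs_override_json` arguments of `ForecastingStage1TunerOp`.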
- Returns¶
gcp_resources: dsl.OutputPath(str)
GCP resources created by this component. For more details, see https://github.com/kubeflow/pipelines/blob/master/components/google-cloud/google_cloud_pipeline_components/proto/README.md.
tuning_result_output: dsl.Output[system.Artifact]
The trained model and architectures.
-
preview.automl.forecasting.ForecastingStage2TunerOp(project: str, location: str, root_dir: str, num_selected_trials: int, deadline_hours: float, num_parallel_trials: int, single_run_max_secs: int, metadata: dsl.Input[system.Artifact], transform_output: dsl.Input[system.Artifact], materialized_train_split: dsl.Input[system.Artifact], materialized_eval_split: dsl.Input[system.Artifact], tuning_result_input_path: dsl.Input[system.Artifact], gcp_resources: dsl.OutputPath(str), tuning_result_output: dsl.Output[system.Artifact], worker_pool_specs_override_json: list | None = [], encryption_spec_key_name: str | None = '')¶
Tunes AutoML Forecasting models and selects top trials.
- Parameters¶
- project: str¶
Project to run stage 2 tuner.
- location: str¶
Cloud region for running the component (e.g., us-central1).
- root_dir: str¶
The Cloud Storage location to store the output.
- worker_pool_specs_override_json: list | None = []¶
JSON worker pool specs. E.g., [{"machine_spec": {"machine_type": "n1-standard-16"}}, {}, {}, {"machine_spec": {"machine_type": "n1-standard-16"}}]
- num_selected_trials: int¶
Number of selected trials. The number of weak learners in the final model.
- deadline_hours: float¶
Number of hours the cross-validation trainer should run.
- num_parallel_trials: int¶
Number of parallel training trials.
- single_run_max_secs: int¶
Max number of seconds each training trial runs.
- metadata: dsl.Input[system.Artifact]¶
The forecasting example gen metadata.
- transform_output: dsl.Input[system.Artifact]¶
The transform output artifact.
- materialized_train_split: dsl.Input[system.Artifact]¶
The materialized train split.
- materialized_eval_split: dsl.Input[system.Artifact]¶
The materialized eval split.
- encryption_spec_key_name: str | None = ''¶
Customer-managed encryption key.
- tuning_result_input_path: dsl.Input[system.Artifact]¶
Path to the JSON file of hyperparameter tuning results to use when evaluating models.
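The `deadline_hours`, `num_parallel_trials`, and `single_run_max_secs` parameters jointly bound the tuning budget. A rough back-of-the-envelope estimate of how many trials can fit within those bounds (an approximation for planning purposes only, assuming every trial uses its full time budget; not a guarantee of the service's actual scheduling behavior):

```python
import math

def max_trials_estimate(deadline_hours: float,
                        num_parallel_trials: int,
                        single_run_max_secs: int) -> int:
    """Upper bound on trial count if every trial runs for its full limit."""
    # Number of sequential rounds that fit inside the deadline.
    sequential_rounds = math.floor(deadline_hours * 3600 / single_run_max_secs)
    # Each round runs num_parallel_trials trials at once.
    return sequential_rounds * num_parallel_trials

# E.g., a 2-hour deadline with 5 parallel trials capped at 1800 s each:
# floor(7200 / 1800) = 4 rounds, so at most 20 trials.
print(max_trials_estimate(2.0, 5, 1800))  # → 20
```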
- Returns¶
gcp_resources: dsl.OutputPath(str)
GCP resources created by this component. For more details, see https://github.com/kubeflow/pipelines/blob/master/components/google-cloud/google_cloud_pipeline_components/proto/README.md.
tuning_result_output: dsl.Output[system.Artifact]
The trained (private) model artifact paths and their hyperparameters.