google_cloud_pipeline_components.experimental.forecasting package

Google Cloud Pipeline Experimental Forecasting Components.

google_cloud_pipeline_components.experimental.forecasting.ForecastingPrepareDataForTrainOp(input_tables: str, preprocess_metadata: str, model_feature_columns: str = None)

Prepares the parameters for the training step.

Args:
input_tables (str):

Required. Serialized JSON array that specifies input BigQuery tables and specs.

preprocess_metadata (str):

Required. The output of ForecastingPreprocessingOp that is a serialized dictionary with 2 fields: processed_bigquery_table_uri and column_metadata.

model_feature_columns (str):

Optional. Serialized list of column names that will be used as input features in the training step. If None, all columns will be used in training.

Returns:
NamedTuple:
time_series_identifier_column (str):

Name of the column that identifies the time series.

time_series_attribute_columns (str):

Serialized column names that should be used as attribute columns.

available_at_forecast_columns (str):

Serialized column names of columns that are available at forecast.

unavailable_at_forecast_columns (str):

Serialized column names of columns that are unavailable at forecast.

column_transformations (str):

Serialized transformations to apply to the input columns.

preprocess_bq_uri (str):

The URI of the BigQuery table that stores the preprocessing result and will be used as training input.

target_column (str):

Name of the column whose values the Model is to predict.

time_column (str):

Name of the column that identifies time order in the time series.

predefined_split_column (str):

Name of the column that specifies an ML use of the row.

weight_column (str):

Name of the column that should be used as the weight column.

data_granularity_unit (str):

The data granularity unit.

data_granularity_count (str):

The number of data granularity units between data points in the training data.
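
A minimal sketch of wiring this component into a KFP pipeline, feeding it the output of ForecastingPreprocessingOp (documented below). The pipeline name, project, and the contents of the serialized table specs are illustrative assumptions, not part of this API:

    import json

    from kfp.v2 import dsl
    from google_cloud_pipeline_components.experimental import forecasting

    # Illustrative table specs only; the exact spec schema expected by the
    # forecasting components must be taken from their documentation.
    INPUT_TABLES_JSON = json.dumps(
        [{'bigquery_uri': 'bq://my-project.my_dataset.sales'}])

    @dsl.pipeline(name='forecasting-prepare-data')  # hypothetical pipeline name
    def pipeline(project: str = 'my-project'):
        preprocess = forecasting.ForecastingPreprocessingOp(
            project=project,
            input_tables=INPUT_TABLES_JSON,
        )
        # The preprocess_metadata output of the preprocessing step feeds this
        # component; its own outputs (time_column, target_column,
        # preprocess_bq_uri, ...) are consumed by the training component
        # documented later on this page.
        prepare = forecasting.ForecastingPrepareDataForTrainOp(
            input_tables=INPUT_TABLES_JSON,
            preprocess_metadata=preprocess.outputs['preprocess_metadata'],
        )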

google_cloud_pipeline_components.experimental.forecasting.ForecastingPreprocessingOp()

Preprocesses BigQuery tables for training or prediction.

Creates a BigQuery table for training or prediction based on the input tables. For training, a primary table is required. Optionally, you can include some attribute tables. For prediction, you need to include all the tables that were used in the training, plus a plan table.

Args:
project (str):

The GCP project id that runs the pipeline.

input_tables (str):

Serialized JSON array that specifies input BigQuery tables and specs.

preprocessing_bigquery_dataset (str):

Optional. The BigQuery dataset in which to save the preprocessing result table. If not present, a new dataset will be created by the component.

Returns:

preprocess_metadata (str)
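
A minimal sketch of serializing the input table specs and invoking this component inside a pipeline function. The spec field names (bigquery_uri, table_type) and the table and dataset names are assumptions for illustration; consult the component's expected spec format:

    import json

    from google_cloud_pipeline_components.experimental import forecasting

    # Assumed spec fields, for illustration only.
    input_table_specs = [
        {
            'bigquery_uri': 'bq://my-project.my_dataset.sales',
            'table_type': 'FORECASTING_PRIMARY',
        },
        {
            'bigquery_uri': 'bq://my-project.my_dataset.product',
            'table_type': 'FORECASTING_ATTRIBUTE',
        },
    ]

    # Inside a @dsl.pipeline function:
    preprocess = forecasting.ForecastingPreprocessingOp(
        project='my-project',
        input_tables=json.dumps(input_table_specs),
        # preprocessing_bigquery_dataset='my_dataset',  # optional
    )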

google_cloud_pipeline_components.experimental.forecasting.ForecastingTrainingWithExperimentsOp(display_name: str, dataset: google.cloud.aiplatform.datasets.time_series_dataset.TimeSeriesDataset, target_column: str, time_column: str, time_series_identifier_column: str, unavailable_at_forecast_columns: List[str], available_at_forecast_columns: List[str], forecast_horizon: int, data_granularity_unit: str, data_granularity_count: int, optimization_objective: Optional[str] = None, column_specs: Optional[Dict[str, str]] = None, column_transformations: Optional[List[Dict[str, Dict[str, str]]]] = None, project: Optional[str] = None, location: Optional[str] = None, labels: Optional[Dict[str, str]] = None, training_encryption_spec_key_name: Optional[str] = None, model_encryption_spec_key_name: Optional[str] = None, predefined_split_column_name: Optional[str] = None, weight_column: Optional[str] = None, time_series_attribute_columns: Optional[List[str]] = None, context_window: Optional[int] = None, export_evaluated_data_items: bool = False, export_evaluated_data_items_bigquery_destination_uri: Optional[str] = None, export_evaluated_data_items_override_destination: bool = False, quantiles: Optional[List[float]] = None, validation_options: Optional[str] = None, budget_milli_node_hours: int = 1000, model_display_name: Optional[str] = None, model_labels: Optional[Dict[str, str]] = None, additional_experiments: Optional[List[str]] = None) google.cloud.aiplatform.models.Model

Runs the training job with experiment flags and returns a model. The training data splits are set by default: Roughly 80% will be used for training, 10% for validation, and 10% for test.

Args:
dataset:

Required. The dataset within the same Project from which data will be used to train the Model. The Dataset must use schema compatible with Model being trained, and what is compatible should be described in the used TrainingPipeline’s [training_task_definition] [google.cloud.aiplatform.v1beta1.TrainingPipeline.training_task_definition]. For time series Datasets, all their data is exported to training, to pick and choose from.

target_column:

Required. Name of the column that the Model is to predict values for.

time_column:

Required. Name of the column that identifies time order in the time series.

time_series_identifier_column:

Required. Name of the column that identifies the time series.

unavailable_at_forecast_columns:

Required. Column names of columns that are unavailable at forecast. Each column contains information for the given entity (identified by the [time_series_identifier_column]) that is unknown before the forecast (e.g. population of a city in a given year, or weather on a given day).

available_at_forecast_columns:

Required. Column names of columns that are available at forecast. Each column contains information for the given entity (identified by the [time_series_identifier_column]) that is known at forecast.

forecast_horizon:

Required. The amount of time into the future for which forecasted values for the target are returned. Expressed in the number of units defined by the [data_granularity_unit] and [data_granularity_count] fields. Inclusive.

data_granularity_unit:

Required. The data granularity unit. Accepted values are minute, hour, day, week, month, year.

data_granularity_count:

Required. The number of data granularity units between data points in the training data. If [data_granularity_unit] is minute, can be 1, 5, 10, 15, or 30. For all other values of [data_granularity_unit], must be 1.

predefined_split_column_name:

Optional. The key is a name of one of the Dataset’s data columns. The value of the key (either the label’s value or value in the column) must be one of {TRAIN, VALIDATE, TEST}, and it defines to which set the given piece of data is assigned. If for a piece of data the key is not present or has an invalid value, that piece is ignored by the pipeline.

Supported only for tabular and time series Datasets.

weight_column:

Optional. Name of the column that should be used as the weight column. Higher values in this column give more importance to the row during Model training. The column must have numeric values between 0 and 10000 inclusively, and 0 value means that the row is ignored. If the weight column field is not set, then all rows are assumed to have equal weight of 1.

time_series_attribute_columns:

Optional. Column names that should be used as attribute columns. Each column is constant within a time series.

context_window:

Optional. The amount of time into the past that training and prediction data is used for model training and prediction respectively. Expressed in the number of units defined by the [data_granularity_unit] and [data_granularity_count] fields. When not provided, the default value of 0 is used, which means the model sets the context window of each series to 0 (also known as a “cold start”). Inclusive.

export_evaluated_data_items:

Whether to export the test set predictions to a BigQuery table. If False, then the export is not performed.

export_evaluated_data_items_bigquery_destination_uri:

Optional. URI of desired destination BigQuery table for exported test set predictions.

Expected format: bq://<project_id>:<dataset_id>:<table>

If not specified, then results are exported to the following auto-created BigQuery table: <project_id>:export_evaluated_examples_<model_name>_<yyyy_MM_dd'T'HH_mm_ss_SSS'Z'>.evaluated_examples

Applies only if [export_evaluated_data_items] is True.

export_evaluated_data_items_override_destination:

Whether to override the contents of [export_evaluated_data_items_bigquery_destination_uri], if the table exists, for exported test set predictions. If False, and the table exists, then the training job will fail.

Applies only if [export_evaluated_data_items] is True and [export_evaluated_data_items_bigquery_destination_uri] is specified.

quantiles:

Quantiles to use for the minimize-quantile-loss value of [AutoMLForecastingTrainingJob.optimization_objective]. This argument is required when that objective is selected.

Accepts up to 5 quantiles in the form of a double from 0 to 1, exclusive. Each quantile must be unique.

validation_options:

Validation options for the data validation component. The available options are:

“fail-pipeline” (default) - run the validation and fail the pipeline if it fails.

“ignore-validation” - ignore the validation results and continue the pipeline.

budget_milli_node_hours:

Optional. The train budget for creating this Model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The training cost of the model will not exceed this budget. The final cost will be attempted to be close to the budget, though it may end up being (even) noticeably smaller, at the backend’s discretion. This especially may happen when further model training ceases to provide any improvements. If the budget is set to a value known to be insufficient to train a Model for the given training set, the training won’t be attempted and will error. The minimum value is 1000 and the maximum is 72000.

model_display_name:

Optional. If the script produces a managed Vertex AI Model, the display name of the Model. The name can be up to 128 characters long and can consist of any UTF-8 characters.

If not provided upon creation, the job’s display_name is used.

model_labels:

Optional. The labels with user-defined metadata to organize your Models. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

additional_experiments:

Additional experiment flags for the time series forecasting training.

display_name:

Required. The user-defined name of this TrainingPipeline.

optimization_objective:

Optional. Objective function the model is to be optimized towards. The training process creates a Model that optimizes the value of the objective function over the validation set. The supported optimization objectives:

“minimize-rmse” (default) - Minimize root-mean-squared error (RMSE).

“minimize-mae” - Minimize mean-absolute error (MAE).

“minimize-rmsle” - Minimize root-mean-squared log error (RMSLE).

“minimize-rmspe” - Minimize root-mean-squared percentage error (RMSPE).

“minimize-wape-mae” - Minimize the combination of weighted absolute percentage error (WAPE) and mean-absolute error (MAE).

“minimize-quantile-loss” - Minimize the quantile loss at the defined quantiles. (Set this objective to build quantile forecasts.)

column_specs:

Optional. Alternative to column_transformations where the keys of the dict are column names and their respective values are one of AutoMLTabularTrainingJob.column_data_types. When creating a transformation for a BigQuery Struct column, the column should be flattened using “.” as the delimiter. Only columns with no child should have a transformation. If an input column has no transformations on it, such a column is ignored by the training, except for the targetColumn, which should have no transformations defined on it. Only one of column_transformations or column_specs should be passed.

column_transformations:

Optional. Transformations to apply to the input columns (i.e. columns other than the targetColumn). Each transformation may produce multiple result values from the column’s value, and all are used for training. When creating a transformation for a BigQuery Struct column, the column should be flattened using “.” as the delimiter. Only columns with no child should have a transformation. If an input column has no transformations on it, such a column is ignored by the training, except for the targetColumn, which should have no transformations defined on it. Only one of column_transformations or column_specs should be passed. Consider using column_specs, as column_transformations will be deprecated eventually.

project:

Optional. Project to run training in. Overrides project set in aiplatform.init.

location:

Optional. Location to run training in. Overrides location set in aiplatform.init.

labels:

Optional. The labels with user-defined metadata to organize TrainingPipelines. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

training_encryption_spec_key_name:

Optional. The Cloud KMS resource identifier of the customer managed encryption key used to protect the training pipeline. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.

If set, this TrainingPipeline will be secured by this key.

Note: Model trained by this TrainingPipeline is also secured by this key if model_to_upload is not set separately.

Overrides encryption_spec_key_name set in aiplatform.init.

model_encryption_spec_key_name:

Optional. The Cloud KMS resource identifier of the customer managed encryption key used to protect the model. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.

If set, the trained Model will be secured by this key.

Overrides encryption_spec_key_name set in aiplatform.init.

Returns:

The trained Vertex AI Model resource or None if training did not produce a Vertex AI Model.

Raises:
RuntimeError:

If the Training job has already been run or is waiting to run.
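
A minimal sketch of calling this component in a pipeline, feeding it the outputs of ForecastingPrepareDataForTrainOp and a dataset created with TimeSeriesDatasetCreateOp from google_cloud_pipeline_components.aiplatform. The display names, forecast horizon, quantiles, and the assumption that the prepared (serialized) outputs can be passed directly to the list-typed parameters are illustrative and should be verified for your pipeline:

    from kfp.v2 import dsl
    from google_cloud_pipeline_components import aiplatform as gcc_aip
    from google_cloud_pipeline_components.experimental import forecasting

    @dsl.pipeline(name='forecasting-training')  # hypothetical pipeline name
    def pipeline(input_tables_json: str, project: str):
        preprocess = forecasting.ForecastingPreprocessingOp(
            project=project, input_tables=input_tables_json)
        prepare = forecasting.ForecastingPrepareDataForTrainOp(
            input_tables=input_tables_json,
            preprocess_metadata=preprocess.outputs['preprocess_metadata'])

        # Create a Vertex AI TimeSeriesDataset from the preprocessed BigQuery table.
        dataset_op = gcc_aip.TimeSeriesDatasetCreateOp(
            display_name='forecasting-dataset',  # hypothetical display name
            bq_source=prepare.outputs['preprocess_bq_uri'],
            project=project)

        train_op = forecasting.ForecastingTrainingWithExperimentsOp(
            display_name='forecasting-model',  # hypothetical display name
            dataset=dataset_op.outputs['dataset'],
            target_column=prepare.outputs['target_column'],
            time_column=prepare.outputs['time_column'],
            time_series_identifier_column=prepare.outputs[
                'time_series_identifier_column'],
            time_series_attribute_columns=prepare.outputs[
                'time_series_attribute_columns'],
            available_at_forecast_columns=prepare.outputs[
                'available_at_forecast_columns'],
            unavailable_at_forecast_columns=prepare.outputs[
                'unavailable_at_forecast_columns'],
            column_transformations=prepare.outputs['column_transformations'],
            data_granularity_unit=prepare.outputs['data_granularity_unit'],
            data_granularity_count=prepare.outputs['data_granularity_count'],
            forecast_horizon=30,                # illustrative value
            optimization_objective='minimize-quantile-loss',
            quantiles=[0.1, 0.5, 0.9],          # required for quantile loss
            budget_milli_node_hours=1000,
            project=project)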

google_cloud_pipeline_components.experimental.forecasting.ForecastingValidationOp()

Validates BigQuery tables for training or prediction.

Validates BigQuery tables for training or prediction based on predefined requirements. For training, a primary table is required. Optionally, you can include some attribute tables. For prediction, you need to include all the tables that were used in the training, plus a plan table.

Args:
input_tables (str):

Serialized JSON array that specifies input BigQuery tables and specs.

validation_theme (str):

Theme to use for validating the BigQuery tables. Acceptable values are FORECASTING_TRAINING and FORECASTING_PREDICTION.

Returns:

None
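
A minimal sketch of running this validation ahead of preprocessing inside a pipeline, using a validation_theme value from the description above; the pipeline name and task ordering are illustrative:

    from kfp.v2 import dsl
    from google_cloud_pipeline_components.experimental import forecasting

    @dsl.pipeline(name='forecasting-validation')  # hypothetical pipeline name
    def pipeline(input_tables_json: str, project: str):
        # Validate the serialized table specs against the training requirements.
        validation = forecasting.ForecastingValidationOp(
            input_tables=input_tables_json,
            validation_theme='FORECASTING_TRAINING')

        preprocess = forecasting.ForecastingPreprocessingOp(
            project=project, input_tables=input_tables_json)
        # Run preprocessing only after validation has completed successfully.
        preprocess.after(validation)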