google_cloud_pipeline_components.experimental.forecasting package

Google Cloud Pipeline Experimental Forecasting Components.

google_cloud_pipeline_components.experimental.forecasting.ForecastingPrepareDataForTrainOp(input_tables: str, preprocess_metadata: str, model_feature_columns: str = None)

prepare_data_for_train: Prepares the parameters for the training step.

Args:
input_tables (str):

Required. Serialized JSON array that specifies the input BigQuery tables and their specs.

preprocess_metadata (str):

Required. The output of ForecastingPreprocessingOp: a serialized dictionary with two fields, processed_bigquery_table_uri and column_metadata.

model_feature_columns (str):

Optional. Serialized list of column names to use as input features in the training step. If None, all columns are used in training.

Returns:
NamedTuple:
time_series_identifier_column (str):

Name of the column that identifies the time series.

time_series_attribute_columns (str):

Serialized column names that should be used as attribute columns.

available_at_forecast_columns (str):

Serialized list of column names whose values are available at forecast time.

unavailable_at_forecast_columns (str):

Serialized list of column names whose values are unavailable at forecast time.

column_transformations (str):

Serialized transformations to apply to the input columns.

preprocess_bq_uri (str):

URI of the BigQuery table that stores the preprocessing result and will be used as the training input.

target_column (str):

Name of the column whose values the model is to predict.

time_column (str):

Name of the column that identifies time order in the time series.

predefined_split_column (str):

Name of the column that specifies the ML use of each row (for example, training, validation, or test).

weight_column (str):

Name of the column that should be used as the weight column.

data_granularity_unit (str):

The unit of data granularity (for example, an hour or a day).

data_granularity_count (str):

The number of data granularity units between data points in the training data.
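Because input_tables and model_feature_columns are passed as serialized JSON strings, callers typically build them with json.dumps. A minimal sketch; the table-spec field names (bigquery_table_uri, table_type) and the type values are illustrative assumptions, not part of this reference:

```python
import json

# Hypothetical table specs: the field names and type values below are
# assumptions for illustration only.
input_tables = json.dumps([
    {
        "bigquery_table_uri": "bq://my-project.my_dataset.sales",
        "table_type": "FORECASTING_PRIMARY",    # assumed type value
    },
    {
        "bigquery_table_uri": "bq://my-project.my_dataset.stores",
        "table_type": "FORECASTING_ATTRIBUTE",  # assumed type value
    },
])

# Serialized list of column names to use as input features in training.
model_feature_columns = json.dumps(["price", "promo", "store_size"])

print(input_tables)
print(model_feature_columns)
```

Both strings can then be passed directly as the component's arguments; downstream code recovers the structures with json.loads.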

google_cloud_pipeline_components.experimental.forecasting.ForecastingPreprocessingOp()

forecasting_preprocessing: Preprocesses BigQuery tables for training or prediction.

Creates a BigQuery table for training or prediction based on the input tables. For training, a primary table is required. Optionally, you can include some attribute tables. For prediction, you need to include all the tables that were used in the training, plus a plan table.

Args:
project (str):

The GCP project ID that runs the pipeline.

input_tables (str):

Serialized JSON array that specifies the input BigQuery tables and their specs.

preprocessing_bigquery_dataset (str):

Optional. BigQuery dataset in which to save the preprocessing result table. If not present, the component creates a new dataset.

location (str):

Optional. Location for the BigQuery data; defaults to US.

Returns:

preprocess_metadata (str):

A serialized dictionary with two fields: processed_bigquery_table_uri and column_metadata.
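Downstream steps such as ForecastingPrepareDataForTrainOp consume preprocess_metadata as a serialized dictionary with the two fields named above. A minimal sketch of parsing it; the payload values and the shape of column_metadata are made up for illustration:

```python
import json

# Example payload shaped like the documented output. The URI and the
# internal structure of column_metadata are illustrative assumptions.
preprocess_metadata = json.dumps({
    "processed_bigquery_table_uri": "bq://my-project.my_dataset.processed",
    "column_metadata": {"sales": {"type": "FLOAT"}},  # assumed shape
})

metadata = json.loads(preprocess_metadata)
table_uri = metadata["processed_bigquery_table_uri"]
print(table_uri)
```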

google_cloud_pipeline_components.experimental.forecasting.ForecastingValidationOp()

forecasting_validation: Validates BigQuery tables for training or prediction.

Validates BigQuery tables for training or prediction based on predefined requirements. For training, a primary table is required. Optionally, you can include some attribute tables. For prediction, you need to include all the tables that were used in the training, plus a plan table.

Args:
input_tables (str):

Serialized JSON array that specifies the input BigQuery tables and their specs.

validation_theme (str):

Theme to use for validating the BigQuery tables. Acceptable values are FORECASTING_TRAINING and FORECASTING_PREDICTION.

location (str):

Optional. Location for the BigQuery data; defaults to US.

Returns:

None
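For prediction, the description above requires every table used in training plus a plan table. A minimal sketch of assembling the two serialized arguments for a prediction-time validation; the table-spec field names and type values are illustrative assumptions, while the two validation_theme values come from this reference:

```python
import json

# For FORECASTING_PREDICTION, input_tables must include every table used in
# training plus a plan table. Field names and type values are assumptions.
prediction_tables = json.dumps([
    {"bigquery_table_uri": "bq://my-project.my_dataset.sales",
     "table_type": "FORECASTING_PRIMARY"},     # assumed type value
    {"bigquery_table_uri": "bq://my-project.my_dataset.stores",
     "table_type": "FORECASTING_ATTRIBUTE"},   # assumed type value
    {"bigquery_table_uri": "bq://my-project.my_dataset.plan",
     "table_type": "FORECASTING_PLAN"},        # assumed type value
])

# One of the two documented acceptable values.
validation_theme = "FORECASTING_PREDICTION"

print(len(json.loads(prediction_tables)))
```

The same serialized prediction_tables string would also be passed to ForecastingPreprocessingOp when preparing prediction data.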