Forecasting Intermittent Demand

In this tutorial, we show how to use TimeGPT on an intermittent series, where many of the values are zero. Here, we use a subset of the M5 dataset that tracks the demand for food items in a Californian store. The dataset also includes exogenous variables like the sell price and the type of event occurring on a particular day.

TimeGPT achieves the best performance, with a MAE of 0.49, which represents a 14% improvement over the best statistical model specifically built to handle intermittent time series data.

Predicting with TimeGPT took about 7 seconds, while fitting and predicting with the statistical models took about 5 seconds. TimeGPT is technically slower, but for a difference of roughly 2 seconds, we get much better predictions.

Initial setup

We start off by importing the required packages for this tutorial and creating an instance of NixtlaClient.

import time
import pandas as pd
import numpy as np

from nixtla import NixtlaClient

from utilsforecast.losses import mae
from utilsforecast.evaluation import evaluate
nixtla_client = NixtlaClient(
    # defaults to os.environ.get("NIXTLA_API_KEY")
    api_key = 'my_api_key_provided_by_nixtla'
)
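
You can optionally check that the key is valid with the client's validate_api_key method (a quick sanity check; it logs the result and returns a boolean):

nixtla_client.validate_api_key()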

👍

Use an Azure AI endpoint

To use an Azure AI endpoint, remember to also set the base_url argument:

nixtla_client = NixtlaClient(base_url="your azure ai endpoint", api_key="your api_key")

We now read the dataset and plot it.

df = pd.read_csv("https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/m5_sales_exog_small.csv")
df['ds'] = pd.to_datetime(df['ds'])

df.head()
| | unique_id | ds | y | sell_price | event_type_Cultural | event_type_National | event_type_Religious | event_type_Sporting |
|---|---|---|---|---|---|---|---|---|
| 0 | FOODS_1_001 | 2011-01-29 | 3 | 2.0 | 0 | 0 | 0 | 0 |
| 1 | FOODS_1_001 | 2011-01-30 | 0 | 2.0 | 0 | 0 | 0 | 0 |
| 2 | FOODS_1_001 | 2011-01-31 | 0 | 2.0 | 0 | 0 | 0 | 0 |
| 3 | FOODS_1_001 | 2011-02-01 | 1 | 2.0 | 0 | 0 | 0 | 0 |
| 4 | FOODS_1_001 | 2011-02-02 | 4 | 2.0 | 0 | 0 | 0 | 0 |
nixtla_client.plot(
    df, 
    max_insample_length=365, 
)

In the figure above, we can see the intermittent nature of this dataset, with many periods with zero demand.
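
To quantify this intermittency, we can, for example, compute the share of zero-demand days per series; a minimal sketch using the df loaded above:

# Fraction of days with zero demand for each product
zero_share = df['y'].eq(0).groupby(df['unique_id']).mean()
print(zero_share.sort_values(ascending=False).head())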

Now, let's use TimeGPT to forecast the demand of each product.

Bounded forecasts

To avoid getting negative predictions from the model, we apply a log transformation to the data. That way, after inverting the transformation, the point forecasts are kept non-negative.

Note that due to the presence of zeros in our dataset, we add one to all points before taking the log.

df_transformed = df.copy()

df_transformed['y'] = np.log(df_transformed['y']+1)

df_transformed.head()
| | unique_id | ds | y | sell_price | event_type_Cultural | event_type_National | event_type_Religious | event_type_Sporting |
|---|---|---|---|---|---|---|---|---|
| 0 | FOODS_1_001 | 2011-01-29 | 1.386294 | 2.0 | 0 | 0 | 0 | 0 |
| 1 | FOODS_1_001 | 2011-01-30 | 0.000000 | 2.0 | 0 | 0 | 0 | 0 |
| 2 | FOODS_1_001 | 2011-01-31 | 0.000000 | 2.0 | 0 | 0 | 0 | 0 |
| 3 | FOODS_1_001 | 2011-02-01 | 0.693147 | 2.0 | 0 | 0 | 0 | 0 |
| 4 | FOODS_1_001 | 2011-02-02 | 1.609438 | 2.0 | 0 | 0 | 0 | 0 |
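
As a side note, NumPy's log1p and expm1 implement exactly this transform and its inverse, with better numerical stability near zero; a quick check:

y = np.array([0.0, 1.0, 4.0])
assert np.allclose(np.log1p(y), np.log(y + 1))  # same forward transform
assert np.allclose(np.expm1(np.log1p(y)), y)    # exact round trip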

Now, let's keep the last 28 time steps of each series for the test set and use the rest as input to the model.

test_df = df_transformed.groupby('unique_id').tail(28)                                                      

input_df = df_transformed.drop(test_df.index).reset_index(drop=True)
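
As a quick sanity check, each series should now contribute exactly 28 rows to the test set, and (assuming all series share the same date range, as in this M5 subset) the input should end right before the test window starts:

assert (test_df.groupby('unique_id').size() == 28).all()
assert input_df['ds'].max() < test_df['ds'].min()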

Forecasting with TimeGPT

start = time.time()

fcst_df = nixtla_client.forecast(
    df=input_df,
    h=28,                            
    level=[80],                        # Generate an 80% prediction interval
    finetune_steps=10,                 # Specify the number of steps for fine-tuning
    finetune_loss='mae',               # Use the MAE as the loss function for fine-tuning
    model='timegpt-1-long-horizon',    # Use the model for long-horizon forecasting
    time_col='ds',
    target_col='y',
    id_col='unique_id'
)

end = time.time()

timegpt_duration = end - start

print(f"Time (TimeGPT): {timegpt_duration}")
INFO:nixtla.nixtla_client:Validating inputs...
INFO:nixtla.nixtla_client:Preprocessing dataframes...
INFO:nixtla.nixtla_client:Inferred freq: D
INFO:nixtla.nixtla_client:Calling Forecast Endpoint...

Time (TimeGPT): 6.164413213729858

📘

Available models in Azure AI

If you are using an Azure AI endpoint, please be sure to set model="azureai":

nixtla_client.forecast(..., model="azureai")

For the public API, we support two models: timegpt-1 and timegpt-1-long-horizon.

By default, timegpt-1 is used. Please see this tutorial on how and when to use timegpt-1-long-horizon.

Great! TimeGPT was done in about 6.2 seconds and we now have predictions. However, those predictions are on the log scale, so we need to invert the transformation to get back to the original scale. To do so, we take the exponential of each data point and subtract one.

cols = [col for col in fcst_df.columns if col not in ['ds', 'unique_id']]

for col in cols:
    fcst_df[col] = np.exp(fcst_df[col])-1

fcst_df.head()
| | unique_id | ds | TimeGPT | TimeGPT-lo-80 | TimeGPT-hi-80 |
|---|---|---|---|---|---|
| 0 | FOODS_1_001 | 2016-05-23 | 0.286841 | -0.267101 | 1.259465 |
| 1 | FOODS_1_001 | 2016-05-24 | 0.320482 | -0.241236 | 1.298046 |
| 2 | FOODS_1_001 | 2016-05-25 | 0.287392 | -0.362250 | 1.598791 |
| 3 | FOODS_1_001 | 2016-05-26 | 0.295326 | -0.145489 | 0.963542 |
| 4 | FOODS_1_001 | 2016-05-27 | 0.315868 | -0.166516 | 1.077437 |

Evaluation

Before measuring the performance metric, let's plot the predictions against the actual values.

nixtla_client.plot(test_df, fcst_df, models=['TimeGPT'], level=[80], time_col='ds', target_col='y')

Finally, we can measure the mean absolute error (MAE) of the model.
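
As a reminder, for a single series and a forecast horizon of $H$ steps, the MAE is

$$\mathrm{MAE} = \frac{1}{H} \sum_{t=1}^{H} \lvert y_t - \hat{y}_t \rvert$$

evaluate computes this per series, and averaging across series gives the single number reported below.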

fcst_df['ds'] = pd.to_datetime(fcst_df['ds'])

test_df = pd.merge(test_df, fcst_df, 'left', ['unique_id', 'ds'])
evaluation = evaluate(
    test_df,
    metrics=[mae],
    models=["TimeGPT"],
    target_col="y",
    id_col='unique_id'
)

average_metrics = evaluation.groupby('metric')['TimeGPT'].mean()
average_metrics
metric
mae    0.492559
Name: TimeGPT, dtype: float64

Forecasting with statistical models

The statsforecast library by Nixtla provides a suite of statistical models specifically built for intermittent demand forecasting, such as Croston, IMAPA, and TSB. Let's use these models and see how they perform against TimeGPT.
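
As a rough sketch of the idea behind these methods (not the exact statsforecast implementation): Croston-type methods split the series into the nonzero demand sizes $z_t$ and the intervals $p_t$ between them, smooth each with simple exponential smoothing, and forecast

$$\hat{y}_{t+1} = \frac{\hat{z}_t}{\hat{p}_t}$$

TSB instead smooths the probability of a demand occurring, $\hat{d}_t$, and forecasts $\hat{y}_{t+1} = \hat{d}_t \, \hat{z}_t$.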

from statsforecast import StatsForecast
from statsforecast.models import CrostonClassic, CrostonOptimized, IMAPA, TSB

Here, we use four models: two versions of Croston, IMAPA, and TSB.

models = [
    CrostonClassic(),                # Croston's method with a fixed smoothing parameter
    CrostonOptimized(),              # Croston's method with an optimized smoothing parameter
    IMAPA(),                         # combines forecasts from multiple temporal aggregation levels
    TSB(alpha_d=0.1, alpha_p=0.1),   # smoothing for demand size and demand probability
]

sf = StatsForecast(
    models=models,
    freq='D',
    n_jobs=-1
)

Then, we can fit the models on our data.

start = time.time()

sf.fit(df=input_df)

sf_preds = sf.predict(h=28)

end = time.time()

sf_duration = end - start

print(f"Statistical models took :{sf_duration}s")

Here, fitting and predicting with four statistical models took 5.2 seconds, while TimeGPT took about 6.2 seconds, so TimeGPT was only about one second slower.

Again, we need to invert the transformation, since the training data was log-transformed.

cols = [col for col in sf_preds.columns if col not in ['ds', 'unique_id']]

for col in cols:
    sf_preds[col] = np.exp(sf_preds[col])-1

sf_preds.head()
| unique_id | ds | CrostonClassic | CrostonOptimized | IMAPA | TSB |
|---|---|---|---|---|---|
| FOODS_1_001 | 2016-05-23 | 0.599093 | 0.599093 | 0.445779 | 0.396258 |
| FOODS_1_001 | 2016-05-24 | 0.599093 | 0.599093 | 0.445779 | 0.396258 |
| FOODS_1_001 | 2016-05-25 | 0.599093 | 0.599093 | 0.445779 | 0.396258 |
| FOODS_1_001 | 2016-05-26 | 0.599093 | 0.599093 | 0.445779 | 0.396258 |
| FOODS_1_001 | 2016-05-27 | 0.599093 | 0.599093 | 0.445779 | 0.396258 |

Evaluation

Notice that these models produce a flat forecast profile: the prediction is constant over the entire horizon. Now, let's combine the predictions from all methods and see which performs best.

test_df = pd.merge(test_df, sf_preds, 'left', ['unique_id', 'ds'])
test_df.head()
| | unique_id | ds | y | sell_price | event_type_Cultural | event_type_National | event_type_Religious | event_type_Sporting | TimeGPT | TimeGPT-lo-80 | TimeGPT-hi-80 | CrostonClassic | CrostonOptimized | IMAPA | TSB |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | FOODS_1_001 | 2016-05-23 | 1.386294 | 2.24 | 0 | 0 | 0 | 0 | 0.286841 | -0.267101 | 1.259465 | 0.599093 | 0.599093 | 0.445779 | 0.396258 |
| 1 | FOODS_1_001 | 2016-05-24 | 0.000000 | 2.24 | 0 | 0 | 0 | 0 | 0.320482 | -0.241236 | 1.298046 | 0.599093 | 0.599093 | 0.445779 | 0.396258 |
| 2 | FOODS_1_001 | 2016-05-25 | 0.000000 | 2.24 | 0 | 0 | 0 | 0 | 0.287392 | -0.362250 | 1.598791 | 0.599093 | 0.599093 | 0.445779 | 0.396258 |
| 3 | FOODS_1_001 | 2016-05-26 | 0.000000 | 2.24 | 0 | 0 | 0 | 0 | 0.295326 | -0.145489 | 0.963542 | 0.599093 | 0.599093 | 0.445779 | 0.396258 |
| 4 | FOODS_1_001 | 2016-05-27 | 1.945910 | 2.24 | 0 | 0 | 0 | 0 | 0.315868 | -0.166516 | 1.077437 | 0.599093 | 0.599093 | 0.445779 | 0.396258 |
evaluation = evaluate(
    test_df,
    metrics=[mae],
    models=["TimeGPT", "CrostonClassic", "CrostonOptimized", "IMAPA", "TSB"],
    target_col="y",
    id_col='unique_id'
)

average_metrics = evaluation.groupby('metric')[["TimeGPT", "CrostonClassic", "CrostonOptimized", "IMAPA", "TSB"]].mean()
average_metrics
| metric | TimeGPT | CrostonClassic | CrostonOptimized | IMAPA | TSB |
|---|---|---|---|---|---|
| mae | 0.492559 | 0.564563 | 0.580922 | 0.571943 | 0.567178 |

In the table above, we can see that TimeGPT achieves the lowest MAE, a 12.8% improvement over the best-performing statistical model.
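
The improvement is measured relative to the best statistical model, CrostonClassic; as a quick check using the average_metrics DataFrame from above:

best_stat = average_metrics.loc['mae', 'CrostonClassic']
timegpt_mae = average_metrics.loc['mae', 'TimeGPT']
print(f"Improvement: {(best_stat - timegpt_mae) / best_stat:.1%}")  # ~12.8%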

Now, this was done without using any of the available exogenous features. While these statistical models do not support them, let's try including them in TimeGPT.

Forecasting with exogenous variables using TimeGPT

To forecast with exogenous variables, we need to specify their future values over the forecast horizon. Therefore, let's simply use the event types, since those dates are known in advance.

futr_exog_df = test_df.drop(["TimeGPT", "CrostonClassic", "CrostonOptimized", "IMAPA", "TSB", "y", "TimeGPT-lo-80", "TimeGPT-hi-80", "sell_price"], axis=1)
futr_exog_df.head()
| | unique_id | ds | event_type_Cultural | event_type_National | event_type_Religious | event_type_Sporting |
|---|---|---|---|---|---|---|
| 0 | FOODS_1_001 | 2016-05-23 | 0 | 0 | 0 | 0 |
| 1 | FOODS_1_001 | 2016-05-24 | 0 | 0 | 0 | 0 |
| 2 | FOODS_1_001 | 2016-05-25 | 0 | 0 | 0 | 0 |
| 3 | FOODS_1_001 | 2016-05-26 | 0 | 0 | 0 | 0 |
| 4 | FOODS_1_001 | 2016-05-27 | 0 | 0 | 0 | 0 |
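
Equivalently, instead of dropping columns, we can select the ones we want to keep explicitly, which is a bit more robust if the schema changes:

futr_exog_df = test_df[[
    'unique_id', 'ds',
    'event_type_Cultural', 'event_type_National',
    'event_type_Religious', 'event_type_Sporting',
]]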

Then, we simply call the forecast method and pass futr_exog_df to the X_df parameter.

start = time.time()

fcst_df = nixtla_client.forecast(
    df=input_df,
    X_df=futr_exog_df,
    h=28,                            
    level=[80],                        # Generate an 80% prediction interval
    finetune_steps=10,                 # Specify the number of steps for fine-tuning
    finetune_loss='mae',               # Use the MAE as the loss function for fine-tuning
    model='timegpt-1-long-horizon',    # Use the model for long-horizon forecasting
    time_col='ds',
    target_col='y',
    id_col='unique_id'
)

end = time.time()

timegpt_duration = end - start

print(f"Time (TimeGPT): {timegpt_duration}")
INFO:nixtla.nixtla_client:Validating inputs...
INFO:nixtla.nixtla_client:Preprocessing dataframes...
INFO:nixtla.nixtla_client:Inferred freq: D
INFO:nixtla.nixtla_client:Using the following exogenous variables: event_type_Cultural, event_type_National, event_type_Religious, event_type_Sporting
INFO:nixtla.nixtla_client:Calling Forecast Endpoint...

Time (TimeGPT): 7.173351287841797

📘

Available models in Azure AI

If you are using an Azure AI endpoint, please be sure to set model="azureai":

nixtla_client.forecast(..., model="azureai")

For the public API, we support two models: timegpt-1 and timegpt-1-long-horizon.

By default, timegpt-1 is used. Please see this tutorial on how and when to use timegpt-1-long-horizon.

Great! Remember that the predictions are transformed, so we have to invert the transformation again.

fcst_df.rename(columns={
    'TimeGPT': 'TimeGPT_ex',
}, inplace=True)

cols = [col for col in fcst_df.columns if col not in ['ds', 'unique_id']]

for col in cols:
    fcst_df[col] = np.exp(fcst_df[col])-1

fcst_df.head()
| | unique_id | ds | TimeGPT_ex | TimeGPT-lo-80 | TimeGPT-hi-80 |
|---|---|---|---|---|---|
| 0 | FOODS_1_001 | 2016-05-23 | 0.281922 | -0.269902 | 1.250828 |
| 1 | FOODS_1_001 | 2016-05-24 | 0.313774 | -0.245091 | 1.286372 |
| 2 | FOODS_1_001 | 2016-05-25 | 0.285639 | -0.363119 | 1.595252 |
| 3 | FOODS_1_001 | 2016-05-26 | 0.295037 | -0.145679 | 0.963104 |
| 4 | FOODS_1_001 | 2016-05-27 | 0.315484 | -0.166760 | 1.076830 |

Evaluation

Finally, let's evaluate the performance of TimeGPT with exogenous features.

test_df['TimeGPT_ex'] = fcst_df['TimeGPT_ex'].values
test_df.head()
| | unique_id | ds | y | sell_price | event_type_Cultural | event_type_National | event_type_Religious | event_type_Sporting | TimeGPT | TimeGPT-lo-80 | TimeGPT-hi-80 | CrostonClassic | CrostonOptimized | IMAPA | TSB | TimeGPT_ex |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | FOODS_1_001 | 2016-05-23 | 1.386294 | 2.24 | 0 | 0 | 0 | 0 | 0.286841 | -0.267101 | 1.259465 | 0.599093 | 0.599093 | 0.445779 | 0.396258 | 0.281922 |
| 1 | FOODS_1_001 | 2016-05-24 | 0.000000 | 2.24 | 0 | 0 | 0 | 0 | 0.320482 | -0.241236 | 1.298046 | 0.599093 | 0.599093 | 0.445779 | 0.396258 | 0.313774 |
| 2 | FOODS_1_001 | 2016-05-25 | 0.000000 | 2.24 | 0 | 0 | 0 | 0 | 0.287392 | -0.362250 | 1.598791 | 0.599093 | 0.599093 | 0.445779 | 0.396258 | 0.285639 |
| 3 | FOODS_1_001 | 2016-05-26 | 0.000000 | 2.24 | 0 | 0 | 0 | 0 | 0.295326 | -0.145489 | 0.963542 | 0.599093 | 0.599093 | 0.445779 | 0.396258 | 0.295037 |
| 4 | FOODS_1_001 | 2016-05-27 | 1.945910 | 2.24 | 0 | 0 | 0 | 0 | 0.315868 | -0.166516 | 1.077437 | 0.599093 | 0.599093 | 0.445779 | 0.396258 | 0.315484 |
evaluation = evaluate(
    test_df,
    metrics=[mae],
    models=["TimeGPT", "CrostonClassic", "CrostonOptimized", "IMAPA", "TSB", "TimeGPT_ex"],
    target_col="y",
    id_col='unique_id'
)

average_metrics = evaluation.groupby('metric')[["TimeGPT", "CrostonClassic", "CrostonOptimized", "IMAPA", "TSB", "TimeGPT_ex"]].mean()
average_metrics
| metric | TimeGPT | CrostonClassic | CrostonOptimized | IMAPA | TSB | TimeGPT_ex |
|---|---|---|---|---|---|---|
| mae | 0.492559 | 0.564563 | 0.580922 | 0.571943 | 0.567178 | 0.485352 |

From the table above, we can see that using the exogenous features improved the performance of TimeGPT. It now represents a 14% improvement over the best statistical model.

Using TimeGPT with exogenous features took about 7.2 seconds. This is about 2 seconds slower than the statistical models, but it resulted in much better predictions.