Running and Verification
Write an inference script to check whether the model can be used properly for inference.
- Create and edit the test script dlrm_test.py.
- Create dlrm_test.py.
vi dlrm_test.py
- Press i to enter insert mode and add the following content to the dlrm_test.py file:
import logging
import numpy as np
import tensorflow as tf
from tensorflow.data import Dataset
from noddlrm.recommenders import DLRM

def load_criteo(dataset_folder='dataset/'):
    with np.load(dataset_folder + 'criteo/kaggle_processed.npz') as data:
        X_int = data["X_int"]
        X_cat = data["X_cat"]
        y = data["y"]
        counts = data["counts"]
    indices = np.arange(len(y))
    indices = np.array_split(indices, 7)
    test_indices = indices[-1]
    val_indices, test_indices = np.array_split(test_indices, 2)
    raw_data = dict()
    raw_data['counts'] = counts
    raw_data['X_cat_test'] = X_cat[test_indices]
    raw_data['X_int_test'] = np.log(X_int[test_indices] + 1).astype(np.float32)
    raw_data['y_test'] = y[test_indices]
    return raw_data

@tf.function
def eval_step(dense_features, sparse_features, label):
    pred = dlrm_model.inference(dense_features, sparse_features)
    auc.update_state(y_true=label, y_pred=pred)

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO, format='%(message)s')
    batch_size = 1024
    raw_data = load_criteo('../../dataset/')
    test_dataset = Dataset.from_tensor_slices({
        'dense_features': raw_data['X_int_test'][:],
        'sparse_features': raw_data['X_cat_test'][:],
        'label': raw_data['y_test'][:]
    }).batch(batch_size).prefetch(1)
    dlrm_model = DLRM(
        m_spa=4,
        ln_emb=raw_data['counts'],
        ln_bot=[8, 4],
        ln_top=[128, 64, 1]
    )
    dlrm_model.load_weights('mymodel')
    auc = tf.keras.metrics.AUC()
    for _, batch_data in enumerate(test_dataset):
        eval_step(**batch_data)
    logging.info('auc: %s', auc.result().numpy())
- Press Esc, type :wq, and press Enter to save the file and exit.
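The index-splitting logic in load_criteo can be sketched in isolation: the sample indices are divided into 7 contiguous chunks, the last chunk is reserved for evaluation, and that chunk is halved into validation and test indices. The sample count below (700) is made up for illustration only; the real Criteo dataset is much larger.

```python
import numpy as np

# Illustrative sketch of the split performed inside load_criteo.
# 700 is a hypothetical sample count chosen so the chunks divide evenly.
indices = np.arange(700)
chunks = np.array_split(indices, 7)              # 7 chunks of 100 indices each
test_indices = chunks[-1]                        # last chunk reserved for evaluation
val_indices, test_indices = np.array_split(test_indices, 2)

print(len(val_indices), len(test_indices))       # 50 50
print(test_indices[0], test_indices[-1])         # 650 699
```

Because the split is positional rather than random, the same test set is produced on every run, which is what makes the inference results reproducible.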
- Run the test.
python dlrm_test.py

If the test program reports no errors and an AUC value is displayed, DLRM inference works correctly. The AUC value depends on the model trained in Training the DLRM, so your AUC and command output do not need to match the figure in this document exactly. However, running inference multiple times with the same model must always produce the same result.
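For intuition about what the reported metric means, here is a rank-based (Mann-Whitney) AUC computed in pure NumPy. This is a simplified sketch: tf.keras.metrics.AUC used in dlrm_test.py estimates the same quantity via thresholded bucketing, so its value can differ slightly, and this sketch does not handle tied scores. The toy labels and predictions are made up.

```python
import numpy as np

def auc_score(y_true, y_pred):
    """Rank-based AUC: probability a random positive outranks a random negative."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    order = np.argsort(y_pred)
    ranks = np.empty(len(y_pred))
    ranks[order] = np.arange(1, len(y_pred) + 1)   # ranks 1..n by predicted score
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    # Sum of positive ranks, minus the minimum possible sum, over all pos/neg pairs.
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Hypothetical labels and model scores for illustration.
y_true = [0, 0, 1, 1]
y_pred = [0.1, 0.4, 0.35, 0.8]
print(auc_score(y_true, y_pred))  # 0.75
```

Like the test script's evaluation loop, this computation is deterministic: the same model outputs always yield the same AUC, which is why repeated inference runs with the same weights must agree.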