
Generating Datasets

Generating a CP10M1K Dataset

  1. Modify the HiBench configuration file.
    1. Go to the HiBench-HiBench-7.0/conf directory.
      cd /HiBench-HiBench-7.0/conf
      
    2. Open the /HiBench-HiBench-7.0/conf/workloads/ml/svd.conf configuration file.
      vi /HiBench-HiBench-7.0/conf/workloads/ml/svd.conf
      
    3. Press i to enter the insert mode and modify the file as follows:
      hibench.svd.bigdata.examples            10000000
      hibench.svd.bigdata.features            1000
      hibench.workload.input                  ${hibench.hdfs.data.dir}/CP10M1K
      
    4. Press Esc, type :wq!, and press Enter to save the file and exit.
    5. Go to the HiBench-HiBench-7.0/conf directory and modify the /HiBench-HiBench-7.0/conf/hibench.conf configuration file.
      cd /HiBench-HiBench-7.0/conf
      vi hibench.conf
      
    6. Press i to enter the insert mode and modify the file as follows:

    7. Press Esc, type :wq!, and press Enter to save the file and exit.
  2. Generate a dataset.
    1. Create a directory for storing the generated data in HDFS.
      hdfs dfs -mkdir -p /tmp/ml/dataset/
      
    2. Go to the path where the execution script resides.
      cd /HiBench-HiBench-7.0/bin/workloads/ml/svd/prepare/
      
    3. Run the script to generate the CP10M1K dataset.
      sh prepare.sh
      
    4. View the result.
      hadoop fs -ls /HiBench/CP10M1K
      
    5. If a permission error is reported during generation, change the permissions on the corresponding local directories (as the root user) and on the corresponding HDFS directories.
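The permission fix typically looks like the following sketch; the directory names are examples only and must be replaced with your actual HiBench installation and HDFS data directories.

```shell
# Illustrative permission fix; the paths below are examples, not prescribed values.
# HDFS side (requires a running cluster, shown as comments):
#   hdfs dfs -chmod -R 777 /HiBench
#   hdfs dfs -chmod -R 777 /tmp/ml/dataset
# Local side, as the root user; demonstrated here on a scratch directory
# standing in for the HiBench installation directory:
mkdir -p /tmp/hibench_perm_demo
chmod -R 755 /tmp/hibench_perm_demo
ls -ld /tmp/hibench_perm_demo   # now drwxr-xr-x
```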
  3. Create a folder in HDFS.
    hadoop fs -mkdir -p /tmp/ml/dataset
    
  4. Start spark-shell.
    spark-shell
    
  5. Run the following command to enter paste mode (after pasting the code in the next step, press Ctrl+D to run it):
    :paste
    
  6. Execute the following code to process the dataset:
    import org.apache.spark.internal.Logging
    import org.apache.spark.ml.linalg.SQLDataTypes.VectorType
    import org.apache.spark.ml.linalg.{Matrix, Vectors}
    import org.apache.spark.mllib.linalg.DenseVector
    import org.apache.spark.sql.{Row, SparkSession}
    import org.apache.spark.sql.types.{StructField, StructType}
    import org.apache.spark.storage.StorageLevel
    val dataPath = "/HiBench/CP10M1K"
    val outputPath = "/tmp/ml/dataset/CP10M1K"
    spark
    .sparkContext
    .objectFile[DenseVector](dataPath)
    .map(row => Vectors.dense(row.values).toArray.map{u=>f"$u%.2f"}.mkString(","))
    .saveAsTextFile(outputPath)
    
  7. Check the HDFS directory to view the result.
    hadoop fs -ls /tmp/ml/dataset/CP10M1K
    
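As a sanity check, each line of the converted output should contain 1000 comma-separated values, one per feature (matching hibench.svd.bigdata.features above). A sketch of the check; the HDFS command is commented out because it needs the running cluster, and a local three-value sample line illustrates the same field count:

```shell
# On the cluster:
#   hadoop fs -cat /tmp/ml/dataset/CP10M1K/part-00000 | head -n 1 | awk -F',' '{print NF}'
# The same field-count check, illustrated locally:
printf '0.10,0.20,0.30\n' | awk -F',' '{print NF}'   # prints 3
```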

Generating a CP2M5K Dataset

  1. Modify the HiBench configuration file.
    1. Go to the HiBench-HiBench-7.0/conf directory.
      cd /HiBench-HiBench-7.0/conf
      
    2. Open the /HiBench-HiBench-7.0/conf/workloads/ml/svd.conf configuration file.
      vi /HiBench-HiBench-7.0/conf/workloads/ml/svd.conf
      
    3. Press i to enter the insert mode and modify the file as follows:
      hibench.svd.bigdata.examples            2000000
      hibench.svd.bigdata.features            5000
      hibench.workload.input                  ${hibench.hdfs.data.dir}/CP2M5K
      
    4. Press Esc, type :wq!, and press Enter to save the file and exit.
    5. Go to the HiBench-HiBench-7.0/conf directory.
      cd /HiBench-HiBench-7.0/conf
      
    6. Open the /HiBench-HiBench-7.0/conf/hibench.conf configuration file.
      vi hibench.conf
      
    7. Press i to enter the insert mode and modify the file as follows:
      hibench.scale.profile                 bigdata
      hibench.default.map.parallelism       500
      hibench.default.shuffle.parallelism   600
      

    8. Press Esc, type :wq!, and press Enter to save the file and exit.
  2. Generate data.
    1. Create a directory for storing the generated data in HDFS.
      hdfs dfs -mkdir -p /tmp/ml/dataset/
      
    2. Go to the path where the execution script resides.
      cd /HiBench-HiBench-7.0/bin/workloads/ml/svd/prepare/
      
    3. Run the script to generate the CP2M5K dataset.
      sh prepare.sh
      
    4. If a permission error is reported during generation, change the permissions on the corresponding local directories (as the root user) and on the corresponding HDFS directories.
  3. Create a folder in HDFS.
    hadoop fs -mkdir -p /tmp/ml/dataset
    
  4. Start spark-shell.
    spark-shell
    
  5. Run the following command to enter paste mode (after pasting the code in the next step, press Ctrl+D to run it):
    :paste
    
  6. Execute the following code to process the dataset:
    import org.apache.spark.internal.Logging
    import org.apache.spark.ml.linalg.SQLDataTypes.VectorType
    import org.apache.spark.ml.linalg.{Matrix, Vectors}
    import org.apache.spark.mllib.linalg.DenseVector
    import org.apache.spark.sql.{Row, SparkSession}
    import org.apache.spark.sql.types.{StructField, StructType}
    import org.apache.spark.storage.StorageLevel
    val dataPath = "/HiBench/CP2M5K"
    val outputPath = "/tmp/ml/dataset/CP2M5K"
    spark
    .sparkContext
    .objectFile[DenseVector](dataPath)
    .map(row => Vectors.dense(row.values).toArray.map{u=>f"$u%.2f"}.mkString(","))
    .saveAsTextFile(outputPath)
    
  7. Check the HDFS directory to view the result.
    hadoop fs -ls /tmp/ml/dataset/CP2M5K
    

Generating an ALS Dataset

  1. Set the HiBench configuration file.
    1. Go to the HiBench-HiBench-7.0/conf directory.
      cd /HiBench-HiBench-7.0/conf
      
    2. Open the /HiBench-HiBench-7.0/conf/workloads/ml/als.conf configuration file.
      vi /HiBench-HiBench-7.0/conf/workloads/ml/als.conf
      
    3. Press i to enter the insert mode and modify the file as follows:

    4. Press Esc, type :wq!, and press Enter to save the file and exit.
    5. Go to the HiBench-HiBench-7.0/conf directory.
      cd /HiBench-HiBench-7.0/conf
      
    6. Open the /HiBench-HiBench-7.0/conf/hibench.conf configuration file.
      vi hibench.conf
      
    7. Press i to enter the insert mode and modify the file as follows:

    8. Press Esc, type :wq!, and press Enter to save the file and exit.
  2. Generate data.
    1. Create a directory for storing the generated data in HDFS.
      hdfs dfs -mkdir -p /tmp/ml/dataset/ALS
      
    2. Go to the path where the execution script resides.
      cd /HiBench-HiBench-7.0/bin/workloads/ml/als/prepare/
      
    3. Run the script to generate the ALS dataset.
      sh prepare.sh
      
    4. View the result.
      hadoop fs -ls /tmp/ml/dataset/ALS
      

      If a permission error is reported during generation, change the permissions on the corresponding local directories (as the root user) and on the corresponding HDFS directories.

Generating a D200M100 Dataset

  1. Set the HiBench configuration file.
    1. Go to the HiBench-HiBench-7.0/conf directory.
      cd /HiBench-HiBench-7.0/conf
      
    2. Open the /HiBench-HiBench-7.0/conf/workloads/ml/kmeans.conf configuration file.
      vi /HiBench-HiBench-7.0/conf/workloads/ml/kmeans.conf
      
    3. Press i to enter the insert mode and modify the file as follows:
      hibench.kmeans.gigantic.num_of_clusters		5
      hibench.kmeans.gigantic.dimensions		100
      hibench.kmeans.gigantic.num_of_samples		200000000
      hibench.kmeans.gigantic.samples_per_inputfile	40000000
      hibench.kmeans.gigantic.max_iteration		5
      hibench.kmeans.gigantic.k			10
      hibench.kmeans.gigantic.convergedist		0.5
      
      hibench.workload.input                          hdfs://server1:8020/tmp/ml/dataset/kmeans_200m100_tmp
      

    4. Press Esc, type :wq!, and press Enter to save the file and exit.
    5. Go to the HiBench-HiBench-7.0/conf directory.
      cd /HiBench-HiBench-7.0/conf
      
    6. Open the /HiBench-HiBench-7.0/conf/hibench.conf configuration file.
      vi hibench.conf
      
    7. Press i to enter the insert mode and modify the file as follows:

    8. Press Esc, type :wq!, and press Enter to save the file and exit.
  2. Generate data.
    1. Create a directory for storing the generated data in HDFS.
      hdfs dfs -mkdir -p /tmp/ml/dataset/kmeans_200m100_tmp
      
    2. Go to the path where the execution script resides.
      cd /HiBench-HiBench-7.0/bin/workloads/ml/kmeans/prepare/
      
    3. Run the script to generate the D200M100 dataset.
      sh prepare.sh
      
  3. View the result.
    hdfs dfs -ls /tmp/ml/dataset/kmeans_200m100_tmp
    

    If a permission error is reported during generation, change the permissions on the corresponding local directories (as the root user) and on the corresponding HDFS directories.

  4. Create a directory for storing the generated dataset in HDFS.
    hdfs dfs -mkdir -p /tmp/ml/dataset/kmeans_200m100
    
  5. Move the dataset to a specified path.
    hdfs dfs -mv /tmp/ml/dataset/kmeans_200m100_tmp/samples/* /tmp/ml/dataset/kmeans_200m100/
    
  6. View the result.
    hdfs dfs -ls /tmp/ml/dataset/kmeans_200m100/
    

  7. Delete redundant directories.
    hdfs dfs -rm -r /tmp/ml/dataset/kmeans_200m100_tmp
    

Generating a D10M4096 Dataset

  1. Set the HiBench configuration file.
    1. Go to the HiBench-HiBench-7.0/conf directory.
      cd /HiBench-HiBench-7.0/conf
      
    2. Open the /HiBench-HiBench-7.0/conf/workloads/ml/lr.conf configuration file.
      vi /HiBench-HiBench-7.0/conf/workloads/ml/lr.conf
      
    3. Press i to enter the insert mode and modify the file as follows. Change the number of data samples to 10000000 and the number of data features to 4096 to generate a dataset with 10 million samples.
      hibench.lr.bigdata.examples  10000000
      hibench.lr.bigdata.features  4096
      

    4. Press Esc, type :wq!, and press Enter to save the file and exit.
    5. Go to the HiBench-HiBench-7.0/conf directory.
      cd /HiBench-HiBench-7.0/conf
      
    6. Open the /HiBench-HiBench-7.0/conf/hibench.conf configuration file.
      vi hibench.conf
      
    7. Press i to enter the insert mode and modify the file as follows:
      hibench.scale.profile                 bigdata
      hibench.default.map.parallelism       300
      hibench.default.shuffle.parallelism   300
      

    8. Press Esc, type :wq!, and press Enter to save the file and exit.
  2. Generate data.
    1. Create a directory for storing the generated data in HDFS.
      hdfs dfs -mkdir -p /tmp/ml/dataset/
      
    2. Go to the path where the execution script resides.
      cd /HiBench-HiBench-7.0/bin/workloads/ml/lr/prepare/
      
    3. Run the script to generate the D10M4096 dataset.
      sh prepare.sh
      
  3. View the result.
    hdfs dfs -ls /HiBench/HiBench/LR/Input
    

    If a permission error is reported during generation, change the permissions on the corresponding local directories (as the root user) and on the corresponding HDFS directories.

  4. Start spark-shell.
    spark-shell
    
  5. Run the following command to enter paste mode (after pasting the code in the next step, press Ctrl+D to run it):
    :paste
    
  6. Execute the following code to process the dataset:
    import org.apache.spark.rdd.RDD
    import org.apache.spark.mllib.regression.LabeledPoint
    val data: RDD[LabeledPoint] = sc.objectFile("/HiBench/HiBench/LR/Input/10m4096")
    val i = data.map{t=>t.label.toString+","+t.features.toArray.mkString(" ")}
    val splits = i.randomSplit(Array(0.6, 0.4), seed = 11L)
    splits(0).saveAsTextFile("/HiBench/HiBench/LR/Output/10m4096_train")
    splits(1).saveAsTextFile("/HiBench/HiBench/LR/Output/10m4096_test")
    
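Note that randomSplit assigns rows probabilistically, so the resulting sets are only approximately 60% and 40% of the input; the fixed seed (11L) makes the split reproducible across runs. The sizes can be checked afterwards; a sketch assuming the output paths above, with the cluster commands shown as comments:

```shell
# Requires the running cluster:
#   hadoop fs -cat /HiBench/HiBench/LR/Output/10m4096_train/part-* | wc -l
#   hadoop fs -cat /HiBench/HiBench/LR/Output/10m4096_test/part-*  | wc -l
# The train/test row counts should be in roughly this ratio:
awk 'BEGIN{printf "%.1f\n", 0.6/0.4}'   # prints 1.5
```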

Generating a HiBench_10M_200M Dataset

  1. Set the HiBench configuration file.
    1. Go to the HiBench-HiBench-7.0/conf directory.
      cd /HiBench-HiBench-7.0/conf
      
    2. Open the /HiBench-HiBench-7.0/conf/workloads/ml/lda.conf configuration file.
      vi /HiBench-HiBench-7.0/conf/workloads/ml/lda.conf
      
    3. Press i to enter the insert mode and modify the file as follows:
      hibench.lda.bigdata.num_of_documents              10000000
      hibench.lda.bigdata.num_of_vocabulary             200007152
      hibench.lda.bigdata.num_of_topics                 100
      hibench.lda.bigdata.doc_len_min                   500
      hibench.lda.bigdata.doc_len_max                   10000
      hibench.lda.bigdata.maxresultsize                 "6g"
      hibench.lda.num_of_documents                      ${hibench.lda.${hibench.scale.profile}.num_of_documents}
      hibench.lda.num_of_vocabulary                     ${hibench.lda.${hibench.scale.profile}.num_of_vocabulary}
      hibench.lda.num_of_topics                         ${hibench.lda.${hibench.scale.profile}.num_of_topics}
      hibench.lda.doc_len_min                           ${hibench.lda.${hibench.scale.profile}.doc_len_min}
      hibench.lda.doc_len_max                           ${hibench.lda.${hibench.scale.profile}.doc_len_max}
      hibench.lda.maxresultsize                         ${hibench.lda.${hibench.scale.profile}.maxresultsize}
      hibench.lda.partitions                            ${hibench.default.map.parallelism}
      hibench.lda.optimizer                             "online"
      hibench.lda.num_iterations                        10
      
    4. Press Esc, type :wq!, and press Enter to save the file and exit.
  2. Generate data.
    1. Create a directory for storing the generated data in HDFS.
      hdfs dfs -mkdir -p /tmp/ml/dataset/
      
    2. Go to the path where the execution script resides.
      cd /HiBench-HiBench-7.0/bin/workloads/ml/lda/prepare/
      
    3. Run the script to generate the dataset.
      sh prepare.sh
      
  3. Start spark-shell.
    spark-shell
    
  4. Run the following command to enter paste mode (after pasting the code in the next step, press Ctrl+D to run it):
    :paste
    
  5. Run the following code to convert the generated data into the ORC format:
    import org.apache.spark.rdd.RDD
    import org.apache.spark.mllib.linalg.{Vector => OldVector, Vectors => OldVectors}
    import org.apache.spark.ml.linalg.{Vector, Vectors}
    case class DocSchema(id: Long, tf: Vector)
    // Set dataPath and outputPath to the HDFS path of the generated LDA data
    // and the desired ORC output path before running.
    val data: RDD[(Long, OldVector)] = sc.objectFile(dataPath)
    val df = spark.createDataFrame(data.map {doc => DocSchema(doc._1, doc._2.asML)})
    df.repartition(200).write.mode("overwrite").format("orc").save(outputPath)
    

Generating a HibenchRating3wx3w Dataset

  1. Modify parameters in the Scala file.
    1. Open the Hibench/sparkbench/ml/src/main/scala/com/intel/sparkbench/ml/RatingDataGenerator.scala file.
      vi Hibench/sparkbench/ml/src/main/scala/com/intel/sparkbench/ml/RatingDataGenerator.scala
      
    2. Press i to enter the insert mode and modify the numPartitions parameter: comment out lines 36 and 37 of the file and add the following as line 38.
      val numPartitions = parallel
      

    3. Press Esc, type :wq!, and press Enter to save the file and exit.
  2. Generate data.
    1. Compile the sparkbench module.
      mvn package
      
    2. Save the compiled sparkbench-common-8.0-SNAPSHOT.jar and sparkbench-ml-8.0-SNAPSHOT.jar files in the same folder and call RatingDataGenerator to generate data.
      spark-submit \
      --class com.intel.hibench.sparkbench.ml.RatingDataGenerator \
      --jars sparkbench-common-8.0-SNAPSHOT.jar \
      --conf "spark.executor.instances=71" \
      --conf "spark.executor.cores=4" \
      --conf "spark.executor.memory=12g" \
      --conf "spark.executor.memoryOverhead=2g" \
      --conf "spark.default.parallelism=284" \
      --master yarn \
      --deploy-mode client \
      --driver-cores 36 \
      --driver-memory 50g \
      ./sparkbench-ml-8.0-SNAPSHOT.jar \
      /tmp/hibench/HibenchRating3wx3w 24000 6000 900000 false
      

      Parameters:

      • /tmp/hibench/HibenchRating3wx3w: location where the generated data is stored.
      • 24000: number of users.
      • 6000: number of products.
      • 900000: number of ratings.
      • false: Implicit feedback data is not generated.
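For reference, these parameters yield a sparse rating matrix: 900,000 ratings spread over 24,000 × 6,000 user-product cells, a density of 0.625%. This can be verified with a one-line calculation:

```shell
# density = ratings / (users * products)
awk 'BEGIN{printf "%.5f\n", 900000/(24000*6000)}'   # prints 0.00625
```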

Generating a BostonHousing Dataset

Click here to obtain the dataset.

Generating a Titanic Dataset

Click here to obtain the dataset.

Generating an Avazu Dataset

Click here to obtain the dataset.

Select the dataset whose file name ends with -site. The training set has 25,832,830 rows, the test set has 2,858,160 rows, and the data has 1,000,000 features.

Generating a Movielens Dataset

Click here to obtain the dataset.

Script: process_movielens.zip

Generating a Criteo40M&Criteo150M Dataset

Click here to obtain the dataset.

Script: process_criteo.zip

Generating a BremenSmall Dataset

Click here to obtain the dataset.

Script: dataProcess_bremenSmall.zip

Generating a Farm Dataset

Click here to obtain the dataset.

Script: ds file processing script scala.txt