
KMeans

The K-means algorithm exposes the following ML APIs.

Model API Type   Function API

ML API           def fit(dataset: Dataset[_]): KMeansModel
                 def fit(dataset: Dataset[_], paramMaps: Array[ParamMap]): Seq[KMeansModel]
                 def fit(dataset: Dataset[_], paramMap: ParamMap): KMeansModel
                 def fit(dataset: Dataset[_], firstParamPair: ParamPair[_], otherParamPairs: ParamPair[_]*): KMeansModel

ML API

  • Function

    These APIs take sample data in Dataset format as input; calling the fit API trains and outputs the K-means clustering model.

  • Input and output
    1. Package name: org.apache.spark.ml.clustering
    2. Class name: KMeans
    3. Method name: fit
    4. Input: training sample data (Dataset[_]). The following field is mandatory; a construction sketch follows the table.

       Param name    Type(s)   Default      Description
       featuresCol   Vector    "features"   Feature column
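
       A minimal sketch of building such an input, assuming an active SparkSession named spark (the sample vectors are illustrative):

       import org.apache.spark.ml.linalg.Vectors

       // Build a training Dataset whose "features" column holds vectors.
       val trainingData = spark.createDataFrame(Seq(
         Tuple1(Vectors.dense(0.0, 0.0)),
         Tuple1(Vectors.dense(1.0, 1.0)),
         Tuple1(Vectors.dense(9.0, 8.0))
       )).toDF("features")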

    5. Algorithm parameters. The setters return KMeans.this.type and can be chained; see the sketch after the list.

       def setFeaturesCol(value: String): KMeans.this.type
       def setPredictionCol(value: String): KMeans.this.type
       def setK(value: Int): KMeans.this.type
       def setInitMode(value: String): KMeans.this.type
       def setInitSteps(value: Int): KMeans.this.type
       def setMaxIter(value: Int): KMeans.this.type
       def setThreshold(value: Double): KMeans.this.type
       def setTol(value: Double): KMeans.this.type
       def setSeed(value: Long): KMeans.this.type
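
       A minimal configuration sketch (the chosen values are illustrative, not recommendations):

       import org.apache.spark.ml.clustering.KMeans

       // Each setter returns this.type, so the calls chain fluently.
       val configured = new KMeans()
         .setK(3)                  // number of clusters
         .setInitMode("k-means||") // initialization: "random" or "k-means||"
         .setInitSteps(2)          // steps of the k-means|| initialization
         .setMaxIter(50)           // maximum number of iterations
         .setTol(1e-4)             // convergence tolerance
         .setSeed(42L)             // seed for reproducibility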

    6. Added algorithm parameters

       Parameter    Description                                                      Type
       sampleRate   Ratio of the data used in each iteration to the full data set   Double, range 0~1
       optMethod    Whether to trigger sampling                                      String: "default" or "allData"

      An example is provided as follows:

      import org.apache.spark.ml.clustering.KMeans
      import org.apache.spark.ml.param.{ParamMap, ParamPair}

      // Illustrative parameter values; trainingData is the Dataset[_] with a
      // "features" column built in the input section above.
      val initSteps = 2
      val maxIter = 50
      val tol = 1e-4

      val kmeans = new KMeans()

      // Define the def fit(dataset: Dataset[_], paramMap: ParamMap) API parameter.
      val paramMap = ParamMap(kmeans.initSteps -> initSteps)
        .put(kmeans.maxIter, maxIter)

      // Define the def fit(dataset: Dataset[_], paramMaps: Array[ParamMap]) API parameter.
      val paramMaps: Array[ParamMap] = new Array[ParamMap](2)
      for (i <- 0 until paramMaps.length) {
        // Assign a value to each element of paramMaps.
        paramMaps(i) = ParamMap(kmeans.initSteps -> initSteps)
          .put(kmeans.maxIter, maxIter)
      }

      // Define the def fit(dataset: Dataset[_], firstParamPair: ParamPair[_], otherParamPairs: ParamPair[_]*) API parameters.
      val initStepsParamPair = ParamPair(kmeans.initSteps, initSteps)
      val maxIterParamPair = ParamPair(kmeans.maxIter, maxIter)
      val tolParamPair = ParamPair(kmeans.tol, tol)

      // Call the fit APIs.
      val model = kmeans.fit(trainingData)
      val modelFromMap = kmeans.fit(trainingData, paramMap)
      val modelsFromMaps = kmeans.fit(trainingData, paramMaps)
      val modelFromPairs = kmeans.fit(trainingData, initStepsParamPair, maxIterParamPair, tolParamPair)
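
      The added parameters sampleRate and optMethod are listed by name only, so the sketch below reaches them through Spark's generic Params API (getParam/set); whether dedicated typed setters also exist is an assumption left open here.

      // Hedged sketch: the parameter names come from the table above.
      kmeans.set(kmeans.getParam("sampleRate"), 0.8)      // use 80% of the data per iteration
      kmeans.set(kmeans.getParam("optMethod"), "default") // per the table, "allData" skips sampling
      val sampledModel = kmeans.fit(trainingData)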
    7. Output: K-means clustering model (KMeansModel). Model prediction produces the following output column.

       Param name      Type(s)   Default        Description
       predictionCol   Int       "prediction"   Predicted cluster index

  • Sample usage
    import org.apache.spark.ml.clustering.KMeans
    import org.apache.spark.ml.evaluation.ClusteringEvaluator
    
    // Loads data.
    val dataset = spark.read.format("libsvm").load("data/mllib/sample_kmeans_data.txt")
    
    // Trains a k-means model.
    val kmeans = new KMeans().setK(2).setSeed(1L)
    val model = kmeans.fit(dataset)
    
    // Make predictions
    val predictions = model.transform(dataset)
    
    // Evaluate clustering by computing Silhouette score
    val evaluator = new ClusteringEvaluator()
    
    val silhouette = evaluator.evaluate(predictions)
    println(s"Silhouette with squared euclidean distance = $silhouette")
    
    // Shows the result.
    println("Cluster Centers: ")
    model.clusterCenters.foreach(println)
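
    The fitted model can also be persisted and reloaded through Spark ML's standard writer/reader API; a minimal sketch (the path is a placeholder):

    import org.apache.spark.ml.clustering.KMeansModel

    // Save the fitted model to a placeholder path, then load it back.
    model.write.overwrite().save("/tmp/kmeans-model")
    val restored = KMeansModel.load("/tmp/kmeans-model")
    println(s"Restored model k = ${restored.getK}")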