Map is missing an element

I am implementing k-means and I want to create the new centroids. But the map is missing one element! However, when K is a small value, like 15, it works fine.

Based on that code, I have:

    val K = 25 // number of clusters
    val data = sc.textFile("dense.txt")
      .map(t => (t.split("#")(0), parseVector(t.split("#")(1))))
      .cache()
    val count = data.count()
    println("Number of records " + count)
    var centroids = data.takeSample(false, K, 42).map(x => x._2)
    do {
      var closest = data.map(p => (closestPoint(p._2, centroids), p._2))
      var pointsGroup = closest.groupByKey()
      println(pointsGroup)
      pointsGroup.foreach { println }
      var newCentroids = pointsGroup.mapValues(ps => average(ps.toSeq)).collectAsMap()
      //var newCentroids = pointsGroup.mapValues(ps => average(ps)).collectAsMap() // this will produce an error
      println(centroids.size)
      println(newCentroids.size)
      for (i <- 0 until K) {
        tempDist += centroids(i).squaredDist(newCentroids(i))
      }
      ..

And in the for loop I get an error that an element cannot be found (it is not always the same one, it depends on K):

    java.util.NoSuchElementException: key not found: 2

The output before the error appears:

    Number of records 27776
    ShuffledRDD[5] at groupByKey at kmeans.scala:72
    25
    24   <- IT SHOULD BE 25

What is the problem?


    >>> println(newCentroids)
    Map(23 -> (-0.0050852959701492536, 0.005512245104477607, -0.004460964477611937), 17 -> (-0.005459583045685268, 0.0029015278781725795, -8.451635532994901E-4), 8 -> (-4.691649213483123E-4, 0.0025375451685393366, 0.0063490755505617585), 11 -> (0.30361112034069937, -0.0017342255382385204, -0.005751167731061906), 20 -> (-5.839587918939964E-4, -0.0038189763756820145, -0.007067070459859708), 5 -> (-0.3787612396704685, -0.005814121628643806, -0.0014961713117870657), 14 -> (0.0024755681263616547, 0.0015191503267973836, 0.003411769193899781), 13 -> (-0.002657690932944597, 0.0077671050923225635, -0.0034652379980563263), 4 -> (-0.006963114731610361, 1.1751361829025871E-4, -0.7481135105367823), 22 -> (0.015318187079953534, -1.2929035958285013, -0.0044176372190034684), 7 -> (-0.002321059060773483, -0.006316359116022083, 0.006164669723756913), 16 -> (0.005341800955165691, -0.0017540737037037035, 0.004066574093567247), 1 -> (0.0024547379611650484, 0.0056298656504855955, 0.002504618082524296), 10 -> (3.421068671121009E-4, 0.0045169004751299275, 5.696239049740164E-4), 19 -> (-0.005453716071428539, -0.001450277556818192, 0.003860007248376626), 9 -> (-0.0032921685273631807, 1.8477108457711313E-4, -0.003070412228855717), 18 -> (-0.0026803160958904053, 0.00913904078767124, -0.0023528013698630146), 3 -> (0.005750011594202901, -0.003607098309178754, -0.003615918896940412), 21 -> (0.0024925166025641056, -0.0037607353461538507, -2.1588444871794858E-4), 12 -> (-7.920202960526356E-4, 0.5390774232894769, -4.928884539473694E-4), 15 -> (-0.0018608492323232324, -0.006973787272727284, -0.0027266663434343404), 24 -> (6.151173211963486E-4, 7.081812613784045E-4, 5.612962808842611E-4), 6 -> (0.005323933953732931, 0.0024014750473186123, -2.969338590956889E-4), 0 -> (-0.0015991676750160377, -0.003001317289659613, 0.5384176139563245))

A question with a related error: spark scala throws java.util.NoSuchElementException: key not found: 0 exception


Edit:

After zero323 observed that two of the centroids were identical, I changed the code so that all the centroids are unique. However, the behaviour remains the same. For this reason, I suspect that closestPoint() may return the same index for two centroids. Here is the function:

    def closestPoint(p: Vector, centers: Array[Vector]): Int = {
      var index = 0
      var bestIndex = 0
      var closest = Double.PositiveInfinity
      for (i <- 0 until centers.length) {
        val tempDist = p.squaredDist(centers(i))
        if (tempDist < closest) {
          closest = tempDist
          bestIndex = i
        }
      }
      return bestIndex
    }

How can I get rid of this? I am running the code I described above on a Spark cluster.
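
To check whether some cluster index really ends up with no points at all, a small diagnostic along these lines could help (just a sketch, reusing the data, centroids and closestPoint from above):

    // Diagnostic sketch: count how many points each cluster index receives
    // and list the indices that receive none.
    val assignmentCounts = data
      .map(p => (closestPoint(p._2, centroids), 1L))
      .reduceByKey(_ + _)
      .collectAsMap()
    val emptyClusters = (0 until K).filterNot(assignmentCounts.contains)
    println("Cluster indices with no points: " + emptyClusters.mkString(", "))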

It can happen in the "E-step" (the assignment of points to cluster indices, analogous to the E-step of the EM algorithm) that one of the indices is not assigned any points. If this happens, you need a way of associating that index with some point, otherwise you will end up with fewer clusters after the "M-step" (the assignment of centroids to indices, analogous to the M-step of the EM algorithm). Something like this should work:

    val newCentroids = {
      val temp = pointsGroup.mapValues(ps => average(ps.toSeq)).collectAsMap()
      val nMissing = K - temp.size
      val sample = data.takeSample(false, nMissing, seed)
      var c = -1
      (for (i <- 0 until K) yield {
        val point = temp.getOrElse(i, { c += 1; sample(c) })
        (i, point)
      }).toMap
    }

Just substitute that code for the code you are currently using to compute newCentroids.

There are other ways of addressing this issue, and the approach above may not be the best (is it a good idea to call takeSample multiple times, once for every iteration of the k-means algorithm? what if data contains a lot of repeated values? etc.), but it is a simple starting point.
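
For example, one simple alternative (just a sketch, not something the answer above prescribes) is to keep the previous centroid for any index that received no points, which avoids sampling on every iteration:

    // Sketch of an alternative: reuse the previous centroid for any cluster
    // index that received no points, instead of resampling from the data.
    val newCentroids = {
      val temp = pointsGroup.mapValues(ps => average(ps.toSeq)).collectAsMap()
      (0 until K).map(i => (i, temp.getOrElse(i, centroids(i)))).toMap
    }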

By the way, you may want to think about whether you can replace groupByKey with reduceByKey.
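
A rough sketch of that idea, assuming the Vector type used here supports elementwise + and division by a Double (as in the classic Spark k-means example); note it still needs the empty-index handling above, since an index with no points simply will not appear in the resulting map:

    // Sketch: sum points and counts per cluster with reduceByKey,
    // then divide each sum by its count to obtain the new centroid.
    val newCentroids = closest
      .mapValues(p => (p, 1))
      .reduceByKey { case ((s1, n1), (s2, n2)) => (s1 + s2, n1 + n2) }
      .mapValues { case (sum, n) => sum / n.toDouble }
      .collectAsMap()

This avoids materializing every point of a cluster on one node the way groupByKey does, because the sums are combined incrementally on each partition.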

Note: for the curious, here is a reference describing the similarities between the EM algorithm and the k-means algorithm: http://papers.nips.cc/paper/989-convergence-properties-of-the-k-means-algorithms.pdf