Optimality and stability of the K-hyperline clustering algorithm

Jayaraman J. Thiagarajan, Karthikeyan N. Ramamurthy, Andreas Spanias

Research output: Contribution to journal › Article › peer-review

17 Scopus citations


K-hyperline clustering is an iterative algorithm based on the singular value decomposition (SVD), and it has been used successfully in sparse component analysis. In this paper, we prove that the algorithm converges to a locally optimal solution for a given set of training data, based on Lloyd's optimality conditions. Furthermore, local optimality is shown by developing an Expectation-Maximization procedure for learning dictionaries to be used in sparse representations, and by deriving the clustering algorithm as its special case. The cluster centroids obtained from the algorithm are proved to tessellate the space into convex Voronoi regions. The stability of the clustering is shown by posing the problem as an empirical risk minimization procedure over a function class. It is proved that, under certain conditions, the cluster centroids learned from two sets of i.i.d. training samples drawn from the same probability space become arbitrarily close to each other as the number of training samples increases asymptotically.
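The Lloyd-style alternation described above can be sketched in NumPy. This is an illustrative sketch, not the authors' implementation: each centroid is a unit-norm direction (a line through the origin), samples are assigned to the line minimizing their squared residual, and each direction is updated to the top right singular vector of its cluster's samples. The function and parameter names are assumptions for this example.

```python
import numpy as np

def k_hyperline(X, K, n_iter=50, seed=0):
    """Sketch of K-hyperline clustering via SVD (illustrative, not the
    paper's reference code).

    X : (n, d) array of samples; K : number of 1-D subspace centroids.
    Returns (D, labels) where D is a (K, d) array of unit directions.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Initialize K random unit-norm directions.
    D = rng.standard_normal((K, d))
    D /= np.linalg.norm(D, axis=1, keepdims=True)
    labels = np.zeros(n, dtype=int)
    for _ in range(n_iter):
        # Assignment step: maximizing |<x, d_k>| over k minimizes the
        # residual ||x - (x . d_k) d_k||^2, i.e., distance to the line.
        corr = np.abs(X @ D.T)            # shape (n, K)
        labels = corr.argmax(axis=1)
        # Update step: top right singular vector of each cluster's data.
        for k in range(K):
            Xk = X[labels == k]
            if len(Xk) == 0:
                continue                  # leave an empty cluster as-is
            _, _, Vt = np.linalg.svd(Xk, full_matrices=False)
            D[k] = Vt[0]
    return D, labels
```

Each iteration weakly decreases the total squared residual, which is the sense in which the algorithm converges to a locally optimal solution under Lloyd's conditions.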

Original language: English (US)
Pages (from-to): 1299-1304
Number of pages: 6
Journal: Pattern Recognition Letters
Issue number: 9
State: Published - Jul 1 2011

Keywords

  • Empirical risk minimization
  • K-hyperline clustering
  • Optimality
  • Stability
  • Voronoi

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence

