Predicting movie success with machine learning techniques: ways to improve accuracy

Kyuhan Lee, Jinsoo Park, Iljoo Kim, Youngseok Choi

Research output: Contribution to journal › Article › peer-review

34 Scopus citations


Previous studies on predicting the box-office performance of a movie using machine learning techniques have shown practical levels of predictive accuracy. Their works are technically and methodologically oriented, focusing mainly on which algorithms are better at predicting movie performance. However, the accuracy of a prediction model can also be elevated by taking other perspectives, such as introducing unexplored features that might be related to the prediction of the outcomes. In this paper, we examine multiple approaches to improving the performance of the prediction model. First, we develop and add a new feature derived from the theory of transmedia storytelling. Such theory-driven feature selection not only increases forecast accuracy, but also enhances the interpretability of a prediction model. Second, we use an ensemble approach, which has rarely been adopted in research on predicting box-office performance. As a result, the proposed model, the Cinema Ensemble Model (CEM), outperforms the prediction models from past studies that use machine learning algorithms. We suggest that CEM can serve industry experts as a powerful tool for improving the decision-making process.
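The ensemble idea the abstract describes can be sketched as combining the predictions of several base models. The sketch below is a minimal illustrative assumption, not the paper's actual CEM: the base models, the feature names (`budget`, `transmedia_score`), and the equal weighting are all hypothetical placeholders for the trained regressors and features the authors would use.

```python
# Minimal sketch of an averaging ensemble for box-office prediction.
# All base models, feature names, and weights here are illustrative
# assumptions; the paper's CEM combines its own set of ML algorithms.

def predict_ensemble(feature_vector, base_models, weights=None):
    """Combine base-model predictions by a (weighted) average."""
    preds = [model(feature_vector) for model in base_models]
    if weights is None:
        # Default to a simple unweighted mean of the base predictions.
        weights = [1.0 / len(preds)] * len(preds)
    return sum(w * p for w, p in zip(weights, preds))

# Toy stand-ins for trained regressors (hypothetical).
model_a = lambda x: 2.0 * x["budget"] + 0.5 * x["transmedia_score"]
model_b = lambda x: 1.5 * x["budget"] + 1.0 * x["transmedia_score"]
model_c = lambda x: 1.8 * x["budget"] + 0.8 * x["transmedia_score"]

movie = {"budget": 100.0, "transmedia_score": 10.0}
prediction = predict_ensemble(movie, [model_a, model_b, model_c])
print(prediction)  # mean of 205.0, 160.0, 188.0
```

In practice each base model would be a trained learner (e.g. a regression tree or neural network) and the weights could be tuned on a validation set; the averaging step itself is the essence of the ensemble approach.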

Original language: English (US)
Pages (from-to): 577-588
Number of pages: 12
Journal: Information Systems Frontiers
Issue number: 3
State: Published - Jun 1 2018
Externally published: Yes


Keywords

  • Cinema ensemble model
  • Feature selection
  • Machine learning techniques
  • Movie performance
  • Prediction model
  • Transmedia storytelling

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Software
  • Information Systems
  • Computer Networks and Communications


