TY - JOUR
T1 - Model elements identification using neural networks
T2 - a comprehensive study
AU - Madala, Kaushik
AU - Piparia, Shraddha
AU - Blanco, Eduardo
AU - Do, Hyunsook
AU - Bryce, Renee
N1 - Funding Information:
This work was supported, in part, by NSF CAREER Award CCF-1564238 to University of North Texas.
Publisher Copyright:
© 2020, Springer-Verlag London Ltd., part of Springer Nature.
PY - 2021/3
Y1 - 2021/3
N2 - Modeling of natural language requirements, especially for a large system, can take a significant amount of effort and time. Many automated model-driven approaches partially address this problem. However, the application of state-of-the-art neural network architectures to automated model element identification tasks has not been studied. In this paper, we perform an empirical study on automatic model element identification for component state transition models from use case documents. We analyzed four different neural network architectures: feedforward neural network, convolutional neural network, recurrent neural network (RNN) with long short-term memory, and RNN with gated recurrent unit (GRU), and the trade-offs among them using six use case documents. We analyzed the effect of factors such as types of splitting, types of predictions, types of designs, and types of annotations on the performance of neural networks. The results of the neural networks on the test and unseen data showed that RNN with GRU is the most effective neural network architecture. However, the factors that result in effective predictions of neural networks depend on the type of the model element.
AB - Modeling of natural language requirements, especially for a large system, can take a significant amount of effort and time. Many automated model-driven approaches partially address this problem. However, the application of state-of-the-art neural network architectures to automated model element identification tasks has not been studied. In this paper, we perform an empirical study on automatic model element identification for component state transition models from use case documents. We analyzed four different neural network architectures: feedforward neural network, convolutional neural network, recurrent neural network (RNN) with long short-term memory, and RNN with gated recurrent unit (GRU), and the trade-offs among them using six use case documents. We analyzed the effect of factors such as types of splitting, types of predictions, types of designs, and types of annotations on the performance of neural networks. The results of the neural networks on the test and unseen data showed that RNN with GRU is the most effective neural network architecture. However, the factors that result in effective predictions of neural networks depend on the type of the model element.
KW - Automated requirements modeling
KW - Component state transition diagrams
KW - Empirical study
KW - Neural networks
KW - Requirements analysis
KW - Sequence labeling
UR - http://www.scopus.com/inward/record.url?scp=85085771049&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85085771049&partnerID=8YFLogxK
U2 - 10.1007/s00766-020-00332-2
DO - 10.1007/s00766-020-00332-2
M3 - Article
AN - SCOPUS:85085771049
SN - 0947-3602
VL - 26
SP - 67
EP - 96
JO - Requirements Engineering
JF - Requirements Engineering
IS - 1
ER -