Show simple item record

dc.contributor.author  Aydoğan, M. and Karci, A.
dc.date.accessioned  2021-04-08T12:06:26Z
dc.date.available  2021-04-08T12:06:26Z
dc.date.issued  2020
dc.identifier  10.1016/j.physa.2019.123288
dc.identifier.issn  03784371
dc.identifier.uri  https://www.scopus.com/inward/record.uri?eid=2-s2.0-85074525713&doi=10.1016%2fj.physa.2019.123288&partnerID=40&md5=ad9ba61d5c460a35c347e740ca4888b8
dc.identifier.uri  http://acikerisim.bingol.edu.tr/handle/20.500.12898/3928
dc.description.abstract  Today, extreme amounts of data are produced, and this is commonly referred to as Big Data. A significant amount of big data is composed of textual data, and as such, text processing has correspondingly increased in importance. This is especially true of the development of word embedding and other groundbreaking advancements in this field. However, when studies on text processing and word embedding are examined, it can be seen that while there have been many world language-oriented studies, especially for the English language, there has been an insufficient level of study undertaken specific to the Turkish language. As a result, Turkish was chosen as the target language for the current study. Two Turkish datasets were created for this study. Word vectors were trained using the Word2Vec method on an unlabeled large corpus of approximately 11 billion words. Using these word vectors, text classification was applied with deep neural networks on a second dataset of 1.5 million examples and 10 classes. The current study employed the Convolutional Neural Network (CNN), the Recurrent Neural Network (RNN), and the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) methods (variants of the RNN architecture), as well as their variations, as deep neural network architectures. The performances of the embedding methods for the words used in this study, their effects on the rate of accuracy, and the success of the deep neural network architectures were then analyzed in detail. Examination of the experimental results showed that the GRU and LSTM methods were more successful than the other deep neural network models used in this study. The results also showed that pre-trained word vectors (PWVs) improved the accuracy of the deep neural networks by approximately 5% and 7%. The datasets and word vectors of the current study will be shared in order to contribute to the Turkish language literature in this field. © 2019 Elsevier B.V.
dc.language.iso  English
dc.source  Physica A: Statistical Mechanics and its Applications
dc.title  Improving the accuracy using pre-trained word embeddings on deep neural networks for Turkish text classification
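The abstract above describes a two-stage pipeline: word vectors are pre-trained with Word2Vec on a large unlabeled corpus and then used to initialize the embedding layer of a deep neural network classifier (the GRU and LSTM variants performed best). The following is a minimal illustrative sketch of that pipeline, not the authors' released code; the tiny placeholder data, the skip-gram setting, and the single GRU layer are assumptions made only for the example.

# Illustrative sketch only: pre-train Word2Vec vectors, then reuse them
# to initialize a GRU text classifier, following the pipeline in the abstract.
import numpy as np
from gensim.models import Word2Vec
from tensorflow.keras import layers, models, initializers

# Tiny placeholder data; the paper pre-trains on a ~11-billion-word Turkish
# corpus and classifies a separate 1.5M-example, 10-class dataset.
unlabeled_sentences = [["örnek", "türkçe", "cümle"], ["başka", "bir", "cümle"]]
labeled_texts = [["örnek", "türkçe", "cümle"], ["başka", "bir", "cümle"]]
labels = np.array([0, 1])
NUM_CLASSES, EMBED_DIM, MAX_LEN = 10, 300, 50

# 1) Pre-train word vectors (skip-gram chosen here as an assumption).
w2v = Word2Vec(sentences=unlabeled_sentences, vector_size=EMBED_DIM,
               sg=1, window=5, min_count=1, workers=4)

# 2) Build a vocabulary over the labeled data; index 0 is reserved for padding.
vocab = {w: i + 1 for i, w in enumerate(sorted({w for t in labeled_texts for w in t}))}

# 3) Encode and pad each labeled example to a fixed length.
def encode(tokens):
    ids = [vocab[w] for w in tokens][:MAX_LEN]
    return ids + [0] * (MAX_LEN - len(ids))

X = np.array([encode(t) for t in labeled_texts])

# 4) Copy the pre-trained word vectors (PWVs) into an embedding matrix.
embedding_matrix = np.zeros((len(vocab) + 1, EMBED_DIM))
for word, idx in vocab.items():
    if word in w2v.wv:
        embedding_matrix[idx] = w2v.wv[word]

# 5) GRU classifier whose embedding layer is initialized with the PWVs.
model = models.Sequential([
    layers.Embedding(len(vocab) + 1, EMBED_DIM,
                     embeddings_initializer=initializers.Constant(embedding_matrix),
                     trainable=False),
    layers.GRU(128),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, labels, epochs=1, batch_size=2)

Freezing the embedding layer (trainable=False) is one common way to use pre-trained vectors; fine-tuning it instead is an equally valid choice and the paper does not prescribe either here.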


Files in this item:

Files  Size  Format  View

There are no files associated with this item.

This item appears in the following Collection(s).
