[1] R. Mihalcea, C. Corley, and C. Strapparava, "Corpus-based and Knowledge-based Measures of Text Semantic Similarity," In AAAI, vol. 6, pp. 775-780, 2006.
[2] D. T. Tolciu, C. Sacarea, and C. Matei, "Analysis of Patterns and Similarities in Service Tickets using Natural Language Processing," Journal of Communications Software and Systems, vol. 17, no. 1, pp. 29-35, 2021.
[3] V. Bahel and A. Thomas, "Text Similarity Analysis for Evaluation of Descriptive Answers," arXiv preprint arXiv:2105.02935, 2021.
[4] S. Mizzaro, M. Pavan, and I. Scagnetto, "Content-based Similarity of Twitter Users," In European Conference on Information Retrieval, Springer, 2015.
[5] Z. Sepehrian, S. S. Sadidpour, and H. Shirazi, "An Approach Based on Semantic Similarity in Persian Query-Based Summarization," Scientific Journal of Electronic and Cyber Defense, vol. 2, no. 3, pp. 51-63, 2014 (in Persian).
[6] Z. Wang, W. Hamza, and R. Florian, "Bilateral Multi-perspective Matching for Natural Language Sentences," arXiv preprint arXiv:1702.03814, 2017.
[7] J. Mueller and A. Thyagarajan, "Siamese Recurrent Architectures for Learning Sentence Similarity," In Thirtieth AAAI Conference on Artificial Intelligence, 2016.
[8] W. H. Gomaa and A. A. Fahmy, "A Survey of Text Similarity Approaches," International Journal of Computer Applications, vol. 68, no. 13, pp. 13-18, 2013.
[9] M. Farouk, "Measuring Sentences Similarity: A Survey," arXiv preprint arXiv:1910.03940, 2019.
[10] Y. Wang, X. Di, J. Li, H. Yang, and L. Bi, "Sentence Similarity Learning Method based on Attention Hybrid Model," In Journal of Physics: Conference Series, IOP Publishing, 2018.
[11] T. Mikolov, K. Chen, G. Corrado, and J. Dean, "Efficient Estimation of Word Representations in Vector Space," arXiv preprint arXiv:1301.3781, 2013.
[12] Y. Doval, J. Camacho-Collados, L. Espinosa-Anke, and S. Schockaert, "Improving Cross-lingual Word Embeddings by Meeting in the Middle," arXiv preprint arXiv:1808.08780, 2018.
[13] A. Conneau, G. Lample, M. A. Ranzato, L. Denoyer, and H. Jégou, "Word Translation Without Parallel Data," arXiv preprint arXiv:1710.04087, 2017.
[14] M. Artetxe, G. Labaka, and E. Agirre, "Learning Bilingual Word Embeddings with (almost) no Bilingual Data," In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2017.
[15] J. Devlin, M. W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding," arXiv preprint arXiv:1810.04805, 2018.
[16] H. Huang, Y. Liang, N. Duan, M. Gong, L. Shou, D. Jiang, and M. Zhou, "Unicoder: A Universal Language Encoder by Pre-training with Multiple Cross-lingual Tasks," arXiv preprint arXiv:1909.00964, 2019.
[17] G. Lample and A. Conneau, "Cross-lingual Language Model Pretraining," arXiv preprint arXiv:1901.07291, 2019.
[18] H. Gonen, S. Ravfogel, Y. Elazar, and Y. Goldberg, "It's not Greek to mBERT: Inducing Word-Level Translations from Multilingual BERT," arXiv preprint arXiv:2010.08275, 2020.
[19] S. R. Bowman, G. Angeli, C. Potts, and C. D. Manning, "A Large Annotated Corpus for Learning Natural Language Inference," arXiv preprint arXiv:1508.05326, 2015.