[1] M. A. Calijorne Soares and F. S. Parreiras, “A Literature Review on Question Answering Techniques, Paradigms and Systems,” J. King Saud Univ. - Comput. Inf. Sci., vol. 32, no. 6, pp. 635–646, 2020, doi: 10.1016/j.jksuci.2018.08.005.
[2] P. Rajpurkar, R. Jia, and P. Liang, “Know What You Don’t Know: Unanswerable Questions for SQuAD,” ACL 2018 - 56th Annu. Meet. Assoc. Comput. Linguist. Proc. Conf. (Short Pap.), vol. 2, pp. 784–789, 2018, doi: 10.18653/v1/p18-2124.
[3] Z. Yang et al., “HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering,” Proc. 2018 Conf. Empir. Methods Nat. Lang. Process. EMNLP 2018, pp. 2369–2380, 2018, doi: 10.18653/v1/d18-1259.
[4] Y. Feldman and R. El-Yaniv, “Multi-hop Paragraph Retrieval for Open-domain Question Answering,” ACL 2019 - 57th Annu. Meet. Assoc. Comput. Linguist. Proc. Conf., pp. 2296–2309, 2019, doi: 10.18653/v1/p19-1222.
[5] L. Qiu et al., “Dynamically Fused Graph Network for Multi-hop Reasoning,” ACL 2019 - 57th Annu. Meet. Assoc. Comput. Linguist. Proc. Conf., pp. 6140–6150, 2019, doi: 10.18653/v1/p19-1617.
[6] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” NAACL HLT 2019 - 2019 Conf. North Am. Chapter Assoc. Comput. Linguist. Hum. Lang. Technol. - Proc. Conf., vol. 1, pp. 4171–4186, 2019, doi: 10.18653/v1/n19-1423.
[7] A. Asai, K. Hashimoto, H. Hajishirzi, R. Socher, and C. Xiong, “Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question Answering,” 2019, [Online]. Available: http://arxiv.org/abs/1911.10470
[8] R. Nogueira and K. Cho, “Passage Re-ranking with BERT,” 2019, [Online]. Available: http://arxiv.org/abs/1901.04085
[9] Y. Nie, S. Wang, and M. Bansal, “Revealing the Importance of Semantic Retrieval for Machine Reading at Scale,” EMNLP-IJCNLP 2019 - 2019 Conf. Empir. Methods Nat. Lang. Process. 9th Int. Jt. Conf. Nat. Lang. Process. Proc. Conf., pp. 2553–2566, 2019, doi: 10.18653/v1/d19-1258.
[10] J. Ni, C. Zhu, W. Chen, and J. McAuley, “Learning to Attend On Essential Terms: An Enhanced Retriever-Reader Model for Open-domain Question Answering,” NAACL HLT 2019 - 2019 Conf. North Am. Chapter Assoc. Comput. Linguist. Hum. Lang. Technol. - Proc. Conf., vol. 1, pp. 335–344, 2019, doi: 10.18653/v1/n19-1030.
[11] S. Hochreiter and J. Schmidhuber, “Long Short-Term Memory,” Neural Comput., vol. 9, no. 8, pp. 1735–1780, 1997, doi: 10.1162/neco.1997.9.8.1735.
[12] G. Bebis and M. Georgiopoulos, “Feed-forward Neural Networks,” IEEE Potentials, vol. 13, no. 4, pp. 27–31, 1994, doi: 10.1109/45.329294.
[13] M. F. Rabby, Y. Tu, M. I. Hossen, I. Lee, A. S. Maida, and X. Hei, “Stacked LSTM Based Deep Recurrent Neural Network with Kalman Smoothing for Blood Glucose Prediction,” BMC Med. Inform. Decis. Mak., vol. 21, no. 1, 2021, doi: 10.1186/s12911-021-01462-5.
[14] M. Neumann, D. King, I. Beltagy, and W. Ammar, “ScispaCy: Fast and Robust Models for Biomedical Natural Language Processing,” BioNLP 2019 - SIGBioMed Work. Biomed. Nat. Lang. Process. Proc. 18th BioNLP Work. Shar. Task, pp. 319–327, 2019, doi: 10.18653/v1/w19-5034.
[15] P. Qi, Y. Zhang, Y. Zhang, J. Bolton, and C. D. Manning, “Stanza: A Python Natural Language Processing Toolkit for Many Human Languages,” ACL 2020 - 58th Annu. Meet. Assoc. Comput. Linguist. Syst. Demonstr., pp. 101–108, 2020, doi: 10.18653/v1/2020.acl-demos.14.
[16] M. Grootendorst, “KeyBERT: Minimal Keyword Extraction with BERT,” Zenodo, 2020, [Online]. Available: https://github.com/MaartenGr/KeyBERT
[17] Y. Fang, S. Sun, Z. Gan, R. Pillai, S. Wang, and J. Liu, “Hierarchical Graph Network for Multi-hop Question Answering,” EMNLP 2020 - 2020 Conf. Empir. Methods Nat. Lang. Process. Proc. Conf., pp. 8823–8838, 2020, doi: 10.18653/v1/2020.emnlp-main.710.