A Conceptual Model for Computational Evaluation of Influence Operations in Online Social Networks

Article Type: Research Article

Authors

1 PhD student, Imam Hossein University, Tehran, Iran.

2 Professor, School of Computer Engineering, Iran University of Science and Technology, Tehran, Iran.

Abstract

Influence operations, alongside cyber operations and electronic warfare, are one of the types of information operations. The spread of online social networks has provided a suitable platform for influence operations; it is therefore essential to attain the capability to evaluate influence operations in online social networks. This capability requires a conceptual model that is, on the one hand, grounded in a solid theoretical construct and, on the other, amenable to computational implementation. The aim of this article is to present a conceptual model that lays the groundwork for computational modeling to evaluate influence operations in online social networks. After a deep, exploratory study of the research literature, the theoretical framework of cyber cognitive power and the constituents, concepts, dimensions, and components of the conceptual model are presented, derived using the meta-synthesis method and interviews with experts. A two-round Delphi method was used to assess the validity of the proposed model. The model was confirmed in terms of validity and reliability and, based on the Delphi method, reached agreement above eighty percent. This research presents a new conceptual formulation of influence operations in online social networks, based on the conceptualization of cognitive capital and cyber cognitive power. The proposed conceptual model, presented descriptively, enables the evaluation of the success indicators of an influence operation. Its advantage is that it evaluates influence operations based on the concept of power and enables their computational evaluation. The model also makes it possible to identify and detect influence operations in online social networks.

Keywords

Subjects


Article Title [English]

A Conceptual Model for Computational Evaluation of Influence Operations in Online Social Networks

Authors [English]

  • Gholamreza Bazdar 1
  • Mohammad Abdollahi Azgomi 2
1 PhD student, Imam Hossein University, Tehran, Iran.
2 Professor, Iran University of Science and Technology, Tehran, Iran.
Abstract [English]

Influence operations are one of the types of information operations, along with cyber network operations and electronic warfare operations. The spread of online social networks has provided a suitable platform for influence operations; it is therefore essential to be able to evaluate influence operations in online social networks. This capability requires a conceptual model that is connected to a consistent theoretical structure on the one hand and can be implemented computationally on the other. The aim of this article is to present a conceptual model that provides a basis for computational modeling to evaluate influence operations in online social networks. After an in-depth, exploratory study of the research literature, the theoretical framework of cyber cognitive power and the constituents, concepts, dimensions, and components of the conceptual model are presented, derived using the meta-synthesis method and interviews with experts. A two-round Delphi method was used to assess the validity of the proposed model. The model was confirmed in terms of validity and reliability and, based on the Delphi method, reached agreement above eighty percent. This research presents a new conceptual formulation of influence operations in online social networks, based on the conceptualization of cognitive capital and cyber cognitive power. The proposed conceptual model, presented descriptively, enables the evaluation of the success indicators of an influence operation. Its advantage is that it evaluates influence operations based on the concept of power and enables their computational evaluation. The model also makes it possible to identify and detect influence operations in online social networks.
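The abstract reports that the model's components reached expert agreement above eighty percent in a two-round Delphi survey. A common way to operationalize such a consensus check is to compute, per component, the share of experts rating it at or above an agreement point on a Likert scale. The sketch below illustrates this; the component names, ratings, 5-point scale, and 80% cutoff structure are hypothetical illustrations, not data from the paper.

```python
# Minimal sketch of a Delphi consensus check, assuming a 5-point Likert
# scale where ratings >= 4 count as agreement. All data below is
# hypothetical; the paper only reports agreement above eighty percent.

def consensus(ratings, agree_point=4):
    """Fraction of experts rating a component at or above `agree_point`."""
    agree = sum(1 for r in ratings if r >= agree_point)
    return agree / len(ratings)

# Hypothetical second-round ratings from ten experts per model component.
round_two = {
    "cognitive capital": [5, 4, 4, 5, 3, 4, 5, 4, 4, 5],
    "cyber cognitive power": [4, 5, 4, 4, 4, 5, 3, 4, 5, 4],
}

for component, ratings in round_two.items():
    pct = 100 * consensus(ratings)
    retained = pct >= 80  # the consensus threshold the paper reports
    print(f"{component}: {pct:.0f}% agreement, retained={retained}")
```

Components falling below the threshold in round one would typically be revised or dropped before being re-rated in round two.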
 

Keywords [English]

  • Online Social Network (OSN)
  • Information Operation
  • Influence Operation
  • Information Power
  • Communication Power
  • Cyber Power
  • Cognitive Discourse Theory
  • Cyber Cognitive Power


 

Volume 11, Issue 4 - Serial Number 44
(Serial Number 44, Winter Quarterly)
Esfand 1402 (February-March 2024)
Pages 1-16
  • Received: 13 Tir 1402
  • Revised: 27 Azar 1402
  • Accepted: 12 Dey 1402
  • Published: 28 Dey 1402