AI AND ORGANIZATIONAL AGILITY: CAN AI OVERCOME LARMAN’S LAWS OF RESISTANCE?
DOI: https://doi.org/10.31732/2663-2209-2025-79-370-376

Keywords: AI adoption, Larman’s Laws, organizational agility, agile transformation, leadership, governance, human–AI collaboration

Abstract
This study investigates whether the adoption of artificial intelligence (AI) in Agile organizations acts as a genuine catalyst of transformation or is co‑opted by existing structures, thereby reinforcing resistance patterns described by Larman’s Laws of Organizational Behavior. Building on a theory‑driven framework that integrates Agile transformation, organizational change, and human–AI collaboration, we analyze a survey of 97 practitioners across software, product and operations roles. The survey captures three constructs: level of AI usage, self‑reported satisfaction with AI tools, and confidence in the accuracy and reliability of AI‑generated outputs. Descriptive distributions indicate broad, but not yet deep, adoption: 79% use AI either for specific tasks or regularly with customization, whereas only 3% report deep, consistent integration. Satisfaction is high (~67% satisfied or very satisfied), while confidence is mostly moderate (54% moderately confident; 14% very confident). Exploratory associations suggest that higher AI usage and higher satisfaction are positively related to confidence in AI outputs. These patterns are consistent with an incremental adoption path in which AI is primarily applied to bounded, low‑risk tasks, avoiding disruption to decision rights and role boundaries—an empirical manifestation of Larman’s Laws. To explain these findings, we propose and visualize a conceptual model in which leadership and governance moderate the relationship between AI adoption and organizational agility: AI enables agility when accompanied by structural redesign and responsible governance, but risks becoming superficial when inserted into unchanged structures. The article contributes by (i) extending Larman’s Laws to the AI era with empirical evidence from agile settings; (ii) specifying measurable indicators for AI‑enabled agility; and (iii) outlining managerial implications that reconcile agile principles with responsible AI. We discuss limitations of the dataset and propose directions for longitudinal and multi‑method research on human–AI teaming, leadership, and structural change.
References
Raisch, S., & Krakowski, S. (2021). Artificial intelligence and management: The automation–augmentation paradox. Academy of Management Review, 46(1), 192–210. https://doi.org/10.5465/amr.2018.0072
Larman, C., & Vodde, B. (2016). Large-scale scrum: More with LeSS. Addison-Wesley.
Dikert, K., Paasivaara, M., & Lassenius, C. (2016). Challenges and success factors for large-scale agile transformations: A systematic literature review. Journal of Systems and Software, 119, 87–108. https://doi.org/10.1016/j.jss.2016.06.013
Amershi, S., Weld, D. S., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal, S. T., Bennett, P. N., Inkpen, K., Teevan, J., Kikin-Gil, R., & Horvitz, E. (2019). Guidelines for human–AI interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–13). ACM. https://doi.org/10.1145/3290605.3300233
Teece, D. J. (2007). Explicating dynamic capabilities: The nature and microfoundations of (sustainable) enterprise performance. Strategic Management Journal, 28(13), 1319–1350. https://doi.org/10.1002/smj.640
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126. https://doi.org/10.1037/xge0000033
Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103. https://doi.org/10.1016/j.obhdp.2018.12.005
Zhang, J., & Sheng, Y. (2022). Trust in AI systems: A review. Information & Management, 59(6), 103686. https://doi.org/10.1016/j.im.2022.103686
Microsoft & GitHub. (2023). The impact of AI on developer productivity: GitHub Copilot field study. https://github.blog/news-insights/research/research-quantifying-github-copilots-impact-on-developer-productivity-and-happiness/
Edmondson, A. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350–383. https://doi.org/10.2307/2666999
Vakili, K., & McGahan, A. M. (2023). AI and organizational learning. Organization Science. Advance online publication.
McKinsey & Company. (2023). The state of AI in 2023: Generative AI’s breakout year. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year

License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.