Challenges

Description

Organizational Cultural Barriers

Organizational cultural barriers can complicate AI’s successful deployment. Fountaine et al. (2019) find that many companies struggle to implement AI technology because of cultural and organizational barriers. Some leaders mistakenly believe that AI is a simple plug-and-play solution that delivers immediate benefits from pilot projects in isolated business units. However, the authors note that achieving the desired customer experience takes time, effort, and company-wide deployment. Banks must recognize that AI deployment is about more than technology; it requires overcoming cultural barriers built and reinforced over decades. Dismantling these barriers to enable organizational transformation can be complex.

Unconducive Regulatory Environment

The regulatory environment can complicate AI transformation if regulations are not enforced sensibly. Truby et al. (2022) find that regulators play an essential role in the development of AI. However, they caution against implementing strict liability rules, which could bring high costs, complexity, and slow progress. Instead, they propose a “sandbox regulation” (p. 293) approach in which regulators permit experimental development within controlled boundaries and with appropriate oversight. While regulations are essential for governing AI implementation and management, they must be thoughtfully crafted and enforced.

Privacy of Customer Data

Banks should take the privacy of customer data seriously, as lapses can have material ethical and financial consequences. Fares et al. (2022) note that protecting customers’ privacy is essential when collecting and sharing their information during [and after] AI implementation. As Naik et al. (2022) observe, the need to secure this data can make AI implementation slower than expected. Banks expose themselves to ethical and legal risks when they collect and analyze customers’ private information without consent.

Cyber Attacks

Even with AI deployment, cyber security remains a real threat. First, Naik et al. (2022) note that fraudsters can exploit algorithms and newly discovered software vulnerabilities; hackers can create AI-powered programs that learn from security systems’ responses to attacks, making future attacks more likely to succeed. Second, Naik et al. (2022) find that banks must adequately supervise AI-powered security systems, particularly when safeguarding extensive computer networks. Because cyber attackers will inevitably attempt to penetrate a bank’s AI-powered security barriers, adequate supervision of the technology is essential to ensure a fast response in the event of a breach. Third, AI needs a suitable database from which to access and learn effectively. Fourth, investigating cybercrime incidents can be challenging because anonymity preservation is built into much AI technology configuration. As AI technology evolves and becomes more sophisticated, the risk of cybercriminals using similar technology to exploit vulnerabilities is inevitable. This risk is amplified when banks implement AI in isolated areas of the organization rather than across the entire enterprise.