The biggest challenge the banking industry faces today is the growth of Internet finance: intermediaries and Internet financial services providers have drawn many retail customers away from traditional banks and are now working to attract enterprise customers as well.
In 2020, this shift was accelerated by the global outbreak of COVID-19, which directly drove exponential growth in contactless transactions and boosted the development of Internet finance in the Middle East and beyond. To tackle the challenges Internet finance brings and improve their ability first to retain and then to gain customers, banks must make the most of their reputations for relative trustworthiness and reliability, while also greatly improving the flexibility and convenience of their service rollouts.
To capitalize on the use of data, banks have a growing need to deploy data storage architecture that best suits their size and their service development expectations. As the data foundation that supports the modernization of the banking industry, storage technology needs to be clean, tough, and agile. But with a plethora of options on the market today, it can be difficult to evaluate the right solutions. Two factors are coming to the forefront of that decision making: service stability, and tailored service modules.
The Starting Point
As a starting point, all banks share the same requirement for service stability: always-online services. A reliability of 99.999 percent for single devices simply isn't sufficient to meet this requirement. All-flash storage for stable performance and 3-Data-Center (3DC) geo-redundancy for multi-level Disaster Recovery (DR) protection have therefore become the basic configuration for mission-critical services.
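To put these figures in perspective, the following minimal sketch translates availability percentages into expected downtime. The 99.999 percent value comes from the text above; the assumption that two sites fail independently is a simplification for illustration, not a measurement of any particular product.

```python
# Minimal sketch: converting availability into expected downtime per year.
# The independence assumption for the active-active pair is illustrative only.

HOURS_PER_YEAR = 365 * 24

def downtime_hours(availability: float) -> float:
    """Expected downtime per year for a given availability."""
    return (1 - availability) * HOURS_PER_YEAR

single_device = 0.99999  # "five nines" for a single array
print(f"Single device: ~{downtime_hours(single_device) * 60:.1f} minutes/year")

# Two sites in active-active mode: service is lost only if both fail at once
# (assuming, unrealistically, fully independent failures).
active_active = 1 - (1 - single_device) ** 2
print(f"Active-active pair: ~{downtime_hours(active_active) * 3600:.3f} seconds/year")
```

Even a single "five nines" device implies roughly five minutes of downtime a year, which is why redundant architectures, not device-level reliability alone, underpin always-online services.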
However, when selecting a specific solution, a bank should design the solution based on its own unique situation and environment. To give an example, one bank used a top vendor’s products to implement 3DC geo-redundancy ring networking. The two primary sites worked in dual-active mode and replicated the data to the third site to form a ring network. However, the country’s long-haul network was unstable, leading to numerous bit errors — and even fiber cuts at times. In networking modes such as this, the coupling between sites is too strong. Frequent switchovers caused by network instability mean that a large amount of data that is to be replicated accumulates in the memory of the main site.
When the memory is exhausted, faults occur. In regions where the long-haul network is unstable, it's imperative to minimize the impact of network performance on the solution and therefore to reduce the coupling between sites. In this example, the vendor should have deployed a tightly coupled active-active configuration between the two primary sites, and formed loosely coupled asynchronous replication relationships between those sites and the third site.
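The sketch below illustrates the loose-coupling idea: pending replication data is held in a bounded buffer, and when an unstable long-haul link lets the backlog grow past that bound, the link degrades to a later resync instead of exhausting the primary site's memory. The class and method names (AsyncReplicationLink, enqueue_write, drain) are hypothetical and do not represent any vendor's API.

```python
from collections import deque

class AsyncReplicationLink:
    """Ships writes to a remote site without blocking the primary.

    Instead of letting pending writes accumulate without limit (and exhaust
    memory when the long-haul link flaps), the buffer is bounded; overflow
    switches the link to a snapshot-based resync once connectivity returns.
    """

    def __init__(self, max_buffered_bytes: int):
        self.max_buffered_bytes = max_buffered_bytes
        self.buffered_bytes = 0
        self.pending = deque()
        self.needs_resync = False

    def enqueue_write(self, write_id: int, size_bytes: int) -> None:
        if self.needs_resync:
            return  # changes will be picked up by the next resync, not buffered here
        if self.buffered_bytes + size_bytes > self.max_buffered_bytes:
            # Degrade gracefully: discard the in-memory queue and mark a resync,
            # rather than letting memory pressure take down the primary site.
            self.pending.clear()
            self.buffered_bytes = 0
            self.needs_resync = True
            return
        self.pending.append((write_id, size_bytes))
        self.buffered_bytes += size_bytes

    def drain(self, link_budget_bytes: int) -> None:
        """Called while the long-haul link is healthy; ships what the link allows."""
        while self.pending and link_budget_bytes > 0:
            _, size = self.pending.popleft()
            self.buffered_bytes -= size
            link_budget_bytes -= size
```

The key design choice is that the remote replica can fall arbitrarily far behind without ever threatening the stability of the primary sites.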
Moreover, services must also remain online during configuration changes, data migration, upgrades, and device maintenance. Multiple factors, including human factors and the external environment, need to be considered: more than 20 percent of faults in a Data Center (DC) are caused by human error. Conducting repeated checks and drills before change operations, and formulating recovery plans in advance, is essential to mitigating this type of fault.
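As one way of making those pre-change checks repeatable, an O&M team might script them along the lines of the sketch below. The check names and the rule that a documented rollback plan is mandatory are assumptions for illustration, not any specific bank's procedure.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Check:
    name: str
    run: Callable[[], bool]  # returns True if the precondition holds

def preflight(checks: List[Check], rollback_plan_documented: bool) -> bool:
    """Refuse to proceed unless every check passes and a recovery plan exists."""
    failed = [c.name for c in checks if not c.run()]
    if failed:
        print("Blocked by failed checks:", ", ".join(failed))
        return False
    if not rollback_plan_documented:
        print("Blocked: no documented rollback/recovery plan.")
        return False
    print("All preconditions met; the change window may proceed.")
    return True

# Example usage with placeholder checks:
checks = [
    Check("recent full backup verified", lambda: True),
    Check("replication lag within threshold", lambda: True),
    Check("change rehearsed in the test environment", lambda: True),
]
preflight(checks, rollback_plan_documented=True)
```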
The Next Step
Once the issue of service stability has been addressed, financial institutions must then look at the common service modules most relevant to them. Banks, for example, tend to focus on competitive requirements. Many banks choose Software-Defined Storage (SDS) because it's easy to obtain, expand, and manage. Meanwhile, Hyper-Converged Infrastructure (HCI) integrates computing, network, and storage resources, simplifying management and capacity expansion.
Customers have multiple options here: open-source SDS software on general-purpose servers; commercial SDS software from a vendor on general-purpose servers; commercial distributed storage products; commercial HCI software on general-purpose servers; or integrated commercial HCI software/hardware products. As no product is perfect in every respect, enterprises must choose the solution that best satisfies their needs.
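One simple way to structure that choice is a weighted decision matrix, sketched below. Every criterion, weight, and score is an illustrative placeholder that a bank would replace with its own assessment.

```python
# Illustrative weighted decision matrix for the storage options listed above.
# Scores run 1 (worst) to 5 (best); all values are placeholders, not findings.

options = {
    "Open-source SDS + general-purpose servers":   {"cost": 5, "low_in_house_effort": 1, "vendor_support": 2},
    "Commercial SDS + general-purpose servers":    {"cost": 4, "low_in_house_effort": 3, "vendor_support": 4},
    "Commercial distributed storage product":      {"cost": 3, "low_in_house_effort": 4, "vendor_support": 5},
    "Commercial HCI software + servers":           {"cost": 3, "low_in_house_effort": 4, "vendor_support": 4},
    "Integrated commercial HCI appliance":         {"cost": 2, "low_in_house_effort": 5, "vendor_support": 5},
}

# Higher weight = the criterion matters more to this (hypothetical) bank.
weights = {"cost": 0.3, "low_in_house_effort": 0.4, "vendor_support": 0.3}

def score(criteria: dict) -> float:
    return sum(weights[k] * v for k, v in criteria.items())

for name, criteria in sorted(options.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{score(criteria):.2f}  {name}")
```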
With their own data storage Operations and Maintenance (O&M) teams and plenty of development and testing personnel, large banks typically have strong R&D capabilities and tend to choose open-source software and general-purpose servers. However, the developers working at banks aren't professional storage experts, and they have limited means of optimizing and monitoring the underlying layers, which limits their ability to deliver optimal performance in every scenario. Instead, banks rely on the maturity of the open-source software itself to monitor and handle faults.
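In practice, relying on the maturity of the software often means building O&M tooling on top of the health reporting it already exposes. The sketch below assumes a Ceph cluster purely as an example of an open-source SDS (the article names no specific product) and polls its built-in health status; the alerting step is a placeholder for the bank's real channel.

```python
# Illustrative sketch only: poll an (assumed) Ceph cluster's health and alert
# when it leaves HEALTH_OK, leaning on the software's own health checks rather
# than deep tuning of the underlying layers.

import json
import subprocess

def cluster_health() -> dict:
    """Query Ceph for its overall health status as JSON."""
    out = subprocess.run(
        ["ceph", "health", "detail", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

def check_and_alert() -> None:
    health = cluster_health()
    status = health.get("status", "UNKNOWN")
    if status != "HEALTH_OK":
        # Placeholder for the bank's real alerting channel (email, ticket, pager).
        print(f"ALERT: cluster status is {status}")
        for name, check in health.get("checks", {}).items():
            print(f"  {name}: {check.get('summary', {}).get('message', '')}")
    else:
        print("Cluster healthy.")

if __name__ == "__main__":
    check_and_alert()
```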
Critical Choices
In this context, HCI simplifies management and capacity expansion, but it tightly couples computing and storage resources: during capacity expansion, one or the other often ends up as surplus. Determining the ratio of computing resources to storage resources in advance is therefore key to deploying HCI. Using HCI products integrated with applications is also feasible, as solution providers can then calculate the optimal ratio of the various resources more accurately.
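A back-of-the-envelope sizing exercise like the one below is one way to estimate that ratio in advance. Every input value (VM counts, node specifications, overcommit and efficiency factors) is a hypothetical planning figure, not a recommendation.

```python
import math

# Hypothetical workload profile
required_vms = 400
vcpus_per_vm = 4
ram_gb_per_vm = 16
usable_tb_per_vm = 0.5            # effective capacity each VM needs

# Hypothetical HCI node specification
cores_per_node = 64
vcpu_overcommit = 4               # vCPU : physical core ratio
ram_gb_per_node = 512
raw_tb_per_node = 48
capacity_efficiency = 0.55        # after replication/erasure coding and reserves

# Nodes required by each resource dimension
nodes_for_cpu = required_vms * vcpus_per_vm / (cores_per_node * vcpu_overcommit)
nodes_for_ram = required_vms * ram_gb_per_vm / ram_gb_per_node
nodes_for_cap = required_vms * usable_tb_per_vm / (raw_tb_per_node * capacity_efficiency)

nodes = math.ceil(max(nodes_for_cpu, nodes_for_ram, nodes_for_cap))
print(f"CPU-driven: {nodes_for_cpu:.1f}, RAM-driven: {nodes_for_ram:.1f}, "
      f"capacity-driven: {nodes_for_cap:.1f} -> deploy {nodes} nodes")
# Whichever dimension needs the most nodes sets the cluster size; the other two
# dimensions become the surplus resources mentioned above.
```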
In the end, the choices that financial institutions make in deploying their data storage architecture—particularly around service stability and service modules—will have an increasingly profound impact on their mission-critical services in today’s digital society.