Unleashing Fintech Performance with In-Memory Computing

It’s no surprise that the financial technology (fintech) market, which grew from $1.8 billion in 2010 to more than $19 billion in 2015, is soaring. Financial institutions are increasingly dealing with growing volumes of data, higher transaction volumes, and increasing regulations. The more financial institutions can apply advanced third-party solutions to their most pressing real-time data challenges, the better.


With so much riding on high performance, how do fintech companies ensure that their applications are up to the task? For Misys, a financial services software provider with more than 2,000 clients, including 48 of the world’s 50 largest banks and 12 of the top 20 asset managers, the answer was in-memory computing. The infrastructure demands on Misys were tremendous as the company accessed huge amounts of trading and accounting data to drive high-speed transactions and real-time reporting. Processing bottlenecks were limiting its ability to scale and launch new services. An in-memory computing platform delivered the speed, scale and high availability the company needed for a modern fintech architecture.

In-memory computing is not a new idea. Until recently, however, the cost of RAM was too high to implement large-scale in-memory computing infrastructures for any but very high-value, time-sensitive applications such as credit card fraud detection or high-speed trading. The cost of memory has been dropping around 30 percent per year for many years, and today it is just a little more expensive than disk. This means that in-memory computing is now cost-effective for a broad range of applications that require high levels of performance.

By keeping data in RAM, in-memory computing eliminates the delay when retrieving data from disk prior to processing. When in-memory computing is implemented on a distributed computing platform, further performance gains can be realized from parallel processing across the cluster nodes. The total performance gains from in-memory computing can be 1,000x or more.
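The routing idea behind such a distributed in-memory store can be sketched in a few lines of Python. This is an illustrative model only, not any vendor’s API: the `Node`, `partition_for` and `route` names are hypothetical, but they show how a key is hashed to a partition and served entirely from the owning node’s RAM, with no disk seek on the read path.

```python
# Hypothetical sketch of key-to-node routing in a distributed in-memory store.
import hashlib

class Node:
    """One cluster node holding its share of the data in RAM."""
    def __init__(self, name):
        self.name = name
        self.store = {}  # in-memory key/value storage

def partition_for(key, num_partitions):
    """Deterministically map a key to a partition via a stable hash."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % num_partitions

def route(key, nodes, num_partitions=64):
    """Pick the node that owns the key's partition."""
    return nodes[partition_for(key, num_partitions) % len(nodes)]

nodes = [Node("node-a"), Node("node-b"), Node("node-c")]

# Reads and writes for a given key always hit the same node's RAM.
route("account:1001", nodes).store["account:1001"] = {"balance": 250.0}
owner = route("account:1001", nodes)
print(owner.store["account:1001"]["balance"])
```

Because the hash is stable, every client routes a given key to the same node, which is what lets the cluster scale horizontally: adding nodes redistributes partitions rather than any single node’s entire dataset.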

Industry Impact of In-Memory Computing

In-memory computing currently supports data-intensive applications in many industries. For example, financial services firms face tremendous challenges from the tsunami of data created by 24-hour mobile banking. At the same time, financial regulators continue to add new reporting requirements, so firms must be able to increase the speed and accuracy of calculations when currency exchange rates, interest rates, commodity prices and stock prices change.

Similarly, in the hospitality sector, online travel booking sites are looking to migrate from legacy technology to a modern platform that will enable them to scale cost-effectively. They face 24×7 web-scale traffic while retrieving availability and pricing information from a wide variety of sources, processing it, and delivering it to visitors in real time. Web-scale sports betting solutions face the same scaling challenges: a growing number of users, a growing number of sports, more real-time betting scenarios, and increasing regulations and oversight.

In all these cases, a scalable in-memory computing platform offers lightning fast parallel processing across a cluster of commodity servers. This allows companies to plan for growth and scale more cost-effectively. Moving forward, in-memory computing will be critical to supporting a wide variety of industries including online business services, the Internet of Things, telecom, healthcare, ecommerce and much more, while addressing use cases such as digital transformation, web-scale applications, and hybrid transactional/analytical processing (HTAP).

In-Memory Computing Platforms – A Look Inside

To satisfy the requirements for extreme speed and scale with high availability for a variety of different applications, fintech companies use the latest generation of in-memory computing platforms. These typically employ the following in-memory computing components:

  • In-memory data grid – A memory cache inserted between application and database layers of new or existing applications – the cache is automatically replicated and partitioned across multiple nodes, and scalability is achieved simply by adding nodes to the cluster
  • In-memory compute grid – Distributed parallel processing for accelerating resource-intensive compute tasks
  • Distributed SQL – Support for SQL queries running across the cluster nodes including support for DDL and DML, horizontal scalability, fault tolerance, and ANSI SQL-99 compliance
  • In-memory service grid – Provides control over services deployed on each cluster node – guarantees continuous availability of services in case of node failures
  • In-memory streaming and continuous event processing – Customizable workflow capabilities for running and processing one-time or continuous queries – offers parallel indexing and streaming of distributed SQL queries for real-time analytics
  • In-memory Apache® Hadoop™ acceleration – In-memory computing platform layered atop an existing Hadoop Distributed File System (HDFS) to cache the Hadoop data – MapReduce is run in the in-memory compute grid
  • Persistent store – Ability to keep the full dataset on disk, with only user-defined, time-sensitive data in memory – enables optimal tradeoff between infrastructure costs and application performance by adjusting the amount of RAM used in the system
  • Broad set of integrations – Native support for a wide range of third-party solutions such as Apache® Spark™, Apache® Cassandra™, Apache® Kafka™ and Tableau
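The first component above, the in-memory data grid, is a cache inserted between the application and database layers. A minimal read-through/write-through sketch in plain Python makes the pattern concrete; the `DataGrid` class and the `slow_db` dict are illustrative stand-ins, not a real product’s API.

```python
# Illustrative backing "database" -- a dict standing in for slow disk-based storage.
slow_db = {"trade:1": {"symbol": "EURUSD", "qty": 1_000_000}}

class DataGrid:
    """Hypothetical read-through cache: serve from RAM, fall back to the database."""
    def __init__(self, backing_store):
        self.backing_store = backing_store
        self.cache = {}  # the in-memory layer

    def get(self, key):
        if key in self.cache:                     # hit: served from memory
            return self.cache[key]
        value = self.backing_store.get(key)       # miss: load from the database
        if value is not None:
            self.cache[key] = value               # keep it in RAM for next time
        return value

    def put(self, key, value):
        """Write-through: update the cache and the backing store together."""
        self.cache[key] = value
        self.backing_store[key] = value

grid = DataGrid(slow_db)
print(grid.get("trade:1"))   # first read loads from the "database"
print(grid.get("trade:1"))   # second read is served from RAM
```

In a real platform this cache would also be replicated and partitioned across cluster nodes, as the bullet describes, so capacity and throughput grow by simply adding nodes.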

Conclusion

For fintech companies looking to cost-effectively deliver the speed, scale and high availability their clients demand, in-memory computing platforms are a proven solution. The ability to easily scale out these platforms means that fintech companies can expand their infrastructure as needed while ensuring optimal performance – even as they continue to innovate and create new, differentiated services.

 
