Sustainable-first banking demands that efficiency is hard-wired.
We’re about to launch the UK’s only e-money current account dedicated to accelerating scientific discovery, as a way to build a sustainable future. That’s a mission we’re deeply committed to.
It’s not enough simply to give our customers a way of contributing to the scientific research that’s shaping the future they want. The infrastructure we’ve built, the technology that powers our accounts, also needs to be as sustainable as possible, with efficiency at its core.
Here’s how we’ve done it.
Event-driven processing (if it’s good enough for whales…)
Birds and aquatic mammals such as whales experience a sleep behaviour called unihemispheric slow-wave sleep — in essence, sleeping with one eye open. At Science Card we’ve applied a similar principle to the software microservices that we run when we process transactions.
Rather than being always-on — and always consuming energy — our payments software only runs a process as and when it’s required.
This is known as event-driven processing, and it means that no process runs unless an event happens that requires it to run.
As an example, when one of our customers uses their debit card to make a payment, that action launches a precisely defined series of events: checking that there’s sufficient money in the account to cover the payment, and that the correct security requirements have been met.
This scenario involves a multitude of distinct events, forming a precise, pre-defined sequence. But outside of these events, when the debit card or Science Card app is not being used, there’s absolutely no need to keep the hundreds of microservices that process these events constantly running.
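To make this concrete, here’s a minimal sketch of what one event-driven microservice in this chain might look like as a Go AWS Lambda function. The event shape, field names and balance lookup are illustrative assumptions, not our production schema or code.

```go
package main

import (
	"context"
	"errors"

	"github.com/aws/aws-lambda-go/lambda"
)

// CardAuthorisationEvent is a simplified, hypothetical event payload;
// a real schema would carry far more detail.
type CardAuthorisationEvent struct {
	AccountID   string `json:"accountId"`
	AmountMinor int64  `json:"amountMinor"` // amount in pence
	Currency    string `json:"currency"`
}

// handler runs only when a card authorisation event arrives; between
// events, no compute is provisioned for this service at all.
func handler(ctx context.Context, evt CardAuthorisationEvent) error {
	balance, err := fetchAvailableBalance(ctx, evt.AccountID)
	if err != nil {
		return err
	}
	if balance < evt.AmountMinor {
		return errors.New("insufficient funds")
	}
	// Further steps (security checks, ledger updates) would be emitted
	// as subsequent events for downstream services to consume.
	return nil
}

// fetchAvailableBalance is a stub: in production this would query a datastore.
func fetchAvailableBalance(ctx context.Context, accountID string) (int64, error) {
	return 10_000, nil
}

func main() {
	lambda.Start(handler)
}
```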
This event-driven approach streamlines our operations and ensures efficiency in our payment processing system. Less waste, less cost, and with absolutely no reduction in our customers’ experience.
The wisdom of clouds: we use up to 40% less energy
Science Card’s software runs in the cloud. We use serverless computing, a straightforward and practical computing model designed for efficiency, which lets us operate without provisioning or managing servers of our own.
Serverless computing has flexibility and scalability built in by default: we pay only for the resources we use, incur no cost when services are idle, and can scale up the computing power available to us on demand. This lets us adjust our computing resources quickly, based on immediate need, and make more effective use of them.
This means that we’re able to use 30–40% less energy than traditional, always-on server systems such as Kubernetes clusters. We also use the AWS cloud, which itself runs predominantly on green energy.
Clearly, then, operating in the cloud is the right approach from a sustainability perspective, and that alone would make cloud computing the wise choice for Science Card.
But it goes further than this.
Costs down, security up
The same attributes that make serverless computing the right choice from a sustainability point of view also enable us to revolutionise our approach to operational cost control.
Serverless computing’s auto-scaling dynamically adjusts resources to real-time demand, and scales back to minimise costs when idle. This model cuts the ongoing expense of reserved CPUs and memory, especially compared with always-on processes in environments like J2EE application servers.
A significant benefit of serverless is its impact on our development team, and that efficiency is also a material driver of cost reduction. We need fewer developers to achieve more, thanks to increased productivity, reduced downtime, and a simplified workload: with no underlying infrastructure to manage, they can focus entirely on coding, free from concerns about system management, provisioning, maintenance, and scaling. This in particular allows the team to dedicate more time and creativity to developing the innovative technology our mission demands.
Freeing development teams from managing server infrastructure also strengthens our security. When there are no servers to manage, nobody can forget to configure them correctly, and no update can slip through unapplied. Our cloud provider, AWS, undertakes server maintenance, and unlike our own lean development squad, it has sizeable teams dedicated to the task. This significantly reduces security risk, particularly when coupled with our event-driven processing, which materially shrinks the attack surface we expose.
New levels of agility, building a new kind of bank
Serverless computing minimises our energy usage, optimises our cost control, and frees up our developers. It’s not just about what it saves, though. We’re setting out to build a new kind of customer experience, and serverless computing provides us with the operational agility and efficiency we need to be able to do this. Critically, it also has future-readiness built in.
Serverless architecture scales up or down swiftly, which is ideal for the variable nature of payment processing: financial transactions cluster in specific timeframes, and by operating only when needed, the system matches those patterns while avoiding unnecessary resource use.
Serverless allows for incremental changes in system architecture, meaning that we’re able to stay agile and rapidly adapt without overhauling our entire system.
And critically for a financial services institution, serverless computing’s fine-grained microservices architecture is adept at creating detailed audit trails. Storing extensive audit data comes with potentially high resource demands: by leveraging serverless, we are able to balance the need for comprehensive auditing with the necessity of operational efficiency.
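As a sketch of that pattern, and assuming a hypothetical DynamoDB table named audit-trail with an illustrative record shape, a microservice can append one immutable audit record per event it handles, so audit writes scale with real activity rather than with uptime:

```go
package audit

import (
	"context"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/feature/dynamodb/attributevalue"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb"
)

// AuditRecord is an illustrative shape for one audit-trail entry.
type AuditRecord struct {
	EntityID  string `dynamodbav:"entityId"`
	Timestamp string `dynamodbav:"timestamp"`
	Action    string `dynamodbav:"action"`
	Actor     string `dynamodbav:"actor"`
}

// writeAudit appends a single record. Because the calling function only
// runs when an event fires, no audit capacity is consumed while idle.
func writeAudit(ctx context.Context, db *dynamodb.Client, rec AuditRecord) error {
	rec.Timestamp = time.Now().UTC().Format(time.RFC3339Nano)
	item, err := attributevalue.MarshalMap(rec)
	if err != nil {
		return err
	}
	_, err = db.PutItem(ctx, &dynamodb.PutItemInput{
		TableName: aws.String("audit-trail"), // hypothetical table name
		Item:      item,
	})
	return err
}
```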
The outcome? We are able to be faster and leaner, operating at as little as 30% of the cost levels of the industry’s incumbents. These aren’t peripheral benefits. They are fundamental to achieving our mission and to providing our customers with a genuinely sustainable, cost-efficient and high-impact alternative to their other banks.
How it works: Science Card’s serverless backend
Our cloud technology stack is 100% serverless, running on AWS and deployed using IaC (infrastructure-as-code) tools. We collaborate with over ten external partners through secure APIs.
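We haven’t named a specific tool here, so by way of flavour, this is one plausible shape such a deployment can take, sketched with the AWS CDK for Go; the stack, function and asset names are hypothetical.

```go
package main

import (
	"github.com/aws/aws-cdk-go/awscdk/v2"
	"github.com/aws/aws-cdk-go/awscdk/v2/awslambda"
	"github.com/aws/jsii-runtime-go"
)

// A minimal stack declaring a single serverless function.
// Names, paths and sizing are illustrative only.
func main() {
	app := awscdk.NewApp(nil)
	stack := awscdk.NewStack(app, jsii.String("PaymentsStack"), nil)

	awslambda.NewFunction(stack, jsii.String("CardAuthoriser"), &awslambda.FunctionProps{
		Runtime:    awslambda.Runtime_PROVIDED_AL2(), // custom runtime for a compiled Go binary
		Handler:    jsii.String("bootstrap"),
		Code:       awslambda.Code_FromAsset(jsii.String("./build/card-authoriser"), nil),
		MemorySize: jsii.Number(128), // pay only for what each invocation uses
	})

	app.Synth(nil)
}
```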
For our high-performance services we primarily use Go, whose quick cold-start times suit the rapid scaling of services. For tasks requiring more flexibility, mainly when interfacing with third-party APIs and handling variable JSON structures, we use TypeScript. Python is our go-to for internal reporting and analytics, thanks to its versatility and robust data-processing capabilities. All of these microservices, written in different languages, run in harmony on our serverless platform.
A key aspect of our payment system is ensuring idempotency: a process can safely run multiple times while the underlying action executes only once. For instance, if updating a payment’s status fails, the system checks on restart whether the files for that payment have already been created. We achieve this with a combination of serverless SQL and NoSQL databases, such as AWS Aurora and DynamoDB, integrated with state machines and queues. This setup manages the order of events efficiently and handles exceptions with granular restarts.
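One common way to implement such an idempotency guard, sketched below assuming a hypothetical DynamoDB table named payment-idempotency, is a conditional write: the first attempt claims the payment’s key, and any restart finds the key already present and skips the work.

```go
package payments

import (
	"context"
	"errors"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
)

// claimPayment returns true only on the first attempt to process a
// payment. A retry after a crash finds the key already recorded, so
// the action itself runs at most once however often the process restarts.
func claimPayment(ctx context.Context, db *dynamodb.Client, paymentID string) (bool, error) {
	_, err := db.PutItem(ctx, &dynamodb.PutItemInput{
		TableName: aws.String("payment-idempotency"), // hypothetical table
		Item: map[string]types.AttributeValue{
			"paymentId": &types.AttributeValueMemberS{Value: paymentID},
		},
		// The write fails if a record for this payment already exists.
		ConditionExpression: aws.String("attribute_not_exists(paymentId)"),
	})
	if err != nil {
		var ccf *types.ConditionalCheckFailedException
		if errors.As(err, &ccf) {
			return false, nil // already processed or in progress: skip
		}
		return false, err
	}
	return true, nil
}
```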
Our microservices architecture is designed to operate at multiple speeds:
Fast services: an example is the synchronisation of card spending with bank balances for transfers, which occurs in just a few milliseconds.
Slow services: these include user onboarding, which involves biometric authentication and ID validation, taking a minute or two due to dependencies on third-party services.
Multi-gear services: services such as bank transfers require AML and fraud screening. The screening is quick, but it may take longer to resolve if a transaction is flagged for compliance review.
Our serverless microservices are finely tuned to handle these diverse requirements seamlessly. Fast services benefit from rapid scaling, spinning up many execution environments in parallel. Slow services, on the other hand, consume minimal resources, remaining “off” until an event triggers them to continue. This approach ensures efficient resource utilisation across all our service operations.
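To close, a minimal sketch of the “multi-gear” pattern using a queue-triggered Go function; the queue wiring, screening call and helper names are placeholders rather than our actual services.

```go
package main

import (
	"context"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// handler is invoked only when a transfer lands on the screening queue;
// while nothing is queued, nothing runs.
func handler(ctx context.Context, evt events.SQSEvent) error {
	for _, msg := range evt.Records {
		flagged, err := screenForAMLAndFraud(ctx, msg.Body) // fast path
		if err != nil {
			return err // message returns to the queue and is retried
		}
		if flagged {
			// Slow path: park the transfer on a review queue. A separate
			// service picks it up when compliance resolves the case, and
			// no resources are consumed in the meantime.
			if err := sendToReviewQueue(ctx, msg.Body); err != nil {
				return err
			}
			continue
		}
		// Clean transactions proceed immediately.
	}
	return nil
}

// screenForAMLAndFraud is stubbed: real screening calls a third-party API.
func screenForAMLAndFraud(ctx context.Context, payload string) (bool, error) {
	return false, nil
}

// sendToReviewQueue is stubbed: it would publish the payload to a queue.
func sendToReviewQueue(ctx context.Context, payload string) error {
	return nil
}

func main() {
	lambda.Start(handler)
}
```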