Why Your Bank Account Just Got Hit Multiple Times

Jack’s Rants dives into the world of banking mainframes and the batch jobs that quietly keep everything moving — until one of them doesn’t. What starts as a routine overnight run turns into multiple transactions on thousands of accounts: transactions that didn’t happen in the real world, but the mainframe says otherwise.

Listen now on Apple Music, Spotify, Deezer, YouTube, or wherever you get your panic attacks.

Batch Job Blues: Why Your Bank Account Just Got Hit Multiple Times

Welcome to Jack’s Rants, where I, Jack Smith, break down IT and security news that made waves—whether it’s a quick local stir or a worldwide “oh no.” This isn’t a massive horror story episode, but it’s something every tech-interested soul, bank customer, or IT pro will want to think about. So, buckle up. Today, we’re talking about why your bank account might suddenly look emptier than you remember, all thanks to the humble mainframe and an oops in batch jobs.


What Sparked This Rant?

A couple of weeks ago, there was quite the kerfuffle in the local news. A major bank (no names, but you can probably toss a coin and guess) experienced a pretty wild technical hiccup.

Imagine checking your bank account and seeing your payment to the grocery store leave your account several times: multiple deductions for the same purchase. Yikes. Whether it’s one, two, or ten times your rent, the effects are never pretty.

“When suddenly 8, 9, 10k of your local currency, in this case Euros, would disappear from your bank account, you would go into the negative. You would have all kinds of costs associated with that.”


The Bank Fiasco: What Happened?

Let’s set the scene:

  • Customers noticed multiple repeated transactions.
  • Money vanished into thin air—overdrafts, fees, and all the emotional fun that comes with realizing your account balance isn’t what you thought it was.
  • Social media and news outlets went into a mini frenzy. (Wouldn’t you?)

And this isn’t a one-off. If you sniff around, you’ll find stories about people at other banks experiencing suspicious repeats—payments going in or out more than once.

Not Just a Local Problem

It’s not just “that one bank.” A fair few financial institutions have had this happen over the years. The core reason? Banks still use mainframes, and batch jobs sometimes don’t behave.

“This has happened before, left and right at different banks.”


A Quick Crash Course: Mainframes and Batch Jobs

You might be thinking: “Mainframes? Isn’t that tech from the 1960s?”
Yes. And no.

Why Banks Still Love Their Mainframes

Mainframes are the big iron workhorses of the banking world. Think of them as the freight train of IT: massive, reliable, maybe a little creaky, but they rarely crash and they keep your data safe. Apps and the cloud are cool and all, but banks? They love something that just works.

Mainframe Pros:

  • Consistency: Rules the day. You know what’s going in and out.
  • Transaction Control: Every single cent is tracked.
  • Resource Management: They run massive operations efficiently.
  • Serialization: All the operations can happen in strict order (no messy overlaps).

So, What’s a Batch?

In mainframe speak, a batch job is a bundle of work—think of it as a bucket of transactions waiting, scheduled to run together at a specific time.

Life of a Batch:

  1. During the day, the main database is locked down, read-only—no random changes allowed. (Safety first!)
  2. Any new transactions—swipe your card at a shop or transfer funds—are stuck in a temporary area.
  3. At night, these “in-waiting” jobs are released and applied to the main database in one fell swoop.
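The three-step cycle above can be sketched in a few lines of deliberately simplified Python. All the names here are illustrative, not how a real mainframe (JCL, VSAM, DB2, and friends) actually does it:

```python
# Toy model of the day/night batch cycle: during the day the master
# record is read-only and new transactions only land in a "wait state"
# list; the nightly batch applies them all in one pass.

def day_phase(pending, amount):
    """Daytime: the master record is untouched; the transaction
    is only queued in the temporary wait-state data set."""
    pending.append(amount)

def night_batch(master, pending):
    """Nighttime: apply every queued transaction to the master
    balance, then clear the wait-state file."""
    for amount in pending:
        master["balance"] += amount
    pending.clear()
    return master

master = {"balance": 1000.0}   # the committed "master" record
pending = []                   # the wait-state data set

day_phase(pending, -3.0)       # coffee
day_phase(pending, -50.0)      # groceries
night_batch(master, pending)   # the overnight run
# master["balance"] is now 947.0 and the wait-state file is empty
```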

“When Batches Attack!”: How It All Goes Wrong

Here’s where things got spicy in the recent bank mess-up.

“When you look at what’s a batch, what does it do, how does it work? Well, a batch in mainframe speak is a data set in wait state…”

Let’s break it down with a little table magic:

| Mainframe Phase     | What Happens?                            |
|---------------------|------------------------------------------|
| Daytime             | Database is read-only                    |
| Transaction Happens | Added to a temporary file (wait state)   |
| Night Time          | Batch job applies temp file to main DB   |

Two Databases? Sort of.

  • Main Data Set: The “master” record—can’t touch this (during the day, at least).
  • Wait State Data Set: All the changes you (and thousands of others) made during the day.

When you peek at your account balance in your banking app, you’re seeing the main record plus whatever is on hold in the temp file. It all looks seamless—but under the hood, you’re working with two versions.
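That “main record plus whatever is on hold” view can be expressed in one line. A hypothetical sketch, not any bank’s actual code:

```python
# The balance your banking app shows is the committed master amount
# plus the sum of everything still sitting in the wait-state file.

def visible_balance(master_balance, wait_state):
    # Committed amount plus all pending (not-yet-committed) deltas.
    return master_balance + sum(wait_state)

committed = 1000.0
pending = [-3.0, -50.0]   # today's coffee and groceries, still in wait state
print(visible_balance(committed, pending))   # 947.0
```

It looks like one balance to you, but under the hood it is two data sets glued together on the fly.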


How Banks Process Transactions

Still awake? Good. Let’s walk through what happens when you tap your debit card:

  1. You buy a coffee.
    • The €3 charge is logged in a temporary file.
  2. You check your account.
    • The app shows your “current” balance factoring in that €3 from the temp file.
  3. Night rolls around.
    • The batch job commits your coffee debt to the main database. Your record is updated for real.

This process avoids the database getting messy with every minor transaction all day long. It groups updates into manageable nightly chunks—in banking, this is stability.



The Danger Zone: Multiple Commits

So, what if that batch is run… not once but seven, eight, nine, or ten times?

Imagine replaying your last week over and over and over. Sounds tiring—now imagine your account is charged €50 not once, but ten times for the same transaction. That adds up fast.

Example:

| Transaction      | # of Times Applied | Net Effect                       |
|------------------|--------------------|----------------------------------|
| Rent (€1000)     | 3                  | You pay €3000                    |
| Groceries (€50)  | 4                  | You pay €200                     |
| Salary (€2000)   | 2                  | You get €4000 (if you’re lucky!) |

You can see how it gets ugly quickly:

  • Your balance depletes fast.
  • Overdrafts start.
  • Automatic payments bounce (hello, late rent).
  • Fees pile on—even if it’s not your fault.
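The core problem is that applying a wait-state file is not idempotent: every extra run debits (or credits) you all over again. A simplified sketch of why the damage scales with the number of re-runs:

```python
# Applying the same batch more than once multiplies its effect.
# times > 1 models the failure scenario from the article.

def apply_batch(balance, batch, times=1):
    """Apply the whole batch 'times' times and return the new balance."""
    for _ in range(times):
        for amount in batch:
            balance += amount
    return balance

batch = [-1000.0, -50.0]   # rent and groceries, each meant to apply once

print(apply_batch(2000.0, batch, times=1))    # 950.0  -- the correct run
print(apply_batch(2000.0, batch, times=10))   # -8500.0 -- the nightmare
```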

“Instead of one or two series, they got erroneously applied 7, 8, 9, 10 times… you replay financial transactions… until your balance runs out.”



How Could This Happen?

Let’s not just blame the tech—let’s talk about human error and process mishaps.

Possible Culprits:

  1. Scheduling Errors
    • There’s usually a big, detailed scheduling database that says: “This temp file applies at 2 AM, that one at 4 AM.” If it gets out of sync, batches re-run. Oops.
  2. Training in Production
    • The mother of all “don’t do this!” stories: running student training on the actual, live mainframe.
    • “Student 1 applies the wait state, Student 2 applies it, and so on…” And your bank account gets walloped.

“What would be the most… Well, I don’t want to use the word funny… some training might’ve been going on and instead of doing it on a test environment… they just applied the wait state on production multiple times.”
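One classic defence against both of these culprits, whether or not this particular bank used it, is a run marker: record which batch has already been applied and refuse to apply it twice. A hypothetical sketch (real schedulers track this in a control table, not a Python set):

```python
# Idempotency guard: a batch identified by its ID and business date
# can only be applied once; a second attempt is rejected outright.

applied_runs = set()   # stands in for a scheduler's control table

def run_batch(batch_id, balance, batch):
    """Apply the batch once; raise if it was already applied."""
    if batch_id in applied_runs:
        raise RuntimeError(f"batch {batch_id} already applied; refusing to re-run")
    applied_runs.add(batch_id)
    return balance + sum(batch)

balance = run_batch("2024-05-01-NIGHT", 2000.0, [-1000.0, -50.0])   # 950.0
try:
    # A confused scheduler (or an eager trainee) tries again...
    run_batch("2024-05-01-NIGHT", balance, [-1000.0, -50.0])
except RuntimeError:
    pass   # ...and the duplicate run is rejected; balance stays at 950.0
```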

The Reality

Do we know what exactly happened each time? Not always. But these are tried-and-terrible failure scenarios.


Are Mainframes To Blame?

Nope. Mainframes are still, all these years later, the backbone of entire industries.

Why Mainframes Just Won’t Die

  • Big data pumps: Banks, logistics, global retail, you name it.
  • Proven: They’ve worked forever.
  • Scaling: When Excel can’t cut it and SAP starts to wheeze, mainframes step in.
  • Longevity: Same code, same process—decades later.


But They’re Not Immune!

The very thing that helps—batching—can cause cascading errors when a mistake gets repeated.

“Mainframes are still the big, big data pumps that keep the industry going.”


Cleanup on Aisle 3: Fixing The Mess

All right, so the batch job ran seven times and my account is negative. Now what?

Steps to Undo the Damage

  1. Analyze What Happened
    • Figure out which transactions were applied (and re-applied).
  2. Reverse the Actions
    • The bank can create a reversal batch: a series of opposite transactions to claw back the mistake.
    • E.g., if you were debited €50 ten times instead of once, the reversal would add back the nine extra charges: €450.
  3. Testing (and More Testing)
    • In mainframe land, nothing goes live without endless checks.
    • Development: Build the reversal batch.
    • Integration Test: IT confirms it doesn’t break everything else.
    • Acceptance Test: Business checks that it worked—no more, no less than intended.

“If you know what was in your temporary database, you can of course reverse it… make a new data set, a new temp file with the reverse actions and replay it as many times as needed and then things should level out.”
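The quote above translates to a simple rule: if the batch ran N times instead of once, emit the opposite transactions N − 1 times. A minimal sketch, assuming the bank knows exactly what was in the wait-state file and how often it was applied:

```python
# Build a compensating (reversal) batch for the extra applications.

def build_reversal(batch, times_applied):
    """Return opposite transactions for every application beyond the
    first one (the first application was legitimate)."""
    extra = times_applied - 1
    return [-amount for amount in batch for _ in range(extra)]

original = [-50.0]                                  # the €50 debit
reversal = build_reversal(original, times_applied=10)
print(sum(reversal))   # 450.0 -- nine €50 credits to level things out
```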

Why This Takes Forever

Mainframes are waterfall, not agile:

  • Weeks, not days.
  • Banks don’t want to refund too much (or too little), so every change is cautious and slow.
  • The goal: fix the error and nothing else.

“If you’re looking at timing, that could be weeks. It’s not a matter of days. And doing it quickly… you risk putting in even more errors than what you initially started out with.”


Mainframes: Immovable Object Meets Human Error

Let’s be clear: mainframes are not evil. For most of us, they are invisible, reliable engines.

What They Get Right:

  • Solid as a rock: “Works solid, it’s stable, people know it.”
  • Compatibility: You can keep old code running for years.
  • Support: Pay IBM (or whoever made yours) and you’re set.

The Catch:

  • If a mistake slips in, it tends to repeat big.
  • Human error or schedule slip = systemwide pain.

The Long Road to Resolution

Here’s the “fun” part for affected customers:

  • Waiting in limbo for account corrections.
  • Hoping the reversal batch works—perfectly.
  • Potentially facing late fees or bounced payments in the meantime.

Banks want to avoid paying out “too much” in the fix, and processes are anything but quick.

Summing Up the Fix Process

  1. Identify all repeated transactions.
  2. Build a reversal batch job.
  3. Run tests—lots of them.
  4. Accept the fix after business checks.
  5. Apply—carefully!—to customer accounts.
  6. Communicate to everyone.

But Why Not More Modern Tech?

Because swapping out mainframes is like replacing all the bones in your body while running a marathon. Not impossible, but you’re more likely to trip and faceplant.


The Takeaway: Always Check Before Commit

So, what have we learned from this saga?

  • Banks aren’t invincible. The stability we count on is sometimes just a well-worn process away from chaos.
  • Batch jobs are fantastic—until human oops comes to play.
  • If you’re an IT pro, never run training in production. (Seriously, don’t.)
  • When stuff goes wrong on a mainframe, fixing it is like turning a cruise ship—slow and careful.

“Don’t apply your data set in wait state more than once.”


Jack’s Final Thoughts

I love mainframes. I really do. They’re the backbone of nearly every industry that moves massive amounts of money and data, from banking to logistics to retail. But when things go wrong, as they sometimes do, it’s rarely the machine’s fault. It’s process, oversight, or a simple “oops” that sets off a chain reaction.

And if you’re out there banging your head against the wall because your bank account’s gone wild, you’re not alone—the rest of us are right here with you, wondering why software that’s so boringly reliable sometimes gets spicy.


Where to Find More Jack’s Rants

“Don’t forget, you are one of us.”


A Light-Hearted Disclaimer

The content of this podcast (and this blog post!) is intended for entertainment purposes only. We humorously explore wild tech situations—don’t take offense! All stories are situations, not personal jabs. Approach tech with a sense of humor and an open mind. Listener (and reader) discretion advised!


FAQ

What should I do if my bank account gets hit by repeated transactions?

Contact your bank immediately. Take screenshots of all the affected transactions, and don’t panic—banks eventually fix the mistake, though it can take time.

Is my money safe on mainframes?

Despite the odd headlines, mainframes are still one of the safest, surest places for money. The odds of catastrophic failure are very, very low.

Why do banks still use mainframes?

Because they’re proven, reliable, and handle millions of transactions without breaking a sweat. Replacing them is risky and expensive.


Closing Rant: Don’t Blame the Machines—Blame the Process

Next time you hear about a banking “technical error,” just remember: somewhere, a tired IT person is running through logs, batch lists, and files, wondering why it’s always their night shift when the world catches fire.

We’re all in this together. Until next time—happy banking, and don’t forget to check your statements!


