200 Techs Using A Single Raspberry Pi: The Temporary Server That Failed
A few hundred field engineers. Real humans. With vans. And jobs. All coordinated by a device roughly the size of a coaster and powered by something suspiciously similar to a phone charger.
For months, it runs slow but flawlessly. Tickets dispatched. Routes optimized. Managers happy. The tiny silicon hero hums along in its noble plastic case, bravely pretending it’s not one ‘apt upgrade’ away from a crisis.
Listen now on Apple Music, Spotify, Deezer, YouTube, or wherever you get your panic attacks.

Raspberry Mystery: When Your Mission-Critical Platform Runs on a Pi
Welcome to the latest installment of IT Horror Stories with Jack Smith. Buckle in, because today’s true tale is a ride through the land of Shadow IT, confusing subcontractor realities, a “data center” you never want to see, and—yes—the production ticketing system powering 200 engineers from a single, lonely Raspberry Pi.
Table of Contents
- Introduction
- The World of Shadow IT
- A Regular Morning… Until It Wasn’t
- Outages, Denial, and the Sound of Chaos
- Field Engineers Hit a Wall
- Diving into the Data Center
- Where’s the Server?!
- Raspberry Pi Reveal: How Did We Get Here?
- Migration and Grown-Up Decisions
- Battle Scars and Lessons Learned
- The Final Words
- Merch, Support, and Where to Find Us
Introduction
Welcome back, tech fans and veteran sufferers of production nightmares! Jack Smith here. After our DevOps Royale with Cheese and Shadow IT Reports sagas, Philip (veteran guest and unwitting hero) joins me for another ride through IT gone deeply, deeply sideways.
“Things as usual does not sound really motivating.”
The tale we cover today could only happen in an enterprise so tangled in contractors and outsourcing that nobody can remember who actually works for whom. But let's not spoil the ending. Let's see how running a telecom's vital ticketing app on a hobby board became a reality.
The World of Shadow IT
Our protagonist, Philip, begins with that all-too-familiar setup: working as a contractor for a subcontractor of a “big name” telco.
- “We’re a telecommunications company, yeehaw.”
If you’ve set up a router and changed a few cables, you’ll recognize this: multiple companies in the chain, all responsible for different parts—the field services team (no end-user contact), another for installing backend infrastructure, yet another for direct customer complaints.
It’s a structure almost designed for confusion. And baked into this confusion is an attitude some of you will find too familiar: “Cowboy IT.”
- Little documentation
- Everything’s a little rough-and-ready
- Updates? Sure, but never on a Friday (“when there’s happy hour. Priorities!”)
“Isn’t the wonderful world of outsourcing overly, needlessly complicated? Oh, it’s overly, needlessly complicated by design.”
A Regular Morning… Until It Wasn’t
Imagine: You walk in early, coffee in hand, ready to start another day of development work. Out front, the help desk is already looking twitchier than usual.
The Calm Before the Storm
At first, it’s just a few more calls than normal. Some low grumblings:
- “The platform is down.”
- “No, it’s not—just slow as usual. Be patient.”
This platform, as Philip explains, is nothing short of business critical. The field engineers—about 200 of them—use it to log jobs, record what work they’ve done, and plan their days.
Now, field engineers expect slowness. We're talking minutes-per-page loads, slower than a 56k modem, to the point where:
“You could almost prepare a full English breakfast while waiting for it to load.”
So, a bit of slowness? Normal. Total outage? That’s a new one.
Outages, Denial, and the Sound of Chaos
A crash landing unfolds at field services HQ.
Symptoms
- NGINX error pages start showing up (for those who know: 404s and 504s coming from the proxy).
- Some can’t get any connection at all (machines can’t be pinged).
- “But it worked yesterday and nothing changed…” Famous last words.
“Help desk people were frantically running around like headless chickens.”
No coordination. No ownership. Just panic. Philip, being a contractor and not on-call for infrastructure, stands aside—until even he’s sucked into the spiral.
Field Engineers Hit a Wall
And a hard one at that.
Let’s recap the structure:
- The Platform: A web app for field engineers, used daily.
- 200+ Users need it for basic job functioning.
- Ticket assignment, day planning, status updates: All in here.
When it goes down, the field stops.
Technically, it was always slow (multi-minute-per-page loads), but total unavailability is on a whole new level.
Diving into the Data Center
This is where the story gets, well… horrifying.
Step 1: The Armchair Investigation
- The IP can’t be pinged.
- No one quite knows where the server for the platform actually sits.
- “Where is this machine located? Is it cloud? Is it in a datacenter?”
“Apparently the data center was local in the sense that it was in the same building. Oh, that’s a good one.”
Step 2: The Quest for the Holy Key
- Data center isn’t in a locked, sanitized server room.
- In fact, the "data center" is basically a utility closet behind a receptionist's desk, guarded by Linda, the "Gatekeeper High Priestess of the Key Cabinet."
- Security by obscurity, held together by a single, unassuming key.
“The only thing stopping you from entering was basically just Linda, wielder of the one key to rule them all.”
No badge, no biometrics—just Linda.
Where’s the Server?!
With key in hand, the team enters what’s arguably the most depressing “data center” in recorded history.
Scene Setting
- Two half-empty racks, filled with old, dusty hardware.
- Some servers old enough to drive (seriously, a driver's license would not have been out of place).
- Labeling is minimal, cable management is a memory, and “abandon hope, all ye who enter here.”
“Instead of a data center, we found ourselves in a glorified storage closet with commitment issues.”
The team starts looking for their target:
- Labels? Some exist, most don’t.
- Reference files? An ancient Excel sheet, stored on an uncared-for network share.
- Hope? Fading quickly.
After an hour, the truth emerges: The spot in the rack that should hold the main ticketing server is… empty.
Just dangling cables:
- One patch cable
- One micro-USB (!) cable
If you know hardware, you know what’s coming next.
Raspberry Pi Reveal: How Did We Get Here?
Drumroll, please. The “server” was a Raspberry Pi—not the 4, not even the 3, but a first-generation model.
How Did This Happen?
- Some enthusiastic engineer, excited by the Pi's popularity as a home-automation tool, had deployed the ticketing system for 200 engineers…on a Pi.
- When the engineer wanted to experiment with the Pi at home, he took it—and its SD card—home.
- No backups (other than whatever was on the SD card).
- The rest of the team? No idea. The Pi was production.
“So if I understand correctly, you had 200 engineers on a first or second generation Raspberry Pi doing your ticketing for all your field services?”
Yes. Yes, you’re reading that right.
Why Was Performance Terrible?
- The Pi is not a production-grade server.
- The only thing that kept it semi-alive was—possibly—a helpful NGINX proxy, caching what it could.
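For the curious, a reverse proxy really can mask a struggling upstream this way. A minimal sketch of what such a cache might have looked like, purely illustrative since the episode never shows the actual config (the upstream address, cache zone name, and timings are all invented):

```nginx
# Hypothetical NGINX config: cache responses from a slow upstream so
# repeat requests are served from disk instead of hitting the Pi again.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=ticketing:10m
                 max_size=1g inactive=60m use_temp_path=off;

server {
    listen 80;

    location / {
        proxy_pass http://192.0.2.10:8080;   # the "server" (a Pi, as it turned out)
        proxy_cache ticketing;
        proxy_cache_valid 200 10m;           # reuse cached pages for 10 minutes
        proxy_cache_use_stale error timeout; # keep serving stale pages if the backend dies
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```

Note `proxy_cache_use_stale`: it lets NGINX serve old cached copies when the upstream errors out, which is exactly the kind of thing that keeps a dying Pi looking "merely slow" instead of "down."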
PS: Data? On the SD Card
- SQLite database for the whole thing, running atop an SD card (not exactly enterprise storage).
- By the time the outage crisis reached its peak, the engineer who’d taken the Pi home still had the SD card.
Luck, Not Skill
- Luckily, he hadn’t reused/formatted it.
- “Yeehaw IT” with “just enough caution.”
“The guy wasn’t incompetent. He just… didn’t realize his test Pi was powering the field service operation.”
Migration and Grown-Up Decisions
Eventually, as the firefight died down and the cause was discovered, the team swung into action.
What Actually Happened?
- The data was just barely recovered (from the original SD card).
- The ticketing application was moved to a proper VM—one with:
- Regular backups
- Postgres as the database (no more SD card SQLite!)
- Proper server-grade hardware
Side Effects
- Finance, awakening to the existential threat, signed off on a “decent box” for ALL legacy applications (some of which had been running on hardware older than many employees).
- At last, the ancient servers whined their last complaints and were powered down for good.
“Nothing says production-grade infrastructure, hosting an app for 200 field engineers, like a €35 hobby board normally used to teach kids Python.”
Battle Scars and Lessons Learned
Happy Ending?
Mostly. The field engineers, newly blessed with sub-minute page loads, were thrilled.
Key Lessons
- Never trust “Shadow IT”: small tests can, and often do, grow into mission-critical systems.
- Document, audit, and verify your infrastructure regularly: if you don’t know where your servers live, neither does anyone else.
- Production is not for hobby boards: Raspberry Pis are magic for home automation, quick dev, and network monitoring—not dispatch platforms.
- Check your closets:
  - Do you know what’s running in that broom cupboard?
  - Is there a Linda with “the key to rule them all”?
- Don’t wait for a disaster to justify an upgrade.
Memorable Moments
- The look on people’s faces realizing that their “data center” was protected only by Linda and a filing cabinet key.
- A legacy platform clinging to life with only NGINX’s cache, held together with duct tape and hope.
Techie Footnotes: Okay To Use Pis… Sometimes
Let’s get something out of the way: Raspberry Pis are awesome tools. But:
- Use them for network probes, sensors, personal projects.
- Maybe as a VPN jump-box for a tiny remote site.
- Never, ever, use them as your production line-of-business platform.
The Final Words
What can we take away from this winding tale? Both a warning and a wink:
“For the love of God, don’t host production applications on a Raspberry Pi.”
Keep documenting. Audit your “data center” (even if it’s a glorified closet). And if Linda knows more about your critical infrastructure than the CTO, it’s time for a company meeting.
Pro Tip:
Give your team a checklist (update/patch), and never assume something mission-critical is running on the right hardware. Because somewhere, someone is still running a business on a Pi.
Merch, Support, and Where to Find Us
If you loved this tale (or just want to make sure the next Pi horror story isn’t about your company), support us!
- Shop for IT Horror Stories merch
- Buy us a coffee over at Ko-Fi
- Change your shop region at the bottom of the page if you’re in the US
Find us wherever you get your podcasts: Spotify, Apple Music, YouTube, Deezer, and more.
Connect with us on Instagram, TikTok, Facebook, LinkedIn, BlueSky, and Mastodon.
One Last Thing
Before you finish that cup of coffee, ask yourself: “What would I find if I opened the ‘data closet’ at our office?”
Thanks for reading, and remember: You’re one of us.
