Backs Against The Wall - ACM-VIT's Biggest (almost) Failure

You don’t forget some stuff. You just don’t.
Especially if the “stuff” we’re talking about is umpteen Discord pings from 800 event participants, all saying some version of “the app is crashing” and “I can’t log in”.

If you understood what’s going on, you’re one of the 50-odd people who lived through the trauma of 27th September 2024 - day 0 of ACM-VIT’s flagship event, Cryptic Hunt 3.0.
If you don’t understand, I envy you - but allow me to give you some context (I’ll keep it to the point, I promise). Vellore Institute of Technology, one of India’s most reputed engineering universities, holds its annual tech fest, graVITas, in September/October. Many clubs and chapters organise events during it - hackathons, ideathons, various tech competitions - but a select few organise the “premium events”, premium enough to be mentioned in The Hindu newspaper. Our chapter, ACM-VIT, organises one of them - our legacy scavenger hunt, which we call “Cryptic Hunt”.
What is Cryptic Hunt?
800 participants. 36 hours. A hunt across campus.
In simple words - it’s a scavenger hunt across the huge campus we know as VIT, but with a technological twist. We developed our own mobile app (from scratch - and yes, the tech details will follow). Our dedicated research team builds a set of questions related to cryptography and network security. Answering these questions points you to a particular location on campus, where we have discreetly pasted QR codes. Scan the correct code for a question in our app, and voila - you get points and move up the leaderboard.
Pretty simple, right?
Let’s have a look at a question from last year:
In the heart of a vibrant kingdom called Diversia, where fiery Blazers clashed with serene Tranquils, chaos reigned as arguments over the annual festival escalated into fierce brawls. Each side, convinced of their own superiority, filled the air with shouts and despair. But when a wise old woman shared the tale of two mountains--one jagged and bold, the other smooth and gentle--their hearts began to shift. Realizing that their differences were not flaws but essential parts of a beautiful whole, they merged their visions into an unforgettable celebration, where raucous laughter intertwined with soothing melodies, revealing that the true magic of Diversia lay in the harmony of its polar opposites.
The solution to this, along with all our other questions, is available on our GitHub repository - we make our solutions public once an event ends ;)
But hey, that’s all the context I can give you - now begins the real story. I’ll split it up into sections, each being technical or non-technical - if you’re here for the drama, for the tea, feel free to skip the tech part. However, if you’re a nerd like almost all of us are, the tech we’ve implemented is quite cool - do check it out.
The grand tale of Cryptic Hunt 2024 shall be divided into the following chapters:
The Planning [non-tech]
The System Design [tech]
The Implementation [tech]
The D-Day [non-tech]
The Downfall [non-tech]
The Resilience [tech (mostly?)]
The Post-Mortem [non-tech]
The Planning [non-tech]
It all started during the summer break - just a group of 15-odd kids sitting on Google Meet calls, whiling away the odd hours at night. We were the new senior core of ACM-VIT, and at that time we thought we were the most important people in the world - “make ACM great again” was pretty much our slogan. Reminds you of a certain personality in world politics, maybe? Yeah, there were red flags quite early on, I guess.
The first order of business, however: graVITas was approaching, and we needed to make sure Cryptic Hunt was a grand, grand success. We needed to show everyone that we were the real deal. Who was watching? No one, really, but hey - let some kids be happy thinking they matter.
Utopian world. Don’t you wish we lived in one? A world where everything went according to plan, everything was ideal. But nah, utopia is only achievable as an album, not in implementation. Yet, everyone plans an ideal situation. An ideal approach. An ideal timeline, maybe.
So did we.
And when I look at it now, 12 months later, I can’t help but laugh. In all fairness, it isn’t a bad schedule in the least - extremely achievable. It’s just funny how absolutely NOTHING went as per schedule - the “plan” going absolutely nuts from the very beginning.
Here it is, the “ideal plan” we had in mind (as of July 2024):
| Sr No | Task | Start Date | Deadline |
| --- | --- | --- | --- |
| 1 | Introducing CH officially to our Junior Core | Jul 24, 2024 | |
| 2 | Design starts work | Jul 28, 2024 | |
| 3 | Sponsorship brochure completion (design) | Aug 1, 2024 | |
| 4 | App designs finalized | Aug 10, 2024 | |
| 5 | Tech starts work | Aug 11, 2024 | |
| 6 | Frontend and backend individually completed | Aug 24, 2024 | |
| 7 | Begin FE + BE integration | Sep 1, 2024 | |
| 8 | Complete integration + testing | Sep 15, 2024 | |
| 9 | Purchase Apple Developer license | Sep 15, 2024 | |
| 10 | Push updates to Play Store + full app to App Store | Sep 16, 2024 | |
How much of this actually went according to schedule? Good question. Good, good question.
There are a lot of reasons why we weren’t able to achieve much of what we set out for, and since I’m being completely transparent here, I won’t hold anything back. It wasn’t just a lack of skill - it was a lack of effort, a whole lot of politics, constant ego clashes and, well, some bad luck as well.
Coming back to the plan - the tech part was quite simple. Everyone takes up certain tasks, or “issues”, codes them, and raises a pull request from their fork to our official GitHub repository. We had the tech split into two - backend and frontend (app) - and each division had one final boss, the best we had in that field in our senior core, who was supposed to give the final approval before any piece of code was merged.
This was probably our first mistake. Not the tech division, don’t get me wrong, but the appointed final bosses. Because what ensued was a massive cold war between frontend and backend, one on such a scale that it threatened the very existence of our entire event. Okay, maybe that’s a bit of an exaggeration, but it was pretty bad, to say the least - because when you’re way behind your deadlines and find out something major is broken, the last thing you want to hear is:
Mr. Frontend: “I’ve made sure the app is perfect. Whatever issue is there is in the backend, tell them to figure out what the hell is wrong with their sh*t.”
Mr. Backend: “The backend is made foolproof, I can give you my 100% guarantee on that. Ask frontend to fix their stupid app, not me.”
Oh boy.
The System Design [tech]
Alright, time for the nerdy stuff. If you're still reading, you're either genuinely interested in our tech stack or you're procrastinating on something else (no judgment, I've been there).
We built our backend using Go Fiber - fast, lightweight, and honestly just fun to work with. For our database, we went with CockroachDB, and before you ask - yes, the name is as weird as it sounds, but hear me out. It's a distributed SQL database with strong consistency guarantees, and it scales horizontally like a dream. Plus, it survived our chaos (literally), so it earned some respect.
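If you want a feel for how that stack wires together, here's a minimal sketch - Go Fiber and CockroachDB are what we actually used, but the pgx driver choice, the route, and the table schema below are illustrative assumptions, not our real code:

```go
// Minimal sketch: a Fiber server backed by CockroachDB.
// CockroachDB speaks the Postgres wire protocol, so pgx works as-is.
package main

import (
	"context"
	"log"
	"os"

	"github.com/gofiber/fiber/v2"
	"github.com/jackc/pgx/v5/pgxpool"
)

func main() {
	pool, err := pgxpool.New(context.Background(), os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatal(err)
	}
	defer pool.Close()

	app := fiber.New()

	// Hypothetical endpoint: top 10 teams on the leaderboard.
	app.Get("/leaderboard", func(c *fiber.Ctx) error {
		rows, err := pool.Query(c.UserContext(),
			`SELECT name, score FROM teams ORDER BY score DESC LIMIT 10`)
		if err != nil {
			return fiber.ErrInternalServerError
		}
		defer rows.Close()

		type entry struct {
			Name  string `json:"name"`
			Score int    `json:"score"`
		}
		var board []entry
		for rows.Next() {
			var e entry
			if err := rows.Scan(&e.Name, &e.Score); err != nil {
				return fiber.ErrInternalServerError
			}
			board = append(board, e)
		}
		if err := rows.Err(); err != nil {
			return fiber.ErrInternalServerError
		}
		return c.JSON(board)
	})

	log.Fatal(app.Listen(":8080"))
}
```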
Everything was containerized and deployed on Google Cloud Run. Why? Because we're lazy developers who don't want to manage servers, obviously. Auto-scaling, load balancing, pay-per-use - it was supposed to be our silver bullet. Spoiler alert: it wasn't, but we'll get to that trainwreck in a bit.
Authentication was handled through Firebase with Google Sign-In as the primary method. Static assets like question images and media were stored in Google Cloud Storage buckets - nothing fancy there, just good old reliable cloud storage.
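For the curious, verifying those Google Sign-In sessions on the backend looks roughly like this - a hedged sketch using the Firebase Admin SDK for Go, where the middleware shape, header parsing, and locals key are my illustration rather than our exact implementation:

```go
// Sketch: Fiber middleware that verifies the Firebase ID token the app
// sends with each request. Header name and locals key are illustrative.
package middleware

import (
	"context"
	"strings"

	firebase "firebase.google.com/go/v4"
	"github.com/gofiber/fiber/v2"
)

// FirebaseAuth builds a middleware from an initialised firebase.App.
func FirebaseAuth(app *firebase.App) (fiber.Handler, error) {
	authClient, err := app.Auth(context.Background())
	if err != nil {
		return nil, err
	}
	return func(c *fiber.Ctx) error {
		// Expect "Authorization: Bearer <Firebase ID token>".
		idToken := strings.TrimPrefix(c.Get("Authorization"), "Bearer ")
		token, err := authClient.VerifyIDToken(c.UserContext(), idToken)
		if err != nil {
			return fiber.ErrUnauthorized
		}
		// Expose the Firebase UID to downstream handlers.
		c.Locals("uid", token.UID)
		return c.Next()
	}, nil
}
```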
The mobile app was built with Expo React Native. One codebase for both iOS and Android - the dream of every developer who's tired of maintaining separate native apps. We cached question data locally for better performance, and used Firebase Cloud Messaging (FCM) for cache invalidation. Whenever new questions went live, FCM notifications would trigger cache clears to make sure everyone got the latest data. In theory, this was brilliant - fast performance with real-time updates. In practice... well, let's just say FCM and we had some trust issues.
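The invalidation ping itself is nothing fancy - just a data-only FCM message that the app reacts to by clearing its local cache. Something along these lines (a sketch: the data key is made up, and in reality we looped over per-user tokens stored in our database, which becomes painfully relevant later):

```go
// Sketch of the cache-invalidation ping: a data-only FCM message telling
// the app to drop its cached questions and refetch.
package notify

import (
	"context"

	firebase "firebase.google.com/go/v4"
	"firebase.google.com/go/v4/messaging"
)

func InvalidateQuestionCache(ctx context.Context, app *firebase.App, token string) error {
	client, err := app.Messaging(ctx)
	if err != nil {
		return err
	}
	// No Notification payload: the app handles this silently in the background.
	_, err = client.Send(ctx, &messaging.Message{
		Token: token,
		Data:  map[string]string{"action": "invalidate_question_cache"},
	})
	return err
}
```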

The Implementation [tech]
The journey started innocently enough. Our design team huddled together in what I can only describe as the most intense brainstorming session of our lives. After cycling through themes that ranged from "cyberpunk dystopia" to "medieval fantasy" (don't ask), we settled on Minecraft. Yes, Minecraft. Look, it worked, okay? The blocky aesthetic was perfect for a tech event, and everyone immediately got the vibe.
With the creative direction locked in, we split into two teams - backend and app developers. Classic divide and conquer strategy. The backend team, bless their souls, documented every single API endpoint they planned to build. This wasn't just good practice; it was survival. The app team needed to know exactly what they'd be working with, and nobody had time for "oh wait, I changed that endpoint yesterday" surprises.
Both teams worked in parallel, which sounds organized and professional, but was actually just organized chaos. Daily standups that went on for an hour, messages at 2 AM asking "did you push the auth fix?", and the occasional panic when someone realized their local changes would break everything else.
Once the core features were done, both teams came together for the anti-cheat implementation. This was actually pretty cool - making sure both sides of the system worked in perfect harmony to catch cheaters. Spoiler: it worked... mostly.
Deep breath. Alright, storytime. And not the fun kind.
Mobile App Challenges
Let's start with the app stores, shall we? You know how they say "it works on my machine"? Well, it worked on our phones, but the app stores? They had opinions. Strong opinions.
Sign-in inconsistencies - We kept getting rejections from both Google Play and Apple's App Store because our sign-in flow was apparently "inconsistent". What does that even mean? We still don't know, but after the 4th rejection, you just start making random changes and hoping for the best.
Permission handling issues - iOS is particularly picky about why you want camera and location permissions. Our initial reason strings were apparently too vague. "We need your camera" wasn't good enough - they wanted to know WHY we need it, HOW we'll use it, and probably our grandmother's maiden name too. Fair enough, I guess.
Apple's mandatory sign-in requirement - Here's a fun one. Apple decided that if you offer Google Sign-In, you MUST also offer "Sign in with Apple". But our university ecosystem was built around Google. Everyone had Google accounts. Nobody wanted Apple sign-in. But Apple didn't care. So we had to implement it anyway, for exactly zero users who wanted it.
Legacy Firebase client issues - This one almost killed us. We were reusing older Firebase authentication clients that were tied to outdated SHA keys. Everything worked fine in testing, but on the actual event day? Complete authentication failure. We spent the entire final day scrambling to fix this while 800 participants waited. Not our finest moment.
Backend Challenges
If you thought the app issues were bad, wait till you hear about the backend disasters.
Unreliable FCM delivery - FCM doesn't guarantee 100% delivery. Fair enough, networks can be unreliable. But our system was designed to retry failed sends automatically. So when FCM inevitably failed to deliver some notifications, it triggered retries, which prolonged the table locks, which made everything worse. It was a beautiful cascade of failure.
Escalating infrastructure costs - And here's the cherry on top of this disaster sundae. In our desperation to keep the system running, we started throwing money at the problem. Increased Cloud Run instance limits. Scaled up everything. "Just make it work, we'll worry about the bill later."
The bill came later. ₹36,000 later, to be exact. That's about $430 for our international readers, which might not sound like much, but remember - we're college students organizing a campus event. That was more than our entire budget for prizes, food, and everything else combined. Our treasurer nearly had a heart attack when they saw the Google Cloud invoice.
But hey, at least the system was running, right? Right?
The D-Day [non-tech]
It finally was here. The day we had been working so hard for.
Day 0 of Cryptic Hunt 2024 was here.
The app was live on the Play Store, and iOS users could access it via TestFlight (we spent 10k, dude, had to do something). The backend was live, the database was up, the admin app was in place.
Oh wait - the admin app. We thought it was in place. When I entered the auditorium, there was a frenzy. I asked what’s up, and that's when I found out - the admin app simply wasn’t ready. “It’s okay, calm down, it’s just 800 people waiting to play your game. That’s not much. Calm down.” - that’s what I was telling myself constantly (horrible advice, in hindsight). The admin app was responsible for linking all questions to the correct QR codes, and all our QR pasting teams were ready to get the codes up across campus, but they could do nothing without the admin app. So we put them on standby, and got to work.
First things first, I needed to get an update on the status of the admin app. I called up my friend who was working on it, in quite audible panic. “It’s more or less done, I have all my changes on local - gonna push it in 5 minutes”, he said - and true to his word, he pushed all his changes to GitHub (after 55 minutes). But hey, better late than never - we had a working admin app. QR team got into action and put up the codes ASAP, while on-site management handled the crowd and got them seated for the opening ceremony.
The opening ceremony started, and was the only thing that happened without any major fumbles. Towards the end, they showed the links for the participants to download our official Cryptic Hunt App and register themselves and their teams on it.
And that’s when the downfall began.
The Downfall [non-tech]
They say, “You know the good part about hitting rock bottom? There is only one way to go, and that’s up.” I want to know who this “they” is - because boy, oh boy, does the rock go bottom-er.
- “Dude, it’s crashing.”
“Huh?”
- “The app, it’s crashing.”
“Nah, must be some net issue. Just tell that guy to restart and all will be good.”
- “Dude. It’s crashing for everyone. NOBODY can log in.”
That was the entire conversation between me and my friend who was handling management. At first, I thought “eh, it can’t always be completely smooth, must be the extra load”. Oh, innocent little boy - you had no idea how wrong you were.
We opened the GCP logs - and what greeted us was a horrendous sight. A bloodbath of HTTP 500 errors, with metrics worse than the great financial market collapse of 2008. What? How? Why? No idea - but we still weren’t very demotivated. We’d debugged through stuff before, no biggie - so we got to work on some root cause analysis.
Here’s when management stepped up. They handled hundreds of students in the auditorium, each shouting their problem out loud. They calmed the ruckus. They settled the tension. While we sat and figured out why the heck our onboarding flow kept crashing, management were on their feet - collecting participant names and emails, their teams, their teammates’ emails - everything on pen and paper - just so that we could cut straight to action once tech figured things out.
But tech - were we figuring stuff out? Uhhhhhh, not really.
We were completely clueless as to what was causing the issue. What we had pinpointed was this: all the requests to create or join a team were timing out, because the users table in our database was locked - i.e., it couldn’t be written to while it was undergoing some sort of change. However, rather than being locked for a tiny fraction of a second (as it ideally would be), it seemed to be permanently locked, so all the create/join-team requests just waited in a queue for the locked table and eventually timed out.
But what was causing this issue? We had no flipping clue. It’s not often that I’ve felt this helpless. Usually there’s someone who knows what is going on when something messes up. Someone who has been in a niche situation like that. Someone who could be our Stack Overflow. But that day, everyone was equally clueless. The board, the senior core, the junior core - everyone just had a massive question mark on their faces.
We turned to AI. We did something only the most desperate (or the most stupid) devs do - fed entire files of code into ChatGPT and asked it, “oh lord, please tell us why our users table is permalocked”. And GPT responded. We were shocked - how did we miss something this trivial? So many tech minds, and all of us missed something this small? It was unbelievable. We had never added transaction rollbacks to our database querying code. That’s it. No tx.Rollback() call. That’s literally it. So I got to action - found the few functions with a missing rollback, added it, and pushed. Mr. Backend hit redeploy on the Google Cloud Console - and we watched the code reach prod. I opened my phone, opened the CH app, and tried to join a team. It worked. A hundred students wearing ACM t-shirts across the auditorium breathed a very audible sigh of relief. Management made the announcement to the participants - we’re good to go guys, let’s get this event started.
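For the record, the change we pushed was essentially the textbook deferred-rollback pattern - sketched here assuming pgx-style transactions and made-up table names, not our literal diff:

```go
// The gist of the fix: every transaction gets a deferred rollback, so an error
// or early return can never leave locks hanging.
package teams

import (
	"context"

	"github.com/jackc/pgx/v5/pgxpool"
)

func joinTeam(ctx context.Context, pool *pgxpool.Pool, userID, teamCode string) error {
	tx, err := pool.Begin(ctx)
	if err != nil {
		return err
	}
	// The missing line. Rollback after a successful Commit is harmless,
	// but it releases everything if any statement below fails.
	defer tx.Rollback(ctx)

	if _, err := tx.Exec(ctx,
		`UPDATE teams SET member_count = member_count + 1 WHERE code = $1`, teamCode); err != nil {
		return err
	}
	if _, err := tx.Exec(ctx,
		`UPDATE users SET team_code = $1 WHERE id = $2`, teamCode, userID); err != nil {
		return err
	}
	return tx.Commit(ctx)
}
```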
And it crashed. Again. Bloody GPT, knew we shouldn’t have relied on it.
Oh my God. The auditorium was exploding with angry shouts from the participants, yet the silence felt deafening. It felt like an implosion of sorts. Suddenly it felt like the event was screwed, big time.
We were told to empty the auditorium, since we had used up our allotted time (which was only meant for the opening ceremony). The participants were livid. Most of our team was involved in calming them down, assuring them that the event would start soon and that they’d be kept updated through our Discord. As the participants left, and most ACM members with them, the tech team sat outside the auditorium, on the floor, racking our brains - desperately trying to find any small error. The ACM team had been assigned a control room in an SJT smart classroom, so it was just the four of us and our dear chairperson - who was somehow quite calm and composed. Have to give it to him - when everyone was panicking and a lot of his reputation was at stake, the guy managed to stay completely calm and keep his trust in us, telling us to focus on getting the app functional while he handled whatever curveballs the graVITas team threw at us.
This is where I made a big mistake. Mr. Backend had insisted on using a serverless backend, connected to a CockroachDB (PostgreSQL-compatible) instance. We had our doubts about it. He assured us he had enough experience with it - the instances would scale up whenever the load was high. It wouldn’t crash. It would work. He spent hours convincing us it would. And when everything was in a state of chaos and panic - when we should have taken his advice, sat down, and traced our steps through the code to see what was causing the issue, when we should have been working as a team - I flipped. Mr. Frontend and I turned on the backend guy and started ranting about how this was all down to the serverless architecture of our backend. It simply wasn’t able to handle the load, we said. He disagreed, but we didn’t care at that moment.
We convinced the others, and soon most of the team was trying to find server-based alternatives for hosting our backend. We started finding possible issues with our serverless setup, nitpicking at everything. At one point, I noticed that the maximum number of active connections to our backend was capped at 100. “Voila!”, I remember thinking. I’d found the issue. Only 100 people could connect to our backend at once. Everyone started celebrating - we found the issue! A small oversight on the cloud console. No biggie; we instantly scaled it up to 1000 and redeployed. The metrics turned green again, the cheers grew louder. I felt like the most important person in the room for a while.
Why did I feel that only for a while? Because within 5 minutes, it crashed again.
Have you ever heard an entire classroom cuss out loud at once? Yeah, that definitely didn’t happen then.
We wasted more than 5 hours trying to switch to a server-based, locally hosted alternative for our backend - when we shouldn’t have doubted our most experienced guy in the first place. Sure, his entire argument was just “trust me bro”, but we should’ve done our due diligence on whether serverless was actually the issue before assuming outright that it was.
It was soon 8:30PM. Cryptic Hunt 2024 was officially cancelled for the day. The events team was LIVID - they were facing a lot of backlash for the failure of our event, and we were being given (very well deserved) flak for it. The ACM core went back to their rooms, awaiting further instructions. Our chairperson, along with our research lead and a senior core member who was part of the graVITas events committee, headed to the fest control room to try and calm matters down there.
I remember leaving SJT all alone, rain pouring down on me, eyes fixed on the floor as I walked. That walk back will always stick with me - it was a low on so many levels, as if a ball of dark energy was wrapped around me the whole way. So many thoughts, so many mental conflicts. “Is any of this even worth it?”, “Should have just left this chapter long ago”, “What’s the bloody point” - everything was hitting at once. Quite a few of us had also been dealing with losses and issues in our personal lives in the build-up to Cryptic Hunt 2024, so that didn’t help either. It really wasn’t that bad, don’t get me wrong. It’s just an event. In a college fest. A damn college fest. I know it all sounds over-exaggerated - but at that moment, after 12+ hours of constant debugging in prod with absolutely zero progress, any developer would’ve felt like absolute shit.
At the same time, three of our very own were waging a different war - one to make sure our event didn’t get cancelled (cancelling would have saved us a lot of pain, sure, but imagine telling 800 people they won’t be refunded their ₹250 + GST for no rhyme or reason). Our chair, our research lead, and a senior core member (who was part of the events committee) were handling negotiations with the graVITas control room. If you want even a mild sense of how phenomenally grave the situation was, I’ll let you in on something the senior core member in question told me:
“For the first time, I saw our chair sit on the footpath - and he looked… dejected? He looked as if he was about to cry. The same person who had been calm throughout the day, who was the pillar of support we all needed - he was seated there all alone, facing the wrath of the committee, just thinking ‘why did we put in this much effort, if it was all to go in vain’ - and that hit me hard.”
The Resilience [tech (mostly?)]
I crashed on my bed somewhere around 8:45PM - after more than 12 hours of useless debugging. I woke up at 9:15PM to a call from Mr. Frontend himself.
“Come to R3xx, we’re sitting and figuring out the problem,” he said. I just wanted some sleep, man. This seemed impossible to figure out anyway - what was the point? Yet, despite my body telling me no in 15 different ways, I got up and dragged myself to our senior’s room.
The atmosphere there wasn’t very bright either, but there definitely was some hope. It was at this moment I saw the entire tech community of VIT (not ACM, mind you - the entirety of VIT) come together. In hindsight, it was something beautiful - all these big tech chapters having insane amounts of not-so-friendly friendly competition amidst one another, always trying to one-up the others - yet when one chapter was drowning, they all came together to help. We had Anuj Parihar from GDSC, Pratham Mishra from CSI, Kaushal Rathi from CSI alongside ACM board, and 3 of us senior core members - all sitting in one R block 3-bed room - with one common goal in mind: save the Cryptic Hunt.
I’ll deviate from topic for a bit here, because this is a good moment to express my genuine gratitude towards all the seniors who came to our aid that night. From myself, and the entirety of ACM, we genuinely appreciate the camaraderie and brotherhood you guys showed that night - while y’all could have very well enjoyed watching us sink, y’all chose not to. You guys are the biggest reason we could save the event, and it means a lot to all of us.
Back to topic, though. It was no longer just us - we now had seniors working our case as well, and many of them were returning from internships in big tech firms. Suddenly things seemed to fall into place. Suddenly issues were getting clearer, actual problems were being solved.
The seniors identified the problem almost immediately. What was it?
FCM tokens. Firebase Cloud Messaging tokens. Four innocent words that would haunt our dreams for months to come.
You see, when you're building a real-time system that needs to notify users about cache invalidation - essentially telling their apps "hey, new questions are live, refresh your local data" - FCM is your go-to solution. It's reliable, it's fast, it's used by literally millions of apps worldwide. What could go wrong?
Everything. Absolutely everything.
Here's what we did - and I want you to cringe along with me as I explain this architectural nightmare. All FCM tokens for our 800+ participants were stored in a single database table. The same table that stored user information, team memberships, scores - basically everything. Our beloved users table that was the heart and soul of our entire application.
Now, every time we wanted to send push notifications for cache invalidation (which happened every time we published new questions or made updates), our backend would:
1. Query the users table to get all the FCM tokens
2. Start a database transaction
3. Send notifications to all ~1000 tokens, one by one
4. Update the delivery status back in the database
5. Commit the transaction
Sounds reasonable, right? WRONG.
Here's the kicker - the entire FCM sending process was wrapped inside a database transaction. And during a transaction, the table gets locked. Not for a few milliseconds like it should be, but for the ENTIRE duration of sending 1000+ push notifications.
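To cringe along properly, here's roughly the shape of what we had - reconstructed and simplified, with illustrative table, column, and function names:

```go
// What we had, reconstructed: tokens are read, notifications are sent, and
// delivery status is written back - all inside ONE transaction on users.
package notify

import (
	"context"

	"firebase.google.com/go/v4/messaging"
	"github.com/jackc/pgx/v5/pgxpool"
)

func broadcastNewQuestions(ctx context.Context, pool *pgxpool.Pool, fcm *messaging.Client) error {
	tx, err := pool.Begin(ctx)
	if err != nil {
		return err
	}
	defer tx.Rollback(ctx)

	rows, err := tx.Query(ctx, `SELECT id, fcm_token FROM users WHERE fcm_token <> ''`)
	if err != nil {
		return err
	}
	type target struct{ id, token string }
	var targets []target
	for rows.Next() {
		var t target
		if err := rows.Scan(&t.id, &t.token); err != nil {
			rows.Close()
			return err
		}
		targets = append(targets, t)
	}
	rows.Close()
	if err := rows.Err(); err != nil {
		return err
	}

	// ~1000 network round trips to FCM, each followed by a write to users -
	// the table stays locked for the whole loop (minutes, not milliseconds).
	for _, t := range targets {
		_, sendErr := fcm.Send(ctx, &messaging.Message{
			Token: t.token,
			Data:  map[string]string{"action": "invalidate_question_cache"},
		})
		if _, err := tx.Exec(ctx,
			`UPDATE users SET last_notified_ok = $1 WHERE id = $2`, sendErr == nil, t.id); err != nil {
			return err
		}
	}
	return tx.Commit(ctx)
}
```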
Do you know how long it takes to send 1000 FCM notifications? About 2-3 minutes on a good day. During those 2-3 minutes, our users table was completely inaccessible for writes. Every login attempt, every team join request, every score update, every single write operation that our app needed to perform was just... waiting. Queuing up. Timing out.
Picture this: User opens the app, tries to join a team. Backend tries to write to the users table. Table is locked because FCM is busy sending notifications. Request waits. And waits. And waits. Eventually times out with a 500 error. User tries again. Same thing. Multiply this by 800 frustrated participants all mashing the "Join Team" button simultaneously.
But wait, it gets worse! (I know, I know, how is that even possible?)
FCM doesn't guarantee 100% delivery success. Network hiccups, invalid tokens, devices that are offline - stuff happens. And what did our brilliant system do when FCM failed to deliver a notification? It retried. Automatically. Which extended the transaction duration. Which kept the table locked for even longer. Which made more requests timeout. Which made more users retry. Which created more load. Which made FCM fail more often. Which triggered more retries.
It was a beautiful, perfectly orchestrated cascade of failure. A masterpiece of how NOT to design a system.
The seniors took one look at our database logs, saw the transaction duration metrics, and immediately knew what was happening. "Your FCM implementation is locking your users table," Anuj said, as casually as someone pointing out that the sky is blue. "Move it outside the transaction."
That's it. That was the fix. Move the FCM calls outside the database transaction. Let the transaction handle just the database operations, and let FCM do its thing separately. If notifications fail, who cares? Cache invalidation is a nice-to-have, not a must-have. Users can manually refresh if needed.
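Sketched under the same assumptions as the snippet above, the fixed version reads something like this - the database work is a quick read, and the FCM sends happen afterwards, completely decoupled:

```go
// The fix: no long-lived transaction. Grab the tokens, get out, then do
// best-effort sends. Failures are just logged - cache invalidation is a
// nice-to-have, and the user can always refresh manually.
package notify

import (
	"context"
	"log"

	"firebase.google.com/go/v4/messaging"
	"github.com/jackc/pgx/v5/pgxpool"
)

func broadcastNewQuestionsFixed(ctx context.Context, pool *pgxpool.Pool, fcm *messaging.Client) error {
	// 1. Short-lived read: fetch the tokens and release the table immediately.
	rows, err := pool.Query(ctx, `SELECT fcm_token FROM users WHERE fcm_token <> ''`)
	if err != nil {
		return err
	}
	defer rows.Close()

	var tokens []string
	for rows.Next() {
		var t string
		if err := rows.Scan(&t); err != nil {
			return err
		}
		tokens = append(tokens, t)
	}
	if err := rows.Err(); err != nil {
		return err
	}

	// 2. Best-effort sends, outside any transaction.
	for _, token := range tokens {
		if _, err := fcm.Send(ctx, &messaging.Message{
			Token: token,
			Data:  map[string]string{"action": "invalidate_question_cache"},
		}); err != nil {
			log.Printf("fcm send failed (ignored): %v", err)
		}
	}
	return nil
}
```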
We pushed the fix somewhere around 2AM, after a few hours spent removing EVERY damn transaction from the codebase. We didn’t have any business logic complicated enough to actually need a transaction - wrapping everything in one had been absolute naivety on our part. But we didn’t stop there. We were traumatised from the day before.
What followed was close to 4 hours of absolute drama. We stocked up on snacks, cold drinks, chocolates - and opened up Grafana k6. k6 isn’t something any normal college student uses - it’s an industry-grade load-testing tool. But we were scared on a whole new level - so we began tormenting our backend with loads of up to 6000 concurrent hits. In parallel, we sent a message to our ACM core group, asking all our members to start spamming the create-team and join-team features on our app. Absolute chaos ensued. Hundreds of messages of people sending random team codes to join, creating teams at will, leaving teams every other second - absolute mayhem. But oh boy, that was some insane fun. The feeling of seeing your app, which had crashed at 800 users, handle 6500+ concurrent API calls without breaking a sweat; the high of an entire team of students up at 3AM just spamming buttons on a mobile app (which they made, btw), silently praying they don’t see an error message; the first-degree chaos - it all formed a moment I just won’t ever forget.
It was soon 6AM. We had done everything we possibly could. We headed back to our rooms, hoping to get some sleep before the event started at 8AM.
Somehow, I woke up at 7:45AM. Not because I wasn’t tired - I was. Simply because I was scared. Frightened - to a whole new level. I stared at the clock as it ticked closer to 8AM, the time when participants would start using the app again. Forty-five… Fifty… Fifty-five… Eight AM. The moment of truth.
For the next 20 minutes, I don’t think I blinked once. My eyes were plastered to my laptop screen, occasionally glancing at my WhatsApp to see if anyone reported any crashes. Time seemed to be passing even slower than usual. Everything seemed to pause.
And then came the moment. At 8:20AM, our vice-chair texted us - “no issues guys, cryptic hunt is live - have a good night” - and that’s when I finally let out a huge sigh of relief.
We actually did it. Cryptic Hunt was saved. The trauma we faced on day 0 was finally over.
That’s when I closed my eyes, and my head hit the pillow.
The Post-Mortem [non-tech]
I made my way to the CH control room somewhere around 11:30AM, which was basically the ground floor of Mahatma Gandhi block. The moment I entered the block, I was greeted by a sight that - to this day - stays etched in my heart.
Those who have heard me speak over the past year or so know that I have a famous line of sorts - “ACM is my family” - and at that moment, I saw everyone dressed in ethnic clothing - kurtas, salwars, sarees - just smiling and messing around. To the normal eye, it was probably nothing special - but to me, after a day of nothing but tense faces, these were the smiles and gleaming eyes of my very own people, enjoying an event they had (almost) successfully organised. No angry participants, no malfunctioning tech - just three generations of ACM-VIT having fun, together. And that, my friends, is one of my favourite moments of my entire college life.
The graVITas committee allowed a one-day extension for our event, making it the “only 3-day event” in the entire fest (or so we marketed it, lol). The app never crashed again, and the event proceeded uneventfully (pun intended).
And so we concluded Cryptic Hunt 2024 - a day so forgettable, it’s become a tale that is extremely unforgettable :)
Cheers







