<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[ACM-VIT Blogs]]></title><description><![CDATA[Because Technology Matters]]></description><link>https://blog.acmvit.in</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1769445100186/77548013-d986-4e34-bec5-b7a06d49843c.png</url><title>ACM-VIT Blogs</title><link>https://blog.acmvit.in</link></image><generator>RSS for Node</generator><lastBuildDate>Wed, 15 Apr 2026 00:11:27 GMT</lastBuildDate><atom:link href="https://blog.acmvit.in/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[The Private Repo: Is it Just a Myth Now?]]></title><description><![CDATA[You’re working on a private repository.  Maybe it’s a hackathon project. Maybe it’s code for a stealth startup. Maybe it’s just something you’re not ready to show anyone yet.
You install GitHub Copilo]]></description><link>https://blog.acmvit.in/the-private-repo-myth</link><guid isPermaLink="true">https://blog.acmvit.in/the-private-repo-myth</guid><category><![CDATA[GitHub]]></category><category><![CDATA[privacy]]></category><category><![CDATA[AI]]></category><category><![CDATA[Developer Tools]]></category><category><![CDATA[Security]]></category><dc:creator><![CDATA[Ishita Joshi]]></dc:creator><pubDate>Tue, 14 Apr 2026 10:49:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69dc964abe21b1bd0149fc36/d090cbf5-7bc3-49e2-84fe-ff5f455e5c2e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You’re working on a private repository.  Maybe it’s a hackathon project. Maybe it’s code for a stealth startup. Maybe it’s just something you’re not ready to show anyone yet.</p>
<p>You install GitHub Copilot, Codex, Cursor, Claude. You turn on “privacy mode” and even go into settings and disable “training on my data”.</p>
<p>So, your code is secret…right?</p>
<p>Not entirely. The digital walls of your private repository may be more permeable than they seem.</p>
<h3><strong>The Illusion of Control</strong></h3>
<img src="https://cdn.hashnode.com/uploads/covers/69dc964abe21b1bd0149fc36/2d935ce0-6c9d-44db-a91f-78665d8d6a14.png" alt="" style="display:block;margin:0 auto" />

<p>GitHub recently emailed all its users about a change in its data usage policy: it will collect interaction data from all users – “including inputs, outputs, code snippets, and associated context” – to train its models (<a href="https://docs.github.com/en/site-policy/privacy-policies/github-general-privacy-statement">source</a>). The most jarring part is that this is enabled by default unless you explicitly opt out.</p>
<p>On its face, that doesn’t sound so bad. If you support AI, why wouldn’t you want to contribute to improving it?</p>
<p>But that’s not really the problem.</p>
<p>The problem isn’t contribution. It’s consent.</p>
<p>Because voluntarily contributing to the construction of better AI is radically different from being signed up for it without clear awareness. And the “data” that feeds such a system could include proprietary code, sensitive logic, or work that was never intended to leave your own box.</p>
<h3><strong>Privacy Settings …or a Maze?</strong></h3>
<p>Cursor has privacy mode.</p>
<p>Claude has opt-outs.</p>
<p>Somewhere in your account, Copilot has settings.</p>
<p>Codex has 3 separate settings in 3 separate toggles across 3 separate menus that don’t talk to each other (<a href="https://developers.openai.com/codex/agent-approvals-security">source</a>, <a href="https://developers.openai.com/codex/enterprise/managed-configuration">source2</a>).</p>
<p>(Almost like it was designed to be confusing.)</p>
<p>But are these the solution to protecting our data?</p>
<p>The answer has more nuance than a simple yes or no. To understand how safe our data really is, let’s follow the journey of your code.</p>
<h3><strong>Where does your data go?</strong></h3>
<p>One of the most overlooked aspects of modern AI tooling is the slick “pre-read”. Before you even begin a session, your AI assistant is already taking a first pass at your code. It isn’t just passively waiting. It’s actively processing your project in the background. Most modern AI coding tools operate as Retrieval-Augmented Generation (RAG) systems(<a href="https://wp.astera.com/type/blog/rag-pipeline/">rag-pipeline</a>), meaning they don’t ‘know’ your code, but dynamically fetch relevant parts of it from a database every time you ask a question.</p>
<p>The second you open your project in a tool like Cursor, a background process starts monitoring your entire directory. It doesn’t even wait for a prompt before it begins reading your file tree, indexing and chunking your codebase into tiny pieces of data. Tools often use syntax-aware parsers to break code into meaningful chunks like functions, classes, and logical blocks before embedding them. These chunks, which on their own have no meaning, are then sent to an embedding model that converts the text of your codebase into vectors (<a href="https://cloud.google.com/blog/products/ai-machine-learning/how-to-use-grounding-for-your-llms-with-text-embeddings">source</a>) – numerical representations of data that, along with metadata, get stored in a vector database.</p>
<p>The plaintext of your code is not permanently stored (in privacy mode) but the embeddings and metadata are. The vector database essentially forms a map of your private repository on someone else’s servers (<a href="https://www.researchgate.net/publication/389819110_Securing_Retrieval-Augmented_Generation_Privacy_Risks_and_Mitigation_Strategies">rag-systems</a>).</p>
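<p>The chunk-embed-store step can be sketched in a few lines of Python. This is an illustrative toy, not any vendor’s actual pipeline: <code>chunk</code>, <code>embed</code>, and the in-memory <code>vector_db</code> are all hypothetical stand-ins (a real tool uses a syntax-aware parser, a hosted embedding model, and a remote vector database).</p>

```python
import hashlib

def chunk(source: str, max_lines: int = 8):
    """Naive chunker: fixed-size line blocks. Real tools use
    syntax-aware parsing (functions, classes, logical blocks)."""
    lines = source.splitlines()
    for i in range(0, len(lines), max_lines):
        yield "\n".join(lines[i:i + max_lines])

def embed(text: str) -> list[float]:
    """Stand-in for a real embedding model: derives a tiny
    deterministic vector from a hash of the chunk text."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:8]]

vector_db = []  # in a real tool, a remote vector database

def index_file(path: str, source: str):
    for i, piece in enumerate(chunk(source)):
        vector_db.append({
            "file": path,            # metadata is stored alongside...
            "chunk_id": i,
            "vector": embed(piece),  # ...the embedding, not the plaintext
        })

index_file("secrets.py", "API_KEY = 'hunter2'\n" * 20)
print(len(vector_db))  # number of chunks that would leave your machine
```

<p>Note what persists: the vectors and the metadata (file name, chunk id), not the plaintext – the “map” of your repository described above.</p>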
<h3><strong>You Didn’t Just Send a Prompt</strong></h3>
<p>When you type a prompt, it too gets converted into a vector using the same embedding model, so that it can be mathematically compared with the vectors already in the vector DB. The system then searches the database for the mathematically closest chunks (semantic search using cosine similarity).</p>
<p>The tool then combines:</p>
<ul>
<li><p>Your original prompt</p>
</li>
<li><p>Retrieved code chunks</p>
</li>
<li><p>The file you currently have open</p>
</li>
<li><p>Your cursor position</p>
</li>
<li><p>System prompts</p>
</li>
<li><p>Git history and conversation history(sometimes)</p>
</li>
</ul>
<p>to build the final augmented prompt that gets sent to the LLM.</p>
<p>The model processes this and streams the response back into the tool’s backend before reaching your IDE.</p>
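<p>The retrieval and assembly steps can be sketched the same way. Again a hedged toy: the two-dimensional vectors and the <code>retrieve</code> and <code>build_augmented_prompt</code> helpers are hypothetical, but the shape of the final request (system prompt, open file, cursor position, retrieved chunks, user prompt) mirrors the list above.</p>

```python
import math

def cosine(a, b):
    """Cosine similarity: the 'mathematically closest' measure
    used to rank stored chunks against the prompt vector."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(prompt_vec, db, k=2):
    """Return the k chunks whose vectors are closest to the prompt."""
    return sorted(db, key=lambda c: cosine(prompt_vec, c["vector"]),
                  reverse=True)[:k]

def build_augmented_prompt(prompt, chunks, open_file, cursor, system):
    # Everything below leaves your machine in a single request.
    return "\n".join([
        system,
        f"Open file: {open_file} (cursor at line {cursor})",
        *(f"Context: {c['text']}" for c in chunks),
        f"User: {prompt}",
    ])

db = [
    {"text": "def login(user): ...", "vector": [1.0, 0.0]},
    {"text": "def logout(user): ...", "vector": [0.9, 0.1]},
    {"text": "CSS_COLORS = [...]",   "vector": [0.0, 1.0]},
]
top = retrieve([1.0, 0.05], db)  # prompt vector near the auth code
print(build_augmented_prompt("fix the login bug", top,
                             "auth.py", 12, "You are a code assistant."))
```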
<img src="https://cdn.hashnode.com/uploads/covers/69dc964abe21b1bd0149fc36/f3fd7ff5-f138-4e6f-bba2-69eda798071a.png" alt="" style="display:block;margin:0 auto" />

<h3><strong>There’s More Than One Code Spy</strong></h3>
<p>In parallel to everything above, a telemetry event fires to the tool’s analytics infrastructure separate from the inference pipeline and includes lots of metadata. (Telemetry simply refers to automatic data collection about how you use a system like what you click, type, accept, or change.)</p>
<p>If you edit a file based on the response:</p>
<ul>
<li><p>The file watcher detects the change</p>
</li>
<li><p>Re-chunks the file</p>
</li>
<li><p>Re-embeds it</p>
</li>
<li><p>Updates the vector database</p>
</li>
</ul>
<p>This creates a continuous, real-time feedback loop.</p>
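<p>That loop amounts to a file watcher keyed on modification times. A minimal, stdlib-only sketch (hypothetical: production tools subscribe to OS file events rather than polling, and <code>reindex</code> would re-chunk, re-embed, and update the remote vector database):</p>

```python
import os

seen = {}  # path -> last observed modification time

def reindex(path):
    # placeholder for: re-chunk the file, re-embed it, update the vector DB
    print(f"re-indexing {path}")

def watch_once(root="."):
    """One polling pass over the project tree. Real watchers use
    OS-level file events (inotify, FSEvents) instead of polling."""
    changed = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            mtime = os.path.getmtime(path)
            if seen.get(path) != mtime:
                seen[path] = mtime
                reindex(path)
                changed.append(path)
    return changed

# A real assistant runs this continuously:
# while True: watch_once(".")
```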
<h3>The Full Data Flow</h3>
<p>So, to summarize, data flows through:</p>
<p>Your local codebase → file ingester → chunking engine → embedding model → vector database → LLM → response back in your IDE.</p>
<img src="https://cdn.hashnode.com/uploads/covers/69dc964abe21b1bd0149fc36/6b1e598c-d569-4dee-83aa-6cac64d7557a.png" alt="" style="display:block;margin:0 auto" />

<p>At no point, in most common implementations of this pipeline, is your data purely local. And all of this transmission happens without you ever being asked whether you wanted your code chunked, without being told which parts of your files became embeddings, and with no receipt showing what was transmitted, from where, or to whom.</p>
<p>Moreover, if you haven’t disabled training on your data, telemetry flows into these providers’ analytics infrastructures – Anthropic (AWS) (<a href="https://privacy.claude.com/en/">source</a>), OpenAI (Microsoft Azure), and potentially even Datadog and MongoDB for logging. This stored data is then used to train and improve these companies’ models.</p>
<img src="https://cdn.hashnode.com/uploads/covers/69dc964abe21b1bd0149fc36/3919f808-7dda-479e-adcc-2e58b3194ca0.png" alt="" style="display:block;margin:0 auto" />

<p>Across these steps,</p>
<ul>
<li><p>A Claude prompt flows through the servers of Anthropic, AWS and Google Cloud.</p>
</li>
<li><p>A GitHub Copilot prompt touches GitHub, Microsoft Azure, OpenAI and AWS/Anthropic or Google Cloud servers, depending on cloud selection (<a href="https://github.com/trust-center">source</a>).</p>
</li>
<li><p>Cursor passes data through Cursor, AWS, Cloudflare, Microsoft Azure, Turbopuffer (Google Cloud), OpenAI, Anthropic, Google Vertex AI, Fireworks AI, Baseten, xAI and optionally Exa/Serp API when web search is involved (<a href="https://cursor.com/docs/agent/security">source</a>).</p>
</li>
</ul>
<p>Cursor’s own documentation states that it stores embeddings from users’ indexed codebases along with metadata like file names and hashes (<a href="https://cursor.com/help/security-and-privacy/privacy">source</a>). GitHub Copilot ups the ante and explicitly mentions indexing entire codebases to build deeper understanding and fine-tune model behavior.</p>
<img src="https://cdn.hashnode.com/uploads/covers/69dc964abe21b1bd0149fc36/31f42f6e-08f7-4975-9d82-2c494438d6c9.png" alt="" style="display:block;margin:0 auto" />

<h3><strong>Privacy Mode Isn’t Really… Private</strong></h3>
<p>In privacy mode, coding models may not train on your private repository at rest, but while you are interacting with it (prompting with code, accepting suggestions, rejecting suggestions, sending thumbs-up or thumbs-down feedback) data is still collected and processed. Your code may be in “privacy mode” in static storage, yet these companies still access it during usage.</p>
<p>Privacy mode doesn’t stop:</p>
<ul>
<li><p>Code from being processed</p>
</li>
<li><p>Embeddings from being created</p>
</li>
<li><p>Requests from being sent to cloud APIs</p>
</li>
</ul>
<p>This is largely because these steps are required for the tools to function effectively.</p>
<p>There are actually two separate data flows:</p>
<ul>
<li><p>The service pipeline (to run the tool)</p>
</li>
<li><p>The training pipeline (to improve models)</p>
</li>
</ul>
<p>Opting out only affects the second.</p>
<h3>Security Implications</h3>
<p>This isn’t just about privacy.</p>
<p>AI-assisted code is also introducing new security risks (<a href="https://www.sciencedirect.com/science/article/pii/S0167404826000180">source</a>).</p>
<p>As of June 2025, AI-generated code was associated with over 10,000 new security findings per month, roughly a 10× increase from late 2024(<a href="https://futureciso.tech/ciso-alert-ai-code-vulnerabilities-on-the-rise/">source</a>).</p>
<p>What was meant to improve developer productivity is also expanding the potential attack surface in some cases.</p>
<h3><strong>So What Can We Actually Do?</strong></h3>
<p>So now that you and I know our data can never truly be “private”, what do we do about it?</p>
<p>There is no perfect way out, only trade-offs, because the very thing that makes these tools so helpful is the thing that requires your code to leave your machine: context.</p>
<p>Some options include:</p>
<p><strong>Option 1: Maximum Privacy</strong></p>
<p>“Code like it’s 2015”</p>
<p>No AI assistants, no prompts and no risks, but also no autocomplete, debugging or speed.</p>
<p>But this is safety at the cost of practicality.</p>
<p><strong>Option 2: Local-Only Models</strong></p>
<p>The next most practical way out is to use tools that run completely locally, avoiding third-party APIs entirely. But this comes at the cost of less capable, heavier local models and higher compute costs.</p>
<p><strong>Option 3: Practical Middle Ground</strong></p>
<p>If option 2 is also too much of a sacrifice, the next best approach is to use Privacy Mode, opt out of training, use temporary chats, and avoid the feedback buttons. On GitHub Copilot, the privacy option is under Settings → Copilot → Policies.</p>
<p>This helps but only partially (as we discussed above).</p>
<h3><strong>The Bigger Shift</strong></h3>
<p>So no, the private repo is not simply a myth but it’s also not what we think it is anymore. A private repo today is not as private as a private repo from 2018.</p>
<p>The game has changed. The lines between proprietary code and AI training data have blurred significantly. If you're a developer today, you need to realize that privacy in the age of AI isn't just about where your code lives on a server, but about what happens the moment you interact with it.</p>
<h3><strong>The Uncomfortable Truth</strong></h3>
<p>All programming models today were trained on millions of lines of publicly available GitHub code (<a href="https://arxiv.org/abs/2107.03374">source</a>). This didn’t start with Copilot settings or privacy toggles; the AI industry was built first, and consent came later. “Public code” became a confused misnomer: human programmers believed public meant open source for humans, but big tech interpreted “public” as an all-you-can-eat buffet for machines.</p>
<p>“Shutting the doors at this point won't change the fact that the AI industry is built on data gathered without asking for a strong indicator of enthusiastic consent.” (<a href="https://www.theregister.com/2026/03/26/github_ai_training_policy_changes/">The Register</a>)</p>
]]></content:encoded></item><item><title><![CDATA[The Idea That Can Measure Everything, Except Itself]]></title><description><![CDATA[Let's start with something small. Two strings of text:
String A:  ABABABABABABABABABABABABABAB
String B:  4c1j5b2p0cv4w1x8rx2y39umgw5q
  

Both are 28 characters. Both take up the same space on a ha]]></description><link>https://blog.acmvit.in/kolmogorov-complexity</link><guid isPermaLink="true">https://blog.acmvit.in/kolmogorov-complexity</guid><dc:creator><![CDATA[Patel Jiya]]></dc:creator><pubDate>Sun, 12 Apr 2026 16:42:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69ce3b512d4d5bd0f0e93d56/2f6124b0-fad6-4544-9bad-fc7f7ab9b9dc.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Let's start with something small. Two strings of text:</p>
<pre><code class="language-plaintext">String A:  ABABABABABABABABABABABABABAB
String B:  4c1j5b2p0cv4w1x8rx2y39umgw5q
  
</code></pre>
<p>Both are 28 characters. Both take up the same space on a hard drive. By most conventional measures, they're equivalent. But intuitively, you already know they're not. String A has a pattern. You could describe it in one sentence: <em>"AB repeated 14 times."</em> That description is far shorter than the string itself.</p>
<p>String B? You’d essentially have to write it out directly - there’s no obvious shortcut. The shortest description of that string <em>is</em> the string.</p>
<h2>The Core Idea, Stated Plainly</h2>
<p>Formal Definition:</p>
<p>Let U be a fixed universal Turing machine (the “universal computer” that Kolmogorov's theory relies on). The Kolmogorov Complexity of a string x is:</p>
<img src="https://cdn.hashnode.com/uploads/covers/69ce3b512d4d5bd0f0e93d56/dd1e1d7a-0872-48ab-9d7e-8337f473adf8.png" alt="" style="display:block;margin:0 auto" />

<p>Where:</p>
<ul>
<li><p>p is a program (binary string),</p>
</li>
<li><p>∣p∣ is its length,</p>
</li>
<li><p>U(p)=x means the program outputs x and halts.</p>
</li>
</ul>
<h3><strong>Why the Machine Choice Doesn’t Matter Much</strong></h3>
<img src="https://cdn.hashnode.com/uploads/covers/69ce3b512d4d5bd0f0e93d56/c700e0b3-e257-4365-ac83-63956f8ae2d9.png" alt="" style="display:block;margin:0 auto" />

<blockquote>
<p>This means even if we change the underlying computer model, the complexity only changes by a fixed constant. So we can treat Kolmogorov Complexity as an inherent property of the string.</p>
</blockquote>
<p>The <strong>Kolmogorov Complexity</strong> of an object: a string, a number, an image, any piece of information is defined as the length of the shortest computer program that outputs that object and then halts. If we call the object <em>x</em> and the complexity <em>K(x)</em>:</p>
<pre><code class="language-plaintext">Intuition, not actual syntax 

K("ABABABABABABABABABABABABABAB")
  shortest program: print("AB" * 14)   approx. 16 chars
     K(x) is LOW and there's structure here

K("4c1j5b2p0cv4w1x8rx2y39umgw5q")
  shortest program: print("4c1j5b2p0cv4w1x8rx2y39umgw5q")
     K(x) approx. length of x itself
     K(x) is HIGH and this is randomness
</code></pre>
<p>Low complexity indicates a simple pattern or explanation. High complexity means the string is its own shortest description. In information theory, a string with complexity equal to its length is called incompressible, which is the formal definition of random.</p>
<h3><strong>Basic Upper Bound</strong></h3>
<p><strong>K(x) ≤ |x| + c</strong></p>
<blockquote>
<p>At worst, a program can simply print the string directly, so the complexity can never be much larger than the string itself.</p>
</blockquote>
<p><strong>The Key Shift</strong></p>
<p>Before Kolmogorov, randomness was defined by process - a sequence was random if it was produced by a fair coin flip. But that means HHHHHHHHHHHH is technically "possible" under a fair coin. It just has low probability.</p>
<p>Kolmogorov gives us a structural definition instead: a sequence is random if no program can describe it shorter than the sequence itself. It doesn't matter how it was produced. What matters is what it <em>is.</em></p>
<h2>A Few Real Examples</h2>
<table>
<thead>
<tr>
<th>Object</th>
<th>Complexity K(x)</th>
<th>Why</th>
</tr>
</thead>
<tbody><tr>
<td>000000000000</td>
<td>Very low</td>
<td>"N zeros" - one tiny program covers it.</td>
</tr>
<tr>
<td>3.14159265…</td>
<td>Low-medium</td>
<td>It's π - a short algorithm generates it forever.</td>
</tr>
<tr>
<td>k4xQ9mZ2pL7wR1s</td>
<td>≈ its own length</td>
<td>No pattern, no shortcut. This is randomness.</td>
</tr>
</tbody></table>
<p>The π example is interesting. Its digits appear random and pass randomness tests, yet π has low Kolmogorov Complexity since a simple algorithm can generate it. This illustrates the difference between statistical and Kolmogorov randomness; something can seem patternless but still have a simple structure.</p>
<h2>A Fundamental Limitation: Uncomputability</h2>
<p>Here's where it stops being a neat conceptual tool and becomes something uncomfortable.</p>
<p>Kolmogorov Complexity is <strong>uncomputable.</strong></p>
<h3><strong>Connection to the Halting Problem</strong></h3>
<p>The uncomputability of Kolmogorov Complexity is closely tied to the Halting Problem.</p>
<p>The Halting Problem asks:<br /><em>Given a program and an input, can we determine whether the program will eventually stop or run forever?</em> Alan Turing proved that no algorithm can solve this problem for all possible programs. It is proven, mathematically and fundamentally, to be impossible to compute in general.</p>
<p>Suppose we had an algorithm that computes K(x). Then we could:</p>
<ol>
<li><p>Enumerate all programs of length ≤ n</p>
</li>
<li><p>Run them and observe outputs</p>
</li>
<li><p>Identify the shortest program that produces a given string</p>
</li>
</ol>
<p>But this process requires knowing which programs halt. If we could compute Kolmogorov Complexity exactly, we would effectively solve the Halting Problem, which is impossible.</p>
<p>There is also a direct contradiction. Suppose K(x) were computable. Then we could write a program of some length L that says: “enumerate all strings and print the first one whose complexity exceeds L.” There lies the contradiction: the program is L characters long but claims to output a string with a shortest description longer than L. This is impossible, since the program itself describes the string in L characters.</p>
<p>This resembles Berry's Paradox in algorithmic form: “the smallest positive integer not definable in under thirteen words” - a phrase that defines it in twelve. Both are self-reference causing contradiction. Any system capable of fully measuring Kolmogorov Complexity ultimately contradicts itself.</p>
<blockquote>
<p>"We can compute upper bounds on complexity. We can never verify the true minimum. Every compression is an approximation of something fundamentally out of reach."</p>
<p>~ Paraphrasing Li &amp; Vitányi, An Introduction to Kolmogorov Complexity</p>
</blockquote>
<p><strong>What We Can Actually Do</strong></p>
<p>Zip a file-the compressed size provides an upper bound on its Kolmogorov Complexity. Use a better algorithm for a tighter bound. But the true minimum? We can only approach it, never confirm it. Every compression algorithm is an attempt at something it can never fully achieve.</p>
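<p>This is easy to try. Python’s standard <code>zlib</code> module gives a computable upper bound on complexity: the patterned string from the top of the article compresses well below its length, while the random-looking one does not. (A sketch: zlib stands in for “the best compressor we happen to have”, which is all we ever get.)</p>

```python
import zlib

def upper_bound_on_K(s: str) -> int:
    """Compressed size in bytes: a computable *upper bound* on the
    complexity of s. The true minimum K(s) is uncomputable."""
    return len(zlib.compress(s.encode(), 9))

patterned = "AB" * 14                        # "AB repeated 14 times"
random_ish = "4c1j5b2p0cv4w1x8rx2y39umgw5q"  # the patternless string

print(len(patterned), upper_bound_on_K(patterned))    # bound well below 28
print(len(random_ish), upper_bound_on_K(random_ish))  # bound near (or above) 28
print(upper_bound_on_K("AB" * 5000))  # 10,000 characters; the bound stays tiny
```

<p>A better compressor would give a tighter bound, but no compressor can certify it has reached the minimum.</p>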
<h3><strong>Incompressibility Theorem</strong></h3>
<p><strong>K(x) ≥ n;</strong></p>
<p>(for most strings of length n)</p>
<p>There are far fewer short programs than long strings. So most strings cannot be compressed - in fact, most strings are essentially random.</p>
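<p>The counting behind this fits in a few lines: there are more strings than short programs, so most strings have no short description.</p>

```python
# Counting argument (illustrative; "program" here just means "binary string"):
# there are 2**n binary strings of length n, but strictly fewer binary
# programs shorter than n bits, so some string of every length is
# incompressible.
n = 12
num_strings = 2 ** n
num_shorter_programs = sum(2 ** k for k in range(n))  # lengths 0 .. n-1

print(num_strings, num_shorter_programs)  # 4096 4095

# More generally, at most 1 in 2**c strings can be compressed by c bits:
c = 4
print(sum(2 ** k for k in range(n - c)) / num_strings)
```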
<h2>Where It Shows Up in the Real World</h2>
<h3>Biology - Your Genome Is a Compressed Program</h3>
<p>The human genome encodes a 37-trillion-cell organism in about 750 megabytes, less than a standard HD movie. It describes the entire human body, from development to immune responses and neurological architecture, in a compact file.</p>
<p>Researchers in computational biology apply Kolmogorov Complexity to compare species' genomes, track evolution, and explore why diverse organisms share similar genetic complexity. By questioning "How compressible is this genome?" we gain insights into life.</p>
<h3>Machine Learning - Compression as Understanding</h3>
<p>One way to interpret learning is that a model discovers a shorter description of the data. Instead of memorizing every example, it captures patterns that allow it to represent the data more efficiently.<br />This idea is the basis of the Minimum Description Length (MDL) principle-choose the hypothesis that makes the total length of the model and the encoded data as short as possible. It's like a mathematical version of Occam's Razor.</p>
<h3>Anomaly Detection - Finding What Doesn't Compress</h3>
<p>In cybersecurity and network monitoring, normal traffic has structure - it compresses. Malicious or anomalous traffic often doesn't fit learned patterns and therefore compresses poorly relative to baseline. Systems that flag "this data is harder to describe than expected" are doing Kolmogorov reasoning in practice, even without calling it that. The idea leaks into engineering even when nobody's tracking it back to its source.</p>
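<p>That intuition can be turned into a toy detector: score each record by how many extra compressed bytes it adds on top of a baseline of normal traffic. This is an illustrative sketch (real systems use trained baselines or measures like the Normalized Compression Distance), and the traffic samples are invented.</p>

```python
import zlib

def csize(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def anomaly_score(baseline: bytes, record: bytes) -> int:
    """Extra compressed bytes needed to describe `record` on top of
    the baseline. Records resembling the baseline add little;
    novel data adds a lot."""
    return csize(baseline + record) - csize(baseline)

baseline = b"GET /index.html HTTP/1.1\n" * 200  # structured, 'normal' traffic
normal = b"GET /about.html HTTP/1.1\n"          # fits the learned pattern
novel = bytes(range(256))                       # stand-in for shellcode-like bytes

print(anomaly_score(baseline, normal))  # small: compresses against the baseline
print(anomaly_score(baseline, novel))   # large: harder to describe than expected
```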
<h3>Physics - The Universe Has a Complexity Score</h3>
<p>If you think physics can be fully calculated and the universe follows specific rules, then there should be a shortest program that explains the universe's entire history. Finding a Theory of Everything is like looking for the simplest way to describe all of physical reality.</p>
<p>And here's the sting: we can never verify we've found it. A simpler set of laws might produce identical observations. The uncomputability result means we'd have no way to prove our theory is actually the shortest. Every Grand Unified Theory is, formally speaking, an upper bound pretending to be an answer.</p>
<h2>AIXI - What Happens When You Build AI Out of This</h2>
<p>AIXI assigns higher weight to simpler environments:</p>
<img src="https://cdn.hashnode.com/uploads/covers/69ce3b512d4d5bd0f0e93d56/bfcbcf15-2fe3-4da2-b3ed-b80a5dcf946e.png" alt="" style="display:block;margin:0 auto" />

<p>Shorter programs (simpler explanations) are given exponentially higher importance, reflecting a formal version of Occam’s Razor.</p>
<p>The answer Marcus Hutter arrived at was <strong>AIXI</strong> - a theoretical model of a perfect AI agent. Its core loop:</p>
<p><strong>1 Observe the world</strong> Take in all sensory data and history of interactions so far.</p>
<p><strong>2 Build the simplest model consistent with observations</strong> Using Kolmogorov Complexity to find the shortest program that explains everything seen so far.</p>
<p><strong>3 Act to get the best future rewards based on that model</strong><br />Pick actions that lead to the best results using the simplest model of the world.</p>
<p><strong>4 Update. Repeat.</strong><br />Each new observation improves the model. Keep compressing. Keep getting better.</p>
<p>AIXI can be mathematically proven to be optimal - no other agent can consistently outperform it across all possible environments. It is the formal theoretical ceiling of intelligence itself.</p>
<p>It cannot be built due to Step 2 requiring the computation of Kolmogorov Complexity, which is uncomputable. AIXI is a provably optimal agent that cannot exist. It's the most useful concept in AI theory that will never run on any machine.</p>
<p><strong>Why AIXI Still Matters Despite Being Uncomputable</strong></p>
<p>AIXI defines the ceiling. Every real AI system — every large language model, every reinforcement learning agent, every decision tree — is a computable approximation of something that, in its perfect form, is grounded in Kolmogorov's idea.</p>
<p>When researchers design smarter AI, they are in a precise sense trying to close the gap to AIXI while staying within the bounds of what a computer can actually execute. The uncomputable ideal isn't a dead end. It's a compass pointing at something we can keep getting closer to, even if we can never fully arrive.</p>
<h2>The Big Debate - Is Compression the Same as Understanding?</h2>
<p>This is where the idea becomes genuinely controversial — and where serious researchers take hard, opposing positions.</p>
<p>If intelligence is compression, and compression is what our best AI systems do - finding compact representations, predicting structure, generalising from patterns — then maybe a model that compresses language well enough is, in some meaningful sense, actually understanding it. The debate:</p>
<h3><strong>The Compression Camp</strong></h3>
<p>Understanding <em>is</em> finding the shortest description. To understand something is to see the pattern beneath the noise. GPT-4 "understands" English because it has found an extraordinarily compact representation of its statistical structure. Intelligence is what good compression looks like from the outside.</p>
<h3><strong>The Meaning Camp</strong></h3>
<p>A ZIP file compresses a novel but understands nothing. Kolmogorov Complexity measures structural regularity, yet meaning, reference, and intentionality are entirely different. While compression might be necessary for intelligence, it is far from sufficient. The real challenge lies in what the compression represents.</p>
<p>No side has prevailed. Schmidhuber, a key figure in deep learning, has long claimed that curiosity, creativity, and intelligence all come down to seeking compression. The other side thinks this view oversimplifies and misses the main point. The debate is a mix of computer science, philosophy of mind, and AI alignment, and it remains unresolved.</p>
<p>There's a final implication of all this that rarely makes it into the textbooks, but probably should.</p>
<p>Science is about finding simpler ways to describe the world. Every theory, model, and equation tries to make observations shorter and more general. Newton's laws sum up the motion of all objects in three sentences. Maxwell's equations cover classical electromagnetism in four. General relativity explains spacetime curvature with one compact equation. This is Kolmogorov thinking. Science is compression in action.</p>
<p>But here's what the uncomputability result tells us that we tend to quietly ignore: <strong>we can never know if any of our theories are actually the shortest description.</strong> A simpler set of equations might produce every observation we've ever made. We would have no way to prove it doesn't exist. Every Theory of Everything is, mathematically speaking, an upper bound on the complexity of the universe - not a proof that we've reached the minimum.</p>
<p>We have Occam's Razor - prefer the simpler explanation. We also have a mathematical proof that we can never confirm which explanation is truly simplest. Kolmogorov gives you the razor and removes the certainty of knowing when to stop shaving.</p>
<p>The shortest program that outputs the universe exists - in principle.<br />We just can never prove we've found it.</p>
<p>Every theory is an upper bound.<br />Every model is an approximation of something unreachable.<br />Every answer is a best guess at a minimum we can approach but never touch.</p>
<p>Which is either the most humbling thing in the history of human thought -<br />or the most compelling reason to keep going.</p>
<p>Probably both</p>
<p>Further reading: Li &amp; Vitányi - <em>An Introduction to Kolmogorov Complexity and Its Applications</em> · Marcus Hutter's AIXI paper · Schmidhuber on compression and creativity · Chaitin on algorithmic information theory</p>
]]></content:encoded></item><item><title><![CDATA[Brain-Computer Interfaces: Can Your Brain Be Hacked?]]></title><description><![CDATA[There’s a moment in every sci-fi film where someone jacks a cable into the back of their skull and instantly downloads kung-fu. We laugh at it. We call it fiction. But right now, a former HR director ]]></description><link>https://blog.acmvit.in/brainhacking</link><guid isPermaLink="true">https://blog.acmvit.in/brainhacking</guid><category><![CDATA[neuralink]]></category><category><![CDATA[brain]]></category><category><![CDATA[hacking]]></category><category><![CDATA[Brain-Computer Interfaces ]]></category><dc:creator><![CDATA[Devi kiran]]></dc:creator><pubDate>Sun, 12 Apr 2026 10:18:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/67fa7fdd6e0898820cbfc9f7/80b5867c-4f64-45c6-b59a-1cdaa66b2390.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There’s a moment in every sci-fi film where someone jacks a cable into the back of their skull and instantly downloads kung-fu. We laugh at it. We call it fiction. But right now, a former HR director named Pat Bennett  living with ALS  is generating sentences on a screen at 62 words per minute using only her thoughts. No cable. No hands. Just neurons and a cluster of electrodes embedded in her brain.</p>
<p>That’s not a movie. That’s a 2023 paper in Nature (published by Springer Nature, a German-British academic publishing company). And it raises a question nobody in the press release bothered to ask: her brain signals were being transmitted wirelessly, processed by external software, and stored on servers. So what happens when the most intimate data that exists, your actual neural activity, becomes just another thing that can be breached?</p>
<p>We spent decades arguing about whether your phone was listening to you. We never thought to ask what happens when the device is inside your skull.</p>
<h3>What a BCI Actually Does</h3>
<p>A Brain-Computer Interface reads your brain's electrical signals and translates them into something a machine can act on. Your neurons fire in patterns. The BCI captures those patterns, filters out noise, runs them through a machine learning model, and converts the output into a command: move a cursor, type a word, control a robotic arm.</p>
<p>There are two architectures, and the difference matters enormously, both for capability and for security.</p>
<p><strong>Non-invasive BCIs</strong> use electrodes placed on your scalp. These are the EEG headsets you've seen in documentaries and, increasingly, on retail shelves. Easy to put on, easy to take off. The tradeoff is signal clarity: you're reading brain activity through skin and bone, so the resolution is limited. Consumer devices from companies like Emotiv, Muse, and Neurosity work this way.</p>
<p>On the other hand, <strong>Invasive BCIs</strong> implant microelectrode arrays directly into brain tissue. Much higher fidelity. Much higher stakes. Neuralink’s device works this way. So does the system used in the BrainGate clinical trials that produced Pat Bennett’s results.</p>
<p>(Neuralink is a neurotechnology company founded by Elon Musk that develops implantable brain-computer interfaces (BCIs) to connect human brains directly to computers.)</p>
<p>BrainGate implanted four microelectrode arrays in the region of Bennett's brain responsible for speech. (In a separate trial published in the New England Journal of Medicine, a comparable system demonstrated 99.6% accuracy on a 50-word vocabulary within 30 minutes of activation.) Bennett's own results reached 62 words per minute, more than three times faster than any previous BCI, with a <strong>9.1% word error rate</strong> on a 50-word vocabulary and <strong>23.8%</strong> on a 125,000-word vocabulary. She used the system at home. Unsupervised. Over a wireless connection.</p>
<p>That last part is the part nobody talks about.</p>
<h3>The Threat Landscape</h3>
<p>The phrase “hacking a brain” sounds cinematic. Let’s be precise about what it actually means, because the reality is both more mundane and more alarming than the movies suggest.</p>
<p><strong>Signal interception</strong>
Most modern BCIs transmit data wirelessly because the alternative is a cable permanently protruding from someone's head, which creates its own obvious problems. Wireless transmission solves that. It also introduces a new one: anything broadcast over radio can, in principle, be received by anyone within range with the right equipment.</p>
<p>Neural signals are not like passwords. A stolen password reveals what you typed. Intercepted neural data can reveal things you didn't consciously express: stress responses, emotional states, subconscious reactions to faces or images or words. One study demonstrated that EEG data alone could be used to infer a user's PIN with meaningful accuracy. Another research group showed that neural signals recorded during passive viewing could be used to reconstruct, in rough form, what someone was looking at.</p>
<p>The data being transmitted isn't just "she wanted to type the letter A." In high-fidelity invasive systems, it's a continuous stream of electrical activity from inside the brain, far richer, and far more revealing, than anything the user explicitly intended to send.</p>
<p><strong>Firmware and parameter tampering</strong>
Any device that accepts over-the-air updates or remote configuration commands can, in principle, be told to do something its owner didn't authorise. This is not a theoretical concern; it's already happened with other implanted medical devices.</p>
<p>In 2012, security researcher Barnaby Jack demonstrated that pacemakers could be wirelessly commanded to deliver an 830-volt shock from fifty feet away: a potentially fatal attack requiring no physical access to the patient. The attack worked because the device trusted any command that arrived in the right format. It didn't verify who was sending it.</p>
<p>BCIs share this exact attack surface. A device that stimulates the motor cortex (a region in the frontal lobe of the brain, responsible for planning, controlling, and executing voluntary skeletal muscle movements) to help a paralysed patient move a limb is, functionally, a device that applies electrical signals to brain tissue. The security question of who gets to authorise those signals, and how the device verifies that authorisation, is not a minor implementation detail.</p>
<p><a href="https://nptl.stanford.edu/sites/g/files/sbiybj23731/files/styles/responsive_large/public/media/image/naturalandbcisystemconceptsketch_0.jpg.webp?itok=lcQkWlnK"><img src="https://nptl.stanford.edu/sites/g/files/sbiybj23731/files/styles/responsive_large/public/media/image/naturalandbcisystemconceptsketch_0.jpg.webp?itok=lcQkWlnK" alt="Brain-computer interface concept sketch" /></a></p>
<p>This makes over-the-air updates a particular concern. An unprotected update channel is an attack surface. A device that accepts remote updates can, if that channel isn't properly secured, accept them from anyone, not just the manufacturer. For most software, a malicious update is a serious nuisance. For an implanted neural device, it becomes a different league of threat.</p>
<p>(Over-the-air updates means delivering a software update wirelessly, without any physical connection. For example, your phone does this when it downloads an iOS or Android update in the background.)</p>
<p><strong>Data exfiltration at the platform level</strong>
This is the attack vector that requires the least sophistication and carries the most scale. It doesn't require hacking the implanted device at all.</p>
<p>Neural data in most current BCI systems is offloaded to external servers for processing, storage, and model training. Those servers are subject to the same breaches as any other database. The difference is the nature of what's stored. A leaked email password is annoying. A leaked database of neural recordings is something you cannot fix. You can't reset your brain. You can't issue yourself a new neural profile. The data is permanently identifying, permanently sensitive, and permanently yours in the sense that it can never stop describing you, even after it's out of your hands.</p>
<h3>The Privacy Problem Is Actually Worse</h3>
<p>Here’s something worth sitting with: most of the neural data risk doesn’t come from sophisticated cyberattacks. It comes from completely legal data practices.</p>
<p>Emotiv, one of the world's largest consumer EEG manufacturers, allows sharing of anonymised neural data with third parties for research and commercial purposes. This isn't a breach. It's in the terms of service. A Neurorights Foundation survey of 30 direct-to-consumer neurotechnology companies found that clicking "I agree" on 29 of them would grant the company the right to sell that neural data to third parties. The users generating those neural profiles had no meaningful visibility into how the data would be used, by whom, or for how long.</p>
<p>Think about what that actually means. You buy a consumer EEG headset to meditate, or to study, or to experiment. You click through the setup screens. You have now, legally, handed a company the right to sell recordings of your brain activity, which may contain traces of your emotional states, your cognitive responses, and your subconscious reactions to the content you consumed while wearing the device, to anyone willing to pay for it.</p>
<p>The legal weight of this became sharply visible in 2023, when Chile's Supreme Court ordered Emotiv to delete the brain data it had collected on a former senator. The court found that retaining anonymised neural data for research purposes, without specific prior consent, violated his constitutional rights. That case was possible because Chile had done something no other country on earth had done: it amended its constitution to enshrine the right to mental privacy and cognitive liberty directly in its founding law. The senator had a right to invoke. Most people don't.</p>
<p>Major data protection frameworks, including GDPR in Europe, have no specific provisions for neural data. In the United States, Colorado and Minnesota have begun developing targeted neurotechnology legislation. Federal protections do not yet exist. The gap between what the technology can do and what the law protects against is wide, and it is being filled, right now, by individual companies making individual choices about what to do with data that has never existed before in human history.</p>
<h3>What Secure Neurotechnology Looks Like</h3>
<p>The security community has been here before with pacemakers, with insulin pumps, with every connected medical device that shipped convenience before it shipped caution. The technical lessons from those fights aren't complicated. They're just being ignored again.</p>
<p><strong>Encryption, end-to-end.</strong> Neural data should be encrypted both in transit and at rest. Manufacturers will tell you implanted devices don't have the power budget for heavy cryptography. That's an engineering problem, which is to say a solvable one, not an acceptable reason to ship unencrypted hardware into someone's brain.</p>
<p><strong>Proper authentication for every connection.</strong> The attack that let Barnaby Jack command a pacemaker to fire from fifty feet away worked because the device accepted any command that arrived in the right format. No verification of who was sending it. Every wireless connection to a BCI should require real mutual authentication, not "the device responded, so we trust it."</p>
<p><strong>Secured over-the-air updates.</strong> Firmware updates need to exist, because the alternative, surgical intervention every time a security patch is needed, is clearly untenable. But they need to be cryptographically signed and verified, so that a device can confirm it's receiving a legitimate update from a legitimate source rather than instructions from an attacker who's learned the update protocol.</p>
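<p>To make the accept/reject logic concrete, here is a minimal sketch of signed-update verification. It is illustrative only, not any manufacturer's actual scheme: a real implant would verify an asymmetric signature (such as Ed25519, so the signing key never leaves the manufacturer), but a keyed HMAC from Python's standard library shows the same idea.</p>

```python
import hmac
import hashlib

# Hypothetical device key, provisioned at manufacture. In a real asymmetric
# scheme the device would hold only a public key, never a shared secret.
DEVICE_KEY = b"provisioned-at-manufacture"

def sign_update(firmware: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """Manufacturer side: tag the firmware image before it is broadcast."""
    return hmac.new(key, firmware, hashlib.sha256).digest()

def apply_update(firmware: bytes, tag: bytes, key: bytes = DEVICE_KEY) -> bool:
    """Device side: refuse any image whose tag does not verify."""
    expected = hmac.new(key, firmware, hashlib.sha256).digest()
    # compare_digest avoids leaking information through comparison timing
    return hmac.compare_digest(expected, tag)

legit = b"\x01\x02firmware-v2"
tag = sign_update(legit)
assert apply_update(legit, tag)                   # legitimate update accepted
assert not apply_update(b"malicious-image", tag)  # tampered image rejected
```

The point is the last two lines: an unsigned or tampered image simply never gets applied, no matter how well-formed it looks.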
<p><strong>A real off switch.</strong> This one gets the least attention. Users should be able to disable the wireless radio entirely. Not airplane-mode-but-still-broadcasting off. Actually off. A device that is permanently broadcasting neural data is a device that is permanently exposed. The user should have the ability to make that exposure stop.</p>
<p>And underlying all of these: security needs to be designed in from the beginning, not bolted on after the product ships. The IoT industry made the convenience-first choice a decade and a half ago with smart home devices, and we are still cleaning up the consequences. The difference is that a compromised thermostat sends your heating schedule to a stranger. A compromised BCI has electrodes in your motor cortex.</p>
<h3>The Window Is Closing</h3>
<p>Maybe you're not planning to get a brain implant. Here's the scale of what's already happening, whether you are or not.</p>
<p>On March 30, 2025, Precision Neuroscience received FDA 510(k) clearance for its Layer 7 Cortical Interface, the first full regulatory clearance granted to a company developing a next-generation wireless BCI. Neuralink has implanted devices in multiple human patients. Emotiv, Muse, and Neurosity EEG headsets are on retail shelves. The distance between "laboratory prototype" and "consumer product" is compressing faster than anyone predicted five years ago.</p>
<p>(Precision Neuroscience is a neurotechnology company developing a minimally invasive brain-computer interface (BCI) designed to help patients with paralysis control digital devices using only their thoughts.)</p>
<p>The security and regulatory infrastructure is not keeping pace. The devices are moving faster than the rules, faster than the security research, faster than the legal frameworks that would tell a company what it is and isn't allowed to do with the data those devices generate. Someone has to close that gap.</p>
<p>The skills that make someone good at CTFs and security research, reading systems for weaknesses, thinking like an attacker, finding the edge case nobody planned for, are precisely the skills this field is missing. Neurotechnology has neuroscientists and electrode engineers and machine learning researchers. It does not have enough people whose job is to ask: what breaks? What gets abused? What does this look like from the other side? And here we have a target that actually matters.</p>
<h3>Why This Matters More Than It Seems</h3>
<p>There's a version of this story where BCIs are simply the next generation of consumer technology — a more intimate interface, a cleverer device, with the same security challenges and the same eventual solutions as everything before it.</p>
<p>There's another version where they're categorically different.</p>
<p>Every piece of consumer technology before this has captured what you do: what you search, what you buy, where you go, what you say. BCIs capture what <strong>YOU</strong> are, the electrical signatures of thought and emotion and reaction that exist below the level of deliberate expression. Data that you didn't choose to generate. Data that reveals things about you that you might not know about yourself.</p>
<p>The engineers building the next generation of these devices will make decisions about encryption, about data retention, about what gets transmitted and what gets stored and what gets sold, and those decisions will determine whether neural interfaces become tools of human dignity or the foundation of a surveillance infrastructure that nobody consciously consented to. Those engineers are, in many cases, students right now. Some of them are security researchers who haven't yet looked at this space.</p>
<p>It is worth looking at this space. Before someone else makes the decisions for you. Before the terms of service have already been agreed to. Before the data that cannot be reset has already left the building.</p>
<p>The window is open. It won't stay open.</p>
]]></content:encoded></item><item><title><![CDATA[Agent Skills & AaaS: AI’s Universal Adapter & On-Demand Agents]]></title><description><![CDATA[Stop me if this sounds familiar: You’re in the middle of a high-intensity "vibe coding" session, feeling like a digital god, until your AI assistant confidently suggests a library version that hasn't been relevant in a while. Suddenly, the flow is de...]]></description><link>https://blog.acmvit.in/on-demand-agent-skills</link><guid isPermaLink="true">https://blog.acmvit.in/on-demand-agent-skills</guid><category><![CDATA[AI]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[agents]]></category><category><![CDATA[mcp]]></category><category><![CDATA[agent-skills]]></category><category><![CDATA[Artificial Intelligence]]></category><dc:creator><![CDATA[Rishit Shivam]]></dc:creator><pubDate>Mon, 26 Jan 2026 10:16:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1769422455936/4f6df716-7e22-4fb2-ba1e-3b952868a8c1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Stop me if this sounds familiar: You’re in the middle of a high-intensity "vibe coding" session, feeling like a digital god, until your AI assistant confidently suggests a library version that hasn't been relevant in a while. Suddenly, the flow is dead, and you're back to manual labour, opening endless documentation tabs just to find the one updated command the AI should have known, or manually clicking through menus to test changes that the AI should have handled in the background.</p>
<p>It turns out that the dream of being a "lazy" architect, watching things happen in the background while you are busy playing Elden Ring, has been hindered by a chaotic mess of custom bridges and data silos. But the game is changing. We are moving from a world where you have to pick up the cards to a world where you hire a professional gambler who already knows every trick and strategy.</p>
<h2 id="heading-1-deep-dive-into-mcp"><strong>1. Deep Dive into MCP</strong></h2>
<p><strong>What is MCP?</strong></p>
<p>Ever thought how cool it would be if you didn't have to click the buttons or perform the operations an AI model instructs you to? Being completely lazy, doing nothing but watching things happen in the background while you continue watching Stranger Things.</p>
<p>In plain English, the <strong>Model Context Protocol (MCP)</strong> is a universal language that allows an AI to talk to your computer and the internet.</p>
<p>Before MCP, if you wanted an AI to read your files or check your calendar, a developer had to write a "custom bridge" specifically for that task. If they switched AI models, they often had to rebuild the bridge. MCP replaces this chaos with a <strong>single, open standard</strong>.</p>
<p>In short, MCP’s origin story is about breaking data silos: with M applications and N data sources, integration used to demand M×N bespoke connectors; MCP cuts that to M+N by standardising the interface.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769345715339/1a647bf5-eadf-4394-8f33-f362b6a912ad.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-the-crust-of-the-technicality"><strong>The crux of the technicality</strong></h4>
<p>To understand how MCP actually works under the hood, we need to look at the <strong>Host-Client-Server</strong> relationship and the <strong>JSON-RPC</strong> signals used.</p>
<h3 id="heading-1-main-components"><strong>1. Main Components</strong></h3>
<ul>
<li><p><strong>MCP Host:</strong> This is the Environment/Application you are actually using, like <strong>Claude Desktop</strong>, <strong>Cursor</strong>, <strong>Antigravity</strong> (pretty famous these days), or a specialised <strong>IDE</strong>. The Host is the main character of the movie, who manages security and decides which AI model gets access to which tools.</p>
</li>
<li><p><strong>MCP Client:</strong> This is the <strong>Translator</strong> inside the Host. It maintains a 1:1 connection to a server and handles the "handshake" to ensure both sides are speaking the right version of MCP.</p>
</li>
<li><p><strong>MCP Server:</strong> This is where the real work happens. It’s a small, lightweight program that knows how to do one thing well, like "Read Google Drive" or "Search a SQL Database." It exposes these abilities to the AI in a standardised way.</p>
</li>
</ul>
<h3 id="heading-2-the-communication-json-rpc-20"><strong>2. The Communication (JSON-RPC 2.0)</strong></h3>
<p>MCP uses <strong>JSON-RPC 2.0</strong>, a lightweight "Remote Procedure Call" protocol. Instead of messy, unstructured chat, the Client and Server send strict, numbered messages:</p>
<ul>
<li><p><strong>Requests:</strong> "Hey Server #1, run the get_products tool for 'DMart'."</p>
</li>
<li><p><strong>Responses:</strong> "Hey Client #1, here is the data for Request #1: &lt;data&gt;"</p>
</li>
<li><p><strong>Notifications:</strong> "Heads up, the file you were watching just changed!" (No reply needed).</p>
</li>
<li><p><strong>Format:</strong></p>
<p>  <code>{ jsonrpc: "2.0", id: number | string, method: string, params?: object }</code></p>
</li>
</ul>
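<p>To make those shapes concrete, here is what the three message types could look like built in Python. The method names follow MCP's published conventions (<code>tools/call</code>, notification methods), but the payload values are invented for illustration:</p>

```python
import json

# A hypothetical tools/call request, following the envelope shown above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_products", "arguments": {"store": "DMart"}},
}

# The server answers with the SAME id, so the client can pair them up.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "...product data..."}]},
}

# A notification carries no id at all: nobody is expected to reply.
notification = {
    "jsonrpc": "2.0",
    "method": "notifications/resources/updated",
    "params": {"uri": "file:///watched.txt"},
}

wire = json.dumps(request)            # what actually travels over stdio/SSE
assert json.loads(wire)["id"] == response["id"]
assert "id" not in notification
```

The <code>id</code> is the whole trick: it lets one connection carry many in-flight requests without the client losing track of which answer belongs to which question.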
<p>If you want to read more about JSON-RPC 2.0 in MCP, refer to this: <a target="_blank" href="https://medium.com/@dan.avila7/why-model-context-protocol-uses-json-rpc-64d466112338">https://medium.com/@dan.avila7/why-model-context-protocol-uses-json-rpc-64d466112338</a>.</p>
<h4 id="heading-3-the-transport-layer-how-the-data-moves">3. The Transport Layer (How the data moves)</h4>
<ul>
<li><p><strong>Local (stdio):</strong> Used for tools on your own machine. The Host starts the Server as a "child process", and they whisper to each other through the computer's internal pipes. It's incredibly fast and secure because the data never leaves your hardware.</p>
</li>
<li><p><strong>Remote (SSE):</strong> Used for cloud services. It uses <strong>Server-Sent Events</strong> to keep a "live wire" open between your AI and a remote server, allowing for real-time updates.</p>
</li>
</ul>
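<p>Here's a toy sketch of the local (stdio) transport: a "host" spawns a stand-in "server" as a child process and exchanges one JSON-RPC message with it over the OS pipes. The one-line server is purely illustrative, not a real MCP implementation, but the plumbing is the same shape:</p>

```python
import json
import subprocess
import sys

# Toy stand-in for an MCP server: reads one JSON-RPC line on stdin and
# answers on stdout. A real server loops and speaks the full protocol.
SERVER = (
    "import sys, json\n"
    "req = json.loads(sys.stdin.readline())\n"
    "print(json.dumps({'jsonrpc': '2.0', 'id': req['id'], 'result': 'pong'}))\n"
)

# The host launches the server as a child process (stdio transport):
# messages flow through local pipes and never touch the network.
proc = subprocess.Popen(
    [sys.executable, "-c", SERVER],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
out, _ = proc.communicate(
    json.dumps({"jsonrpc": "2.0", "id": 1, "method": "ping"}) + "\n"
)
reply = json.loads(out)
assert reply["id"] == 1 and reply["result"] == "pong"
```

Notice there is no socket, no TLS, no port: the "secure because local" property falls straight out of using child-process pipes as the wire.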
<p>An MCP server can expose many tools, whose selection and order of execution are decided by the connected LLM. A simple MCP server tool looks like this:</p>
<pre><code class="lang-python"><span class="hljs-meta">@mcp.tool()</span>
<span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">get_attendance</span>(<span class="hljs-params">student_id: str, date: str</span>) -&gt; str:</span>
    <span class="hljs-string">"""Get attendance status for a student on a given date."""</span>
    <span class="hljs-comment"># Implementation details...</span>
</code></pre>
<p>Visit the official docs if you want to learn more about the implementation: <a target="_blank" href="https://modelcontextprotocol.io/docs">https://modelcontextprotocol.io/docs</a>.</p>
<p>Some people do say MCP is overhyped, though. If you look at the registry <a target="_blank" href="https://www.pulsemcp.com/statistics">PulseMCP</a>, there are over <strong>6,600 servers</strong> listed, but the number of active enterprise users is still catching up.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769340243974/762416ca-5232-4a1a-a6c5-ca48aa8f5c17.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-2-agent-skills">2. Agent Skills</h2>
<p>Have you ever noticed, mid vibe-coding session, that the commands or code being executed aren't the latest or the most optimal?</p>
<p>You are in the flow, the AI is hallucinating a library version from the good old days of 2021, and you only realise it when you have to open the docs that were poised in anticipation of your arrival.</p>
<p>Enter Agent Skills, first introduced by Anthropic.</p>
<p>So what are Agent Skills? It's actually a very simple concept. <strong>Agent Skills are a meta-tool for injecting conversation and execution context with progressive disclosure.</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769284598369/ba7bad98-fefe-4bf1-87e4-ac8addbdddc0.png" alt class="image--center mx-auto" /></p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Layer</strong></td><td><strong>Loading State</strong></td><td><strong>Content Focus</strong></td><td><strong>Role</strong></td></tr>
</thead>
<tbody>
<tr>
<td><strong>Level 1: Metadata</strong></td><td>Always On</td><td><code>name</code>, <code>description</code></td><td>The "Discovery" phase; tells the agent a skill exists without clogging memory.</td></tr>
<tr>
<td><strong>Level 2: Instructions</strong></td><td>On Trigger</td><td><a target="_blank" href="http://SKILL.md"><code>SKILL.md</code></a>, workflows</td><td>The "Real-Talk" phase overrides old training data with your latest project docs.</td></tr>
<tr>
<td><strong>Level 3: Resources</strong></td><td>On Demand</td><td><code>scripts/</code>, <a target="_blank" href="http://REFERENCE.md"><code>REFERENCE.md</code></a></td><td>The "Execution" phase provides the actual code/tools only when it's time to act.</td></tr>
</tbody>
</table>
</div><p>By splitting your agent's knowledge into these three layers, you solve the two biggest problems in AI development:</p>
<ul>
<li><p><strong>Token Bloat:</strong> You aren't paying for the agent to "remember" every script in your scripts/ folder until it actually needs to run them.</p>
</li>
<li><p><strong>Context Drift:</strong> Because Level 2 is loaded only when triggered, it acts as a "fresh start" for the agent's logic, ensuring it follows your <strong>current</strong> documentation instead of its <strong>past</strong> training.</p>
</li>
</ul>
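<p>A rough sketch of how those three levels might load, with invented names and trigger logic (Anthropic's real implementation differs; this only illustrates the lazy-loading idea):</p>

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    name: str                       # Level 1: always in context
    description: str                # Level 1: always in context
    skill_md: str                   # Level 2: loaded only on trigger
    resources: dict = field(default_factory=dict)  # Level 3: on demand

    def matches(self, user_request: str) -> bool:
        # Naive trigger: a real agent would reason over the description.
        return self.name in user_request.lower()

skill = Skill(
    name="postgres",
    description="Best practices for writing Postgres queries",
    skill_md="# Always use parameterised queries...",
    resources={"scripts/lint.py": "print('lint!')"},
)

context = [f"{skill.name}: {skill.description}"]    # Level 1 only: cheap
if skill.matches("help me tune this postgres query"):
    context.append(skill.skill_md)                  # Level 2: on trigger
    script = skill.resources["scripts/lint.py"]     # Level 3: when executing

assert len(context) == 2   # metadata + instructions loaded; resources stay lazy
```

Until the trigger fires, the agent pays tokens for one line of metadata, not for every script in the skill folder.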
<p>A good read if you want to learn how to write agent skills for something you’re building or tailor an agent to your needs: <a target="_blank" href="https://leehanchung.github.io/blogs/2025/10/26/claude-skills-deep-dive/">https://leehanchung.github.io/blogs/2025/10/26/claude-skills-deep-dive/</a></p>
<h3 id="heading-some-cool-agent-skills-to-try">Some cool Agent Skills to try:</h3>
<p><a target="_blank" href="https://github.com/resend/email-best-practices">https://github.com/resend/email-best-practices</a></p>
<p><a target="_blank" href="https://github.com/supabase/agent-skills/tree/main/skills/postgres-best-practices">https://github.com/supabase/agent-skills/tree/main/skills/postgres-best-practices</a></p>
<p><a target="_blank" href="https://vercel.com/blog/introducing-react-best-practices">https://vercel.com/blog/introducing-react-best-practices</a></p>
<h2 id="heading-where-mcp-plugs-in-the-capability-layer">Where MCP Plugs In: The Capability Layer</h2>
<p>MCP is the <strong>universal connector</strong> that allows your Level 3 Resources to actually <strong><em>do</em> something</strong>. While your Skill might tell the agent "Update the documentation," MCP is the physical protocol that lets it log into GitHub and push the commit, so you don’t have to move your rusty fingers to do some clicks.</p>
<h2 id="heading-3-unpacking-agent-as-a-service-aaas">3. Unpacking Agent as a Service (AaaS)</h2>
<p>If MCP is the universal hardware port, <strong>Agent-as-a-Service (AaaS)</strong> is the full-service "robot butler" that plugs into it. Traditionally, <strong>Software-as-a-Service (SaaS)</strong> meant subscribing to a tool where you had to do the heavy lifting inside their interface.</p>
<p>With <strong>AaaS</strong>, you aren’t just renting a static tool, you’re renting an <strong>autonomous digital worker</strong> that lives in the cloud and actually performs the tasks for you.</p>
<h3 id="heading-saas-vs-aaas">SaaS vs. AaaS</h3>
<p>The shift is a change from “do-it-yourself” to “done-for-you".</p>
<p><strong>Active vs. Passive:</strong> SaaS is a sophisticated toolbox that is useful, but it stays quiet until you learn how to use it or hire someone. AaaS agents are "always-on" teammates (they don’t ask for rigid job timings ;) ) that monitor your data, plan their own steps, and execute goals without you constantly poking them.</p>
<p>This shift enables new business models.</p>
<h4 id="heading-1-new-revenue-models-paying-for-wins-not-seats"><strong>1. New Revenue Models: Paying for Wins, Not Seats</strong></h4>
<p>Forget the old "flat fee per user" model that SaaS uses. Since agents do specific work, vendors are shifting to <strong>Outcome-Based Pricing</strong>.</p>
<ul>
<li><p><strong>Micro-charges:</strong> Instead of a $100/month flat rate, you might pay <strong>per lead generated</strong> or a small fee for every customer ticket successfully resolved.</p>
</li>
<li><p><strong>Scalability:</strong> You only pay for what you actually consume, making it perfect for startups that need to scale fast without the "psychological barrier" of a massive monthly bill.</p>
</li>
</ul>
<h3 id="heading-2-faster-integrations"><strong>2. Faster Integrations:</strong></h3>
<ul>
<li><p><strong>Efficiency:</strong> Building custom connectors used to take months of developer time. With MCP, agents can "plug and play" into your GitHub, Slack, or Google Drive via a single, universal interface.</p>
</li>
<li><p><strong>Speed:</strong> Companies using MCP report saving roughly <a target="_blank" href="https://www.newline.co/@zaoyang/mcp-in-enterprise-ai-use-cases-and-benefits--7b6e4c0a"><strong>30–40% on their integration timelines</strong></a>.</p>
</li>
</ul>
<h2 id="heading-the-casino-whale-analogy">The Casino Whale Analogy</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769289817715/26edcabd-4908-4b87-8e57-406a466fa8d3.webp" alt class="image--center mx-auto" /></p>
<h3 id="heading-1-the-gambler-the-ai-agent">1. The Gambler (The AI Agent)</h3>
<p>The gambler is the core brain. They have the intent to win and the ability to make decisions based on what they see. But a gambler without a strategy or a seat at the table is just a person in a fancy suit (probably you).</p>
<h3 id="heading-2-the-cheat-sheet-amp-strategy-agent-skills">2. The Cheat Sheet &amp; Strategy (Agent Skills)</h3>
<p>This is our <strong>3-layer architecture</strong>.</p>
<ul>
<li><p><strong>Level 1 (Metadata):</strong> The gambler knows which games are in the room (Poker, Blackjack, Roulette). They haven't yet decided, on mood or gut feeling, which one to play, but they know where to walk.</p>
</li>
<li><p><strong>Level 2 (Instructions):</strong> Suppose they sit at the Poker table. Now, they “activate” their Poker Skill. They recall the specific rules of this game, the "latest and most optimal" strategy that overrides their general knowledge.</p>
</li>
<li><p><strong>Level 3 (Resources):</strong> This is the mental math. The gambler pulls out a specific "resource", a probability chart for a 52-card deck, poker hand combinations, etc., to calculate the exact odds of the current hand.</p>
</li>
</ul>
<h3 id="heading-3-the-keycard-amp-the-chips-mcp">3. The Keycard &amp; The Chips (MCP)</h3>
<p>The gambler has the skill, but they can't play without <strong>access</strong>.</p>
<ul>
<li><p><strong>MCP is the Casino Keycard.</strong> It lets the gambler through the door, identifies them to the pit boss, and connects them to the house's data (the cards being dealt).</p>
</li>
<li><p>Without MCP, the gambler is just playing the game in their head. With MCP, they are plugged into the table's live feed.</p>
</li>
</ul>
<h3 id="heading-4-the-winning-hand-agent-as-a-service">4. The Winning Hand (Agent-as-a-Service)</h3>
<p>In the old world (SaaS), you were the gambler. You had to buy the strategy book and sit at the table yourself. In terms of <strong>AaaS</strong>, you are just the <strong>Backer</strong>. You don't play the game, you simply hire the pro gambler, give them the MCP keycard to your accounts, and they play 24/7. You don't care about the individual card flips, you only care about the <strong>chips stacking up in your vault</strong>.</p>
<h2 id="heading-human-upgrades-does-this-mean-we-lose-our-jobs">Human Upgrades: Does this mean we lose our jobs?</h2>
<p>(Only applicable if you have one right now)</p>
<p>We aren't just losing repetitive roles; we are evolving them so we can ship faster.</p>
<p>A shift is happening, and the ongoing layoffs make it visible.</p>
<p>New roles are emerging, like Agent Supervisors and Prompt/Task Engineers: humans who oversee "fleets" of agents. Instead of doing the data entry, mailing, and logging yourself, you monitor the <strong>agents</strong> for edge cases and step in only when things get complicated.</p>
<p><strong>Strategic Oversight:</strong> Humans move from being "order-takers" to "strategic planners," focusing on high-level goals like <strong>system designs</strong> while the agents actually get their hands dirty (no, we are not calling them slaves).</p>
]]></content:encoded></item><item><title><![CDATA[Bugs, Breakdowns & Breakthroughs: A Guide to Debugging Without Losing Your Mind]]></title><description><![CDATA[You know what's worse than your code not working? Your code almost working.
Try picturing this: It's 2 in the morning. Your function ran perfectly during testing, and now it's crashing on the simplest input. Somewhere in your 200 lines of code, a bug...]]></description><link>https://blog.acmvit.in/debugging</link><guid isPermaLink="true">https://blog.acmvit.in/debugging</guid><category><![CDATA[Bugs and Errors]]></category><category><![CDATA[Developer]]></category><category><![CDATA[coding]]></category><category><![CDATA[debugging]]></category><category><![CDATA[Beginner Developers]]></category><category><![CDATA[tips and tricks]]></category><dc:creator><![CDATA[Aarav Gupta]]></dc:creator><pubDate>Wed, 21 Jan 2026 09:53:21 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1768986986632/8d74f93b-da6e-4748-a369-ac1ccd0fda73.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You know what's worse than your code not working? Your code almost working.</p>
<p>Try picturing this: It's 2 in the morning. Your function ran perfectly during testing, and now it's crashing on the simplest input. Somewhere in your 200 lines of code, a bug is laughing at you. And in all honesty, it has every right to do so.</p>
<p>They’ll say that debugging is like clever detective work, but they always forget to mention that you’ll be spending the first three hours digging for clues with a plastic fork.</p>
<h2 id="heading-why-bugs-are-actually-your-best-teachers-im-not-joking"><strong>Why Bugs Are Actually Your Best Teachers (I’m not joking)</strong></h2>
<p>Bugs feel like personal betrayals, but they're secretly your best teachers. They exist to turn you from someone who types code into someone who understands it. Here's the truth: tutorials teach you syntax. Bugs teach you how to think. Every bug you fix sharpens your intuition about how code behaves in the real world. You start recognizing patterns, like memory leaks or API failures. That's why senior developers seem like wizards: they've spent hours wanting to delete VS Code and change their major.</p>
<h2 id="heading-my-first-hackathon-48-hours-of-rage-embarrassment-and-depression"><strong>My First Hackathon: 48 Hours of Rage, Embarrassment and Depression</strong></h2>
<p>Let me take you back to my first year. It was my first hackathon. My team was building a website, I was handling the frontend, and confidence was dangerously high (I have no idea why).</p>
<p>Hour 6: "This is going great! We'll finish early! (all I had done till then was understand what frontend and backend meant)"</p>
<p>Hour 18: "Why won't this button work? (my website was completely broken. The colours were wrong and the buttons were not buttons, just images)"</p>
<p>Hour 30: "WHO MOVED MY SEMICOLON!!? (believe me, this is the biggest issue)"</p>
<p>If I were to be completely honest, around 70% of the bugs were textbook rookie mistakes. They were either missing semicolons or variables that I named incorrectly, and then I would spend 20 minutes wondering why everything was undefined. My teammates would look at my screen for 2 seconds and point out the error. I feel humiliated to this day.</p>
<p>The copy-paste disasters were the most humbling. I needed a responsive navigation bar, but my vision seemed too complex to build from scratch. So, I asked ChatGPT. It gave me a beautiful block of HTML, CSS, and JavaScript. In my excitement, I copied everything and pasted it directly into our project. For a glorious second, it worked. The new nav bar was there. But then I noticed the rest of my website had completely imploded. The CSS from ChatGPT was at war with my own styles. My page's buttons were suddenly the wrong color, the font sizes were all over the place, and my layout was a mess. The code looked like the real deal, but it was a black box. Because I hadn't written it, I had no idea how to fix the conflicts without breaking the nav bar itself. It just wouldn't work.</p>
<p>By hour 40, I realized I'd spent more time debugging than writing new code. That hackathon, I understood that debugging wasn't only about finding the errors. It's about fixing the issues without creating new ones.</p>
<h2 id="heading-a-field-guide-to-codes-worst-nightmares"><strong>A Field Guide to Code's Worst Nightmares</strong></h2>
<p>Before diving into debugging strategies, let's meet the usual suspects. Understanding bug types is crucial for choosing the right approach.</p>
<p>You'll meet all sorts of pests on your coding journey. Here are the headliners:</p>
<p><strong>Syntax Bugs (The Grammar Police):</strong> A forgotten semicolon or a misplaced bracket, and your code will refuse to run. Your editor will scream at you with angry red squiggles. Listen to it.</p>
<p><strong>Logic Bugs (The Gaslighters):</strong> These bugs make you question your own sanity. Your code runs with zero errors, but it will confidently tell you that 2+2=5. They don't crash your program, but they do make it a compulsive liar.</p>
<p><strong>The Copy-Paste Special (The Trojan Horse):</strong> You paste a 'perfect' solution from the internet, but it was built for a slightly different reality. The moment you try to change anything, the whole thing collapses like a house of cards. This usually happens when you’re too lazy to even look at what ChatGPT gave you.</p>
<h2 id="heading-your-debugging-toolkit-from-panic-to-pro"><strong>Your Debugging Toolkit: From Panic to Pro</strong></h2>
<p>Now that we’ve covered the types of bugs you’ll encounter, let’s talk about the principal strategies for getting rid of them.</p>
<p>Here's what actually helps when you're neck-deep in bugs:</p>
<h3 id="heading-1-print-statements-your-best-friend"><strong>1. Print Statements: Your Best Friend</strong></h3>
<p>Don't be ashamed of spamming console.log(). It may be messy, but it is the simplest way to flush out bugs that you just can’t seem to find, and you’ll remove the lines later anyway. Use print statements to trace function calls and variable changes, or to check values before and after operations.</p>
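<p>The same trick works in any language; here is a minimal Python sketch (the function and numbers are invented for illustration):</p>

```python
def apply_discount(price, discount_pct):
    # Trace the inputs before the operation...
    print(f"apply_discount called with price={price!r}, discount_pct={discount_pct!r}")
    total = price - price * discount_pct // 100
    # ...and the result after, so you can spot exactly where a value goes wrong.
    print(f"apply_discount returning {total!r}")
    return total

apply_discount(100, 10)  # the trace shows the call and the returned 90
```

<p>Two lines of noise per function, but when a value is wrong you will see the exact call where it went sideways.</p>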
<h3 id="heading-2-read-error-messages-like-love-letters"><strong>2. Read Error Messages Like Love Letters</strong></h3>
<p>Error messages aren't trying to hurt you. They're there to help. That scary red text is actually the bug writing you a detailed confession about exactly what went wrong.</p>
<ol>
<li>Stack traces: Read bottom to top to see the sequence of function calls  </li>
<li>TypeError: You're using the wrong data type ("Cannot read property 'name' of undefined" means that you're accessing a property on something that doesn't exist)  </li>
<li>Syntax errors: Point to exactly where your code is malformed</li>
</ol>
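<p>For example, Python's twin of "Cannot read property 'name' of undefined" spells out exactly what is missing (a toy example, with a deliberately failed lookup):</p>

```python
# The JS "Cannot read property 'name' of undefined" has a Python twin:
# AttributeError: 'NoneType' object has no attribute 'name'
def get_username(user):
    return user.name  # blows up if a lookup handed us None instead of a user

try:
    get_username(None)  # simulate a lookup that found nothing
except AttributeError as e:
    # The message names the culprit type and the missing attribute exactly.
    print(f"The bug confessed: {e}")
```

<p>Read it literally: "something was None where I expected an object" tells you to look at whatever produced that value, not at the line that crashed.</p>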
<h3 id="heading-3-the-rubber-duck-method"><strong>3. The Rubber Duck Method</strong></h3>
<p>Explain your code to somebody line by line, or just talk through it out loud to yourself. You’ll catch your mistake mid-sentence. It may be embarrassing, but it works.</p>
<h3 id="heading-4-divide-and-conquer"><strong>4. Divide and Conquer</strong></h3>
<p>Comment out half your code. Does the bug still happen? Great, it's in the other half. Keep dividing until you find the exact line causing trouble.<br />Pro tip: Create a minimal reproduction case—the smallest amount of code that still shows the bug.</p>
<h3 id="heading-5-the-sacred-rule-of-copy-paste"><strong>5. The Sacred Rule of Copy-Paste</strong></h3>
<p>Read the code if you're copying it from anywhere (Stack Overflow, ChatGPT, that GitHub repo), and understand what it does. Make sure that it fits your specific problem. Blindly pasting code is like following GPS directions without looking at the road.<br />Test it in isolation before integrating it, and check for stuff like dependencies and version requirements. Make sure to rename the variables to match your code.</p>
<h3 id="heading-6-version-control-is-your-time-machine"><strong>6. Version Control is Your Time Machine</strong></h3>
<p>Git isn't just for team projects. It's for the moment when you break something that was working perfectly an hour ago. <code>git checkout</code> can literally undo your mistakes and bring you back to when life was simpler. Commit frequently with clear messages, and use branches for risky experiments.</p>
<h2 id="heading-before-you-go-a-reality-check"><strong>Before You Go: A Reality Check</strong></h2>
<p>Bugs will definitely frustrate you. They'll make you consider switching to a branch where the biggest problem is understanding the author’s feelings.</p>
<p>But, the satisfaction of finally understanding why something isn't working is addictive.</p>
<p>So the next time you encounter a bug, take a deep breath, grab some snacks and remember:</p>
<p>Every expert was once a beginner who refused to give up on a bug.</p>
<p>Happy debugging, and may your console.logs be ever informative!</p>
<p>P.S. If your code works perfectly the first time, you either wrote Hello World or forgot to actually run it :)</p>
]]></content:encoded></item><item><title><![CDATA[flag{H3LL0_FR13ND}]]></title><description><![CDATA[When movies say “hacker,” they usually mean a mysterious person in a hoodie typing furiously on a glowing keyboard while symbols float in the air. Well, sorry to break it to you, that’s very far from reality. It often involves running commands you ba...]]></description><link>https://blog.acmvit.in/ctf101</link><guid isPermaLink="true">https://blog.acmvit.in/ctf101</guid><category><![CDATA[CTF]]></category><category><![CDATA[cybersecurity]]></category><category><![CDATA[hacking]]></category><category><![CDATA[hacker]]></category><category><![CDATA[Beginner Developers]]></category><category><![CDATA[beginner]]></category><category><![CDATA[beginnersguide]]></category><category><![CDATA[guide]]></category><category><![CDATA[Tutorial]]></category><dc:creator><![CDATA[Harshit Narang]]></dc:creator><pubDate>Wed, 14 Jan 2026 15:30:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1768404378029/fc7ab68a-99dd-4e81-b478-4b38f15dcc4b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When movies say “hacker,” they usually mean a mysterious person in a hoodie typing furiously on a glowing keyboard while symbols float in the air. Well, sorry to break it to you, that’s very far from reality. It often involves running commands you barely understand and Googling error messages for the fifth time in a row.</p>
<p>To add some excitement to the cybersecurity community, there is something called <strong>Capture the Flag</strong>, or <strong>CTF</strong>.</p>
<p>No, not the old-school playground game. This version is digital, and you can think of it as a collection of puzzles and obstacle courses built for your computer. The goal is simple: find a hidden piece of text or string called a <em>flag</em>. It usually looks something like this:</p>
<pre><code class="lang-plaintext">flag{s0m3th1ng_clev3r}
</code></pre>
<h2 id="heading-why-ctfs">Why CTFs</h2>
<p>CTFs are one of the few things that give you a legal opportunity to hack into something, test your skills, and have fun, since you are competing with other hackers at the same time.</p>
<p>They help you understand how systems fail, and once you see how things break, you start to understand how to build them better.</p>
<p>CTFs are one of the best ways to build and sharpen cybersecurity skills, whether you are a complete beginner in the field or already a seasoned veteran.</p>
<p>They are usually of two types:</p>
<h2 id="heading-jeopardy-style">Jeopardy Style</h2>
<p>This is the most common format.</p>
<p>You are given a board of challenges divided into categories like:</p>
<ul>
<li><p>Web Exploitation</p>
</li>
<li><p>Cryptography</p>
</li>
<li><p>Forensics</p>
</li>
<li><p>Reverse Engineering</p>
</li>
<li><p>OSINT</p>
</li>
<li><p>Pwn / Binary Exploitation</p>
</li>
</ul>
<p>These categories are present in almost every CTF, and newer categories like <em>AI</em>, <em>Web3</em>, and <em>hardware</em> are getting popular nowadays.</p>
<p>Each challenge is worth points based on difficulty. You pick one, solve it, submit the flag, and get the points. Jeopardy-style CTFs are the perfect place for beginners to get into cybersecurity.</p>
<h2 id="heading-attack-defense-style">Attack Defense Style</h2>
<p>This is the advanced version.</p>
<p>Teams are given servers to defend while simultaneously attacking other teams’ servers. It is fast-paced, competitive, and chaotic.</p>
<p>For now, I will focus on Jeopardy-style CTFs because that is where the real learning happens.</p>
<h2 id="heading-a-tour-of-common-ctf-challenge-categories">A Tour of Common CTF Challenge Categories</h2>
<p>Here is what you will usually find when browsing a CTF challenge board.</p>
<h2 id="heading-web-exploitation">Web Exploitation</h2>
<p>In Web Exploitation challenges, you are usually given a vulnerable or broken website, and you are expected to find the flag.</p>
<p>Some of the things to look for in web challenges include:</p>
<ol>
<li><p>The source code of the website, using the <code>view-source:</code> prefix in the address bar.</p>
</li>
<li><p>Network requests using proxy tools like Burp Suite or Caido</p>
</li>
<li><p>Suspicious things in the developer tools section like unwanted cookies, console logs, etc.</p>
</li>
<li><p>Misconfigured APIs</p>
</li>
<li><p>Understanding various web frameworks helps you learn details specific to each framework. For example, FastAPI provides a <code>/docs</code> endpoint, and similarly, other frameworks have their own unique features and conventions.</p>
</li>
</ol>
<p>The most common vulnerabilities found in web-based challenges are SQL injection, Server-Side Request Forgery (SSRF), XSS (Cross-Site Scripting), and IDOR (Insecure Direct Object Reference). These are all part of OWASP’s Top 10 vulnerabilities.</p>
<p>If you are interested in learning how these vulnerabilities actually work, PortSwigger Academy is the best resource to learn web-based vulnerabilities.</p>
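<p>To see why SQL injection tops that list, here is a self-contained sketch (the table, data, and flag are all made up) contrasting a string-built query with a parameterized one:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'flag{n0t_s0_s3cret}')")

# Vulnerable: user input is pasted straight into the SQL string.
def lookup_vulnerable(username):
    query = f"SELECT secret FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

# Safe: the driver treats the input as data, never as SQL.
def lookup_safe(username):
    return conn.execute(
        "SELECT secret FROM users WHERE username = ?", (username,)
    ).fetchall()

payload = "' OR '1'='1"            # classic injection payload
print(lookup_vulnerable(payload))  # dumps every row: the WHERE clause is always true
print(lookup_safe(payload))        # [] -- no user is literally named "' OR '1'='1"
```

<p>The payload never "hacks" anything clever; it just rewrites the WHERE clause so it matches every row.</p>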
<h2 id="heading-cryptography">Cryptography</h2>
<p>This category focuses on encoded or encrypted data. Most of the time, you will be given a script and a file that is encrypted or encoded using that script. You have to understand what’s actually going on in the script and somehow find a way to decrypt or decode whatever is in the file.</p>
<p>I know this doesn’t sound that difficult, but trust me, this is one of the most challenging categories in Jeopardy-style CTFs. If you’re good at Python and encoding-related concepts, this will be your favourite category, because most of the scripts are written in Python.</p>
<p>Other times, it is about recognizing common encodings like Base64 or hex and knowing how to decode them.</p>
<p><strong>How to start:</strong></p>
<ol>
<li><p>Use tools like CyberChef and dcode.fr, as these are some of the best online tools that support encoding and decoding for almost every cipher in existence.</p>
</li>
<li><p>If the challenge has its own encryption script, try to understand it and reverse engineer the logic so that you can make your own decryption script.</p>
</li>
<li><p>Sometimes the challenges have a script running on an nc (netcat) server, and it gives out clues and keys for you to use and deduce the encryption to decrypt the encrypted text.</p>
</li>
</ol>
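<p>Before reaching for CyberChef, you can triage the common encodings in a few lines of Python (the sample ciphertexts below are invented for illustration):</p>

```python
import base64
import binascii

def try_decodings(blob: str):
    """Attempt the usual suspects and report whichever ones succeed."""
    results = {}
    for name, decoder in [
        ("base64", lambda s: base64.b64decode(s).decode()),
        ("hex", lambda s: bytes.fromhex(s).decode()),
    ]:
        try:
            results[name] = decoder(blob)
        except (binascii.Error, ValueError, UnicodeDecodeError):
            pass  # not this encoding, move on to the next
    return results

print(try_decodings("ZmxhZ3tiYXNlNjR9"))    # base64 of "flag{base64}"
print(try_decodings("666c61677b6865787d"))  # hex of "flag{hex}"
```

<p>If nothing succeeds, that is a hint the data is actually encrypted rather than just encoded, and you should go read the challenge script.</p>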
<h2 id="heading-forensics">Forensics</h2>
<p>This is one of the most exciting and wild categories in Jeopardy CTFs.</p>
<p>You might be given a network capture, an image file, a memory dump, a corrupted machine, or sometimes, if your luck is bad, a file with a file extension so random that you have to Google it for five minutes.</p>
<p>In these challenges, you have to act like a digital detective and find the flag through lots and lots of data.</p>
<p><strong>How to start:</strong></p>
<ol>
<li><p>You can try your luck by running <code>strings</code> or <code>grep</code>, but usually flags are not that easy to find.</p>
</li>
<li><p>If you have a file format that you are not familiar with, make sure you know what it is and how to work with it.</p>
</li>
<li><p>These challenges have a sub-category called <em>Steganography</em>, which basically means hiding things in images, audio files, videos, GIFs, etc. This website has a checklist with all the tools you should try if you encounter a steg challenge.</p>
</li>
<li><p>For network capture files (pcap or pcapng), Wireshark and tshark are your go-to tools to extract useful information.</p>
</li>
<li><p>For challenges involving memory dumps or machine dumps, you need patience and the ability to look for different clues while searching for the flag.</p>
</li>
</ol>
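<p>The idea behind <code>strings</code> is simple enough to sketch yourself: pull runs of printable ASCII out of raw bytes (the "capture" below is fabricated):</p>

```python
import re

def extract_strings(data: bytes, min_len: int = 4):
    """Rough equivalent of the Unix `strings` tool: find runs of printable ASCII."""
    pattern = b"[ -~]{%d,}" % min_len  # 0x20..0x7e, at least min_len in a row
    return [m.decode() for m in re.findall(pattern, data)]

# Fabricated "evidence": binary noise with a flag buried inside.
blob = b"\x00\x89PNG\x1a\x00junkflag{hidden_in_plain_bytes}\xff\xfe\x00more"
for s in extract_strings(blob):
    print(s)
```

<p>Raising <code>min_len</code> cuts down the noise, which matters when the dump is hundreds of megabytes instead of a toy blob.</p>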
<h2 id="heading-reverse-engineering">Reverse Engineering</h2>
<p>Oh man, personally, I find these very difficult because you are literally given a compiled program with no source code and asked to figure out what it does.</p>
<p>The solver is expected to examine binaries, read disassembled code, and locate hidden logic or hardcoded strings in an executable.</p>
<p><strong>How to start:</strong></p>
<ol>
<li><p>Run <code>strings</code> to find hardcoded strings that may or may not act as clues for the flag.</p>
</li>
<li><p>Use tools like Ghidra or IDA for decompiling, and gdb for debugging and runtime analysis. Decompilers are tools that turn executables into readable code.</p>
</li>
<li><p>Run the program and try to break it or make it do things it’s not supposed to.</p>
</li>
</ol>
<h2 id="heading-osint">OSINT</h2>
<p>One of my favourites is OSINT, which refers to Open-Source Intelligence. In these types of challenges, the information you need to get the flag is public and accessible to everyone on the internet.</p>
<p>This usually involves searching usernames, analyzing images, checking public records, and connecting small clues scattered across the internet. Common challenges include identifying locations from photos, tracking online identities, and finding leaked or archived information.</p>
<p><strong>How to start:</strong></p>
<ol>
<li><p>Learn how to search effectively using techniques like dorking and reverse image searches.</p>
</li>
<li><p>Tools like whois, Wayback Machine, and Sherlock are extremely useful.</p>
</li>
</ol>
<h2 id="heading-binary-exploitation">Binary Exploitation</h2>
<p>These are often referred to as pwn (short for “own,” meaning to gain control of a program) challenges.</p>
<p>This category involves exploiting vulnerable programs, usually written in C, to control execution or retrieve a flag.</p>
<p><strong>How to start:</strong></p>
<ol>
<li><p>Understand how the stack, heap, pointers, and buffers work. Most pwn challenges are based on simple mistakes in C programs.</p>
</li>
<li><p>Learn to use tools like <code>gdb</code> to step through a program, inspect memory, and see where it crashes.</p>
</li>
<li><p>Start with beginner challenges that run on your own machine before touching remote services. This helps you understand what your input is doing.</p>
</li>
<li><p>Tools like pwntools make interacting with binaries and remote servers much easier once you understand the fundamentals.</p>
</li>
</ol>
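<p>As a taste of what pwntools automates: its p64() is just little-endian integer packing. Here is a dependency-free sketch of building a classic overflow payload (the buffer size and target address are invented for illustration):</p>

```python
import struct

# Hypothetical layout: a 64-byte buffer, after which we want to overwrite a
# saved return address with the address of a win() function found in Ghidra.
BUF_SIZE = 64
WIN_ADDR = 0x401196  # made-up address; in practice, read it from a disassembler

def p64(value: int) -> bytes:
    """Little-endian 8-byte pack: the same thing pwntools' p64() does."""
    return struct.pack("<Q", value)

# Fill the buffer with padding, then place the address where the saved
# return address would sit on a 64-bit stack.
payload = b"A" * BUF_SIZE + p64(WIN_ADDR)
print(len(payload), payload[-8:].hex())
```

<p>Real stacks usually have a saved base pointer (and alignment quirks) between the buffer and the return address, which is exactly what stepping through the crash in <code>gdb</code> teaches you.</p>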
<h2 id="heading-how-to-start-without-losing-your-mind">How to Start Without Losing Your Mind</h2>
<ol>
<li><p><strong>Get good at searching</strong></p>
<p> Knowing how to search properly is a huge part of CTFs. Looking up error messages, tools, or random things you do not understand is completely normal. Reading write-ups after you have given a challenge an honest try is not cheating, it is how you realize what you missed. Writing your own write-ups helps even more because putting your thoughts into words makes the learning actually stick.</p>
<p>  Here’s the website where we post write-ups of the CTFs we participate in: https://z0d1ak.vercel.app/</p>
</li>
<li><p><strong>Do not do it alone</strong></p>
<p> CTFs are much more enjoyable with other people. Even if you compete solo, communities like CTFtime and Discord servers are full of people sharing hints, tools, and encouragement.</p>
</li>
<li><p><strong>Start with beginner-friendly platforms</strong></p>
<p> Some CTF platforms are built specifically for learning. Some of the best platforms to get started with are:</p>
<ul>
<li><p>picoCTF</p>
</li>
<li><p>HackMyVM</p>
</li>
<li><p>CTFlearn</p>
</li>
<li><p>OverTheWire</p>
</li>
</ul>
</li>
<li><p><strong>Set up a comfortable environment</strong></p>
<p>A Linux VM makes life easier since most development and security tools work best on Linux. Kali Linux is especially helpful because it comes with many tools preinstalled, so you can start experimenting right away. Using it in a VM gives you a safe space to break things, learn from mistakes, and get comfortable with the terminal and real-world server setups.</p>
</li>
</ol>
<ol start="5">
<li><p><strong>Trust that the flag is there</strong></p>
<p>CTF challenges are designed so everything you need is included. Read the challenge description carefully, look through the files, and do not rush past small details.</p>
</li>
</ol>
<h2 id="heading-final-thoughts">Final Thoughts</h2>
<p>CTFs are giant playgrounds built by curious people for curious people.</p>
<p>You will get stuck. A lot. You will feel confused, frustrated, and convinced that everyone else knows something you do not. At some point, you will probably think you are just bad at this.</p>
<p>Then something clicks.</p>
<p>You notice a small detail you ignored earlier. You try one more thing. A flag appears. Suddenly, all that confusion turns into excitement, and for a moment, you feel unstoppable.</p>
<p>This eureka moment is why people keep coming back.</p>
<p>Do not give up when you get stuck. Keep reading write-ups after challenges end and learn how others solved them. Every write-up you read adds another tool to your mental toolkit. Even when you solve nothing, you are still learning.</p>
<p>Most importantly, keep participating. CTFs happen almost every weekend, and there is always another chance to try again. You can find upcoming competitions on CTFtime at https://ctftime.org/event/list/upcoming.</p>
<p>So pick a beginner CTF, open a terminal, and start poking at things. The worst thing that can happen is that you learn something new. The best thing that can happen is that you find a flag :)</p>
<p><strong>Happy hacking.</strong></p>
]]></content:encoded></item><item><title><![CDATA[Backs Against The Wall - ACM-VIT's Biggest (almost) Failure]]></title><description><![CDATA[You don’t forget some stuff. You just don’t.
Especially if the “stuff” we’re talking about is umpteen discord pings of 800 event participants saying things along the lines of:

If you understood what’s going on, you’re one of the 50 odd people who li...]]></description><link>https://blog.acmvit.in/the-cryptic-hunt-2024-blog</link><guid isPermaLink="true">https://blog.acmvit.in/the-cryptic-hunt-2024-blog</guid><category><![CDATA[engineering]]></category><category><![CDATA[events]]></category><category><![CDATA[College]]></category><category><![CDATA[failure]]></category><category><![CDATA[technology]]></category><dc:creator><![CDATA[Manan Shah]]></dc:creator><pubDate>Wed, 24 Sep 2025 17:12:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1758733796948/fbaf69e1-897c-4176-898b-97a508a2e0f4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You don’t forget some stuff. You just don’t.</p>
<p>Especially if the “stuff” we’re talking about is umpteen discord pings of 800 event participants saying things along the lines of:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758733290077/59584b03-af8f-4357-9e10-4d4f981b50ae.jpeg" alt class="image--center mx-auto" /></p>
<p>If you understood what’s going on, you’re one of the 50 odd people who lived through the trauma of 27th September, 2024 - day 0 of ACM-VIT’s flagship event <strong>Cryptic Hunt 3.0.</strong></p>
<p>If you don’t understand, I envy you - but allow me to give you some context (I’ll keep it to the point, I promise). Vellore Institute of Technology, one of India’s very reputed engineering universities, holds its annual tech fest in the months of September/October, called <em>graVITas</em>. Many clubs and chapters organise events during it - hackathons, ideathons, various tech competitions - but a select few organise the “premium events”, premium enough to be mentioned in The Hindu newspaper. Our chapter, ACM-VIT, organises one of them - our legacy scavenger hunt which we call “Cryptic Hunt”.</p>
<h2 id="heading-what-is-cryptic-hunt">What is Cryptic Hunt?</h2>
<p>800 participants. 36 hours. A hunt across campus.</p>
<p>In simple words - it’s a scavenger hunt across the huge campus we know as VIT, but with a technological twist. We developed our own mobile app (from scratch - and yes, the tech details will also follow). Our dedicated research team builds a set of questions related to cryptography and network security. Upon answering these questions, you’re hinted to a particular location on campus - where we have discreetly pasted QR codes. Scan the correct code for a question in our app, and voila - you get points and move up the leaderboard.</p>
<p>Pretty simple, right?</p>
<p>Let’s have a look at a question from last year:</p>
<table><tbody><tr><td><p>In the heart of a vibrant kingdom called Diversia, where fiery Blazers clashed with serene Tranquils, chaos reigned as arguments over the annual festival escalated into fierce brawls. Each side, convinced of their own superiority, filled the air with shouts and despair. But when a wise old woman shared the tale of two mountains--one jagged and bold, the other smooth and gentle--their hearts began to shift. Realizing that their differences were not flaws but essential parts of a beautiful whole, they merged their visions into an unforgettable celebration, where raucous laughter intertwined with soothing melodies, revealing that the true magic of Diversia lay in the harmony of its polar opposites.<br />132, 109, 91, 83, 57, 49, 20, 11</p></td></tr></tbody></table>

<p>The solution for this, along with all our other questions is available on our <a target="_blank" href="https://github.com/ACM-VIT/Cryptic-Hunt-Solutions-2024/blob/main/Level%201/numericalMystery/README.md">GitHub repository</a> - we make our solutions public once any event ends ;)</p>
<p>But hey, that’s all the context I can give you - now begins the real story. I’ll split it up into sections, each being technical or non-technical - if you’re here for the drama, <em>for the tea</em>, feel free to skip the tech part. However, if you’re a nerd like almost all of us are, the tech we’ve implemented is quite cool - do check it out.</p>
<p>The grand tale of Cryptic Hunt 2024 shall be divided into the following chapters:</p>
<ol>
<li><p>The Planning [non-tech]</p>
</li>
<li><p>The System Design [tech]</p>
</li>
<li><p>The Implementation [tech]</p>
</li>
<li><p>The D-Day [non-tech]</p>
</li>
<li><p>The Downfall [non-tech]</p>
</li>
<li><p>The Resilience [tech (mostly?)]</p>
</li>
<li><p>The Post-Mortem [non-tech]</p>
</li>
</ol>
<h2 id="heading-the-planning-non-tech">The Planning [non-tech]</h2>
<p>It all started off during the summer break - just a group of 15-odd kids sitting on Google Meet calls, whiling away the odd hours at night. We were the new senior core of ACM-VIT, and at that time we thought we were the most important people in the world - “make ACM great again” is pretty much what our slogan was. Reminds you of a certain personality in world politics, maybe? Yeah, there were red flags quite early on, I guess.</p>
<p>However, the first order of action - graVITas was approaching, and we needed to make sure Cryptic Hunt was a grand, grand success. We needed to show everyone that we’re the real deal, the real stuff. Who was watching? No one, really, but hey - let some kids be happy thinking they matter.</p>
<p>Utopian world. Don’t you wish we lived in one? A world where everything went according to plan, everything was ideal. But nah, utopia is only achievable as an album, not in implementation. Yet, everyone plans an ideal situation. An ideal approach. An ideal timeline, maybe.</p>
<p>So did we.</p>
<p>And when I look at it now, 12 months later, I can’t help but laugh. In all fairness, it isn’t a bad schedule in the least - extremely achievable. It’s just funny how absolutely NOTHING went as per schedule - the “plan” going absolutely nuts from the very beginning.</p>
<p>Here it is, the “ideal plan” we had in mind (as of July 2024):</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Sr No</strong></td><td><strong>Task</strong></td><td><strong>Start Date</strong></td><td><strong>Deadline</strong></td></tr>
</thead>
<tbody>
<tr>
<td>1</td><td>Introducing CH officially to our Junior Core</td><td>Jul 24, 2024</td><td></td></tr>
<tr>
<td>2</td><td>Design starts work</td><td>Jul 28, 2024</td><td></td></tr>
<tr>
<td>3</td><td>Sponsorship brochure completion (design)</td><td></td><td>Aug 1, 2024</td></tr>
<tr>
<td>4</td><td>App designs finalized</td><td></td><td>Aug 10, 2024</td></tr>
<tr>
<td>5</td><td>Tech starts work</td><td>Aug 11, 2024</td><td></td></tr>
<tr>
<td>6</td><td>Frontend and backend individually completed</td><td></td><td>Aug 24, 2024</td></tr>
<tr>
<td>7</td><td>Begin FE + BE integration</td><td>Sep 1, 2024</td><td></td></tr>
<tr>
<td>8</td><td>Complete integration + testing</td><td></td><td>Sep 15, 2024</td></tr>
<tr>
<td>9</td><td>Purchase apple developer license</td><td>Sep 15, 2024</td><td></td></tr>
<tr>
<td>10</td><td>Push updates to play store + full app to app store</td><td>Sep 16, 2024</td><td></td></tr>
</tbody>
</table>
</div><p>How much of this actually went according to schedule? Good question. Good, <em>good</em> question.</p>
<p>There’s a lot of reasons why we weren’t able to achieve a lot of what we set out for, and since I’m being completely transparent out here, I won’t hold anything back. It wasn’t just a lack of skill, it was lack of effort, a whole lot of politics, constant ego clashes and, well, some bad luck as well.</p>
<p>Coming back to the plan - the tech part was quite simple. Everyone takes up certain tasks, or “issues”, codes them and raises a pull request from their fork to our official GitHub repository. We had the tech split into 2 - backend and frontend (app) - and each division had one <em>final boss</em>, the best in each field we had in our senior core, who were supposed to give the final approval before a piece of code was merged.</p>
<p>This was probably our first mistake. Not the tech division, don’t get me wrong, but the appointed <em>final bosses</em>. Because what ensued was a massive cold war between frontend and backend, one on such a scale that it threatened the very existence of our entire event. Okay, maybe that’s a bit of an exaggeration, but it was pretty bad to say the least - because when you’re way behind deadlines and find out something major is broken, the last thing you want is to hear:</p>
<ul>
<li><p>Mr. Frontend: <em>“I’ve made sure the app is perfect. Whatever issue is there is in the backend, tell them to figure out what the hell is wrong with their sh*t.”</em></p>
</li>
<li><p>Mr. Backend: <em>“The backend is made foolproof, I can give you my 100% guarantee on that. Ask frontend to fix their stupid app, not me.”</em></p>
</li>
</ul>
<p>Oh boy.</p>
<h2 id="heading-the-system-design-tech">The System Design [tech]</h2>
<p>Alright, time for the nerdy stuff. If you're still reading, you're either genuinely interested in our tech stack or you're procrastinating on something else (no judgment, I've been there).</p>
<p>We built our backend using Go Fiber - fast, lightweight, and honestly just fun to work with. For our database, we went with CockroachDB, and before you ask - yes, the name is as weird as it sounds, but hear me out. This thing has distributed SQL capabilities, maintains high consistency, and scales horizontally like a dream. Plus, it survived our chaos (literally), so it earned some respect.</p>
<p>Everything was containerized and deployed on Google Cloud Run. Why? Because we're lazy developers who don't want to manage servers, obviously. Auto-scaling, load balancing, pay-per-use - it was supposed to be our silver bullet. Spoiler alert: it wasn't, but we'll get to that trainwreck in a bit.</p>
<p>Authentication was handled through Firebase with Google Sign-In as the primary method. Static assets like question images and media were stored in Google Cloud Storage buckets - nothing fancy there, just good old reliable cloud storage.</p>
<p>The mobile app was built with Expo React Native. One codebase for both iOS and Android - the dream of every developer who's tired of maintaining separate native apps. We cached question data locally for better performance, and used Firebase Cloud Messaging (FCM) for cache invalidation. Whenever new questions went live, FCM notifications would trigger cache clears to make sure everyone got the latest data. In theory, this was brilliant - fast performance with real-time updates. In practice... well, let's just say FCM and we had some trust issues.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758733369236/703fdf33-8e91-4f07-8838-c60c890d722e.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-the-implementation-tech">The Implementation [tech]</h2>
<p>The journey started innocently enough. Our design team huddled together in what I can only describe as the most intense brainstorming session of our lives. After cycling through themes that ranged from "cyberpunk dystopia" to "medieval fantasy" (don't ask), we settled on Minecraft. Yes, Minecraft. Look, it worked, okay? The blocky aesthetic was perfect for a tech event, and everyone immediately got the vibe.</p>
<p>With the creative direction locked in, we split into two teams - backend and app developers. Classic divide and conquer strategy. The backend team, bless their souls, documented every single API endpoint they planned to build. This wasn't just good practice; it was survival. The app team needed to know exactly what they'd be working with, and nobody had time for "oh wait, I changed that endpoint yesterday" surprises.</p>
<p>Both teams worked in parallel, which sounds organized and professional, but was actually just organized chaos. Daily standups that went on for an hour, messages at 2 AM asking "did you push the auth fix?", and the occasional panic when someone realized their local changes would break everything else.</p>
<p>Once the core features were done, both teams came together for the anti-cheat implementation. This was actually pretty cool - making sure both sides of the system worked in perfect harmony to catch cheaters. Spoiler: it worked... mostly.</p>
<p><em>Deep breath</em> Alright, storytime. And not the fun kind.</p>
<p><strong>Mobile App Challenges</strong></p>
<p>Let's start with the app stores, shall we? You know how they say "it works on my machine"? Well, it worked on our phones, but the app stores? They had opinions. Strong opinions.</p>
<p><em>Sign-in inconsistencies</em> - We kept getting rejections from both Google Play and Apple's App Store because our sign-in flow was apparently "inconsistent". What does that even mean? We still don't know, but after the 4th rejection, you just start making random changes and hoping for the best.</p>
<p><em>Permission handling issues</em> - iOS is particularly picky about why you want camera and location permissions. Our initial reason strings were apparently too vague. "We need your camera" wasn't good enough - they wanted to know WHY we need it, HOW we'll use it, and probably our grandmother's maiden name too. Fair enough, I guess.</p>
<p><em>Apple's mandatory sign-in requirement</em> - Here's a fun one. Apple decided that if you offer Google Sign-In, you MUST also offer "Sign in with Apple". But our university ecosystem was built around Google. Everyone had Google accounts. Nobody wanted Apple sign-in. But Apple didn't care. So we had to implement it anyway, for exactly zero users who wanted it.</p>
<p><em>Legacy Firebase client issues</em> - This one almost killed us. We were reusing older Firebase authentication clients that were tied to outdated SHA keys. Everything worked fine in testing, but on the actual event day? Complete authentication failure. We spent the entire final day scrambling to fix this while 800 participants waited. Not our finest moment.</p>
<p><strong>Backend Challenges</strong></p>
<p>If you thought the app issues were bad, wait till you hear about the backend disasters.</p>
<p><em>Unreliable FCM delivery</em> - FCM doesn't guarantee 100% delivery. Fair enough, networks can be unreliable. But our system was designed to retry failed sends automatically. So when FCM inevitably failed to deliver some notifications, it triggered retries, which prolonged the table locks, which made everything worse. It was a beautiful cascade of failure.</p>
<p><em>Escalating infrastructure costs</em> - And here's the cherry on top of this disaster sundae. In our desperation to keep the system running, we started throwing money at the problem. Increased Cloud Run instance limits. Scaled up everything. "Just make it work, we'll worry about the bill later."</p>
<p>The bill came later. ₹36,000 later, to be exact. That's about $430 for our international readers, which might not sound like much, but remember - we're college students organizing a campus event. That was more than our entire budget for prizes, food, and everything else combined. Our treasurer nearly had a heart attack when they saw the Google Cloud invoice.</p>
<p>But hey, at least the system was running, right? Right?</p>
<h2 id="heading-the-d-day-non-tech">The D-Day [non-tech]</h2>
<p>It was finally here. The day we had been working so hard for.</p>
<p>Day 0 of Cryptic Hunt 2024 was here.</p>
<p>The app was live on the Play Store, and iOS users could access it using TestFlight (we spent 10k dude, had to do <em>something</em>). The backend was live, the database was up, the admin app was in place.</p>
<p>Oh wait - the admin app. We <em>thought</em> it was in place. When I entered the auditorium, there was a frenzy. I asked what’s up, and that's when I found out - the admin app simply wasn’t ready. “It’s okay, calm down, it’s just 800 people waiting to play your game. That’s not much. Calm down.” - that’s what I was telling myself constantly (horrible advice, in hindsight). The admin app was responsible for linking all questions to the correct QR codes, and all our QR pasting teams were ready to get the codes up across campus, but they could do nothing without the admin app. So we put them on standby, and got to work.</p>
<p>First things first, I needed to get an update on the status of the admin app. I called up my friend who was working on it, in quite audible panic. <em>“It’s more or less done, I have all my changes on local - gonna push it in 5 minutes”</em>, he said - and true to his word, he pushed all his changes to GitHub (after 55 minutes). But hey, better late than never - we had a working admin app. QR team got into action and put up the codes ASAP, while on-site management handled the crowd and got them seated for the opening ceremony.</p>
<p>The opening ceremony started, and was the only thing that happened without any major fumbles. Towards the end, they showed the links for the participants to download our official Cryptic Hunt App and register themselves and their teams on it.</p>
<p>And that’s when the downfall began.</p>
<h2 id="heading-the-downfall-non-tech">The Downfall [non-tech]</h2>
<p>They say <em>“You know the good part about hitting rock bottom? There is only one way to go, and that’s up”</em>. I wanna know who’s the “they” who say this - because boy, oh boy, does the “rock go bottom-er”.</p>
<p><em>- “Dude, it’s crashing.”</em><br />“Huh?”<br /><em>- “The app, it’s crashing.”</em><br />“Nah, must be some net issue. Just tell that guy to restart and all will be good.”<br /><em>- “Dude. It’s crashing for everyone. NOBODY can log in.”</em></p>
<p>That’s all the conversation was between me and my friend who was handling management. At first, I thought “eh, can’t always be completely smooth, must be the extra load”. Oh, innocent little boy - you had no idea how wrong you were.</p>
<p>We opened GCP logs - and what greeted us was a horrendous sight. A bloodbath of 500-coded error messages, with metrics worse than the great collapse of the financial market in 2008. What? How? Why? No idea - but we still weren’t very demotivated. We’ve debugged through stuff before, no biggie - so we got to work doing some root cause analysis.</p>
<p>Here’s when management stepped up. They handled hundreds of students in the auditorium, each shouting their problem out loud. They calmed the ruckus. They settled the tension. While we sat and figured out why the heck would our onboarding section keep crashing, management were on their feet - collecting participant names and emails, their teams, their teammates’ emails - everything on pen and paper - just so that we could immediately cut to action once tech figures stuff out.</p>
<p>But tech - were we figuring stuff out? Uhhhhhh, not really.</p>
<p>We were completely clueless as to what was causing the issue. What we had pinpointed was this: all the requests to create/join a team were timing out. Why so? Because the users table in our database was locked - i.e. it couldn’t be written to while it was undergoing some sort of change. However, rather than being locked for an infinitesimal amount of time (as it ideally would be), it seemed to be permanently locked, so all create/join team requests just waited in queue for the locked table, and eventually timed out.</p>
<p>But what was causing this issue? We had no flipping clue. It’s not often that I’ve felt this helpless. Usually there is someone who knows what is going on when something messes up. Someone who has been in a niche situation like that. Someone who could be our stack overflow. But that day, everyone was equally clueless. The board, the senior core, the junior core - everyone just had a massive question mark on their faces.</p>
<p>We turned to AI. We did something only the most desperate (or the most stupid) devs do - fed entire files of code into ChatGPT and asked it - “oh lord, please tell us why our users table permalocked”. And GPT responded. We were shocked - how did we miss something this trivial? How? So many tech minds, and all of us missed something this small? It was unbelievable. We never added transaction rollbacks to our database querying code. That’s it. No <code>tx.Rollback()</code> call. That’s literally it. So, I got to action. Instantly found the few functions with a missing rollback, added it, and pushed. Mr. Backend hit redeploy on Google Cloud Console - and we saw the code reach prod. I opened my phone, opened the CH app, and tried to join a team. It worked. 100 students wearing ACM t-shirts across the auditorium breathed a very audible sigh of relief. Management made the announcement to the participants - we’re good to go guys, let’s get this event started.</p>
<p>And it crashed. Again. Bloody GPT, knew we shouldn’t have relied on it.</p>
<p>Oh my God. The auditorium was exploding with angry shouts from the participants, yet the silence felt deafening. It felt like an implosion of sorts. Suddenly it felt like the event was screwed, big time.</p>
<p>We were told to empty the auditorium, since we had used up our entire time duration (which was for the opening ceremony). The participants were livid. Most of our team was involved in calming them down, and assuring them that the event will start soon and they will be kept updated through our discord. As the participants left, and so did ACM members, the tech team sat outside the auditorium, on the floor, racking our brains hard - trying desperately to find any small error. ACM team was assigned a control room in an SJT smart classroom, so it was just 4 of us, and our dear chairperson - who was somehow quite calm and composed. Have to give it to him - when everyone was panicking and a lot of his reputation was at stake, guy managed to stay completely calm and just keep his trust in us - telling us to focus on getting the app functional while he handled any curveballs graVITas team threw at us.</p>
<p>This is where I made a big mistake. Mr. Backend had insisted on using a serverless backend, connected to a CockroachDB (PostgreSQL-compatible) instance. We had our doubts about it. He assured us he had enough experience with it - the instances would upscale whenever the load was high. It won’t crash. It’ll work. He spent hours convincing us it’ll work. And when everything was in a state of chaos and panic, when we should have taken his advice and sat down and traced our steps through the code to see what was causing the issue, when we should have been working as a team - I flipped. Mr. Frontend and I turned on the backend guy, and started ranting about how this was an issue due to the serverless architecture of our backend. It simply wasn’t able to handle the load, we said. He disagreed, but we didn’t care at that moment.</p>
<p>We convinced the others, and soon most of the team was trying to find server-based alternatives for hosting our backend. We started finding possible issues with our serverless backend, nitpicking at everything. At one point, I noticed that the maximum active connections to our backend was capped at 100. <em>“Voila!”,</em> I remember thinking. I found the issue. Only 100 people were able to connect to our backend at once. Everyone started celebrating - we found the issue! A small oversight on the cloud console. No biggie, we instantly scaled it up to 1000 and redeployed. Metrics became green again, the cheers became louder. I felt like I was the most important person in the room for a while.</p>
<p>Why did I feel that only for a while? Because within 5 minutes, it crashed again.</p>
<p>Have you ever heard an entire classroom cuss out loud at once? Yeah, that definitely didn’t happen then.</p>
<p>We wasted more than 5 hours trying to switch to a server-based, locally hosted solution to our backend - when we shouldn't have ever doubted our most experienced guy in the first place. Sure, his entire argument was just “trust me bro”, but we should’ve done our due diligence on whether serverless was actually the issue or not, before directly assuming that it was the issue.</p>
<p>It was soon 8:30PM. Cryptic Hunt 2024 was cancelled for that day, officially. The events team was LIVID - they were facing a lot of backlash for the failure of our event, and we were being given (very well deserved) flak for it. ACM core went back to their rooms, awaiting further instructions. Our chairperson, along with the research lead and a senior core member of ours who was a part of graVITas events committee, headed to the fest control room to try and calm matters down there.</p>
<p>I remember leaving SJT all alone, while raindrops were pouring on me. I could just look at the floor while I walked. That walk will always stick with me. That walk back was one that was low on so many levels - it was straight up as if a ball of dark energy was around me as I walked. So many thoughts, so many mental conflicts. “Is any of this even worth it?”, “should have just left this chapter long ago”, “what’s the bloody point” - everything was hitting at once. There were quite a few losses/issues many of us were facing in our personal lives as well, while building up to Cryptic Hunt 2024 - so that didn’t help either. It really wasn’t that bad, don’t get me wrong. It’s just an event. In a college fest. A damn college fest. I know that as well, it all sounds so over-exaggerated - but at that moment, after 12+ hours of constant debugging in prod with absolutely 0 progress, any developer would’ve felt like absolute shit.</p>
<p>At the same time, three of our very own were waging a different war - one to make sure our event doesn’t get cancelled (even though it’d save us a lot of pain, imagine telling 800 people they won’t get refunded their INR250 + GST for absolutely no rhyme or reason). Our chair, research lead, and a senior core member (who was part of the events committee) were handling negotiations with graVITas control room. If you want a mild understanding as to how phenomenally grave the situation was, I’ll let you in on something the senior core member in concern told me:</p>
<p>“For the first time, I saw our chair sit on the footpath - and he looked… dejected? He looked as if he was about to cry. The same person who was calm throughout the day, who was the pillar of support we needed throughout the day - he was seated there all alone facing the wrath of the committee, just thinking ‘why did we put in this much of effort, if it was to all go in vain’ - that hit me hard.”</p>
<h2 id="heading-the-resilience-tech-mostly">The Resilience [tech (mostly?)]</h2>
<p>I crashed on my bed somewhere around 8:45PM - after more than 12 hours of useless debugging. I woke up at 9:15PM to a call from Mr. Frontend himself.</p>
<p>“Come to R3xx, we’re sitting and figuring out the problem,” he said. I just wanted some sleep, man. This seems impossible to figure out anyways - what's the point. Yet, despite my body telling me no in 15 different ways, I got up and slogged my way towards our senior’s room.</p>
<p>The atmosphere there wasn’t very bright either, but there definitely was some hope. It was at this moment I saw the entire tech community of VIT (not ACM, mind you - the entirety of VIT) come together. In hindsight, it was something beautiful - all these big tech chapters having insane amounts of not-so-friendly <em>friendly competition</em> amidst one another, always trying to one-up the others - yet when one chapter was drowning, they all came together to help. We had Anuj Parihar from GDSC, Pratham Mishra from CSI, Kaushal Rathi from CSI alongside ACM board, and 3 of us senior core members - all sitting in one R block 3-bed room - with one common goal in mind: save the Cryptic Hunt.</p>
<p>I’ll deviate from topic for a bit here, because this is a good moment to express my genuine gratitude towards all the seniors who came to our aid that night. From myself, and the entirety of ACM, we genuinely appreciate the camaraderie and brotherhood you guys showed that night - while y’all could have very well enjoyed watching us sink, y’all chose not to. You guys are the biggest reason we could save the event, and it means a lot to all of us.</p>
<p>Back to topic, though. It was no longer just us - we now had seniors working our case as well, and many of them were returning from internships in big tech firms. Suddenly things seemed to fall into place. Suddenly issues were getting clearer, actual problems were being solved.</p>
<p>To the seniors, the problem was identified almost immediately. What was it?</p>
<p>FCM tokens. Firebase Cloud Messaging tokens. Four innocent words that would haunt our dreams for months to come.</p>
<p>You see, when you're building a real-time system that needs to notify users about cache invalidation - essentially telling their apps "hey, new questions are live, refresh your local data" - FCM is your go-to solution. It's reliable, it's fast, it's used by literally millions of apps worldwide. What could go wrong?</p>
<p>Everything. Absolutely everything.</p>
<p>Here's what we did - and I want you to cringe along with me as I explain this architectural nightmare. All FCM tokens for our 800+ participants were stored in a single database table. The same table that stored user information, team memberships, scores - basically everything. Our beloved users table that was the heart and soul of our entire application.</p>
<p>Now, every time we wanted to send push notifications for cache invalidation (which happened every time we published new questions or made updates), our backend would:</p>
<ol>
<li><p>Query the users table to get all FCM tokens</p>
</li>
<li><p>Start a database transaction</p>
</li>
<li><p>Send notifications to all ~1000 tokens one by one</p>
</li>
<li><p>Update delivery status back in the database</p>
</li>
<li><p>Commit the transaction</p>
</li>
</ol>
<p>Sounds reasonable, right? WRONG.</p>
<p>Here's the kicker - the entire FCM sending process was wrapped inside a database transaction. And during a transaction, the table gets locked. Not for a few milliseconds like it should be, but for the ENTIRE duration of sending 1000+ push notifications.</p>
<p>Do you know how long it takes to send 1000 FCM notifications? About 2-3 minutes on a good day. During those 2-3 minutes, our users table was completely inaccessible for writes. Every login attempt, every team join request, every score update, every single write operation that our app needed to perform was just... waiting. Queuing up. Timing out.</p>
<p>Picture this: User opens the app, tries to join a team. Backend tries to write to the users table. Table is locked because FCM is busy sending notifications. Request waits. And waits. And waits. Eventually times out with a 500 error. User tries again. Same thing. Multiply this by 800 frustrated participants all mashing the "Join Team" button simultaneously.</p>
<p>But wait, it gets worse! (I know, I know, how is that even possible?)</p>
<p>FCM doesn't guarantee 100% delivery success. Network hiccups, invalid tokens, devices that are offline - stuff happens. And what did our brilliant system do when FCM failed to deliver a notification? It retried. Automatically. Which extended the transaction duration. Which kept the table locked for even longer. Which made more requests timeout. Which made more users retry. Which created more load. Which made FCM fail more often. Which triggered more retries.</p>
<p>It was a beautiful, perfectly orchestrated cascade of failure. A masterpiece of how NOT to design a system.</p>
<p>The seniors took one look at our database logs, saw the transaction duration metrics, and immediately knew what was happening. "Your FCM implementation is locking your users table," Anuj said, as casually as someone pointing out that the sky is blue. "Move it outside the transaction."</p>
<p>That's it. That was the fix. Move the FCM calls outside the database transaction. Let the transaction handle just the database operations, and let FCM do its thing separately. If notifications fail, who cares? Cache invalidation is a nice-to-have, not a must-have. Users can manually refresh if needed.</p>
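<p>In code, the fix looks roughly like this. Our backend was Go, but the pattern is language-agnostic - here is a minimal, runnable Python/SQLite sketch (names like <code>send_fcm</code> are illustrative stand-ins, not the real Firebase SDK) contrasting the broken flow with the fixed one:</p>

```python
import sqlite3
import time

def send_fcm(token):
    """Stand-in for a real FCM push; the real call is slow network I/O."""
    time.sleep(0.001)
    return True

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, fcm_token TEXT, notified INTEGER)")
db.executemany("INSERT INTO users (fcm_token, notified) VALUES (?, 0)",
               [(f"token-{i}",) for i in range(100)])
db.commit()

def notify_inside_tx(conn):
    """BROKEN: the send loop runs inside the transaction, so the
    users table stays write-locked for the whole duration of the pushes."""
    cur = conn.cursor()
    cur.execute("BEGIN")
    tokens = [row[0] for row in cur.execute("SELECT fcm_token FROM users")]
    for t in tokens:
        send_fcm(t)  # slow network I/O while holding the table lock
        cur.execute("UPDATE users SET notified = 1 WHERE fcm_token = ?", (t,))
    conn.commit()    # only now is the lock released

def notify_outside_tx(conn):
    """FIXED: read the tokens, send with no transaction open, then do one
    short write transaction for the delivery statuses."""
    tokens = [row[0] for row in conn.execute("SELECT fcm_token FROM users")]
    delivered = [t for t in tokens if send_fcm(t)]  # no lock held here
    with conn:  # short-lived write transaction
        conn.executemany("UPDATE users SET notified = 1 WHERE fcm_token = ?",
                         [(t,) for t in delivered])

notify_outside_tx(db)
```

<p>The key point: the slow FCM loop now runs with no lock held, and a failed push simply doesn't block the write path - cache invalidation stays best-effort.</p>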
<p>We pushed the fix somewhere around 2AM, after a few hours of removing EVERY damn transaction from the codebase. We didn’t have any business logic complicated enough to wrap inside a transaction - absolute naivety on our part. However, we didn’t stop then. We were traumatised from the day before.</p>
<p>What followed was close to 4 hours of absolute drama. We stocked up on snacks, cold drinks, chocolates - and opened up Grafana k6. k6 isn’t something any normal college student uses - it’s an industry-level load tester. But we were scared to a whole new level - so we began the process of tormenting our backend with loads of up to 6000 concurrent hits. Parallelly, we sent a message to our ACM core group - asking all our members to start spamming the create team and join team feature on our app. Absolute chaos ensued. Hundreds of messages of people sending random team codes to join, creating teams at will, leaving teams every other second - absolute mayhem. But oh boy - that was some insane fun. The feeling of seeing your app, which crashed at 800 users, handle 6500+ concurrent API calls without crashing, the high of an entire team of students up at 3AM just spamming buttons on a mobile app (which they made, btw) just silently praying they don’t see an error message, the first-degree chaos - it all formed a moment that I just won’t ever forget.</p>
<p>It was soon 6AM. We had done everything we possibly could. We headed back to our rooms, hoping to get some sleep before the event started at 8AM.</p>
<p>Somehow, I woke up at 7:45AM. Not because I wasn’t tired - I was. Simply because I was scared. Frightened - to a whole new level. I stared at the clock as it ticked closer to 8AM, the time when participants would start using the app again. Forty-five… Fifty… Fifty-five… Eight AM. The moment of truth.</p>
<p>For the next 20 minutes, I don’t think I blinked once. My eyes were plastered to my laptop screen, occasionally glancing at my WhatsApp to see if anyone reported any crashes. Time seemed to be passing even slower than usual. Everything seemed to pause.</p>
<p>And then came the moment. At 8:20AM, our vice-chair texted us - “no issues guys, cryptic hunt is live - have a good night” - and that’s when I finally let out a huge sigh of relief.</p>
<p>We actually did it. Cryptic Hunt was saved. The trauma we faced on day 0 was finally over.</p>
<p>That’s when I closed my eyes, and my head hit the pillow.</p>
<h2 id="heading-the-post-mortem-non-tech">The Post-Mortem [non-tech]</h2>
<p>I made my way to the CH control room somewhere around 11:30AM, which was basically the ground floor of Mahatma Gandhi block. The moment I entered the block, I was greeted by a sight that - to this day - stays etched in my heart.</p>
<p>Those who have heard me speak over the past year or so know that I have a famous line of sorts - “ACM is my family” - and at that moment, I saw everyone dressed in ethnic clothing - kurtas, salwars, sarees - just smiling and messing around. To the normal eye, it was probably nothing very special - but to me, after a day of just tense faces, these were the smiles and gleaming eyes of my very own people who were enjoying an event they have (almost) successfully organized. No angry participants, no malfunctioning tech - just 3 generations of ACM-VIT having fun, together. And that, my friends, is one of my favourite moments throughout my entire college life.</p>
<p>GraVITas committee allowed a 1-day extension for our event, making our event the “only 3-day event” in the entire fest (or so we marketed it, lol). The app never crashed again, and the event proceeded uneventfully (pun intended).</p>
<p>And so we concluded Cryptic Hunt 2024 - a day so forgettable, it’s become a tale that is extremely unforgettable :)</p>
<p>Cheers</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758733635135/6282711b-c537-48fa-98c4-39e440f3171f.jpeg" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[How do computers see us?]]></title><description><![CDATA['...........' go the self-driving cars these days...
With the introduction of technological innovations in the automotive sector, one is frequently left wondering how these vehicles manage to operate.
You guessed it: ‘COMPUTER VISION’ is the answer.
...]]></description><link>https://blog.acmvit.in/how-do-computers-see-us</link><guid isPermaLink="true">https://blog.acmvit.in/how-do-computers-see-us</guid><category><![CDATA[Computer Vision]]></category><dc:creator><![CDATA[Tanush Golwala]]></dc:creator><pubDate>Fri, 13 Jun 2025 06:26:21 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1699247836926/a47227ff-3b9b-446b-9aa2-66de3583a159.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>'...........' go the self-driving cars these days...</p>
<p>With the introduction of technological innovations in the automotive sector, one is frequently left wondering how these vehicles manage to operate.</p>
<p>You guessed it: ‘<strong>COMPUTER</strong> <strong>VISION</strong>’ is the answer.</p>
<h2 id="heading-introduction">Introduction</h2>
<p>Computer Vision is a field of computer science focused on enabling computers to identify and understand the contents of images and videos. It is a branch of <strong>Artificial</strong> <strong>Intelligence</strong> that enables computers to interpret and analyze the visual world.</p>
<p>Since its inception, computer vision has found its way into multiple facets of day-to-day human activity - from <strong>detecting</strong> <strong>intrusions</strong> in surveillance video in Israel and <strong>monitoring</strong> <strong>mining</strong> <strong>equipment</strong> in China to rapid <strong>face</strong> <strong>detection</strong> in Japan.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1699245572790/946912d0-8a33-42e8-99d3-75ebb4435062.jpeg" alt="Israel has used Computer Vision to develop an intrusion detection system. " class="image--center mx-auto" /></p>
<p>The image shows how Israel has used Computer Vision to develop an intrusion detection system.</p>
<h2 id="heading-what-does-the-past-look-like">What does the past look like?</h2>
<p>It may seem implausible, but the concept of computer vision originated more than half a century ago, in <strong>1963</strong>, when <strong>Larry</strong> <strong>Roberts</strong>, known as the "<strong>Father</strong> <strong>of</strong> <strong>Computer</strong> <strong>Vision</strong>", discussed the possibility of extracting 3D geometrical data from 2D perspective images.</p>
<p>However, the real breakthrough in Computer Vision was seen in <strong>2001</strong>, when two <strong>MIT</strong> <strong>researchers</strong> came up with the well-known ‘<strong>Viola</strong> – <strong>Jones</strong> <strong>Algorithm</strong>’ which revolutionized the field of face detection thus complementing the applications of CV.</p>
<p>In recent decades, computer vision has seen further strides, both in development and application, thanks to tech giants like Google, Meta, Apple, and Tesla tapping into its immense capabilities. Governments of many nations, especially developing nations like India, are increasingly supporting its development and deployment, turning it into a race.</p>
<p>Now, let us have a look at one of the most widely used libraries in CV.</p>
<h2 id="heading-opencv">OpenCV</h2>
<p><strong>OpenCV</strong> (<em>Open</em>-<em>Source</em> <em>Computer</em> <em>Vision</em> <em>Library</em>) is an open-source computer vision and machine learning software library. OpenCV was created to expedite the use of machine perception in commercial goods and to offer a common foundation for computer vision applications.</p>
<p>The library has more than <strong>2500</strong> <strong>optimized</strong> <strong>algorithms</strong> and can be used in various languages, a few of which include Python, JavaScript, C++, and Java. It has been under <strong>active</strong> <strong>development</strong> since <strong>2011</strong> and keeps constantly receiving updates.</p>
<h2 id="heading-what-can-you-do-with-opencv"><strong>What can you do with OpenCV?</strong></h2>
<h3 id="heading-image-processing"><strong>Image Processing</strong></h3>
<p>Before getting into image processing, we shall look at how images are perceived by the OpenCV library.</p>
<p>Each digital image can be represented as a <strong>3</strong>-<strong>dimensional</strong> <strong>NumPy</strong> array (height × width × channels), where each channel stores a value ranging from <strong>0</strong> to <strong>255</strong> in RGB format. The combination of the three values at each pixel is what gives the human eye a perception of color.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1699246233569/1fe231ae-c7f1-4c8e-848d-bc63519d06db.png" alt="Representation of an image in terms of an array representing RGB values" class="image--center mx-auto" /></p>
<p>Representation of an image in terms of an array representing RGB values</p>
<p>Effectively, when you read an image using OpenCV, it stores the data in <strong>BGR</strong> (<strong>blue</strong>, <strong>green</strong>, <strong>red</strong>) order by default. This needs to be corrected using the <strong>cvtColor</strong> function provided by the OpenCV library, with its built-in flag for converting <strong>BGR</strong> <strong>to</strong> <strong>RGB</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1699246751571/b8deef8e-7887-4936-adc3-fe0037c3039f.png" alt class="image--center mx-auto" /></p>
<p>Notice how the <strong>index</strong> <strong>correction</strong> helps CV interpret the same image differently</p>
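<p>The conversion the screenshots apply is OpenCV's <code>cv2.cvtColor(img, cv2.COLOR_BGR2RGB)</code>. Under the hood it just reverses the channel axis - something you can see with plain NumPy (used here instead of OpenCV to keep the sketch dependency-light):</p>

```python
import numpy as np

# a tiny 1x2 "image" in BGR channel order: one pure-blue pixel, one pure-red pixel
bgr = np.array([[[255, 0, 0],      # B=255, G=0, R=0  -> blue
                 [0, 0, 255]]],    # B=0,   G=0, R=255 -> red
               dtype=np.uint8)

# equivalent of cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB): reverse the last axis
rgb = bgr[..., ::-1]

# the blue pixel is now (R=0, G=0, B=255) and the red pixel (R=255, G=0, B=0),
# i.e. the channels line up with what an RGB-based viewer expects
print(rgb[0, 0], rgb[0, 1])
```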
<h3 id="heading-blurring-and-sharpening"><strong>Blurring and Sharpening</strong></h3>
<p>Blurring in computer vision is commonly achieved using a <strong>Gaussian</strong> <strong>blur</strong> <strong>filter</strong> (sharpening uses the closely related unsharp-masking technique, which subtracts a blurred copy from the original). Both are achieved by creating a <strong>kernel</strong> that acts as a <strong>filter</strong> over the original image. In image processing, a kernel or mask is a <strong>small</strong> <strong>matrix</strong> used for <strong>blurring</strong>, <strong>sharpening</strong>, <strong>embossing</strong>, and <strong>edge</strong> <strong>detection</strong>.</p>
<p>To put it simply, a kernel defines each output pixel as a function of its neighboring pixels. The kernel's values and matrix size determine how much blurring or sharpening occurs. A kernel is applied to an image by sliding it over each pixel, carrying out the convolution procedure, and substituting the weighted sum for the original pixel value. Repeating this for every pixel in the original image yields the blurred result.</p>
<p>Let us make an image using OpenCV</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1699247073728/3a5223a1-456d-4d12-aa5e-bea1a97cb867.png" alt class="image--center mx-auto" /></p>
<p>Creating a kernel for it,</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1699247166217/47dd1859-11f2-4804-b984-64b734309b53.png" alt class="image--center mx-auto" /></p>
<p>Applying the kernel to achieve the blurring effect,</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1699247238240/6d814908-a056-44ec-a126-fdba6746f618.png" alt class="image--center mx-auto" /></p>
<p>This demonstrates how the <strong>sharpness</strong> of a picture can be <strong>altered</strong> by performing basic mathematical operations on its <strong>pixel</strong> <strong>values</strong>.</p>
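<p>The slide-multiply-sum procedure described above can be written out directly. Below is a minimal sketch in NumPy (so it runs without OpenCV): a 3&times;3 averaging kernel applied by hand, which is exactly the loop that OpenCV's <code>cv2.filter2D</code> performs in optimized form. The example image and kernel are illustrative:</p>

```python
import numpy as np

def apply_kernel(image, kernel):
    """Slide the kernel over every pixel and replace it with the
    weighted sum of its neighborhood (a plain 2D convolution)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    # Pad the borders so edge pixels also have a full neighborhood.
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(image, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            window = padded[y:y + kh, x:x + kw]
            out[y, x] = np.sum(window * kernel)
    return out

# A 3x3 averaging kernel: each output pixel becomes the mean of its
# neighborhood, which smears sharp transitions, i.e. blurs the image.
blur_kernel = np.ones((3, 3)) / 9.0

# A tiny grayscale image: one bright spike on a dark background.
img = np.zeros((5, 5))
img[2, 2] = 9.0

blurred = apply_kernel(img, blur_kernel)
print(blurred[2, 2])  # the spike's energy is spread over its neighbors
```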
<p>Blurring and sharpening are among the most widely used applications of image processing in computer vision, but a lot more can be done with the highly optimized functions present in the OpenCV library.</p>
<p>Edge detection works by <strong>detecting</strong> <strong>discontinuities</strong> in brightness and helps us <strong>enhance</strong> <strong>object</strong> <strong>tracking</strong> and <strong>detection</strong> when it comes to <strong>video</strong> <strong>processing</strong> using OpenCV.</p>
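<p>That "discontinuity in brightness" idea can be sketched in a few lines of NumPy: wherever the difference between neighboring pixel values is large, there is an edge. Production code would typically use OpenCV's <code>cv2.Canny</code>, which builds on the same gradient idea with smoothing and thresholding; the toy image below is illustrative:</p>

```python
import numpy as np

# A vertical edge: dark columns on the left, bright columns on the right.
img = np.array([
    [0, 0, 10, 10],
    [0, 0, 10, 10],
    [0, 0, 10, 10],
], dtype=float)

# Absolute brightness differences between neighboring columns.
# The edge shows up as a large value exactly where intensity jumps.
gx = np.abs(np.diff(img, axis=1))
print(gx)  # each row reads [0. 10. 0.] -- the jump sits between columns 1 and 2
```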
<h3 id="heading-video-processing"><strong>Video Processing</strong></h3>
<p>In the context of video, computer vision allows computers and systems to <strong>extract</strong> <strong>useful</strong> <strong>information</strong> from digital photos, videos, and other visual inputs, and then act or offer recommendations based on that information.</p>
<p>Typically, computer vision is used in <strong>conjunction</strong> with another <strong>library</strong> while <strong>analyzing</strong> videos. The two most popular ones are <strong>TensorFlow</strong> and <strong>Mediapipe</strong>.</p>
<h3 id="heading-mediapipe"><strong>MediaPipe:</strong></h3>
<p>MediaPipe is a <strong>cross</strong>-<strong>platform</strong> <strong>library</strong> developed by <strong>Google</strong> that provides <strong>ready</strong>-<strong>to</strong>-<strong>use</strong> <strong>ML</strong> <strong>solutions</strong> for computer vision tasks. It offers a flexible collection of <strong>pre</strong>-<strong>built</strong> <strong>modules</strong> for <strong>computer</strong> <strong>vision</strong> and <strong>machine</strong> <strong>learning</strong>, including face detection, pose estimation, object tracking, and hand tracking. By offering ready-made solutions to challenging visual problems, these modules speed up app development and save developers the time and effort of building everything from the ground up.</p>
<p>Let us take the example of MediaPipe Hands.</p>
<p>For every recognition run, the <strong>Gesture</strong> <strong>Recognizer</strong> creates a <strong>gesture</strong> <strong>detection</strong> result object. It contains the hand landmarks in image coordinates, the handedness (left/right hand), and the <strong>hand</strong> <strong>gesture</strong> categories of the detected hands.</p>
<p>MediaPipe reports <strong>x</strong>, <strong>y</strong>, and <strong>z</strong> coordinates for each of the <strong>21</strong> hand landmarks. The depth at the wrist serves as the origin, and the z coordinate represents the landmark's depth: the smaller the value, the closer the landmark is to the camera.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1699247336202/95e29df8-3500-4fcd-818f-8db4d08a7bda.png" alt class="image--center mx-auto" /></p>
<p>Representation of HandLandMark Recognition with OpenCV and Mediapipe</p>
<p>Similarly, if we look at a video stream, an object detection model can identify which of a known set of objects might be present and provide information about their positions within the image.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1699247432537/d9a08758-9b31-4a78-b8b9-fe050c98ef7b.png" alt class="image--center mx-auto" /></p>
<p>Vehicle Identifiers used in streets of New York and Chicago</p>
<h2 id="heading-what-does-the-future-hold"><strong>What does the Future hold?</strong></h2>
<h3 id="heading-healthcare">Healthcare</h3>
<p>In medical imaging, computer vision will be essential for early <strong>disease</strong> <strong>detection</strong>. Vital-sign monitoring and medical-scan analysis are two of its key applications. Companies like <strong>AINexus</strong> <strong>Healthcare</strong> are continually improving their models and have already begun employing computer vision for diagnoses.</p>
<h3 id="heading-security"><strong>Security</strong></h3>
<p>Security operations will be strengthened by improvements in <strong>object</strong> <strong>tracking</strong> and <strong>facial</strong> <strong>recognition</strong> technologies, helping businesses and public-safety organizations identify and mitigate hazards more effectively. China's sophisticated facial recognition system, for example, collects data from a huge network of cameras that covers most of its population.</p>
<h3 id="heading-smart-cities"><strong>Smart cities:</strong></h3>
<p>With the help of computer vision, <strong>urban</strong> <strong>settings</strong> can be made safer and more efficient through better traffic management, waste management, and <strong>infrastructure</strong> <strong>maintenance</strong>. <strong>Google</strong> subsidiaries like ‘<strong>Sidewalk</strong> <strong>Labs</strong>’ have already started developing an ecosystem for such cities.</p>
<h2 id="heading-bibliography"><strong>Bibliography:</strong></h2>
<p><a target="_blank" href="https://opencv.org/about/">https://opencv.org/about/</a></p>
<p><a target="_blank" href="https://www.edlitera.com/en/blog/posts/computer-vision-edge-computing#mcetoc_1g2q47gt6c">https://www.edlitera.com/en/blog/posts/computer-vision-edge-computing#mcetoc_1g2q47gt6c</a></p>
<p><a target="_blank" href="https://people.csail.mit.edu/sparis/bf_course/slides/02_gaussian_blur.pdf">https://people.csail.mit.edu/sparis/bf_course/slides/02_gaussian_blur.pdf</a></p>
<p><a target="_blank" href="https://cds.cern.ch/record/400313/files/p21.pdf">https://cds.cern.ch/record/400313/files/p21.pdf</a></p>
]]></content:encoded></item><item><title><![CDATA[The Chef’s Secrets: The Story behind ExamCooker]]></title><description><![CDATA[Isn’t there something deeply invigorating about seeing an idea go from a messy whiteboard sketch to a widely-used working website? Well, Rome wasn’t built in a day. No one talks about the painful journey in between…the hours spent debugging, learning...]]></description><link>https://blog.acmvit.in/the-chefs-secrets-the-story-behind-examcooker</link><guid isPermaLink="true">https://blog.acmvit.in/the-chefs-secrets-the-story-behind-examcooker</guid><category><![CDATA[System Design]]></category><category><![CDATA[System Architecture]]></category><category><![CDATA[Next.js]]></category><category><![CDATA[ACM]]></category><dc:creator><![CDATA[ExamCooker]]></dc:creator><pubDate>Fri, 13 Jun 2025 05:53:04 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749747187893/a7284bb1-2c80-44f4-960e-c693ad63c4b2.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Isn’t there something deeply invigorating about seeing an idea go from a messy whiteboard sketch to a widely-used working website? Well, Rome wasn’t built in a day. No one talks about the painful journey in between…the hours spent debugging, learning tech you’ve never heard of and figuring out how to turn ‘just another website’ into something people rely on. That’s what this blog is about: the experience of building, learning and sometimes getting it hilariously wrong!</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXdx9nD0HyjD4R1ACCwmpSwFS3KEwthQDpH4sSti4eU9PDy5Y6OSI0NGagSWCesaNNIdLxaEloHHLKBrk4Ntnp_-LYKUuDS_OXS3jeDwtjxHqmoD7z6O0hRpBYGMk2cAO1oVAgR9cg?key=1JTbnTjzxcLBY5-GsKMlnA" alt /></p>
<p>It all started when Sunny, our senior, came up with this relatable idea that instantly clicked with everyone. <em>A platform for past year papers and notes</em>. The maintainers built on this and shaped it into one of the projects for last summer’s ACM project cycle that we would learn from. And that, kids, is the origin story of <strong>ExamCooker™</strong> (unofficial trademark pending, courtesy of our favourite Bengali).</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXelDQ9mvxoYL29j0AnZpugHHzI0bR8UN95qjC66uUwy9DKgzrhgDqDo_dGQV4-BLIav5XxaOXUaJiINOE1mpK2L4RBgYdGsvrph409df-WMUw2wWp4bFSW5qPINCzBG5e9HYFB-xg?key=1JTbnTjzxcLBY5-GsKMlnA" alt class="image--center mx-auto" /></p>
<p>Initially, we were all equally clueless. That’s when our project maintainers, Supratim, Eshita, Nitesh and Kairav, stepped in. They didn’t dictate what ExamCooker should be. Instead, they left the brainstorming to us–what features we wanted, what problems to solve and what the platform should feel like for a lazy student using it at 2 AM before an exam.</p>
<p>Learning and building happened simultaneously. One minute we were hunched over VS Code, and the next we’d be on a Gmeet with Kairav, patiently guiding us through NextJS and breaking down concepts we barely understood. But we’ll save the emotional rollercoaster for later. First, let’s talk about what we were actually building.</p>
<p>At its core, <strong>ExamCooker</strong> is a one-stop web application for exam resources, from notes and past papers to student forums, built by <strong>ACM</strong>-<strong>VIT</strong> for the students of VIT Vellore. Let’s walk through how everything comes together. What each part of the system does, how data travels from the user’s screen to our backend and database and how we keep things scalable, reliable, and easy to maintain. It wasn’t all smooth sailing, but every decision had a reason (and sometimes a lesson attached).</p>
<h2 id="heading-high-level-architecture-overview"><strong>High-Level Architecture Overview</strong></h2>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXdYqFIuqpxThvstKRKw-O9GKHhbwkYAkUc1-UMwpX_6gPFEFWClpai01RSJ9ztJ83mX-3uMKmO7_2wlCOrQA4godRhGOo49b4xVpudvxoPsLZTYX2pNDpTkZTwBWVyGuT7mb50h3w?key=1JTbnTjzxcLBY5-GsKMlnA" alt /></p>
<p>At a high level, ExamCooker follows a classic client-server model with a <a target="_blank" href="https://www.sitecore.com/resources/insights/development/what-is-a-decoupled-cms">decoupled backend service</a> and cloud-first architecture. Every key component in ExamCooker plays a pivotal role.</p>
<p>Time to enter dev mode because we’re going to dive into the details so that the next time you upload a paper, you’ll know exactly what’s happening in the background. <em>And also to brag about the tech we do at ACM!</em></p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXdcKtaYvsduGaWdycaE8HsX0MIBxMnwASMEHnfa5IhOFtB13ZTcn7WRV4S2HUaldqngQCRj49zc792KpK5tDzBD7nl_rL3I7UFEv4SODFeiA8bEs3r-wZw4F6n0GXCCqQD14VldgA?key=1JTbnTjzxcLBY5-GsKMlnA" alt /></p>
<h3 id="heading-frontend-nextjs-on-vercel-with-server-side-rendering">Frontend: Next.js on Vercel (with Server-Side Rendering)</h3>
<p>The user interface is powered by <a target="_blank" href="https://nextjs.org/docs">Next.js</a> (on top of <a target="_blank" href="https://react.dev/learn">React</a>) and styled with Tailwind CSS for that clean, modern look. It's deployed on Vercel, which handles the scaling magic whenever traffic spikes (like the night before an exam). <a target="_blank" href="https://nextjs.org/docs/pages/building-your-application/rendering/server-side-rendering">Server-Side Rendering</a> (SSR) means pages are rendered on the server, so users get lightning-fast load times and content that Google bots can read (hello, SEO). We chose SSR to ensure that even dynamic content like exam resources or forum posts loads quickly. Vercel’s platform automatically handles scaling the Next.js serverless functions, so as traffic increases, more instances can render pages in parallel.</p>
<p>Once the page loads, it becomes an interactive React app, no page reloads, just smooth client-side routing. Take Instagram for instance. You can like a post, comment or scroll through profiles without the whole page refreshing. That’s the kind of seamless experience we aimed for. We also made use of Next.js 14's <a target="_blank" href="https://nextjs.org/docs/app">App Router</a> and server actions. That means instead of writing a whole separate API just to update a favorite or post a forum reply, we let components call server-side functions directly.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXfljBuHEayp1G8IrvFoRGvb4UNMjZFk-f1fETaC1FRgltG6ulLPmnqQkTUIGDNfXhcEnY5IcHbzTlR3i967Jbydl-Yhrg0QZ-bkajLu5OTVmoj-3TydF5p_2WY6JL7v8xU2OadyKQ?key=1JTbnTjzxcLBY5-GsKMlnA" alt /></p>
<h3 id="heading-authentication-google-oauth-20-via-nextauth">Authentication: Google OAuth 2.0 via NextAuth</h3>
<p>Login is handled by Google OAuth, courtesy of <a target="_blank" href="https://next-auth.js.org/getting-started/introduction">NextAuth</a>. Only students with official VIT emails can get in. No passwords to store and no forgotten credentials to reset. It's secure, fast, and familiar. On the backend, user sessions are validated on every page request using NextAuth helpers. If you're not signed in, you won't even see the dashboard; SSR makes sure of that before sending any content.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXckKaSkoWuKn5EIagCcwlJezYAApkl7P4eGmnOncYgMrLQdDdFZYVRaj3vASjvY_5b1ip30sO9_KeraZvOPDqMwwtH81YY2NDntsnRnd6HJ7arWAs0iXugTbjQ2JL_1NdjApkSS9w?key=1JTbnTjzxcLBY5-GsKMlnA" alt /></p>
<h3 id="heading-backend-service-fastapi-microservice">Backend Service: FastAPI Microservice</h3>
<p>Some tasks are just too heavy for the frontend, like parsing a PDF or generating a thumbnail. For that, there's a <a target="_blank" href="https://dev.to/paurakhsharma/microservice-in-python-using-fastapi-24cc">Python FastAPI microservice</a>. It handles the grunt work, i.e. processing uploaded files and pushing them to Google Cloud Storage, and it talks to the frontend over REST APIs. If you've ever uploaded a 30 MB file and wondered how it didn't crash the site, now you know who to thank.</p>
<p>Here’s what happens when you upload a PDF:</p>
<ul>
<li><p><strong>Thumbnail Generation</strong>: The microservice takes the uploaded PDF and converts the first page into a JPEG thumbnail. This helps users preview files quickly without needing to open them. Python's powerful library ecosystem (think PyPDF2, Pillow, PDFPlumber, etc.) makes this process smooth and reliable.</p>
</li>
<li><p><strong>Cloud Upload</strong>: Both the original PDF and the generated thumbnail are uploaded to Google Cloud Storage (GCS). Offloading large files to cloud buckets means scalable, durable storage and fast delivery when users request them.</p>
</li>
<li><p><strong>Returning URLs</strong>: Once the upload is complete, the service generates public URLs or storage paths and sends them back to the main application as a JSON response containing links to the PDF, the thumbnail and any relevant metadata or status messages.</p>
</li>
</ul>
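<p>Putting the three steps together, the JSON the microservice hands back might look something like the sketch below. The field names and the <code>build_response</code> helper are illustrative guesses at the shape of such a payload, not ExamCooker's actual API:</p>

```python
import json

def build_response(pdf_url: str, thumb_url: str) -> str:
    """Assemble the JSON payload returned after a successful upload.
    Field names here are hypothetical, chosen for illustration."""
    payload = {
        "status": "ok",
        "fileUrl": pdf_url,        # public GCS link to the original PDF
        "thumbnailUrl": thumb_url, # public GCS link to the generated JPEG
    }
    return json.dumps(payload)

# Example usage with made-up bucket URLs:
resp = json.loads(build_response(
    "https://storage.googleapis.com/bucket/paper.pdf",
    "https://storage.googleapis.com/bucket/paper.jpg",
))
print(resp["status"])  # ok
```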
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXe0HCd25Dq_VGpehM2TmD4feU2yX6b6I_jWnNGN-H3kSQ4Xp7rN_Ude8nv69k6_Aeddq-wi7q19BxQX5liVRRdyrI_rKkPsc5YWn_KobzfkkW9fRAeQLE5SUbzcxH9Ug2sRvqTVfg?key=1JTbnTjzxcLBY5-GsKMlnA" alt class="image--center mx-auto" /></p>
<p>This microservice doesn't operate in isolation. It communicates with the Next.js server via RESTful API calls. When a user uploads a file from the frontend, Next.js hands it off to the FastAPI backend, waits for the processed links, and then stores those links in the database using Prisma. The final step is sending the links to the frontend so users can view previews or download the file directly.</p>
<h3 id="heading-database-cockroachdb-prisma">Database: CockroachDB + Prisma</h3>
<p>As mentioned earlier, for storing all the structured data, ExamCooker uses CockroachDB, a distributed SQL database that speaks <a target="_blank" href="https://www.postgresql.org/about/">PostgreSQL</a>. <em>Fun fact: CockroachDB was named after the word “cockroach” since cockroaches are infamous for being hard to kill!</em> With CockroachDB, you get horizontal scalability (just add nodes to handle more users or data) and high availability (failures don’t bring the system down), making it a solid choice for a growing student platform.</p>
<p>The database handles everything from user profiles (hooked into Google Auth via NextAuth) to metadata about uploaded notes, past papers, forum discussions, and more. A single resource record (like a past paper) likely includes its title, a GCP URL for the file, a URL for the thumbnail, who uploaded it (linked to the Users table), and tags for categorization (like subject or year).</p>
<p>With Prisma and CockroachDB working in tandem, the website gets the best of both worlds: resilient infrastructure and developer-friendly data access.</p>
<p>Prisma doesn't just make querying the database easier, it also handles schema migrations as the application evolves. This ensures that the structure of the database stays in sync with the TypeScript code, drastically reducing bugs caused by schema mismatches. With Prisma in the stack, developers can iterate quickly and safely, knowing that both their types and their tables are aligned.</p>
<h3 id="heading-rate-limiting-layer-redis-for-fair-usage">Rate Limiting Layer: Redis for Fair Usage</h3>
<p>We didn't want anyone overwhelming our servers during those frantic exam prep sessions, which is exactly why ExamCooker integrates Redis (specifically Upstash Redis) as a rate-limiting layer. Redis is an in-memory data store that tracks server action usage patterns in real time, ensuring that no single user can make excessive requests that would slow down the platform for everyone else.</p>
<p>The rate-limiting works by tracking request counts per user within specific time windows. When someone tries to upload multiple files rapidly or makes too many server action calls in quick succession, Redis keeps count and temporarily throttles their requests once they hit the configured limits. This protects our backend services and ensures a smooth experience for all students, especially during peak usage periods like exam weeks.</p>
<p>With this rate-limiting system in place, ExamCooker can handle sudden traffic spikes without degrading performance for legitimate users. It's a simple but effective way to maintain platform stability while keeping the user experience fair and responsive for everyone.</p>
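<p>The "request counts per user within specific time windows" idea is simple enough to sketch in a few lines. In ExamCooker the counters live in Upstash Redis (typically an <code>INCR</code> on an expiring key); in the sketch below a plain dict stands in for Redis so it runs anywhere, and the limit and window values are illustrative, not ExamCooker's real configuration:</p>

```python
import time
from collections import defaultdict
from typing import Optional

WINDOW_SECONDS = 60   # size of each fixed window (illustrative)
MAX_REQUESTS = 5      # allowed requests per user per window (illustrative)

_counters = defaultdict(int)  # Redis stand-in: (user, window) -> count

def allow_request(user_id: str, now: Optional[float] = None) -> bool:
    """Return True if this user is still under the limit for the current window."""
    now = time.time() if now is None else now
    # All requests inside the same 60-second span share one window number.
    window = int(now // WINDOW_SECONDS)
    _counters[(user_id, window)] += 1
    return _counters[(user_id, window)] <= MAX_REQUESTS

# The first MAX_REQUESTS calls pass; the next one gets throttled.
results = [allow_request("nitesh", now=0) for _ in range(6)]
print(results)  # [True, True, True, True, True, False]
```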
<p>To know more about Redis, this blog by one of our senior core members is all you need! <a target="_blank" href="https://blog.acmvit.in/redis-a-stellar-intro">https://blog.acmvit.in/redis-a-stellar-intro</a></p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXdbnKuEDuryW87hpOiEtLgMDww8MnqWYgBI0eAoykra9ORreVtUgexEOMeCvRyjv5IpLuG59PDvusGTUh-TAHv_daDCy0A8w1j8KItW1hovVVwJpwu0daz4QRIlqQdkJN2gUX1ZeA?key=1JTbnTjzxcLBY5-GsKMlnA" alt class="image--center mx-auto" /></p>
<h3 id="heading-file-storage-google-cloud-storage-for-media-assets">File Storage: Google Cloud Storage for Media Assets</h3>
<p>You're probably wondering what happens to those absurdly titled scanned papers that you upload after your exams. PDFs and thumbnails don't belong crammed inside your database or tangled up in serverless functions. They deserve their own VIP storage service backed by a CDN. This is why ExamCooker outsources these to Google Cloud Storage (GCS). GCS effortlessly handles storage and bandwidth, so your web servers don't have to break a sweat whether it's 10 users or 10,000 rushing to download a file at the same time. Behind the scenes, our FastAPI microservice handles file uploads using Google's SDK or REST APIs, then hands over neat public links to the Next.js server which stores these references in the database. This way only the file URLs live in the database keeping it fast and query-friendly.</p>
<h2 id="heading-data-flow-from-click-to-cloud"><strong>Data Flow: From Click to Cloud</strong></h2>
<p>Let’s walk through two common use cases: <strong>viewing exam resources</strong> and <strong>uploading new study materials</strong> to trace the complete journey from the browser to backend.</p>
<h3 id="heading-viewing-exam-resources-read-flow">Viewing Exam Resources <strong>(Read Flow)</strong></h3>
<ol>
<li><p><strong>Client Request:</strong> Eshita, a logged-in user navigates to the "Past Papers" page. The browser makes a request to the Next.js frontend.</p>
</li>
<li><p><strong>Session &amp; Cache Check:</strong> The server-side logic (SSR handler) verifies Eshita’s session via NextAuth. Before hitting the database, it may check Redis to see if a cached result is available for this query (e.g., recently fetched "Software Engineering" papers).</p>
</li>
<li><p><strong>Database Fetch (Fallback):</strong> Prisma fires a SQL query to CockroachDB. Thanks to CockroachDB's PostgreSQL compatibility and strong consistency, the query returns reliable data, which Prisma serializes into TypeScript-friendly objects.</p>
</li>
<li><p><strong>Page Rendering:</strong> Next.js renders the React components server-side with the data, returning a fully constructed HTML page. This is fast, SEO-friendly and user-ready.</p>
</li>
<li><p><strong>Direct Media Delivery:</strong> The client browser fetches the linked media files directly from Google Cloud Storage (GCS). These static assets are never served through the application server, offloading bandwidth and latency.</p>
</li>
<li><p><strong>Logging &amp; Analytics:</strong> The website might log the access or update user activity ("recently viewed") via lightweight API calls.</p>
</li>
</ol>
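<p>Steps 2 and 3 of the read flow boil down to a classic cache-aside pattern: check the fast store first, fall back to the database, then populate the cache for next time. A minimal sketch, with dicts standing in for Redis and CockroachDB and every name (<code>fetch_papers</code>, the sample rows) invented for illustration:</p>

```python
cache = {}  # Redis stand-in: query -> cached result
db = {      # CockroachDB stand-in: query -> rows (sample data, not real)
    "software-engineering": ["CAT1 2023 paper", "FAT 2022 paper"],
}

def fetch_papers(query: str) -> list:
    # 1. Check the cache first to avoid a database round trip.
    if query in cache:
        return cache[query]
    # 2. Fall back to the database (Prisma -> CockroachDB in the real app).
    rows = db.get(query, [])
    # 3. Populate the cache so the next identical request is served instantly.
    cache[query] = rows
    return rows

first = fetch_papers("software-engineering")   # hits the "database"
second = fetch_papers("software-engineering")  # served from the "cache"
print(first == second)  # True
```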
<h3 id="heading-uploading-a-new-resource-write-flow">Uploading a New Resource <strong>(Write Flow)</strong></h3>
<ol>
<li><p><strong>User Submission:</strong> Another logged-in student, Nitesh uploads a PDF titled “CAT1 A1 24-25 Artificial Intelligence-BCSE306L” (now you know the correct nomenclature!), adds a slot, year, tags and clicks submit.</p>
</li>
<li><p><strong>Rate Limit Check:</strong> Before processing the upload, Redis verifies that Nitesh hasn't exceeded the upload rate limits for his account.</p>
</li>
<li><p><strong>File Handling:</strong> The PDF is first uploaded from the browser to the Next.js backend. The backend then forwards it to the FastAPI microservice for further processing.</p>
</li>
<li><p><strong>Microservice Call:</strong> A direct backend fetch to the microservice via REST is made (e.g., POST /process_uploaded_pdf).</p>
</li>
<li><p><strong>Processing &amp; Storage:</strong> The FastAPI microservice generates a JPEG thumbnail from the first page, uploads both the original PDF and the thumbnail to designated GCS buckets, and returns the public URLs of the PDF and image.</p>
</li>
<li><p><strong>Database Write:</strong> Next.js (using Prisma) inserts a new resource entry into CockroachDB, including:</p>
<ul>
<li><p>title, fileUrl, thumbnailUrl</p>
</li>
<li><p>authorId (from the session)</p>
</li>
<li><p>Associated tags (created or linked via a junction table)</p>
</li>
</ul>
</li>
<li><p><strong>User Feedback:</strong> Nitesh will now see a success message.</p>
</li>
</ol>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXcaeMEV0fCJ1GNN5ZF9t9VyvMzgQO6-no1yZPXbR4_rt-d57M5BKFVoVvSENQ8i5GOyqDtGiiGHc29foq4P4iTWqWGmvnM65L2CTIiKnOBagaxbSNJF_xsTa8C_AawxzGH2HYmxTQ?key=1JTbnTjzxcLBY5-GsKMlnA" alt class="image--center mx-auto" /></p>
<p>Man, that was way too much tech talk! If we had a coin for every time we wrote 'scalable' or 'microservice', poor Supratim wouldn't have had to pay Google Cloud Run every month! Anyway, once deployment was done, the team worked nonstop uploading past papers and marketing for ExamCooker. Then, right around CATs (no surprises there), we saw our first big surge… 1500 users!</p>
<p>It wasn’t a smooth sail all along. At one point, we hit Redis's 10,000 requests/month limit and boy did it hit hard! The site stopped processing requests altogether. No caching, no responses, just dead silence. It was our first major obstacle and a very real reminder that even the smartest architecture can crumble if you don’t account for limits and quotas.</p>
<p>Another fine day, we crossed the 50 million request units/month limit on CockroachDB. <em>Yes, fifty million!</em> It was humbling (and kind of impressive?) to realize we’d built something students were using that much.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXdBpfJOAiH_wDMDuhUrgoA0BWk8Qvi1bmlvXXLlICEzfaodg2Ma2F5-hx00tgPQurRZiH4htfwOWhbO3oyQeUxSAC9S_vEPNctg87oQ4BxYVGd1slQGYwSqUvD019__eZt5QnbJ7Q?key=1JTbnTjzxcLBY5-GsKMlnA" alt class="image--center mx-auto" /></p>
<h2 id="heading-epilogue"><strong>Epilogue</strong></h2>
<p>You might be wondering why we are openly sharing the system design of our application. Well, we’re doing it to celebrate hitting 11,000 users!</p>
<p>As ExamCooker continues to scale and support more students every day, we believe milestones like these deserve more than just a post. They deserve a deep dive into the engine that powers it all.</p>
<p>Thinking back to a year ago at our amateur selves who didn't know how to create routes, we have certainly evolved. What started as a summer project that had us pulling all-nighters and nearly losing our minds turned into something so much bigger: a journey filled with learning, memories and some great friendships. More than just a project, ExamCooker ended up shaping our growth in ways we never imagined.</p>
<p>Here’s to growing users, more papers…and cooler features!</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXfiK07WDCWs_hMv7zon9xWvjD2b2Qf-wt2hwQpiolTfL6BeEh3c_V0JiBVe5wYzM-OcqF1K59Z1WUZn_5Y6DYHLQ7UNvDrvkYgjrdkgudGn9I4YZL9_a27fzHi-kvGepmgDbEaqmQ?key=1JTbnTjzxcLBY5-GsKMlnA" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[Git Happens: From Mess to Success]]></title><description><![CDATA[Ctrl + Z Doesn’t Work Here
If you’ve ever worked on a software project, you’ve probably heard of Git. Yep right, that magical tool developers use to not destroy each other’s code, or at least try not to.
Git is literally everywhere: from open-source ...]]></description><link>https://blog.acmvit.in/git-happens</link><guid isPermaLink="true">https://blog.acmvit.in/git-happens</guid><category><![CDATA[Git]]></category><category><![CDATA[git push]]></category><category><![CDATA[git add]]></category><category><![CDATA[merge-conflict]]></category><category><![CDATA[ACM]]></category><category><![CDATA[VITVellore]]></category><dc:creator><![CDATA[Kashish Singh]]></dc:creator><pubDate>Mon, 09 Jun 2025 11:03:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749466305836/d1c91e93-1a6a-476a-8132-67ed20b10b79.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-ctrl-z-doesnt-work-here"><strong>Ctrl + Z Doesn’t Work Here</strong></h3>
<p>If you’ve ever worked on a software project, you’ve probably heard of Git. Yep right, that magical tool developers use to not destroy each other’s code, or <em>at least try not to.</em></p>
<p>Git is literally everywhere: from open-source side projects to the codebases of billion-dollar tech giants. It’s the foundation of modern collaborative coding.</p>
<p>And, to be honest, if you have any experience with it, you’d agree that getting started with Git feels less like opening a helpful tool and more like stepping into a black hole of commit messages and branch confusion.</p>
<p>One moment you’re typing “git commit” the next you’re Googling “how to undo git commit”.</p>
<p>Relatable?</p>
<h3 id="heading-the-early-struggles-learning-git-basics"><strong>The Early Struggles: Learning Git Basics</strong></h3>
<p>When someone first introduced me to Git, they said something along the lines of: “It's easy. You just add, commit and push. Simple!”</p>
<p>Simple?</p>
<p>That was the biggest lie I’d heard since <em>“Group project means equal effort.”</em></p>
<p>Anyway, let’s break this down, because Git basics aren’t exactly basic when you’re just starting out.</p>
<p><strong>Step 1: git add .</strong></p>
<p>Okay, cool. You’ve written some beautiful (probably buggy) code. Now someone tells you to run git add . , and you do it… but nothing happens.</p>
<p><em>Was it added? Where did it go? Is it hiding?</em></p>
<p>No visual feedback. No popup. Just a silent terminal pretending like everything’s fine. You’re left wondering if you just did something amazing or irreversibly catastrophic.</p>
<p>Think of it as organizing papers on your desk before putting them in a binder. The staging area gives you control, you can choose what to include, leave out half-done stuff, or split your work into tidy, meaningful commits.</p>
<p><strong><em>NOTE:</em></strong> git add . stages all the changes (new, modified, and deleted files) in your current directory and all subdirectories. It’s like saying “Yes, Git, take it all.”</p>
<p>(Be careful though, that includes accidental additions too.)</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXc8fIFp5mfqL1jmXoXL82bAdF8DyLfsDeYzHbteP6BeIHk79517y6sMUMkHssVfbVIWNgZel6RfxWaREkMmYMo4IzqRqr_VNtN47NmO-aidp9mdsos5amavmG0woqO7EO4KsvYm?key=-5g81M9vrWU55yRLNGZWvQ" alt /></p>
<p>For example above, after git add the files go to the staging area. Running git status will show:</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXf15xFhmCDiA0KngGy5Tq2FZZGUFLIo-zjQFwds8mSfRBJ_mcqVzogVn2Vr37bTpeo6KECiP9kx6cR0NNr5uTb9aaUKYE_f9fQMb48a5QjDIZfsqtO5nyIIjY6qz1ykZ04aMytl?key=-5g81M9vrWU55yRLNGZWvQ" alt /></p>
<p><strong>Step 2: git commit -m “message”</strong></p>
<p>Now comes the commit. The part where you're supposed to <em>summarize your changes like a responsible developer.</em> This command takes a snapshot of your staged changes, basically saving your progress, along with a message describing what you changed.</p>
<p>Think of it like hitting “Save” in a game, but for your code. The message you write with -m should explain what you did, so future-you (and your teammates) know what you changed and why.</p>
<p>Sounds simple, right?</p>
<p>Well let’s be honest, your early commit messages probably looked like this:</p>
<p>(i) git commit -m "fixed stuff"</p>
<p>(ii) git commit -m "small changes"</p>
<p>(iii) git commit -m "final version"</p>
<p><strong><em>SPOILER ALERT:</em></strong> it’s <strong>never</strong> the final version.</p>
<p>Because two commits later, you're typing-</p>
<p>(iv) git commit -m "final final version"</p>
<p>(v) git commit -m "final final final version"</p>
<p>(vi) git commit -m "fully final final use this one version".</p>
<p>By the time you’re at "fully final final use this one version", Git’s probably judging you harder than your code reviewer.</p>
<p>And somewhere in the chaos, you realize you forgot to add the actual file you worked on and have committed the wrong files.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXetsMp79gOER3O6Jmg69JRVvLzDF5tUz2Y5I0DBXRSBI09sgkIVLlrODIBosjKqLqKP_igFuUkh_U6Jxw918edqFyM9_nZ4YA8C4DEl-vgY0xrVrKE1rR4aRx-ZBmTqKdX7rFx8yw?key=-5g81M9vrWU55yRLNGZWvQ" alt /></p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXeyC9PL0Phake7OjTaSyyPmon__Eo7WilDK46NRA-r8xGHHMlTtNushJZvcv9LM2giJjgoZYjpjImiTDfFkcRBnaUO-aR3FI1zdySuMdMHQrMbh44qR6G1l0kwomhYPecztYctjog?key=-5g81M9vrWU55yRLNGZWvQ" alt /></p>
<p>For reference, in the example from before we progressed to git commit, and a snapshot of these changes has now been recorded. Running git log will show your new commit at the top:</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXef3Nm37sgltq5VBQ1HFYjPM85pttwDlaociqlStud7m1R3SyMp7q0W3iMmxO5zPTXUXCPoH4q6h3oBr845v1G7O03ajq-AVxt-EdSOXB7nLjKFvf84Nw6f0e-IfA_vojTb0TSY?key=-5g81M9vrWU55yRLNGZWvQ" alt /></p>
<p><strong>Step 3: git push</strong></p>
<p>So basically after you’ve added your changes and committed them, comes git push, the final boss. This command sends your local commits to a remote repository, like GitHub, so others can see (and hopefully not judge) your work.</p>
<p>With sweaty palms, you summon the courage to type git push, and <strong>boom</strong>, git is ready with its own set of tantrums.</p>
<p>Your terminal hits you with:</p>
<ul>
<li><p><strong><em>“everything up-to-date”</em></strong> <em>(it’s totally not) - the ultimate troll message that makes you question reality.</em></p>
</li>
<li><p><strong><em>“rejected by remote”</em></strong> <em>-Git’s way of saying, “Nope. Not today, buddy.”</em></p>
</li>
<li><p><strong>“fatal: The current branch has no upstream branch.”</strong> - <em>Basically, Git doesn’t know where to send your code.</em></p>
</li>
<li><p><strong><em>“you are not currently on any branch”</em></strong> <em>- What? Is this even possible?</em></p>
</li>
<li><p><strong>“hint: Updates were rejected because the tip of your current branch is behind”</strong> -</p>
</li>
</ul>
<p><em>A mic drop moment:</em></p>
<p><em>“You forgot the most sacred rule: pull before you push! Someone else has sneaked in and changed the code while you were busy writing your masterpiece (broken code). Skip it, and you’ll wake up with a merge-conflict migraine so brutal that no nap can fix it!”</em></p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXeW_yuc-tIY7K8sCCR_9PdRP3AWTd6GoKPRKvcoF-E3piZXoaTqUK0MCL6w1O0suvuytAb853GXk5JhfM3pLNlKSa-NHObWs4cdRFQrqB8g9VMXWdwsK6MetkFRDuprcj6ldMEj7w?key=-5g81M9vrWU55yRLNGZWvQ" alt /></p>
<p>This sends your commits to the main branch on the remote repository. After pushing, teammates can pull these changes and benefit from your latest updates.</p>
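<p>For the record, the whole dance usually goes smoothly. Here’s a runnable sketch of the add → commit → push workflow; a bare repo on disk stands in for GitHub, and every file, branch, and repo name here is invented for the demo:</p>

```shell
set -e
# A bare repo on disk plays the role of the remote (what GitHub does for real)
git init --bare -b main remote-demo.git
git init -b main work-demo
git -C work-demo config user.email "you@example.com"
git -C work-demo config user.name "You"
git -C work-demo remote add origin ../remote-demo.git
echo "push workflow notes" > work-demo/notes.txt
git -C work-demo add notes.txt
git -C work-demo commit -m "Add notes on the push workflow"
# First push: -u records origin/main as the upstream, which is the cure for
# "fatal: The current branch has no upstream branch."
git -C work-demo push -u origin main
# Before later pushes, pull first so your branch isn't behind the remote
git -C work-demo pull origin main
git -C work-demo push
```

The <code>-u</code> flag only needs to be passed once per branch; after that, a plain <code>git push</code> and <code>git pull</code> know where to go.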
<h3 id="heading-pulled-the-code-welcome-merge-conflicts"><strong>Pulled the Code - Welcome, “Merge Conflicts”</strong></h3>
<p>Now, like a responsible coder, you pulled the code, only for Git to betray you yet again. VS Code now looks like it’s screaming that something is just not right, and you encounter the thing every developer is secretly terrified of: <strong>a Merge Conflict</strong>.</p>
<p>Now you have four choices:</p>
<p>(i) Accept Current (your changes)</p>
<p>(ii) Accept Incoming (their changes)</p>
<p>(iii) Accept Both (if you’re feeling lucky)</p>
<p>(iv) Compare (brave people choose this)</p>
<p>You choose one, hoping that nothing breaks and sometimes you survive it.</p>
<p><strong>Other times?</strong></p>
<p>Well, I myself have messed it up so badly that I deleted my entire fork and re-forked the repo, not once, but multiple times during my early Git journey (and honestly, I still do).</p>
<p>It’s messy, it’s frustrating, and it’s part of the learning curve. But every conflict you resolve makes you just a little bit better at reading code, understanding changes, and staying calm under pressure.</p>
<p><strong><em>PRO-TIP</em></strong> <em>:</em> Don't just blindly accept both, because sometimes you might end up with extra semicolons and a debugging session that could last an hour. Keep an eye out for &lt;&lt;&lt;&lt;&lt;&lt;&lt;, =======, and &gt;&gt;&gt;&gt;&gt;&gt;&gt;—Git’s not-so-subtle way of saying, “We have a problem.”</p>
<p>It’s up to you to step in, settle the conflict, and clean up the mess by removing those arrows.</p>
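<p>If you want to see those arrows appear in a safe sandbox, here’s a small self-contained shell session (the repo, branch, and file names are all invented) that manufactures a conflict on purpose by editing the same line on two branches:</p>

```shell
set -e
git init -b main conflict-demo
git -C conflict-demo config user.email "you@example.com"
git -C conflict-demo config user.name "You"
echo 'greeting = "hello"' > conflict-demo/app.py
git -C conflict-demo add app.py
git -C conflict-demo commit -m "Initial greeting"
# One branch makes it casual...
git -C conflict-demo switch -c feature
echo 'greeting = "hi there"' > conflict-demo/app.py
git -C conflict-demo commit -am "Casual greeting"
# ...while main makes it formal
git -C conflict-demo switch main
echo 'greeting = "good day"' > conflict-demo/app.py
git -C conflict-demo commit -am "Formal greeting"
# Both branches changed the same line, so the merge stops with a conflict
git -C conflict-demo merge feature || true
# app.py now contains <<<<<<< HEAD / ======= / >>>>>>> feature markers
cat conflict-demo/app.py
```

Resolving it means editing the file to keep the line you actually want, deleting all three marker lines, then running <code>git add</code> and <code>git commit</code> to finish the merge.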
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXe5XgdWB8Rrn7CB8RT-fSwapVDyFqQbtBPnG-k7l7ensDSKkdCIkSQWZ73DDW1jwYm45i-BfckOspMZOehhqXz6crqdcQ-QOCkR7MNz5Xnrl0XjiiVshfuwUNwdpHoDBEzgjdY1?key=-5g81M9vrWU55yRLNGZWvQ" alt /></p>
<h3 id="heading-from-confusion-to-clarity-the-git-breakthrough-moment"><strong>From Confusion to Clarity: The Git Breakthrough Moment</strong></h3>
<p>Somewhere between running git status countless times and resorting to git reset --hard in frustration, things gradually start to make sense. Git isn’t trying to be difficult—it’s just highly precise. One major breakthrough often comes with understanding how branches work. Rather than being confusing detours, they act as parallel environments, allowing different features or fixes to be developed without disrupting the main codebase.</p>
<p>Early on, learning the difference between <strong>forking a repo</strong> (creating your own copy to experiment or contribute) and <strong>cloning</strong> it (downloading a working copy to your local machine) also clears up a lot of confusion. Together, they empower you to safely explore and collaborate.</p>
<p>Visual tools like <strong>Git Graph</strong> in VS Code can also make a big difference. By showing the commit history and branch structure in a clear, visual format, they help demystify what’s happening behind the scenes. With time and practice, Git begins to feel less like an obstacle and more like a reliable system that brings structure and flexibility to collaborative coding.</p>
<h3 id="heading-git-commands-that-make-your-life-easier"><strong>Git Commands That Make Your Life Easier</strong></h3>
<p>Once the initial confusion clears, having a few go-to commands and tools in your Git toolkit can make your workflow smoother and far less panic-inducing.</p>
<ul>
<li><p><strong>git status</strong>: Think of this as your Git health check. It tells you what’s going on, what’s staged, what’s not, and what’s being ignored.</p>
</li>
<li><p><strong>git stash</strong>: It's basically hiding your mess when the guests arrive. In technical terms, it saves your uncommitted changes and cleans your working directory, so you can switch branches or pull updates without losing your progress.</p>
</li>
<li><p><strong>git revert &lt;commit-hash&gt; :</strong> It is the polite way to undo changes in Git. It creates a new commit that undoes the changes of a previous commit, without rewriting history.</p>
</li>
<li><p><strong>git reset --hard</strong>: Use it when you have messed up your local code badly, haven’t pushed yet, and just want to go back to a clean slate. It is one of the most powerful and dangerous commands in Git: it resets your working directory, staging area, and HEAD to a specific commit, discarding all uncommitted changes.</p>
</li>
</ul>
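<p>Here’s a throwaway demo of <code>git stash</code> and <code>git reset --hard</code> in action, safe to run in an empty directory (all names are invented):</p>

```shell
set -e
git init -b main toolkit-demo
git -C toolkit-demo config user.email "you@example.com"
git -C toolkit-demo config user.name "You"
echo "v1" > toolkit-demo/file.txt
git -C toolkit-demo add file.txt
git -C toolkit-demo commit -m "v1"
# A half-finished edit we don't want to commit yet
echo "work in progress" >> toolkit-demo/file.txt
git -C toolkit-demo stash        # hide the mess; the working tree is clean
git -C toolkit-demo stash pop    # guests are gone, bring the mess back
# Changed our mind entirely: discard every uncommitted change
git -C toolkit-demo reset --hard   # careful: this cannot be undone
git -C toolkit-demo status --short # prints nothing; clean slate
```

Note the asymmetry: <code>stash</code> is reversible (your changes wait on a shelf), while <code>reset --hard</code> throws uncommitted work away for good.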
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXeyOmExzHDntbc2sQSEuOCJ2ja5iPjDufCgVIB8x5EGnqyK-UTUEC4bEwx7yxCn04GNC9ZxAQQOzK5WNPh-M1NTP0x2lvojTYK942Dv92jlmEpoOPZC6eXrHTRXcsjVI4UhzMge6w?key=-5g81M9vrWU55yRLNGZWvQ" alt /></p>
<h3 id="heading-embracing-the-git-journey"><strong>Embracing the Git Journey</strong></h3>
<p>Learning Git might feel like a rollercoaster ride at first, full of mysterious commands and countless errors. But with time, trial, and more than a few merge conflicts, it starts to make sense. What felt like random errors now seem like clear (if slightly dramatic) warnings. The learning curve is real, but it slowly turns into a solid foundation.</p>
<p>Eventually, Git stops feeling like a confusing set of commands and starts to feel like a trusted sidekick. It becomes the thing that lets you try out new ideas without fear, fix mistakes without panic, and work with others without stepping on each other’s toes. The journey from “what did I just do?” to “I’ve got this” takes time, but when it clicks, it really clicks.</p>
]]></content:encoded></item><item><title><![CDATA[When machines learn to learn]]></title><description><![CDATA[On December 5, 2017, something remarkable occurred in a peaceful research laboratory that would forever alter our comprehension of artificial intelligence. An AI system named AlphaZero was provided with nothing but the fundamental rules of the old ga...]]></description><link>https://blog.acmvit.in/self-improving-ai</link><guid isPermaLink="true">https://blog.acmvit.in/self-improving-ai</guid><category><![CDATA[Alphazero]]></category><category><![CDATA[Self-improving-ai ]]></category><category><![CDATA[CNN]]></category><category><![CDATA[backpropagation neural network]]></category><category><![CDATA[Future of AI]]></category><dc:creator><![CDATA[Vanshika Garg]]></dc:creator><pubDate>Fri, 06 Jun 2025 09:53:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749203143449/fcfa319b-e638-4524-9ec6-c91ff6d3ef8e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>On December 5, 2017, something remarkable occurred in a peaceful research laboratory that would forever alter our comprehension of artificial intelligence. An AI system named <strong>AlphaZero</strong> was provided with nothing but the fundamental rules of the ancient game of Go.</p>
<p><em>No examples from humans, no pre-loaded strategies, simply the rules.</em></p>
<p>Within only <strong>72 hours</strong> of playing itself, it didn't merely equal the skills of human masters who had spent their lives studying the game. <em>It crushed them.</em></p>
<p>It even defeated the reigning AI champion that had been laboriously trained on human data for years, employing tactics that experts said were "alien" and "like watching a player from the future." This was not merely a computer beating a game. It was a machine learning strategy in a way that made grandmasters go, “Wait… what just happened?” If this were a sci-fi movie, this would be the moment the soundtrack shifts and someone whispers, “<em>We</em> <em>may</em> <em>have</em> <em>gone</em> <em>too</em> <em>far</em>.”</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXetC-0yGEQCzon3x0Qrj0AoBO7kXdKhDwQ6UYBmeS63Y7hc1oBVkLLL2f-j-maC76GGJ10Ugf4MtT1NU5ZIMqRa9VwNlO2zWdVlJluVhwTAdxzudkcZQ5RO7Rj5YfqmxrknRzXY?key=HZAyl4UVtmjv1wKxYDZDoqHO" alt /></p>
<h3 id="heading-navigating-the-unknown"><strong>Navigating the Unknown</strong></h3>
<p>When I heard for the first time about <strong>AlphaZero's</strong> accomplishment, I experienced that strange dizziness that comes from <em>looking into a future coming sooner than anticipated.</em> If a machine can master Go <em>in a weekend, what happens when it sets its sights on medicine? Or climate science? Or running the economy?</em></p>
<p><em>…and what happens to us?</em></p>
<p>Are we just going to sit back and watch as machines out-teach us in real time? If the machines are going to teach themselves at a quicker pace than we learn, are we going to be just <em>observers of our own technological revolution?</em> How can we be certain these systems are <strong>human-aligned</strong> when they may one day be based on principles that we barely understand?</p>
<p>These aren't <em>theoretical, abstract philosophical speculations anymore.</em> They are pressing questions with which researchers, policymakers, and citizens must now contend, while there is still some hope of influencing the answer.</p>
<p>So what are the self-improving algorithms behind all these weighty questions, anyway? They aren’t just <strong>upgrades</strong>. <em>They’re a whole new species of intelligence, one we may not even fully understand, yet.</em> They are a break with how AI systems grow and change, a break that potentially redefines our relationship to technology and to intelligence itself.</p>
<p><em>And they’re not coming. They’re already here.</em></p>
<h3 id="heading-the-mechanics-of-self-improving-intelligence"><strong>The Mechanics of Self-Improving Intelligence</strong></h3>
<p>Self-enhancing AI isn't some <em>buzzword tech term</em>, rather, it's a revolution in the way machines learn. While other AIs patiently sit around until humans tinker with them to improve them, <strong>self-enhancing programs</strong> can <em>analyze their own failures</em> and <strong>fix themselves</strong>. Think about it like the difference between a violin that must be tuned by its player versus a violin that could hear itself playing off-key and tune itself. <strong>AlphaGo Zero</strong> is the poster child for this ability. Told only the rules of Go, it played millions of games of Go against itself, refining strategies that defeated human masters and its own predecessor AlphaGo hands-down, all within three days. This was not simply a computer beating at a game; it was a computer <em>learning to think</em> in ways its own developers couldn't.</p>
<p>But <em>how</em> does a machine like AlphaGo Zero teach itself strategy from scratch? To understand that, we need to look under the hood, at the architecture powering its learning process.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXeib-dI_oqD0FjP0IiGdLyf_HLQ1QRCFEAp24sh0Q75PDEN9GdVrcOgkJVCKrnpu46aT-lAY0luvbrYsa-5Ysi0xqu5x2NKzJ21sIFr4YwkZrfvZJiVyegkxqSL9TOxcM2sqXOy-Q?key=HZAyl4UVtmjv1wKxYDZDoqHO" alt /></p>
<h3 id="heading-a-under-the-hood-how-machines-learn-to-think"><strong>A) Under the Hood: How Machines Learn to Think</strong></h3>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXeppCNNsh1AtAEZB3kMtJFh8nDGK30lc2EBC5Y-tgUJW_7oxDFfCXDYTRwmTE5QBP3-GA_Q8Db2A88fsoSyQbC8EVHqJB1MPnVANxR3UKnVz_Ub2K-kisCzhUZtgKmQCcANw0Mwjg?key=HZAyl4UVtmjv1wKxYDZDoqHO" alt /></p>
<p>At the heart of most deep learning systems lies a process that sounds <em>deceptively simple:</em> <strong>Forward Propagation</strong>. It’s the <em>fundamental engine</em> that lets a neural network take in information, process it layer by layer, and spit out a prediction. But beneath that simplicity is a <em>cascade of calculations that mimic</em>, in their own alien way, <em>how we humans make decisions.</em></p>
<p>Imagine a neural network as a <em>towering system</em> of interconnected nodes, or <strong>Perceptrons</strong>. Each perceptron takes input from the layer before it, <strong>multiplies each input by a learned Weight</strong>, adds a <strong>Bias</strong>, and then pushes the result through an <strong>Activation Function</strong>, a kind of <em>yes-or-no</em> gate that helps the network decide what to keep, what to discard, and what to pass forward.</p>
<p>This ripple of calculation flows from the Input Layer through <strong>Hidden Layers to the Output Layer,</strong> and the entire process, <em>this thinking cascade</em>,  is what we call <strong>Forward Propagation.</strong></p>
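<p>As a rough sketch of that cascade, not any real framework, just plain Python with made-up weights, a perceptron and a forward pass might look like this:</p>

```python
import math

def perceptron(inputs, weights, bias):
    """One node: weighted sum of inputs plus bias, pushed through a sigmoid
    activation (the 'yes-or-no' gate that decides what passes forward)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-z))

def forward_propagate(x, layers):
    """Ripple the input through each layer in turn: forward propagation."""
    for layer in layers:
        x = [perceptron(x, weights, bias) for weights, bias in layer]
    return x

# Toy network: 2 inputs -> 2 hidden perceptrons -> 1 output (invented weights)
layers = [
    [([0.5, -0.6], 0.1), ([0.8, 0.2], -0.3)],  # hidden layer
    [([1.0, -1.0], 0.0)],                      # output layer
]
print(forward_propagate([0.7, 0.1], layers))
```

In a trained network the weights and biases aren’t invented, of course; they are exactly what learning adjusts, using the errors flowing back the other way.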
<p>But AlphaZero didn’t just need to <em>think</em>. It had to <em>see</em>. It had to understand the shifting spatial patterns of a Go board or a chess game the way a human grandmaster might glance at the board and feel the weight of the future in a single shape.</p>
<p>That’s where <strong>Convolutional Neural Networks (CNNs)</strong> come in, an invention that dates back to <strong>1989</strong>, when <em>Yann LeCun</em> gave machines a better way to interpret the visual world. CNNs are purpose-built for <em>pattern recognition</em> in grid-like data: images, game boards, or anything where <strong>space and shape matter.</strong></p>
<p>A CNN is made of <strong>three key types of layers</strong>, each playing a different role in the machine’s perception:</p>
<ol>
<li><p><strong>Convolutional Layers act like digital eyes</strong>. They slide tiny filters over the data, scanning for patterns: edges, corners, clusters, the way our brains recognize the shape of a face or the corner of a bishop’s move.</p>
</li>
<li><p><strong>Pooling Layers compress what’s been seen</strong>, keeping what matters and dropping what doesn’t. They help the system focus, distilling the data into its most meaningful essence.</p>
</li>
<li><p><strong>Fully Connected Layers come in at the end to pull everything together.</strong> They <em>weigh the possibilities and make a decision,</em> often with a final mathematical whisper like softmax, declaring what the network believes it just saw, or what move it should make.</p>
</li>
</ol>
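<p>The first two layer types can be sketched in a few lines of dependency-free Python. The toy 5×5 “board”, the vertical-edge filter, and all the numbers below are invented purely for illustration:</p>

```python
def convolve2d(image, kernel):
    """Slide a small filter over the grid, taking a dot product at each spot
    ('valid' mode: the filter never hangs off the edge)."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(len(image[0]) - kw + 1)]
            for i in range(len(image) - kh + 1)]

def max_pool(fmap, size=2):
    """Keep only the strongest response in each size x size patch."""
    return [[max(fmap[i + di][j + dj]
                 for di in range(size) for dj in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

# A 5x5 'board': zeros on the left, ones on the right
image = [[0, 0, 1, 1, 1]] * 5
edge_kernel = [[-1, 1]]            # lights up wherever 0 meets 1
fmap = convolve2d(image, edge_kernel)
pooled = max_pool(fmap)            # compressed summary of where the edge is
```

Each feature-map row comes out as <code>[0, 1, 0, 0]</code>: the filter fires exactly at the boundary, and pooling then shrinks that map while keeping the “edge found here” signal.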
<p>This <em>architecture is the canvas</em> on which <strong>AlphaZero</strong> painted its alien genius. From medical diagnostics to video games to ancient board games, <em>CNNs have become the lens</em> through which machines begin to understand the world.</p>
<p>And with that lens sharpened, we can now look at how AlphaZero used it, not just to mimic intelligence, but to <em>create something startlingly new.</em></p>
<h3 id="heading-b-how-alphago-zero-thinks"><strong>B) How AlphaGo Zero Thinks</strong></h3>
<p><strong>AlphaGo Zero</strong> isn’t just a faster engine or a smarter chess bot. It’s a completely different species of intelligence, <em>one that sees, thinks, and evolves</em> through a delicate dance between <strong>deep learning and tree search.</strong></p>
<p>At the heart of its brilliance lies the fusion of two powerful components: a <strong>Convolutional Neural Network</strong> and a <strong>Monte Carlo Tree Search (MCTS)</strong>. And what’s even more remarkable? It was trained entirely through <em>self-play</em>, no human data, no expert games, just pure, relentless iteration. A <em>machine playing itself to perfection.</em></p>
<p>Before we dive deeper, let’s get acquainted with the language AlphaGo Zero thinks in:</p>
<ol>
<li><p><strong>State (sₜ):</strong> This is the <em>board at any point in time,</em>  from the opening move (s₀) to the final position (sₜ) where the game ends.</p>
</li>
<li><p><strong>Move (aₜ):</strong> At each state, the AI selects a move aₜ, <em>its action, its decision,</em> based on probability, not instinct.</p>
</li>
<li><p><strong>Search Probability (πₜ):</strong> This is where the Monte Carlo Tree comes in. πₜ is a <em>probability distribution over possible moves,</em> calculated through countless simulations. It’s how the AI chooses not just <em>a</em> move, but the <em>best</em> one.</p>
</li>
<li><p><strong>Monte Carlo Tree (αₜ(θ)):</strong> This is the AI’s mental model of the future, <em>a branching tree of possible outcomes,</em> weighted by their likelihood. It explores, evaluates, and narrows down on the path most likely to lead to victory.</p>
</li>
<li><p><strong>Convolutional Neural Network (fθ):</strong> This is AlphaGo Zero’s brain. It takes the raw board state as input and produces two crucial outputs:</p>
<ol>
<li><p><strong>The Value Scalar (vₜ):</strong> <em>a single-number prediction of who’s winning from this position.</em> Not in terms of points, but in terms of destiny.</p>
</li>
<li><p><strong>The Policy Vector (pₜ):</strong> <em>a roadmap of which moves are promising</em>, assigning a probability to each.</p>
</li>
</ol>
</li>
<li><p><strong>Winner (z):</strong> At the end of the game, when the dust settles, z is determined, the    final verdict. That outcome is then <em>back-propagated</em> through the network to refine its understanding, like a player reflecting on every decision made.</p>
</li>
</ol>
<p>What’s groundbreaking here is not just the components,  it’s how elegantly they loop together. <em>The neural network guides the tree search.</em> The tree search picks the next move. The result of the game trains the neural network. It’s a perfect feedback loop: <em>play, learn, repeat</em>. And with each cycle, the machine gets stronger, not by imitating humans, but by discovering strategies even we don’t fully understand.</p>
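<p>To make that loop concrete, here is a deliberately silly toy, emphatically <em>not</em> AlphaGo Zero: a fake three-move “game” where the outcome of each self-played round nudges a table of move preferences, closing the play → outcome → update cycle. Every number and rule below is invented:</p>

```python
import random
random.seed(0)

def search(state, prefs):
    """Stand-in for MCTS: turn raw preferences into search probabilities pi."""
    total = sum(prefs.values())
    return {move: p / total for move, p in prefs.items()}

def self_play_and_train(prefs, games=500):
    for _ in range(games):
        history, state = [], 0
        while state < 3:                       # a toy 'game' lasting 3 moves
            pi = search(state, prefs)          # search guides the move choice
            move = random.choices(list(pi), weights=pi.values())[0]
            history.append((state, move))
            state += 1
        # Invented rule: opening with move "a" wins (+1), otherwise loses (-1)
        z = 1 if history[0][1] == "a" else -1
        for state, move in history:            # feed the outcome back into
            prefs[move] = max(0.01, prefs[move] + 0.05 * z)  # the 'network'
    return prefs

prefs = {"a": 1.0, "b": 1.0}                   # starts with no opinion at all
trained = self_play_and_train(prefs)           # play, learn, repeat
```

After a few hundred self-played games, the preference for the winning opening dwarfs the other, with no human examples ever shown: the same feedback shape AlphaGo Zero uses, minus roughly all of the sophistication.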
<p>This recursive self-enhancement is what binds these systems to the holy grail of Artificial General Intelligence. If one AI can expand its own abilities, and each improved variant can further enhance itself, then we may see an <em>"intelligence explosion"</em> where machine cognition outgrows human capabilities across thousands of domains. <em>The stakes are as exciting as they are chastening.</em></p>
<p>AlphaGo Zero didn’t need millions of expert moves. It only needed the rules, the board, and time. <em>And somehow, that was enough.</em></p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXcQP7o4hqx6WaIq-XYhKCan1-ejui7uJmHtMJsGUXx9h7HFCBX0HnXpTlQ0XTuzOjxwDr9y44_2ypNCQ1sQJUX-6OKXnju3f5oSsIlCcwtDlM9_wHjTJyRkesxr7YOzw_t4Kxzb?key=HZAyl4UVtmjv1wKxYDZDoqHO" alt /></p>
<h3 id="heading-promise-and-threat"><strong>Promise and Threat</strong></h3>
<p>The age of self-improving AI is no longer confined to games. DeepMind has already expanded AlphaZero’s core ideas to real-world problems. <em>AlphaFold</em> revolutionized protein structure prediction, <em>AlphaTensor</em> optimized matrix multiplication, and <em>AlphaDev</em> discovered faster sorting algorithms. Their latest creation, <em>AlphaEvolve</em>, uses evolutionary methods to generate and refine code, a recursive loop of improvement guided only by outcomes. Each of these represents not just raw power, but a shift toward autonomy: systems that learn, evolve, and shape their own goals.</p>
<p>So what exactly does this emerging class of AI bring to the table, and what should we be wary of?</p>
<p><strong>Strengths of  self-improving AI:</strong></p>
<ul>
<li><p><em>Exponential problem-solving ability</em> - It can catalytically speed up areas such as medical science by acquiring knowledge from immense amounts of data in the form of experiments, data points, and patient outcomes while constantly reconfiguring its learning method.</p>
</li>
<li><p><em>Faster and more efficient processing</em> - AI computers can process much faster than humans, allowing for quick analysis and solution creation.</p>
</li>
<li><p><em>Improved pattern recognition</em> - AI is able to recognize intricate patterns and come up with solutions that might take humans generations to discover, not only working faster but perhaps wiser.</p>
</li>
<li><p><em>Solving high-complexity issues</em> - AI is capable of solving gargantuan problems with variables so vast and interconnected that human minds cannot understand or solve them efficiently.</p>
</li>
<li><p><em>Self-improvement on a continuous basis</em> - AI systems are able to improve their own learning capacity, connecting progressively better dots and gaining insights with each passing moment.</p>
</li>
</ul>
<p><strong>Weaknesses and Threats of AI:</strong></p>
<ul>
<li><p><em>Alignment uncertainty -</em> While AI systems adapt their own goals and approaches, there is no way to ensure that they will still be aligned with human well-being and values.</p>
</li>
<li><p><em>Black box problem -</em> AI systems improve to become more and more mysterious and opaque, such that their decision-making becomes hard to explain or predict.</p>
</li>
<li><p><em>Risk of unintended consequences -</em> The disconnect between the capabilities of AI and what humans know poses risks of negative consequences not envisioned or prepared for.</p>
</li>
<li><p><em>Loss of human control -</em> With increasingly advanced and self-modifying AI systems, human monitoring and intervention will become ever more difficult or even impossible.</p>
</li>
</ul>
<h3 id="heading-voices-shaping-the-future"><strong>Voices Shaping the Future</strong></h3>
<p>I looked up what some of the greatest minds of our generation have to say about self-improving AI, and no doubt, there’s an ongoing debate surrounding it. <em>Sam Altman, the CEO of OpenAI,</em> underlines that "if we can figure out the alignment problem, self-improving AI systems could help solve humanity's greatest challenges. If we can't, we're in trouble." <em>Demis Hassabis, CEO of DeepMind Technologies,</em> regards these systems as "potentially the most important technology humanity has ever developed," while still advocating for human values at their heart.</p>
<p>Approaching the topic with more caution, <em>Eliezer Yudkowsky, an American AI researcher,</em> cautions that "once a system is self-improving, human control becomes increasingly tenuous. We get one chance at designing the initial conditions and constraints correctly." <em>Stuart Russell, a British computer scientist,</em> suggests developing AI that is uncertain about human preferences and thus driven to learn from human feedback. <em>Fei-Fei Li, creator of ImageNet and often called the Godmother of AI,</em> reminds us all that "AI's purpose is to augment human capabilities, not replace them."</p>
<p>Together, these voices paint a picture that is both inspiring and daunting. We stand on the brink of creating something with extraordinary power, one that requires careful guidance and responsibility to ensure it benefits humanity.</p>
<h3 id="heading-the-poetics-and-perils-of-self-improving-ai"><strong>The Poetics and Perils of Self-Improving AI</strong></h3>
<p>Having traced this terrain of possibility and anxiety, I find myself on the brink, gazing out at a vista both thrilling and frightening. Self-improving AI is perhaps the <em>most impactful technology</em> we have ever considered. It might unleash cures for centuries-old diseases, solutions to global warming, and scientific breakthroughs that could rewrite our understanding of the cosmos. There is something deeply poetic about developing intelligence that is able to generate still more intelligence.</p>
<p>But I cannot help feeling trepidation. The difference between healthy self-betterment and out-of-control self-tweaking feels precariously thin. We're trying to build systems that will quickly work at levels of complexity we can hardly fully understand. This isn't fear of technology; <em>it's a realization that intelligence, once let loose, can take on momentum that is hard to steer.</em></p>
<h3 id="heading-beyond-the-horizon-of-human-understanding"><strong>Beyond the Horizon of Human Understanding</strong></h3>
<p>What impresses me the most is that we are in a <em>singular moment in human history: precarious, luminous, and irreversibly potent.</em> We have the potential to be creating beings that someday will comprehend things that we cannot. There is awe in that, but also humility, and a quiet responsibility. As we venture further into this space, our task is not to fear what is smarter than us, but to ensure that as our intelligence grows, our wisdom deepens alongside it. The question is not whether machines will get more intelligent than us</p>
<p><em>they probably will, in most areas, but whether they'll reflect the values that make intelligence desirable in the first place.</em></p>
<p>As we stand on the brink of unprecedented technological advancements, it’s crucial to reflect on the kind of future we want to build. <em>Can we instill in our machines the very best of human qualities, or is wisdom a trait that must remain uniquely human?</em></p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXeaAzRUTmJfsUlRmV7S_oiXP3kicdIdQa2kwJYMDk53bhjxiAWjPd2q6EJiDuFK8buSrwNsHAyD6iJgyM_wsXdUAvubdy__Al-UvyqL-s3qcBELAQbKTO0oXCGmoy-Ya8Eb8SEx?key=HZAyl4UVtmjv1wKxYDZDoqHO" alt /></p>
]]></content:encoded></item><item><title><![CDATA[Blockchain Took Over My Bank Account (And I Kind of Liked It)]]></title><description><![CDATA[Have you ever tried sending money abroad and watched it take days? Or waited in line at a bank just to fill out a form and prove that yes, you exist? Traditional finance works, yes - but it’s slow, full of middlemen, and rarely feels like it’s built ...]]></description><link>https://blog.acmvit.in/defi-101</link><guid isPermaLink="true">https://blog.acmvit.in/defi-101</guid><category><![CDATA[defi]]></category><category><![CDATA[Blockchain]]></category><category><![CDATA[finance]]></category><category><![CDATA[DeFi]]></category><dc:creator><![CDATA[Shaurya Garg]]></dc:creator><pubDate>Tue, 03 Jun 2025 09:48:22 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1748943970327/259c6138-d58e-4ea5-b8d3-0ef2edef354b.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Have you ever tried sending money abroad and watched it take days? Or waited in line at a bank just to fill out a form and prove that yes, you exist? Traditional finance works, yes - but it’s slow, full of middlemen, and rarely feels like it’s built for us.</p>
<p>Now imagine a world where money moves at internet speed, where there are no bank holidays, no paperwork, and no one’s asking you for your salary slips.</p>
<p>Sounds far-fetched? It’s already happening, quietly. And it’s all on the blockchain.</p>
<p>When people hear ‘blockchain finance’, they often imagine crypto chaos, either the thrill of overnight riches or the despair of losing it all to a meme coin. However, beyond this noise and hype, blockchain is quietly taking over how we manage money, and honestly, I’m kind of here for it.</p>
<p>In the time it took you to read the above sentence, someone just borrowed a million dollars on the blockchain, and that too without any paperwork, credit score, or even an identity.</p>
<p>So, how is that even possible? Forget bankers in suits - the future of money wears a hoodie and runs on code.</p>
<p>Welcome to DeFi, short for “<strong>Decentralized</strong> <strong>Finance</strong>”. Think of it as a new way of handling money: no banks, no paperwork, just code - programs that handle money like a digital banker, but with zero bias and no lunch breaks.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748933331249/5c699eec-38e0-4462-b047-7b24bbf75d9a.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-what-exactly-is-defi">What exactly is DeFi?</h2>
<p>At its core, DeFi is a network of financial applications that are built on blockchain, mostly Ethereum, and replace traditional institutions like banks or stock exchanges.</p>
<p>These apps use smart contracts: <em>self</em>-<em>executing</em> <em>bits</em> <em>of</em> <em>code</em> <em>with</em> <em>the</em> <em>rules</em> <em>written</em> <em>into</em> <em>them</em>.</p>
<p>DeFi is what happens when you take the power of a bank - lending, borrowing, earning interest, and hand it over to a code that lives on the blockchain. No gatekeepers, no bank queues, and no suit-wearing finance overlords. Just money with fewer middlemen (or none at all) and a lot more math.</p>
<h3 id="heading-so-what-can-you-do-with-defi">So, what can you do with DeFi?</h3>
<p>Now, banks are out of the picture, and paperwork has been replaced by smart contracts, but what’s actually in it for you? Well, turns out, quite a lot!</p>
<p><strong><em>Lend and Borrow</em></strong><br />You can lend your crypto and earn interest, or even borrow against it - all without a credit score, identity check, or an awkward bank meeting. Just connect your wallet and you’re good to go.</p>
<p>It’s like depositing your rare game skin for instant cash - only the pawn shop is pure code and it’s open 24/7.</p>
<p><strong><em>Earn Interest</em></strong><br />DeFi platforms let you have a passive income by depositing your crypto into liquidity pools. The returns can be wild - sometimes generous, sometimes just plain volatile.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748933466890/6575f1b5-bf6c-4e08-b86b-6d7c5cc04acf.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-whats-cooking-in-defi">What’s cooking in DeFi?</h2>
<p><strong><em>(A sneak peek into the weirder, wilder parts of the DeFi finance world)</em></strong></p>
<p>Alright, so we’ve seen the basics. But DeFi isn’t just a savings account and loan alternative. It’s a full-blown experimental lab of financial ideas - some genius, some unhinged, all fascinating.</p>
<h3 id="heading-yield-farming-because-interest-alone-wasnt-enough"><em>Yield Farming (because interest alone wasn’t enough)</em></h3>
<p>In DeFi, there are no savings accounts. Instead, we have yield farming. Sounds agricultural? Good, because it’s about planting your crypto into weirdly named liquidity pools and praying the harvest doesn’t disappear overnight.</p>
<p>It’s high risk, high reward, and for many, it’s part of the thrill. You’re not trusting a banker in a suit, you’re trusting code. Smart contracts do exactly what they’re programmed to do without any bias, breaks, or backdoors (unless someone messed up the math).</p>
<h3 id="heading-daos-the-worlds-weirdest-group-chats"><em>DAOs (the world’s weirdest group chats)</em></h3>
<p>DAOs, or Decentralized Autonomous Organizations, are the governance layer of DeFi and can be thought of as crypto-native clubs where decisions are made collectively, powered by tokens instead of titles.</p>
<p>Here’s how it works:</p>
<p>You buy a token -&gt; You get voting power -&gt; You help decide what happens.</p>
<p>It’s like a group project, except here:</p>
<ul>
<li><p>There’s no leader.</p>
</li>
<li><p>Everyone votes.</p>
</li>
<li><p>And the budget might just be $200 million.</p>
</li>
</ul>
<p>Some DAOs fund startups, some buy NFTs, and one even tried to buy the US Constitution.</p>
<p><strong>Random question:</strong> Would you trust a multi-million-dollar treasury run completely on Discord?<br />Well, thousands already do. Welcome to Web3 governance.</p>
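<p>The buy token → voting power → vote flow above is simple enough to sketch. Here’s a toy, off-chain Python version (all names hypothetical - real DAOs live in smart contracts on-chain, not in scripts):</p>

```python
# A minimal sketch of token-weighted DAO voting: token balances
# stand in for governance tokens, and voting power is 1:1 with holdings.

class ToyDAO:
    def __init__(self):
        self.balances = {}                 # member -> token count
        self.votes = {"yes": 0, "no": 0}   # running tally

    def buy_tokens(self, member, amount):
        # buying tokens grants voting power, one token = one vote
        self.balances[member] = self.balances.get(member, 0) + amount

    def vote(self, member, choice):
        # no leaders, no titles - just token-weighted votes
        self.votes[choice] += self.balances.get(member, 0)

    def result(self):
        return "passed" if self.votes["yes"] > self.votes["no"] else "rejected"

dao = ToyDAO()
dao.buy_tokens("alice", 100)
dao.buy_tokens("bob", 40)
dao.vote("alice", "yes")
dao.vote("bob", "no")
print(dao.result())  # alice's 100 tokens outvote bob's 40
```

<p>Swap the dict for a smart-contract ledger and the <code>print</code> for an on-chain execution step, and you have the skeleton of Web3 governance.</p>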
<h3 id="heading-flash-loans-borrow-first-ask-never"><em>Flash loans (borrow first, ask never)</em></h3>
<p>Arguably DeFi’s most chaotic flex: flash loans. These are uncollateralized loans that let you borrow millions instantly, but with one huge catch: you have to repay the full amount (plus a fee) within the same blockchain transaction.</p>
<p>What if you don’t?</p>
<p>The entire transaction gets reversed as if it never happened. Thanks to the atomic nature of blockchain transactions, either everything succeeds or nothing does - so there’s no risk to the lender. But it also means you’d better know what you’re doing.</p>
<p>It may sound illegal, but it’s not. It's just pure math.</p>
<p>It’s like borrowing a Ferrari, racing it across the city, flipping it for profit, and returning it - all before the traffic light turns green. If you messed up, the whole drive never happened.</p>
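<p>The borrow-use-repay-or-revert loop can be sketched in plain Python (a toy model with hypothetical names - real flash loans live in smart contracts, and the rollback is enforced by the blockchain, not a <code>try</code> block):</p>

```python
# Toy sketch of flash-loan atomicity: the pool lends with zero collateral
# because the whole operation is undone unless it is repaid in full.

class RevertTx(Exception):
    """Raised to undo the transaction - as if it never happened."""

class LendingPool:
    def __init__(self, funds):
        self.funds = funds

    def flash_loan(self, amount, fee, strategy):
        before = self.funds
        self.funds -= amount                  # lend instantly, no questions
        try:
            proceeds = strategy(amount)       # borrower does their thing
            repayment = amount + fee
            if proceeds < repayment:
                raise RevertTx("can't repay")
            self.funds += repayment           # repay principal + fee
            return proceeds - repayment       # borrower keeps the rest
        except RevertTx:
            self.funds = before               # atomic: roll everything back
            return 0

pool = LendingPool(funds=1_000_000)

# an arbitrage that works: sell for 1% more than you borrowed
gain = pool.flash_loan(500_000, fee=450, strategy=lambda amt: int(amt * 1.01))

# a failed strategy leaves the pool exactly where it started
pool.flash_loan(500_000, fee=450, strategy=lambda amt: amt - 1)
print(pool.funds, gain)  # pool is whole, borrower kept the spread
```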
<p>So, what are flash loans used for?</p>
<ul>
<li><p><strong>Arbitrage</strong>: Take advantage of price differences across exchanges</p>
</li>
<li><p><strong>Collateral</strong> <strong>swaps</strong>: Instantly swap the collateral backing your loan from one crypto to another, without selling your assets or logging into multiple apps - all in one go.</p>
</li>
<li><p><strong>Protocol</strong> <strong>Exploits</strong>: (unfortunately) some use them to manipulate markets or drain poorly written contracts</p>
</li>
</ul>
<p>Some made millions. Some crashed entire systems. Either way, they showed that programmable money can do a lot - and do it fast.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748933697234/f2d29539-df5f-437c-85a2-fefe346fa323.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-whats-the-catch-the-ugly-side"><strong>What’s the Catch? <em>(the ugly side)</em></strong></h2>
<p>It’s fun and futuristic until someone loses their wallet keys or gets rug-pulled. These systems don’t forgive. If you make a mistake, there’s no “forgot password” - just on-chain regret. In DeFi, you don’t need permission. But you also don’t get protection.</p>
<p>No banks to call. No customer support to cry to. You are your own bank.</p>
<p>So, here are a few red flags worth knowing:</p>
<p><strong><em>Rug Pulls</em></strong> That hot new token promising 1000x gains? Turns out it was created by a guy with the username BoneyChicken010 who just drained the liquidity and disappeared.</p>
<p><strong><em>Ponzi-ish Tokens</em></strong> Many projects promise insane returns - funded not by profits, but by new users buying in. Does it ring a bell?</p>
<p><strong><em>Overhyped NFTs</em></strong> Yes, including the infamous Bored Apes. Some sold for millions. Others? Well, let’s just say someone’s retirement plan now lives in a JPEG folder.</p>
<p><strong><em>User Error and Hacks</em></strong> Lose your private keys, and it’s all over. Sign the wrong transactions and it’s all gone. Smart contracts may be smart, but hackers are smarter. And bugs don’t come with refunds.</p>
<p>DeFi gives you freedom. But freedom comes with responsibility.<br />The guardrails are off. You’re not just using finance.<br /><strong>You are the finance.</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748933909937/bbb194de-bdc2-417e-8068-593b1ed47322.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-where-is-it-headed"><strong>Where is it headed?</strong></h2>
<p><strong><em>(from hoodie traders to hedge funds - DeFi’s next chapter)</em></strong></p>
<p>What started as a playground for crypto nerds and internet anarchists is now catching the eyes of banks, billionaires, and even governments. Institutions that once laughed off crypto are now stepping into DeFi, quietly exploring how smart contracts can make money move faster and cheaper.</p>
<p>It’s not just about meme coins anymore. Real-world assets, be it real estate, corporate bonds, or even art - all are being tokenized and brought onto the blockchain. It’s like listing your apartment on a digital ledger so it can be bought, sold, or borrowed against with just a few clicks. Weird? Definitely. But also kind of genius.</p>
<p>Of course, all this growth also brings the not-so-fun part: regulation. Governments are scrambling to tame this wild new world. Some rules might clean up the mess, while others might just kill the vibe. But either way, the regulators are coming (and fast).</p>
<p>India’s crypto scene is cautiously optimistic. Sure, taxes and red tape slow things down, but builders are still building. Startups are rising. Hackathons are buzzing. And somewhere, a college kid is launching the next DeFi app during a lecture.</p>
<p>We may be playing it safe, but we’re definitely playing.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p><strong><em>(finance, rewritten)</em></strong></p>
<p>So yeah, DeFi isn’t just some crypto cult yelling “Wen moon?” on X. It’s code, it’s chaos, it’s creativity, and it’s quietly rewriting how money works.</p>
<p>The best part? You don’t need a finance degree or a Wall Street internship to join in. Just curiosity and maybe a half-decent internet connection.</p>
<p>So don’t just close this tab and move on. Poke the system.<br />Fall down a DeFi rabbit hole. Click buttons you barely understand (on a testnet, please).<br />Ask the questions no one else is asking.<br />Lose fake money, learn real things.<br /><em>Because finance isn’t just changing - it’s being rewritten in real-time.</em></p>
]]></content:encoded></item><item><title><![CDATA[The Metaverse’s Cousin You Didn’t Know Existed: Meet the Digital Twin]]></title><description><![CDATA[Have you ever been fascinated by the idea of parallel universes? Just imagine that there is a version of yourself existing somewhere in the darkness, a few light years away. One whose tiniest choices ripple through time, quietly altering the course o...]]></description><link>https://blog.acmvit.in/the-metaverses-cousin-you-didnt-know-existed-meet-the-digital-twin</link><guid isPermaLink="true">https://blog.acmvit.in/the-metaverses-cousin-you-didnt-know-existed-meet-the-digital-twin</guid><category><![CDATA[#acmw]]></category><category><![CDATA[Digital Twin ]]></category><category><![CDATA[tesla]]></category><category><![CDATA[ACM]]></category><dc:creator><![CDATA[Yashika]]></dc:creator><pubDate>Thu, 29 May 2025 09:49:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1748501257513/c782dc4a-0b29-4100-8255-df12e7ed3a7f.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Have you ever been fascinated by the idea of parallel universes? Just imagine that there is a <em>version of yourself</em> existing somewhere in the darkness, a few light years away. One whose tiniest choices ripple through time, quietly altering the course of everything. It’s definitely a mind bending concept, alternate realities shaped by different decisions. But what if I told you we’re already building something eerily similar right here on Earth?</p>
<p>Let’s take, for instance, the catastrophic explosion aboard the Apollo 13 mission in 1970, with three astronauts stranded in space - nothing less than a <em>real-life space drama</em> filled with unpredictability. Not only were they lost in the vastness of space, but they were also suspended in a life-or-death struggle with no guarantee of return. No Google Maps, No Wi-Fi, just <em>pure nerve</em>. But ever wondered how despite all the shortcomings, NASA managed to pull off this rescue from 240,000 miles away?</p>
<p>The answer lies not in magic or luck but in something far more brilliant. In a move that feels more science fiction than reality, the engineers created a full-scale physical replica of the spacecraft on Earth, mirroring its setup and running through every possible <em>“what if”</em> scenario until they found the path home. In essence, they created a parallel version of the Apollo 13 environment, one that existed safely on Earth, where every move could be tested, analyzed and perfected. This physical clone became the foundation for what we now call the <strong>Digital</strong> <strong>Twin</strong>.</p>
<p>Digital Twins are high-fidelity virtual counterparts of real world systems that predict glitches before they happen and ensure their smooth running under unpredictable environments. Like any doppelgänger (yes, we’re talking about <em>Katherine Pierce</em> and <em>Elena Gilbert</em> here), they quietly mirror the real world, learn its moves and step in just in time to prevent chaos. Minus the drama that this look-alike won’t actually <em>ruin your life or steal your boyfriend</em>, of course.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXdE_ok2j0e4pCs0E1ls20X_ClT3Stc8I71tlJcPK258nweghEC0iH80lJUkRrCWMRVefWFoq4w51A5uPlbid9U_85U14zVMXI4A7qhPU_QwkRFAn9B4OkvOBBwx339dCseWfD-XnA?key=KfJO7KIooopoQi0hECj5zl2n" alt /></p>
<p>But don’t let these pop culture references fool you. Fast forward to the present, and this tech isn’t just for saving astronauts. It’s the brain behind cities running faster, engines adapting mid-flight and yes, even those lightning-fast racing cars. Quietly, it's reshaping industries and steering us towards a future where virtual twins keep the real world in sync.</p>
<p>So, the next time your day starts to feel too smooth, maybe it’s not just chance. Maybe it’s your digital twin, <em>watching, waiting and making sure everything stays just right.</em></p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXcXLkgBGk7dupxBEq792FWVx7K91A8u61WirZFNGMXlBdfD0pFYzzWTOugtZKH1c3sChxhSXmioLe7aFJBrZngeU4w__cXvNYVjAqouYEFtfS402yB1yoXAitZfEA5l5kq8L8EXjg?key=KfJO7KIooopoQi0hECj5zl2n" alt /></p>
<h2 id="heading-where-data-meets-design"><strong>Where Data Meets Design</strong></h2>
<p>The gadgets we carry aren’t just tools anymore. They’re like detectives constantly <em>watching</em> and learning from us. They track our steps, sleep patterns, music choices, and even how we scroll through apps. It’s like they’re quietly building a digital version of <em>you</em>. But collecting information is only one part of this story. The real magic kicks in when this information is used to learn, adapt, and predict what you might do next, more like that best friend who knows you inside out.</p>
<p>Suppose the device we use is a <em>student</em>. For it to behave like a digital twin, it relies on two main learning methods.</p>
<ul>
<li><p><strong>Physics-based Models:</strong> They use math and scientific rules to simulate how the real world works like predicting how a bridge might sway in the wind.</p>
</li>
<li><p><strong>Data-driven Models:</strong> They get smart by consuming massive amounts of real-world data, think of it as learning by example, like how your phone recognizes your face.</p>
</li>
</ul>
<p>Together, these learning methods help your gadgets go from just “tracking” to actually understanding you.</p>
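<p>To make the two learning methods concrete, here’s a toy sketch (illustrative only, not any real twin framework): a physics-based formula and a model fitted purely from observed samples both predicting how long an object takes to fall from a given height.</p>

```python
import math

# Two ways a digital twin can "learn" the same system: from first
# principles (physics-based) or from observed examples (data-driven).

G = 9.81  # gravitational acceleration, m/s^2

def physics_model(h):
    # physics-based: derived from the equation of motion, t = sqrt(2h / g)
    return math.sqrt(2 * h / G)

# data-driven: no physics knowledge, just (height, fall-time) observations
samples = [(h, math.sqrt(2 * h / G)) for h in (1, 2, 5, 10, 20)]

def data_driven_model(h):
    # learn t ≈ k * sqrt(h) by least squares over the samples
    num = sum(t * math.sqrt(hh) for hh, t in samples)
    den = sum(hh for hh, _ in samples)
    k = num / den
    return k * math.sqrt(h)

# both roads lead to the same prediction for an unseen height
print(physics_model(15), data_driven_model(15))  # both ≈ 1.75 s
```

<p>With clean data the two agree exactly; real twins earn their keep when the data is noisy and the physics is incomplete, which is why they usually blend both.</p>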
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748499521999/3319a05a-1689-4f5d-b307-257d12ef4e9a.png" alt /></p>
<p>Let’s take a scenario. Imagine your navigation app isn’t just a map, but your personal sidekick who actually gets you. It doesn’t judge whether you’re a speed demon or a cautious driver - it just silently watches how you drive, learns your quirks and starts suggesting better routes that fit your style, getting better with each trip. This combination of old-school physics models and smart, real-world learning is called <strong>data</strong> <strong>assimilation</strong>. It might sound like a fancy term, but it simply means “<em>constantly fine-tuning its model every time new info rolls in</em>.”</p>
<p>It keeps track of almost everything: be it your sensor’s data, your not-so-good driving habits or simple system feedback. So instead of a one-size-fits-all generic model, you get a model that’s basically made-to-order, tailored just for you. Like a GPS with a sixth sense and a little attitude, <em>maybe</em>.</p>
<h2 id="heading-the-code-behind-the-clone"><strong>The Code Behind the Clone</strong></h2>
<p>A digital twin isn't just a fancy 3D model that sits pretty on a screen. It starts as a digital <em>model</em> - a visual replica - then adds real-time data along with AI, and suddenly it becomes a digital <em>shadow</em>, watching everything its real-world counterpart does.</p>
<p>And wait, it doesn’t stop there. Once it starts thinking, learning, and predicting… You’ve got yourself a full-fledged “Digital Twin”. It's like the system's tech-savvy stunt double but <em>only smarter, always alert, and never calls in sick.</em> It not only listens to your data in real time but also learns patterns and evolves itself to stay in sync and predict what might happen next.</p>
<p>Now, curious how all this <em>wizardry</em> happens? Let’s dive into the behind-the-scenes steps that make this tech tick.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXfG_71Tn9KJtb1JzNUA0an9sunC1Sg6WTz4S8tQwLMmjJRmRse726TlEBNF2QitCjV2iWsY__wMz9MaDYOmjZvkTgs7rRQ0S0HOl43EXiBTdrabTMZgeWOSvnc0AKVpWK4Xlfk4?key=KfJO7KIooopoQi0hECj5zl2n" alt /></p>
<h3 id="heading-start-with-a-smart-virtual-model"><strong>Start with a Smart Virtual Model</strong></h3>
<p>First, the engineers build a high-fidelity digital replica of the physical system. Using <strong>CAD</strong> (<strong>Computer</strong> <strong>Aided</strong> <strong>Design</strong>) <strong>data</strong> <strong>models</strong>, <strong>FEA</strong> (<strong>Finite</strong> <strong>Element</strong> <strong>Analysis</strong>) and physics-based simulations, it mimics not just how the real asset looks but how it behaves too. It can simulate behavior under mechanical stress, fluid dynamics, thermal changes or even electromagnetic responses.</p>
<p>A great real-world example of this approach is <em>Siemens'</em> Digital Enterprise Suite, which connects product design with real-time production data to create dynamic, self-improving twins. <a target="_blank" href="https://www.siemens.com/global/en/products/automation/topic-areas/digital-enterprise/digital-twin.html?utm_source=chatgpt.com#:~:text=Continuously%20optimize%20product%20and%20production%20with%20the%20comprehensive%20Digital%20Twin"><em>Here’s how they do it</em></a></p>
<h3 id="heading-connect-real-world-sensors"><strong>Connect Real-World Sensors</strong></h3>
<p>Next, to bridge the gap between the physical and the digital worlds, we give the real system some “<em>super senses</em>.” <strong>IoT</strong> <strong>sensors</strong> like the thermocouples, pressure transducers or accelerometers are installed to track temperature, pressure, vibrations and whatever else needs watching.</p>
<p>Frameworks like the <em>Azure IoT Hub or Siemens MindSphere</em> capture this raw data, streaming real-time information seamlessly from the edge into the cloud. These sensors are like its eyes and ears, constantly monitoring and whispering updates to the digital twin.</p>
<h3 id="heading-keep-it-in-sync-with-reality"><strong>Keep It in Sync with Reality</strong></h3>
<p>This step is where the <em>magic</em> (a.k.a. math) happens. The incoming sensor data flows through a data pipeline involving:</p>
<ul>
<li><p><strong>Edge Computing Nodes</strong> - Picture a wind turbine whose vibration sensor sends data every millisecond. The edge nodes like <em>Azure IoT Edge</em> run machine learning models, filtering out the noise and only carrying forward the meaningful vibration spikes.</p>
</li>
<li><p><strong>Stream Processing Frameworks</strong> - Next, these filtered vibration spikes then zoom into platforms like <em>Azure Stream Analytics</em> which quickly route the sensor events and detect unusual patterns sending real-time alerts to engineers.</p>
</li>
<li><p><strong>Time-Series Databases</strong> - Meanwhile, all vibration readings are stored in time-series databases like <em>Azure Data Explorer</em>, letting analysts review historical trends, correlate early warning signs with past failures and train the smarter ML models.</p>
</li>
</ul>
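<p>The three stages above can be mocked up in a few lines of Python (plain-Python stand-ins only - Azure IoT Edge, Azure Stream Analytics and Azure Data Explorer are the real-world counterparts):</p>

```python
# Toy edge -> stream -> storage pipeline for a wind-turbine vibration sensor.

NOISE_FLOOR = 0.5   # edge node: ignore tiny vibrations
ALERT_LEVEL = 2.0   # stream processor: flag unusually large spikes

raw_readings = [0.1, 0.2, 1.4, 0.3, 2.6, 0.1, 0.4, 3.1]  # mm/s, per tick

# 1. edge computing node: drop the noise, forward only meaningful spikes
spikes = [(t, v) for t, v in enumerate(raw_readings) if v > NOISE_FLOOR]

# 2. stream processing: detect anomalous spikes and alert in real time
alerts = [(t, v) for t, v in spikes if v > ALERT_LEVEL]

# 3. time-series store: keep everything that passed the edge for later analysis
timeseries_db = {t: v for t, v in spikes}

print(spikes)   # [(2, 1.4), (4, 2.6), (7, 3.1)]
print(alerts)   # [(4, 2.6), (7, 3.1)]
```

<p>Same shape, industrial scale: the edge trims the firehose, the stream layer reacts instantly, and the database remembers everything for the analysts and the ML models.</p>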
<p>Then comes the brainpower: data fusion and state estimation algorithms like <em>Kalman filters</em> sift through this flood of info, ensuring that the twin isn’t just guessing - it knows exactly what’s going on.</p>
<p>With that clarity, platforms like <em>Azure Digital Twins</em> dynamically update the virtual model and fine-tune it to reflect the real conditions.</p>
<p>Think of it like your twin adjusting its stance every second to stay perfectly in sync.</p>
<p>Lag? <em>Nope</em>.</p>
<p>Outdated info? <em>Not on this twin’s watch.</em></p>
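<p>Here’s what that Kalman-style blending looks like in a minimal one-dimensional sketch (plain Python with illustrative numbers, nothing Azure-specific): a noisy sensor stream gets fused into a steadily cleaner estimate of the true state.</p>

```python
# A 1D Kalman filter: fuse noisy measurements into a clean state estimate.

def kalman_1d(measurements, process_var=1e-4, sensor_var=0.25):
    estimate, error = 0.0, 100.0   # start very uncertain, so data dominates
    history = []
    for z in measurements:
        # predict: the model says the true state drifts only slowly
        error += process_var
        # update: blend prediction and measurement by their uncertainties
        gain = error / (error + sensor_var)    # Kalman gain
        estimate += gain * (z - estimate)
        error *= (1 - gain)
        history.append(estimate)
    return history

# noisy readings of a true value of 10.0 (say, a vibration amplitude)
readings = [9.8, 10.4, 9.9, 10.1, 10.3, 9.7, 10.0, 10.2]
estimates = kalman_1d(readings)
print(estimates[-1])  # hovers near 10.0, smoother than any single reading
```

<p>The gain is the whole trick: when the twin trusts its model, new readings barely move the estimate; when it’s uncertain, measurements pull hard. That’s the “adjusting its stance every second” in one formula.</p>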
<h3 id="heading-predict-prevent-and-optimize"><strong>Predict, Prevent, and Optimize</strong></h3>
<p>Now that the twin is locked in step with reality, it levels up its game. It uses AI-driven diagnostics, ML and neural networks to predict potential issues, simulate scenarios and recommend fixes before things go sideways, all while optimizing performance.</p>
<p>Need to test what happens if a pump fails at 3 a.m.? <em>No worries</em>, <em>the twin's already simulated it multiple times.</em></p>
<p>Tools like <em>Siemens Simcenter or MATLAB</em> perform simulation with varying fidelity levels like:</p>
<ul>
<li><p><strong>Low</strong>-<strong>fidelity</strong> <strong>models</strong> for lightning-fast decisions.</p>
</li>
<li><p><strong>High</strong>-<strong>fidelity</strong> <strong>models</strong> when precision is non-negotiable.</p>
</li>
</ul>
<p>It’s basically your 24/7 virtual engineer, just without the <em>coffee addiction</em>.</p>
<h2 id="heading-why-your-digital-double-wins-every-time"><strong>Why Your Digital Double Wins Every Time?</strong></h2>
<p>Digital twins aren’t just digital blueprints, they’re your <em>smartest team members.</em> Constantly learning, simulating, and predicting, they help businesses run smoother, faster, and safer. Whether it’s reducing waste, cutting costs, or designing better products faster, they’re quietly running the show behind the scenes.</p>
<p>Let’s pull back the curtain and explore how digital twins are powering real-world impact, <em>one smart move</em> at a time.</p>
<p><strong>Smarter Decisions, Fewer Surprises:</strong> Digital twins aren’t just data dashboards - they offer a real-time interactive mirror of operations. This clarity lets teams spot bottlenecks and play out “<em>what if</em>” scenarios before making data-backed decisions.</p>
<p><strong>Powering Predictive Maintenance:</strong> Why wait for things to break? With digital twins, that same live data becomes a <em>crystal ball</em> spotting trouble from a mile away. The result? Fewer disruptions, longer lasting assets and smarter maintenance schedules.</p>
<p><strong>Accelerated Innovations at Lower Costs:</strong> Trial and error is so last decade. Now you can prototype, test and refine products virtually - slashing waste, speeding up timelines and making every resource count at a far more affordable price.</p>
<p><strong>Safety and Scalable Potential:</strong> Whether simulating emergencies or training for the unexpected in virtual environments, these twins prioritize safety above all. And as industries evolve, they grow along with them - from factory floors to entire cities.</p>
<p>From <em>insight to impact</em>, digital twins don’t just reflect reality, they help reshape it.</p>
<h2 id="heading-a-silent-revolution"><strong>A Silent Revolution</strong></h2>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXeisPl6u7rdtcKzRNJWHohrO3eGa-BALFvJnQawi0x9Do27czYtvJ_mF8IniqPDycv6mKaOoZVkkt4l85t3kySjhVitLdD9wk4ipyss3LD2N0pfPTHM7l7a6poozaDDIeww9b5mSA?key=KfJO7KIooopoQi0hECj5zl2n" alt /></p>
<p>Once a space-age idea from NASA’s missions, digital twins have now traded their spacesuits for lab coats, hard hats, and city planning blueprints. What started as a lifesaving innovation in space has now infiltrated our daily lives. From streamlining production lines to reimagining how we plan cities and deliver healthcare, this tech is no longer just futuristic but now fully functional.</p>
<p>Let’s see what digital twins have <em>actually</em> done so far.</p>
<h3 id="heading-a-digital-dream-of-urban-life"><strong>A Digital Dream of Urban Life</strong></h3>
<p>Yes, <em>Singapore has a fully playable 3D version of itself</em>. And no, it's not exactly for you to randomly plop one building next to another - it’s for the government to simulate how traffic moves or how flood water flows in real time.</p>
<p>Virtual Singapore is a <strong>full</strong>-<strong>scale</strong>, <strong>3D</strong> <strong>semantic</strong> <strong>model</strong> of the entire city-state powered by the blend of <strong>GIS</strong> (<strong>Geographic</strong> <strong>Information</strong> <strong>Systems</strong>), <strong>BIM</strong> (<strong>Building</strong> <strong>Information</strong> <strong>Modelling</strong>) and <strong>IoT</strong> <strong>sensors</strong> embedded everywhere from roads to skyscrapers. In Singapore, civil engineers are basically your game devs.</p>
<p>When a natural disaster approaches, the government can easily simulate evacuation routes, test traffic reroutes and model stress on utility lines. This is possible thanks to <strong>CFD</strong> (<strong>Computational</strong> <strong>Fluid</strong> <strong>Dynamics</strong>) simulations and city-scale structural analysis software.</p>
<p>And lastly, the glue which sticks it all together? <em>Azure Digital Twins</em>, which connects live sensor feeds with virtual models.</p>
<p>The city of Singapore doesn’t just exist - it <em>thinks</em>.</p>
<h3 id="heading-teslas-car-has-a-cloud-clone"><strong>Tesla’s Car Has a Cloud Clone</strong></h3>
<p>While you’re busy dreaming, your Tesla car’s twin is wide awake, probably zipping through the virtual streets, dodging some imaginary pedestrians or maybe gossiping with the other Tesla cars.</p>
<p>Tesla doesn’t just build cars - it builds their virtual twins, which learn from your daily driving patterns. From brake buildup on slopes to motor torque under high stress, these simulations are powered by Tesla’s custom-built AI supercomputer, <strong>Dojo</strong>, which, <em>by the way, feels suspiciously borrowed from Tony Stark’s garage</em>. It thinks in parallel and trains self-driving algorithms using real-world sensor data piped through the <em>Apache Kafka</em> streaming system.</p>
<p>The simulation pipeline includes high-fidelity <strong>CAD</strong> (<strong>Computer</strong> <strong>Aided</strong> <strong>Design</strong>) models and a physics-based dynamics engine that simulates both physical and software behavior. These insights feed back into the car via <strong>OTA</strong> (<strong>over</strong>-<strong>the</strong>-<strong>air</strong>) updates, making it smarter with each passing week.</p>
<p>It’s like your Tesla has its <em>own personal JARVIS</em> and you’re just lucky enough to be the driver.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXfctqf_guGKZgmbw-_hMRjzXU9-XbElQWmJxNx4F780A-EnhvNAq-waA79jwzhQlW3ULB5FN-myBU10JXEZkpQISsI9GVL496iGph0aLamSpBy1dPQUPJsNjXC9Ic8S_vNSNSpL1g?key=KfJO7KIooopoQi0hECj5zl2n" alt /></p>
<h3 id="heading-from-twin-sparks-to-global-waves"><strong>From Twin Sparks to Global Waves</strong></h3>
<p>Digital twins are basically the ultimate “<em>clone your homework</em>” hack for the real world, but way smarter and legal. What started as a lifesaver back in the days of Apollo has now expanded to everything from hospitals to smart cities, making sure things run smoother than your favorite video game.</p>
<p>What’s really exciting is how digital twins will evolve next. Beyond just mirroring reality and spotting problems as they happen, in the future, they will predict complex scenarios like simulating entire city ecosystems to test how climate change might affect them. They will become more autonomous and will be able to make real-time decisions without human input, much like a digital co-pilot.</p>
<p>So here’s the deal: when reality gets a digital twin, your imagination becomes the new blueprint. The only limit? How far you're willing to think or how much caffeine you’ve consumed. Either way, it’s your move now. Think smarter, dream bigger, and <em>maybe even have a little fun while you’re at it</em>.</p>
]]></content:encoded></item><item><title><![CDATA[Quantum Time Bomb: When Encryption Stops Working]]></title><description><![CDATA[The Clock is Ticking And Your Data Isn’t Ready
Quantum computers aren’t just lab experiments anymore. They’re real, evolving faster than a ‘Brainrot trend’ , and primed to detonate the encryption protecting almost everything you do online. Imagine ha...]]></description><link>https://blog.acmvit.in/quantum-timebomb</link><guid isPermaLink="true">https://blog.acmvit.in/quantum-timebomb</guid><category><![CDATA[#cybersecurity]]></category><category><![CDATA[quantum computing]]></category><category><![CDATA[Security]]></category><category><![CDATA[Post-Quantum Cryptography]]></category><dc:creator><![CDATA[Harshit Narang]]></dc:creator><pubDate>Wed, 14 May 2025 04:46:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1747196459400/208b278b-a11b-47ba-88e2-2852983aa480.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-the-clock-is-ticking-and-your-data-isnt-ready"><strong>The Clock is Ticking And Your Data Isn’t Ready</strong></h3>
<p>Quantum computers aren’t just lab experiments anymore. They’re real, evolving faster than a ‘brainrot trend’, and primed to detonate the encryption protecting almost <em>everything</em> you do online. Imagine hackers in 2035 cracking today’s encrypted data - your bank details, medical records, and that cringey Google search for “why do cats ignore me when I pspsps at them?” - as easily as popping a balloon.</p>
<p>This is not a <em>Mission: Impossible</em> movie plot, it’s the <strong>quantum time bomb</strong> lurking beneath our digital lives. So here is how it works, why your data is at risk, and how we’re defusing it - with lasers, math mazes, and a sprinkle of chaos.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744576594785/9a92d694-7f5e-45c7-91ba-c761ba88dc39.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-quantum-computing-the-atomic-clock-on-espresso-and-why-you-should-care"><strong>Quantum Computing: The Atomic Clock on Espresso (and Why you should care)</strong></h3>
<p><strong>Classical computers</strong> are like sundials, steady but slow. <strong>Quantum computers?</strong> They’re atomic clocks on espresso, with a side of Red Bull.</p>
<ul>
<li><p>Classical bits are light switches: <strong>0</strong> (off) or <strong>1</strong> (on).</p>
</li>
<li><p>Qubits are like dimmer switches - thanks to superposition, they can be both 0 and 1, or anything in between. It’s like getting out of the labyrinth by exploring every path at once instead of step by step.</p>
</li>
</ul>
<h4 id="heading-entanglement-einsteins-spooky-bffs"><strong>Entanglement: Einstein’s “Spooky” BFFs</strong></h4>
<ul>
<li>If two qubits are entangled, changing one instantly affects the other - even if they’re on opposite sides of the galaxy. Think of it as twin librarians who finish each other’s sentences… and shelves.</li>
</ul>
<p><strong>Why Quantum Isn’t Mainstream (Yet):</strong><br />Qubits are like divas: they demand near‑absolute‑zero temperatures and complete isolation from vibrations, stray light, and even Wi‑Fi signals to prevent decoherence. Error correction, meanwhile, is the real nightmare - like herding a bunch of overly caffeinated cats.<br /><strong>Real-World Quantum Players:</strong></p>
<ul>
<li><p><strong>IBM’s Quantum Eagle:</strong> Handles 127 qubits but still makes mistakes.</p>
</li>
<li><p><strong>D-Wave’s Annealers:</strong> Solve optimization problems but can’t crack RSA… yet.</p>
</li>
</ul>
<h4 id="heading-why-this-matters"><strong>Why This Matters</strong></h4>
<p><strong>Shor’s Algorithm</strong> - a quantum cheat code - could crack <strong>RSA encryption</strong> (the lock guarding 90% of the internet) in <em>hours</em>. For classical computers, it’s like running a marathon barefoot; for quantum computers, it’s a hoverboard sprint.</p>
<p><em>Analogy:</em> RSA is a timed safe whereas quantum computers are lockpicks with a stopwatch and a PhD in chaos theory.</p>
<p><strong>Real-World Stakes:</strong></p>
<ul>
<li><p>The NSA <a target="_blank" href="https://www.nsa.gov/Press-Room/Press-Releases-Statements/Press-Release-View/Article/3498776/post-quantum-cryptography-cisa-nist-and-nsa-recommend-how-to-prepare-now/">recommends</a> preparing for PQC by 2030.</p>
</li>
<li><p>China claims it’s built a quantum computer that cracks RSA-2048 in <em>minutes</em>. <a target="_blank" href="https://www.defenseone.com/technology/2023/01/china-about-destroy-encryption-we-know-it-maybe/382041/">Yikes</a>.</p>
</li>
<li><p><strong>The EU’s Quantum Flagship Program</strong> is investing €1 billion to deploy quantum-safe infrastructure by 2030, prioritizing defense and healthcare.</p>
</li>
<li><p><strong>JPMorgan Chase</strong> is stress-testing PQC to secure $10+ trillion in daily transactions, fearing quantum-driven financial chaos.</p>
</li>
<li><p><strong>Ransomware groups</strong> like <a target="_blank" href="https://en.wikipedia.org/wiki/LockBit">LockBit</a> are stockpiling encrypted data, betting on future quantum paydays.</p>
</li>
</ul>
<hr />
<h3 id="heading-rsas-rise-and-fall-from-hero-to-zero"><strong>RSA’s Rise and Fall: From Hero to Zero</strong></h3>
<p><strong>A Brief History of RSA:</strong><br />In 1977, three MIT nerds (Rivest, Shamir, Adleman) invented RSA, turning encryption into a math puzzle:</p>
<ol>
<li><p>Pick two massive primes (300-digit monsters).</p>
</li>
<li><p>Multiply them.</p>
</li>
<li><p>Security relies on one fact: Factoring that product back into primes is <em>brutally hard</em> for classical computers.</p>
</li>
</ol>
<p><strong>Prime Factorization 101:</strong><br />Factoring a product of primes is like reverse engineering a cake. If you bake 17 x 23 = 391, it’s easy. But if I give you 391 and ask for the original primes… well, it’s game over for you. Now imagine the numbers are 600 digits long.</p>
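<p>You can feel the asymmetry in a few lines of Python (a teaching sketch, nowhere near real cryptanalysis): multiplying is one step, while trial-division factoring has to grind through candidate divisors.</p>

```python
# Baking the cake is easy; un-baking it is the hard part.
# Trial division finds 391 = 17 x 23 instantly, but the step count scales
# like sqrt(n) - hopeless for a 600-digit RSA modulus.

def factor(n):
    # classical trial division: test every candidate up to sqrt(n)
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n itself is prime

p, q = factor(17 * 23)
print(p, q)  # 17 23

# for a 600-digit modulus, sqrt(n) means roughly 10**300 candidates to check
digits = 600
print(f"~10**{digits // 2} candidate divisors for a {digits}-digit modulus")
```

<p>Real factoring algorithms beat trial division, but all known classical methods still blow up far too fast - which is exactly the wall Shor’s Algorithm walks around.</p>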
<p><strong>Why RSA Ruled the World:</strong><br />It’s like hiding a needle in a haystack which is almost the size of Jupiter. Even the world’s fastest supercomputer would take <strong>300 trillion years</strong> to crack RSA-2048.</p>
<p><img src="https://www.securew2.com/wp-content/uploads/2024/01/RSA-Encryption-Works.png" alt="Understanding RSA Asymmetric Encryption: How It Works" /></p>
<p><strong>But quantum computers couldn’t care less:</strong><br />Shor’s Algorithm uses superposition to, in effect, test <em>all possible factors at once</em>. Imagine brute-forcing a password by guessing every combination simultaneously.</p>
<p><strong><em>Reality Check:</em></strong> RSA is like milk in the sun - already going bad. Quantum computing is the heatwave speeding it up.</p>
<p><strong>Case in Point:</strong><br />In 2022, Chinese researchers <a target="_blank" href="https://arxiv.org/abs/2212.12372">simulated breaking 2048-bit RSA</a>. The fuse is now lit.</p>
<hr />
<h3 id="heading-harvest-now-decrypt-later-the-heist-of-the-century"><strong>Harvest Now, Decrypt Later: The Heist of the Century</strong></h3>
<p>Hackers aren’t waiting for quantum tech - they’re <strong>hoarding encrypted data</strong> today. Your tax returns, corporate secrets, and <em>that</em> Spotify blend with your crush? All are sitting ducks in a digital storage locker.</p>
<p><strong>Did You Know?</strong></p>
<ul>
<li><p>70% of organizations admit they can’t detect encrypted data theft (<a target="_blank" href="https://www.ponemon.org/">Ponemon Institute</a>).</p>
</li>
<li><p>95% of web traffic is encrypted (<a target="_blank" href="https://transparencyreport.google.com/">Google Transparency Report</a>).</p>
</li>
</ul>
<h4 id="heading-how-it-works"><strong>How It Works:</strong></h4>
<ol>
<li><p><strong>Steal encrypted data</strong> (easy, since most traffic is encrypted).</p>
</li>
<li><p><strong>Wait 5-10 years</strong> for quantum computers to mature.</p>
</li>
<li><p><strong>Decrypt everything</strong>, from military intel to your middle-school blog-diary.</p>
</li>
</ol>
<h4 id="heading-the-fallout-digital-mayhem"><strong>The Fallout? Digital Mayhem:</strong></h4>
<ul>
<li><p>🔓 <strong>HTTPS/TLS:</strong> Secure websites become glass houses and, even worse, your passwords get added to <a target="_blank" href="https://www.keepersecurity.com/blog/2023/08/04/understanding-rockyou-txt-a-tool-for-security-and-a-weapon-for-hackers/#:~:text=txt-,The%20RockYou.,over%2032%20million%20user%20passwords.">rockyou2.txt</a>.</p>
</li>
<li><p>💸 <strong>Blockchain:</strong> Crypto wallets? Emptied. NFTs? Repossessed by quantum-powered bots.</p>
</li>
<li><p>📜 <strong>Digital Signatures:</strong> Forged contracts, fake software updates, and <em>literally</em> counterfeit money.</p>
</li>
</ul>
<p><strong>Industry-Specific Chaos:</strong></p>
<ul>
<li><p><strong>Finance:</strong> Banks could lose billions overnight if transaction histories are altered.</p>
</li>
<li><p><strong>Healthcare:</strong> Your DNA data? Auctioned to the highest bidder.</p>
</li>
<li><p><strong>Government:</strong> Diplomatic cables leaked, sparking geopolitical crises.</p>
</li>
</ul>
<p><strong><em>Worst thing that can happen:</em></strong> A hacker group leaks 2030’s decrypted data, revealing your “visionary leadership” speech was ChatGPTed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744577535827/ea13f712-41ec-462c-9188-f9303374e6ff.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-post-quantum-cryptography-cybersecuritys-avengers-explained-for-beginners"><strong>Post-Quantum Cryptography: Cybersecurity’s Avengers (Explained for Beginners)</strong></h3>
<p>Meet <strong>PQC</strong>—the superhero squad of encryption. These algorithms make quantum computers rage-quit.</p>
<h4 id="heading-lattice-based-cryptography-math-mazes-in-500-dimensions"><strong>🌀 Lattice-Based Cryptography: Math Mazes in 500+ Dimensions</strong></h4>
<p>Imagine navigating a maze, but instead of 2D walls, you’re dodging obstacles in nearly <em>500 dimensions</em>. Lattice-based cryptography uses grids (lattices) in mind-bending dimensions to hide data. Quantum computers struggle here because solving these mazes requires guessing <em>all paths at once</em> - something even their multitasking qubits have a skill issue with. Why it’s cool: it’s the backbone of algorithms like Kyber and powers privacy tools like secure messaging apps.</p>
<p><strong>🔐 Hash-Based Signatures: Tamper-Proof Fingerprints</strong><br />Hash-based signatures work like a wax seal for data. When you “sign” a document, it’s stamped with a unique <strong>hash</strong> - a fixed-length code (e.g., a random string like <em>a3F9A2xZ</em>). Tamper with the document? The hash changes completely and screams “FAKE!”<br /><strong>Drawback:</strong> They’re one-time use, like disposable gloves - great for critical systems, clunky for Netflix binges.</p>
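<p>You can watch the “seal screams FAKE” property with Python’s standard library. This only demonstrates the hash primitive; real hash-based schemes like SPHINCS+ or XMSS build one-time signing keys on top of it:</p>

```python
import hashlib

def seal(document: bytes) -> str:
    # A fixed-length "wax seal": any change to the input
    # produces a completely different hash.
    return hashlib.sha256(document).hexdigest()[:16]

print(seal(b"Pay Alice $100"))  # original seal
print(seal(b"Pay Alice $900"))  # one character changed: seal is unrecognizable
```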
<p><strong>📜 Code-Based Cryptography: Errors as Locks</strong><br />This method borrows from <em>error-correcting codes</em> - the math normally used to repair garbled transmissions. Imagine sending a message with intentional typos. Only someone with the “typo rulebook” (your private key) can decode it. Quantum computers hate this because finding errors in massive codes is like finding a single misspelled word in the <em>Harry Potter</em> novels written in Morse code.</p>
<p><strong>📊 Multivariate Polynomials: Equations from Hell</strong><br />These algorithms use systems of equations with hundreds of variables (e.g., <em>x³y² + 4xy – 7z⁴ = 42</em>). Solving them requires brute-forcing endless combinations - a nightmare even for quantum machines.<br /><strong>Real-world use:</strong> They’re niche but secure, like a vault guarded by a troop of kangaroos with boxing gloves.</p>
<hr />
<h3 id="heading-kyber-amp-dilithium-the-dynamic-duo-explained"><strong>⚡ Kyber &amp; Dilithium: The Dynamic Duo, Explained</strong></h3>
<p><strong>Kyber (Key Exchange):</strong></p>
<ul>
<li><p>Uses lattice math to securely share encryption keys.</p>
</li>
<li><p>Imagine whispering a password in a crowded room, but the password is hidden inside a 100D maze. Only your intended recipient has the map.</p>
</li>
<li><p><strong>Used in:</strong> Google’s PQ-TLS experiments, VPNs.</p>
</li>
</ul>
<p><strong>Dilithium (Signatures):</strong></p>
<ul>
<li><p>Creates unforgeable signatures using lattices.</p>
</li>
<li><p>Think of it as a wax seal that <em>explodes</em> if tampered with. Even quantum bots can’t fake it.</p>
</li>
<li><p><strong>Used in:</strong> Software updates, legal e-signatures.</p>
</li>
</ul>
<p><strong>Why They’re Cool:</strong> They’re fast, efficient, and already being tested by tech giants.</p>
<hr />
<h3 id="heading-hybrid-encryption-double-the-locks-zero-the-regrets"><strong>🛡️ Hybrid Encryption: Double the Locks, Zero the Regrets</strong></h3>
<p>Hybrid encryption pairs RSA with PQC algorithms. Why?</p>
<ol>
<li><p><strong>Backward compatibility:</strong> Old systems still understand RSA.</p>
</li>
<li><p><strong>Quantum-proofing:</strong> PQC adds a futuristic lock.<br /> <strong>How it works:</strong> Your data is wrapped in <em>both</em> RSA and PQC encryption. Hackers need to crack both - like breaking into a bank vault while dodging laser sharks.<br /> <strong>Real-World Use:</strong> AWS, Microsoft Azure, and Signal already use hybrid approaches.</p>
</li>
</ol>
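<p>The “both locks” trick is easy to sketch. Below is a minimal Python illustration with placeholder secrets (not a real handshake): the session key is derived from the classical secret <em>and</em> the PQC secret together, so cracking only one of them reveals nothing. Real protocols use a proper KDF such as HKDF rather than a bare hash.</p>

```python
import hashlib
import secrets

# Placeholder outputs of the two key exchanges - in a real handshake
# these would come from ECDH/RSA and from a Kyber-style KEM.
classical_secret = secrets.token_bytes(32)
pqc_secret = secrets.token_bytes(32)

# The session key depends on BOTH secrets: an attacker who breaks only
# the classical exchange (or only the PQC one) learns nothing about it.
session_key = hashlib.sha256(classical_secret + pqc_secret).digest()
print(session_key.hex())
```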
<p><strong>NIST’s PQC Timeline:</strong></p>
<ul>
<li><p><strong>2016:</strong> Launched global competition for quantum-safe algorithms.</p>
</li>
<li><p><strong>2022:</strong> Announced Kyber, Dilithium, and others as finalists.</p>
</li>
<li><p><strong>2024:</strong> <a target="_blank" href="https://www.nist.gov/news-events/news/2024/08/nist-releases-first-3-finalized-post-quantum-encryption-standards">Final standards released</a></p>
</li>
</ul>
<p><strong>Who is NIST?</strong></p>
<p>The National Institute of Standards and Technology (NIST) is the U.S. federal agency setting the gold standard for cybersecurity. Since 2016, they’ve spearheaded the global effort to vet and standardize quantum-resistant algorithms—because even hackers need rules to break.</p>
<p><strong>Why This Squad Matters:</strong></p>
<ul>
<li><p>Protects WhatsApp chats, online banking, and even your smart fridge.</p>
</li>
<li><p>Guards critical infrastructure (power grids, self-driving cars) from quantum chaos.</p>
</li>
<li><p>Future-proofs IoT: Medical implants, connected cars, and yes, even your smart fridge.</p>
</li>
<li><p>The EU wants PQC in banks and hospitals by 2025 and even NASA’s testing it for <em>space comms</em>.</p>
</li>
</ul>
<p>PQC is not just a shield; it’s more of a time machine, securing today’s tech for tomorrow’s quantum world.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744576276694/c718441a-8eac-4509-ae59-f06aadcba07e.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-your-countdown-checklist-how-to-dodge-the-quantum-apocalypse"><strong>Your Countdown Checklist: How to Dodge the Quantum Apocalypse</strong></h3>
<ol>
<li><strong>Stay Calm (But Move Fast):</strong></li>
</ol>
<ul>
<li>Treat this like climate change: act now or drown later. <em>Why?</em> Transitioning to PQC takes years—start before the clock hits zero.</li>
</ul>
<ol start="2">
<li><strong>Nudge Your IT Team:</strong></li>
</ol>
<ul>
<li><p>“Hey, maybe peek at <a target="_blank" href="https://csrc.nist.gov/projects/post-quantum-cryptography">NIST’s PQC drafts</a>?”</p>
</li>
<li><p><strong>Tools to Try:</strong> <a target="_blank" href="https://openquantumsafe.org/">Open Quantum Safe</a> (free PQC libraries).</p>
</li>
</ul>
<ol start="3">
<li><strong>Password Hygiene:</strong></li>
</ol>
<ul>
<li><p>Use a password manager (<em>cough</em> <a target="_blank" href="https://bitwarden.com/">Bitwarden</a> cough).</p>
</li>
<li><p>“yourdogsname123” won’t save you and neither will “YourName@dob.”</p>
</li>
</ul>
<ol start="4">
<li><strong>Learn the Basics:</strong></li>
</ol>
<ul>
<li><p>YouTube “quantum for dummies.”</p>
</li>
<li><p><strong>Free Course:</strong> <a target="_blank" href="https://www.coursera.org/">Coursera’s Cryptography I</a>.</p>
</li>
</ul>
<ol start="5">
<li><strong>Advocate Loudly:</strong></li>
</ol>
<ul>
<li><p>CEOs should brag about PQC in earnings calls.</p>
</li>
<li><p>Normies just tweet #QuantumProofMe on X.</p>
</li>
</ul>
<hr />
<h3 id="heading-myth-busting-quantum-nonsense-vs-reality"><strong>Myth Busting: Quantum Nonsense vs. Reality</strong></h3>
<p>🔥 <strong>Myth:</strong> “Quantum computers exist already! My data’s gone!”<br /><strong>Reality:</strong> Today’s quantum machines are toddlers—cute but useless.</p>
<p>🔥 <strong>Myth:</strong> “PQC will slow the internet to dial-up.”<br /><strong>Reality:</strong> Modern PQC is <em>faster</em> than RSA.</p>
<p>🔥 <strong>Myth:</strong> “Only governments need to worry.”<br /><strong>Reality:</strong> If you use Wi-Fi or oxygen, you’re on the team.</p>
<p><strong>Bonus Myth:</strong> “Quantum can break <em>all</em> encryption.”<br /><strong>Reality:</strong> Symmetric encryption (like AES-256) is quantum-resistant. PQC handles the rest.</p>
<hr />
<h3 id="heading-what-if-we-do-nothing"><strong>What If We Do Nothing?</strong></h3>
<p>Imagine waking up in 2035 to:</p>
<ul>
<li><p><strong>Bankrupt banks:</strong> Quantum hackers drain accounts globally.</p>
</li>
<li><p><strong>Fake news 2.0:</strong> Forged government documents spark wars.</p>
</li>
<li><p><strong>Identity apocalypse:</strong> Your medical history is just waiting to be meme fodder.</p>
</li>
</ul>
<p><em>This is not me trying to be a fearmonger; it’s just simple math.</em></p>
<hr />
<h3 id="heading-faq-your-quantum-questions-answered"><strong>FAQ: Your Quantum Questions, Answered</strong></h3>
<p><strong>Q: When will quantum computers crack RSA?</strong><br />A: Although projections place the timeline between 2030 and 2050, hackers have already begun hoarding data.</p>
<p><strong>Q: How soon will PQC be everywhere?</strong><br />A: Though NIST's standards were introduced in 2024, full adoption may take 5–10 years.</p>
<p><strong>Q: Is my iPhone safe?</strong><br />A: For now, yes - and Apple is already rolling out PQC, starting with its PQ3 protocol for iMessage, with more expected in upcoming iOS updates.</p>
<p><strong>Q: Can I buy a quantum computer?</strong><br />A: At $15 million per D-Wave annealer, your dog’s influencer career might need to be on hold for a bit.</p>
<hr />
<h3 id="heading-final-countdown-encryption-isnt-deadits-evolving"><strong>Final Countdown: Encryption Isn’t Dead—It’s Evolving</strong></h3>
<p>The quantum time bomb isn’t doom — it’s a wake-up call. We survived Y2K, spam, and Flash; now, PQC is the next chapter.<br /><strong>TL;DR:</strong></p>
<ul>
<li><p>Quantum <em>will</em> break RSA.</p>
</li>
<li><p>PQC is the fuse we’re cutting.</p>
</li>
<li><p>Your job? Stay alert, install upgrades, and for the love of the quantum gods stop reusing the same password for every social media account.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744578834734/a95967f6-24dc-4532-94a2-ec74624a1033.png" alt class="image--center mx-auto" /></p>
<hr />
<p><strong>What’s Next?</strong></p>
<p><strong>For Tech Teams:</strong><br />Start stress-testing Post-Quantum Cryptography (PQC) libraries like <a target="_blank" href="https://openquantumsafe.org/"><strong>Open Quantum Safe</strong></a>—a free, open-source toolkit that lets you experiment with quantum-resistant algorithms today. Think of it as a “quantum-proof helmet” for your data. Dive into hybrid encryption prototypes, collaborate with frameworks like <strong>PQ-TLS</strong>, and join industry trials (Google and Cloudflare are already inviting beta testers). The goal? Ensure your systems aren’t caught with their encryption pants down when quantum arrives.</p>
<p><strong>For Everyone Else:</strong><br />Bookmark <a target="_blank" href="https://csrc.nist.gov/projects/post-quantum-cryptography"><strong>NIST’s PQC updates</strong></a> and follow tech giants like IBM and Microsoft, who blog about quantum readiness. Not a developer? No problem. Advocate for PQC adoption in your workplace (<em>“Hey, shouldn’t our app be quantum-safe?”</em>), and keep an eye on apps/software announcing PQC upgrades. Knowledge is power—and in this case, it’s also your underground bunker in case the quantum time bomb goes off.</p>
<p><em>Got questions? Drop them below. Conspiracy theories? We’ll bring popcorn.</em> 🍿</p>
<hr />
<p><em>Stay secure, stay snarky, and remember: Time’s ticking, but we’ve still got the codes.</em> 🔒⏳</p>
]]></content:encoded></item><item><title><![CDATA[Redis: A Stellar Intro]]></title><description><![CDATA[Need For Speed
I open Netflix, ready to watch Interstellar for the hundredth time. I hit play.
Baam—the dreaded loading wheel appears.
Frustrating, right? Seconds feel like hours. But why does this even happen?
Every time you stream a movie, your dev...]]></description><link>https://blog.acmvit.in/redis-a-stellar-intro</link><guid isPermaLink="true">https://blog.acmvit.in/redis-a-stellar-intro</guid><category><![CDATA[Redis]]></category><category><![CDATA[caching]]></category><category><![CDATA[netflix]]></category><category><![CDATA[hulu]]></category><dc:creator><![CDATA[Navdha Sharma]]></dc:creator><pubDate>Mon, 31 Mar 2025 05:14:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1743360085077/3ac638c5-d543-4b7f-817b-57416dbfe8b5.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-need-for-speed"><strong>Need For Speed</strong></h2>
<p>I open Netflix, ready to watch <em>Interstellar</em> for the hundredth time. I hit play.</p>
<p>Baam—<em>the dreaded loading wheel appears.</em></p>
<p>Frustrating, right? Seconds feel like hours. But why does this even happen?</p>
<p>Every time you stream a movie, your device has to fetch a massive amount of data <em>- loads of it</em>. Think of it like ordering food at a restaurant. If the chef has everything prepped, your meal arrives in minutes. But if they’re starting from scratch, you’re in for a long wait.</p>
<p>That’s exactly how streaming works. If the data isn’t readily available, buffering kicks in, and suddenly, your movie night turns into a waiting game.</p>
<p>But what if your favorite movies started instantly— every single time? <em>No buffering. No delays. Just play and enjoy.</em> Sounds like magic?</p>
<p>Well, it’s not magic—it’s <em>caching</em>. And one of the most powerful tools for this? <em>Redis.</em></p>
<p>In today’s world of streaming, where milliseconds can make or break an experience, even the slightest delay is a deal breaker. Users don’t just want their content fast—<em>they want it now. No buffering. No waiting. Just hit play and go.</em></p>
<p>So how do platforms like Netflix and Hulu pull this off? What keeps them running smoothly even when millions of people are streaming at the same time? The answer lies in effective caching - and the secret behind it?</p>
<p>Redis. (There are others too, but for now, let’s roll with Redis.)</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXdjcqwgaP_e8TDuGlLVRagkcoFMGpySSEq1X0p_j-9DapEf30IhbfZn86YQGvk9tHeZek36KCcFeg9qsoKVMMy7atsMeg6PLN6T7HxwkETcImoYhoyslJmtUi025VuloXy-7xbJVg?key=SDkzOAVZvGXe-xTpTAVMVNPW" alt /></p>
<h2 id="heading-the-role-of-caching-in-streaming-services"><strong>The Role of Caching in Streaming Services</strong></h2>
<p>Imagine this: our friend Jeremy wants some ice cream. He has two options—</p>
<p>1. Grab one from the nearby ice cream parlour.</p>
<p>2. Order it directly from the company’s storage.</p>
<p>If Jeremy picks the ice cream parlour, he gets his treat in just two minutes—quick and convenient! But if the parlour doesn’t have what he wants, he has no choice but to wait 10 minutes for the company’s storage to deliver it.</p>
<p>Naturally, the best way to get ice cream quickly is to check the parlour first. If it’s available, great! If not, he places an order, waits, and once it arrives, the parlour stocks that flavour for next time—ensuring others can grab it instantly later.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXczTokSDP6s4iodOS5KFgu7Oyyi1OdgdIGlk7rFluyByEn4soz9CA9qLFvYe-o3966QqGp5WbPcokhAF3xxjwbOrwTsForoIY1bxzdLIsn376WIIYWjNtf9bYKjSso5FDHOxJh1Xw?key=SDkzOAVZvGXe-xTpTAVMVNPW" alt /></p>
<p>Now, swap out ice cream for video data, and you’ve got caching in streaming services.</p>
<p>If every request had to go all the way to a primary datastore, playback would be painfully slow. Instead, they use Redis, a high-speed, in-memory data store, to act as the “ice cream parlour” for frequently accessed data.</p>
<p>It helps in streaming services by:<br />1. Storing frequently accessed metadata (all your recommendations and angry ratings).<br />2. Caching video chunks and API responses (frequently accessed calls, images, or data) to reduce database load and improve performance.<br />3. Providing near-instant access to frequently used data.</p>
<p>This significantly reduces the need for repeated database queries, enhancing performance and efficiency, and making streaming feel instantaneous (no more endless loading wheels).</p>
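<p>Jeremy’s parlour routine is known as the <em>cache-aside</em> pattern, and it fits in a few lines of Python. Here a plain dict stands in for Redis; a real app would issue redis-py <code>GET</code>/<code>SETEX</code> calls against a live server instead:</p>

```python
import time

cache = {}  # stand-in for Redis: the "ice cream parlour"

def slow_database_fetch(movie_id):
    time.sleep(0.1)  # pretend round trip to the primary datastore
    return f"metadata for {movie_id}"

def get_movie(movie_id):
    if movie_id in cache:                  # 1. check the parlour first
        return cache[movie_id]             #    hit: served in "two minutes"
    value = slow_database_fetch(movie_id)  # 2. miss: the 10-minute wait
    cache[movie_id] = value                # 3. stock the flavour for next time
    return value

get_movie("interstellar")  # first request: slow (cache miss)
get_movie("interstellar")  # second request: instant (cache hit)
```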
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXeI8G2aVlKnFDVn7G9ki9C92GbquwnJ-qwHnfTd2Y7railqNO7Pe_4c226sDL_wObNIJyBvbFKSscej8hMoI8cysNQnxpDSplzJCZgEljg9vydt_3gDqE2fflY54y2qjlz_WPgv?key=SDkzOAVZvGXe-xTpTAVMVNPW" alt /></p>
<p>But where did this idea come from? It all started with one frustrated developer…</p>
<h2 id="heading-built-out-of-pure-frustration"><strong>Built out of pure frustration</strong></h2>
<p>Back in 2009, an Italian programmer named Salvatore Sanfilippo was working on a real-time web analytics system to track user activity on websites instantly but he ran into a wall.</p>
<p>The database he was using simply couldn't handle the load of tracking thousands of web pages in real-time! Every page view meant multiple database writes, and complex queries were bringing his servers to their knees.</p>
<p>The alternatives weren't great either.</p>
<p>Memcached could cache data but couldn't save it permanently.</p>
<p>MySQL was too slow for real-time operations he needed.</p>
<p>MongoDB was great for storing large amounts of data, but it was overkill for what he was trying to do.</p>
<p>Instead of giving up, Sanfilippo did what any passionate developer would—he built his own solution—something fast, lightweight, and capable of handling real-time data with ease. He wanted a system that could store and retrieve data quickly without the overhead of traditional databases.</p>
<p>With Redis, Sanfilippo didn’t just solve his problem—he changed the way the world handles data.</p>
<h2 id="heading-rediss-evolution"><strong>Redis’s Evolution</strong></h2>
<p>Originally developed by Salvatore Sanfilippo as a simple key-value store, Redis was designed to make data storage and retrieval more efficient. Over time, it evolved into a powerful in-memory data store, supporting advanced data structures like lists, sets, sorted sets, and hashes. Today, Redis is more than just a key-value store—it’s a high-performance, multi-purpose tool that powers everything from real-time analytics to AI-driven applications.</p>
<p>Unlike databases that store data on a hard drive, Redis keeps everything in the computer’s memory. Because of this, it can perform over 100,000 tasks every second, making it perfect for apps that need to respond immediately. Think of it as reaching into your pocket for a key, instead of running to the basement.</p>
<p>As Redis grew beyond a simple key-value store, its architecture evolved to handle even more demanding tasks.</p>
<h2 id="heading-the-architecture-that-drives-speed"><strong>The Architecture That Drives Speed</strong></h2>
<p>Let’s put ourselves in the shoes of Sanfilippo. Imagine you’re designing a solution to this medley of problems. How would you go about it? You would probably consider the following factors:</p>
<p><strong>Efficiency</strong>: You’d opt for efficiency first. This means processing tasks one by one rather than handling everything at once—you want a streamlined model, right? Redis achieves this with its Single Threaded Efficiency model. Instead of juggling multiple tasks like traditional databases, Redis processes requests one at a time, but at an incredibly fast pace.</p>
<p><strong>Fast</strong> <strong>Data</strong> <strong>Retrieval</strong>: Next, you’d want to search for data quickly because you wouldn’t want your resources wasted just looking for the data you need. To do this, Redis uses Optimized Data Storage with smart, memory-efficient structures. These structures store data compactly, reducing memory usage and speeding up lookups.</p>
<p><strong>Stability</strong> <strong>Through</strong> <strong>Separation</strong>: Finally, you’d want to keep different operations separate. For instance, writing new data shouldn’t slow down reading existing data. Redis handles this with Copy-on-Write for Stability. When saving data, it uses a method that ensures new writes don’t interfere with ongoing operations, keeping the system smooth and efficient.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXe6C3g_VeGfJ2M9uB85DlXkYBYhEbnR7XwiPlnXfRmdg0EREXLPKRPujv3u6E7P8SJ_TR878DdltML54VsuNOSMZnFBXoixtdFAZekyIE7GDouOfCYyNkgJao0U4B7ZI4Q1PQCjGg?key=SDkzOAVZvGXe-xTpTAVMVNPW" alt class="image--center mx-auto" /></p>
<h2 id="heading-how-redis-powers-real-time-applications"><strong>How Redis Powers Real-Time Applications</strong></h2>
<p>Well, now that you've seen just how engrossed Salvatore was in cracking these challenges, here is a little list I have made to present Redis’s use case:</p>
<p><strong>Caching:</strong></p>
<p>In everyday terms, when you open an app—especially a chat app—you expect everything to load instantly. Redis helps achieve this by caching data that is accessed over and over, like user profiles. Instead of reaching out to a slower database every time you need to see someone’s profile, Redis keeps a copy in fast-access memory. This means that as soon as you open the app or click on a conversation, the profile information is already there, making the experience smooth and responsive.</p>
<p><strong>Session</strong> <strong>Management:</strong></p>
<p>What do we mean by it? Session management is like keeping a note of what a user is doing while they're using an app. It remembers details such as when you're logged in or what's in your shopping cart so you don't have to log in or fill your cart again on every page.</p>
<p>Instead of saving these notes in a slower database, Redis keeps them in memory. This means when you use a social media app, for example, your login stays active and your information is quickly remembered—even if you close your browser. Redis makes the whole experience faster and more secure by handling these "notes" in real-time.</p>
<p><strong>Geolocation:</strong></p>
<p>Heck, Redis can even be used to store and track real-time location data, making it possible to build applications that require location-based features, such as ride-hailing services or social media check-ins. For example, a ride-hailing service could use Redis to track the location of drivers and riders in real-time.</p>
<p>These diverse applications highlight Redis’s versatility, but what really makes it indispensable are its performance advantages:</p>
<h3 id="heading-why-is-it-the-go-to-choice-for-real-time-systems"><em>Why is it the Go-To Choice for Real-Time Systems</em></h3>
<p>Here’s another little list on the same:</p>
<ol>
<li><p><strong>Ultra</strong>-<strong>Low</strong> <strong>Latency</strong> – Responds almost instantly—usually in less than a millisecond—so users don’t experience any delay.</p>
</li>
<li><p><strong>High</strong> <strong>Throughput</strong> – Can handle millions of requests per second, keeping things running smoothly even during heavy traffic.</p>
</li>
<li><p><strong>Pub</strong>/<strong>Sub</strong> <strong>Messaging</strong> – Lets different parts of your system talk to each other in real time, much like a live chat or news feed.</p>
</li>
<li><p><strong>Scalability</strong> – Easily grows by adding more servers, ensuring that performance stays high as your system expands.</p>
</li>
<li><p><strong>Time</strong>-<strong>to</strong>-<strong>Live</strong> (<strong>TTL</strong>) – Automatically removes old or unused data to keep the memory clean and efficient.</p>
</li>
<li><p><strong>Multi</strong>-<strong>Model</strong> <strong>Storage</strong> – Supports various data types—from simple key-value pairs to more complex structures like lists and maps—making it versatile for different needs.</p>
</li>
</ol>
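<p>Feature 5 (TTL) is simple enough to sketch. The toy store below tags each entry with an expiry time, so a stale read behaves like a miss - conceptually what Redis does when you use <code>SETEX</code> or <code>EXPIRE</code> (the function names here are my own illustration):</p>

```python
import time

store = {}

def setex(key, ttl_seconds, value):
    # Mimics Redis SETEX: store the value with an absolute expiry time.
    store[key] = (value, time.monotonic() + ttl_seconds)

def get(key):
    entry = store.get(key)
    if entry is None:
        return None
    value, expires_at = entry
    if time.monotonic() >= expires_at:  # expired: evict and treat as a miss
        del store[key]
        return None
    return value

setex("session:jeremy", 0.05, "logged-in")
print(get("session:jeremy"))  # "logged-in"
time.sleep(0.1)
print(get("session:jeremy"))  # None (expired and evicted)
```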
<p>To see these benefits in action, let’s explore how industry giants like Netflix and Hulu harness Redis for scalable streaming.</p>
<h2 id="heading-how-netflix-and-hulu-use-redis-to-power-scalable-streaming">How <strong>Netflix and Hulu Use Redis to Power Scalable Streaming</strong></h2>
<p>In large-scale streaming services, delivering content swiftly and reliably is paramount (you don’t want TRP ratings dropping because the content arrived late). Both Netflix and Hulu have harnessed the power of Redis to meet these demands. Here’s a little sneak peek.</p>
<p><strong>Netflix</strong>: <strong>Scaling</strong> <strong>with</strong> <strong>Dynomite</strong> <strong>and</strong> <strong>Redis</strong></p>
<p>Netflix developed Dynomite, a distributed datastore that builds on Redis's features to support data availability across multiple regions.</p>
<p>This integration offers several advantages:</p>
<ol>
<li><p><strong>Elastic</strong> <strong>Scalability</strong>: Netflix spreads its work across many servers. This means no single server is overwhelmed, and if one server has a problem, the others can pick up the slack. How does it do it? By deploying Redis clusters across multiple nodes, it effectively distributes workloads, minimizing single points of failure.</p>
</li>
<li><p><strong>Caching</strong> <strong>API</strong> <strong>Responses</strong>: Utilizing Redis to cache frequently accessed metadata reduces latency and alleviates the load on primary databases, this helps the system deliver content quickly without constantly going back to the main, slower database.</p>
</li>
<li><p><strong>High</strong> <strong>Availability</strong>: Dynomite lets Netflix operate in multiple regions (or parts of the world) at the same time. This means that even if one region experiences issues, users in other regions can still enjoy uninterrupted streaming.</p>
</li>
</ol>
<p>For an in-depth exploration of Dynomite's performance benchmarks on AWS, refer to Netflix's technical blog post. [4]</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXdwpXvJiIQ6r7XQDY9Iql5dQtMMhltL89Iu-eZj3c18tB_LwYYwILiTWRIz9TOL_7b-RgxkruZ7CbKxFQqyHvq_aW3r2Th3qnIVfBlGBZAyeFMpyNAyOFfCkkmlusniyHW9eX4LCQ?key=SDkzOAVZvGXe-xTpTAVMVNPW" alt /></p>
<p><strong>Hulu</strong>: <strong>Managing</strong> <strong>Billions</strong> <strong>of</strong> <strong>Video</strong> <strong>Requests</strong> <strong>with</strong> <strong>Redis</strong> <em>[3]</em></p>
<p>Facing the challenge of serving over 4 billion videos, Hulu integrated Redis to bolster its infrastructure:</p>
<ol>
<li><p><strong>Session</strong> <strong>Storage</strong>: Redis keeps track of user sessions across many servers. This means if one server fails, another can quickly take over, ensuring a smooth and continuous experience for the user.</p>
</li>
<li><p><strong>Content</strong> <strong>Delivery</strong> <strong>Optimization</strong>: Hulu caches video details and thumbnails in Redis. This allows videos and images to load faster and reduces the load on the main servers, making the service more responsive.</p>
</li>
<li><p><strong>Rate</strong> <strong>Limiting</strong> <strong>and</strong> <strong>Traffic</strong> <strong>Management</strong>: Redis efficiently manages a high number of requests at once. This helps prevent system overload during busy times, ensuring the service remains stable even under heavy traffic.</p>
</li>
</ol>
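<p>Hulu’s third point, rate limiting, maps onto a classic fixed-window counter. In Redis this is typically an <code>INCR</code> plus <code>EXPIRE</code> per user per window; the Python below simulates the same idea in-process (a toy sketch, with limits and names invented for illustration):</p>

```python
import time
from collections import defaultdict

WINDOW = 1.0  # seconds per window
LIMIT = 5     # allowed requests per window per user

counters = defaultdict(lambda: [0, 0.0])  # user -> [count, window_start]

def allow(user):
    # Fixed-window counter: Redis would do this with INCR + EXPIRE.
    count, start = counters[user]
    now = time.monotonic()
    if now - start >= WINDOW:    # window elapsed: start a fresh one
        counters[user] = [1, now]
        return True
    if count < LIMIT:
        counters[user][0] += 1
        return True
    return False                 # over the limit: reject the request

results = [allow("hulu_fan") for _ in range(7)]
print(results)  # first 5 allowed, then rejected
```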
<h2 id="heading-challenges-and-problems-faced"><strong>Challenges and Problems Faced</strong></h2>
<p>Despite Redis’s advantages, even the best have their weak spots—a chink in the armor, if you will. Large-scale implementations come with their own set of challenges:</p>
<ul>
<li><p><strong>Memory</strong> <strong>Constraints</strong>: Being an in-memory store, Redis requires careful memory management to prevent excessive costs.</p>
</li>
<li><p><strong>Data</strong> <strong>Persistence</strong> <strong>Issues</strong>: Ensuring data consistency in case of crashes requires additional configurations.</p>
</li>
<li><p><strong>Replication</strong> <strong>Overhead</strong>: Scaling Redis clusters demands efficient replication strategies to balance performance and reliability.</p>
</li>
<li><p><strong>Sharding</strong> <strong>Complexity</strong>: Splitting data into manageable pieces (or shards) requires careful planning to distribute workloads effectively.</p>
</li>
</ul>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>Redis has become an integral part of large-scale streaming services, eliminating lag and keeping content delivery smooth. But the road isn’t without challenges. Scaling efficiently, managing traffic spikes, and ensuring high availability across multiple regions require constant innovation. Fortunately, Redis continues to evolve, adapting to the ever-growing demands of real-time content delivery. As streaming services push the boundaries of speed and quality, Redis remains the silent hero, ensuring that every movie night is seamless.</p>
<p>Take a simple movie night I had recently, for instance.</p>
<p>I settled into my couch, ready to watch <em>Interstellar</em> for the hundredth time. I hit play.</p>
<p>Baam—no buffering. No waiting. Just pure speed.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXekgChR5yR_wXlu0sM_USzyoMMs00cqpUTK44tAT048juxL3HjoDaVwQgdbTsSqkvTvFP7mlBdMXuKXKV6ZrNM__nQKYpNtr6sbna7zp9wy4aaLGMRK6SlI0eQN_HaZ6MC5SO0l?key=SDkzOAVZvGXe-xTpTAVMVNPW" alt class="image--center mx-auto" /></p>
<p><strong>References:</strong></p>
<ol>
<li><p><a target="_blank" href="https://architecturenotes.co/p/redis"><em>https://architecturenotes.co/p/redis</em></a></p>
</li>
<li><p><a target="_blank" href="https://venturenox.com/blog/the-power-of-redis-in-transforming-real-time-applications/"><em>https://venturenox.com/blog/the-power-of-redis-in-transforming-real-time-applications/</em></a></p>
</li>
<li><p><a target="_blank" href="https://blogs.vmware.com/tanzu/case-study-how-hulu-scaled-serving-4-billion-videos-using-redis/"><em>https://blogs.vmware.com/tanzu/case-study-how-hulu-scaled-serving-4-billion-videos-using-redis/</em></a></p>
</li>
<li><p><a target="_blank" href="https://netflixtechblog.com/dynomite-with-redis-on-aws-benchmarks-5c942fc7ca38"><em>https://netflixtechblog.com/dynomite-with-redis-on-aws-benchmarks-5c942fc7ca38</em></a></p>
</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[A Sneak Peek into My Favorite Coding Event]]></title><description><![CDATA[Here’s Why I’m Writing About Reverse Coding
If you’re passionate about competitive programming, you’ve probably heard of ICPC, the Olympics of Programming—the most prestigious algorithmic programming contest in the world. Every year, thousands of the...]]></description><link>https://blog.acmvit.in/rc</link><guid isPermaLink="true">https://blog.acmvit.in/rc</guid><category><![CDATA[#reversecoding]]></category><category><![CDATA[Problem Solving]]></category><category><![CDATA[coding]]></category><category><![CDATA[analytical]]></category><category><![CDATA[coding competition]]></category><category><![CDATA[ACM]]></category><dc:creator><![CDATA[Krish Chitlangia]]></dc:creator><pubDate>Thu, 06 Feb 2025 11:21:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738839336353/9805b28f-af1e-42ec-9322-b4236e8a5579.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-heres-why-im-writing-about-reverse-coding">Here’s Why I’m Writing About Reverse Coding</h2>
<p>If you’re passionate about competitive programming, you’ve probably heard of ICPC, the Olympics of Programming—the most prestigious algorithmic programming contest in the world. Every year, thousands of the brightest minds battle it out for a coveted spot in the regionals and beyond.</p>
<p>I’ve had the privilege of competing in ICPC regionals twice, securing AIR 69 in Chennai and AIR 51 in Amritapuri. These experiences have shaped me into a sharper problem-solver, teaching me how to break down complex problems and optimize solutions under extreme time constraints.</p>
<p>Beyond ICPC, I’m also a Specialist on Codeforces, where I’ve been actively competing for the past year, continuously refining my skills against some of the best programmers in the world. Competitive programming isn’t just a hobby—it’s an obsession. I’ve spent countless hours grinding problems on Codeforces, LeetCode, and AtCoder, tackling a diverse range of problem-solving paradigms, from graph theory and dynamic programming to bit manipulation and number theory.</p>
<p>But here’s the thing—no matter how many problems you solve, there’s always a new way to challenge yourself. Reverse Coding is one such challenge that breaks the traditional approach to problem-solving.</p>
<p>Most contests give you a problem statement, and your job is to write a program that satisfies the given constraints. But in Reverse Coding, the game is flipped entirely—<strong>you’re only given input-output pairs, and it’s up to you to decode the hidden logic behind them.</strong></p>
<p>This concept is incredibly powerful because it doesn’t just test your coding ability—<em>it tests how well you can reverse-engineer solutions, think critically, and deduce patterns from seemingly random outputs.</em> These skills are crucial in real-world problem-solving, whether in competitive programming, software development, or AI research.</p>
<p>That’s why I’m writing this blog. If you’re a competitive programmer, a logic enthusiast, or someone who simply loves puzzles, I highly recommend participating in Reverse Coding at Yantra Week, organized by VIT’s ACM Chapter.</p>
<p>Let’s dive into the who, what, where, and how of Reverse Coding and why this event is a must-attend for anyone serious about problem-solving.</p>
<h2 id="heading-what-is-reverse-coding"><strong>What is Reverse Coding?</strong></h2>
<p>Imagine this—you’re in a competitive programming contest, but instead of receiving a well-defined problem statement, you’re given only input-output pairs. The logic behind them? A complete mystery.</p>
<p><em>Your task: figure out the pattern and write a program that replicates it.</em></p>
<p>This is Reverse Coding, a twist on traditional problem-solving where you must reverse-engineer the underlying logic using only the given examples. Think of it as debugging in reverse—instead of fixing a broken program, you’re uncovering the hidden logic that connects inputs to outputs.</p>
<p>Here’s a simple example:</p>
<p>🔹 Input: 1 → Output: 1<br />🔹 Input: 2 → Output: 4<br />🔹 Input: 3 → Output: 9</p>
<p>Clearly, the logic is squaring the input (n²). So, your job is to write a function that squares a number.</p>
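<p>As a minimal sketch (the function name <code>solve</code> is illustrative, not from the event), cracking this pattern comes down to writing the hypothesized rule and checking it against every observed pair:</p>

```python
def solve(n: int) -> int:
    # Hypothesis inferred from the pairs above: output = n squared
    return n * n

# Verify the hypothesis against the observed input-output pairs
observed = {1: 1, 2: 4, 3: 9}
assert all(solve(n) == out for n, out in observed.items())
```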
<p>But don’t let this simple example fool you. The actual competition will feature far more complex patterns, edge cases, and deceptive sequences that will test your ability to think outside the box.</p>
<h2 id="heading-why-reverse-coding-is-a-game-changer"><strong>Why Reverse Coding is a Game-Changer</strong></h2>
<p>Competitive programming is all about recognizing patterns—whether in dynamic programming states, number sequences, tree structures, or graph connectivity. The best programmers aren’t just those who can implement standard algorithms but those who can identify hidden patterns quickly and apply the right logic to solve problems efficiently.</p>
<p>But Reverse Coding takes this to another level by removing the problem statement entirely. Unlike traditional contests, where you start with a well-defined question and work toward a solution, Reverse Coding gives you only the output—leaving you to reconstruct the missing logic from scratch.</p>
<p>This fundamentally changes the way you approach problem-solving, forcing you to think in ways that most programming contests never train you for.</p>
<h3 id="heading-1-think-like-a-problem-setter">1. Think Like a Problem Setter</h3>
<p>Competitive programming typically involves solving problems, but have you ever thought about <em>how problems are created?</em></p>
<p>In Reverse Coding, you are essentially doing the job of a problem setter in reverse. Instead of solving a problem with a given approach, you must figure out what the problem even is.</p>
<ul>
<li><p><em>What mathematical or logical transformation is happening between input and output?</em></p>
</li>
<li><p><em>Is the output following a known formula, sequence, or transformation?</em></p>
</li>
<li><p><em>Are there multiple layers of logic, such as nested conditions, recursion, or modulo operations?</em></p>
</li>
</ul>
<h3 id="heading-2-sharpen-your-analytical-skills">2. Sharpen Your Analytical Skills</h3>
<p>Programming isn’t just about writing code—it’s about <strong>understanding data</strong>. Reverse Coding forces you to <strong>spot hidden patterns</strong> in numbers, strings, and sequences, making it an excellent exercise in <strong>data analysis</strong> and <strong>logical reasoning</strong>.</p>
<h3 id="heading-3-enhance-debugging-abilities">3. Enhance Debugging Abilities</h3>
<p>Debugging real code means working backward from unexpected behavior to its cause. Reverse Coding simulates that process in a more structured way. It teaches you to:</p>
<p>🔹 Break problems into smaller test cases to isolate patterns.<br />🔹 Analyze unexpected behavior systematically instead of randomly tweaking code.<br />🔹 Develop logical intuition for how transformations affect output, making bug-fixing much faster.</p>
<p>If you’ve ever struggled with debugging, Reverse Coding is the perfect training ground to sharpen your debugging mindset.</p>
<h3 id="heading-4-develop-intuition-for-edge-cases">4. Develop Intuition for Edge Cases</h3>
<p>One of the hardest skills in competitive programming is anticipating edge cases before they appear. Many people lose contests not because they can’t solve a problem, but because their solution fails on hidden edge cases.</p>
<p>Since Reverse Coding involves actively testing the system with different inputs, it naturally trains you to:</p>
<p>✔ <em>Think about extreme values</em> (What happens at n = 1 vs n = 10⁶?)<br />✔ <em>Try negative and zero inputs</em> (Does the pattern change for -5 or 0?)<br />✔ <em>Identify hidden dependencies</em> (Does the output depend only on n or also on some hidden variable?)</p>
<p>This ability is critical in contests where problems often have tricky constraints that aren’t explicitly stated. By developing this intuition, you become a stronger problem-solver overall.</p>
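<p>That probing habit can be sketched in a few lines. Here <code>mystery</code> is a made-up stand-in for the contest’s black-box system (absolute value capped at 100), just so the probe loop is runnable:</p>

```python
# `mystery` stands in for the contest's black box; this body is invented
# purely for illustration (absolute value, capped at 100).
def mystery(n: int) -> int:
    return min(abs(n), 100)

# Probe the kinds of inputs the article recommends: small values,
# zero and negatives, and extremes near a constraint boundary.
probes = [1, 2, 3, 0, -5, 10**6]
for n in probes:
    print(n, "->", mystery(n))
```

<p>Noticing that <code>-5</code> and <code>5</code> collide, and that huge inputs plateau, is exactly the kind of edge-case evidence that narrows down the hidden rule.</p>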
<h2 id="heading-how-the-competition-works"><strong>How the Competition Works</strong></h2>
<p>At Reverse Coding, you’ll be working in a web-runner interface (similar to LeetCode’s environment). Here’s how the process unfolds:</p>
<p>1️⃣ You input test cases based on the given constraints.<br />2️⃣ The system generates an output for each test case.<br />3️⃣ You analyze the input-output pairs to decode the hidden logic.<br />4️⃣ Once you crack the pattern, you write a program that replicates it for all valid inputs.<br />5️⃣ Your code is then evaluated for correctness and efficiency.</p>
<p>Unlike traditional contests where you start by reading a problem, here you start by experimenting with inputs, making every participant an investigator, a pattern-spotter, and a reverse-engineer.</p>
<h2 id="heading-what-kind-of-problems-can-you-expect"><strong>What Kind of Problems Can You Expect?</strong></h2>
<p>While I can’t reveal the exact problems (where’s the fun in that?), I can certainly give you an idea of the types of challenges you’ll face in <strong>Reverse Coding</strong>. Unlike traditional contests where you have a well-defined problem statement, here you’ll need to <strong>uncover the logic</strong> behind the given input-output pairs. Some problems will be <strong>obvious at first glance</strong>, while others will require deep <strong>pattern analysis, algorithmic intuition, and creative thinking</strong> to decipher. The best strategy is to <strong>experiment with diverse test cases, analyze trends, and adapt dynamically</strong>.</p>
<p>One of the most common categories you might encounter involves <strong>mathematical sequences</strong>. Problems in this category often deal with <strong>Fibonacci numbers, factorials, prime sequences, bitwise transformations, and modular arithmetic</strong>. For example, you might be given a sequence of outputs that correspond to prime numbers, factorials, or values obtained using bitwise XOR operations. Recognizing these mathematical properties quickly is key to solving such problems. Another possible challenge could involve <strong>modular arithmetic</strong>, where outputs are generated based on numbers wrapped around a certain modulo constraint, a concept frequently used in cryptography and number theory problems.</p>
<p>Another exciting category of problems involves <strong>graph-based outputs</strong>, where the given input-output pairs may represent <strong>connectivity, adjacency lists, or shortest paths</strong> between nodes. You might need to recognize a hidden <strong>graph traversal pattern</strong>, such as outputs representing <strong>Breadth-First Search (BFS) levels</strong> or <strong>Depth-First Search (DFS) orderings</strong>.</p>
<p>Finally, some problems will involve <strong>data-driven outputs</strong>, where statistical computations or probabilistic transformations determine the output.</p>
<h2 id="heading-how-to-approach-reverse-coding-like-a-pro"><strong>How to Approach Reverse Coding Like a Pro</strong></h2>
<p>As someone who has competed in ICPC regionals and Codeforces contests, I can tell you that success in Reverse Coding isn’t about brute force—it’s about smart thinking and strategic problem-solving. Unlike traditional programming contests where you can directly apply known algorithms, here you need to uncover the algorithm itself. The best way to do this is by experimenting with diverse inputs. Start small with numbers like 1, 2, 3 and check for common mathematical patterns such as squares, factorials, primes, or arithmetic sequences. Then, push the limits by testing boundary values—for instance, 0, 1, 10⁹—and see how the system reacts to negative or extreme cases. The more inputs you try, the more clues you gather about the hidden transformation.</p>
<p>Once you have some initial observations, look for mathematical relationships between the inputs and outputs. Is the output always a multiple of the input (n × k)? Does it involve bitwise operations like n XOR k? Could it be following a known mathematical sequence, such as Fibonacci, Catalan, or Tribonacci numbers? Recognizing these patterns early can give you a significant edge over competitors who are still guessing. If the problem isn’t purely mathematical, try to break down the output structure—for example, if the output is a string, check whether it’s being reversed, cyclically shifted, or encoded in ASCII values. If the output is an array, analyze how elements are being rearranged, sorted, or grouped.</p>
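<p>One way to make this systematic is a tiny hypothesis checker: encode each candidate rule as a function, then keep only the rules consistent with every observed pair. The candidate rules below are illustrative guesses, not actual contest logic:</p>

```python
# Fibonacci with F(1) = F(2) = 1
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Candidate hypotheses for the hidden transformation (illustrative only)
candidates = {
    "square":    lambda n: n * n,
    "double":    lambda n: 2 * n,
    "xor_3":     lambda n: n ^ 3,
    "fibonacci": fib,
}

# Input-output pairs gathered by probing the black box
observed = [(1, 1), (2, 1), (3, 2), (4, 3), (5, 5)]

# Keep only the hypotheses that explain every pair
matches = [name for name, f in candidates.items()
           if all(f(n) == out for n, out in observed)]
print(matches)  # only the Fibonacci hypothesis survives
```

<p>Each new probe either eliminates hypotheses or strengthens the survivors—the same falsification loop you run informally in your head during the contest.</p>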
<p><em>A key strategy in Reverse Coding is to think in terms of state transitions</em>. Ask yourself: What happens when you increase the input? Is the pattern strictly dependent on the current input, or do previous inputs influence the result? Many problems may involve hidden state machines where the output is affected by a previous sequence of inputs. Recognizing dependencies between inputs can help you reconstruct recursive functions or automata-based transitions. Finally, while some problems may seem cryptic at first, it’s important to not overcomplicate your approach. The logic is hidden, not impossible—once you uncover the underlying pattern, implementing the solution will often be straightforward. Staying calm, testing methodically, and thinking outside the box are the keys to dominating Reverse Coding challenges.</p>
<h2 id="heading-final-thoughts"><strong>Final Thoughts</strong></h2>
<p>Reverse Coding is more than just a competition—it’s a completely different way to think about programming. Instead of solving problems, you’re creating them from scratch, decoding logic, and thinking like a problem setter.</p>
<p>As an ICPC regionalist and Codeforces Specialist, I’ve seen how crucial pattern recognition, logical deduction, and reverse engineering are in high-level contests. Reverse Coding is the perfect training ground for anyone who wants to level up their thinking and problem-solving abilities.</p>
<p>So, if you love puzzles, programming, and breaking the conventional way of thinking, <em>this is the one competition you don’t want to miss.</em></p>
]]></content:encoded></item><item><title><![CDATA[How I Met Your AI: The Matrix of Microchips]]></title><description><![CDATA[I’m sure at some point, each one of us has daydreamed about living in The Matrix. In case you haven’t, let me quickly walk you through the plot. In the movie, the protagonist, Neo, had to plug himself into a machine to enter the simulated world of th...]]></description><link>https://blog.acmvit.in/how-i-met-your-ai-the-matrix-of-microchips</link><guid isPermaLink="true">https://blog.acmvit.in/how-i-met-your-ai-the-matrix-of-microchips</guid><category><![CDATA[neuralink]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[AI Alignment]]></category><category><![CDATA[braincomputerinterface]]></category><category><![CDATA[Human Cognition]]></category><dc:creator><![CDATA[Drashti Shukla]]></dc:creator><pubDate>Tue, 14 Jan 2025 13:26:26 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1736852481879/35a98aa6-138a-4379-a8ee-897389c7a5e5.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I’m sure at some point, each one of us has daydreamed about living in <em>The Matrix</em>. In case you haven’t, let me quickly walk you through the plot. In the movie, the protagonist, Neo, had to plug himself into a machine to enter the simulated world of the Matrix, and that was how he could access the virtual reality of this world. He could manipulate the simulation and gain abilities far beyond normal human limits.</p>
<p><em>That’s pretty mind blowing, isn’t it?</em></p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXfA0TAmgxOEht9o2DjGdJCnQa7BcbUuKehDTDY3Q4L0VnqUZZ3ODnpxFTKV9_TvlWWEYrtAmgELK9DxC5ydWg-Mv2qEav0Llifcrm_Q4REV9Akx--qxhnCp7HxbvM6AqrttxOGV?key=0ViumVNGaQdtSMQ_IhjMFQ" alt class="image--center mx-auto" /></p>
<p>But what if I told you that in real life, we’re getting closer to having that machine plugged <em>inside</em> us?</p>
<p><em>Before you ask, “Drashti, what are you on?”—let me stop you right there. I’m serious!</em></p>
<p>I know, I know, it might sound like something straight out of a cyberpunk novel, but humans merging with technology isn’t just some wild sci-fi fantasy. It’s actually been in the works for decades. Flashback to the '90s, when the first human microchip implants were used primarily for medical purposes, like tracking health data or helping people with disabilities.</p>
<p>But beyond that, the real vision was always bigger.</p>
<p>…<em>what if we could actually use technology to enhance our minds?</em></p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXdGGtxju0lKdyxvUlMvEA2GcLMozrvOmg-ye8wZNQBWdH4r9_2wBs60VQRjZEwT2W_SX1l1YRY3Kf2-BzBmDZm0eMdG8bCqmTcx5qR07oHg2Wn3JWlSgOIdpTrtE60Yh5SQBNmH3A?key=0ViumVNGaQdtSMQ_IhjMFQ" alt class="image--center mx-auto" /></p>
<p>Stumbled across <strong>Neuralink</strong> yet?</p>
<p>Founded in <strong>2016</strong>, <strong>Elon Musk’s</strong> <strong>Neuralink</strong> set out to turn this once wild, almost outlandish dream into reality, with the ambitious vision of merging the human brain with advanced technology to treat neurological disorders and eventually enhance human capabilities. <em>And it has been making some serious waves.</em></p>
<p>A recent survey states that <strong>Neuralink</strong> has a total of <strong>61 patents globally</strong>, with 18 granted so far. Over 80% of these patents are still active, providing the company with ongoing protection for its innovations. (<a target="_blank" href="https://www.reuters.com/technology/musks-neuralink-valued-about-5-bln-despite-long-road-market-2023-06-05/">Source</a>)</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXdN5jnmU2FZtCCTX8LKYv7FVjbw5yhC_g719mPk_ZLvYzVTsyfAyBkM6b6znLZ7c8fJwORWCtxi4Z-OXZNR4gvq5VifKQqlMMtmCtSDdXg51YHJrNoQM9Do4dX5T7z_g--U7Eed?key=0ViumVNGaQdtSMQ_IhjMFQ" alt /></p>
<p>Now that we've had a glimpse of this groundbreaking innovation, it's time to dive into the who, what, where, and how of it all.</p>
<h3 id="heading-the-mind-machine-integration"><strong>The Mind-Machine Integration</strong></h3>
<p>Think of it this way: over 86 billion neurons make up your brain’s complex and intricate network, and your brain never catches a break. To keep everything running seamlessly, these neurons continuously process information, sending signals to and receiving them from every part of the body.</p>
<p>However, it's a monumental effort to maintain such a multifaceted and nuanced data system around-the-clock without interruption. Imagine you are driving a car in an unfamiliar location with no map or GPS and numerous paths. One can easily veer off course or lose their way entirely. In a similar way, confusion can occasionally overwhelm your brain. If only there was a smart GPS in your brain– a system that could efficiently and precisely direct those brain signals.</p>
<p>This is where the concept of <strong>microchipping</strong> comes into play.</p>
<p>By creating a direct interface between your brain and external devices, the Neuralink microchip helps re-establish lost connections or enhance existing ones. For instance, in the case of someone with a spinal injury, the chip could act as a conduit, helping signals from the brain bypass the damaged area and restore mobility.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXe3uKdHLTrseGWIh2z0K3-zhGZbmKXrt1bTNEKLaF_A_aI2I4CpuIRvNCTylPW1zJQIfY0AzPOXtKI1RmqSDqUnaASHhYM1EXW1-LjQ6enE7MEfxdWFv1qcV5Ag8YsP37MDDVdcmA?key=0ViumVNGaQdtSMQ_IhjMFQ" alt class="image--center mx-auto" /></p>
<h3 id="heading-decoding-the-how"><strong>Decoding the ‘How’</strong></h3>
<p>The N1 chipset, a <strong>coin-sized device</strong> with a diameter of only <strong>8 mm</strong> that is implanted straight into the skull, is the <em>brains</em> behind Neuralink's breakthrough. It blends perfectly with the neurons in the brain by using incredibly <strong>fine wires</strong>—thinner than a human hair strand. The operation is done by a <strong>robotic surgeon</strong> who steers clear of the arteries and veins in that particular area of the brain. Multiple chips can be inserted for complex circumstances, offering even more coverage and functionality.</p>
<p>Now I’m not saying that you could just plug in the microchip and learn kung fu in a few seconds like in the mind-blowing movie, but think of it as the Matrix-<em>lite</em> version: no cables hanging out of your head, just a sleek, implantable device.</p>
<p>Following successful implantation, Neuralink records brain impulses, transforms them into digital data, and sends that data to external devices such as computers and prosthetic limbs.</p>
<p>Neuralink eliminates the need for large, obtrusive equipment by using <strong>wireless technology</strong> to transmit data between the brain and computers. Each of the <strong>1,024 electrodes</strong> on the chip can record or stimulate impulses. These electrodes are arranged in <strong>64 threads</strong>, with <strong>16 electrodes per thread</strong> and <strong>200 microns</strong> separating each electrode. The robotic surgeon makes an incision in the skull that is just a little bigger than the chip itself, then meticulously <em>sews</em> the electrodes into the brain. <em>(Umm, honestly, now that I think about it, I’ll take back the idea of implanting a chip and listening to Spotify.)</em></p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXeYSGHZhzwAYvCyqQNc7zgXpRS4s6BctMyzWsg6dttY2HmsSygGB4i1rzxHyKCvA6aL7nj22XeMx0OOtL4yrhyPmHB6DQRVRdMILHigguprZuFNpKYfU_FtpWvHIEd1AGzTIbMp?key=0ViumVNGaQdtSMQ_IhjMFQ" alt /></p>
<h3 id="heading-the-neural-timeline"><strong>The Neural Timeline</strong></h3>
<p><em>But how exactly did this Einsteinian moment come about? How did this bizarre idea take shape?</em></p>
<p>Let’s rewind to the pre-COVID era of <strong>2017</strong>, when <strong>Neuralink</strong> submitted its first patent application for a state-of-the-art technology called “<strong>Neural Lace</strong>”: incredibly small electrodes with the ability to <strong>monitor brain activity</strong>. It might sound a little surreal, but this foundational research paved the way for the extraordinary advancements that have followed.</p>
<p>Two years later in <strong>2019</strong>, Musk and his team unveiled the groundbreaking <strong>Brain-Computer Interface</strong> technology, which involved inserting flexible, ultra-thin electrodes into the human brain. Theoretically, these electrodes could allow individuals to control external devices such as computers or prosthetic limbs—<em>using nothing but their thoughts.</em></p>
<p>As a proof of concept, a demonstration was organized, streamed live from Neuralink’s headquarters, where the team introduced <strong>Gertrude</strong>, a pig implanted with Neuralink. Tracking her neural signals during movement showcased the device’s impressive functionality and immense potential.</p>
<p>“<em>The public's reaction was mixed. Some were captivated by the potential of this technology, envisioning future applications in medicine and human augmentation. However, others expressed skepticism and concern, questioning the ethical implications and the feasibility of such advancements. Critics also noted that while the demonstration was impressive, it primarily showcased existing neuroscience capabilities rather than groundbreaking innovations. -BBC</em>”</p>
<p>Fast forward to <strong>2021</strong>, and the burning question was finally going through its trials:</p>
<p><em>Could Neuralink chips be implanted into humans?</em></p>
<p>Among the first experiments, the objective was clear: to utilize the brain-computer interface to <strong>restore mobility</strong> in patients with severe spinal cord injuries and neurological disorders. As part of <strong>Neuralink's</strong> <strong>PRIME</strong> (Precise Robotically Implanted Brain-Computer Interface) project, a <strong>wireless brain implant</strong> was tested on <strong>quadriplegic individuals</strong>, offering a glimmer of hope and giving a glimpse of the endless possibilities that could be unlocked.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736861016033/1b748616-7003-4a90-b6f8-a379457095b1.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-monkey-mindpong"><strong>Monkey MindPong</strong></h3>
<p><strong>Neuralink</strong> gained international acclaim in <strong>April 2021</strong> for an intriguing experiment involving the <strong>macaque monkey</strong>, <strong>Pager</strong>. The N1 chip was positioned in the parts of Pager's brain in charge of hand and arm movements. The electrical signals his brain sent to control those motions were picked up by these small electrodes. Here’s where it gets interesting! Pager was initially taught to use a <strong>joystick to play a basic video game</strong>. The Neuralink device continued to <strong>record his brain activity</strong> while he played, learning to decipher the signals associated with his hand movements.</p>
<p>Once the team had enough data, they took away the joystick. But Pager kept playing the game, this time <strong>controlling the action on the screen purely with his mind</strong>. The chip translated patterns of his brain activity into real-time commands for the game. By the end, Pager was successfully playing a pong-like video game, proving that <strong>Neuralink’s</strong> technology could interpret neural signals and control external devices seamlessly.</p>
<p><em>Allow me to amuse you even more—checkout this YouTube video posted by Neuralink.</em></p>
<p><a target="_blank" href="https://youtu.be/rsCul1sp4hQ?si=k1zRvapBarBokeKI">Monkey MindPong</a></p>
<p>This remarkable demonstration displayed the potential of Neuralink to help people with paralysis or other motor impairments regain control over devices and, perhaps one day, parts of their own bodies. Technology sure has a way of blurring the lines between reality and the kind of futuristic worlds we usually only see in sci-fi movies.</p>
<h3 id="heading-the-now-and-the-next"><strong>The Now and The Next</strong></h3>
<p>Presently, <strong>Neuralink</strong> is diving into the medical world with groundbreaking applications—like <strong>treating neurological disorders</strong> such as Parkinson’s, epilepsy, and Alzheimer’s. It’s also exploring ways to restore movement for those affected by paralysis and working on enhancing both hearing and vision.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXfPEx2CAfrHua1ZMEywLN1VTEJeIeWuuVVw5v5P1iuVlHWtS1YwSdMFSx7QSSLsnz0ZScY7hYkTmvH00WxKx4zaBSuRu3avYFR45sByBYPAh8pCGZLQ1ABqoorU7lko3HETX-FJ?key=0ViumVNGaQdtSMQ_IhjMFQ" alt /></p>
<p>So, what is <strong>Neuralink's</strong> final aim?</p>
<p>It’s to make it possible for humans to connect directly with AI, effortlessly exchanging thoughts and commands in real-time through a brain-computer interface. This vision stems from Mr. Musk’s belief in the urgent need for humans to stay ahead of the curve, especially given the rapid, exponential evolution of artificial intelligence. <strong>Its goal is to close the gap between the human mind and machines</strong>, ensuring that people don’t just keep up with AI but play an active role in shaping the future it creates.</p>
<p><em>Imagine a world where telepathy is a reality, thoughts are transferred directly between minds, and communication transcends linguistic barriers.</em></p>
<p>Although, let’s not give in to the temptations of dystopia just yet.</p>
<h3 id="heading-behind-the-breakthrough"><strong>Behind the Breakthrough</strong></h3>
<p><em>Have I been praising Neuralink a little too much? Well then, time to flip the coin.</em></p>
<p><strong>Neuralink</strong> carries a great many technical and medical hazards that must be carefully assessed. The technology requires an invasive operation that involves making an <strong>incision in the skull</strong>. While the process is designed to be precise with robotic assistance, complications such as <strong>infection, inflammation, or thread retraction</strong> remain possibilities.</p>
<p>The electrodes may eventually <strong>disrupt the brain's normal functioning</strong>. Moreover, problems like <strong>device malfunction, battery failure, or disturbances in communication signals</strong> might reduce the implant's performance and prompt additional surgeries for repair.</p>
<p>Privacy and security are also pressing concerns. The way Neuralink devices <strong>capture and transmit cerebral activity</strong> naturally raises the question: <em>who controls this extremely private information, and how secure is it?</em></p>
<p>On a broader scale, the long-term effects of such a device remain unknown.</p>
<p><em>Could prolonged use of N1 chips lead to neurodegeneration or cognitive decline?</em></p>
<p><em>What about the psychological impact of using technology to even think, move, and communicate?</em></p>
<p><em>What would happen if someone who has become accustomed to using Neuralink for routine tasks faced device malfunction?</em></p>
<p>As <strong>Neuralink</strong> moves forward, these questions demand further research and open discourse. These risks serve as a reminder that striking a balance between the potential to transform medicine and advance human potential while ensuring safety and ethical conduct is no easy task.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXd3DX2kk2N5DCNYii88wGKufs1YG_1y3xmrFYDf7n5jztXLOzlf187wnvWfiGJj15EBTopQA_hTJmKOoGnHGXZk_yWiV7OUTdiRRRc6Ffzd3lFDK6jkM5fZyIcCuz9sHgGQe7VW5w?key=0ViumVNGaQdtSMQ_IhjMFQ" alt class="image--center mx-auto" /></p>
<h3 id="heading-epilogue"><strong>Epilogue</strong></h3>
<p>This fine line between risk and reward raises profound questions about the kind of future we want to create. Mr. Musk, the visionary behind this creation, has often shared his thoughts on achieving <strong>AI alignment</strong>—a concept where humans can merge with AI in a controlled, harmonious way, ensuring that this integration remains beneficial and safe for society. In the long run, he sees Neuralink as a way to foster a <strong>symbiotic relationship</strong> where both humans and machines can enhance each other’s abilities.</p>
<p>Microchipping pushes the boundaries of what it means to be human, merging technology with the brain in ways that were once pure science fiction. It’s a step into a future where the line between man and machine begins to blur—a world where our thoughts could control devices, and technology in turn, could enhance our abilities. But with such innovation come big questions.</p>
<p><em>Will this lead to incredible advancements that improve our lives and understanding of each other? Or could it take us further from what makes us human in the first place?</em></p>
<p>The boundary between man and machine continues to fade, and maybe, in the end, <em>the Matrix doesn’t close with a plug-in, but with the seamless integration of the mind and machine</em>.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXdaKKbbOA9mrOkC0uAEI_UCMbNsohgfpoikYZ1ruVveLl5EUN_ot5us6gZWoBC1iBY76f-A5iFH_lvsuBRodd2FSt-cHy6aC450_DaDXweWJIM97I81vK4VprriXVXF0rociVyl?key=0ViumVNGaQdtSMQ_IhjMFQ" alt /></p>
]]></content:encoded></item><item><title><![CDATA[Whispers Between the Hovers: The Magic of Micro-Interactions]]></title><description><![CDATA[Enter the Realm of Micro-Interactions
You get the impression that you've entered a tiny, private coffee shop where every little detail has been carefully considered thanks to the comfortable seats, pleasant lighting, and background sound of cups clin...]]></description><link>https://blog.acmvit.in/micro-interactions</link><guid isPermaLink="true">https://blog.acmvit.in/micro-interactions</guid><category><![CDATA[Design]]></category><category><![CDATA[UI]]></category><category><![CDATA[UX]]></category><category><![CDATA[ui ux designer]]></category><category><![CDATA[microinteraction]]></category><category><![CDATA[pullToRefresh]]></category><category><![CDATA[animations]]></category><dc:creator><![CDATA[Nishtha Aggarwal]]></dc:creator><pubDate>Wed, 01 Jan 2025 11:18:22 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1735727434955/9adfe389-ee16-4086-90c0-288b15ac45df.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-enter-the-realm-of-micro-interactions">Enter the Realm of Micro-Interactions</h3>
<p>The comfortable seats, pleasant lighting, and background clink of cups give you the impression that you've entered a tiny, private coffee shop where every little detail has been carefully considered. Every component has been thoughtfully crafted to ensure your comfort and a seamless experience that extends beyond just a cup of coffee.</p>
<p><em>"Feels familiar, doesn't it?"</em></p>
<p>Imagine a world in which each tap, click, and scroll is designed with purpose: to guide, instruct, and entertain. This is the realm of micro-interactions, the small elements that turn a simple interface into one that is genuinely engaging.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXcJavcCnVAsppCh1h1_ptrWstMETGj0oLMXzxwaoU7wtihf8eiy40bXcs3AmuOqe2k8nLtkyEiUdeZlVWqK5KcqXszf7xwvcg18g2tSZlw7aYwQFOUtmka7OnHQFHWFvL7OVQFFZA?key=8Tj08DycwrhTzBt_9kK4Eg" alt /></p>
<h3 id="heading-the-magic-unveiled">The Magic Unveiled</h3>
<p>Even though micro-interactions may appear minor or incidental, they give digital experiences flavor, much like spices do in cooking. Subtle vibrations, animations, and visual cues are what give an interface life. Though frequently overlooked, micro-interactions are essential to the user experience: they provide immediate feedback, walk users through tasks, and add charming details that make an interface memorable. With these careful touches, a decent UI can become a genuinely delightful and fulfilling experience.</p>
<p>Dan Saffer, a leading voice in the world of UX, encapsulates this approach with insightful simplicity:</p>
<p><em>“Micro-interactions are an exercise in restraint, in doing as much as possible with as little as possible. Embrace the constraints and focus your attention on doing one thing well. Mies van der Rohe’s mantra of ‘less is more’ should be the micro interaction designer’s mantra as well.”</em></p>
<p>In the digital age, micro-interactions are the small but powerful components that let users interact with an interface in a natural, human way; they are much more than bells and whistles. These little details make the digital world feel a touch livelier, whether it's the joyful bounce of an app icon when it opens or a faint glow that signals a button is ready to be tapped. They give us constructive feedback, lead us through tasks, and even inject some humor into the process.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735724672766/8f375eba-8398-495c-a85d-a1d465e3b882.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-the-four-elements-of-micro-interactions">The Four Elements of Micro-Interactions</h3>
<p>Micro-interactions are made up of four main components: <strong>Triggers</strong>, <strong>Rules</strong>, <strong>Feedback</strong>, and <strong>Loops &amp; Modes</strong>. Each part plays a role in creating a seamless, enjoyable user experience. Let's explore each one with visual analogies to bring them to life.</p>
<h4 id="heading-1-trigger"><strong>1. Trigger</strong></h4>
<p>The <strong>trigger</strong> is the starting point of a micro-interaction, activated either by the user or the system. User-triggered actions may involve clicking, swiping, tapping, or scrolling, while system-triggered actions happen automatically under certain conditions.</p>
<p>Picture a chef preparing a meal. The moment they chop the first ingredient is a trigger, setting off a series of actions: sautéing, seasoning, and plating. Similarly, in digital design, a user’s initial click, page refresh, or swipe initiates a series of responses that guide them through their task.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXcJ3j61ClsAh3bp4vVz6T7kuP4AhjR5NRWErF-fIe6DYkbo9AvFGa3HqM8ee92EdWx6T5HTihrv7BW0p6l2ND6Tg9wFf_F1O-kI_k2JthQfkZol7BC1vHz_8Q8gx3hXb62Sfxry1Q?key=8Tj08DycwrhTzBt_9kK4Eg" alt /></p>
<h4 id="heading-2-rule"><strong>2. Rule</strong></h4>
<p>The <strong>rule</strong> defines what happens once a micro-interaction is triggered; it’s the interaction’s "instruction manual," determining how the system responds to the user’s or the system’s trigger.</p>
<p>Imagine tapping the theme icon on your device. The rule here is simple: tapping the icon should switch between light and dark modes. This principle applies to both user- and system-triggered interactions, ensuring that each response aligns with the user’s journey and expectations.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXdi6gjNvf7r2LoXL7z7lSZxHhkvA6iCbTVo0ORcKDxFdRZR6ytN7A1LaQ_Xt-rsT3CBuziDncJ4_MjYxI8b4s5vBWJ1Egx-bM2EsjSrQ8dirsSxly-z53Ybsx0h-aR3vfFnvhat_A?key=8Tj08DycwrhTzBt_9kK4Eg" alt /></p>
<h4 id="heading-3-feedback"><strong>3. Feedback</strong></h4>
<p><strong>Feedback</strong> lets users know what’s happening during a micro-interaction. It’s the visual or audio response to an action, providing users with real-time updates.</p>
<p>During a payment process, a red border might appear around a card number field if the number is invalid, while a green border signals that it is correct. This instant feedback, much like an audience applauding an actor's performance, energises users and gives them confidence as they navigate the experience. These responses improve user satisfaction and add interest to the interface.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXfQhsci34eIi1DVjYFgv2UOpDbf-mjlNFOe6Nh1x-nSAO28Z-HIUiDNF4Q2lME-ShhV7aQNvUs5q7mzC_3K5MBGq_5RdDGr51--7w3J3v2Nr6zHyyZfV8J7uayJ9xE6HV3E9_Ql?key=8Tj08DycwrhTzBt_9kK4Eg" alt /></p>
<h4 id="heading-4-loops-and-modes"><strong>4. Loops and Modes</strong></h4>
<p>The behaviors and length of a micro-interaction are determined by <strong>loops and modes</strong>. The current configuration that remains in effect until it is altered is called a mode. For instance, unless the user selects a different project, the default project in a time-tracking app stays selected. Conversely, loops control the duration of an interaction.</p>
<p>The timer loop in a time-tracking app keeps counting until the user stops it. This continuous motion creates a dynamic, constantly updating display that keeps users engaged.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXdDnQH3KglOI0yctoJg8Z3sJSdlX9-5_ZRcwDvguk5rzTbfnb8bA91U_gv8kyF0kc69Y_0xbqaXcpKNxnWPdFAcyd8vO5DU_kuRGKBAR4Fsp6WSgwwsrCChjUVgKGGZ3SGxGm4GQQ?key=8Tj08DycwrhTzBt_9kK4Eg" alt /></p>
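<p>Stripped of visuals, the four elements map neatly onto a small state machine. The sketch below is a hypothetical, framework-agnostic model of the time-tracking example: a tap is the trigger, the rule toggles the timer, the returned message is the feedback, and the running counter is the loop (with the running flag as the mode). All names and messages are invented for illustration.</p>

```python
# A minimal, framework-agnostic model of one micro-interaction:
# trigger -> rule -> feedback, with a loop running while the mode is active.

class TimerInteraction:
    def __init__(self):
        self.running = False   # mode: the current configuration, kept until changed
        self.elapsed = 0       # advanced by the loop while the mode is "running"

    def tap(self):
        """Trigger: the user taps the start/stop button."""
        # Rule: a tap toggles between running and stopped.
        self.running = not self.running
        # Feedback: tell the user what just happened.
        return "timer started" if self.running else f"stopped at {self.elapsed}s"

    def tick(self, seconds=1):
        """Loop: the timer keeps counting until the user stops it."""
        if self.running:
            self.elapsed += seconds
```

<p>A first <code>tap()</code> returns "timer started"; after a few <code>tick()</code> calls, a second <code>tap()</code> returns feedback such as "stopped at 3s".</p>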
<h3 id="heading-why-micro-interactions-steal-the-show">Why Micro-Interactions Steal the Show</h3>
<p>One of technology's biggest strengths is its capacity to empower and engage people. A truly great experience should go beyond simple usefulness to be captivating and unforgettable, and micro-interactions are essential to achieving this: they improve how a product or service looks, feels, and works.</p>
<p>Micro-interactions improve the user experience in the following significant ways:</p>
<ul>
<li><p><strong>Promoting Engagement:</strong> Users are encouraged to interact with the UI through interactive touchpoints.</p>
</li>
<li><p><strong>Status Indication:</strong> Users' expectations are managed with subtle animations that notify them of loading times.</p>
</li>
<li><p><strong>Error Prevention:</strong> Quick feedback and direction cut down on errors and frustration, which lowers churn rates.</p>
</li>
<li><p><strong>Brand Identity:</strong> Distinct animations add character to a brand and leave a lasting impact.</p>
</li>
<li><p><strong>Human Touch:</strong> Fun and relatability are enhanced by whimsical nuances.</p>
</li>
<li><p><strong>Real-Time Feedback:</strong> Users feel reassured that their actions are acknowledged by prompt responses.</p>
</li>
<li><p><strong>Improved user interface:</strong> Delicate animations produce a natural, enjoyable experience.</p>
</li>
<li><p><strong>Faster Adoption:</strong> New users might adjust more rapidly when they have friendly interactions.</p>
</li>
<li><p><strong>Task Simplification:</strong> Micro-interactions simplify difficult tasks by dividing them into smaller, more manageable steps.</p>
</li>
</ul>
<h3 id="heading-iconic-micro-interactions-on-the-ux-stage">Iconic Micro-Interactions on the UX Stage</h3>
<p>Micro-interactions have been effectively incorporated into a number of well-known platforms to improve user experience, increase engagement, and communicate important information. Here are a few noteworthy examples:</p>
<p><strong>Instagram’s Pull-to-Refresh</strong></p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXeNW0GvGcEIhs4hA3cyKeiYTFq99bKQ27DhaYlaitSml29b2XE2MrFkjJZFkaHyIS8TPOJ82ZwIODgm-zgo3Zvz3CgEHVeJopjshPxyFBD2dkT5z-vPCi04TO93GUYj8KIHySOv?key=8Tj08DycwrhTzBt_9kK4Eg" alt /></p>
<p>Loren Brichter's Pull-to-Refresh gesture, first used in the Tweetie app (2010), lets users refresh content simply by pulling down a list. Mobile platforms have since widely embraced this intuitive design.</p>
<p>Instagram's implementation is praised for its fluid, captivating experience: smooth transitions carry the motion from pull to refresh, progressive feedback changes the animation as the user pulls, and subtle physics add realistic bounce effects.</p>
<ol>
<li><em>Basic Animation Flow</em></li>
</ol>
<ul>
<li><p>Pull Gesture Detection: The pull-to-refresh animation is triggered by a downward swipe.</p>
</li>
<li><p>Progressive Animation: As the user pulls, icons—like a spinning arrow—animate proportionately.</p>
</li>
<li><p>Release and Load State: The animation changes to a loading spinner and the content refreshes when the threshold is reached.</p>
</li>
<li><p>Finalization: The animation seamlessly resets after loading.</p>
</li>
</ul>
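<p>The four steps above can be sketched as a small state machine. This is an illustrative model only; the threshold value and state names are made up and are not Instagram's actual implementation:</p>

```python
# Hypothetical pull-to-refresh state machine: idle -> pulling -> loading -> idle.
# The 80-pixel threshold is an invented tuning value.

THRESHOLD = 80.0  # pixels the user must pull before release triggers a refresh

class PullToRefresh:
    def __init__(self):
        self.state = "idle"
        self.progress = 0.0   # 0..1, drives the progressive icon animation

    def drag(self, pull_px):
        """Pull gesture detection: animate proportionally to the pull distance."""
        if self.state in ("idle", "pulling"):
            self.state = "pulling"
            self.progress = min(pull_px / THRESHOLD, 1.0)

    def release(self):
        """Release and load: past the threshold, show the loading spinner; otherwise snap back."""
        if self.state == "pulling" and self.progress >= 1.0:
            self.state = "loading"
        else:
            self.reset()

    def finish(self):
        """Finalization: content has refreshed, so the animation resets."""
        self.reset()

    def reset(self):
        self.state = "idle"
        self.progress = 0.0
```

<p>Dragging 40 px leaves the control at half progress and releasing snaps it back to idle, while dragging past 80 px and releasing enters the loading state until <code>finish()</code> is called.</p>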
<ol start="2">
<li><em>Smooth Transition Elements</em></li>
</ol>
<ul>
<li><p>Easing Routines: Custom easing functions produce responsive, flowing animations and add a spring effect upon release.</p>
</li>
<li><p>Physics-Based Animations: Realistic effects like a bounce-back upon release improve the pull-to-refresh experience.</p>
</li>
</ul>
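<p>An easing routine is just a function mapping normalized time <em>t</em> in [0, 1] to animation progress. The overshoot-and-settle spring feel described above can be approximated with the classic cubic "back" ease-out curve; the overshoot constant below is a conventional tuning value from that easing family, not taken from Instagram's code:</p>

```python
def ease_out_back(t, overshoot=1.70158):
    """Ease-out with overshoot: the value races past 1.0, then settles back."""
    t -= 1.0
    return t * t * ((overshoot + 1.0) * t + overshoot) + 1.0

# Sample the curve once per "frame", as an animation loop would.
frames = [round(ease_out_back(i / 10), 3) for i in range(11)]
```

<p>The samples start at 0.0, overshoot above 1.0 near the end of the gesture, and land exactly on 1.0, which is what gives the release its springy bounce.</p>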
<ol start="3">
<li><em>Implementation Techniques</em></li>
</ol>
<ul>
<li><p>iOS (UIKit/SwiftUI): UIRefreshControl provides pull-to-refresh; subclass it and add custom views for distinctive animations.</p>
</li>
<li><p>Android: Customize SwipeRefreshLayout around a RecyclerView with animations.</p>
</li>
<li><p>Web: To animate SVGs and icons, use JavaScript touch events, CSS animations, or frameworks like GSAP.</p>
</li>
</ul>
<p>The responsiveness and tasteful micro-interactions of Instagram's pull-to-refresh animation make it stand out. It provides a user experience that is both intuitive and visually captivating by combining subtle physics, progressive feedback, and smooth transitions.</p>
<p><strong>Duolingo’s reward animations</strong></p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXfchkGOqZDAGDFIHj1AmnG9HYF0phcq2TVdm1AnW23FwW2UbTUWpVtiO3nfyIPM979lDIkJgXSs2AGs9LcUVu3fHl1xl1oCb4QqbeznGtH7sWjmH4n2SkChQO328XOhyRVq-9Svng?key=8Tj08DycwrhTzBt_9kK4Eg" alt /></p>
<p>Reward animations are a key component of Duolingo's entertaining and captivating app, employing interactive graphics, audio, and behavioral psychology to strengthen users' sense of achievement. Here's a thorough explanation of how they operate:</p>
<ol>
<li><em>Instantaneous gratification and progressive rewards</em></li>
</ol>
<ul>
<li><p>Gems, Badges, and XP: Duolingo provides users with immediate satisfaction by rewarding them with animated gems, badges, or XP for finishing lessons or streaks.</p>
</li>
<li><p>Instant Feedback: Users are inspired to continue learning when they receive incentives right away, which gives them a sense of accomplishment.</p>
</li>
</ul>
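<p>The reward loop above reduces to a few lines of logic. The XP value and streak rule below are invented for illustration; Duolingo's real reward economy is more elaborate:</p>

```python
# Toy model of instant rewards: finishing a lesson immediately grants XP,
# and practicing on consecutive days extends a streak. The XP amount and
# streak rule are invented for illustration.

def complete_lesson(profile, day):
    """Return the updated profile plus the feedback message shown to the user."""
    profile = dict(profile)               # don't mutate the caller's copy
    profile["xp"] += 10                   # instant gratification: XP right away
    if day == profile["last_day"] + 1:
        profile["streak"] += 1            # consecutive day: the streak grows
    elif day > profile["last_day"] + 1:
        profile["streak"] = 1             # missed a day: the streak resets
    profile["last_day"] = day
    return profile, f"+10 XP! Streak: {profile['streak']} days"
```

<p>Completing lessons on days 1 and 2 yields a 2-day streak; skipping to day 5 resets the streak to 1 while the XP keeps accumulating.</p>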
<ol start="2">
<li><em>Visual and Audio Feedback</em></li>
</ol>
<ul>
<li><p>Confetti Effects &amp; Animations: Pop-ups and colorful bursts generate excitement and visually celebrate victory.</p>
</li>
<li><p>Music &amp; Sound Effects: Chimes, fanfare, and lively music add to the festive atmosphere.</p>
</li>
<li><p>Responsive animations: To keep the interface vibrant, components such as gems and streak counters respond with subtle movements or lighting effects.</p>
</li>
</ul>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXf3LQXu-k0GGMa1iz9ILfuwl5SPGYn7_jBxU3Dj32vnfpe0_qRS61bjiDZFSvSpIU56AXQBdDS6wVNr-R0rq7fNW12D4-Bi9WAjNih_4xw1nBvbqifeB_WwWs_fvNy0eID9HVLctg?key=8Tj08DycwrhTzBt_9kK4Eg" alt /></p>
<ol start="3">
<li><em>Gamified Elements</em></li>
</ol>
<ul>
<li><p>Leaderboard Animations: These dynamic animations encourage users to keep up their activity and create a sense of rivalry by showing them moving up or down in the rankings.</p>
</li>
<li><p>Achievements &amp; Badges: Duolingo gives out badges for reaching milestones like streaks or finishing lessons. When a badge is acquired, it flashes or glows, which motivates users to get more.</p>
</li>
</ul>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXdGTp8VV0F_-RwC24BlG-ReeTG9_6236a_jRzVhP5BcIV-PMY9CMIOaoKCf1otNgQsk8Wse-_pEKEGwdFk0aGddtkGG7zzrWlBw8S2F4F-fV8quTiv-V3L9Qeg8yLhPx1JQJnL2nA?key=8Tj08DycwrhTzBt_9kK4Eg" alt /></p>
<ol start="4">
<li><em>Psychological Impact of Reward Animations</em></li>
</ol>
<ul>
<li><p>Positive Reinforcement: To encourage learning and user retention, Duolingo employs immediate, visible rewards.</p>
</li>
<li><p>Sense of Progress: Users are kept interested by animations that clearly convey their progress through achievements, levels, and streaks.</p>
</li>
<li><p>Dopamine-Driven Engagement: Through anticipation and reward, reward animations cause dopamine to be released, which in turn creates a potent engagement cycle.</p>
</li>
</ul>
<p>The reward animations on Duolingo combine gamification, instant gratification, and multisensory input to make learning a language fun and interesting. Duolingo's success can be attributed to the way it uses sound, animations, and character-driven interactions to turn a potentially boring activity into an enjoyable and rewarding experience.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXfMon2Cpxj13KvF8VIqfss-NHXok4Igj1jqOrXbsbOhPsSHMlDyTwcvBwWzcNKlD3HOe8R02Nv_U9CPTvjTIVfaxlQ0kHqpnrF2rDDTOrqJTwjj4fuTAwz3BWEBawl5RsrqCTlDyg?key=8Tj08DycwrhTzBt_9kK4Eg" alt /></p>
<h3 id="heading-impact-of-poorly-designed-micro-interactions">Impact of Poorly Designed Micro-Interactions</h3>
<p>The "undo send" feature in Gmail's original version is a well-known example of failure caused by missing or poorly designed micro-interactions. When users pressed "send," there was initially no clear sign that the email was being sent, and no real-time countdown showing how long they had to "undo" the action.</p>
<p><strong>Consequences of Poor Micro-Interactions in This Case:</strong></p>
<ol>
<li><p><em>Uncertainty and Panic</em>: Users who noticed a mistake just after sending could become anxious because they didn't know how long they had to reverse the send. With no visible countdown or progress animation, they couldn't tell when the email would be permanently delivered.</p>
</li>
<li><p><em>Missed Correctional Opportunity</em>: Without a clear feedback loop or an animation highlighting the "undo" button, users could miss the brief window in which the action was reversible, leaving them frustrated and regretting the send.</p>
</li>
<li><p><em>Decreased Trust:</em> In the absence of clear feedback, such as a countdown or a notification, users were less confident in the functionality of a feature meant to serve as a safety net.</p>
</li>
</ol>
<p>This example demonstrates how crucial micro-interactions, such as real-time feedback and visual cues, are for users to understand and safely interact with features, especially those that demand quick action. Without these small but important interactions, users may feel lost, unsatisfied, or uncertain, which can lead to negative experiences.</p>
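<p>A working "undo send" amounts to a delayed send with a visible undo window. A hypothetical sketch of the missing micro-interaction (the 5-second window and state names are illustrative):</p>

```python
# Hypothetical "undo send": the email is only dispatched after a grace
# period, and the UI shows the remaining seconds so users aren't left guessing.

UNDO_WINDOW = 5  # seconds; an illustrative value

class OutgoingEmail:
    def __init__(self):
        self.remaining = UNDO_WINDOW
        self.status = "sending"   # feedback: drives the visible countdown

    def tick(self):
        """Called once per second by the UI to update the countdown."""
        if self.status != "sending":
            return
        self.remaining -= 1
        if self.remaining <= 0:
            self.status = "sent"  # window closed: delivery is permanent

    def undo(self):
        """Succeeds only while the countdown is still visible."""
        if self.status == "sending":
            self.status = "cancelled"
            return True
        return False
```

<p>Ticking twice leaves three visible seconds in which <code>undo()</code> still succeeds; once the counter reaches zero the status flips to "sent" and undo is refused.</p>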
<h3 id="heading-encore-a-designers-epilogue">Encore: A Designer’s Epilogue</h3>
<p>Micro-interactions are the small but important details that make the difference between usefulness and enjoyment. As I researched this subject, I discovered how frequently little things like animations or feedback cues might have a significant impact on how people view a website or app. Apps that provide instant, clear feedback or smoothly walk you through a job using animations, for example, don't just work—they feel correct. These encounters build relationships and leave a lasting impression.</p>
<p>When I discovered that sometimes the most memorable app experiences are the ones that appear invisible, like an animation that rewards you when you complete a task or a subtle visual cue that leads you through the interface, the idea of designing for these tiny but significant moments first caught my attention. A user-centered approach is promoted by this design philosophy, which holds that paying attention to small things has a big influence.</p>
<p>As you refine your own designs, always remember to ask yourself:</p>
<ul>
<li><p><em>Is the action intuitive and does it make the user feel confident about what to do next?</em></p>
</li>
<li><p><em>Does the user know right away whether their action was successful, or should there be a clearer visual or auditory cue?</em></p>
</li>
<li><p><em>Whether playful, elegant, or professional, do the animations and transitions match the overall tone of the brand?</em></p>
</li>
<li><p><em>Does it feel fast and fluid, or does it cause delay and frustration?</em></p>
</li>
<li><p><em>Does it enrich the user experience, or is it distracting or too complex for its purpose?</em></p>
</li>
</ul>
<p>By consistently incorporating deliberate elements into each click, hover, and scroll, designers may leave a little magic in their wake, transforming the commonplace into the remarkable and adding a little more fun and significance to each user experience. These minor adjustments have the power to transform a simple interface into something genuinely unique, making consumers want to use it repeatedly.</p>
<p>Through deliberate attention to these micro-details, designers transform functional apps into delightful experiences, <em>ultimately bridging the gap between technology and human emotion.</em></p>
]]></content:encoded></item><item><title><![CDATA[Beyond the Totem: AI-Driven Dream Manipulation, The Future Of Lucid Dreaming]]></title><description><![CDATA[You're walking through a quiet Parisian street when you notice something strange. The buildings on either side of you start to rise, twisting and folding upward until they meet, forming an impossible arch overhead. A surge of exhilaration fills you—a...]]></description><link>https://blog.acmvit.in/beyond-the-totem-ai-driven-dream-manipulation-the-future-of-lucid-dreaming</link><guid isPermaLink="true">https://blog.acmvit.in/beyond-the-totem-ai-driven-dream-manipulation-the-future-of-lucid-dreaming</guid><category><![CDATA[Lucid Dreaming]]></category><category><![CDATA[AI]]></category><category><![CDATA[ConvolutionalNeuralNetworks]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[neural networks]]></category><dc:creator><![CDATA[Lakshmi Sarupa Venkadesh]]></dc:creator><pubDate>Tue, 29 Oct 2024 11:32:22 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/LiLPRqxWI9I/upload/4477ced089f1505ca80513dc84222582.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You're walking through a quiet Parisian street when you notice something strange. The buildings on either side of you start to rise, twisting and folding upward until they meet, forming an impossible arch overhead. A surge of exhilaration fills you—a sense of power at knowing that you’re in control of it all, like you’re stepping into a world that bends to your will.</p>
<p>Hmm…feels familiar?</p>
<p>You run your hand along the wall, and with a single thought, you shift the entire scene. The walls dissolve into sand, revealing an endless desert under a blazing sky. Your mind races with the thrill of possibility—<em>what else can I create? What if I just decided to—</em></p>
<p><em>But wait. This isn’t Inception.</em></p>
<p>There’s no dream architect guiding you through, no layers of subconscious built by a team of extractors. Instead, it’s you, in a controlled lucid dream, guided by AI. A headband rests on your temple, tracking your brainwaves, subtly shaping your dreams as you take charge. You’re not just visiting a world of imagination—you’re building it, with the help of technology that brings this surreal experience under your control.</p>
<h2 id="heading-the-dawn-of-controlled-dreaming"><strong>The Dawn of Controlled Dreaming</strong></h2>
<p>Lucid dreaming—the art of staying conscious within a dream—has been known and practiced for centuries, a rare skill that allows us to take the reins of our subconscious. Ancient Tibetan Buddhist texts describe dream yoga practices, while modern oneirologists have documented its potential for psychological healing and creative breakthrough.</p>
<p>But what if I told you that this experience of lucid dreaming could be achieved on command, night after night, with the convergence of artificial intelligence and neuroscience?</p>
<h2 id="heading-the-basic-idea-lucidly-put"><strong>The Basic Idea, Lucidly Put</strong></h2>
<p>AI-driven dream manipulation is about using technology, particularly AI, to monitor, influence, and guide your dreams in real-time. The core idea is to create dream states where you can become aware that you’re dreaming (lucid dreaming) and potentially control or shape your dreams. The process goes somewhat like this:</p>
<ol>
<li><p><strong>Monitoring</strong> an individual’s brain activity to detect specific stages of sleep (like REM- Rapid Eye Movement) using wearables equipped with EEG sensors that monitor your brainwaves.</p>
</li>
<li><p><strong>Classifying</strong> these different sleep stages so that dream manipulation can be initiated only during appropriate stages.</p>
</li>
<li><p><strong>Inducing lucidity</strong> with targeted stimulation. Using techniques like focused ultrasound or light and sound cues to stimulate the prefrontal cortex and trigger lucidity (awareness and consciousness while dreaming).</p>
</li>
<li><p><strong>Influencing the dream’s content</strong> or mood, enhancing control and direction within the dream. This can be done by feedback mechanisms that adjust the stimuli to help steer the dream’s narrative.</p>
</li>
<li><p><strong>Personalising</strong> the experience over time based on user-specific dream patterns and responses.</p>
</li>
</ol>
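<p>As a rough sketch, the five steps form a closed loop: classify the current sleep epoch, stimulate only during REM, and adapt from feedback. Everything below, from the stage labels to the adjustment rule, is a toy illustration rather than any real device's protocol:</p>

```python
# Toy control loop for the five steps above. Stage detection is stubbed out;
# a real system would classify EEG epochs with a trained model.

def run_night(stages, responsiveness=0.5):
    """stages: per-epoch sleep-stage labels; returns the stimulation log."""
    intensity = 0.2          # personalised over time (step 5)
    log = []
    for stage in stages:     # steps 1-2: monitor and classify each epoch
        if stage == "REM":   # step 3: only stimulate during REM
            log.append(round(intensity, 2))
            # step 4: feedback nudges intensity toward what works for this user
            intensity = min(1.0, intensity + 0.1 * responsiveness)
        else:
            log.append(0.0)
    return log
```

<p>Non-REM epochs log zero stimulation, while successive REM epochs log a gently increasing intensity as the loop adapts.</p>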
<h2 id="heading-the-technical-foundation-deep-learning-in-dream-analysis"><strong>The Technical Foundation: Deep Learning in Dream Analysis</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730120755326/014d3a06-adf1-4e83-b499-5e43b3e73037.jpeg" alt class="image--center mx-auto" /></p>
<p>Modern dream manipulation rests on sophisticated deep learning architectures that process a complex symphony of sleep-related biosignals.</p>
<p>Deep learning algorithms are particularly adept at handling intricate information, like those generated during your sleep. This data includes brainwave activity, REM cycles, heart rate, muscle activity, breathing patterns, and body temperature.</p>
<p><strong>Convolutional Neural Networks (CNNs)</strong> process your EEG spatial patterns to predict and categorise different stages of sleep and dreaming, <strong>Long Short-Term Memory (LSTM)</strong> networks analyse temporal sequences in your sleep stages, and <strong>Transformer models</strong> identify recurring dream themes and patterns, providing insights into your mental state and areas of interest.</p>
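<p>As a much simpler stand-in for those networks, the classical approach computes power in the standard EEG frequency bands with a Fourier transform; deep models learn richer versions of this spectral structure from raw signals. The sampling rate, band edges, and synthetic epoch below are illustrative:</p>

```python
import numpy as np

FS = 100  # Hz, an illustrative sampling rate
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch):
    """Power in each classical EEG band for one epoch of raw samples."""
    freqs = np.fft.rfftfreq(len(epoch), d=1 / FS)
    power = np.abs(np.fft.rfft(epoch)) ** 2
    return {name: power[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# Synthetic "deep sleep" epoch: dominated by a 2 Hz delta oscillation,
# with a small 20 Hz (beta) component mixed in.
t = np.arange(30 * FS) / FS
epoch = np.sin(2 * np.pi * 2 * t) + 0.1 * np.sin(2 * np.pi * 20 * t)
bp = band_powers(epoch)
```

<p>For this synthetic epoch the delta band dominates, the spectral signature a sleep-stage classifier would associate with deep sleep.</p>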
<p><strong>Developing Neural Interfaces for Inducing and Controlling Dreams</strong></p>
<p><strong>Neural interfaces</strong>, or <strong>brain-computer interfaces (BCIs)</strong>, interact with the brain's electrical activity to induce or control dreams. Techniques such as <strong>transcranial direct current stimulation (tDCS)</strong> and <strong>transcranial alternating current stimulation (tACS)</strong> apply electrical currents to the scalp (<em>no, we’re not trying to electrocute you :)</em> ), potentially inducing lucid dreams. Neurofeedback devices provide real-time input on brain activity, helping individuals control their brain waves to achieve lucid dreaming.</p>
<h2 id="heading-the-halo-pioneering-dream-control-technology"><strong>The Halo: Pioneering Dream Control Technology</strong></h2>
<p>The most promising development in this field is the <strong>Halo device</strong>, developed by Prophetic in collaboration with the Donders Institute for Brain, Cognition, and Behaviour.</p>
<p>This pioneering development by the tech startup has the ambitious goal of inducing and stabilising lucid dreaming using a combination of <strong>AI algorithms</strong> and <strong>focused ultrasound (FUS) technology</strong>. The study aims to determine the device's effectiveness in enhancing participants’ control over dream content, marking a significant step in human-computer interaction and neural modulation.</p>
<h4 id="heading-methodology"><strong>Methodology</strong></h4>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730120889519/10f5fd66-6285-4fa1-a2b8-9d1b71648192.jpeg" alt class="image--center mx-auto" /></p>
<p>Participants in the study wear the Halo device while sleeping, allowing it to continuously monitor their brain activity through high-resolution EEG sensors. When REM sleep is detected, the device initiates ultrasound stimulation targeting the prefrontal cortex - a critical region for self-awareness and cognitive control. This stimulation is designed to enhance the likelihood of lucid dreaming—a state in which one is aware of and can manipulate the dream.</p>
<p>Key data collected includes:</p>
<ul>
<li><p><strong>EEG Recordings:</strong> These capture brain activity patterns that are characteristic of lucid dreaming, providing insights into neurological indicators of dream lucidity.</p>
</li>
<li><p><strong>Dream Reports:</strong> Participants record their dream experiences immediately upon waking, providing qualitative data on lucidity, control over dream narratives, and emotional response to dream scenarios.</p>
</li>
</ul>
<h4 id="heading-preliminary-results-early-indications-of-effectiveness"><strong>Preliminary Results: Early Indications of Effectiveness</strong></h4>
<p>Initial results from the study reveal a promising potential for the Halo device to enhance lucid dreaming experiences:</p>
<ol>
<li><p><strong>Increased Lucid Dreaming Incidence:</strong> Participants report a significantly higher frequency of lucid dreams after using the device compared to baseline readings before the study.</p>
</li>
<li><p><strong>Enhanced Control in Dreams:</strong> Reports also indicate a greater ability to control dream environments and storylines, suggesting the device may facilitate improved mastery over one’s dream state.</p>
</li>
</ol>
<p>Quantitative measures reflect these findings:</p>
<ul>
<li><p><strong>73% increase in lucid dream frequency</strong> among participants.</p>
</li>
<li><p><strong>85% of subjects experienced improved control</strong> over their dream content and narratives.</p>
</li>
<li><p><strong>15-minute extension in lucid dream duration</strong> on average.</p>
</li>
<li><p><strong>90% dream recall rate</strong> post-study, a substantial increase from a baseline of 45%.</p>
</li>
</ul>
<p>Qualitative improvements are also evident:</p>
<ul>
<li><p>Greater clarity and vividness in dream visuals.</p>
</li>
<li><p>Enhanced control over the dream narrative, leading to a more immersive and directive dreaming experience.</p>
</li>
<li><p>Notable improvements in emotional regulation, especially during nightmares, allowing participants to navigate distressing dream content with greater ease.</p>
</li>
<li><p>Sustained awareness across different dream scenarios, contributing to an overarching sense of stability within the dream state.</p>
</li>
</ul>
<h4 id="heading-expert-insights"><strong>Expert Insights</strong></h4>
<p>Professor Guy Leschziner, a neurologist and sleep medicine specialist, views the findings with cautious optimism. He acknowledges the intriguing potential of using ultrasound to stimulate the prefrontal cortex during REM sleep, yet emphasises the necessity of rigorous studies to evaluate long-term impacts and the ethical implications of brain modulation technology. Leschziner advocates for ongoing, detailed research to better understand any extended effects of frequent lucid dreaming.</p>
<h4 id="heading-future-directions"><strong>Future Directions</strong></h4>
<p>Building on these promising findings, Prophetic aims to launch an expanded study involving a year-long brain imaging project slated for late 2024. The broader trial will focus on refining the device’s AI algorithms for more precise stimulation, making it suitable for regular consumer use. The ultimate goal is a universally accessible device capable of reliably inducing lucid dreams, appealing to both researchers and the broader public intrigued by the potential of controlled dreaming experiences.</p>
<p>The Halo device’s unique combination of hardware and AI-driven software represents a milestone in dream control technology.</p>
<p><strong>Technical Components</strong></p>
<ul>
<li><p><strong>Advanced Sensors:</strong> High-resolution EEG electrodes, precision ultrasound emitters, and temperature and motion sensors.</p>
</li>
<li><p><strong>AI Processing Pipeline:</strong> Includes real-time signal processing, sleep stage classification, dream state prediction, and ultrasound targeting optimization.</p>
</li>
<li><p><strong>Neural Interface Mechanisms:</strong> Focused ultrasound stimulation specifically targets the prefrontal cortex with a closed-loop feedback system to maintain lucidity. Adaptive stimulation patterns are personalised to each participant’s neural responses.</p>
</li>
</ul>
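<p>The closed-loop feedback idea can be illustrated with a simple proportional controller: compare a measured lucidity signal against a target and nudge the stimulation intensity accordingly. The gain, target, and the toy response model below are invented, not the Halo's actual control law:</p>

```python
# Toy proportional controller for closed-loop stimulation: the measured
# lucidity signal is compared with a target, and intensity is nudged
# toward it each cycle (all constants invented for illustration).

TARGET = 0.8   # desired lucidity score, 0..1
GAIN = 0.5     # proportional gain

def adjust(intensity, lucidity):
    """One feedback cycle: return the next stimulation intensity, clamped to [0, 1]."""
    error = TARGET - lucidity
    return min(1.0, max(0.0, intensity + GAIN * error))

# Simulate a sleeper whose lucidity roughly tracks intensity (again, invented).
intensity, history = 0.2, []
for _ in range(10):
    lucidity = 0.9 * intensity           # toy response model
    intensity = adjust(intensity, lucidity)
    history.append(intensity)
```

<p>Starting from a low intensity, the controller ramps up until the simulated lucidity approaches the target, then holds steady, an overly tidy stand-in for a real adaptive stimulation loop.</p>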
<h3 id="heading-unlocking-potential-the-magic-of-ai-enhanced-lucid-dreaming"><strong>Unlocking Potential: The Magic of AI-Enhanced Lucid Dreaming</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730120897532/50e57436-3887-4154-80b2-22950167d7ac.jpeg" alt class="image--center mx-auto" /></p>
<p>Now imagine diving into your subconscious, where aspirations unfold without limits. AI-driven lucid dreaming technology is opening doors to personal growth, creativity, and professional development.</p>
<p>Picture a public speaker transforming anxiety into confidence by rehearsing presentations in vivid dream simulations. Surgeons, too, can practice complex procedures in hyper-realistic dreamscapes, enhancing their skills and reducing stress.</p>
<p>But it’s not just about skills; lucid dreaming is a powerful emotional exploration tool. Navigate dreams where you confront unresolved feelings and rehearse tough conversations, fostering self-awareness and stronger relationships.</p>
<p>Creatives can find inspiration in dreams, painting entire dreamscapes or interviewing characters to enrich their narratives. In health, chronic pain sufferers can practice relaxation techniques in dreams, while individuals visualise movements to reinforce muscle memory and speed up recovery.</p>
<p>The scientific community is also tapping into this potential, simulating climate change effects and prototyping virtual reality experiences in low-risk environments.</p>
<p>As AI reshapes our relationship with sleep, lucid dreaming evolves into a powerful tool for creativity, healing, and excellence. Welcome to an era where our dreams become a canvas for growth and exploration!</p>
<h2 id="heading-ethical-considerations-and-safety-protocols"><strong>Ethical Considerations and Safety Protocols</strong></h2>
<p>Using AI and neural interfaces in dream manipulation raises significant privacy and ethical concerns. Collecting and storing sleep and dream data requires strong security measures to prevent misuse. Participants must give informed consent, understanding the risks and benefits. Ethical guidelines are needed to ensure dream manipulation doesn't harm users and to protect their well-being.</p>
<h2 id="heading-in-a-nutshell"><strong>In a Nutshell</strong></h2>
<p>As we stand at the threshold of mastering dream consciousness, AI-driven dream manipulation technology promises to unlock the full potential of our sleeping minds. While challenges remain in ethics, safety, and technical implementation, the future of controlled dreaming is rapidly becoming reality.</p>
<p>The question is no longer whether we can control our dreams, but how we'll use this extraordinary capability to enhance human experience, creativity, and healing. As research continues and technology evolves, we're witnessing the dawn of a new era in human consciousness—one where the boundary between waking and dreaming becomes a bridge rather than a barrier.</p>
<p><em>And hence, the totem tumbles to the ground…</em></p>
]]></content:encoded></item><item><title><![CDATA[Honey I shrunk the AI : Quantizing LLM's for Edge Hardware]]></title><description><![CDATA[One could argue that humanity’s rise to power on this planet came from its ability to walk on two legs, or the ability to throw sharp rocks at food, or even the ability to touch, hear and see at a deeper level than any other animal. However, one abil...]]></description><link>https://blog.acmvit.in/honey-i-shrunk-the-ai-quantizing-llms-for-edge-hardware</link><guid isPermaLink="true">https://blog.acmvit.in/honey-i-shrunk-the-ai-quantizing-llms-for-edge-hardware</guid><category><![CDATA[aitools]]></category><category><![CDATA[llm]]></category><category><![CDATA[quantization]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[large language models]]></category><dc:creator><![CDATA[Hemanth Balgi]]></dc:creator><pubDate>Sat, 26 Oct 2024 05:30:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1729878965623/b75a0850-e850-4f28-ba12-af07ff7acadf.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>One could argue that humanity’s rise to power on this planet came from its ability to walk on two legs, or the ability to throw sharp rocks at food, or even the ability to touch, hear and see at a deeper level than any other animal. However, one ability that is often overlooked is our ability to create, understand and synthesise speech and language - the communication tool that surpasses them all. Such is our prowess in this field that we have now created machines that can do it for us.</p>
<h2 id="heading-how-do-llms-work"><strong>How do LLMs work?</strong></h2>
<p>LLMs (Large Language Models) have achieved the seemingly impossible task of speaking and listening the way humans do, but underneath the facade of this ingenious marvel lies an enormous amount of mathematics.</p>
<p>The process by which LLMs run involves many small steps, but broadly speaking it can be divided into the following four:</p>
<ol>
<li><p>Input: taking in the prompt by the user as input.</p>
</li>
<li><p>Tokenization: breaking the prompt into smaller units called tokens, such as words or subwords.</p>
</li>
<li><p>Prediction: predicting the next token based on the prompt, using statistics and pattern recognition.</p>
</li>
<li><p>Output: the prediction process is repeated until it reaches a specific length or the end of the text.</p>
</li>
</ol>
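<p>As a toy illustration of the four steps above, here is a minimal sketch in Python. The bigram frequency table and all names in it are hypothetical stand-ins for a real model's billions of learned parameters.</p>

```python
# Toy sketch of the four-step loop: a tiny bigram table stands in
# for the statistical prediction machinery of a real LLM.

def tokenize(text):
    # Step 2: split the prompt into tokens (real LLMs use subword units)
    return text.lower().split()

def predict_next(tokens, bigrams):
    # Step 3: pick the most likely next token given the last one
    candidates = bigrams.get(tokens[-1])
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

def generate(prompt, bigrams, max_tokens=5):
    tokens = tokenize(prompt)        # Steps 1-2: input + tokenization
    for _ in range(max_tokens):      # Step 4: repeat until a length limit
        nxt = predict_next(tokens, bigrams)
        if nxt is None:              # ...or the end of the text
            break
        tokens.append(nxt)
    return " ".join(tokens)

# Hypothetical token-frequency table standing in for learned weights
bigrams = {"the": {"cat": 3, "dog": 1}, "cat": {"sat": 2}}
print(generate("the", bigrams))  # the cat sat
```

<p>A real model replaces the lookup table with a neural network, but the generation loop — tokenize, predict, append, repeat — is the same.</p>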
<p>Just as it took humans millennia to evolve unusually large brains relative to the rest of the animal kingdom in order to synthesise speech, LLMs are extremely resource intensive and need a great deal of computational power to run, and therein lies the problem.</p>
<h2 id="heading-weights"><strong>Weights</strong></h2>
<p><em>The predicted token is decided by parameters called weights.</em></p>
<p>These weights are floating point real numbers that signify the importance the model gives to certain parts of the text that it has been trained on. This gives the model the ability to find patterns using these numbers, which it then uses to predict a token.</p>
<p>These floating point values, coupled with the sheer number of arithmetic calculations the machine needs to do per token, make inference extremely resource heavy, demanding highly capable hardware that isn't available to the average consumer.</p>
<p>This is why LLMs are typically served at scale over the web through APIs, with the actual computation done on cloud infrastructure equipped with hardware that can support generative AI.</p>
<p>However, consumers have adopted LLMs into their daily lives extremely rapidly, and demand has only been increasing. Running costs and API prices keep climbing, and the average consumer is now subject to paywalls for unlimited access to generative AI.</p>
<h2 id="heading-quantization"><strong>Quantization</strong></h2>
<p>There is a solution on the horizon, though: quantization. Put very simply, you shrink the model. How do you do it?</p>
<p><em>Take the billions of floating point weights that make the model's calculations so complex, and convert them into integer values using specific algorithms.</em></p>
<p>For example, converting a 16-bit floating point into a 4-bit integer, or a 32-bit floating point into an 8-bit integer. This makes the millions of calculations that the model has to do much less intensive on the hardware. Simply put, 1.5287678 and 1.098764 are much more complex to add than 2 and 1.</p>
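<p>A minimal sketch of this idea in Python, assuming the simplest possible scheme: scale every weight by one factor so the largest magnitude maps onto the signed 8-bit integer range. Real quantizers use more sophisticated, often block-wise, schemes, but the principle is the same.</p>

```python
# Minimal sketch of weight quantization: map float weights to 8-bit
# integers with a single scale factor, then dequantize to see the error.

def quantize(weights, bits=8):
    # Scale so the largest magnitude maps to the biggest representable int
    qmax = 2 ** (bits - 1) - 1            # 127 for signed 8-bit
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [1.5287678, 1.098764, -0.73412, 0.00521]
q, scale = quantize(weights)
restored = dequantize(q, scale)
# Each restored weight is close to the original, but stored as a small int
for w, r in zip(weights, restored):
    print(f"{w:+.5f} -> {r:+.5f}")
```

<p>The integers are cheap to store and add, and multiplying by the shared scale factor recovers an approximation of the original weight; the gap between the two columns is the precision sacrificed.</p>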
<p>The amount of processing power and memory used by quantized models is markedly less than their raw counterparts, making them easier to run on consumer hardware.</p>
<h2 id="heading-quantization-techniques"><strong>Quantization Techniques</strong></h2>
<p>There are numerous techniques you can use to quantize an LLM, but the easiest to use is weight or static quantization as described earlier. Other techniques include:</p>
<ul>
<li><p><strong><em>Dynamic quantization:</em></strong> Dynamically quantizes the weights as needed during inference.</p>
</li>
<li><p><strong><em>Quantization aware training:</em></strong> Simulates the effects of quantization while training the model itself.</p>
</li>
<li><p><strong><em>Clustered Quantization:</em></strong> Clusters similar weights together and replaces each with the centroid of its cluster.</p>
</li>
</ul>
<p>The easiest way to quantize a model is to use llama.cpp, a project originally created to run inference of Llama models in pure C++. The library also includes methods to quantize numerous models into <em>GGUF</em>, a file format that can store and run quantized LLMs.</p>
<p>The library lets you choose the method of weight quantization, which is denoted by an indicator. For example, <em>“Q2_K.gguf”</em> indicates that 2-bit quantization has been used, meaning each weight can take one of 4 (2²) possible values.</p>
<p>The <strong><em>K</em></strong> here denotes that a K-means-style clustering algorithm was used: the weights are grouped into clusters (2² = 4 of them for 2-bit quantization), and the centroid of each cluster is taken as the quantized value of all the weights in that cluster. There are multiple similar formats, the golden rule being that the higher the n-bit quantization, the more possible values a weight can have.</p>
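<p>A toy sketch of that clustering idea, using a small hand-rolled 1-D k-means. The weight values here are made up for illustration; real k-quant schemes operate block-wise over millions of weights.</p>

```python
# Sketch of clustered 2-bit quantization: group weights into 2^2 = 4
# clusters with a tiny k-means, then replace each weight by its centroid.

def kmeans_1d(values, k, iters=20):
    # Initialise centroids evenly across the value range
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        # Assign each value to its nearest centroid
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        # Move each centroid to the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

weights = [0.11, 0.13, 0.52, 0.49, -0.31, -0.28, 0.92, 0.88]
centroids = kmeans_1d(weights, k=4)
# Each weight now needs only a 2-bit index into the 4-entry centroid table
quantized = [min(centroids, key=lambda c: abs(w - c)) for w in weights]
print(quantized)
```

<p>Instead of storing eight floats, the model stores four centroids plus a 2-bit index per weight, which is where the dramatic size reduction comes from.</p>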
<h2 id="heading-risks-amp-tradeoffs"><strong>Risks &amp; Tradeoffs</strong></h2>
<p>And in this, there is a delicate game to play.</p>
<p><em>The higher the bit quantization, the more unique values a weight can have, which increases the load on the hardware and memory. Too low a bit quantization, on the other hand, and the number of unique weights drastically decreases, reducing the quality of inference, since the precision and uniqueness of the weights is what gives inference its quality.</em></p>
<p>The key is to strike a balance and create harmony between performance and load.</p>
<h2 id="heading-exercise"><strong>Exercise</strong></h2>
<p>The following are some inferences from a model I quantized using the llama.cpp framework. The raw model used was <strong>EvolCodeLLama 7B</strong>, based on Codellama 7B and fine-tuned by <strong>Mlabonne</strong> on Huggingface to answer coding questions across all domains using a varied dataset.</p>
<p>The <em>“Q4_K_M”</em> variation was used to quantize this model, i.e., each weight could take one of 16 possible values. Similar weights were clustered together while quantizing, as denoted by the ‘K’, and the residual or error weights were then quantized again by clustering, as denoted by the ‘M’. This approach was chosen in hopes of striking a balance between quality and resource intensiveness.</p>
<p>The model runs completely natively and offline, with all the processing done by the local machine itself. The processing was offloaded onto a consumer-level GPU in the machine, both because it is better optimised for these workloads than a CPU and to take load off the CPU, which was already running LM Studio, the interface used to interact with the model.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXf7aYfyEg27IOAtEA75Ym2Osix0ozuC9B2mfGyrBFTyQEtHHQq0TGK0xgN7hKk0xufXjPVqqbwza4xU7LrR3Ug5tcFJ_pGuC6YWi8d-6dy2teKRcQg0i_mEUwOTBcbo52K3i4zyKsmnHw82Ngsd_lakOX2O?key=njZgVrsqTaUm8n0ffBcXvQ" alt /></p>
<p><strong><em>Image 1: A simple question in Python that prints the factorial of a non-negative integer. The code is extremely simple, but it satisfies all the requirements that were asked, including base cases.</em></strong></p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXeZQHobIRhw8LcD-DlVb-wNVUytOXwhkKL8bhy8VitbUMuUJG_1pZXYff6w35CHPa7xnwmNWH2zSrRPOiDhtLjkQQAnGhc3YSBTz10BdYvRQcHffH-O5sOmSBxAOsApcLHCLYpck6KvwKpSkAtZAsUckig?key=njZgVrsqTaUm8n0ffBcXvQ" alt /></p>
<p><strong><em>Image 2: A much more complex question involving image classification using specific Python libraries. Here the LLM does give an accurate answer in terms of what needs to be done, but there is no elaboration or explanation; in cases like these, the overall quality of the output is satisfactory at best.</em></strong></p>
<p>Here we can clearly see the compromise quantization demands. But the flexibility offered by its many variations means the ratio of performance to load can be tuned to fit machines with widely varying hardware capabilities: from high-end consumer PCs that can run high-bit quantization models with higher quality inference, to mobile phones that can be optimised to give above-average inference even with low-bit quantization.</p>
<p>To see the difference in inference that comes with different levels of bit quantization, I used the small 3B-parameter <strong>phi-2 model</strong> created by Microsoft and quantized by <strong>TheBloke</strong> on Huggingface, and ran it on my local machine with the same settings and configurations. I used a <strong>Q2_K</strong> version with 2-bit quantization and a <strong>Q8_0</strong> version with 8-bit quantization, the two extremes of quantization for this model. The same questions were asked of both variations to draw a subjective comparison based on the inference.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXfuZj_zq0KCY0VkmpA-t-8ez4sdYWVsySoIsvjQxON-ETFzVd-fQiTa5PNTU1HNnuENJypVWBzqEI9DIFEyWmuOJjDJaOd5q1O-jermSnI5Sa7YOmzG_7M_SeueZHFFSjfTg4MYEqC30TErum_qMZD1BuUk?key=njZgVrsqTaUm8n0ffBcXvQ" alt /></p>
<p><strong><em>Image 3: Q2_K phi-2 model</em></strong></p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXceSNowHXcjQ3yCsoXOMUqlQJZlQqA8fVDWHlbPKhovUIfDwZ87NwPThOPYc2a3sZtN8al4osmQbP4cDsOhYQGpWutcgS14JgysSpSTiSctDrh7jCIwI8TLPUmYD7bpUb1OREgeDqmNWdlY0Ki7CyRFLfc?key=njZgVrsqTaUm8n0ffBcXvQ" alt /></p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXdHL-VuHUNWbHBchstNdeKbn-YAAnEyJMLGn05qQtCF2KDnTnVdmdmup4F6tyyqb0zhGZzL59yhJPX_ApUQ0OuaElSfatMcs408n7PokVvQxwyXhvkWbfenJgTuWPLfD2qORGIfbxz44lIcCI2rR28lQg4m?key=njZgVrsqTaUm8n0ffBcXvQ" alt /></p>
<p><strong><em>Image 4 and 5: Q8_0 phi 2 model</em></strong></p>
<h2 id="heading-results"><strong>Results</strong></h2>
<p>If we compare the results:</p>
<ol>
<li><p><strong>Coherence and Clarity:</strong></p>
<ul>
<li><p>The response from the 8-bit quantized model provides a more detailed and structured explanation of ocean acidification, its causes, and its impacts on coral reef ecosystems. It follows a logical progression, starting with the mechanism of ocean acidification and then discussing its effects on corals and symbiotic relationships.</p>
</li>
<li><p>On the other hand, the response from the 2-bit quantized model appears to be less coherent and concise. It jumps directly into discussing the decline in ocean pH levels without providing as much context or explanation of the underlying processes involved.</p>
</li>
</ul>
</li>
<li><p><strong>Accuracy and Detail:</strong></p>
<ul>
<li><p>The 8-bit quantized model response includes specific scientific terminology and references to research studies, such as mentioning the "carbonate system" and citing studies by Hansen et al. (2007) and Diaz et al. (2006). This indicates a higher level of detail and accuracy in the explanation.</p>
</li>
<li><p>In comparison, the 2-bit quantized model response lacks specific scientific terminology and references. It provides a more general overview of ocean acidification without delving into as much detail about the processes involved or supporting evidence.</p>
</li>
</ul>
</li>
<li><p><strong>Amount of Data:</strong></p>
<ul>
<li><p>The response from the 8-bit quantized model appears to contain more information and data, covering various aspects of ocean acidification and its impacts on coral reef ecosystems in depth.</p>
</li>
<li><p>In contrast, the response from the 2-bit quantized model seems to be more concise and less detailed, potentially due to limitations in the amount of data that can be processed by the model.</p>
</li>
</ul>
</li>
<li><p><strong>Factuality:</strong></p>
<ul>
<li>Both responses convey accurate information about ocean acidification and its effects on coral reef ecosystems. However, the response from the 8-bit quantized model provides more specific details and references to scientific studies, which may enhance its credibility.</li>
</ul>
</li>
</ol>
<p><em>The response from the 2-bit quantized model, while accurate in its general statements, lacks the specific scientific references and details that would strengthen its factual accuracy. The 8-bit quantized model, while superior in quality, takes more time to produce its inference and requires more memory and processing power to predict tokens.</em></p>
<p>This demonstration only reinforces the argument for balance.</p>
<h2 id="heading-what-next"><strong>What Next?</strong></h2>
<p>Just as humanity's linguistic abilities set us apart from the animal kingdom, the development of Large Language Models (LLMs) has redefined our relationship with technology. These marvels of artificial intelligence emulate our capacity for speech and understanding, transforming the way we interact with machines.</p>
<p>However, their immense computational demands have created barriers to widespread adoption, with high costs and resource requirements limiting access.</p>
<p>Quantization emerges as a game-changer, shrinking models to make them more accessible without sacrificing their core capabilities. By converting complex floating-point weights into simpler integer values, quantization strikes a balance between performance and resource efficiency. This innovation democratises LLMs, allowing even consumer-grade hardware to harness their power.</p>
<p><em>The true potential of LLMs lies in finding this balance, much like our own evolution in language and communication.</em></p>
<p>By making these models more efficient, we open the door to a future where advanced AI is a tool for everyone, seamlessly integrated into our daily lives. As we refine and optimise these techniques, we pave the way for a new era of human-AI collaboration, unlocking possibilities we have yet to imagine!</p>
]]></content:encoded></item></channel></rss>