Current, 2025

Reading / Watching | Mementos

Saturday, February 15

February

The highlight has to be HackNYU, a 48-hour hackathon I participated in last weekend. We built Neighborly, a hyperlocal community inventory-management app for donating, borrowing, or exchanging food and essentials with those in need. We didn’t win, but I am really proud of what we accomplished in the given timeframe. Here’s the gist of the project on Devpost. We built the skeleton in a few minutes using tools like v0.dev, bolt.new, rollout.site, and presentations.ai. It’s crazy how much AI has changed the landscape of development today.

Tech is a commodity now; anyone with a slight sense of what they are doing can build something. I don’t see the point in hiring a lot of developers to build something that a few can now handle. I get what FAANG companies mean when they lay off a lot of devs. Not that I support it, but it just makes more fiscal sense.

The real challenge is building something that people want. I am currently reading Hooked: How to Build Habit-Forming Products by Nir Eyal, a great book about how to build products people can’t stop using. It’s a must-read for anyone interested in building products that people love.

I just finished watching Paatal Lok - Season 2 this weekend. It’s a gripping tale of crime, corruption, and power, and a must-watch for anyone who enjoys Indian crime dramas. On the other hand, I am also watching a heartfelt, light-hearted show called Ted Lasso, a comedy-drama about an American football coach hired to manage an English football team. The sheer optimism and relentless positivity in the show are infectious. It’s so, so good!

Here’s what else is in store: reads/shows. Looking forward to an exciting month ahead! 🍿 🍺

January

Back in action: January was an amazing, much-needed, magical vacation back home, and now that I am back, it feels good. I met my cousins, family, and relatives, ate a bunch of good Indian food, and spent a lot of time with my friends; I’m still not sure how a whole month went by, or whether I was ready to get back to work. My sister got married, and I still can’t fully process it. She is just six months older than me, and we grew up together, inseparable. I am happy for her, but it feels like a part of me is missing. I can’t wait to finish things here and rush back home.

My LinkedIn feed is filled with how DeepSeek, an open-source LLM, just shook up the LLM space, and did it at a fraction of OpenAI’s cost. The company attracted attention in global AI circles after writing in a paper last month that training DeepSeek-V3 (a GPT-4o alternative) required less than $6 million worth of computing power on Nvidia H800 chips, 20 to 50 times cheaper than the cost of training similar models at OpenAI and Anthropic. This had a significant impact on the AI community, financial markets, and the world at large. On Monday, January 27, 2025, NVIDIA’s stock closed at $118.42, a 17% drop from the previous close. The decline erased nearly $600 billion of NVIDIA’s market capitalization, setting a record for the largest single-day loss in U.S. stock market history. Several DeepSeek models, like R1 (an OpenAI o1 alternative), are open-source. This democratization of AI is a big win for the community, unlocking a new era of AI-powered development with enormous potential to innovate solutions to the most pressing problems.
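Just for fun, here’s a quick back-of-the-envelope check that those numbers hang together. Nothing below is an official figure; the previous close and share count are implied from the reported 17% drop and the ~$600B wipeout.

```python
# Sanity-check of the NVIDIA figures above (illustrative only).
close_jan_27 = 118.42                     # reported close on Jan 27, 2025
drop = 0.17                               # reported single-day drop
prev_close = close_jan_27 / (1 - drop)    # implied previous close (~$142.7)
loss_per_share = prev_close - close_jan_27
implied_shares = 600e9 / loss_per_share   # shares needed for a ~$600B loss
print(f"implied previous close: ${prev_close:.2f}")
print(f"implied shares outstanding: {implied_shares / 1e9:.1f}B")
```

That works out to roughly 24-25 billion shares, which is in the right ballpark for NVIDIA’s actual share count, so the ~$600 billion figure checks out.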

To understand why this is revolutionary, consider the following:

By being extremely close to the hardware and by layering together a handful of distinct, very clever optimizations, DeepSeek was able to train these incredible models using GPUs in a dramatically more efficient way. By some measurements, over ~45x more efficiently than other leading-edge models. DeepSeek claims that the complete cost to train DeepSeek-V3 was just over $5mm. That is absolutely nothing by the standards of OpenAI, Anthropic, etc., which were well into the $100mm+ level for training costs for a single model as early as 2024.

With R1, DeepSeek essentially cracked one of the holy grails of AI: getting models to reason step-by-step without relying on massive supervised datasets. Their DeepSeek-R1-Zero experiment showed something remarkable: using pure reinforcement learning with carefully crafted reward functions, they managed to get models to develop sophisticated reasoning capabilities completely autonomously. This wasn’t just about solving problems— the model organically learned to generate long chains of thought, self-verify its work, and allocate more computation time to harder problems.

These excerpts are from The Short Case for Nvidia Stock, a blog post by Jeffrey Emanuel. Give it a read to understand how DeepSeek was able to achieve this and its impact on the AI community and the world at large. I am excited to see how all this unfolds in the coming months.
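To make the R1-Zero idea a bit more concrete, here is a minimal sketch (my own illustration, not DeepSeek’s code) of the kind of rule-based reward that this style of pure reinforcement learning relies on: the model gets credit for showing an explicit chain of thought and for landing on the correct final answer, with no supervised reasoning traces in the loop. The tag names and reward values are made up for illustration.

```python
import re

# Illustrative rule-based reward in the spirit of DeepSeek-R1-Zero:
# reward format (an explicit <think> block) and accuracy (a correct <answer>),
# nothing else. The RL loop pushes the policy toward completions that score high.
def reasoning_reward(completion: str, ground_truth: str) -> float:
    reward = 0.0
    # Format reward: did the model produce a visible chain of thought?
    if re.search(r"<think>.+?</think>", completion, re.DOTALL):
        reward += 0.5
    # Accuracy reward: does the final answer match the reference?
    match = re.search(r"<answer>(.+?)</answer>", completion, re.DOTALL)
    if match and match.group(1).strip() == ground_truth.strip():
        reward += 1.0
    return reward

sample = "<think>2 + 2 = 4, and doubling gives 8.</think><answer>8</answer>"
print(reasoning_reward(sample, "8"))  # 1.5
```

The neat part is that nothing in the reward tells the model how to reason; longer chains of thought and self-verification emerge only because they make the accuracy reward easier to earn.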

This year, I plan on building fewer projects and spending more time solving LeetCode and preparing for interviews, as I am graduating by the end of this year. Currently, I am mostly finishing up an old torrent project, fixing a couple of bugs, and starting on Ferry, a C compiler for RISC-V written in Rust. It helps me learn more about computers and how they work at a lower level.

Taking a break:

I am going back home this month. If you’re curious, I will be spending more time on my books, PS5, and Netflix over the next few weeks. Red Dead Redemption, Spider-Man 2, Pulp Fiction, The Godfather, and a long list await this new year. Hoping to gobble up enough good food in this one-month break to last the rest of the year.

Here’s to new beginnings, and new adventures! 🥂 I hope you have a great year ahead. Merry Christmas, and a happy New Year, everyone! Promise to be back soon. Keep checking :)


2024: The magic and the storms
2023: The daunting realm of adulthood