Current odyssey
Current, 2025


April

Tuesday, April 1

F-ed up my PC for a day. Ferry is not programmed to handle malformed code, and the IR-to-assembly code generation missed adding an epilogue to one for-loop in a test.c file. Ferry uses Spike to emulate the generated RISC-V assembly, and the run spun into an infinite loop of seg faults and stack overflows. I quit the emulator (Ctrl+C) and went on my way, but Spike kept silently running in the background. A day later my CPU fan started roaring and the laptop was heating up. Initially I ignored it, but upon inspection it became painfully clear why tests matter: Spike was hogging 99.5% of the CPU! And these MacBooks are so performant that I didn’t even notice the CPU had been pegged at 100% for a day, even while I used the machine extensively in the meantime: multiple Chrome windows and tabs, Node.js servers, and whatnot. It was only the fan noise that made me realise something was off.
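For context, a RISC-V function epilogue restores the saved registers, pops the stack frame, and returns; if codegen skips it, execution falls straight past the end of the function into whatever bytes follow. A minimal sketch of the kind of emitter that was missing (the function name and frame layout are illustrative, not Ferry’s actual code):

```rust
/// Sketch of a RISC-V epilogue emitter (illustrative, not Ferry's API).
/// Without this, control falls off the end of the function, which is
/// how you get runaway loops and stack corruption like the ones I hit.
fn emit_epilogue(asm: &mut String, frame_size: i32) {
    // Restore the return address and saved frame pointer...
    asm.push_str(&format!("  lw ra, {}(sp)\n", frame_size - 4));
    asm.push_str(&format!("  lw s0, {}(sp)\n", frame_size - 8));
    // ...then pop the stack frame and return to the caller.
    asm.push_str(&format!("  addi sp, sp, {}\n", frame_size));
    asm.push_str("  ret\n");
}

fn main() {
    let mut asm = String::new();
    emit_epilogue(&mut asm, 16);
    print!("{asm}");
}
```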

Next steps: fix the IR-to-assembly generator and add unit tests for all the modules in the project.
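The tests I have in mind are small, one behaviour per assertion, module by module. A sketch of the shape (`Expr` and `fold_constants` are hypothetical stand-ins, not Ferry’s real types):

```rust
/// Toy IR expression and constant folder, standing in for a real
/// optimiser module; run the test below with `cargo test`.
#[derive(Debug, PartialEq)]
enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
}

/// Collapses Add(Num, Num) subtrees into a single Num.
fn fold_constants(e: Expr) -> Expr {
    match e {
        Expr::Add(l, r) => match (fold_constants(*l), fold_constants(*r)) {
            (Expr::Num(a), Expr::Num(b)) => Expr::Num(a + b),
            (l, r) => Expr::Add(Box::new(l), Box::new(r)),
        },
        other => other,
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn folds_nested_addition() {
        // (1 + 2) + 3 should fold down to 6.
        let e = Expr::Add(
            Box::new(Expr::Add(Box::new(Expr::Num(1)), Box::new(Expr::Num(2)))),
            Box::new(Expr::Num(3)),
        );
        assert_eq!(fold_constants(e), Expr::Num(6));
    }
}
```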

Gemini 2.5 Pro is the new hot thing in the space. Reddit users are reportedly seeing massive performance improvements over Claude 3.7 Sonnet.

On the contrary, my experience has not been quite the same. When I asked it to analyse Ferry, my C compiler project, it successfully analysed the codebase in one shot, identified bugs, and suggested improvements. But the code Gemini 2.5 Pro generated was buggy, with unreadable characters (one time it generated a weird char I hadn’t seen in ASCII before xD) and lots of unused variables, among other issues. The code also didn’t solve the problem.

Currently, Ferry is riddled with bugs, primarily in the IR-to-assembly stage, mainly because I wrote most of the template code myself and asked Claude to fill in the rest.

Other issues are not “bugs” but missing features. I haven’t implemented the full C standard yet (and I don’t plan to), so some features are simply not supported yet. The ultimate goal is to make the compiler capable enough to solve all of LeetCode.

March

Claude finally has a web search feature; I can finally let my ChatGPT+ subscription rest in peace. Well done, soldier. You did well. (Immediately regrets it after looking at the Ghibli-style images everyone is generating) :(

Happy Holi, folks! I had a lovely Holi celebration here this weekend. Currently wrapping up Ferry, a C compiler in Rust. The compiler can currently tokenize, parse, generate an AST, and produce an optimised intermediate representation for a subset of the C language. The next step is compiling the IR to RISC-V assembly.
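For the curious, the pipeline end to end looks roughly like this (a simplified sketch with generic placeholder types, not Ferry’s exact ones):

```rust
/// Simplified compiler pipeline sketch; the types and functions here
/// are generic placeholders for Ferry's real modules.
struct Tokens;            // lexer output
struct Ast;               // parser output: the syntax tree
struct Ir;                // optimised intermediate representation
struct Assembly(String);  // RISC-V text, the next milestone

fn tokenize(_src: &str) -> Tokens { Tokens }
fn parse(_tokens: Tokens) -> Ast { Ast }
fn lower_and_optimise(_ast: Ast) -> Ir { Ir }
fn codegen_riscv(_ir: Ir) -> Assembly {
    Assembly("  li a0, 0\n  ret\n".into()) // stub output
}

/// source -> tokens -> AST -> optimised IR -> RISC-V assembly
fn compile(src: &str) -> Assembly {
    codegen_riscv(lower_and_optimise(parse(tokenize(src))))
}

fn main() {
    print!("{}", compile("int main(void) { return 0; }").0);
}
```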

Compiled languages are generally faster than interpreted ones, which is part of why Microsoft announced a native port of the TypeScript compiler to Go; it was previously written in TypeScript itself.

I am also contributing to LIND, a process-isolation sandbox project at the Secure Systems Lab at NYU, and to some research about Meta’s Orion glasses, particularly focusing on the hand-gesture controls.

The internet has been going crazy since the launch of Claude 3.7 Sonnet, and for the right reasons. Claude 3.7 crushed the benchmarks, a significant improvement over other models. I tried it out, and it’s just insane. For instance, I asked it to review an article I wrote earlier, using the default prompt provided by Anthropic (look for the “polish your prose” suggestion in Claude), and it just nailed it.

Now the question is: do I replace my ChatGPT+ subscription with Claude? Claude doesn’t have a web-search feature yet. But I switched, crossed the line, and made the jump to the dark side. I wish it had web search; it could be a separate entity, like a standalone web-scraper model, or maybe a separate tab in the app, or something. That would solve all my problems and be a game-changer. I used ChatGPT for every single thing, from research and coding to analysing the nutritional values of food. For example, every day I ask ChatGPT for the nutritional values of a food item, say a particular brand of milk or oat milk, or a coffee from Starbucks, and it still does a great job at it. To maintain a log, I ask it to save all the results in a markdown file and return it.

The other solution, for now, is using Google as usual and then copy-pasting the results into Claude. (Google does a better job than the likes of Perplexity at finding relevant sources, BUT for summarising, paraphrasing, and “how to” queries, an LLM-based search engine is better. Refer to this article.)

Is it March already? I remember going back home to India for a break like it was yesterday, and now two months are over? Strange. On this side of the world, I am currently building a compiler from a subset of C to RISC-V assembly. It’s a fun project that helps me understand how compilers work. I also joined the Secure Systems Lab (SSL) at NYU and am involved in a project to isolate processes in a single-process sandbox, which limits the damage caused by bugs or security flaws in an application. I’m also involved in some research about Meta’s Orion glasses, because I found the hand-gesture controls fascinating, and I’m planning to work on such a controller this semester. Excited to see how all of this unfolds in the coming months.
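To give a flavour of the single-process sandbox idea (a toy illustration of in-process mediation in general; nothing here is LIND’s actual design or API): the sandboxed code never calls the OS directly, every privileged operation goes through a dispatcher that checks a policy first, so a buggy or malicious call fails in place instead of compromising the host:

```rust
use std::collections::HashSet;

/// Toy in-process mediation layer (NOT LIND's actual design or API).
/// Sandboxed code asks the Sandbox for file access instead of calling
/// the OS; the dispatcher enforces a per-sandbox allow-list.
struct Sandbox {
    readable: HashSet<String>, // paths this sandbox may read
}

impl Sandbox {
    fn read_file(&self, path: &str) -> Result<String, String> {
        if !self.readable.contains(path) {
            // Deny in place: the call fails, the host survives.
            return Err(format!("EPERM: {path} not allowed"));
        }
        std::fs::read_to_string(path).map_err(|e| e.to_string())
    }
}

fn main() {
    let sb = Sandbox {
        readable: HashSet::from(["/tmp/ok.txt".to_string()]),
    };
    // Anything outside the allow-list is refused by the dispatcher.
    println!("{:?}", sb.read_file("/etc/passwd"));
}
```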

February

The highlight has to be HackNYU, a 48-hour hackathon I participated in last weekend. We built Neighborly, a hyperlocal community inventory management app for donating, borrowing, or exchanging food and essentials with those in need. We didn’t win, but I am proud of what we accomplished in the given timeframe. Here’s the gist of the project on Devpost. We built the skeleton in a few minutes using tools like v0.dev, bolt.new, rollout.site, and presentations.ai. It’s crazy how AI has changed the landscape of today’s development.

Tech is a commodity now; anyone with a slight sense of what they are doing can build something. I don’t see the point in hiring a lot of developers to build something a few can now handle. I get what the FAANG companies mean when they lay off so many devs. Not that I support it, but it just makes more fiscal sense. The real challenge is building something people want.

Finished watching Paatal Lok Season 2 this weekend. It’s a gripping tale of crime, corruption, and power, and a must-watch for anyone who enjoys Indian crime dramas. On the other hand, I’m also watching a heartfelt, light-hearted show called Ted Lasso, a comedy-drama that follows an American football coach hired to manage an English football team. The sheer optimism and relentless positivity of the show are infectious. This show is so good! If you’re more interested in shows about the journey of building yourself, try The Marvelous Mrs. Maisel, a comedy-drama about a housewife who becomes a stand-up comedian in the 1950s. The writing is sharp, the characters are well-developed, and the acting is top-notch. Watching her journey in parallel, Midge’s wins feel personal, and well-deserved.

January

Back in action: January was an amazing, much-needed, magical vacation back home, and now I am back, and it feels good. I met my cousins, family, and relatives, ate a bunch of good Indian food, and spent a lot of time with my friends; I’m still unsure how the month went by, or whether I was ready to get back to work. My sister got married, and I still haven’t fully digested that. She is just six months older than me, and we grew up together, inseparable. I am happy for her, but it feels like a part of me is missing. I can’t wait to finish things here and rush back home.

Currently reading The Short Case for Nvidia Stock by Jeffrey Emanuel, which explains how DeepSeek, an open-source LLM lab, just killed it in the LLM space. Training DeepSeek-V3 reportedly took less than $6 million worth of compute on Nvidia H800 chips, 20 to 50 times cheaper than the cost of training similar models at OpenAI and Anthropic. On Monday, January 27, 2025, NVIDIA’s stock closed at $118.42, a 17% drop from the previous close. The decline erased nearly $600 billion of NVIDIA’s market capitalization, a record for the largest single-day market-value loss by any company in U.S. stock market history. The drop was attributed to the DeepSeek news itself, along with a broader market sell-off, concerns about the company’s growth prospects, and a downgrade from analysts at Morgan Stanley.
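The headline numbers are consistent with each other; here is a back-of-the-envelope check, assuming NVIDIA had roughly 24.5 billion shares outstanding at the time (my assumption, not a figure from the article):

```latex
% previous close implied by the article's own numbers:
\[ \frac{\$118.42}{1 - 0.17} \approx \$142.7 \;\Rightarrow\; \text{drop} \approx \$24.3 \text{ per share} \]
% times ~24.5 billion shares (assumed) gives the market-cap hit:
\[ \$24.3 \times 24.5 \times 10^{9} \approx \$595\text{B} \approx \$600\text{B} \]
```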

Taking a break:

I am going back home this month. If you’re curious, I will be spending the next few weeks on my books, PS5, and Netflix. Red Dead Redemption, Spider-Man 2, Pulp Fiction, The Godfather, and a long list await this new year. Hoping to gobble up enough good food in this one-month break to last the rest of the year.

Here’s to new beginnings and new adventures! 🥂 I hope you have a great year ahead. Merry Christmas and a happy New Year, everyone! I promise to be back soon. Keep checking :)

