(vibe)coder badge claimed
hello world,
i’ve been extremely busy with work and quarter-life realizations lately. had a couple of traumatizing incidents this week while working on my canvas/workflow builds because of poorly designed ux. i wanted to learn how i’d handle ux if i were building an app – and so i did. i had no idea how to build it, but hey, it’s 2025, and we have Cursor. if the tiktok girlies could do it (and they made it look SO COOL), then i should be able to AT LEAST build a simple something. somehow, this turned into a full weekend project. i thought i was just gonna mess around for an hour, but ended up deep in figuring out how to make vibes measurable. turns out, i actually get it now: the logic, the structure, the debugging flow. and it’s kind of addictive. i was learning at a ridiculous pace without even realizing it.
here’s the recap.
the idea: everyone knows that i love music (who doesn’t) and i’m a Spotify power user. ofc my first app would have to be music-related. i started out with a very high-level idea: i wanted to create something similar to Wrapped, except it could be used every day so we don’t have to wait until december when Wrapped drops, and it’d capture different data for 3–6–12 month periods to see how your audio personality evolves over time.
attempt #1:
familiarizing myself with Cursor and Spotify for Developers took like 15 minutes. i had no idea what i was doing so i just talked to the agent, which pretty much guided me through the process: starting an app in the Spotify dev dashboard, getting client keys to enable API calls, choosing a tech stack, and diving into the actual build. Cursor came back with a v0 in roughly 15 minutes. it had a good but generic UI: a summary of your listening activity, the “vibes” you’re giving, your top tracks/artists/genres. i tested it out in a local environment and it was working. except i wasn’t sure i really understood the process – i had no idea how this was built and had zero control over the setup.
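for the curious: the post doesn’t show the code the agent generated, but the flow it describes (client keys → user authorization → API calls) roughly looks like this minimal sketch. the function names and redirect URI are made up; the endpoints, the `user-top-read` scope, and the `Bearer` header are from Spotify’s Web API.

```python
from urllib.parse import urlencode

SPOTIFY_AUTH_URL = "https://accounts.spotify.com/authorize"
SPOTIFY_API_BASE = "https://api.spotify.com/v1"

def build_authorize_url(client_id: str, redirect_uri: str) -> str:
    """step 1 of the oauth flow: send the user to Spotify's consent page."""
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "scope": "user-top-read",  # required for the /me/top/* endpoints
    }
    return f"{SPOTIFY_AUTH_URL}?{urlencode(params)}"

def top_tracks_request(access_token: str, time_range: str = "medium_term", limit: int = 10):
    """build the url and headers for a GET /me/top/tracks call
    (time_range is one of short_term / medium_term / long_term)."""
    query = urlencode({"time_range": time_range, "limit": limit})
    url = f"{SPOTIFY_API_BASE}/me/top/tracks?{query}"
    headers = {"Authorization": f"Bearer {access_token}"}
    return url, headers
```

after the user approves, you trade the returned code for an access token and fire the second function’s request with any HTTP client.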
attempt #2:
i decided to start over. this time, instead of using agent mode, i toggled Cursor to plan mode, asked for a step-by-step planning doc for what i wanted to build, and walked through it so i could have more control over the input and setup. i told the agent i wanted an MVP to start with, but also noted that i wanted it scalable/flexible enough to add new features after getting the MVP up and running. this (new) v0 only had simple data: top tracks, artists, and genres. from there, i pitched a “16 personalities” system that assigns each user a listenality based on what we could get from the Spotify API, and told the agent to build it. the agent built it, but localhost couldn’t render. i spent 2 hours debugging and it kept breaking, so i decided to start again.
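the post doesn’t say how the 16 types were actually assigned, but the classic way to get exactly 16 buckets is four binary axes (2⁴ = 16), MBTI-style. here’s a purely hypothetical sketch using real Spotify audio-feature names (each a 0.0–1.0 float); the axis letters and thresholds are invented.

```python
def listenality(features: dict) -> str:
    """hypothetical 16-type code: four binary axes over Spotify audio
    features, giving 2**4 = 16 possible 'listenalities'."""
    letters = [
        "E" if features["energy"] >= 0.5 else "C",        # energetic vs calm
        "B" if features["valence"] >= 0.5 else "M",       # bright vs moody
        "D" if features["danceability"] >= 0.5 else "H",  # dancey vs heady
        "A" if features["acousticness"] >= 0.5 else "S",  # acoustic vs synthetic
    ]
    return "".join(letters)
```

feed it the average features across a user’s top tracks and you get a 4-letter type like “EBDS”.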
screenshot of Listenality landing page
attempt #3:
started from scratch again, this time with more context and better planning. i used Cursor’s plan mode, along with tools like codeguide.dev and the v0 app, to plan better and got a quick mockup of the front-end UI to flesh out in detail what i wanted to build and the steps involved. with this MVP, i only wanted a simple UI displaying basic raw stats: songs, albums, playlists recap. once the MVP was running in a local environment, i started planning the 16-personalities idea again, but realized in the planning phase that i couldn’t get that data straight from /me/top/tracks (it needs an extra call to /audio-features?ids=...). since this is a nice-to-have, not a must-have, i noted it down to tackle later. i tidied up the UI a little, and was excited to deploy my code to production.
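that extra call is a follow-up step, not a rewrite: you collect the track ids from /me/top/tracks, then hit /audio-features with them. a sketch of the url-building side (Spotify’s batch audio-features endpoint takes comma-separated ids; the 100-per-call cap is from its docs, the function name is mine):

```python
from urllib.parse import urlencode

API_BASE = "https://api.spotify.com/v1"

def audio_features_urls(track_ids: list[str], batch_size: int = 100) -> list[str]:
    """chunk track ids from /me/top/tracks into /audio-features calls,
    since the endpoint accepts at most 100 comma-separated ids per request."""
    urls = []
    for i in range(0, len(track_ids), batch_size):
        batch = track_ids[i:i + batch_size]
        urls.append(f"{API_BASE}/audio-features?{urlencode({'ids': ','.join(batch)})}")
    return urls
```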
deployment: i had never used Render or Vercel before, so seeing how deployment works was new. it was also my first time with GitHub. the first few deployments kept failing, so i went back, debugged, and worked with agents on fixes. finally got it live after a while. i tested it in a web browser and on mobile to see what i didn’t like about it as a user, had some ideas for both UX and UI improvements, and worked with agents to fix them. i also added a “Share your stats” floating button so users could share a card with their top 3 artists, tracks, and genres – i even ✨optimized✨ it for IG story sizing and resolution (a must-do for me; i know this too well as a marketer). everything looked good (for now), so i hit deploy. and went to bed.
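the IG story optimization boils down to one number pair: stories are a 1080×1920 (9:16) canvas. the post doesn’t show how the card was sized, but a common approach is to scale the card uniformly so it fits the canvas without cropping, something like this sketch (function name is mine):

```python
STORY_W, STORY_H = 1080, 1920  # instagram story canvas, 9:16

def fit_card(card_w: int, card_h: int) -> tuple[int, int]:
    """scale a share card to fit inside the story canvas without cropping,
    preserving its aspect ratio (uniform scale on both axes)."""
    scale = min(STORY_W / card_w, STORY_H / card_h)
    return round(card_w * scale), round(card_h * scale)
```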
feature improvement: the next day, i thought about adding a section that uses AI to generate a brief overview of a user’s audio personality based on their top tracks/artists/genres. but which model? although i preferred ChatGPT (let’s be real, Chat handled content generation in these contexts like a champ), i realized the only free model i could use was Gemini (technically, with a usage limit), so i generated a Gemini API key and got to work. two hours later, it worked in preview. i kept running into bugs when trying to deploy to prod – turns out i needed a billing account (even on the free tier) for the Gemini API key to work. enabled billing, redeployed to prod, and it finally worked.
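for reference, the Gemini REST API (`models/…:generateContent`) takes a JSON body of `contents` → `parts` → `text`. the prompt wording below is entirely made up; only the payload shape follows the API. a sketch of assembling that request body from the Spotify stats:

```python
def gemini_payload(artists: list[str], tracks: list[str], genres: list[str]) -> dict:
    """assemble the JSON body for a generateContent call: one user turn
    asking for a short audio-personality blurb (prompt text is invented)."""
    prompt = (
        "write a brief, fun overview of this listener's audio personality.\n"
        f"top artists: {', '.join(artists)}\n"
        f"top tracks: {', '.join(tracks)}\n"
        f"top genres: {', '.join(genres)}"
    )
    return {"contents": [{"parts": [{"text": prompt}]}]}
```

you’d POST this (with the API key) to the generateContent endpoint and pull the blurb out of the response’s candidates.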
API limitation with Spotify dev mode: since the app’s in dev mode with Spotify, i can only add 25 users, and i need their spotify account emails to authorize the API calls. wanna move your app out of dev mode? as of October 2025, Spotify only accepts new corporate partners (no individuals), and to qualify you need a minimum of 250k MAUs. wow. another new thing i learned: the economics of APIs. so i added a few friends to the app on Spotify for Developers, sent the app to them, and asked them to test.
by the end of it, i had something real, live, and mine. the wildest part is that it wasn’t just me poking at AI agents; i actually understood what was happening under the hood. every error, fail, and fix taught me something that stuck. and now that i’ve built something that works, i kinda want to keep going. not for a job, not for a portfolio, but just because it’s fun watching your ideas come alive line by line.