Building this portfolio: WebGL, FFT, and a working terminal
A custom GLSL portrait, a real audio engine driven by AnalyserNode, and a 24-command zsh-style terminal with a virtual filesystem. Here is how the pieces fit, what broke, and what I would do differently.
why is the hero a real terminal?
The hero of this site is a 24-command zsh-style terminal with a virtual filesystem. You can cd lyrics, cat next-maker.md, grep -ri "redis", tree, uptime, uname, ping krishna-adhikari.com.np. Most of it is fake; some of it isn't.
Why?
Because a portfolio for a backend engineer should feel like the thing the engineer does. A terminal is the most honest possible representation of a developer's daily medium. Most portfolios pick a metaphor that's adjacent to the work — gallery, magazine, single-page resume — and the metaphor leaks. A magazine layout makes you think about typography, not what the engineer ships. A gallery makes you think about projects as art objects, which they aren't. A terminal puts you back in the medium.
It's also good interaction design. There's a discoverable surface (the visible commands), an exploratory layer (help reveals more), and easter eggs for people who type whoami or sudo make me a sandwich. Bored visitors leave; curious ones discover open spotify.
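A minimal sketch of how such a command dispatch could look. This is my own illustration, not the site's actual implementation; the registry shape and the `run` helper are hypothetical:

```typescript
// Hypothetical terminal command registry: each command maps a name to a
// handler that takes parsed args and returns the text to print.
type Command = (args: string[]) => string;

const commands: Record<string, Command> = {
  echo: (args) => args.join(" "),
  whoami: () => "guest",
};

// Parse a line of input, look up the command, and fall back to the
// familiar zsh error when nothing matches.
function run(input: string): string {
  const [name, ...args] = input.trim().split(/\s+/);
  const cmd = commands[name];
  return cmd ? cmd(args) : `zsh: command not found: ${name}`;
}
```

The nice property of a flat registry like this is that `help` is just `Object.keys(commands)`, so the discoverable surface stays in sync with what actually runs.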
the audio engine
The audio section is a real Web Audio API engine, not an &lt;audio&gt; tag with custom CSS. It uses an AnalyserNode with a 2048-sample FFT and a smoothing time constant of 0.82, drawing the frequency data onto a circular canvas visualizer that breathes with the bass.
```js
const ctx = new AudioContext();
const analyser = ctx.createAnalyser();
analyser.fftSize = 2048;
analyser.smoothingTimeConstant = 0.82;

const source = ctx.createMediaElementSource(audioEl);
source.connect(analyser);
analyser.connect(ctx.destination);
```

The visualizer reads the frequency bins, splits them into bass (0-6%), mid (6-28%), and treble (28-100%), and feeds those into a particle system rendered with globalCompositeOperation: "lighter" so the colors add together rather than overwriting.
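The band split can live in a pure function, which keeps it testable outside the browser. The 6% / 28% cutoffs mirror the ones described above; the function name and normalization are my own sketch, not the site's code:

```typescript
// Split byte frequency bins (0-255 per bin, as returned by
// analyser.getByteFrequencyData) into normalized bass/mid/treble averages.
function splitBands(bins: Uint8Array): { bass: number; mid: number; treble: number } {
  const avg = (from: number, to: number) => {
    let sum = 0;
    for (let i = from; i < to; i++) sum += bins[i];
    return to > from ? sum / (to - from) / 255 : 0; // normalize to 0..1
  };
  const n = bins.length;
  const bassEnd = Math.floor(n * 0.06);  // lowest 6% of bins
  const midEnd = Math.floor(n * 0.28);   // 6%..28%
  return {
    bass: avg(0, bassEnd),
    mid: avg(bassEnd, midEnd),
    treble: avg(midEnd, n),
  };
}
```

In the render loop you would call `analyser.getByteFrequencyData(bins)` each frame and hand `splitBands(bins)` to the particle system.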
The result is that when you hit play on one of the gazals, you get a real audio-reactive visualization that's tied to the actual signal — not a generic loop. Subtle, but the kind of thing you notice when you sit with it for thirty seconds.
the GLSL portrait
The hero portrait isn't an <img> tag. It's a WebGL fragment shader that samples the source image, applies an ASCII-style halftone with hover-based intensity, and renders to a canvas. The interesting code is here:
```glsl
vec2 coverUv(vec2 uv) {
  vec2 scale = vec2(1.0);
  float canvasAspect = u_canvasRes.x / u_canvasRes.y;
  float imageAspect = u_imageRes.x / u_imageRes.y;
  if (canvasAspect > imageAspect) scale.y = imageAspect / canvasAspect;
  else scale.x = canvasAspect / imageAspect;
  return uv * scale + (1.0 - scale) * 0.5;
}
```

That helper handles object-cover behavior in shader space, the equivalent of CSS object-fit: cover for a custom render. Without it the portrait stretches on non-matching aspect ratios, which looks broken.
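The same math ports directly to TypeScript, which is handy for unit-testing the cover behavior without standing up a WebGL context. This is my own port for illustration, with the uniform names kept as parameters:

```typescript
// TypeScript port of the coverUv shader helper: given a UV coordinate and the
// canvas/image resolutions, return the UV adjusted for object-cover behavior.
function coverUv(
  uv: [number, number],
  canvasRes: [number, number],
  imageRes: [number, number],
): [number, number] {
  const canvasAspect = canvasRes[0] / canvasRes[1];
  const imageAspect = imageRes[0] / imageRes[1];
  let scaleX = 1.0;
  let scaleY = 1.0;
  // Shrink the sampled region along the axis where the image overflows.
  if (canvasAspect > imageAspect) scaleY = imageAspect / canvasAspect;
  else scaleX = canvasAspect / imageAspect;
  // Scale around the center, matching uv * scale + (1 - scale) * 0.5.
  return [
    uv[0] * scaleX + (1 - scaleX) * 0.5,
    uv[1] * scaleY + (1 - scaleY) * 0.5,
  ];
}
```

Two properties worth checking: matching aspects leave UVs untouched, and the center (0.5, 0.5) is a fixed point regardless of aspect.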
the Lenis smooth scroll
The whole site uses Lenis for smooth scrolling. This is the kind of thing that's trivial to wire up and infuriating to wire up correctly, because Lenis lerps the actual scroll position, which means window.scrollY reads stale values mid-transition. Anything that depends on scroll position (sticky headers, scroll-spy, intersection observers) needs to read the interpolated value instead.
The fix in this site: wrap every scroll-aware component with the Lenis useLenis hook so they read the lerp'd value, and replace overflow-x: hidden on html, body with overflow-x: clip. The latter is critical: hidden creates a scroll container that breaks position: sticky further down the tree, which broke the blog detail sidebar for an embarrassing number of hours before I caught it.
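The overflow change is a one-line diff, but the distinction matters enough to spell out. A sketch of the fix, assuming the clipping lives on the root elements:

```css
/* overflow-x: hidden establishes a scroll container, which becomes the
   nearest scrolling ancestor for position: sticky descendants and silently
   breaks them. clip only clips painting, with no scroll container. */
html,
body {
  overflow-x: clip; /* not hidden */
}
```

Same visual result for stray horizontal overflow, but sticky positioning further down the tree keeps working.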
the cyber-physical aesthetic
The visual system is what I'm calling cyber-physical. Mono-uppercase labels, line-numbered section headers (03 // projects, 04 // the.audio.engine), a subtle grid pattern overlay, accent color punching through (mint green in dark, blue in light), and a hairline border-and-typography rhythm that comes from terminals and engineering schematics rather than design Twitter.
It's the opposite of the warm-personable trend in personal sites this year. I don't have anything against warmth, but for an engineer's portfolio I want the visual language to be precise rather than friendly. Terminals are precise.
the music engine
/music is a full music app — not a Spotify embed. There's a persistent player bar at the bottom, an Up Next drawer (slide from right), per-track detail pages with synced lyrics, queue management with shuffle and loop modes, volume control with localStorage persistence, MediaSession integration for lock-screen art and Bluetooth headset controls, and global keyboard shortcuts (space to toggle, ←/→ to scrub, ↑/↓ for volume, S for shuffle, R for loop).
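The shortcut layer reduces to a pure key-to-action map, which keeps the global keydown handler thin. This is a hedged sketch; the action names are mine, and the key bindings are the ones listed above:

```typescript
// Map a KeyboardEvent.key value to a player action, or null if unbound.
type PlayerAction =
  | "toggle" | "seekBack" | "seekForward"
  | "volumeUp" | "volumeDown" | "shuffle" | "loop";

function shortcutFor(key: string): PlayerAction | null {
  switch (key) {
    case " ": return "toggle";            // space: play/pause
    case "ArrowLeft": return "seekBack";  // scrub backward
    case "ArrowRight": return "seekForward";
    case "ArrowUp": return "volumeUp";
    case "ArrowDown": return "volumeDown";
    case "s": case "S": return "shuffle";
    case "r": case "R": return "loop";
    default: return null;
  }
}
```

A pure map like this is also easy to guard: the real handler would bail early when the event target is an input (say, the terminal), so typing never triggers playback.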
Building this required hoisting the audio engine into a React Context provider mounted at the root layout, so a single <audio> element survives route changes. Without that, every navigation would re-mount the player and stop the music — which is exactly what the previous iteration did.
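Stripped of React, the invariant the root provider enforces is just "create the engine once, hand the same instance to every consumer." A framework-free sketch of that singleton shape (names mine, for illustration):

```typescript
// The engine interface is a stand-in; the real one wraps an <audio> element.
interface AudioEngine {
  play(): void;
  pause(): void;
}

let instance: AudioEngine | null = null;

// Return the existing engine, or build it exactly once. In the site this
// lives in a Context provider at the root layout, so route changes re-render
// consumers without ever re-creating the engine.
function getAudioEngine(factory: () => AudioEngine): AudioEngine {
  if (!instance) instance = factory();
  return instance;
}
```

The React Context wrapper adds subscription and re-render plumbing on top, but if this invariant holds, navigation can never stop the music.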
what I'd do differently
Two things:
The terminal could be more useful. Currently it's a parlor trick — a curiosity that wins the first 30 seconds and then becomes a "novelty" the visitor scrolls past. The next iteration would have terminal commands that actually drive the page: open music already navigates, but I want grep -r nestjs to do a fuzzy search of the blog and show results in the terminal output area. That makes it useful, not just charming.
The portfolio is too dense. There are nine sections on the homepage. The signal-to-noise on a portfolio drops fast after section three. Future pruning will probably cut Journey or merge it into a "/about" page.
the pieces, listed
For anyone curious about the stack:
- Framework: Next.js 16, App Router, React 19, strict TypeScript.
- Styling: Tailwind v4 with theme tokens in CSS, Biome for sort.
- Animation: Framer Motion for component transitions, Lenis for scroll, custom canvas + WebGL where needed.
- State: Redux Toolkit + redux-persist for the few persistent bits, React Context for the audio engine.
- Audio: Web Audio API + custom useAudioEngine hook + circular FFT visualizer.
- Markdown: marked + shiki for blog and project rendering.
- Build: shiki at build time for syntax highlighting (no runtime cost), generateStaticParams for blog/project/music detail pages.
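For the detail routes, generateStaticParams is a short function. A hedged sketch of the shape, where getAllSlugs stands in for however the real content loader enumerates entries:

```typescript
// Stand-in for the real content loader (e.g. reading the content directory).
function getAllSlugs(): string[] {
  return ["first-post", "second-post"]; // stand-in data
}

// In app/blog/[slug]/page.tsx this would be exported:
//   export function generateStaticParams() { ... }
// Next.js calls it at build time and prerenders one page per returned param.
function generateStaticParams(): { slug: string }[] {
  return getAllSlugs().map((slug) => ({ slug }));
}
```

The same pattern repeats for the project and music detail routes, which is how everything but the homepage ships as static HTML.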
The whole thing builds in 12 seconds and ships static HTML for everything except the homepage.
the source
Eventually this will be open source. Right now the music section pulls from private audio files I'm not ready to make redistributable, and there's some scaffolding I haven't extracted into the @teispace/next-maker template yet. When both are done, the repo opens up.