MONZÓN.
Software engineer, researcher, NSF CSGrad4US fellow, and musician creating multimodal generative systems for computational sensing & design. I explore the intersection of human motion, sound, and materials through interactive AI tools. I also work full-time as a full-stack engineer at Cisco Meraki. Applying to PhD and Master's programs for Fall 2026.

Designer: 4 websites
Developer: 4 years of experience
Graduate: UMass Amherst '23
Engineer: 19+ projects
Musician: 100+ songs

RESEARCH

MAGE: Motion-to-Audio Generative autoEncoder (in progress)
An ongoing independent research project that uses a variational autoencoder to generate percussive audio from hand gestures. The model builds on RAVE, a state-of-the-art neural audio synthesis architecture, and is trained on high-frame-rate recordings of conga drum performances (played by my dad) to capture fine hand movements. The goal is to make it possible to play any percussive instrument using only hand gestures and a camera.
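The core idea can be sketched as a minimal VAE forward pass, assuming a flattened hand-pose feature vector as input; the layer sizes and random weights below are illustrative stand-ins, not the actual MAGE/RAVE architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

POSE_DIM, LATENT_DIM, AUDIO_DIM = 42, 8, 256  # hypothetical sizes

# Randomly initialized matrices stand in for trained network weights.
W_mu = rng.normal(scale=0.1, size=(POSE_DIM, LATENT_DIM))
W_logvar = rng.normal(scale=0.1, size=(POSE_DIM, LATENT_DIM))
W_dec = rng.normal(scale=0.1, size=(LATENT_DIM, AUDIO_DIM))

def encode(pose):
    """Map a pose vector to the mean and log-variance of a latent Gaussian."""
    return pose @ W_mu, pose @ W_logvar

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps so gradients can flow through mu and sigma."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Map a latent sample to one frame of audio samples in [-1, 1]."""
    return np.tanh(z @ W_dec)

pose = rng.standard_normal(POSE_DIM)  # e.g. flattened hand landmarks
mu, logvar = encode(pose)
audio_frame = decode(reparameterize(mu, logvar))
```

At training time the reparameterization trick is what lets the sampling step stay differentiable; at inference, gestures map straight through encoder and decoder to sound.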

Music-Spectrogram Inpainting (in progress)
In collaboration with researchers at the MIT Department of Mechanical Engineering and Media Lab, I am developing a Stable Diffusion-based pipeline (inspired by Riffusion) for spectrogram inpainting. The system reconstructs masked audio regions to enhance hydrogel air-water extraction by amplifying specific frequency bands. I am currently experimenting with CLIP soft tokens to improve generative guidance and musical coherence.
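The setup before the generative step can be sketched as follows: compute a magnitude spectrogram, zero out a frequency band, and produce a binary mask marking what the model must fill in. The FFT sizes and band edges are illustrative, and the Stable Diffusion inpainting model itself is stubbed out:

```python
import numpy as np

def spectrogram(signal, n_fft=256, hop=128):
    """Naive magnitude STFT with a Hann window; returns (freq, time)."""
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T

def mask_band(spec, lo_bin, hi_bin):
    """Zero a frequency band; return the masked spec and a binary mask."""
    masked = spec.copy()
    mask = np.zeros_like(spec)
    masked[lo_bin:hi_bin, :] = 0.0
    mask[lo_bin:hi_bin, :] = 1.0  # 1 marks pixels the model must inpaint
    return masked, mask

sr = 8000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t)  # 1-second test tone
spec = spectrogram(audio)
masked_spec, mask = mask_band(spec, lo_bin=10, hi_bin=20)
# masked_spec and mask would be handed to the inpainting model here.
```

The inpainting model then treats the spectrogram as an image, repainting only the masked band, which is what allows targeted amplification of specific frequency regions.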

SwimSense: Computational Sensing for Swimming Analysis (in progress)
SwimSense is a wireless, wearable, waterproof device equipped with IMU, PPG, and temperature sensors for real-time health sensing and reporting in aquatic environments. Originally a research collaboration with the MIT Media Lab and Department of Mechanical Engineering, SwimSense is now a project I am developing independently to improve comfort, battery life, and data quality. The device aims to provide swimmers and coaches with detailed insights into performance and physiological metrics during training.

Honors Thesis
As a music producer, I searched for ways to design the right timbre and texture. That curiosity led me to research Generative Adversarial Networks for timbre synthesis in my undergraduate honors thesis at UMass Amherst. With limited resources, I built a multiclass classification model for instrument identification as a foundation for future GAN-based synthesis. The long-term vision was to generate novel sounds directly from text descriptions like “soothing piano with warm overtones.”

PROJECTS

NYT Large Language Model (LLM)
I designed a local Ollama-based personal assistant that summarized and read New York Times articles using real-time TTS. The system refreshed its dataset daily and answered both current and historical questions—stored and processed locally for privacy.
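The summarization call can be sketched against Ollama's local REST API, which exposes POST /api/generate on localhost:11434 by default; the prompt wording and model name below are assumptions, not the project's actual configuration:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(article_text, model="llama3"):
    """Assemble a non-streaming generate request for one article."""
    prompt = ("Summarize this New York Times article in three sentences:\n\n"
              + article_text)
    return {"model": model, "prompt": prompt, "stream": False}

def summarize(article_text):
    """POST the payload to the local server and return the response text."""
    data = json.dumps(build_payload(article_text)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# summarize(...) needs a running Ollama instance; only the payload
# builder is exercised here.
payload = build_payload("Example article body.")
```

Because the model runs locally, the article text and the generated summaries never leave the machine, which is what made the privacy guarantee possible.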

Raspberry Pi Robot
Originally meant to help me “clean my room remotely,” this three-wheel robot evolved into a PS4-controlled rover with a camera, ultrasonic sensor, and claw arm. Built with Raspberry Pi and OpenCV, it detected and grabbed objects autonomously or via controller.

Mushroom Environment Controller
I designed a custom ESP32-based environmental controller for mushroom cultivation. Built on a custom KiCAD PCB, it controlled humidity, UV lighting, PC fans, and a Peltier element for heating/cooling. It also included an OLED interface and internet connectivity for monitoring.
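The humidity logic can be sketched as a hysteresis loop: switch the humidifier on below a low threshold and off above a high one, so the relay doesn't chatter around a single setpoint. The thresholds are illustrative, not the values in the actual ESP32 firmware:

```python
LOW, HIGH = 85.0, 92.0  # percent relative humidity (illustrative)

def humidifier_state(humidity, currently_on):
    """Return whether the humidifier should run this control cycle."""
    if humidity < LOW:
        return True
    if humidity > HIGH:
        return False
    return currently_on  # inside the band: keep the previous state

state = False
readings = [80.0, 88.0, 93.0, 90.0, 84.0]
history = []
for h in readings:
    state = humidifier_state(h, state)
    history.append(state)
```

Notice that 88% and 90% produce different outputs depending on the direction of approach; that memory inside the dead band is what protects the relay.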

Raspberry Pi Quadcopter
I built a quadcopter powered by a Raspberry Pi with an onboard camera, GPS, and Bluetooth/WiFi control. The project explored PID flight control and quadcopter dynamics—the Raspberry Pi wasn't ideal for flight control, but the lessons were invaluable.
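The flight-control idea can be sketched as a discrete PID step like the one used to stabilize each axis; the gains and the toy first-order plant below are illustrative, not tuned flight values:

```python
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured, dt):
        """Return the control output for one timestep of length dt."""
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# Drive a toy "roll angle" from 10 degrees toward level (0 degrees),
# treating the plant as a pure integrator of the control output.
pid = PID(kp=1.2, ki=0.5, kd=0.05)
angle, dt = 10.0, 0.01
for _ in range(500):
    angle += pid.update(setpoint=0.0, measured=angle, dt=dt) * dt
```

On real hardware this loop has to run at a high, consistent rate; the Raspberry Pi's non-real-time OS makes that timing jittery, which is a large part of why it wasn't ideal for flight control.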

VibeQ - Spotify Group DJ App
I developed VibeQ, an app that let multiple users vote on the next Spotify song in a shared queue via QR code. Built with React Native (Expo) and Firebase, it won two runner-up awards at a UMass Amherst hackathon.
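The queue selection can be sketched as a simple tally, assuming each vote is a user-to-track mapping and the highest-voted track plays next; the in-memory dict here stands in for the real app's Firebase sync:

```python
from collections import Counter

def next_track(votes):
    """Pick the track with the most votes; ties break alphabetically."""
    tally = Counter(votes.values())
    return min(tally, key=lambda track: (-tally[track], track))

# Each user gets one vote for the upcoming song:
votes = {"ana": "trackA", "ben": "trackB", "cai": "trackB"}
winner = next_track(votes)
```

Keying votes by user, rather than appending them to a list, is what enforces one vote per person and lets a user change their mind before the song flips.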

Newtonian Physics Simulator
I built a 2D physics simulator in Python using Pygame, featuring gravitational bodies, instantaneous velocities, and dynamic interactions. Users could create, move, and observe objects as they interacted under Newtonian forces.
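The update loop can be sketched with semi-implicit Euler integration for bodies under Newtonian gravity, one scheme such a simulator could use; G, the masses, and the circular-orbit setup are toy values for illustration:

```python
import numpy as np

G = 1.0  # toy gravitational constant

def gravity(pos_a, pos_b, mass_b):
    """Acceleration on body a from body b: G * m_b * r / |r|^3."""
    r = pos_b - pos_a
    dist = np.linalg.norm(r)
    return G * mass_b * r / dist**3

def step(pos, vel, masses, dt):
    """Advance all bodies: update velocities first, then positions."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i != j:
                acc[i] += gravity(pos[i], pos[j], masses[j])
    vel = vel + acc * dt
    return pos + vel * dt, vel

# A light body circling a heavy one at radius 1, with circular speed
# sqrt(G * M / r) = 1:
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
vel = np.array([[0.0, 0.0], [0.0, 1.0]])
masses = np.array([1.0, 1e-6])
for _ in range(1000):
    pos, vel = step(pos, vel, masses, dt=0.001)
```

Updating velocity before position (rather than plain Euler) keeps orbits from spiraling outward over long runs, which matters when users leave the simulation running.
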

Beamshyft - Lower Construction Costs for Developers
I built the Beamshyft web platform to connect developers with manufacturers for cost-efficient interior materials. Developed with Next.js, it enabled users to request materials, manage deliveries, and source stylish furnishings directly from producers.

3D Printer Cooling System
To improve print quality on my budget 3D printer, I designed a custom fan mount and nozzle system in Fusion 360. The upgrade significantly improved cooling performance, overhangs, and overall print precision.

FeedKevin! - Pet Food Timer
To avoid double-feeding our cat Kevin, I soldered a WiFi-enabled smart button that logged and displayed feeding times using an ESP8266, SSD1306 OLED, and a single button. The device fetched NTP time and kept everyone honest—including Kevin.

Conway's Game of Life (C++)
I implemented Conway's Game of Life in C++ and color-graded cells based on the rule they followed, exploring how simple rules produced complex patterns.
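The update rule can be sketched in a few lines for a small toroidal grid (the per-rule cell coloring from the project is omitted here, and Python stands in for the original C++):

```python
def neighbors(grid, r, c):
    """Count live neighbors with wrap-around edges."""
    rows, cols = len(grid), len(grid[0])
    return sum(grid[(r + dr) % rows][(c + dc) % cols]
               for dr in (-1, 0, 1) for dc in (-1, 0, 1)
               if (dr, dc) != (0, 0))

def step(grid):
    """Apply B3/S23: birth on exactly 3 neighbors, survival on 2 or 3."""
    new = []
    for r, row in enumerate(grid):
        new_row = []
        for c, cell in enumerate(row):
            n = neighbors(grid, r, c)
            new_row.append(1 if n == 3 or (cell and n == 2) else 0)
        new.append(new_row)
    return new

# A blinker oscillates between horizontal and vertical with period 2:
blinker = [[0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 1, 1, 1, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0]]
```

The whole system is just these two functions applied repeatedly, yet it produces gliders, oscillators, and other structures no single cell's rule hints at.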

Blockbreaker
I created a modern blockbreaker game using JavaScript and an HTML5 canvas. It featured multiple levels, progressive difficulty, and crisp collision physics.

MyHS - High School Student Portal
In high school, I wrote a Swift-based mobile app that aggregated grades, homework, and schedules. The IT department mistook it for hacking and issued a cease-and-desist—an early lesson in innovation (and bureaucracy).

Anybody Home?
Because living in an attic made it hard to tell who was home, I created an ESP8266-based device that detected nearby phones by MAC address and sent notifications. I eventually decommissioned it—effective, but a little too Black Mirror.

Breaker Panel Monitor
To prevent frozen pipes during winter, I built a breaker panel monitor that detected circuit trips and sent notifications via IFTTT. It was a reliable, low-cost safeguard—and far cheaper than calling an electrician.

Curdle - Wordle for Cheeses
To address my cheese-based insecurity, I created Curdle—a Wordle-style game featuring real five-letter cheese names (yes, there were 42). Built with JavaScript, it refreshed daily and offered a flavorful test of dairy vocabulary.