GENERATIVE MACHINE does not use pre-rendered assets. Every pixel is calculated in real time from mathematical functions.
The core motion uses derivatives of Perlin noise to create organic, flow-field-like movement that mimics fluid dynamics yet remains purely computational.
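As a sketch of the idea: rotating the gradient of a noise field by 90° yields a divergence-free velocity field, which is what gives the motion its fluid feel. The `noise2D` function below is an assumption (any smooth 2D noise works, e.g. the `simplex-noise` npm package), as are the scaling constants:

```ts
// A minimal curl-noise flow field. `noise2D` is assumed to be any
// smooth 2D noise function returning values in [-1, 1].
type Noise2D = (x: number, y: number) => number;

function curlVelocity(noise2D: Noise2D, x: number, y: number, eps = 0.0001): [number, number] {
  // Finite-difference partial derivatives of the noise field.
  const dndx = (noise2D(x + eps, y) - noise2D(x - eps, y)) / (2 * eps);
  const dndy = (noise2D(x, y + eps) - noise2D(x, y - eps)) / (2 * eps);
  // Rotating the gradient 90° gives a divergence-free ("fluid-like") field.
  return [dndy, -dndx];
}

// Advect one particle along the field each frame.
function step(p: { x: number; y: number }, noise2D: Noise2D, dt: number): void {
  const [vx, vy] = curlVelocity(noise2D, p.x * 0.005, p.y * 0.005); // map to noise space
  p.x += vx * dt;
  p.y += vy * dt;
}
```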
The Fourier Transform converts time-domain signals into frequency-domain data.
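In the browser this is typically handled by Web Audio's `AnalyserNode`, which runs the FFT internally and exposes a magnitude per frequency bin. A minimal sketch; the audio source wiring is assumed:

```ts
// Reading frequency-domain data each frame via Web Audio's built-in FFT.
const audioCtx = new AudioContext();
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 2048;                    // yields 1024 frequency bins
const bins = new Uint8Array(analyser.frequencyBinCount);

// The source is assumed, e.g. an <audio> element:
//   const source = audioCtx.createMediaElementSource(audioEl);
//   source.connect(analyser);

function readSpectrum(): Uint8Array {
  analyser.getByteFrequencyData(bins);      // magnitudes, 0–255 per bin
  return bins;
}
```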
This mapping allows for "synesthetic" translation: bass frequencies drive physical scale, while high frequencies drive rotation and color intensity.
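A hedged sketch of such a mapping; the bin ranges and output scales below are illustrative assumptions, not the project's exact values:

```ts
// Illustrative mapping from frequency bins to visual parameters.
function mapSpectrum(bins: Uint8Array) {
  const avg = (from: number, to: number) => {
    let sum = 0;
    for (let i = from; i < to; i++) sum += bins[i];
    return sum / ((to - from) * 255);       // normalize to 0..1
  };
  const bass = avg(0, 16);                  // lowest bins ≈ bass energy
  const highs = avg(Math.floor(bins.length / 2), bins.length);
  return {
    scale: 1 + bass * 2,                    // bass drives physical scale
    rotation: highs * Math.PI * 2,          // highs drive rotation...
    colorIntensity: highs,                  // ...and color intensity
  };
}
```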
We define "Machine Emotion" not as biological sentiment, but as the unpredictable variance in high-dimensional vector space.
The system operates on a "Human-in-the-Loop" architecture: the AI proposes variations, and the human curator defines the constraints (the aesthetics of refusal).
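The loop might be shaped roughly like the following sketch; every name here is hypothetical:

```ts
// A hypothetical shape for the Human-in-the-Loop cycle: the machine
// proposes, the curator's constraints filter what survives.
interface Variation { seed: number; params: Record<string, number>; }
type Constraint = (v: Variation) => boolean;  // the curator's "refusal"

function* curate(propose: () => Variation, constraints: Constraint[]): Generator<Variation> {
  while (true) {
    const candidate = propose();
    // A variation survives only if no constraint refuses it.
    if (constraints.every((accept) => accept(candidate))) yield candidate;
  }
}

// Usage: pull accepted variations one at a time.
//   const next = curate(proposeFn, [noPureBlack, maxDensity]).next().value;
```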
The codebase follows a strict modular separation between Logic (Calculation) and View (Rendering).
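In TypeScript, one way this separation could look (the interface and state names are illustrative):

```ts
// One way to express the Logic/View split: Logic mutates state,
// View only reads it and touches the canvas.
interface State {
  particles: { x: number; y: number }[];
  time: number;
}

interface Logic {
  update(state: State, dt: number): void;   // pure calculation, no DOM access
}

interface View {
  draw(state: State, ctx: CanvasRenderingContext2D): void; // rendering only
}
```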
We deliberately reject heavy frameworks (React/Vue) for the core visual engine to preserve the maximum frame budget for WebGL/Canvas rendering. Working close to the DOM API keeps the "rawness" of the code in step with the rawness of the aesthetic.
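A minimal framework-free loop in that spirit, assuming the Logic/View interfaces sketched above:

```ts
// A framework-free render loop: direct DOM access, nothing between
// the code and the canvas.
function run(logic: Logic, view: View, state: State): void {
  const canvas = document.querySelector("canvas")!;
  const ctx = canvas.getContext("2d")!;
  let last = performance.now();

  const frame = (now: number) => {
    const dt = (now - last) / 1000;         // seconds since last frame
    last = now;
    logic.update(state, dt);                // calculate...
    view.draw(state, ctx);                  // ...then render
    requestAnimationFrame(frame);
  };
  requestAnimationFrame(frame);
}
```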
The feedback loop created by the Sound Machine is designed to induce a "sensory crossover": by visualizing sound in real time with near-zero latency (within a single frame), the brain begins to "see" audio.
Users are given five axes of control (the Radar Chart), but the underlying generation retains inherent randomness. This balance keeps the user in a state of "Flow": enough control to feel agency, enough chaos to feel surprise.
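A hypothetical sketch of that balance: each effective parameter interpolates between the user's setting and a random draw, with a chaos weight deciding how much surprise leaks in. The axis names are illustrative assumptions:

```ts
// Hypothetical blend of the five user-controlled axes with inherent
// randomness. Each axis is a 0..1 value from the radar chart.
interface Axes { energy: number; density: number; hue: number; speed: number; chaos: number; }

function parameter(userValue: number, chaos: number): number {
  // Lerp between pure user intent and a random draw: agency vs. surprise.
  const random = Math.random();
  return userValue * (1 - chaos) + random * chaos;
}

// e.g. const effectiveSpeed = parameter(axes.speed, axes.chaos * 0.3);
```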