Vizul: My Audio Visualization Project

Sun Sep 3 16:15:31 CEST 2017

Audio visualisation is really fun to play around with. I started this project around the beginning of 2015 and have kept working on it whenever I get bored or see something that would be cool to try out.

I have spent the last couple of months refactoring the UI to use Vue.js, which works really well and makes it easier to organise the code as single components. Vizul, earlier known as mzViz, is now pretty mature with a lot of features.


I started looking at the fancy JavaScript libraries like React. I'm not sure why I went with Vue.js; probably because I had used it in another project, where I tried Vue as a template engine in Node.js. I really like the concept of single file components in Vue and being able to write CSS with PostCSS, Autoprefixer or SASS pre-processing directly in the same file. Vue also has the nice feature of scoping the CSS automatically with hashes. The drawback is that files can get very large, but that just means something can be refactored further and split into new components.

That, in the long run, makes it easier to compose and repurpose parts later on.
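As a rough illustration of the single file component idea (the component, its name and its styles are invented for this example, not taken from Vizul), template, logic and scoped CSS live together in one file:

```vue
<!-- Hypothetical example component; names are made up for illustration. -->
<template>
  <div class="track-search">
    <input v-model="query" placeholder="Search tracks" />
  </div>
</template>

<script>
export default {
  name: 'TrackSearch',
  data() {
    return { query: '' };
  },
};
</script>

<!-- "scoped" makes Vue rewrite these selectors with a unique attribute hash -->
<style scoped>
.track-search input {
  width: 100%;
}
</style>
```

The scoped style block is what gives the automatic CSS scoping mentioned above: Vue adds a generated data attribute to the component's elements and to the selectors, so the rules can't leak into other components.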

Web Audio API

I remember it first started with a tutorial which had a cool SVG logo that got morphed/squished when the beats were pumping. After that I started experimenting more with the audio API and created the "noodle strings", which move depending on intensity and the frequencies around them.

From there I kept adding more elements and features, like HTML5 drag and drop and a simple SoundCloud track link input, which in later releases of the Vue.js version was improved with a search input with fast results. That improved the user experience a lot. I have also implemented microphone support, so you can use loopback devices or software drivers to stream audio input from Spotify, for example.
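A minimal sketch of how microphone (or loopback) input can be wired into an analyser; this is browser-only code, and the helper name is my own, not Vizul's:

```javascript
// Hypothetical sketch: connect microphone/loopback input to an AnalyserNode.
// Browser-only: needs the Web Audio API and getUserMedia permission.
async function connectMicrophone(audioCtx) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const source = audioCtx.createMediaStreamSource(stream);
  const analyser = audioCtx.createAnalyser();
  source.connect(analyser); // the analyser now sees the live input
  return analyser;
}
```

With a loopback driver selected as the system input device, whatever Spotify plays shows up on this stream just like a real microphone.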


The render algorithm

The rendering algorithm reads an ArrayBuffer with all the available frequencies from the audio analyser node, which is then used inside the render frame method where all the fun happens. The bass is represented by the circle in the middle, whose size and colour are influenced by nearby frequencies. The midrange is represented by the "noodles", which change position in sync with the bass circle and grow outwards when the bass is more intense.
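In practice that read is a byte array filled once per frame; a minimal sketch (the helper name is my own, and `analyser` stands for a Web Audio `AnalyserNode`):

```javascript
// Hypothetical helper: copy the current frequency spectrum out of an analyser.
// In the browser `analyser` is an AnalyserNode; each bin is an intensity 0-255.
function readSpectrum(analyser) {
  const bins = new Uint8Array(analyser.frequencyBinCount);
  analyser.getByteFrequencyData(bins); // fills the array in place
  return bins;
}
```

Inside a `requestAnimationFrame` loop you call this once per frame and hand the bins to the drawing code.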

When the "noodles" reach the outer border of the bass circle, they start to grow out as sun rays, which represent the upper midrange and finally the treble. When the treble reaches higher levels, the bars get slightly thicker. The bars also move in relation to other nearby frequencies. There are a couple of special features in there too: the bars can grow into boxes if certain conditions are right, which creates a cool flickering star effect.
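The mapping from spectrum to visuals can be sketched as splitting the bins into rough bands; the cut-off points below are made-up assumptions for illustration, not the values Vizul actually uses:

```javascript
// Hypothetical sketch: average the spectrum into bass/mid/treble levels.
// The band boundaries (10% / 50% of the bins) are invented for illustration.
function bandLevels(bins) {
  const avg = (from, to) => {
    let sum = 0;
    for (let i = from; i < to; i++) sum += bins[i];
    return sum / (to - from);
  };
  const n = bins.length;
  return {
    bass: avg(0, Math.floor(n * 0.1)),                   // drives the centre circle
    mid: avg(Math.floor(n * 0.1), Math.floor(n * 0.5)),  // drives the noodles
    treble: avg(Math.floor(n * 0.5), n),                 // drives the sun-ray bars
  };
}
```

Each band's average then feeds one visual parameter per frame: circle radius, noodle position, bar thickness.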

Using GitLab CI

After a while I wanted to try out GitLab CI, which can be self-hosted with Docker and have private runners for pipeline jobs. The build process is now fully automated, and deployment to a testing site is done automatically. The final step needs confirmation from the GitLab jobs panel before it executes the production jobs. It's really nice to have automatic builds and automatic deployment on every commit, even for smaller projects like this.
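A minimal sketch of what such a pipeline can look like (job names and scripts are invented for this example; the `when: manual` line is what makes the production step wait for confirmation in the jobs panel):

```yaml
# Hypothetical .gitlab-ci.yml sketch; job names and scripts are made up.
stages:
  - build
  - deploy-test
  - deploy-prod

build:
  stage: build
  script:
    - npm ci
    - npm run build

deploy-test:
  stage: deploy-test
  script:
    - ./deploy.sh testing     # automatic deploy to the testing site

deploy-prod:
  stage: deploy-prod
  script:
    - ./deploy.sh production
  when: manual                # requires confirmation from the jobs panel
```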


Current Feature list

  • Drag and drop interface for mp3/wav/FLAC (FLAC on newer browsers, 2017+)
  • Microphone support
  • SoundCloud support with search, filtered to only freely streamable content
  • Tracking bar (implemented/shoehorned)
  • Visualization renderer
  • Player logic
  • Audio equalizer (prototype/beta)
  • GitLab CI build/deployment tooling
