Web Audio API Tutorial: Build Interactive Audio Applications with JavaScript

Learn how to create synthesizers, audio visualizers, and sound effects using the Web Audio API. A hands-on guide for developers who want to bring audio to their web applications with practical code examples.

As someone who studied music production at Berklee and transitioned into software engineering, I find the Web Audio API to be the perfect intersection of my two passions. After building audio features for various projects and working on audio plugin development, I'm excited to share how you can bring professional audio capabilities to the browser.

This tutorial covers everything from basic oscillators to building complete audio applications.

Understanding the Audio Context

The Web Audio API is built around the concept of an Audio Context - think of it as a virtual audio workspace where all your sound processing happens.

// Create an audio context
const audioContext = new (window.AudioContext || window.webkitAudioContext)();

// Important: Browsers require user interaction before audio can play
document.getElementById('startButton')?.addEventListener('click', () => {
  if (audioContext.state === 'suspended') {
    audioContext.resume();
  }
});

The audio context manages all audio operations and serves as the entry point for creating audio nodes.

Building Your First Oscillator

An oscillator generates a periodic waveform - the foundation of synthesized sound.

function playTone(frequency: number, duration: number = 1) {
  const audioContext = new AudioContext();

  // Create an oscillator node
  const oscillator = audioContext.createOscillator();

  // Create a gain node for volume control
  const gainNode = audioContext.createGain();

  // Configure the oscillator
  oscillator.type = 'sine'; // sine, square, sawtooth, triangle
  oscillator.frequency.setValueAtTime(frequency, audioContext.currentTime);

  // Connect nodes: oscillator -> gain -> destination (speakers)
  oscillator.connect(gainNode);
  gainNode.connect(audioContext.destination);

  // Set initial volume
  gainNode.gain.setValueAtTime(0.5, audioContext.currentTime);

  // Start the oscillator
  oscillator.start();

  // Fade out and stop
  gainNode.gain.exponentialRampToValueAtTime(0.001, audioContext.currentTime + duration);
  oscillator.stop(audioContext.currentTime + duration);
}

// Play a 440Hz tone (A4 note)
playTone(440, 2);

Creating a Simple Synthesizer

Let's build a keyboard-controlled synthesizer:

class Synthesizer {
  private audioContext: AudioContext;
  private masterGain: GainNode;
  private activeNotes: Map<string, { oscillator: OscillatorNode; envelope: GainNode }> = new Map();

  constructor() {
    this.audioContext = new AudioContext();
    this.masterGain = this.audioContext.createGain();
    this.masterGain.gain.value = 0.3;
    this.masterGain.connect(this.audioContext.destination);
  }

  // Musical note frequencies
  private noteFrequencies: { [key: string]: number } = {
    'C4': 261.63, 'C#4': 277.18, 'D4': 293.66, 'D#4': 311.13,
    'E4': 329.63, 'F4': 349.23, 'F#4': 369.99, 'G4': 392.00,
    'G#4': 415.30, 'A4': 440.00, 'A#4': 466.16, 'B4': 493.88,
    'C5': 523.25,
  };

  // Keyboard to note mapping
  private keyToNote: { [key: string]: string } = {
    'a': 'C4', 'w': 'C#4', 's': 'D4', 'e': 'D#4',
    'd': 'E4', 'f': 'F4', 't': 'F#4', 'g': 'G4',
    'y': 'G#4', 'h': 'A4', 'u': 'A#4', 'j': 'B4',
    'k': 'C5',
  };

  playNote(note: string): void {
    if (this.activeNotes.has(note)) return;

    const frequency = this.noteFrequencies[note];
    if (!frequency) return;

    // Create oscillator with envelope
    const oscillator = this.audioContext.createOscillator();
    const envelope = this.audioContext.createGain();

    oscillator.type = 'sawtooth';
    oscillator.frequency.setValueAtTime(frequency, this.audioContext.currentTime);

    // ADSR envelope (Attack, Decay, Sustain, Release)
    const now = this.audioContext.currentTime;
    envelope.gain.setValueAtTime(0, now);
    envelope.gain.linearRampToValueAtTime(1, now + 0.01); // Attack
    envelope.gain.linearRampToValueAtTime(0.7, now + 0.1); // Decay to sustain

    oscillator.connect(envelope);
    envelope.connect(this.masterGain);

    oscillator.start();
    this.activeNotes.set(note, { oscillator, envelope });
  }

  stopNote(note: string): void {
    const active = this.activeNotes.get(note);
    if (!active) return;

    const { oscillator, envelope } = active;
    const now = this.audioContext.currentTime;

    // Quick release to avoid clicks: ramp this note's envelope down before stopping
    envelope.gain.cancelScheduledValues(now);
    envelope.gain.setValueAtTime(Math.max(envelope.gain.value, 0.001), now);
    envelope.gain.exponentialRampToValueAtTime(0.001, now + 0.1);

    oscillator.stop(now + 0.1);
    this.activeNotes.delete(note);
  }

  handleKeyDown(event: KeyboardEvent): void {
    const note = this.keyToNote[event.key.toLowerCase()];
    if (note) this.playNote(note);
  }

  handleKeyUp(event: KeyboardEvent): void {
    const note = this.keyToNote[event.key.toLowerCase()];
    if (note) this.stopNote(note);
  }
}

// Usage
const synth = new Synthesizer();
document.addEventListener('keydown', (e) => synth.handleKeyDown(e));
document.addEventListener('keyup', (e) => synth.handleKeyUp(e));
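
One caveat: the context created in the constructor starts out suspended until a user gesture, as covered earlier. A sketch of one way to handle it, assuming you add a small resume() method to the class that simply calls this.audioContext.resume():

// Hypothetical addition inside Synthesizer:
//   resume(): Promise<void> { return this.audioContext.resume(); }

document.addEventListener(
  'keydown',
  () => { synth.resume(); },
  { once: true } // only needed for the first interaction
);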

Building an Audio Visualizer

Visualizers transform audio data into graphics. The AnalyserNode is key:

class AudioVisualizer {
  private audioContext: AudioContext;
  private analyser: AnalyserNode;
  private canvas: HTMLCanvasElement;
  private canvasCtx: CanvasRenderingContext2D;
  private dataArray: Uint8Array;
  private animationId: number | null = null;

  constructor(canvas: HTMLCanvasElement) {
    this.audioContext = new AudioContext();
    this.analyser = this.audioContext.createAnalyser();
    this.canvas = canvas;
    this.canvasCtx = canvas.getContext('2d')!;

    // Configure analyser
    this.analyser.fftSize = 2048;
    const bufferLength = this.analyser.frequencyBinCount;
    this.dataArray = new Uint8Array(bufferLength);
  }

  async connectMicrophone(): Promise<void> {
    try {
      const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
      const source = this.audioContext.createMediaStreamSource(stream);
      source.connect(this.analyser);
      this.draw();
    } catch (error) {
      console.error('Microphone access denied:', error);
    }
  }

  connectAudioElement(audioElement: HTMLAudioElement): void {
    const source = this.audioContext.createMediaElementSource(audioElement);
    source.connect(this.analyser);
    this.analyser.connect(this.audioContext.destination);
    this.draw();
  }

  private draw(): void {
    this.animationId = requestAnimationFrame(() => this.draw());

    // Get frequency data
    this.analyser.getByteFrequencyData(this.dataArray);

    // Clear canvas
    this.canvasCtx.fillStyle = '#0a0a0a';
    this.canvasCtx.fillRect(0, 0, this.canvas.width, this.canvas.height);

    // Draw bars
    const barWidth = (this.canvas.width / this.dataArray.length) * 2.5;
    let x = 0;

    for (let i = 0; i < this.dataArray.length; i++) {
      const barHeight = (this.dataArray[i] / 255) * this.canvas.height;

      // Create gradient based on frequency intensity
      const hue = (i / this.dataArray.length) * 360;
      this.canvasCtx.fillStyle = `hsl(${hue}, 70%, ${30 + (this.dataArray[i] / 255) * 40}%)`;

      this.canvasCtx.fillRect(
        x,
        this.canvas.height - barHeight,
        barWidth,
        barHeight
      );

      x += barWidth + 1;
    }
  }

  // Waveform visualization (alternative style)
  private drawWaveform(): void {
    this.animationId = requestAnimationFrame(() => this.drawWaveform());

    this.analyser.getByteTimeDomainData(this.dataArray);

    this.canvasCtx.fillStyle = '#0a0a0a';
    this.canvasCtx.fillRect(0, 0, this.canvas.width, this.canvas.height);

    this.canvasCtx.lineWidth = 2;
    this.canvasCtx.strokeStyle = '#00ff88';
    this.canvasCtx.beginPath();

    const sliceWidth = this.canvas.width / this.dataArray.length;
    let x = 0;

    for (let i = 0; i < this.dataArray.length; i++) {
      const v = this.dataArray[i] / 128.0;
      const y = (v * this.canvas.height) / 2;

      if (i === 0) {
        this.canvasCtx.moveTo(x, y);
      } else {
        this.canvasCtx.lineTo(x, y);
      }

      x += sliceWidth;
    }

    this.canvasCtx.lineTo(this.canvas.width, this.canvas.height / 2);
    this.canvasCtx.stroke();
  }

  stop(): void {
    if (this.animationId) {
      cancelAnimationFrame(this.animationId);
    }
  }
}
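
A quick usage sketch, assuming a canvas element with id "visualizer" and a button that triggers the microphone request from a user gesture:

const canvas = document.getElementById('visualizer') as HTMLCanvasElement;
const visualizer = new AudioVisualizer(canvas);

document.getElementById('micButton')?.addEventListener('click', () => {
  visualizer.connectMicrophone(); // prompts for access, then starts the bar animation
});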

Adding Effects: Filter, Delay, and Reverb

Professional audio applications need effects. Here's how to implement common ones:

Low-Pass Filter

function createFilteredOscillator(frequency: number, cutoffFrequency: number) {
  const audioContext = new AudioContext();

  const oscillator = audioContext.createOscillator();
  const filter = audioContext.createBiquadFilter();
  const gainNode = audioContext.createGain();

  oscillator.type = 'sawtooth';
  oscillator.frequency.value = frequency;

  // Configure low-pass filter
  filter.type = 'lowpass';
  filter.frequency.value = cutoffFrequency;
  filter.Q.value = 10; // Resonance

  // Chain: oscillator -> filter -> gain -> output
  oscillator.connect(filter);
  filter.connect(gainNode);
  gainNode.connect(audioContext.destination);

  gainNode.gain.value = 0.3;
  oscillator.start();

  // Automate filter cutoff for sweep effect
  filter.frequency.exponentialRampToValueAtTime(
    5000,
    audioContext.currentTime + 2
  );

  return { oscillator, filter };
}
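
Calling it starts the tone immediately and sweeps the cutoff up to 5 kHz over two seconds. Note that the function doesn't stop the oscillator for you; for example:

// Start a dark sawtooth at 110 Hz with the cutoff at 200 Hz, then let it sweep open
const { oscillator } = createFilteredOscillator(110, 200);

// Stop the tone once the sweep has finished
setTimeout(() => oscillator.stop(), 3000);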

Delay Effect

function createDelayEffect(audioContext: AudioContext, source: AudioNode) {
  const delay = audioContext.createDelay(5.0); // Max 5 seconds
  const feedback = audioContext.createGain();
  const wetGain = audioContext.createGain();
  const dryGain = audioContext.createGain();

  // Configure delay
  delay.delayTime.value = 0.3; // 300ms delay
  feedback.gain.value = 0.4; // 40% feedback
  wetGain.gain.value = 0.5; // 50% wet signal
  dryGain.gain.value = 1.0; // 100% dry signal

  // Create delay feedback loop
  source.connect(delay);
  delay.connect(feedback);
  feedback.connect(delay);

  // Mix wet and dry
  source.connect(dryGain);
  delay.connect(wetGain);

  dryGain.connect(audioContext.destination);
  wetGain.connect(audioContext.destination);

  return { delay, feedback, wetGain, dryGain };
}
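
To hear the echoes, route any source through it. A quick sketch with a short oscillator blip (a fresh context is created inline for the demo; in an app you'd reuse one):

const audioContext = new AudioContext();
const osc = audioContext.createOscillator();
osc.type = 'square';
osc.frequency.value = 220;

createDelayEffect(audioContext, osc); // wires up the dry path, delay, and feedback loop

osc.start();
osc.stop(audioContext.currentTime + 0.2); // a short blip so the repeats stand out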

Convolution Reverb

async function createReverb(audioContext: AudioContext, impulseResponseUrl: string) {
  const convolver = audioContext.createConvolver();

  // Load impulse response audio file
  const response = await fetch(impulseResponseUrl);
  const arrayBuffer = await response.arrayBuffer();
  const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);

  convolver.buffer = audioBuffer;

  return convolver;
}

// Usage with dry/wet mix
async function setupReverbChain(audioContext: AudioContext, source: AudioNode) {
  const reverb = await createReverb(audioContext, '/impulse-responses/hall.wav');
  const dryGain = audioContext.createGain();
  const wetGain = audioContext.createGain();

  dryGain.gain.value = 0.7;
  wetGain.gain.value = 0.3;

  source.connect(dryGain);
  source.connect(reverb);
  reverb.connect(wetGain);

  dryGain.connect(audioContext.destination);
  wetGain.connect(audioContext.destination);
}

Building a Complete Audio Player

Let's combine everything into a feature-rich audio player:

class AdvancedAudioPlayer {
  private audioContext: AudioContext;
  private audioElement: HTMLAudioElement;
  private sourceNode: MediaElementAudioSourceNode | null = null;
  private gainNode: GainNode;
  private analyser: AnalyserNode;
  private filter: BiquadFilterNode;
  private panner: StereoPannerNode;

  constructor(audioElement: HTMLAudioElement) {
    this.audioContext = new AudioContext();
    this.audioElement = audioElement;

    // Create nodes
    this.gainNode = this.audioContext.createGain();
    this.analyser = this.audioContext.createAnalyser();
    this.filter = this.audioContext.createBiquadFilter();
    this.panner = this.audioContext.createStereoPanner();

    // Default settings
    this.filter.type = 'lowpass';
    this.filter.frequency.value = 20000;
    this.analyser.fftSize = 256;
  }

  connect(): void {
    if (this.sourceNode) return;

    this.sourceNode = this.audioContext.createMediaElementSource(this.audioElement);

    // Chain: source -> filter -> panner -> gain -> analyser -> output
    this.sourceNode.connect(this.filter);
    this.filter.connect(this.panner);
    this.panner.connect(this.gainNode);
    this.gainNode.connect(this.analyser);
    this.analyser.connect(this.audioContext.destination);
  }

  play(): void {
    if (this.audioContext.state === 'suspended') {
      this.audioContext.resume();
    }
    this.audioElement.play();
  }

  pause(): void {
    this.audioElement.pause();
  }

  setVolume(value: number): void {
    // value: 0 to 1
    this.gainNode.gain.setValueAtTime(value, this.audioContext.currentTime);
  }

  setPan(value: number): void {
    // value: -1 (left) to 1 (right)
    this.panner.pan.setValueAtTime(value, this.audioContext.currentTime);
  }

  setFilterFrequency(frequency: number): void {
    this.filter.frequency.setValueAtTime(frequency, this.audioContext.currentTime);
  }

  setFilterResonance(q: number): void {
    this.filter.Q.setValueAtTime(q, this.audioContext.currentTime);
  }

  getFrequencyData(): Uint8Array {
    const data = new Uint8Array(this.analyser.frequencyBinCount);
    this.analyser.getByteFrequencyData(data);
    return data;
  }

  getWaveformData(): Uint8Array {
    const data = new Uint8Array(this.analyser.fftSize);
    this.analyser.getByteTimeDomainData(data);
    return data;
  }

  // Crossfade to another track
  async crossfadeTo(newAudioElement: HTMLAudioElement, duration: number = 2): Promise<void> {
    const newSource = this.audioContext.createMediaElementSource(newAudioElement);
    const newGain = this.audioContext.createGain();

    newSource.connect(newGain);
    newGain.connect(this.audioContext.destination);

    const now = this.audioContext.currentTime;

    // Fade out current
    this.gainNode.gain.setValueAtTime(this.gainNode.gain.value, now);
    this.gainNode.gain.linearRampToValueAtTime(0, now + duration);

    // Fade in new
    newGain.gain.setValueAtTime(0, now);
    newGain.gain.linearRampToValueAtTime(1, now + duration);

    newAudioElement.play();

    setTimeout(() => {
      this.audioElement.pause();
    }, duration * 1000);
  }
}
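
Wiring the player up might look like this (a sketch; the track and playButton element IDs are placeholders, and the calls happen inside a click handler so the context is allowed to start):

const audioElement = document.getElementById('track') as HTMLAudioElement;
const player = new AdvancedAudioPlayer(audioElement);

document.getElementById('playButton')?.addEventListener('click', () => {
  player.connect();                 // builds the node chain once (guarded internally)
  player.play();                    // resumes the context and starts playback
  player.setVolume(0.8);
  player.setFilterFrequency(2000);  // darken the track with the low-pass filter
});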

React Hook for Web Audio

Here's a reusable React hook for audio applications:

import { useRef, useCallback, useEffect, useState } from 'react';

interface UseAudioReturn {
  play: (frequency?: number) => void;
  stop: () => void;
  setVolume: (volume: number) => void;
  isPlaying: boolean;
}

export function useAudio(): UseAudioReturn {
  const audioContextRef = useRef<AudioContext | null>(null);
  const oscillatorRef = useRef<OscillatorNode | null>(null);
  const gainNodeRef = useRef<GainNode | null>(null);
  const [isPlaying, setIsPlaying] = useState(false);

  const initAudioContext = useCallback(() => {
    if (!audioContextRef.current) {
      audioContextRef.current = new AudioContext();
      gainNodeRef.current = audioContextRef.current.createGain();
      gainNodeRef.current.connect(audioContextRef.current.destination);
      gainNodeRef.current.gain.value = 0.5;
    }
  }, []);

  const play = useCallback((frequency: number = 440) => {
    initAudioContext();

    if (!audioContextRef.current || !gainNodeRef.current) return;

    if (audioContextRef.current.state === 'suspended') {
      audioContextRef.current.resume();
    }

    // Stop existing oscillator
    if (oscillatorRef.current) {
      oscillatorRef.current.stop();
    }

    oscillatorRef.current = audioContextRef.current.createOscillator();
    oscillatorRef.current.type = 'sine';
    oscillatorRef.current.frequency.value = frequency;
    oscillatorRef.current.connect(gainNodeRef.current);
    oscillatorRef.current.start();

    setIsPlaying(true);
  }, [initAudioContext]);

  const stop = useCallback(() => {
    if (oscillatorRef.current) {
      oscillatorRef.current.stop();
      oscillatorRef.current = null;
      setIsPlaying(false);
    }
  }, []);

  const setVolume = useCallback((volume: number) => {
    if (gainNodeRef.current && audioContextRef.current) {
      gainNodeRef.current.gain.setValueAtTime(
        volume,
        audioContextRef.current.currentTime
      );
    }
  }, []);

  useEffect(() => {
    return () => {
      if (oscillatorRef.current) oscillatorRef.current.stop();
      if (audioContextRef.current) audioContextRef.current.close();
    };
  }, []);

  return { play, stop, setVolume, isPlaying };
}
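
A minimal component using the hook might look like this (a sketch; the notes and labels are arbitrary):

function TonePad() {
  const { play, stop, setVolume, isPlaying } = useAudio();

  return (
    <div>
      <button onClick={() => play(440)}>A4</button>
      <button onClick={() => play(523.25)}>C5</button>
      <button onClick={stop} disabled={!isPlaying}>Stop</button>
      <input
        type="range"
        min="0"
        max="1"
        step="0.01"
        defaultValue="0.5"
        onChange={(e) => setVolume(Number(e.target.value))}
      />
    </div>
  );
}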

Performance Best Practices

Working with audio requires careful performance management:

  1. Reuse the audio context - create one AudioContext and share it everywhere (see the sketch below)
  2. Disconnect unused nodes - call disconnect() on nodes you're finished with
  3. Use Audio Worklets for heavy processing - move custom DSP onto the audio rendering thread
  4. Avoid creating nodes in animation loops - pre-create nodes and reuse them
  5. Handle the suspended state - always resume the context on user interaction
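
Here's the sketch mentioned in point 1: a small module that lazily creates a single AudioContext, resumes it if the browser suspended it, and hands the same instance to every caller (the getAudioContext name is just a suggestion):

// audio-context.ts - one shared context for the whole app
let sharedContext: AudioContext | null = null;

export function getAudioContext(): AudioContext {
  if (!sharedContext) {
    sharedContext = new AudioContext();
  }
  if (sharedContext.state === 'suspended') {
    sharedContext.resume(); // safe to call from a user-gesture handler
  }
  return sharedContext;
}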

For point 3, an Audio Worklet runs your DSP on the audio rendering thread, away from the main thread. The processor lives in its own file:

// processor.js - a custom AudioWorkletProcessor that applies a gain parameter
class GainProcessor extends AudioWorkletProcessor {
  static get parameterDescriptors() {
    return [{ name: 'gain', defaultValue: 1, minValue: 0, maxValue: 1 }];
  }

  process(inputs, outputs, parameters) {
    const input = inputs[0];
    const output = outputs[0];
    const gain = parameters.gain;

    for (let channel = 0; channel < input.length; channel++) {
      const inputChannel = input[channel];
      const outputChannel = output[channel];

      for (let i = 0; i < inputChannel.length; i++) {
        // The gain param may be a-rate (128 values per block) or k-rate (a single value)
        outputChannel[i] = inputChannel[i] * (gain.length > 1 ? gain[i] : gain[0]);
      }
    }

    return true; // Keep processor alive
  }
}

registerProcessor('gain-processor', GainProcessor);
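
On the main thread, the processor file is loaded with audioWorklet.addModule and used through an AudioWorkletNode. A sketch, assuming the file above is served at /processor.js:

async function setupWorkletGain(audioContext: AudioContext, source: AudioNode) {
  await audioContext.audioWorklet.addModule('/processor.js');

  const workletGain = new AudioWorkletNode(audioContext, 'gain-processor');
  workletGain.parameters.get('gain')?.setValueAtTime(0.5, audioContext.currentTime);

  source.connect(workletGain);
  workletGain.connect(audioContext.destination);
}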

Conclusion

The Web Audio API opens incredible possibilities for creative web development. From simple sound effects to full-fledged digital audio workstations, the browser has become a powerful platform for audio applications.

As someone with a background in music production, I find the Web Audio API bridges my two worlds perfectly. The same concepts from DAWs like Ableton Live translate directly to code.

For those interested in diving deeper into audio development, check out my guide on VST plugin development with JUCE for native audio applications.

Start experimenting with these examples and build something that sounds amazing. The web is ready for your audio creations.