Building a PWA for a Chromebook; Part 2: Audio APIs
25 May 2020

People always told me I had a “loud voice”, and my foster son’s loud callouts during quarantine are also trying the adults in the house. To learn more about volume and intensity, I tried to build a Progressive Web App (PWA) that monitors your volume.
This project was inspired by Nadieh Bremer’s use of the Web Audio API, as well as several Web Audio API tutorials.
Files: Input Element, Capture Attribute
You can use the `<input>` element with the `capture` attribute. This was too simplistic for my needs.
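For reference, a minimal sketch of what that looks like (note that the `capture` hint is only honored by some browsers, mostly on mobile):

```js
// Equivalent to the markup: <input type="file" accept="audio/*" capture>
// The capture attribute hints that the browser should record from the
// microphone directly instead of opening a file picker.
const input = document.createElement('input');
input.type = 'file';
input.accept = 'audio/*';
input.setAttribute('capture', '');
document.body.appendChild(input);
```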
Stream: Access the Microphone Interactively
Calling the `navigator.mediaDevices.getUserMedia()` function prompts the user to use their microphone.

Please note the use of `navigator.mediaDevices.getUserMedia()`, which takes one argument and returns a promise, versus the deprecated `navigator.getUserMedia()`, which takes three arguments (the second and third are callback functions).
The stream can be:
- attached to an `<audio>` element
- attached to a WebRTC stream
- attached to a Web Audio `AudioContext`
- saved using the `MediaRecorder` API
Example calls:
- if you only know the type of media (audio/video): `navigator.mediaDevices.getUserMedia({ audio: true, video: false })`
- if you know the specific device (after enumerating devices with `navigator.mediaDevices.enumerateDevices()`): `navigator.mediaDevices.getUserMedia({ audio: { deviceId: devices[0].deviceId } })`
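Here is a minimal sketch of the promise-based flow (the `openMicrophone` wrapper is my own name, not from the original code):

```js
// Prompt for the microphone and return the resulting MediaStream.
// Error handling matters because the user can deny the permission prompt.
async function openMicrophone() {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({
      audio: true,
      video: false,
    });
    console.log('Got microphone stream:', stream.id);
    return stream;
  } catch (err) {
    // NotAllowedError if the user denies, NotFoundError if no mic exists.
    console.error('getUserMedia failed:', err.name);
    throw err;
  }
}
```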
Processing: Handle the Stream with AudioContext
The Web Audio API is a simple API that takes input sources, connects those sources to nodes that process the audio data (adjusting gain, etc.), and ultimately routes them to a destination (such as the speakers) so that the user can hear the result.
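For example, here is a sketch of a tiny audio graph, assuming `stream` came from the `getUserMedia` call above (beware: routing a live microphone straight to the speakers can cause feedback):

```js
// A minimal audio graph: microphone -> gain -> speakers.
const audioContext = new AudioContext();
const source = audioContext.createMediaStreamSource(stream);
const gainNode = audioContext.createGain();
gainNode.gain.value = 0.5; // halve the volume

source.connect(gainNode);
gainNode.connect(audioContext.destination);
```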
Analyzing: AnalyserNode (British spelling!)
As seen in this answer:

> You can also use an AnalyserNode to do the level detection, and just average out the data, kind of like what the above answer does in getAverageVolume. However, the above answer is NOT a good use of ScriptProcessor - in fact, it’s doing no processing of the script node at all, not even passing the data through, it’s just using it like a timer callback. You would be FAR better served by using requestAnimationFrame as the visual callback;
The `ScriptProcessor`/`createScriptProcessor` is deprecated, but the `AnalyserNode` is not!
The `AnalyserNode` helps us extract time-domain and frequency data from the stream:

TODO: img https://mdn.mozillademos.org/files/12970/fttaudiodata_en.svg (`AnalyserNode.fftSize`)
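Putting that answer’s advice together, here is a sketch of level detection with an `AnalyserNode` polled from `requestAnimationFrame` (it reuses `audioContext` and `source` from the sketch above; `getAverageVolume` is the name used in the quoted answer):

```js
// Measure average loudness with an AnalyserNode, polled via
// requestAnimationFrame instead of a deprecated ScriptProcessorNode.
const analyser = audioContext.createAnalyser();
analyser.fftSize = 2048; // samples per FFT; must be a power of two
source.connect(analyser);

const data = new Uint8Array(analyser.frequencyBinCount); // fftSize / 2 bins

function getAverageVolume() {
  analyser.getByteFrequencyData(data);
  const sum = data.reduce((acc, v) => acc + v, 0);
  return sum / data.length; // 0 (silence) .. 255 (max)
}

function tick() {
  console.log('average volume:', getAverageVolume());
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);
```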
Deploying to Glitch
Using Git to commit: if you want to code locally and push to your Glitch git repository, update your Glitch git config like this: `git config receive.denyCurrentBranch updateInstead` (also seen here).
Let’s see if we can use the `AnalyserNode`. I continued using Glitch (where I developed the PWA) to get the `getAverageVolume` code running…
I removed `createScriptProcessor` and the resulting `javascriptNode`, and connected the `AnalyserNode` directly to `audioContext.destination`:

```js
analyser.connect(audioContext.destination);
// javascriptNode.connect(audioContext.destination);
```
TODO: How to throttle requestAnimationFrame / animating
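One common pattern (my assumption of what that TODO might become, not the author’s actual solution) is to compare the timestamp `requestAnimationFrame` passes to its callback and skip frames until enough time has elapsed:

```js
// Only do work every ~250 ms, even though requestAnimationFrame
// fires on every frame (~16 ms at 60 fps).
let lastUpdate = 0;

function throttledTick(timestamp) {
  if (timestamp - lastUpdate >= 250) {
    lastUpdate = timestamp;
    console.log('average volume:', getAverageVolume());
  }
  requestAnimationFrame(throttledTick);
}
requestAnimationFrame(throttledTick);
```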
Visualizing loudness and pitch
- A great article on visualizing loudness and pitch
- An introduction to the concept of the “audio graph” (input nodes, modification nodes, and output nodes) and the Web Audio API
- How to understand `getByteTimeDomainData` vs `getByteFrequencyData`, based on this Stack Overflow question
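In short: time-domain data is the raw waveform, and frequency data is its FFT. A sketch contrasting the two `AnalyserNode` read-outs (reusing `analyser` from above):

```js
// Time-domain: one byte per sample of the waveform; 128 is the centerline.
const waveform = new Uint8Array(analyser.fftSize);
analyser.getByteTimeDomainData(waveform);

// Frequency: one byte per frequency bin, 0..255, low frequencies first.
const spectrum = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(spectrum);
```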
# Recording

From Mozilla’s documentation:

> The MediaStream Recording API is comprised of a single major interface, `MediaRecorder`, which does all the work of taking the data from a `MediaStream` and delivering it to you for processing.
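A minimal sketch of recording, assuming `stream` is the MediaStream from the earlier `getUserMedia` call:

```js
// Record a few seconds of the microphone stream into a Blob.
const recorder = new MediaRecorder(stream);
const chunks = [];

recorder.ondataavailable = (event) => chunks.push(event.data);
recorder.onstop = () => {
  const blob = new Blob(chunks, { type: recorder.mimeType });
  console.log('Recorded', blob.size, 'bytes of', blob.type);
};

recorder.start();
setTimeout(() => recorder.stop(), 5000); // stop after five seconds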
Measuring Now
`performance.now()` vs `Date.now()`: the former measures the number of milliseconds since the “time origin” (for a page, roughly when it started loading), the latter measures the number of milliseconds since January 1, 1970 (the Unix epoch).
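A quick illustration (the printed values are made up, shown only for scale):

```js
console.log(performance.now()); // ms since the page's time origin, e.g. 5423.8
console.log(Date.now());        // ms since Jan 1, 1970 UTC, e.g. 1590425632000
```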