TL;DR: I wanted to turn on the flashlight on an Android device using TypeScript, and the experience was far from ideal.

The task

Background: there’s a web application that runs a camera to scan for QR codes. Sometimes the light conditions are poor and scanning is difficult.

The task: Launch the flashlight to improve the scanning experience.

Doesn’t sound that difficult, does it?

Meet the camera API

So we want to obtain the camera stream. How do we do that? The answer seems simple: meet MediaDevices.getUserMedia().

Let’s obtain the camera then:

const mediaStream = await window.navigator.mediaDevices.getUserMedia({
    audio: false,
    video: { facingMode: "environment" }
});

This gives us a MediaStream. Can we turn on the flashlight on this thing now?

The answer is No

Apparently, the flashlight is not present on the MediaStream. The next best guess is to fetch the video tracks using getVideoTracks(), which returns an array of MediaStreamTrack.

Why would it return more than one video track from a single camera? No idea.

The important part is that there is supposed to be a single track anyway, and the track has a getCapabilities() method. It returns MediaTrackCapabilities, which is… well, not documented, at least at the time of writing.
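As a sketch, reading the capabilities off the stream could look like the following (readTrackCapabilities is an illustrative name; the feature check is there because getCapabilities() is not implemented in every browser):

```typescript
// Sketch: grab the (usually single) video track and read its capabilities.
// getCapabilities() is not available in every browser, hence the feature check.
function readTrackCapabilities(stream: MediaStream): MediaTrackCapabilities | undefined {
  const [track] = stream.getVideoTracks();
  if (!track || typeof track.getCapabilities !== "function") {
    return undefined;
  }
  return track.getCapabilities();
}
```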


Typescript model

Luckily, this interface has been modelled in TypeScript:

interface MediaTrackCapabilities {
    aspectRatio? : DoubleRange;
    autoGainControl? : boolean[];
    channelCount? : ULongRange;
    cursor? : string[];
    deviceId? : string;
    displaySurface? : string;
    echoCancellation? : boolean[];
    facingMode? : string[];
    frameRate? : DoubleRange;
    groupId? : string;
    height? : ULongRange;
    latency? : DoubleRange;
    logicalSurface? : boolean;
    noiseSuppression? : boolean[];
    resizeMode? : string[];
    sampleRate? : ULongRange;
    sampleSize? : ULongRange;
    width? : ULongRange;
}

Wait a second, nothing about a flashlight, right?

Well, that’s because the model is incomplete. Luckily, there’s the @types/w3c-image-capture package (npm install --save-dev @types/w3c-image-capture). You basically just have to know it exists.

After installing it, this is what the interface looks like:

interface MediaTrackCapabilities {
    whiteBalanceMode: MeteringMode[];
    exposureMode: MeteringMode[];
    focusMode: MeteringMode[];

    exposureCompensation: MediaSettingsRange;
    colorTemperature: MediaSettingsRange;
    iso: MediaSettingsRange;
    brightness: MediaSettingsRange;
    contrast: MediaSettingsRange;
    saturation: MediaSettingsRange;
    sharpness: MediaSettingsRange;

    focusDistance: MediaSettingsRange;
    pan: MediaSettingsRange;
    tilt: MediaSettingsRange;
    zoom: MediaSettingsRange;
    torch: boolean;
}

Entirely different and kind of confusing, isn’t it?

Never mind, we have a torch property! Now we just filter through the list returned by getVideoTracks(), find a track with torch, and we are done? Not that simple.

In some cases, like mine when testing the code on a Samsung Galaxy S10, the mediaStream I acquired represented a wide-lens camera. It had only a single video track, with no torch capability.

Find a camera with flashlight

It’s been tiresome already, but we are getting there. When calling getUserMedia above, we requested a camera with certain capabilities. Perhaps we can just restrict the request to a device that has a flashlight? Let’s see the data model:

interface MediaStreamConstraints {
    audio? : boolean | MediaTrackConstraints;
    peerIdentity? : string;
    preferCurrentTab? : boolean;
    video? : boolean | MediaTrackConstraints;
}

interface MediaTrackConstraints extends MediaTrackConstraintSet {
    advanced? : MediaTrackConstraintSet[] | undefined;
}

interface MediaTrackConstraintSet {
    width? : W3C.ConstrainLong | undefined;
    height? : W3C.ConstrainLong | undefined;
    aspectRatio? : W3C.ConstrainDouble | undefined;
    frameRate? : W3C.ConstrainDouble | undefined;
    facingMode? : W3C.ConstrainString | undefined;
    volume? : W3C.ConstrainDouble | undefined;
    sampleRate? : W3C.ConstrainLong | undefined;
    sampleSize? : W3C.ConstrainLong | undefined;
    echoCancellation? : W3C.ConstrainBoolean | undefined;
    latency? : W3C.ConstrainDouble | undefined;
    deviceId? : W3C.ConstrainString | undefined;
    groupId? : W3C.ConstrainString | undefined;
}

Apparently, there’s no way to request a device with a flashlight!


The only thing you can do about it is to enumerate the devices using mediaDevices.enumerateDevices(). This method returns a list of MediaDeviceInfo:

interface MediaDeviceInfo {
    readonly deviceId: string;
    readonly groupId: string;
    readonly kind: MediaDeviceKind;
    readonly label: string;
    toJSON(): any;
}

Using the deviceId, you can request every single device, which means physically launching each camera! Then you can check all the video tracks and find one that has the flashlight capability. After acquiring each device, you need to free it, which can be done with a method like this one:

function shutdownMediaStream(ms: MediaStream) {
  ms.getTracks().forEach(t => t.stop());
}


Summary

The camera API is far from perfect, at least for use cases like this one, and it is especially hard to navigate if you are not a front-end specialist.

If you want to play with the camera API yourself, I’ve created a simple playground project for that purpose. Feel free to fork it from

Disclaimer: my primary expertise is backend development with Scala, so please treat this post as a newcomer’s perspective on the topic.