Progressive Web Apps

How to Access the Camera in a PWA

Camera access, which we previously featured in our article on the hardware capabilities of PWAs, is one of the more prominent features that we’re seeing more and more of. Properly integrating this capability into your PWA, however, is not an easy task, which is why in today’s article we’ll guide you through the whole process.

Prerequisites

  • A basic PWA, which can be easily created using ReactJS and our written guide
  • A solid understanding of HTML and JavaScript

How to Access the Camera in a PWA

The basics

Introducing getUserMedia(), a WebRTC API

To get direct access to a camera and/or a microphone, the Web uses an API called getUserMedia(), which is widely supported in almost all modern browsers. This API, along with RTCPeerConnection and RTCDataChannel, is part of WebRTC, a framework built into browsers that enables real-time communication.

Basically, calling navigator.mediaDevices.getUserMedia(constraints) prompts the user for permission to access the device’s audio and video inputs (e.g., microphone, webcam, camera). Once permission is granted, the returned promise resolves with a MediaStream object that can be further manipulated.

Examples

Say, for example, we have a button:

<button>Show my face</button>

Clicking this button calls the navigator.mediaDevices.getUserMedia() method (without audio input):

navigator.mediaDevices.getUserMedia({
  video: true
})

Heck, we can go wild with the constraints as well:

navigator.mediaDevices.getUserMedia({
  video: {
    aspectRatio: { min: 1.333 },
    frameRate: { min: 30 },
    width: 1280,
    height: 720
  }
})

Additionally, we can specify a facingMode property in the video object which tells the browser which camera of the device to make use of:

{
  video: {
    ...
    facingMode: {
      // Use the back camera
      exact: 'environment'
    }
  }
}

Or:

{
  video: {
    ...
    // Use the front camera
    facingMode: 'user'
  }
}
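Before hard-coding a facingMode, it can help to check which cameras the device actually has. The sketch below uses navigator.mediaDevices.enumerateDevices() for that; the helper function name is ours, and note that device labels are only populated after permission has been granted:

```javascript
// Pure helper: pick the video inputs out of a device list.
function videoInputs(devices) {
  return devices.filter((d) => d.kind === 'videoinput');
}

// In the browser, feed it the output of enumerateDevices():
async function listCameras() {
  return videoInputs(await navigator.mediaDevices.enumerateDevices());
}
```

A phone with both a front and a back camera will typically report two videoinput entries here.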

Notes:

  • The API is only available in a secure context (HTTPS, or localhost during development)
  • To get a list of the supported constraints on the current device, run:
 navigator.mediaDevices.getSupportedConstraints()
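For example, you can use that dictionary to check whether a constraint such as facingMode is understood by the current browser before requesting it. A small sketch (the helper name is ours; getSupportedConstraints() returns an object of booleans):

```javascript
// Check whether a given constraint name is supported before using it.
function constraintSupported(supported, name) {
  return Boolean(supported[name]);
}

// In the browser:
// const supported = navigator.mediaDevices.getSupportedConstraints();
// constraintSupported(supported, 'facingMode');
```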

The complicated part

Now that we’ve got a solid understanding of the basics, let’s move on to the advanced part. Here, we’ll create a button in our PWA that, when clicked, opens the camera and lets us do further work.

Creating the [Get access to camera] button

First, let’s start with the <button> in our index.html :

<button id="get-access">Get access to camera</button>
<video autoplay></video>
<script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>

Notes: 

  • The autoplay attribute tells the video element to play the media stream immediately rather than freezing on the first frame.
  • The adapter-latest.js is a shim to insulate apps from spec changes and prefix differences.
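Even with the shim in place, it’s worth feature-detecting the promise-based API before wiring up the button, since very old browsers expose nothing at all. A minimal sketch (the helper takes the navigator object as a parameter purely so it can be tested in isolation):

```javascript
// Returns true when the promise-based getUserMedia API is available.
function hasGetUserMedia(nav) {
  return Boolean(nav.mediaDevices && nav.mediaDevices.getUserMedia);
}

// In the browser: hasGetUserMedia(navigator)
```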

Stream video in real-time by clicking on the button

To stream video in real time when the button is clicked, we’ll need to add an EventListener that runs when the click event fires:

document.querySelector('#get-access').addEventListener('click', async function init(e) {
  try {
  } catch (error) {
  }
})

Inside the try block, we then call navigator.mediaDevices.getUserMedia() and ask for a video stream from the device’s webcam:

document.querySelector('#get-access').addEventListener('click', async function init(e) {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({
      audio: false,
      video: true
    })
    const videoTracks = stream.getVideoTracks()
    const track = videoTracks[0]
    alert(`Getting video from: ${track.label}`)
    document.querySelector('video').srcObject = stream
    document.querySelector('#get-access').setAttribute('hidden', true)
    // The video stream is stopped by track.stop() after 3 seconds of playback.
    setTimeout(() => { track.stop() }, 3 * 1000)
  } catch (error) {
    alert(`${error.name}`)
    console.error(error)
  }
})

Additionally, as specified above in the Basic section, you can also specify more requirements for the video stream:

navigator.mediaDevices.getUserMedia({
  video: {
    aspectRatio: { min: 1.333, max: 1.334 },
    facingMode: 'user',
    frameRate: { ideal: 60 },
    width: { max: 640 },
    height: { max: 480 }
  }
}).then(successCallback).catch(errorCallback);
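Constraints don’t have to be fixed at stream creation either: each video track also exposes an applyConstraints() method that returns a promise, and getSettings() tells you what the browser actually settled on. A small sketch (the function name is ours):

```javascript
// Ask an already-running video track for a different resolution.
async function requestResolution(track, width, height) {
  await track.applyConstraints({
    width: { ideal: width },
    height: { ideal: height }
  });
  return track.getSettings(); // the resolution the browser actually picked
}
```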

Creating a canvas

With the <video> element combined with a <canvas>, you can further process the real-time video stream. This includes the ability to perform a variety of effects, such as applying custom filters or chroma-keying (aka the “green screen effect”), all using JavaScript code.

In case you want to read more about this, Mozilla has written a detailed guide about Manipulating video using canvas so don’t forget to check it out!
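To give a taste of what such processing looks like, the sketch below copies each video frame onto a canvas and inverts its colors, with requestAnimationFrame driving the loop. The invert step is just a stand-in for whatever per-pixel effect you need, and the element references are assumed to exist:

```javascript
// Pure per-pixel effect: data is a flat RGBA array; flip R, G and B,
// leave the alpha channel alone.
function invertPixels(data) {
  for (let i = 0; i < data.length; i += 4) {
    data[i] = 255 - data[i];
    data[i + 1] = 255 - data[i + 1];
    data[i + 2] = 255 - data[i + 2];
  }
  return data;
}

// Browser-side loop: draw the current video frame, process it, repeat.
function startProcessing(video, canvas) {
  const ctx = canvas.getContext('2d');
  function draw() {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
    invertPixels(frame.data);
    ctx.putImageData(frame, 0, 0);
    requestAnimationFrame(draw);
  }
  draw();
}
```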

Capture a snapshot of the video stream using takePhoto() and grabFrame()

The takePhoto() and grabFrame() methods of the ImageCapture API can be used to capture a snapshot of the currently streaming video. Note that ImageCapture currently has limited browser support (mainly Chromium-based browsers), so check compatibility before relying on it. There are still significant differences between the two methods:

Basically, what grabFrame() does is simply grab the next video frame and resolve with an ImageBitmap, a quick but less refined way of capturing photos. takePhoto(), on the other hand, interrupts the video stream to use the camera’s “highest available photographic camera resolution” and resolves with a Blob image.
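Both methods live on an ImageCapture object, which is constructed from a video track of the stream, so the examples below assume something like this has run first (a sketch; the constructor only exists in browsers that expose ImageCapture):

```javascript
// Build an ImageCapture from the first video track of a stream.
function captureFrom(stream) {
  const track = stream.getVideoTracks()[0];
  return new ImageCapture(track); // browser-only constructor
}
```

With that in place, the imageCapture variable used below can be created as var imageCapture = captureFrom(stream).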

In the example below, we draw the captured frame into a canvas element using the grabFrame() method:

var grabFrameButton = document.querySelector('button#grabFrame');
var canvas = document.querySelector('canvas');
// Assumes imageCapture was created earlier from a video track,
// e.g. var imageCapture = new ImageCapture(stream.getVideoTracks()[0]);

grabFrameButton.onclick = grabFrame;

function grabFrame() {
  imageCapture.grabFrame()
  .then(function(imageBitmap) {
    console.log('Grabbed frame:', imageBitmap);
    canvas.width = imageBitmap.width;
    canvas.height = imageBitmap.height;
    canvas.getContext('2d').drawImage(imageBitmap, 0, 0);
    canvas.classList.remove('hidden');
  })
  .catch(function(error) {
    console.log('grabFrame() error: ', error);
  });
}

And in this example, we use the takePhoto() method:

var takePhotoButton = document.querySelector('button#takePhoto');
var img = document.querySelector('img');
// Assumes imageCapture was created earlier from a video track,
// e.g. var imageCapture = new ImageCapture(stream.getVideoTracks()[0]);

takePhotoButton.onclick = takePhoto;

// Get a Blob from the currently selected camera source and
// display this with an img element.
function takePhoto() {
  imageCapture.takePhoto().then(function(blob) {
    console.log('Took photo:', blob);
    img.classList.remove('hidden');
    img.src = URL.createObjectURL(blob);
  }).catch(function(error) {
    console.log('takePhoto() error: ', error);
  });
}
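One housekeeping detail about the takePhoto() example: every URL.createObjectURL() call keeps its Blob alive until the URL is revoked, so it’s good practice to release it once the image has loaded. A sketch of that pattern (the helper name is ours):

```javascript
// Display a Blob in an <img> element and revoke the object URL once
// the image has finished loading, so the Blob can be garbage-collected.
function showBlob(img, blob) {
  const url = URL.createObjectURL(blob);
  img.onload = () => URL.revokeObjectURL(url);
  img.src = url;
}
```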

To get an idea of what the above methods look like in action, we recommend Simple Image Capture; alternatively, PWA Media Capture is also a good example of what a basic media capture feature in a PWA looks like.

Conclusion

In this tutorial, we’ve introduced the basics as well as some advanced tricks for implementing camera features in your PWA. The rest is up to your imagination to make the best of this feature.

For Magento merchants looking to develop a next-gen PWA-powered store, here at SimiCart we provide complete PWA solutions tailored to your needs.


Luke Vu

A content writer with a passion for the English language.
