Hi everyone, and welcome to my first ever Medium story! Recently I added some barcode scanning functionality to one of my apps, and I decided to share what I learned here. It's my first time writing something like this, so bear with me, and if you take ONE useful thing out of this article, then…
Let’s get crackin!
So what’s this going to be about? We are just going to point our device’s camera at a barcode and extract the barcode’s “value” in order to use it in a later search, query, or whatever.
We are going to break down the app into three parts:
- Setting up CameraX, and its Preview use case
- Integrating MLKit into our project and preparing the barcode scanner
- Feeding the camera’s image into the scanner using CameraX Image Analysis use case
Simple as that.
CameraX is nothing more than a Jetpack support library for…well…doing stuff with the camera. It introduces the concept of use cases, of which there are currently three: Preview, Image Analysis, and Image Capture. We are only interested in the Preview use case for now.
We’ll first need to add the following dependencies to the build.gradle file (Module: app):
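The CameraX dependencies look roughly like this. The version numbers below are from around the time of writing; check the CameraX release notes for the latest ones:

```groovy
// Module-level build.gradle: CameraX core, lifecycle integration, and the PreviewView widget
def camerax_version = "1.0.0-beta07"   // check the release notes for the current version
implementation "androidx.camera:camera-camera2:$camerax_version"
implementation "androidx.camera:camera-lifecycle:$camerax_version"
// PreviewView lives in its own artifact, versioned separately
implementation "androidx.camera:camera-view:1.0.0-alpha14"
```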
Make sure the following plugin is right there at the top, too:
apply plugin: 'kotlin-android-extensions'
And we are going to be using Java 8, so let’s set our compile options as:
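Inside the android block of the same build.gradle file, that looks like:

```groovy
android {
    // CameraX requires Java 8 language features
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
    kotlinOptions {
        jvmTarget = "1.8"
    }
}
```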
Pretty simple step here, just add a PreviewView to your layout. This is the View into which the camera will stream when it becomes active. Something like this is enough:
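A minimal layout with just the PreviewView would look like this (the previewView id is my choice here; name it whatever you like):

```xml
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- The camera feed will be rendered into this view -->
    <androidx.camera.view.PreviewView
        android:id="@+id/previewView"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

</FrameLayout>
```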
In our Activity/Fragment we’ll request the necessary permissions, and start the camera after those permissions are granted. For the sake of brevity, I’m only going to include the necessary code for starting the camera here. The full code is available in the repository.
So, let’s start the camera. Be sure to read the comments inside the code snippet if you want a more detailed understanding of the startCamera() method.
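If you don’t want to jump straight to the repository, the gist of startCamera() is this. I’m assuming we’re in a Fragment (hence requireContext() and viewLifecycleOwner) and that previewView is the view from the layout above:

```kotlin
private fun startCamera() {
    // The camera provider is obtained asynchronously through a ListenableFuture
    val cameraProviderFuture = ProcessCameraProvider.getInstance(requireContext())

    cameraProviderFuture.addListener({
        // At this point the future has completed, so get() won't block
        val cameraProvider = cameraProviderFuture.get()

        // Build the Preview use case and connect it to our PreviewView
        val preview = Preview.Builder().build().also {
            it.setSurfaceProvider(previewView.surfaceProvider)
        }

        // We want the back-facing camera
        val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA

        try {
            // Unbind any previously bound use cases before rebinding
            cameraProvider.unbindAll()
            // Tie the Preview use case to this Fragment's lifecycle
            cameraProvider.bindToLifecycle(viewLifecycleOwner, cameraSelector, preview)
        } catch (e: Exception) {
            Log.e(TAG, "Use case binding failed", e)
        }
    }, ContextCompat.getMainExecutor(requireContext()))
}
```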
So far, so good. Let’s leave this as it is for a moment and we’ll come back to it later.
What’s MLKit? And what does it do?
ML Kit is a mobile SDK that brings Google’s on-device machine learning expertise to Android and iOS apps.
ML Kit’s APIs all run on-device, allowing for real-time use cases where you want to process a live camera stream for example.
MLKit has a BUNCH of applications, all the way from Pose Detection to Digital Ink Recognition. Of course we are interested in the barcode scanning capabilities.
MLKit can also extract a whole lot more information from a barcode other than its raw value, for instance WiFi access point details or geographic coordinates.
For each barcode, you can get its bounding coordinates in the input image, as well as the raw data encoded by the barcode. Also, if the barcode scanner was able to determine the type of data encoded by the barcode, you can get an object containing parsed data.
We are not going to tap into any of that encoded information but you can read more about it here.
Add the following dependency to the build.gradle file (Module: app):
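The dependency in question (again, double-check the version against the ML Kit release notes):

```groovy
// On-device barcode scanning model, bundled with the app
implementation 'com.google.mlkit:barcode-scanning:16.0.3'
```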
Next up, we are going to create a class called BarcodeAnalyzer, that implements the ImageAnalysis.Analyzer interface. We will then pass this to ImageAnalysis.setAnalyzer to receive images and perform our processing.
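A sketch of the analyzer, assuming the BarcodeListener typealias shown a bit further down (the @SuppressLint annotation is there because accessing ImageProxy.image was still marked experimental at the time):

```kotlin
class BarcodeAnalyzer(private val barcodeListener: BarcodeListener) : ImageAnalysis.Analyzer {

    // Default scanner: recognizes all supported barcode formats
    private val scanner = BarcodeScanning.getClient()

    @SuppressLint("UnsafeOptInUsageError")
    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image
        if (mediaImage != null) {
            // The analyzer already gives us the frame's rotation, no manual compensation needed
            val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
            scanner.process(image)
                .addOnSuccessListener { barcodes ->
                    // Hand every decoded raw value to whoever is listening
                    for (barcode in barcodes) {
                        barcode.rawValue?.let { barcodeListener(it) }
                    }
                }
                .addOnFailureListener {
                    // Log or handle the failure as needed
                }
                .addOnCompleteListener {
                    // Close the ImageProxy so CameraX can deliver the next frame
                    imageProxy.close()
                }
        } else {
            imageProxy.close()
        }
    }
}
```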
Couple of things to make a note of:
Although it’s enough to call BarcodeScanning.getClient() to get an instance of BarcodeScanner, we might want to do some prior configuration, say, to scan only QR codes. That can be done easily using the BarcodeScannerOptions builder, which you can check out in the documentation.
Also, you might see in other projects some “rotation compensation” method, or piece of code, that gets the device rotation and performs some calculation. That is not necessary here, since ImageAnalysis provides each frame’s rotation for us. You can see the rotation value passed as the second parameter to the InputImage.fromMediaImage method.
Finally, the barcodeListener we are going to pass to the analyzer is nothing more than an instance of this:
typealias BarcodeListener = (barcode: String) -> Unit
3. Connecting the Parts
We have a working camera preview, and a barcode scanner which extracts a barcode value from an image. Let’s make them work together.
In our Fragment, right after building our preview, we are going to build an instance of the ImageAnalysis use case. ImageAnalysis acquires images from the camera through an ImageReader and provides them to an ImageAnalysis.Analyzer (our BarcodeAnalyzer).
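Building it and wiring in our analyzer looks roughly like this (cameraExecutor and processingBarcode are explained just below):

```kotlin
val imageAnalysis = ImageAnalysis.Builder()
    // Drop frames we can't keep up with; we only care about the latest one
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()
    .also {
        it.setAnalyzer(cameraExecutor, BarcodeAnalyzer { barcode ->
            // Only act on the first successful read until the flag is reset
            if (processingBarcode.compareAndSet(false, true)) {
                searchBarcode(barcode)
            }
        })
    }
```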
Here, the processingBarcode AtomicBoolean is nothing more than a flag that helps us process one barcode at a time, preventing a flood of calls to the searchBarcode method. This method is just whatever operation we want to perform with the barcode value we just read. For the purposes of this project, on a successful read, searchBarcode will navigate us to a success screen after a 1-second delay.
The cameraExecutor is the executor in which the analyzer will be run:
cameraExecutor = Executors.newSingleThreadExecutor()
Now, we need to add this use case to the method call that binds the use cases to the lifecycleOwner…the piece of code inside the try/catch block, remember? So this…
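…turns into this, with imageAnalysis simply appended to the vararg list of use cases:

```kotlin
try {
    cameraProvider.unbindAll()
    // Preview and ImageAnalysis now run side by side, tied to the same lifecycle
    cameraProvider.bindToLifecycle(
        viewLifecycleOwner, cameraSelector, preview, imageAnalysis
    )
} catch (e: Exception) {
    Log.e(TAG, "Use case binding failed", e)
}
```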
That SHOULD cover everything we need…
Ok, ok, let’s give it a go and see what happens!
Yes! So, what you see there is a half-eaten chocolate bar wrapper, and the scanner reading the wrapper’s barcode properly!
Remember, we are not really doing anything with the code we are scanning. The progress bar just shows for one second and then we navigate to a “success” screen. This is where you would perform the actual query, API call, or whatever it is you need to do.
Here are the links to the documentation and repo:
CameraX Codelab (love these things):
I really hope you liked the article, that it was worth the read, and hopefully someday soon I’ll come across something worth sharing again.
Until next time!