## Project Changes
- Modified `settings.gradle` to use the new plugin management system.
- The conversion of a `Bitmap` to an NV21-formatted `ByteArray` (YUV420) is now a suspending function, to avoid blocking the UI thread when a large number of images are being processed.
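The conversion itself isn't shown in this changelog. As a rough, hypothetical sketch of what an RGB-to-NV21 (YUV420SP) transform involves — the function name and the integer BT.601 coefficients here are assumptions, and in the app this work would be wrapped in a `suspend` function so it runs off the UI thread:

```kotlin
// Hypothetical sketch: convert ARGB pixels to an NV21 byte array
// (full-resolution Y plane followed by 2x2-subsampled, interleaved V/U bytes).
fun argbToNv21(pixels: IntArray, width: Int, height: Int): ByteArray {
    require(pixels.size == width * height)
    val out = ByteArray(width * height * 3 / 2)
    var uvIndex = width * height
    for (y in 0 until height) {
        for (x in 0 until width) {
            val p = pixels[y * width + x]
            val r = (p shr 16) and 0xFF
            val g = (p shr 8) and 0xFF
            val b = p and 0xFF
            // ITU-R BT.601 integer approximation for luma.
            val yVal = ((66 * r + 129 * g + 25 * b + 128) shr 8) + 16
            out[y * width + x] = yVal.coerceIn(0, 255).toByte()
            // One V and one U byte per 2x2 pixel block, V first (NV21 ordering).
            if (y % 2 == 0 && x % 2 == 0) {
                val v = ((112 * r - 94 * g - 18 * b + 128) shr 8) + 128
                val u = ((-38 * r - 74 * g + 112 * b + 128) shr 8) + 128
                out[uvIndex++] = v.coerceIn(0, 255).toByte()
                out[uvIndex++] = u.coerceIn(0, 255).toByte()
            }
        }
    }
    return out
}
```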
- Users can now control the use of `GpuDelegate` and XNNPack with the `useGpu` and `useXNNPack` flags in `MainActivity.kt`:
```kotlin
// Use the device's GPU to perform faster computations.
// Refer https://www.tensorflow.org/lite/performance/gpu
private val useGpu = true

// Use XNNPack to accelerate inference.
// Refer https://blog.tensorflow.org/2020/07/accelerating-tensorflow-lite-xnnpack-integration.html
private val useXNNPack = true
```
- The app now has a face mask detection feature, with models obtained from the achen353/Face-Mask-Detector repo. You may turn it off by setting `isMaskDetectionOn` in `FrameAnalyser.kt` to `false`.
- The source of the FaceNet model is now Sefik Ilkin Serengil's DeepFace, a lightweight framework for face recognition and facial attribute analysis. Users can therefore choose between two models, `FaceNet` and `FaceNet512`; int-8 quantized versions of both models are also available. See the following line in `MainActivity.kt`:

```kotlin
private val modelInfo = Models.FACENET
```

You may use different configurations from the `Models` class.
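The exact contents of the `Models` class aren't shown in this changelog. A minimal sketch of how such a configuration could be organized — the member names and file names are assumptions; the 128- and 512-dimensional embedding sizes are the standard output sizes of FaceNet and FaceNet-512:

```kotlin
// Hypothetical sketch of a model-configuration enum; names and file names are assumptions.
enum class Models(
    val assetFileName: String,   // model file bundled with the app
    val embeddingDim: Int,       // size of the face embedding the model outputs
    val isQuantized: Boolean     // whether this is the int-8 quantized variant
) {
    FACENET("facenet.tflite", 128, false),
    FACENET_QUANTIZED("facenet_int8.tflite", 128, true),
    FACENET_512("facenet_512.tflite", 512, false),
    FACENET_512_QUANTIZED("facenet_512_int8.tflite", 512, true)
}
```

Selecting a configuration is then a one-line change, e.g. `Models.FACENET_512_QUANTIZED`.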
- The app will now classify users whose images were not scanned from the `images` folder as `UNKNOWN`. The app uses thresholds on both the L2 norm and the cosine similarity to achieve this.
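The actual threshold values live in the app's source and aren't given here. The dual-threshold idea can be sketched in plain Kotlin as follows — the function names and the threshold values are assumptions for illustration:

```kotlin
import kotlin.math.sqrt

// L2 (Euclidean) distance between two embeddings: lower means more similar.
fun l2Distance(a: FloatArray, b: FloatArray): Float {
    var sum = 0f
    for (i in a.indices) { val d = a[i] - b[i]; sum += d * d }
    return sqrt(sum)
}

// Cosine similarity between two embeddings: higher means more similar.
fun cosineSimilarity(a: FloatArray, b: FloatArray): Float {
    var dot = 0f; var na = 0f; var nb = 0f
    for (i in a.indices) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i] }
    return dot / (sqrt(na) * sqrt(nb))
}

// Return the best-matching known user, or "UNKNOWN" if the match fails
// either threshold. The threshold values here are illustrative assumptions.
fun classify(
    query: FloatArray,
    known: Map<String, FloatArray>,
    l2Threshold: Float = 10f,
    cosineThreshold: Float = 0.4f
): String {
    val best = known.minByOrNull { l2Distance(query, it.value) } ?: return "UNKNOWN"
    val withinL2 = l2Distance(query, best.value) <= l2Threshold
    val withinCosine = cosineSimilarity(query, best.value) >= cosineThreshold
    return if (withinL2 && withinCosine) best.key else "UNKNOWN"
}
```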
- For requesting the `CAMERA` permission and access to the `images` folder, the request code is now handled by the system itself. See *Request app permissions* in the Android documentation.
- We now use `PreviewView` from CameraX instead of directly using a `TextureView`. See the official Android documentation for `PreviewView`.
- As of Android 10, apps can't access the root of the internal storage directly, so we've implemented Scoped Storage, where the user grants the app access to the contents of a particular directory. In our case, users now have to choose the `images/` directory manually. See *Grant access to a directory's contents* in the Android documentation.
- Feature request #11, serializing the image data, has now been implemented. The app won't load the images every time, ensuring a faster start.
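The changelog doesn't describe the serialization format. One plain-Kotlin way to cache name/embedding pairs so they survive app restarts might look like this — the file layout and function names are assumptions, not the app's actual implementation:

```kotlin
import java.io.File
import java.io.FileInputStream
import java.io.FileOutputStream
import java.io.ObjectInputStream
import java.io.ObjectOutputStream

// Write (name, embedding) pairs to a cache file: a count, then UTF name +
// serialized FloatArray for each entry.
fun saveEmbeddings(file: File, data: List<Pair<String, FloatArray>>) {
    ObjectOutputStream(FileOutputStream(file)).use { out ->
        out.writeInt(data.size)
        for ((name, embedding) in data) {
            out.writeUTF(name)
            out.writeObject(embedding)
        }
    }
}

// Read the pairs back in the same order they were written.
fun loadEmbeddings(file: File): List<Pair<String, FloatArray>> {
    val result = ArrayList<Pair<String, FloatArray>>()
    ObjectInputStream(FileInputStream(file)).use { inp ->
        repeat(inp.readInt()) {
            result.add(inp.readUTF() to inp.readObject() as FloatArray)
        }
    }
    return result
}
```

On startup the app could then deserialize this cache instead of re-running face detection and the FaceNet model over every image.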
- Feature request #6 has also been implemented. With the adoption of `PreviewView`, the app can now be used in landscape orientation.
- The project is now backwards compatible down to API level 25. For other details, see the `build.gradle` file.
- The lens facing has been changed to `FRONT`, and users can no longer change it; the app opens the device's front camera by default.
- The source of the FaceNet Keras model -> nyoki-mtl/keras-facenet
- The image normalization step is now included in the TFLite model itself, using a custom layer. We only need to cast images to `float32` using the `CastOp` from the TFLite Support Library.
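Since normalization now happens inside the model, the app-side preprocessing reduces to a plain type cast. The snippet below is a pure-Kotlin illustration of what that cast amounts to conceptually — it is not the actual `CastOp` API call:

```kotlin
// Conceptual illustration of casting UINT8 pixel data to float32:
// each unsigned byte value becomes the same number as a Float, with no scaling
// or mean subtraction (the model's custom layer handles normalization).
fun castToFloat32(pixelBytes: ByteArray): FloatArray =
    FloatArray(pixelBytes.size) { i -> (pixelBytes[i].toInt() and 0xFF).toFloat() }
```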
- A `TextView` is now shown on the screen, logging important information such as the number of images scanned, similarity scores for users, etc.
- The source of the FaceNet model has been changed. We'll now use the FaceNet model from sirius-ai/MobileFaceNet_TF
- The project is now backwards compatible down to API level 23 (Android Marshmallow):

```
minSdkVersion 23
```
- The lens facing of the camera can now be changed; a button is provided on the main screen itself.
- For multiple images of a single user, we compute the score for each image, then an average score for each group. The group with the best score is chosen as the output. See `FrameAnalyser.kt`:
```
images ->
    Rahul ->
        image_rahul_1.png -> score=0.6 ---| average = 0.55 ---|
        image_rahul_2.png -> score=0.5 ---|                   |--- output -> "Rahul"
    Neeta ->                                                  |
        image_neeta_1.png -> score=0.4 ---| average = 0.35 ---|
        image_neeta_2.png -> score=0.3 ---|
```
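The grouping logic in the diagram can be sketched as follows — the function name is an assumption (the real implementation lives in `FrameAnalyser.kt`), and the sketch assumes higher scores mean better matches:

```kotlin
// Average the per-image scores within each user's group and return the
// (name, averageScore) pair with the highest average, or null if empty.
fun bestMatch(scoresPerUser: Map<String, List<Float>>): Pair<String, Float>? =
    scoresPerUser
        .mapValues { (_, scores) -> scores.average().toFloat() }
        .maxByOrNull { it.value }
        ?.toPair()
```

For an L2-norm metric, where lower is better, the selection would use `minByOrNull` instead.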
- Cosine similarity can now be used alongside the L2 norm. See the `metricToBeUsed` variable in `FrameAnalyser.kt`.
- A new parameter has been added in `MainActivity.kt`. The `cropWithBBoxes` argument allows you to run the Firebase ML Kit module on the images provided. If you are already providing cropped face images in the `images/` folder, set this argument to `false`; on setting it to `true`, Firebase ML Kit will crop faces from the images and then run the FaceNet model on them.