A realtime pose detection library for Android and Compose Multiplatform. The Android implementation uses CameraX and Google ML Kit, while the iOS implementation uses AVFoundation with VisionKit and CoreML. The library also supports analysing pre-recorded video files, and you can add custom object detection models to detect custom objects in your camera feed or video files alongside body poses.
Quick Start#
Import the Compose library
implementation("com.performancecoachlab.posedetection:posedetection-compose:4.3.0")
Add the camera feature and permission to your Android manifest
<uses-feature
    android:name="android.hardware.camera"
    android:required="false" />
<uses-permission android:name="android.permission.CAMERA" />
Add the camera usage description to your iOS Info.plist
<key>NSCameraUsageDescription</key>
<string>We need access to your camera to analyse your performance.</string>
Usage#
Request camera permissions
var permissionGranted by remember { mutableStateOf(false) }
PermissionProvider().apply {
    if (!hasCameraPermission()) {
        RequestCameraPermission(
            onGranted = { permissionGranted = true },
            onDenied = { permissionGranted = false }
        )
    } else {
        permissionGranted = true
    }
}
Create a skeleton repository and a custom object repository
val skeletonRepository = remember { SkeletonRepository() }
val customObjectRepository = remember { CustomObjectRepository() }
Initialise the camera feed
if (permissionGranted) {
    CameraView(
        skeletonRepository = skeletonRepository,
        customObjectRepository = customObjectRepository,
    )
} else {
    Text("Camera permission not granted")
}
Create a Pose to detect
// Each PoseRange is an acceptable joint-angle range in degrees
val upRightPose = Pose(
    leftShoulder = Pose.PoseRange(0.0, 40.0),
    rightShoulder = Pose.PoseRange(0.0, 40.0),
    leftHip = Pose.PoseRange(160.0, 180.0),
    rightHip = Pose.PoseRange(160.0, 180.0),
    leftKnee = Pose.PoseRange(160.0, 180.0),
    rightKnee = Pose.PoseRange(160.0, 180.0)
)
Listen for skeleton updates and detect specific poses
val skeleton by skeletonRepository.skeletonFlow.collectAsState()
val poseDetected = skeleton?.let { upRightPose.matches(it) }
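Since poseDetected is recomputed on every skeleton update, you can drive UI state directly from it. A minimal sketch (the text labels are illustrative, not part of the library):
if (poseDetected == true) {
    Text("Upright pose detected")
} else {
    Text("Get into an upright position")
}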
Analyse pre-recorded video files
Initialise the video extraction for Android with your application context
VideoExtractionContext.setUp(applicationContext)
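setUp only needs to be called once, before any frames are extracted. One reasonable place is an Application subclass; this is a sketch, and the class name is illustrative:
import android.app.Application

class PoseDemoApplication : Application() {
    override fun onCreate() {
        super.onCreate()
        // Provide a context once, before any extractFrame calls
        VideoExtractionContext.setUp(applicationContext)
    }
}
Remember to register the class with android:name on the <application> tag in your manifest.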
Extract frames from the video and request analysis
LaunchedEffect(url) {
    try {
        extractFrame(url, frame, VideoExtractionContext)?.let { extractedFrame ->
            frameAnalyser.analyseFrame(extractedFrame)?.let { skeleton ->
                bitmap = extractedFrame.drawSkeleton(skeleton)
            }
        }
    } catch (e: Exception) {
        e.printStackTrace()
    }
}
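The annotated frame can then be displayed like any other image. A minimal sketch, assuming drawSkeleton returns an Android Bitmap (if it already returns a Compose ImageBitmap, drop the asImageBitmap() conversion):
// Requires androidx.compose.foundation.Image and androidx.compose.ui.graphics.asImageBitmap
bitmap?.let { annotated ->
    Image(
        bitmap = annotated.asImageBitmap(),
        contentDescription = "Frame with skeleton overlay"
    )
}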
Add a custom object detection model
Initialise the custom models for iOS and Android respectively. For Android, add a .tflite model file to your assets folder, then set androidModelPath to the name of the model file, including the .tflite extension. For iOS, add a .mlmodel file to your Xcode project, then set iosModelPath to the name of the model file without the .mlmodel extension.
val generalModel = initialiseObjectModel(
    ModelPath(
        androidModelPath = "lite-model_efficientdet_lite2_detection_metadata_1.tflite",
        iosModelPath = "YOLOv3FP16"
    )
)
Once this is done, you can use the model to detect objects in the camera feed or video frames.
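For instance, if CustomObjectRepository exposes a flow of detections analogous to skeletonFlow, observing them could look like the sketch below. The objectFlow property and the detection fields are assumptions rather than confirmed API, so check the sample app for the exact names:
// Hypothetical API: objectFlow, label and boundingBox are assumed names
val detections by customObjectRepository.objectFlow.collectAsState()
detections?.forEach { detection ->
    println("Detected ${detection.label} at ${detection.boundingBox}")
}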
Check out the sample app for a full example of how to use the library.
License#
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.