I have several Samsung devices (up to the S22 Ultra), and accessing the ultra-wide camera on them is easy because CameraManager.cameraIdList
returns 4 cameras, including both the regular back camera and the ultra-wide one.
But many other devices (Xiaomi, Vivo, and others) return only the two general cameras: back and front.
Some users of my app report that they can use the ultra-wide camera in apps like mcpro24fps and gcam; one of them has a Xiaomi POCO X3 (Android 11).
How can such apps access all the cameras?
Also, the Camera2 API usually reports that video stabilization is not supported (the manufacturer doesn't expose it through this API):
fun isVideoStabilizationSupported(context: Context, cameraIdx: String): Boolean {
    val characteristics = getCameraCharacteristics(context, cameraIdx)
    val modes =
        characteristics.get(CameraCharacteristics.CONTROL_AVAILABLE_VIDEO_STABILIZATION_MODES)
            ?: intArrayOf()
    return CameraMetadata.CONTROL_VIDEO_STABILIZATION_MODE_ON in modes
}
This usually returns false. But even when it returns true, on some devices enabling stabilization still seems to have no real effect: no crop of the frames, nothing, according to feedback from users of my app.
So the following code changes nothing, even when the Camera2 API reports that video stabilization is supported:
if (isVideoStabilizationSupported()) {
    captureRequestBuilder.set(
        CaptureRequest.CONTROL_VIDEO_STABILIZATION_MODE,
        CameraMetadata.CONTROL_VIDEO_STABILIZATION_MODE_ON
    )
}
Yet again, mcpro24fps and gcam support stabilization on these devices as well. It may be a custom solution, but I don't understand how you could implement something custom on top of the Camera2 API without hurting performance, because it would have to be implemented at a low level.
Update: Maybe such apps access the ultra-wide camera by using the new zoom-ratio capture parameter:
captureRequestBuilder.set(CaptureRequest.CONTROL_ZOOM_RATIO, 0.6f)
This works on my Samsung devices, which report a zoom-ratio range of 0.6...10, so I can switch between lenses without changing the camera ID.
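As a sketch of that approach (assuming API level 30+, where CONTROL_ZOOM_RATIO was introduced; the helper name and the clamping policy are my own):

```kotlin
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CaptureRequest

// Sketch: query the device-reported zoom-ratio range before applying one,
// so a request for 0.6x is only sent where the logical camera supports it.
fun applyZoomRatio(
    characteristics: CameraCharacteristics,
    builder: CaptureRequest.Builder,
    requested: Float
) {
    val range = characteristics.get(CameraCharacteristics.CONTROL_ZOOM_RATIO_RANGE)
        ?: return // key is absent on pre-Android 11 devices or unsupported cameras
    // Clamp to the reported range, e.g. 0.6..10 on some Samsung phones
    val ratio = requested.coerceIn(range.lower, range.upper)
    builder.set(CaptureRequest.CONTROL_ZOOM_RATIO, ratio)
}
```

A ratio below 1.0 asks the logical camera to switch to (or blend in) the ultra-wide lens, without the app ever touching a second camera ID.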
The long-term goal for Android's camera APIs is that multi-camera clusters (such as a combination of ultrawide/wide/tele cameras) can be used by applications without having to specially code for it.
That's done via the logical multi-camera APIs. When implemented, there is one logical camera composed of two or more physical cameras. What you see in the camera ID list is the logical camera, and you can also get the list of physical cameras from CameraCharacteristics#getPhysicalCameraIds(). With this arrangement, you can see extended zoom ranges such as the 0.6...10 you mention, and the camera implementation will automatically switch to the ultrawide or telephoto camera when you zoom out or in. That's subject to various other conditions; for example, most telephoto lenses can't focus very close, so if you zoom in while focused on a nearby object, the camera will likely stay with the default wide camera; similarly, tele cameras are often worse in low light, so digital zoom may result in better quality than optical zoom plus more amplification.
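A sketch of how an app can discover these clusters (assuming API level 28+, where logical multi-camera support was added; the function name and log tag are mine):

```kotlin
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager
import android.hardware.camera2.CameraMetadata
import android.util.Log

// Sketch: list the logical cameras in cameraIdList and the physical
// sub-cameras (e.g. uw/wide/tele) that each one is composed of.
fun logPhysicalCameras(manager: CameraManager) {
    for (id in manager.cameraIdList) {
        val chars = manager.getCameraCharacteristics(id)
        val caps = chars.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)
        val isLogical = caps?.contains(
            CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA
        ) == true
        if (isLogical) {
            // physicalCameraIds is empty for plain (non-logical) cameras
            Log.d("Cameras", "logical $id -> physical ${chars.physicalCameraIds}")
        }
    }
}
```

On a device that has migrated to this API, a single back-facing logical ID will report two or more physical IDs here; on devices that haven't, the set is empty and the extra lenses are simply invisible.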
If you have a particular reason to use the underlying uw/wide/tele camera, you can include them in the stream configuration via setPhysicalCameraId() and use them directly; that lets you force which camera is streamed from if you want to provide a 'use telephoto' button in your UI, instead of letting the logical camera try to use its best judgement to select the active camera.
Unfortunately, not all devices have migrated to the logical camera API; those devices might be listing the multi-camera clusters as individual camera IDs as you've seen, or may just hide some cameras from apps entirely. The main problem with this is that it requires an app to do extra work to just zoom out/in with best quality, and the variety of implementations makes it hard to code up in an app so that it works on all devices.