I have a legacy video distribution prototype that lets users record a video on their device, uploads it to the cloud as a single file while it is transcoded to a specific format, and then shares it with other devices for playback. Our use case is 1-2 minute selfie videos (so not movies; the videos are short).
Now we want to upgrade our system to adaptive streaming (MPEG-DASH). What changes do we need to make to the pipeline? For instance, we need to segment the videos (how?), with specific durations (what length?), keep various quality versions of each segment (best way to do it?), adapt the playback, etc.
Any guidance on the procedure, what needs to be done, and any optimizations would be helpful.
Your question is quite broad - I think you essentially need a design for an OTT video-on-demand system - but addressing the individual points you raise:
For instance, we need to segment the videos (how?)
Assuming your videos are currently mp4, there are multiple tools that will allow you to create a DASH manifest and media segments from them, including ffmpeg: https://ffmpeg.org/ffmpeg-formats.html#dash-2. Many packagers (see below) will also support this type of functionality.
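As a rough sketch, packaging an existing mp4 with ffmpeg's dash muxer might look like the command below; the filenames and the 4-second segment duration are placeholder choices, and exact option support varies by ffmpeg version:

```shell
# Package input.mp4 into DASH segments plus an MPD manifest,
# copying the existing streams without re-encoding.
ffmpeg -i input.mp4 \
  -map 0 -c copy \
  -f dash \
  -seg_duration 4 \
  -use_template 1 -use_timeline 1 \
  manifest.mpd
```

This produces the manifest (manifest.mpd) alongside the initialization and media segment files in the output directory.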
with specific durations (what length?)
Segment length is usually a tradeoff between latency, encoding overhead and quality. There is a very good summary here: https://bitmovin.com/mpeg-dash-hls-segment-length/
In general, as indicated in the above article, most applications choose 2-4 second segment lengths at this time.
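For illustration, here is roughly how a 4-second segment duration appears in the resulting MPD manifest; the ids, bitrates and file naming below are placeholder values, not output from any real packager:

```xml
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static">
  <Period>
    <AdaptationSet mimeType="video/mp4">
      <!-- duration/timescale = 4000/1000 = 4-second segments -->
      <SegmentTemplate timescale="1000" duration="4000"
                       initialization="init_$RepresentationID$.mp4"
                       media="chunk_$RepresentationID$_$Number$.m4s"/>
      <Representation id="720p" bandwidth="3000000" width="1280" height="720"/>
    </AdaptationSet>
  </Period>
</MPD>
```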
keep various quality versions of each segment (best way to do it?)
The usual approach is to transcode each video into a 'ladder' of renditions - several resolution/bitrate combinations - and package them all into a single DASH presentation, so the player can switch rendition from one segment to the next. Most video streaming solutions then use an origin server to host the media streams - if you are using one of these then it will most likely work fine for your DASH files.
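A minimal sketch of building such a ladder with ffmpeg's dash muxer, driven from Python; the rung resolutions and bitrates here are illustrative values, not a recommendation:

```python
# Sketch: build an ffmpeg command that transcodes one source into an
# ABR ladder (several resolution/bitrate renditions) packaged as DASH.
# Rungs below are placeholder values - tune them for your content.
LADDER = [
    ("1080p", "1920x1080", "5000k"),
    ("720p",  "1280x720",  "3000k"),
    ("480p",  "854x480",   "1200k"),
]

def dash_ladder_cmd(src: str, out_mpd: str) -> list[str]:
    cmd = ["ffmpeg", "-i", src]
    # One video output stream per rung, each mapped from the same input.
    for i, (_, size, bitrate) in enumerate(LADDER):
        cmd += ["-map", "0:v:0",
                f"-s:v:{i}", size,
                f"-b:v:{i}", bitrate,
                f"-c:v:{i}", "libx264"]
    # Single audio stream, then the DASH packaging options.
    cmd += ["-map", "0:a:0", "-c:a", "aac",
            "-f", "dash", "-seg_duration", "4",
            "-adaptation_sets", "id=0,streams=v id=1,streams=a",
            out_mpd]
    return cmd

print(" ".join(dash_ladder_cmd("input.mp4", "out/manifest.mpd")))
```

All the renditions end up in one manifest, which is what lets the player adapt per segment.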
adaptations to the playback
Nearly all standard HTML5 and Android players will support DASH and adaptive streaming out of the box (iOS is the main exception, as discussed below), so you should not have to develop anything custom on the player side.
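For the browser case, the open-source dash.js reference player needs only a few lines to attach to a video element; the manifest URL below is a placeholder:

```html
<video id="player" controls></video>
<script src="https://cdn.dashjs.org/latest/dash.all.min.js"></script>
<script>
  // Attach dash.js to the <video> element; it fetches the manifest and
  // segments and handles adaptive bitrate switching automatically.
  var player = dashjs.MediaPlayer().create();
  player.initialize(document.querySelector("#player"),
                    "https://example.com/manifest.mpd", /* autoplay */ false);
</script>
```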
On a more general note, most OTT video solutions will usually support both HLS and DASH streaming protocols. This is because different devices and applications support different protocols, and supporting both provides the best reach. As a very high-level general rule, noting that there are exceptions:
- Apple devices (iOS, tvOS, Safari) generally require or prefer HLS
- most other browsers, Android devices and smart TVs play DASH
For this reason many OTT solutions will store transcoded videos in a single 'carousel' format, e.g. HLS, and then re-package them as necessary on the fly into whatever the requesting device actually needs, either HLS or DASH. 'Just In Time' packagers are used for this, and these are often combined with the origin server - examples include Unified Streaming's Unified Origin and AWS Elemental MediaPackage.
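The routing decision such a just-in-time packager makes can be sketched as below; the User-Agent matching is deliberately simplified and the manifest filenames are placeholders:

```python
# Sketch: serve the same stored renditions as HLS or DASH depending on
# the requesting client. Real packagers use far more robust detection.
def manifest_for(user_agent: str) -> str:
    ua = user_agent.lower()
    # Apple clients get an HLS playlist; most other players handle DASH.
    if ("iphone" in ua or "ipad" in ua
            or ("safari" in ua and "chrome" not in ua)):
        return "master.m3u8"   # HLS playlist
    return "manifest.mpd"      # DASH manifest
```

The point is that one set of transcoded segments on the origin can serve both protocols, rather than storing everything twice.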
It's also worth being aware of the emerging industry standard CMAF, which promises to unify much of the HLS and DASH streaming complexity. Due to some differences in encryption support on different devices the rollout is not as fast as might be hoped for, but it is worth being aware of and planning for: https://www.cta.tech/Resources/Standards/CMAF-IF