1. I assume that if I specify in and out parameters for an avformat producer, MLT interprets these numbers with respect to the file's intrinsic frame rate. E.g., for a source file at 60fps, 00:00:01:10 means 70 frames, but for a file at 25fps, the same timespec means 35 frames. I assume this is orthogonal to the output frame rate of the entire project.
2. Assuming (1) is correct, how does MLT interpret timespecs for audio-only files that have no intrinsic frame rate (none in the 24-120Hz range, anyway)? If I specify in=70 for a WAV file at 48kHz, how does MLT decide how many seconds of audio that translates to?
No, the in and out points are in terms of the profile frame rate. If you do not specify a profile, then MLT either defaults to one or automatically builds one from the first producer, depending on the interface you are using (API, melt command line, XML, or another app). If the first producer is not a video clip, then it typically defaults to dv_pal (25 fps) when using MLT directly rather than one of its GUI apps, which may behave differently. The profile determines the output frame rate; if you change the frame rate of the consumer, you are actually adjusting the profile. Also, you can avoid frame numbers by using so-called clock values for time: HH:MM:SS.ms.
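To make the distinction concrete, here is a rough sketch of the arithmetic, assuming a clock value of the HH:MM:SS.ms form described above. This is not MLT's actual parsing code, just the conversion it implies: the resulting frame index depends only on the profile frame rate, not on the source file's intrinsic rate.

```python
# Sketch: convert an HH:MM:SS.ms clock value to a frame index at a
# given profile frame rate. Illustrative only; not MLT's parser.

def clock_to_frame(clock: str, profile_fps: float) -> int:
    """Map 'HH:MM:SS.ms' to a frame number under the profile fps."""
    hh, mm, ss = clock.split(":")
    seconds = int(hh) * 3600 + int(mm) * 60 + float(ss)
    return round(seconds * profile_fps)

# The same clock value yields different frame counts per profile:
print(clock_to_frame("00:00:01.400", 25))  # 35 frames at 25 fps
print(clock_to_frame("00:00:01.400", 60))  # 84 frames at 60 fps
```

So a clock value is stable across profiles in wall-clock terms, while a bare frame number like in=70 shifts in time whenever the profile frame rate changes.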
As explained above, the profile determines a frame rate, and there will always be some constant frame rate. Audio files follow this frame rate as well. The precision for editing audio is likewise governed by it: a higher frame rate means more precision. Sample-precision editing is not available in MLT at this time.
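Applying this to the question's own numbers (in=70 on a 48 kHz WAV) gives a feel for what the profile frame rate means for audio. This is plain arithmetic under the assumptions above, not an MLT API call:

```python
# Sketch: how the profile frame rate governs audio cut points.
# Assumes a 48 kHz WAV, per the question; illustrative arithmetic only.

SAMPLE_RATE = 48_000

def frame_to_samples(frame: int, profile_fps: float) -> int:
    """Audio sample offset corresponding to a frame index."""
    return round(frame / profile_fps * SAMPLE_RATE)

# in=70 under a 25 fps profile starts 70/25 = 2.8 s into the file:
print(frame_to_samples(70, 25))   # 134400 samples

# Edit granularity is one frame's worth of samples, so a higher
# profile frame rate gives finer cuts:
print(round(SAMPLE_RATE / 25))    # 1920 samples per frame at 25 fps
print(round(SAMPLE_RATE / 60))    # 800 samples per frame at 60 fps
```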