This adds a new obs_canvas object that acts as a shareable
(reference-counted) owner of views, mixes, and (optionally) scenes.
This is a step towards facilitating multi-canvas and multi-output
features in OBS Studio.
It solves a number of complications that exist with the manual approach
of using views, such as audio mixing, source active-state tracking, and
scenes not having a reliable way of identifying the actual available
canvas size.
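
For illustration, a minimal usage sketch; the function names and signatures below (`obs_canvas_create()`, `obs_canvas_release()`) are assumptions following libobs naming conventions and are not defined by this change:

```c
#include <obs.h>

static void canvas_example(void)
{
    struct obs_video_info ovi;
    if (!obs_get_video_info(&ovi))
        return;

    /* Hypothetical: create a reference-counted canvas that owns its
     * own view and video mix, sized like the main canvas. */
    obs_canvas_t *canvas = obs_canvas_create("secondary", &ovi, 0);
    if (!canvas)
        return;

    /* ... assign scenes to the canvas, attach outputs to its mix ... */

    /* Releasing the last reference tears down the view and mix. */
    obs_canvas_release(canvas);
}
```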
Introduce support for delivering BPM (Broadcast
Performance Metrics) over SEI (for AVC/H.264 and
HEVC/H.265) and OBU (for AV1) unregistered messages.
Metrics being sent are the session frame counters,
per-rendition frame counters, and RFC3339-based
timestamping information to support end-to-end
latency measurement.
SEI/OBU messages are generated and sent with each IDR
frame, and the frame counters are diff-based, meaning
the counts reflect the diff between IDRs, not the running
totals.
BPM documentation is available at [1].
BPM relies on the recently introduced encoder packet timing
support and the packet callback mechanism.
BPM injection is enabled for an output by registering
the `bpm_inject()` callback via the `obs_output_add_packet_callback()`
function. The callback must be unregistered using
`obs_output_remove_packet_callback()`, and `bpm_destroy()` must be
called by the caller to release the BPM structures.
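
For illustration, the registration lifecycle might look as follows; `bpm_inject()`, `bpm_destroy()`, and the `obs_output_*_packet_callback()` functions are named by this change, while the exact parameter lists shown are assumptions:

```c
#include <obs.h>
/* bpm_inject() and bpm_destroy() are assumed to be declared by a
 * libobs header; the header name is not shown here. */

static void start_output_with_bpm(obs_output_t *output)
{
    /* Register BPM injection; SEI/OBU messages are then attached to
     * IDR packets as they pass through send_interleaved(). The last
     * argument is an opaque param handed back to the callback. */
    obs_output_add_packet_callback(output, bpm_inject, NULL);
    obs_output_start(output);
}

static void stop_output_with_bpm(obs_output_t *output)
{
    obs_output_stop(output);

    /* Unregister the callback and release the BPM structures. */
    obs_output_remove_packet_callback(output, bpm_inject, NULL);
    bpm_destroy(output);
}
```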
It is important to measure the number of frames successfully
encoded by the obs_encoder_t instances, particularly for
renditions where the encoded frame rate differs from the
canvas frame rate. The `encoded_frames` counter and the
`obs_encoder_get_encoded_frames()` API are introduced to measure and
report this in the encoded rendition metrics message.
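
A short sketch of reading the counter; the `uint32_t` return type is an assumption:

```c
#include <obs.h>

/* Log how many frames a rendition's encoder has actually produced,
 * which can differ from the canvas frame rate when the rendition
 * encodes at a lower frame rate. */
static void log_encoded_frames(obs_encoder_t *encoder)
{
    uint32_t frames = obs_encoder_get_encoded_frames(encoder);
    blog(LOG_INFO, "encoder '%s' has encoded %u frames",
         obs_encoder_get_name(encoder), (unsigned int)frames);
}
```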
[1] https://d50yg09cghihd.cloudfront.net/other/20240718-MultitrackVideoIntegrationGuide.pdf
Packet callbacks are invoked in `send_interleaved()` and
are useful for any plugin to extend functionality at the
packet processing level without needing to modify code in
libobs. Closed caption support is one candidate that is
suitable for migration to a packet callback.
The packet callback also supports the new encoder packet
timing feature. This means a registered callback will have
access to both the compressed encoder packet and the associated
encoder packet timing information.
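
As a sketch of a plugin-side callback (the parameter list and the `fer`/`ferc` field names are assumptions; only the registration mechanism is defined by this change), a callback could, for example, derive per-packet encode latency:

```c
#include <inttypes.h>
#include <obs.h>

/* Hypothetical plugin callback: invoked from send_interleaved() with
 * both the compressed packet and its timing information. */
static void packet_stats_cb(obs_output_t *output,
                            struct encoder_packet *pkt,
                            struct encoder_packet_time *pkt_time,
                            void *param)
{
    UNUSED_PARAMETER(output);
    UNUSED_PARAMETER(param);

    if (!pkt_time || pkt->type != OBS_ENCODER_VIDEO)
        return;

    /* Encode latency: frame encode request -> request complete. */
    uint64_t encode_ns = pkt_time->ferc - pkt_time->fer;
    blog(LOG_DEBUG, "track %zu pts %lld encoded in %" PRIu64 " ns",
         pkt->track_idx, (long long)pkt->pts, encode_ns);
}

/* Registered the same way as bpm_inject():
 *   obs_output_add_packet_callback(output, packet_stats_cb, NULL); */
```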
Introduce support for the `encoder_packet_time` struct
to capture timing information for each frame, starting
from the composition of each frame, through the encoder,
to the queueing of the frame data to each output_t.
Timestamps for each of the following events are based on
`os_gettime_ns()`:
CTS: Composition time stamp (in the encoder render threads)
FER: Frame encode request
FERC: Frame encode request complete
PIR: Packet interleave request (`send_interleaved()`)
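
Conceptually the struct carries one `os_gettime_ns()`-based value per event; a sketch (field names and layout are assumptions derived from the list above):

```c
/* Sketch of the timing payload that travels with each encoded frame. */
struct encoder_packet_time {
    int64_t pts;   /* presentation timestamp, used to match packets */
    uint64_t cts;  /* composition timestamp (render thread)         */
    uint64_t fer;  /* frame encode request                          */
    uint64_t ferc; /* frame encode request complete                 */
    uint64_t pir;  /* packet interleave request (send_interleaved)  */
};
```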
Frame times are forwarded through encoder callbacks in the
context that runs on the relevant encoder thread, ensuring
no race conditions occur when accessing the per-encoder array.
All per-output processing happens on data that is owned by
the output.
Co-authored-by: Ruwen Hahn <haruwenz@twitch.tv>
Modifies the encoder group API added previously to better follow the
existing libobs API naming paradigms. This also produces much more
readable code, and allows a few small benefits like only needing to
hold a reference to the encoder group, instead of every encoder.
We now keep a per-track list of caption data. When caption insertion
is triggered, we add the caption to the list for each actively
encoding video track. When doing the interleaved send, we pull the
data from the caption list for the corresponding video track. This
ensures that captions get copied to all video tracks.
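
A conceptual sketch only; the types and helper below are hypothetical and do not reflect the actual internals:

```c
#include <util/darray.h>

/* Hypothetical per-track caption bookkeeping. */
struct caption_text {
    char *text;
    double display_duration;
};

struct caption_track_data {
    DARRAY(struct caption_text *) pending; /* captions queued for this track */
};

/* On caption insertion: append to every actively encoding video track;
 * the interleaved send later drains the list for its own track. */
static void queue_caption(struct caption_track_data *tracks,
                          size_t num_tracks, struct caption_text *caption)
{
    for (size_t i = 0; i < num_tracks; i++)
        da_push_back(tracks[i].pending, &caption);
}
```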
Enable all of the previously Windows-only paths for OpenGL backends
that support `encode_texture2`.
Co-authored-by: Torge Matthies <openglfreak@googlemail.com>
To spare sources from having to give too much extra consideration to
threading, defer the media control functions that directly affect
functionality to the video thread. Getters will still have to take
threading into account, but this should make things much more trivial
for media controls thread-wise.
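
The general pattern, sketched here with the public task queue purely for illustration (the commit's actual deferral mechanism inside libobs may differ):

```c
#include <obs.h>

struct media_restart_task {
    obs_source_t *source;
};

/* Runs on the graphics/video thread, so the source does not need a
 * mutex around its playback state. */
static void do_media_restart(void *data)
{
    struct media_restart_task *task = data;
    /* ... manipulate playback state safely on the video thread ... */
    obs_source_release(task->source);
    bfree(task);
}

/* Caller side: defer the control instead of touching state directly. */
static void request_media_restart(obs_source_t *source)
{
    struct media_restart_task *task = bzalloc(sizeof(*task));
    task->source = obs_source_get_ref(source);
    obs_queue_task(OBS_TASK_GRAPHICS, do_media_restart, task, false);
}
```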
(Lain note: Context: I noticed that things such as the slide show
require mutexes due to their media controls, and I felt that it was
largely unnecessary and that libobs should mitigate the threading issue
itself and keep it all in the video thread like it should be. Again, the
getters are still going to require *some* consideration of threading,
but for the type of stuff we're doing, querying state is far more
trivial to account for.)
(Lain note: This is mostly so more (yes more) can be added to this
god-forsaken structure without it getting too messy. In terms of actual
code, I need to be better about writing actual code comments. ...Meaning
that I actually need to start writing code comments.)
To avoid passing `struct darray *` type, which cannot hold the type
information, this commit defines array types and uses those types for
the function parameters.
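
For example, with the darray helpers (a sketch; the specific type name is illustrative and not necessarily one added by this commit):

```c
#include <obs.h>
#include <util/darray.h>

/* The element type is visible to callers, unlike a bare `struct darray *`. */
typedef DARRAY(struct encoder_packet_time) packet_time_array_t;

/* The parameter now carries full type information. */
static size_t count_times(const packet_time_array_t *times)
{
    return times->num;
}

static void example(void)
{
    packet_time_array_t times;
    da_init(times);

    struct encoder_packet_time ept = {0};
    da_push_back(times, &ept); /* element size handled by the macro */

    size_t n = count_times(&times); /* n == 1 */
    (void)n;

    da_free(times);
}
```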
Allows rescaling resolution for GPU encoders, and allows moving
rescaling for CPU encoders from the CPU to the GPU.
Rescaling is implemented via core video mixes; encoders create
their own core mix with matching width/height/format/colorspace/
range when GPU scaling is enabled and no matching core video
mix exists.
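
Sketch of how a rendition might opt in; `obs_encoder_set_scaled_size()` is pre-existing API, while the `obs_encoder_set_gpu_scale_type()` name is an assumption for the new setter:

```c
#include <obs.h>

/* Configure an encoder rendition to be downscaled on the GPU. */
static void configure_rendition(obs_encoder_t *encoder)
{
    /* Request a 1280x720 rendition from a larger canvas. */
    obs_encoder_set_scaled_size(encoder, 1280, 720);

    /* Assumed setter name: enable GPU scaling with a bicubic filter.
     * A matching core video mix is reused if one exists, otherwise
     * the encoder creates its own. */
    obs_encoder_set_gpu_scale_type(encoder, OBS_SCALE_BICUBIC);
}
```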
Switching to a static library that contains version information as
const char strings has multiple benefits:
* The version information provided externally via compiler definitions
will fail compilation early if malformed
* An updated version string (which will happen with every commit) will
not invalidate existing compilation units, because only the static
library is affected by the change
* An update of the version just requires recompiling the static
  library and re-linking
* An update of the version will _not_ infect the rest of the codebase
(as it does currently, because everything includes obsconfig.h one
way or another)
* Other modules which used the macro definition directly have been
updated as much as possible to use the proper getter method from
  `libobs` instead, as sketched after this list (some Windows-specific
  modules use preprocessor string composition; in those cases the
  value has been added as a compiler definition directly)
* Because the impact of a version change due to a commit hash change
is limited to the static library, ccache hit rates should be
improved considerably
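
For reference, querying the version via the getters rather than the macro looks like this:

```c
#include <obs.h>

/* Query the version at runtime through libobs instead of baking the
 * version macro into every translation unit. */
static void log_libobs_version(void)
{
    blog(LOG_INFO, "libobs version: %s (0x%08x)",
         obs_get_version_string(), (unsigned int)obs_get_version());
}
```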
There was quite a bit of conflated usage of mixes (which refers
to raw audio) and encoder counts. This fully separates the two
and draws a clear distinction when iterating over mixes vs. encoders.