docs/Build/build/GPAC-Build-Guide-for-Linux.md
@@ -1,6 +1,7 @@
_Preliminary notes: the following instructions are based on Ubuntu and Debian. They should be easily applicable to other distributions; the only changes should be the names of the packages to be installed and the package manager used._
GPAC is a modular piece of software which depends on third-party libraries. During the build process it will try to detect and leverage the installed third-party libraries on your system. Here are the instructions to:

* build GPAC easily (recommended for most users) from what's available on your system (a minimal sketch follows),
* build a minimal 'MP4Box' and 'gpac' (containing only GPAC core features like muxing and streaming),
* build a complete GPAC by rebuilding all the dependencies manually.
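For the first option, a minimal build sketch on Ubuntu/Debian could look as follows (the package list below is a bare-minimum assumption; your distribution may need more, or differently named, packages):

```
# basic toolchain plus zlib, the only quasi-mandatory dependency
sudo apt install build-essential pkg-config git zlib1g-dev
# fetch GPAC and build it against the libraries found on the system
git clone https://github.com/gpac/gpac.git gpac_public
cd gpac_public
./configure
make -j$(nproc)
```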
@@ -47,6 +48,7 @@ _If you are upgrading from a previous version (especially going from below 1.0.0
## Use
You can either:

- `sudo make install` to install the binaries,
- or use the `MP4Box` or `gpac` binary in `gpac_public/bin/gcc/` directly,
- or move/copy it somewhere manually.
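For example, to check the build without installing anything (paths from the build step above):

```
# run the freshly built binaries in place
gpac_public/bin/gcc/MP4Box -version
gpac_public/bin/gcc/gpac -h
```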
@@ -104,6 +106,7 @@ make
4. Use
You can either:

- `sudo make install` to install the binaries,
- or use the `MP4Box` or `gpac` binary in `gpac_public/bin/gcc/` directly,
All other functionalities of MP4Box are not available through a filter session. Some might make it one day (BIFS encoding for example), but most are not good candidates for filter-based processing and will only be available through MP4Box (track add/remove to an existing file, image item add/remove to an existing file, file hinting, ...).
__Note__: For operations using a filter session in MP4Box, it is possible to view some information about the filter session:

- `-fstat`: prints the statistics per filter and per PID of the session
- `-fgraph`: prints the connections between the filters in the session
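As a sketch, both options can simply be appended to a command that runs a filter session internally, e.g. a DASH operation (`source.mp4` is a placeholder):

```
# print per-filter statistics and the filter connections of the session
MP4Box -dash 1000 source.mp4 -fstat -fgraph
```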
docs/Howtos/avmix_tuto.md
@@ -105,6 +105,7 @@ _Note_
A sequence not attached to a scene will not be visible or played, even if active.
Now let's add:

- a logo
- a bottom rectangle with a gradient
- some text
@@ -131,6 +132,7 @@ In the following examples, we always use [relative coordinates system](avmix#coo
## Animating a scene
Scenes can be animated through timer objects providing value interpolation instructions. A timer provides:

- a start time, stop time and a loop count
- a duration for the interpolation period
- a set of animation values and their targets
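As a sketch, a timer animating the `x` position of a hypothetical scene `scene1` could be declared along these lines (the scene id and values are made up for illustration; check the playlist syntax for the exact property names):

```
{
  "start": 0,
  "dur": 2,
  "loop": 0,
  "keys": [0, 1],
  "anims": [
    {
      "values": [0, 50],
      "targets": ["scene1@x"]
    }
  ]
}
```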
@@ -167,6 +169,7 @@ It can be tedious to apply the same transformation (matrix, active, ...) on a su
The simplest way to do this is to group scenes together, and transform the group.
The following animates:

- the video from 90% to 100%, sticking it to the top-left corner and animating the rounded rectangle effect
- the overlay group position from visible to hidden past the bottom-right corner
@@ -270,6 +273,7 @@ This works with video scenes too:
You will at some point need to chain some videos. AVMix handles this through `sequence` objects describing how sources are to be chained.
Sequences are designed to:

- take care of media prefetching to reduce loading times
- perform transitions between sources, activating/prefetching them based on the desired transition duration
@@ -297,6 +301,7 @@ AVMix handles this by allowing scenes to use more than one sequence as input, an
_Note: Currently, defined scenes only support 0, 1 or 2 input sequences_
This is done at scene declaration through:

- a `mix` object, describing a transition
- a `mix_ratio` property, describing the transition ratio
@@ -347,6 +352,7 @@ Specifying an identifier on the sequence avoids that.
## Live mode
Live mode works like offline mode, with the following additions:

- detection and display of signal loss or missing input sequences
- `sequence` and `timer` start and stop times can be expressed as UTC dates (absolute) or as offsets to the current UTC time
@@ -368,6 +374,7 @@ You should now see "no input" message when playing. Without closing the player,
And the video sequence will start! For the start and stop time values, you can use:

- "now": resolves to the current UTC time
- integer: resolves to the current UTC time plus the number of seconds specified by the integer
- date: uses the given date as the start/stop time
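For instance (hypothetical values, assuming the usual UTC date syntax):

```
"start": "now"
"start": 30
"start": "2022-01-01T12:00:00Z"
```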
@@ -448,13 +455,15 @@ This is problematic if you use AVMix to generate a live feed supposed to be up 2
To prevent this, the filter allows launching the sources as dedicated child processes. When the child process exits unexpectedly, or when source data is no longer received, the filter can then kill and relaunch the child process.
There are three supported methods for this:

- running a gpac instance over a pipe
- running a gpac instance over TCP
- running any other process capable of communicating with gpac
The declaration is done at the `sourceURL` level through the `port` option.
For each of these modes, the `keep_alive` option is used to decide if the child process shall be restarted:

- if no more data is received after `rtimeout`,
- if the stream is in end of stream but the child process exited with an error code greater than 2.
@@ -598,6 +607,7 @@ return 0;
Your module can also control the playlist through several functions:

- `remove_element(id_or_elem)`: removes a scene, group or sequence from the playlist
- `parse_element(JSON_obj)`: parses a root JSON object and adds it to the playlist
- `parse_scene(JSON_obj, parent_group)`: parses a scene from its JSON object and adds it to parent_group, or to the root if parent_group is null
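A minimal sketch of a module using these functions (the id and the JSON variables are hypothetical placeholders):

```
// remove the scene declared with id "logo" from the playlist
remove_element("logo");
// parse a root JSON object built elsewhere and add it to the playlist
parse_element(new_root_obj);
// parse a scene JSON object and attach it to the playlist root
parse_scene(new_scene_obj, null);
```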
@@ -729,12 +739,14 @@ In this mode, the texturing parameters used by the offscreen group can be modifi
AVMix can use a global alpha mask (covering the entire output frame) for draw operations, through the [mask](avmix#scene-mask) scene module.
This differs from using an offscreen group as an alpha operand input to [shape](avmix#scene-shape), as discussed above, in the following ways:

- the mask is global and not subject to any transformation
- the mask is always cleared at the beginning of a frame
- the mask is only one alpha channel
- the mask operations can be accumulated between draws
The following example shows using a mask in regular mode:

- enable and clear the mask
- draw a circle with alpha 0.4
- use the mask and draw the video, which will be blended only where the circle was drawn, using alpha=0.4
@@ -768,6 +780,7 @@ The following example shows using a mask in regular mode:
The mask can also be updated while drawing using a record mode. In this mode, the mask acts as a binary filter: any pixel drawn to the mask will no longer get drawn.
The following draws:

- an ellipse with the first video at half opacity, appearing blended on the background
- the entire second video at full opacity, which will only appear where the mask was not set
docs/Howtos/dash/Fragmentation,-segmentation,-splitting-and-interleaving.md
@@ -20,6 +20,7 @@ Segmentation (`-dash`) is the process of creating segments, parts of an original
Last, MP4Box can split (`-split`) a file and create individual playable files from an original one. It does not use segmentation in the above sense; it removes fragmentation and can use interleaving.
Some examples of MP4Box usage:

- Rewrites a file with an interleaving window of 1 sec.
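A sketch of the corresponding command, with `-inter` taking the interleaving window in milliseconds:

```
MP4Box -inter 1000 file.mp4
```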
The [DASH reader](dashin) can be configured through [-forward](dashin#forward) to insert segment boundaries in the media pipeline - see [here](dashin#segment-bound-modes) for more details.
Two variants of this mode exist:

- `segb`: this enables `split_as`, DASH cue insertion (segment start signal) and fragment bounds signalling
- `mani`: same as `segb`, and also forwards manifests (MPD, M3U8) as packet properties.
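As a sketch, forwarding a live session while preserving its original segmentation could look like this (the URL and output path are hypothetical, and the exact pairing of options should be checked against the dashin filter help):

```
# segb inserts segment boundary signals so the re-created segments
# match the original ones
gpac -i http://example.com/live.mpd:forward=segb -o forward/live.mpd
```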
docs/Howtos/dash/HEVC-Tile-based-adaptation-guide.md
@@ -75,10 +75,12 @@ You can now playback your MPD using GPAC, and have fun with the different adapta
## Live setup
If you want to produce a live feed of tiled video, you can either:

- produce short segments, package them and dash them using `-dash-live`, `-dash-ctx` and `-subdur`, see the discussion [here](https://github.com/gpac/gpac/issues/1648)
- produce a live session with a [tilesplit](tilesplit) filter.
GPAC does not have a direct wrapper for Kvazaar, but you can either:

- use an FFmpeg build with Kvazaar enabled (`--enable-libkvazaar` in the FFmpeg configure) - check GPAC support using `gpac -h ffenc:libkvazaar` (a sketch follows)
- use an external grab+Kvazaar encoder and pipe its output into GPAC.
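For the first option, a sketch (assuming your FFmpeg build was configured with `--enable-libkvazaar`; verify the exposed encoder options with the help command first):

```
# list the options GPAC exposes for the Kvazaar wrapper
gpac -h ffenc:libkvazaar
# then encode a source to HEVC through that wrapper (tiling options
# go through the encoder parameters listed by the help above)
gpac -i source.mp4 ffenc:c=libkvazaar -o tiled.hevc
```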
@@ -134,6 +136,7 @@ gpac
The resulting filter graph is quite fun (use `-graph` to check it) and shows:

- only one (or zero, depending on your webcam formats) pixel converter filter is used in the chain to feed both Kvazaar instances
- all tile PIDs (and only them) connecting to the dasher filter
- 21 output PIDs of the dasher: one for the MPD, 2 x (1+3x3) media PIDs.
@@ -179,6 +182,7 @@ In 2D playback, the tile adaptation logic (for ROI for example) is controlled b
The compositor can use gaze information to automatically decrease the quality of the tiles not under the gaze.
The gaze information can be:

- emulated via the mouse using the [--sgaze](compositor#sgaze) option,
- signaled through filter updates on the [gazer_enabled](compositor#gazer_enabled), [gaze_x](compositor#gaze_x) and [gaze_y](compositor#gaze_y) options.
We will now use a live source (webcam), encode it in two qualities, DASH the result and push it to a remote server. Please check the [encoding howto](encoding) first.
Compared to what we have seen previously, we only need to modify the input part of the graph:

- take as a live source the default audio/video input grabbed by the [libavdevice](ffavin) filter
docs/Howtos/dash/LL-HLS.md
@@ -9,6 +9,7 @@ In this howto, we will study various setups for HLS live streaming in low latenc
Segments and CMAF chunks are configured in the same way as in the [DASH low latency](LL-DASH#dash-low-latency-setup) setup.
When you produce your HLS media segments in low latency, you need to indicate to the client how to access LL-HLS `parts` (CMAF chunks) while they are being produced. LL-HLS offers two possibilities to describe these parts in the manifest:

- file mode: advertise the chunks as dedicated files, i.e. each chunk will create its own file. This requires double storage for segments close to the live edge, increases disk I/O and might not be very practical if you set up a PUSH origin (twice the bandwidth is required)
- byte range mode: advertise the chunks as byte ranges of a media file. If that media file is the full segment being produced (usually the case), this does not induce any bandwidth increase or extra disk I/O.
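As a sketch, a live LL-HLS session with 2s segments and 200ms CMAF chunks advertised as byte ranges might look like this (option names per the dasher filter; treat the exact values as assumptions to verify with `gpac -h dash`):

```
# llhls=br advertises parts as byte ranges; llhls=sf would use file mode
gpac -i source.mp4 reframer:rt=on -o live/index.m3u8:segdur=2:cdur=0.2:llhls=br
```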
docs/Howtos/dynamic_rc.md
@@ -12,6 +12,7 @@ In this example we will use RTP as delivery mechanism and monitor loss rate of c
## RTP reader
The reader is a regular video playback from RTP (using SDP as input). We will:

- locate the `rtpin` filter in the chain, i.e. the first filter after the `fin` filter used for SDP access
- update every 2 seconds the `loss_rate` option of the `rtpin` filter: this will force the loss ratio in RTCP Receiver Reports, but will not drop any packets at the receiver side
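A rough sketch of this logic with the Python bindings is given below; the full script follows in this howto, and the exact `libgpac` task and update APIs used here should be treated as assumptions to check against the bindings documentation:

```
import libgpac as gpac

gpac.init()
fs = gpac.FilterSession()
src = fs.load_src("session.sdp")  # hypothetical SDP path
vout = fs.load("vout")

# periodic task: force a 10% loss ratio in RTCP receiver reports
class RCTask(gpac.FilterTask):
    def execute(self):
        for i in range(fs.nb_filters):
            f = fs.get_filter(i)
            if f.name == "rtpin":
                f.update("loss_rate", "10")
        return 2000  # reschedule in 2 seconds

fs.post(RCTask("rc"))
fs.run()
gpac.close()
```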
@@ -77,6 +78,7 @@ gpac.close()
## Encoder and RTP sender
The encoder consists of a source (here a single video file playing in a loop), an AVC encoder and an RTP output. We will:

- locate the `rtpout` filter in the chain, i.e. the first filter before the `fout` filter used for SDP output
- monitor every 2 seconds the statistics of the input PID of `rtpout` to get the real-time measurements reported by RTCP
- adjust the encoder max rate based on the percentage of lost packets
docs/Howtos/filters-oneliners.md
@@ -1,10 +1,12 @@
# Foreword
This page contains one-liners illustrating the many possibilities of the GPAC filters architecture. For more detailed information, it is highly recommended that you read:

- the [general concepts](filters_general) page
- the [gpac application](gpac_general) help
To get a better understanding of each command illustrated here, it is recommended to:

- run the same command with `-graph` specified to see the associated filter graph
- read the help of the different filters in this graph using `gpac -h filter_name`
@@ -13,6 +15,7 @@ Whenever an option is specified, e.g. `dest.mp4:foo`, you can get more info and
The filter session is by default quiet, except for warnings and error reporting. To get information on the session while running, use the [-r](gpac_general#r) option. To get more runtime information, use the [log system](core_logs).
Given the configurable nature of the filter architecture, most examples given in one context can be reused in another context. For example:

- from the dump examples:

```
gpac -i source reframer:saps=1 -o dump/$num$.png
```
@@ -34,6 +37,7 @@ _NOTE The command lines given here are usually using a local file for source or
_Reminder_
Most filters are never specified at the prompt; they are dynamically loaded during the graph resolution.
GPAC filters can use either:

- global options, e.g. `--foo`, applying to each instance of any filter defining the `foo` option,
- local options to a given filter and any filters dynamically loaded, e.g. `:foo`. This is called [argument inheriting](filters_general#arguments-inheriting).
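To illustrate with the dump one-liner from above (paths are illustrative):

```
# local option, set on this reframer instance (and inherited downstream)
gpac -i source.mp4 reframer:saps=1 -o dump/$num$.png
# global option, applied to any filter in the session that defines saps
gpac -i source.mp4 reframer --saps=1 -o dump/$num$.png
```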