1. When converting RTC to RTMP, the audio and video timestamps must be
synchronized. Whenever the synchronization status changes, whether it
becomes unsynchronized or synchronized again, print a log to make
troubleshooting such issues easier.
2. Chrome uses STAP-A packets, meaning a single RTP packet carries both
the SPS and PPS. OBS WHIP, on the other hand, sends the SPS and PPS in
separate RTP packets. Since they arrive in two independent RTP packets,
SRS needs to cache both of them, as shown in the sketch below.
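A minimal sketch of this caching, with hypothetical names (`NaluCache`,
`mux_sequence_header`), assuming H.264 NALU types 7 (SPS) and 8 (PPS):
```
#include <cstdint>
#include <string>

// Cache SPS and PPS that arrive in separate RTP packets, and only
// produce the sequence header once both have been seen.
class NaluCache {
public:
    void on_nalu(uint8_t nalu_type, const std::string& payload) {
        if (nalu_type == 7) sps_ = payload; // SPS
        if (nalu_type == 8) pps_ = payload; // PPS
        if (!sps_.empty() && !pps_.empty()) {
            // mux_sequence_header(sps_, pps_); // hypothetical muxer call
        }
    }
private:
    std::string sps_;
    std::string pps_;
};
```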
---------
Co-authored-by: john <hondaxiao@tencent.com>
HLS typically has a delay of around 30 seconds, roughly comprising three
segments, each lasting 10 seconds. We can reduce the delay to about 5
seconds by lowering the segment duration to 2 seconds and starting
playback from the last segment, achieving a stable delay.
Of course, this requires setting OBS's GOP to 1 second, the profile to
baseline, the preset to fast, and the tune to zerolatency.
Additionally, updating a few configurations in the hls.js player is
necessary, such as setting it to start playback from the last segment,
setting the maximum buffer, and initiating accelerated playback to
reduce latency.
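On the server side, a minimal sketch in SRS config terms (directive
names from the SRS docs; values illustrative):
```
vhost __defaultVhost__ {
    hls {
        enabled       on;
        hls_fragment  2;   # 2-second segments
        hls_window    10;  # keep a short live playlist
    }
}
```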
---------
Co-authored-by: chundonglinlin <chundonglinlin@163.com>
Co-authored-by: john <hondaxiao@tencent.com>
For WebRTC:
When a player connects before the publisher, the track payload type
(PT) does not get updated.
- At the source change step, update the track PT.
---------
Co-authored-by: mingche.tsai <w41203208.work@gmail.com>
Co-authored-by: john <hondaxiao@tencent.com>
Accommodate complex parameters that include the "=" character, for
example:
`configure --extra-flags="-O2 -D_FORTIFY_SOURCE=2"`
---------
Co-authored-by: john <hondaxiao@tencent.com>
Description
A crash occurs when a forward relay connection has not yet been
established and an unpublish event is triggered at the same time. For
instance, if DVR and forward are configured and the specified DVR path
already exists, starting a stream will trigger the crash.
Objective
Fix the crash caused by the forward mechanism.
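As a rough illustration (hypothetical structure, not the actual SRS
code), the kind of guard involved looks like:
```
class RtmpClient {
public:
    void close() {}
};

// Hypothetical forwarder: guard the unpublish path, since the forward
// connection may never have been established.
class Forwarder {
public:
    RtmpClient* sdk_ = nullptr; // null until the connection is set up
    void on_unpublish() {
        if (!sdk_) return;      // nothing to close; avoids the crash
        sdk_->close();
    }
};
```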
Additional Information
For detailed reproduction steps, please refer to issue #3901.
---------
Co-authored-by: john <hondaxiao@tencent.com>
In an SDK that supports RTC Opus stereo, the parameter `stereo=1` may
appear in the SDP. SRS needs to handle this correctly and return an
answer that enables WebRTC stereo support.
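For illustration, the flag rides on the Opus `fmtp` line of the SDP,
and the answer should carry it as well (payload type 111 is only the
common default):
```
a=rtpmap:111 opus/48000/2
a=fmtp:111 minptime=10;useinbandfec=1;stereo=1
```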
---------
`TRANS_BY_GPT4`
Security is the built-in IP whitelist feature of SRS, which allows or
denies clients by IP address or IP range. Previously it only supported
RTMP; this PR extends it to HTTP-FLV, HLS, WebRTC, SRT, and other
protocols.
See https://ossrs.io/lts/en-us/docs/v6/doc/security for an example.
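A minimal sketch following the linked docs:
```
vhost __defaultVhost__ {
    security {
        enabled         on;
        deny            play        all;
        allow           play        127.0.0.1;
        deny            publish     all;
        allow           publish     127.0.0.1;
    }
}
```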
---------
Co-authored-by: john <hondaxiao@tencent.com>
The `-` character, when placed in the middle of a character class in a
regular expression, is interpreted as a range. It must be placed at the
beginning or end of the class to be interpreted as a literal character.
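For example (illustrative character classes):
```
[a-z0-9-]    '-' at the end: matches a literal hyphen
[a-z0-9-_]   '-' in the middle: a range from '9' to '_', not a hyphen
```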
---------
`TRANS_BY_GPT4`
---------
Co-authored-by: john <hondaxiao@tencent.com>
The `ffmpeg-opus` tool allows you to control the delay using the
`opus_delay` option. The minimum delay can be set to 2.5ms, but in
practice you cannot set it that low: you need at least 10 frames of
delay to give the audio encoder room to look ahead, otherwise the sound
will be distorted.
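A minimal sketch of setting this through FFmpeg's AVOptions (assuming
the native Opus encoder, whose private `opus_delay` option takes
milliseconds; the value here is illustrative):
```
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>
}

// Request ~25ms of encoder delay, i.e. about 10 frames of 2.5ms each.
static int set_opus_delay(AVCodecContext* ctx) {
    return av_opt_set_double(ctx->priv_data, "opus_delay", 25.0, 0);
}
```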
---------
Co-authored-by: chundonglinlin <chundonglinlin@163.com>
In pure audio mode, there are no keyframes, so we can only rely on the
length of the slice to decide whether it should be output.
`hls_aof_ratio` is the coefficient: once the slice reaches
`hls_fragment * hls_aof_ratio` seconds, a new slice is generated.
In scenarios with video, if the `hls_aof_ratio` is too small, for
example 1.2, and the GOP (Group of Pictures) is 10 seconds, then a slice
will definitely be cut at 12 seconds. If there is no keyframe at that
point, the next slice will start with a non-keyframe.
A safer coefficient is twice the GOP. That way the audio rule won't
trigger incorrectly and produce a ts segment file that cannot be
transcoded on its own.
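In SRS config terms, a sketch of the relevant directives (values
illustrative; 2.0 is the shipped default):
```
vhost __defaultVhost__ {
    hls {
        enabled        on;
        hls_fragment   10;
        # Cut audio-only slices once the duration reaches
        # hls_fragment * hls_aof_ratio seconds.
        hls_aof_ratio  2.0;
    }
}
```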
---------
Co-authored-by: Haibo Chen <495810242@qq.com>
1. After enabling FFmpeg opus, the transcoding time for each opus packet
is around 4ms.
2. To speed up case execution, our test publisher sends 400 opus packets
at intervals of 1ms.
3. After the publisher starts, wait for 30ms, then the player starts.
4. Due to the lengthy processing time for each opus packet, SRS keeps
receiving packets from the publisher, so it never switches coroutines
and cannot accept the player's connection (see the sketch after this
list).
5. Only after all opus packets are processed will it accept the player
connection. Therefore, the player doesn't receive any data, leading to
the failure of the case.
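A generic sketch of the starvation pattern in a cooperative coroutine
model like SRS's (the `yield` callback is hypothetical; this is not
SRS's actual code):
```
#include <functional>
#include <queue>

// In a cooperative scheduler, a busy receive loop must yield
// periodically, or other coroutines (such as the accept loop for
// players) never get a chance to run.
void receive_loop(std::queue<int>& packets, std::function<void()> yield) {
    int processed = 0;
    while (!packets.empty()) {
        packets.pop();            // stands in for ~4ms of opus work
        if (++processed % 10 == 0) {
            yield();              // without this, 400 packets in a row
                                  // block the player's connection
        }
    }
}
```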
---------
Co-authored-by: winlin <winlinvip@gmail.com>
### Description
When converting between AAC and Opus formats (aac2opus or opus2aac), the
`av_frame_get_buffer` API is frequently called.
### Objective
The goal is to optimize the code logic and reduce the frequent
allocation and deallocation of memory.
For aac2opus, `av_frame_get_buffer` is still called frequently. For
opus2aac, the goal is to avoid calling `av_frame_get_buffer` and reduce
memory allocations.
### Additional Note
Before calling the `av_audio_fifo_read` API, use
`av_frame_make_writable` to check if the frame is writable. If it is not
writable, create a new frame.
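A minimal sketch of that note using FFmpeg's APIs (structure
illustrative; note that `av_frame_make_writable` itself reallocates the
buffers when the frame is shared):
```
extern "C" {
#include <libavutil/audio_fifo.h>
#include <libavutil/frame.h>
}

// Reuse `frame` across reads instead of reallocating every time; its
// buffers are only replaced when they are still referenced elsewhere.
static int read_from_fifo(AVAudioFifo* fifo, AVFrame* frame, int nb_samples) {
    int r = av_frame_make_writable(frame);
    if (r < 0) {
        return r;
    }
    return av_audio_fifo_read(fifo, (void**)frame->data, nb_samples);
}
```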
---------
Co-authored-by: john <hondaxiao@tencent.com>
By default, caching is enabled during compilation, which means that
build artifacts are cached in Docker. This helps avoid recompiling
third-party dependency libraries. However, sometimes when updating
third-party libraries, it's necessary to disable caching temporarily to
verify that the pipeline can succeed. Therefore, a configure option
should be added: when it is enabled, the compilation cache will not be
used, and all third-party libraries will be compiled from scratch.
---------
Co-authored-by: winlin <winlinvip@gmail.com>
Follow the example in FFmpeg's docs: before calling the API
`avcodec_send_frame`, always use `av_frame_alloc` to create a new frame.
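A minimal sketch of that pattern (error handling and frame setup
elided):
```
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/frame.h>
}

// Allocate a fresh frame for each send, as in FFmpeg's encode example,
// then release it once the encoder has taken its own reference.
static int encode_one(AVCodecContext* ctx) {
    AVFrame* frame = av_frame_alloc();
    if (!frame) return AVERROR(ENOMEM);
    // ... fill format/nb_samples/data and call av_frame_get_buffer ...
    int r = avcodec_send_frame(ctx, frame);
    av_frame_free(&frame);
    return r;
}
```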
---------
Co-authored-by: Haibo Chen <495810242@qq.com>
SRS supports WebRTC over TCP by reading a 2-byte length prefix, like
`read(buf, 2)`. However, in some cases it might receive only 1 byte,
causing the subsequent data to be misparsed and making it impossible to
push or play streams.
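A generic sketch of the usual remedy, reading in a loop until the
requested byte count is complete:
```
#include <unistd.h>

// Keep reading until exactly `n` bytes arrive: a single read() on a
// TCP socket may legally return fewer bytes than requested.
static ssize_t read_fully(int fd, void* buf, size_t n) {
    size_t got = 0;
    while (got < n) {
        ssize_t r = read(fd, (char*)buf + got, n - got);
        if (r <= 0) {
            return r; // error (-1) or connection closed (0)
        }
        got += (size_t)r;
    }
    return (ssize_t)got;
}
```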
---------
Co-authored-by: john <hondaxiao@tencent.com>
Checking the HTTPS API or UDP connectivity for WHIP tests can be
difficult. For example, if the UDP port isn't available but the API is
fine, OBS only reports that it can't connect to the server; it's hard
to see the HTTPS API response or check whether the UDP port is
reachable.
This feature lets you set the ICE username and password in SRS. You can
then send a STUN request using nc and observe the response, making it
easier to check UDP port connectivity.
1. Use curl to test the WHIP API, including ice-frag and ice-pwd
queries.
2. Use nc to send a STUN binding request to test UDP connectivity.
3. If both the API and UDP are working, you should get a STUN response.
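As a rough illustration of step 2, a STUN Binding Request with no
attributes is just a 20-byte header (RFC 5389); a sketch of building
one to send over UDP (a real ICE check would also carry USERNAME and
MESSAGE-INTEGRITY attributes):
```
#include <arpa/inet.h>
#include <cstdint>
#include <cstdlib>
#include <cstring>

// Build a minimal STUN Binding Request: type 0x0001, zero-length body,
// the fixed magic cookie, and a random 96-bit transaction ID.
static void build_stun_binding_request(uint8_t req[20]) {
    memset(req, 0, 20);
    req[0] = 0x00; req[1] = 0x01;        // message type: Binding Request
    uint32_t cookie = htonl(0x2112A442); // magic cookie (RFC 5389)
    memcpy(req + 4, &cookie, sizeof(cookie));
    for (int i = 8; i < 20; i++) {
        req[i] = (uint8_t)(rand() & 0xff);
    }
}
```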
---------
Co-authored-by: john <hondaxiao@tencent.com>
When using Docker, logs are usually printed to the console (stdout and
stderr). However, since Docker detection happens late, after log
initialization, the default log output may be wrong: in Docker, logs
may still be written to a file instead of the console as expected.
Additionally, the Dockerfile has been improved with a new environment
variable `SRS_IN_DOCKER=on` to clearly indicate a Docker environment. If
automatic Docker detection fails, the configuration will be read, and
this variable will correctly inform SRS that it's in a Docker
environment.
Lastly, the default configuration values have been improved for Docker
environments. By default, `SRS_LOG_TANK=console` and daemon mode is
disabled.
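In SRS config terms, those Docker defaults correspond to directives
like these (a sketch; both directives are from the SRS config):
```
# Docker-friendly defaults: stay in the foreground and log to the
# console so `docker logs` works.
daemon          off;
srs_log_tank    console;
```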
---------
Co-authored-by: john <hondaxiao@tencent.com>
The fix is for the DH_set_length error. As shown in lines 2-5 below,
OpenSSL 3.0 added a check on the length, which exposed this issue.
```
1    if (dh->params.q == NULL) {
2        /* secret exponent length, must satisfy 2^(l-1) <= p */
3        if (dh->length != 0
4            && dh->length >= BN_num_bits(dh->params.p))
5            goto err;
6        l = dh->length ? dh->length : BN_num_bits(dh->params.p) - 1;
7        if (!BN_priv_rand_ex(priv_key, l, BN_RAND_TOP_ONE,
8                             BN_RAND_BOTTOM_ANY, 0, ctx))
9            goto err;
     ... ...
     }
```
---------
Co-authored-by: john <hondaxiao@tencent.com>
1. The comment on the ratio configuration says it can affect the slice
duration, but configuring it has no effect.
2. The default hls_td_ratio is 1.5, and even after setting it to 1, the
duration is still slightly more than 10 seconds.
3. Even if the GOP is an integer, like 1 second, the slice duration is
still a non-integer, like 0.998 seconds, which seems a bit unreliable.
4. The TS duration in the m3u8 file is one frame shorter than the
actual slice duration.
5. Set hls_dispose to 120s to dispose of HLS files when there is no
stream.
6. Use docker.conf for Docker.
Before this patch:
```
#EXTINF:10.983, no desc
livestream-0.ts?hls_ctx=3p095hq0
```
After this patch:
```
#EXTINF:10.000, no desc
livestream-0.ts?hls_ctx=3p095hq0
```
Note: If the fragment is set to 10 seconds but the GOP size does not
divide 10 evenly, for example anything other than 1, 2, 5, or 10, then
the duration of the ts will still exceed 10 seconds.
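A sketch of the related directives in SRS config (values illustrative):
```
vhost __defaultVhost__ {
    hls {
        enabled       on;
        hls_fragment  10;
        hls_td_ratio  1.5;
        # Clean up HLS files after 120s without a stream.
        hls_dispose   120;
    }
}
```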
---------
Co-authored-by: john <hondaxiao@tencent.com>
When I compile on OpenHarmony, I encounter an error at the
`pthread_setname_np` function:
```
./src/app/srs_app_threads.cpp:53:10: error: functions that differ only in their return type cannot be overloaded
void pthread_setname_np(pthread_t trd, const char* name) {
/data/local/ohos-sdk/linux/native/llvm/bin/../../sysroot/usr/include/pthread.h:379:5: note: previous declaration is here
int pthread_setname_np(pthread_t, const char *);
```
Our libc is musl, which does not define __GLIBC__, so we want to add a
check that __GLIBC__ is defined before declaring the wrapper.
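A minimal sketch of such a guard (the glibc version check and the no-op
body are illustrative):
```
#include <pthread.h>

// Provide the fallback only on old glibc, which lacks the symbol.
// musl already declares int pthread_setname_np(pthread_t, const char*)
// in <pthread.h>, so redefining it with a void return fails to compile.
#if defined(__GLIBC__) && (__GLIBC__ == 2 && __GLIBC_MINOR__ < 12)
void pthread_setname_np(pthread_t trd, const char* name) {
    (void)trd; (void)name; // no-op on platforms without support
}
#endif
```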