Try to fix two blackbox tests:
1. TestSlow_SrtPublish_HttpTsPlay_HEVC_Basic: fixed by enlarging the wait
time from 3 seconds to 4 seconds before running the ffprobe task, which
first records the stream with ffmpeg.
2. TestSlow_SrtPublish_HlsPlay_HEVC_Basic: fixed by waiting 16 seconds
before running the ffprobe task.
About the second case: it seems ridiculous to add a 16-second delay
before running the ffprobe task.
> So, I started #4088 to check the GitHub Actions workflow process. I
started this branch from a much earlier version, `6.0.113
(srs/core/srs_core_version6.hpp)`, but what I found is that, inside `SRS
blackbox-test`, the latest version, `6.0.128`, was printed. That's
really weird.
I confirmed this is not a problem in the SRS source code. Check
https://github.com/suzp1984/srs/actions/runs/9511600525/job/26218025559:
the SRS code 6.0.113 was checked out and the actions ran on it, yet the
same problem still occurred.
---------
Co-authored-by: winlin <winlinvip@gmail.com>
The object relations:
![gb](https://github.com/ossrs/srs/assets/2777660/266e8a4e-3f1e-4805-8406-9008d6a63aa0)
The Session manages the SIP and Media objects using shared resources or
shared pointers. Note that I actually use SrsExecutorCoroutine to delete
each object when its coroutine is done, because there is always a
dedicated coroutine for each object.
The SIP and Media objects use the Session directly via raw pointers,
which is safe because the Session always lives longer than the SIP and
Media objects.
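A rough sketch of this ownership layout (class names follow the SRS
GB28181 code, but the snippet is illustrative, not the actual
implementation):
```cpp
#include <memory>

class SrsGbSipTcpConn;
class SrsGbMediaTcpConn;

// The Session owns the SIP and Media objects via shared pointers, while
// each object's dedicated coroutine (wrapped by SrsExecutorCoroutine)
// deletes the object once the coroutine finishes.
class SrsGbSession {
public:
    std::shared_ptr<SrsGbSipTcpConn> sip_;
    std::shared_ptr<SrsGbMediaTcpConn> media_;
};

class SrsGbSipTcpConn {
public:
    // A raw pointer back to the Session is safe here: the Session
    // always outlives the SIP and Media objects.
    SrsGbSession* session_;
};
```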
---
Co-authored-by: Jacob Su <suzp1984@gmail.com>
Fix #4037 SRS should not send the publish start message
`onStatus(NetStream.Publish.Start)` if hooks fail, which causes OBS to
repeatedly reconnect.
Note that this fix does not send an RTMP error message when publishing
fails, because neither OBS nor FFmpeg process this specific error
message; they only display a general error.
Apart from the order of messages, nothing else has been changed.
Previously, we sent the publish start message
`onStatus(NetStream.Publish.Start)` before the HTTP hook `on_publish`;
now, we have modified it to send this message after the HTTP hook.
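In pseudo-code, the new flow looks roughly like this (simplified,
illustrative function names, following the SRS error-handling style):
```cpp
// New order: run the on_publish HTTP hook first, and only send
// onStatus(NetStream.Publish.Start) once the hook succeeds. If the hook
// fails, no start message is sent, so OBS stops instead of reconnecting
// in a loop.
srs_error_t on_client_publish() {
    srs_error_t err = http_hooks_on_publish(); // previously ran after Publish.Start
    if (err != srs_success) {
        return err;
    }
    return send_status_publish_start(); // onStatus(NetStream.Publish.Start)
}
```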
Fix #3967 There is an API, `SSL_use_certificate_chain_file`, which can
load a certificate chain as well as a single certificate.
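A minimal usage sketch (standard OpenSSL API; error handling trimmed):
```cpp
#include <openssl/ssl.h>

// SSL_use_certificate_chain_file() accepts a PEM file containing either
// a single certificate or a full chain (leaf first, then
// intermediates), so one call covers both cases.
static bool load_cert(SSL* ssl, const char* pem_file) {
    return SSL_use_certificate_chain_file(ssl, pem_file) == 1;
}
```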
---------
Co-authored-by: winlin <winlinvip@gmail.com>
1. `trunk/research/st/exceptions.cpp` About exceptions with ST; works
well on Linux and macOS, but does not work on Cygwin.
2. `trunk/research/st/pthreads.cpp` About pthreads with ST; works well
on all platforms.
3. `trunk/research/st/hello.cpp` Hello world, without ST; works well on
all platforms.
4. `trunk/research/st/hello-world.cpp` Hello world, with ST; works well
on all platforms.
5. `trunk/research/st/hello-st.cpp` A very simple version of hello
world with ST; works well on all platforms.
When debugging the RTMP protocol, we can capture packets using tcpdump
and then replay the pcap file. For example:
```bash
cd ~/git/srs/trunk/3rdparty/srs-bench/pcap
tcpdump -i any -w t.pcap tcp port 1935
go run . -f ./t.pcap -s 127.0.0.1:1935
```
However, sometimes due to poor network conditions between the server and
the client, there may be many retransmitted packets. In such cases,
setting up a transparent TCP proxy that listens on port 1935 and
forwards to port 19350 can be a solution:
```bash
./objs/srs -c conf/origin.conf
cd 3rdparty/srs-bench/tcpproxy/ && go run main.go
tcpdump -i any -w t.pcap tcp port 19350
```
This approach allows implementing packet dumping, multipoint
replication, or detailed timestamp and byte information at the proxy,
so debugging information can be collected without modifying the server.
---------
`TRANS_BY_GPT4`
---------
Co-authored-by: john <hondaxiao@tencent.com>
1. When converting RTC to RTMP, the audio and video timestamps must be
synchronized. When the synchronization status changes, whether it
becomes unsynchronized or synchronized, print logs to facilitate
troubleshooting such issues.
2. Chrome uses STAP-A packets, meaning a single RTP packet contains the
SPS/PPS information. OBS WHIP, on the other hand, sends the SPS and PPS
in separate RTP packets. Since the SPS and PPS arrive in two independent
RTP packets, SRS needs to cache both packets, as sketched below.
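A simplified sketch of that caching (illustrative names, not the actual
SRS code):
```cpp
#include <string>

// Cache the latest SPS and PPS across RTP packets; only once both have
// been seen (whether from one STAP-A packet or from two separate
// packets) can the H.264 sequence header for RTMP be generated.
static std::string sps_cache, pps_cache;

void on_h264_nalu(int nalu_type, const std::string& payload) {
    if (nalu_type == 7) sps_cache = payload; // SPS
    if (nalu_type == 8) pps_cache = payload; // PPS
    if (!sps_cache.empty() && !pps_cache.empty()) {
        // mux_sequence_header(sps_cache, pps_cache); // hypothetical hook
    }
}
```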
---------
Co-authored-by: john <hondaxiao@tencent.com>
HLS typically has a delay of around 30 seconds, roughly comprising three
segments, each lasting 10 seconds. We can reduce the delay to about 5
seconds by lowering the segment duration to 2 seconds and starting
playback from the last segment, achieving a stable delay.
Of course, this requires setting OBS's GOP to 1 second, the profile to
baseline, the preset to fast, and the tune to zerolatency.
Additionally, a few configurations in the hls.js player need to be
updated, such as starting playback from the last segment, setting the
maximum buffer, and enabling accelerated playback to reduce latency.
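On the server side, lowering the segment duration might look like the
sketch below (standard SRS `hls_fragment`/`hls_window` directives; the
hls.js player tweaks are separate and not shown):
```
vhost __defaultVhost__ {
    hls {
        enabled      on;
        hls_fragment 2;  # 2-second segments instead of the 10s default
        hls_window   10; # keep a short playlist for low latency
    }
}
```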
---------
Co-authored-by: chundonglinlin <chundonglinlin@163.com>
Co-authored-by: john <hondaxiao@tencent.com>
For WebRTC:
When a player connects before the publisher, the track payload type (PT)
may not be updated.
- At the source change step, update the track PT.
---------
Co-authored-by: mingche.tsai <w41203208.work@gmail.com>
Co-authored-by: john <hondaxiao@tencent.com>
Accommodate certain complex parameters that include the "=" character,
for example:
`configure --extra-flags="-O2 -D_FORTIFY_SOURCE=2"`
---------
Co-authored-by: john <hondaxiao@tencent.com>
### Description
A crash occurs when a forward relay connection has not been established
and an unpublish event is triggered simultaneously. For instance, if DVR
and forward are configured with a specified DVR path that already
exists, initiating a stream will trigger a crash.
### Objective
Fix the crash caused by the forward mechanism.
### Additional Information
For detailed reproduction steps, please refer to issue #3901.
---------
Co-authored-by: john <hondaxiao@tencent.com>
In an SDK that supports RTC Opus stereo, the parameter `stereo=1` may
appear in the SDP. SRS needs to handle this correctly and return an
answer that enables WebRTC stereo support.
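Illustratively, the handling could look like this (simplified string
handling, not the actual SRS SDP code):
```cpp
#include <string>

// If the offer's Opus fmtp line carries "stereo=1", echo it in the
// answer so that WebRTC stereo is actually negotiated.
std::string build_answer_fmtp(const std::string& offer_fmtp) {
    std::string answer = "minptime=10;useinbandfec=1"; // typical Opus fmtp
    if (offer_fmtp.find("stereo=1") != std::string::npos) {
        answer += ";stereo=1";
    }
    return answer;
}
```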
---------
`TRANS_BY_GPT4`
Security is the built-in IP whitelist feature of SRS, which allows or
denies clients by IP address or IP range. Previously it only supported
RTMP; this PR extends it to HTTP-FLV, HLS, WebRTC, SRT, and other
protocols.
See https://ossrs.io/lts/en-us/docs/v6/doc/security for an example.
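For reference, a typical rule set might look like the sketch below
(using the documented `security` directives; the exact rules are
illustrative):
```
vhost __defaultVhost__ {
    security {
        enabled       on;
        allow play    all;       # anyone may play
        deny publish  all;       # nobody may publish...
        allow publish 127.0.0.1; # ...except localhost
    }
}
```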
---------
Co-authored-by: john <hondaxiao@tencent.com>
The `-` character, when placed in the middle of a character class in a
regular expression, is interpreted as a range. It must be placed at the
beginning or end of the class to be interpreted as a literal character.
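A small illustration (shown with std::regex, though the rule is general
across regular expression dialects):
```cpp
#include <regex>

// '-' in the middle of a character class denotes a range.
std::regex range("[a-z]+");    // letters 'a' through 'z'
// '-' at the end (or beginning) of the class is a literal hyphen.
std::regex literal("[a-z-]+"); // letters 'a'..'z' plus '-'
```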
---------
`TRANS_BY_GPT4`
---------
Co-authored-by: john <hondaxiao@tencent.com>
The `ffmpeg-opus` tool allows you to control the delay using the
`opus_delay` option. The minimum delay can be set to 2.5ms. However, in
practice you cannot set it this low: you need at least 10 frames of
lookahead for the audio encoder, otherwise the sound will be distorted.
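With FFmpeg's native Opus encoder, for instance, the option can be set
through the AVOption API; a hedged sketch (assuming `enc` is an
`AVCodecContext*` opened for the `opus` encoder):
```cpp
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>
}

// Illustrative helper: set the encoder's delay (in milliseconds) on the
// codec's private options. Per the note above, too small a value
// distorts the audio.
static int set_opus_delay(AVCodecContext* enc, double delay_ms) {
    return av_opt_set_double(enc->priv_data, "opus_delay", delay_ms, 0);
}
```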
---------
Co-authored-by: chundonglinlin <chundonglinlin@163.com>
In pure audio mode, there are no keyframes. Therefore, we can only rely
on the length of the slice to determine whether it should be output.
`hls_aof_ratio` is the coefficient that, once reached, will generate a
new slice.
In scenarios with video, if the `hls_aof_ratio` is too small, for
example 1.2, and the GOP (Group of Pictures) is 10 seconds, then a slice
will definitely be generated at 12 seconds. At this point, if there are
no keyframes, the next slice will start with a non-keyframe.
A safer coefficient is twice the GOP. This way, it won't trigger
incorrectly and force a ts segment file to be transcoded individually.
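The pure-audio reap condition can be sketched as follows (simplified
pseudo-logic; the names are illustrative, not the actual SRS symbols):
```cpp
// Without keyframes, only the accumulated slice duration can trigger a
// new segment: reap once it reaches hls_fragment * hls_aof_ratio.
bool should_reap_audio_only(double slice_duration,
                            double hls_fragment,
                            double hls_aof_ratio) {
    return slice_duration >= hls_fragment * hls_aof_ratio;
}
```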
---------
Co-authored-by: Haibo Chen <495810242@qq.com>
1. After enabling FFmpeg opus, the transcoding time for each opus packet
is around 4ms.
2. To speed up case execution, our test publisher sends 400 opus packets
at intervals of 1ms.
3. After the publisher starts, wait for 30ms, then the player starts.
4. Due to the lengthy processing time for each opus packet, SRS
continuously receives packets from the publisher, so it doesn't switch
coroutines and can't accept the player's connection.
5. Only after all opus packets are processed will it accept the player
connection. Therefore, the player doesn't receive any data, leading to
the failure of the case.
---------
Co-authored-by: winlin <winlinvip@gmail.com>
### Description
When converting between AAC and Opus formats (aac2opus or opus2aac), the
`av_frame_get_buffer` API is frequently called.
### Objective
The goal is to optimize the code logic and reduce the frequent
allocation and deallocation of memory.
In the case of aac2opus, `av_frame_get_buffer` is still frequently
called. In the case of opus2aac, the goal is to avoid calling
`av_frame_get_buffer` and reduce memory allocations.
### Additional Note
Before calling the `av_audio_fifo_read` API, use
`av_frame_make_writable` to check if the frame is writable. If it is not
writable, create a new frame.
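A sketch of that check (illustrative, using the FFmpeg channel-layout
API from 5.1+; error handling trimmed):
```cpp
extern "C" {
#include <libavutil/frame.h>
#include <libavutil/channel_layout.h>
#include <libavutil/audio_fifo.h>
}

// Before reading from the FIFO, ensure the destination frame is
// writable; if that fails (e.g. the encoder still references its
// buffers), fall back to a freshly allocated frame.
static AVFrame* writable_frame(AVFrame* frame, int sample_fmt,
                               int nb_samples, int channels) {
    if (av_frame_make_writable(frame) >= 0) return frame;
    av_frame_free(&frame);
    frame = av_frame_alloc();
    frame->format = sample_fmt;
    frame->nb_samples = nb_samples;
    av_channel_layout_default(&frame->ch_layout, channels);
    av_frame_get_buffer(frame, 0);
    return frame;
}
// then: av_audio_fifo_read(fifo, (void**)frame->data, frame->nb_samples);
```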
---------
Co-authored-by: john <hondaxiao@tencent.com>
By default, caching is enabled during compilation, which means that data
is cached in Docker. This helps to avoid compiling third-party
dependency libraries. However, sometimes when updating third-party
libraries, it's necessary to disable caching to temporarily verify if
the pipeline can succeed. Therefore, a configure option should be added.
When this option is enabled, the compilation cache will not be used, and
all third-party libraries will be compiled from scratch.
---------
Co-authored-by: winlin <winlinvip@gmail.com>
Following the example in FFmpeg's documentation, before calling the API
`avcodec_send_frame`, always use `av_frame_alloc` to create a new frame.
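A minimal sketch of that pattern (illustrative; frame setup and error
handling trimmed):
```cpp
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/frame.h>
}

// Allocate a fresh frame for every send, as in the FFmpeg doc example.
// avcodec_send_frame() takes its own reference to the frame's buffers,
// so freeing our frame right after the call is safe.
static int send_one_frame(AVCodecContext* enc) {
    AVFrame* frame = av_frame_alloc();
    if (!frame) return AVERROR(ENOMEM);
    // ... set format/nb_samples and fill the samples here ...
    int r = avcodec_send_frame(enc, frame);
    av_frame_free(&frame);
    return r;
}
```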
---------
Co-authored-by: Haibo Chen <495810242@qq.com>
SRS supports WebRTC over TCP by reading a 2-byte length prefix, like
`read(buf, 2)`. However, in some cases `read` might return only 1 byte,
causing the subsequent data to be parsed incorrectly and making it
impossible to push or play streams.
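The usual fix is to loop until the full prefix (and then the full
payload) has arrived; a hedged sketch of such a helper:
```cpp
#include <unistd.h>
#include <cstdint>
#include <cstddef>

// Keep reading until exactly n bytes arrive: a single read() on a TCP
// socket may return fewer bytes, e.g. 1 byte of a 2-byte length prefix.
static ssize_t read_fully(int fd, uint8_t* buf, size_t n) {
    size_t got = 0;
    while (got < n) {
        ssize_t r = read(fd, buf + got, n - got);
        if (r <= 0) return r; // error or connection closed
        got += (size_t)r;
    }
    return (ssize_t)got;
}

// Usage: read the 2-byte big-endian length, then the payload itself:
//   read_fully(fd, hdr, 2);
//   size_t len = ((size_t)hdr[0] << 8) | hdr[1];
//   read_fully(fd, body, len);
```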
---------
Co-authored-by: john <hondaxiao@tencent.com>