To install semantic_aiortc you will need to install a few OS and Python libraries. This guide uses Ubuntu 24.04 and Python 3.12.3.
Install OS dependencies:
apt-get update && \
apt-get install -y libavdevice-dev libavfilter-dev libopus-dev libvpx-dev pkg-config libsrtp2-dev ffmpeg
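Optionally, you can confirm that the native libraries are visible before building. The following Python sketch is an extra check that is not part of the original guide; it only looks the libraries up with ctypes and shutil:
# Optional sanity check (addition to the guide): confirm the native libraries
# installed above can be located before building from source.
import ctypes.util
import shutil

for lib in ("opus", "vpx", "srtp2", "avdevice", "avfilter"):
    print(f"lib{lib}:", ctypes.util.find_library(lib) or "NOT FOUND")

print("ffmpeg:", shutil.which("ffmpeg") or "NOT FOUND")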
Install Python dependencies:
python3 -m pip install opencv-python==4.11.0.86 pillow==11.1.0 yt-dlp==2025.3.31 setuptools==75.8.0
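Optionally, verify the pinned packages from a Python shell (again, an extra check rather than an original step):
# Optional check (addition): confirm the pinned Python dependencies are importable
# and report their installed versions.
from importlib.metadata import version

import cv2  # importing cv2 also verifies the native OpenCV bindings load

for pkg in ("opencv-python", "pillow", "yt-dlp", "setuptools"):
    print(pkg, version(pkg))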
Clone this repository:
git clone https://github.com/LABORA-INF-UFG/semantic_aiortc.git
cd semantic_aiortc/
git submodule update --init --recursive
Install SR-GAN. Check the README for detailed instructions.
(Recommended) Build and install the semantic_aiortc files:
python3 setup.py install
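After the build, a quick import check can confirm the install. Assuming the fork installs under the aiortc package name (an assumption, not stated in the guide), something like the following should work:
# Quick post-install check (assumption: the fork is installed as the `aiortc` package).
import aiortc

print("aiortc version:", aiortc.__version__)
print("RTCPeerConnection available:", hasattr(aiortc, "RTCPeerConnection"))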
(Alternative option) Install Docker and docker-compose, then run:
docker-compose up -d
Run the videostream-cli example to validate the semantic_aiortc installation. Check the README for detailed information.
Get a sample video to use as input:
yt-dlp -f "bestvideo[ext=mp4]+bestaudio[ext=m4a]" "https://youtu.be/annk7l_V0nY" -o "input.mp4"
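Optionally, inspect the downloaded clip with OpenCV (installed above) before streaming it; this is just a convenience check added here, not part of the original workflow:
# Optional: inspect input.mp4 with OpenCV before streaming it.
import cv2

cap = cv2.VideoCapture("input.mp4")
print("opened:", cap.isOpened())
print("resolution:", int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), "x",
      int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
print("fps:", cap.get(cv2.CAP_PROP_FPS))
print("frames:", int(cap.get(cv2.CAP_PROP_FRAME_COUNT)))
cap.release()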
Run the offer with the downloaded video as input:
python3 cli.py offer --play-from input.mp4 --signaling unix-socket --signaling-path /tmp/test.sock --resize-to 64x64 --extract-frames sent_frames
On a different terminal instance, run the answer with the output video name:
python3 cli.py answer --record-to output.mp4 --signaling unix-socket --signaling-path /tmp/test.sock --extract-frames upscaled_frames --no-upscale
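For reference, the sketch below shows roughly what the offer side of this example does using plain aiortc APIs (Unix-socket signaling plus a MediaPlayer feeding the peer connection). It omits the fork-specific options such as --resize-to and --extract-frames, so treat it as an illustration of the underlying flow rather than a replacement for cli.py:
# Minimal sketch of the offer side with plain aiortc APIs (illustrative only;
# cli.py in this repository adds the fork-specific semantic options).
import asyncio

from aiortc import RTCPeerConnection
from aiortc.contrib.media import MediaPlayer
from aiortc.contrib.signaling import BYE, UnixSocketSignaling


async def run_offer():
    signaling = UnixSocketSignaling("/tmp/test.sock")
    pc = RTCPeerConnection()

    # Feed the downloaded file into the peer connection.
    player = MediaPlayer("input.mp4")
    if player.video:
        pc.addTrack(player.video)

    await signaling.connect()

    # Create and send the SDP offer over the Unix socket.
    await pc.setLocalDescription(await pc.createOffer())
    await signaling.send(pc.localDescription)

    # Wait for the answer from the other peer and apply it.
    obj = await signaling.receive()
    if obj is not BYE:
        await pc.setRemoteDescription(obj)
        await asyncio.sleep(30)  # stream for a while

    await pc.close()
    await signaling.close()


asyncio.run(run_offer())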
It is possible to select specific codecs to stream. Check the available options with python3 cli.py --help.
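For context, this is how plain aiortc exposes codec preferences programmatically; the exact flags this fork's cli.py provides are the ones listed by python3 cli.py --help:
# Reference sketch: codec selection with plain aiortc (the cli.py flags may differ).
from aiortc import RTCPeerConnection
from aiortc.rtcrtpsender import RTCRtpSender

pc = RTCPeerConnection()
transceiver = pc.addTransceiver("video")

# List the video codecs this aiortc build supports, then keep only VP8.
codecs = RTCRtpSender.getCapabilities("video").codecs
print("supported:", sorted({c.mimeType for c in codecs}))
transceiver.setCodecPreferences([c for c in codecs if c.mimeType == "video/VP8"])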