Add msposd ground station OSD support #1715

Open
wants to merge 4 commits into master
Conversation

lida2003

Supports the request in #1671.

@henkwiedig
Contributor

henkwiedig commented Feb 24, 2025

This assumes msposd data is sent via a separate wfb rx/tx pair.
Currently the SBC-GS and CC images assume otherwise.
see:
https://github.com/JohnDGodwin/zero3w-gs/blob/main/scripts/stream.sh#L139 (via tunnel)
https://github.com/zhouruixi/SBC-GS/blob/main/gs/gs.conf#L40 (transport agnostic)

I think we should agree on a default.
@JohnDGodwin @zhouruixi, what do you think?

@zhouruixi

SBC GS Stock Edition uses the wfb tunnel, and msposd listens on port 5000 by default.
SBC GS CC Edition can use the wfb tunnel or a separate wfb rx, and msposd listens on port 14551 by default.

I have two main concerns:

  1. Is there anything wrong with using 14551 as the default receiving port? If not, I lean toward 14551: no extra variable is needed to set the msposd port, and it is easier to remember, since mavlink is 14550 and msposd is 14551.
  2. Why did John use port 5000? Was it chosen randomly, or is there some other benefit?

I am also considering 4 types of routers:

router=>
1: mavfwd, MAVLink-based OSD, GS-side rendering
2: msposd_air, msposd air-side rendering
3: msposd_gs_wfbtx, msposd GS-side rendering over a separate wfb tx
4: msposd_gs_tunnel, msposd GS-side rendering over the wfb tunnel
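The four options above could be sketched as a simple dispatch table. This is an illustrative Python sketch, not the actual firmware code; the `ROUTERS` table and `router_mode` helper are hypothetical, only the four names come from this thread.

```python
# Hypothetical dispatch table for the four router= values proposed above.
# The mode names come from this discussion; everything else is illustrative.
ROUTERS = {
    1: ("mavfwd", "MAVLink-based OSD, GS-side rendering"),
    2: ("msposd_air", "msposd, air-side rendering"),
    3: ("msposd_gs_wfbtx", "msposd, GS-side rendering over a separate wfb tx"),
    4: ("msposd_gs_tunnel", "msposd, GS-side rendering over the wfb tunnel"),
}

def router_mode(router: int) -> str:
    """Map a router= setting to its symbolic mode name."""
    name, _description = ROUTERS[router]
    return name
```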

@JohnDGodwin
Member

Port 5000 was selected randomly. Please let me know which port we select going forward and I'll change it in the SBC-GS image.

@lida2003
Author

I support using 14551 by default, which is what jetson-fpv uses now.
Once there is a final decision, I can update this PR ASAP.

@henkwiedig
Contributor

Does Android GS support GS rendering?

@zhouruixi

Does Android GS support GS rendering?

Only MAVLink-based OSD for now.

@henkwiedig
Contributor

I like @zhouruixi's idea of having 4 options.
@lida2003, can you add this?

@lida2003
Author

@henkwiedig @JohnDGodwin

It should work now. Make sure tunnel is set to true, and check (with ps) that msposd is running in the right mode:

  • msposd_gs_wfbtx
  750 root      0:18 msposd --channels 8 --master /dev/ttyS2 --baudrate 115200 --out 127.0.0.1:14551 -r 20 --ahi 0
  • msposd_gs_tunnel
  750 root      0:01 msposd --channels 8 --master /dev/ttyS2 --baudrate 115200 --out 10.5.0.1:5000 -r 20 --ahi 0

@zhouruixi

I think port 14551 should be used either way: data sent over wfb_tx should go to 127.0.0.1:14551, and data sent over the tunnel should go to 10.5.0.1:14551. That way, no matter which method is used, msposd on the ground side only needs to listen on 0.0.0.0:14551; otherwise you have to change the port msposd listens on whenever you change methods.
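The point about listening on 0.0.0.0 can be illustrated with plain Python sockets (a minimal sketch, not msposd itself): a UDP socket bound to all interfaces receives the MSP stream whether it was addressed to 127.0.0.1 (dedicated wfb_rx) or 10.5.0.1 (wfb-ng tunnel). Function names here are hypothetical.

```python
# Minimal sketch of a transport-agnostic GS-side listener.
# Binding to 0.0.0.0 means packets addressed to 127.0.0.1:14551 (separate
# wfb_rx) and to 10.5.0.1:14551 (wfb tunnel) land on the same socket.
import socket

def open_osd_listener(port: int = 14551) -> socket.socket:
    """Bind a UDP socket on all interfaces for the MSP/OSD stream."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))  # accept traffic regardless of transport
    return sock

def recv_osd_packet(sock: socket.socket, timeout: float = 2.0) -> bytes:
    """Block until one datagram arrives, whichever transport delivered it."""
    sock.settimeout(timeout)
    data, _addr = sock.recvfrom(2048)
    return data
```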

@lida2003
Author

I think port 14551 should be used either way: data sent over wfb_tx should go to 127.0.0.1:14551, and data sent over the tunnel should go to 10.5.0.1:14551.

Why is this flexibility needed?

  1. The tunnel, compared to raw WFB UDP, is relatively resource-intensive.
  2. As mentioned earlier, msposd_gs_wfbtx uses WFB UDP and msposd_gs_tunnel uses the tunnel.
  3. Is there real user demand for supporting both WFB UDP and the tunnel on 14551 at the same time, or is it just for the convenience of the program code? Given the first point, this approach is not recommended; developers are capable of handling either WFB UDP or the tunnel efficiently.

@zhouruixi

Why is this flexibility needed?

  1. The tunnel, compared to raw WFB UDP, is relatively resource-intensive.
  2. As mentioned earlier, msposd_gs_wfbtx uses WFB UDP and msposd_gs_tunnel uses the tunnel.
  3. Is there real user demand for supporting both WFB UDP and the tunnel on 14551 at the same time, or is it just for the convenience of the program code? Given the first point, this approach is not recommended; developers are capable of handling either WFB UDP or the tunnel efficiently.

First, to be clear, the root cause of this problem is that after GS-side rendering was added to msposd, openipc did not follow up with a corresponding solution in time, so two different implementations ended up in real-world use.

  1. The biggest advantage of the tunnel-based implementation is that it is simple and clear. The ground station's wfb-ng has always had a tunnel function, the openipc firmware now supports and enables the wfb tunnel by default, and the tunnel is increasingly an indispensable basic feature. As of now, the official firmware still has no corresponding implementation for GS-side rendering (if merged, this submission will change that), which means that users who want GS-side msposd rendering must modify the openipc code. To use the tunnel, you only need to delete -osd and change the target IP from 127.0.0.1 to 10.5.0.1, which is convenient for users. The official openipc SBC firmware only supports the tunnel method. The SBC ground station I maintain supports both methods, but it also defaults to the tunnel. If a user does not use a ready-made ground station image but runs Ubuntu or another Linux distribution, then after manually installing wfb-ng no additional setup is required (no extra wfb_rx to start); just run msposd listening on 10.5.0.10:14551.

  2. Opening an additional wfb_tx/rx pair is not hard, but using that pair elegantly requires cumbersome configuration. Below is the ground station configuration shared by henkwiedig; it works well, but it is not user-friendly.

[gs]
streams = [{'name': 'video',   'stream_rx': 0x00, 'stream_tx': None, 'service_type': 'udp_direct_rx',  'profiles': ['base', 'gs_base', 'video', 'gs_video']},
           {'name': 'mavlink', 'stream_rx': 0x10, 'stream_tx': 0x90, 'service_type': 'mavlink',        'profiles': ['base', 'gs_base', 'mavlink', 'gs_mavlink']},
           {'name': 'tunnel',  'stream_rx': 0x20, 'stream_tx': 0xa0, 'service_type': 'tunnel',         'profiles': ['base', 'gs_base', 'tunnel', 'gs_tunnel']},
           {'name': 'msp',     'stream_rx': 0x11, 'stream_tx': 0x91, 'service_type': 'udp_proxy',      'profiles': ['base', 'gs_base', 'gs_msp']}
           ]
[gs_msp]
peer = 'connect://127.0.0.1:14551'  # outgoing connection
frame_type = 'data'  # Use data or rts frames
fec_k = 1            # FEC K (For tx side. Rx will get FEC settings from session packet)
fec_n = 2            # FEC N (For tx side. Rx will get FEC settings from session packet)
fec_timeout = 0      # [ms], 0 to disable. If no new packets during timeout, emit one empty packet if FEC block is open
fec_delay = 0        # [us], 0 to disable. Issue FEC packets with delay between them.

  3. Using an additional wfb_tx/rx pair does have some benefits, such as lower resource usage and lower latency. But no one has tested either rigorously; how much extra resource the tunnel uses and how much latency it adds are unknown, so it is hard to judge whether those gains justify supporting only the extra wfb_tx/rx pair.

All of the above led me to decide to support both methods. For openipc, supporting both at the same time only requires adding an elif, two lines of code. For the sake of flexibility and user convenience, isn't that worth it?

@lida2003
Author

  1. To use the tunnel, you only need to delete -osd and change the target ip from 127.0.0.1 to 10.5.0.1, which is convenient for users.

router = 4 is supported

2. It is not difficult to open an additional pair of wfb_tx/rx, but it requires cumbersome configuration to use the additional wfb_tx/rx pair elegantly. The following is the ground station configuration shared by henkwiedig, which is good, but not user-friendly.

It's OK; currently one-way wfb-ng (wfb_tx only) works for msposd.

3. Using additional wfb_tx/rx pairs does have some benefits, such as less resource usage and faster speed. As for these two, no one has done rigorous testing. It is unknown how much more resources will be occupied and how much delay will be added. It is also difficult to judge whether it is worth supporting only additional wfb_tx/rx pairs for these improvements.

wfb-ng says the tunnel uses more resources, and this patch gives users the choice between 3 (wfb_tx) and 4 (tunnel).

@zhouruixi

zhouruixi commented Feb 26, 2025

@lida2003 we only think the same port should be used for both methods. Your PR uses 5000 for now. That's all.

@lida2003
Author

lida2003 commented Feb 26, 2025

@JohnDGodwin Please note this port change on the tunnel. As @zhouruixi requested, and I think @henkwiedig agrees, 5000 changes to 14551, which should become the default port for msposd ground station mode.

3. As for these two, no one has done rigorous testing. It is unknown how much more resources will be occupied and how much delay will be added. It is also difficult to judge whether it is worth supporting only additional wfb_tx/rx pairs for these improvements.

@zhouruixi BTW, as there might be some performance concerns: "IPv4 tunnel for generic usage. You can transmit ordinary ip packets over WFB link. Note, don't use ip tunnel for high-bandwidth transfers like video or mavlink because it has more overhead than raw udp streams."

I have some figures for reference, and @henkwiedig may have much deeper insight into the details.
For ground station OSD one-way communication, here are some numbers:

  • a 1:26 (86 s) test with a 20 FPS OSD update rate
  • roughly 9953 bytes for subtitles (it should be more; about 90% of identical custom messages were dropped)
  • roughly 1837300 bytes for the OSD

Then we have (9953+1837300)*8/(60+26) ≈ 171837.48 bps, compared to the UART (MAVLink, commonly configured at 115200 bps). No performance issue has been reported so far, but it is worth noting in advance; lowering the OSD FPS or switching from the tunnel to raw UDP can mitigate it.
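The arithmetic above can be reproduced in a couple of lines. This is only a back-of-the-envelope check using the figures reported in this comment; the helper name is illustrative.

```python
# Back-of-the-envelope throughput estimate for the one-way GS OSD stream,
# using the figures from the 86-second, 20 FPS test reported above.
def osd_throughput_bps(subtitle_bytes: int, osd_bytes: int, seconds: int) -> float:
    """Average link usage in bits per second."""
    return (subtitle_bytes + osd_bytes) * 8 / seconds

bps = osd_throughput_bps(9953, 1837300, 60 + 26)
# ≈ 171837 bps: above a 115200 bps UART, but far below a video stream
```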

@zhouruixi

zhouruixi commented Feb 27, 2025

as there might be some performance concerns

OK, I will change the default to the wfb_rx method once this PR is merged.
