Realsense d405 Filtering with Color by Python #11921
Comments
Hi @Hayouung-Park There is a particular order in which Intel recommend applying the list of post-processing filters. https://dev.intelrealsense.com/docs/post-processing-filters#using-filters-in-application-code Your list is mostly correct, though it is recommended that depth filtering (thresholding) be applied last in the list.
It also appears that your script is not using a colorizer to color-shade the depth values like the RealSense Viewer does (for example, from blue for near values to red for distant values). A pyrealsense2 / Open3D pointcloud script with filtering and a colorizer that another RealSense user created may be a helpful reference; its structure is similar to your own script, with some parts in a slightly different order.
Thank you for your help.
There are not many RealSense Open3D examples available, unfortunately. Most of them can be found at the link below. http://www.open3d.org/docs/release/tutorial/sensor/realsense.html
Do you have a different colored object that you can try? It is a general physics principle (not specific to RealSense) that dark grey or black absorbs light, which makes it more difficult for depth cameras to read depth information from such surfaces. The darker the color shade, the more light is absorbed and the less depth detail the camera can obtain. This could account for the missing data on the image that corresponds to the black bottom of the can.

In regard to floating areas of noise that are not connected to the body of the can, such as the spray on the edge at the top of the can, an outlier removal pointcloud filter in Open3D could remove such areas of unconnected points. http://www.open3d.org/docs/latest/tutorial/Advanced/pointcloud_outlier_removal.html

You can publish a depth-color aligned pointcloud with the RealSense ROS wrapper. If you are asking, though, whether you can generate a pointcloud in Open3D first and then transfer it to a ROS topic, I do not know.
Hi @Hayouung-Park Do you require further assistance with this case, please? Thanks!
Case closed due to no further comments received. |
Please provide a description of the problem
Hi~
I'm trying to make a point cloud using Open3D so I can build a 3D scanning trajectory, but there is a lot of noise. My goal is to make a point cloud like the one in realsense-viewer, so I used the librealsense2 filters with the same parameters as realsense-viewer. But the same noise is still there, and after filtering there is no color.
```python
import pyrealsense2 as rs
import numpy as np
import open3d as o3d

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth)
config.enable_stream(rs.stream.color)
pipeline.start(config)

pc = rs.pointcloud()

frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()
color_frame = frames.get_color_frame()

decimation_filter = rs.decimation_filter()
decimation_filter.set_option(rs.option.filter_magnitude, 2)

threshold_filter = rs.threshold_filter()  # Threshold
threshold_filter.set_option(rs.option.min_distance, 0.05)
threshold_filter.set_option(rs.option.max_distance, 4)

depth_to_disparity = rs.disparity_transform(True)

spatial_filter = rs.spatial_filter()  # Spatial
spatial_filter.set_option(rs.option.filter_magnitude, 2)
spatial_filter.set_option(rs.option.filter_smooth_alpha, 0.5)
spatial_filter.set_option(rs.option.filter_smooth_delta, 20)

temporal_filter = rs.temporal_filter()  # Temporal
temporal_filter.set_option(rs.option.filter_smooth_alpha, 0.4)
temporal_filter.set_option(rs.option.filter_smooth_delta, 20)

disparity_to_depth = rs.disparity_transform(False)

# Depth Image Filtering
filtered_depth = depth_frame
filtered_depth = decimation_filter.process(filtered_depth)
filtered_depth = threshold_filter.process(filtered_depth)
filtered_depth = spatial_filter.process(filtered_depth)
filtered_depth = temporal_filter.process(filtered_depth)

# Convert Depth Image and Color Image to Point Cloud
points = pc.calculate(filtered_depth)
pc.map_to(color_frame)
colors = np.asanyarray(color_frame.get_data())

# Point Cloud data to Numpy
vtx = np.asarray(points.get_vertices())
vtx = vtx.view(np.float32).reshape(-1, 3)
clr = colors.reshape(-1, 3) / 255.0

# Point cloud
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(vtx)
pcd.colors = o3d.utility.Vector3dVector(clr)

# Visualize
vis = o3d.visualization.Visualizer()
vis.create_window()
vis.add_geometry(pcd)
vis.run()
vis.destroy_window()

pipeline.stop()
```
Is there any solution to this problem?
Or is there any function that produces a point cloud like realsense-viewer does?
(I'm using a D405.)
This is an image from when I used the filtering code.
(A point above the box causes problems with making the trajectory.)
And this is an image from when I didn't use the filtering code.
And this is the realsense-viewer image that I want to reproduce.