Memory is not released after grabbing is completed #774
Would you try putting the pylon grab result into the queue instead of doing a deep copy? The line `copy.deepcopy(grab.Array)` is not part of pypylon, so we do not know whether the memory leak is coming from the copy function.
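A minimal sketch of one reading of this suggestion (an assumption on my part): queue the array returned by `grab.Array` directly, which as far as I can tell is already a NumPy array separate from the pylon buffer, so the extra `copy.deepcopy` should not be needed.

```python
import time

def capture_into_queue(cam, image_queue, num_images):
    """Sketch: put grab.Array into the queue without an extra deepcopy."""
    for i in range(num_images):
        with cam.RetrieveResult(100) as grab:
            if not grab.GrabSucceeded():
                continue
            # grab.Array is used as-is; no copy.deepcopy is applied here.
            image_queue.put((grab.Array, f"img{i}.tiff", time.time(), i + 1))
    image_queue.put(None)  # sentinel for the consumer thread
```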
It doesn't change. The initial implementation was as you described. We tried creating a deep copy of the array in case something was preventing the release of the memory allocated for the grab result.
OK, I will check and get back to you.
It seems that the memory leak is caused by OpenCV. Please do two tests for me.
Let me know the result of these two tests, please.
We don't think that is the cause. When we remove `cv2.imwrite`, the output was like that: Initial RAM usage: 311.91 MB
For the second suggestion we have tried two things. First, we put `None` into the queue and wrote a dummy `np.zeros` array in the writer instead of the grabbed image:

```python
def capture_images(cam, image_queue, num_images):
    for i in range(num_images):
        with cam.RetrieveResult(100) as grab:
            if not grab.GrabSucceeded():
                continue
            capture_time = time.time()
            # image_array = copy.deepcopy(grab.Array)
            filename = f'img{i}.tiff'
            image_queue.put((None, filename, capture_time, i+1))
            grab.Release()
    # Signal the save thread that we're done capturing
    image_queue.put(None)

def save_images(image_queue, output_path):
    initial_ram = get_ram_usage()
    print(f"Initial RAM usage: {initial_ram:.2f} MB")
    while True:
        item = image_queue.get()
        if item is None:
            break
        image_array, filename, capture_time, image_number = item
        image_array = np.zeros((1080, 1920), dtype=np.uint8)
        cv2.imwrite(os.path.join(output_path, filename), image_array)
        # Explicitly delete large objects
        del image_array
```

And the output:
Secondly, we put the deep-copied array into the queue:

```python
def capture_images(cam, image_queue, num_images):
    for i in range(num_images):
        with cam.RetrieveResult(100) as grab:
            if not grab.GrabSucceeded():
                continue
            capture_time = time.time()
            image_array = copy.deepcopy(grab.Array)
            filename = f'img{i}.tiff'
            image_queue.put((image_array, filename, capture_time, i+1))
            grab.Release()
    # Signal the save thread that we're done capturing
    image_queue.put(None)

def save_images(image_queue, output_path):
    initial_ram = get_ram_usage()
    print(f"Initial RAM usage: {initial_ram:.2f} MB")
    while True:
        item = image_queue.get()
        if item is None:
            break
        image_array, filename, capture_time, image_number = item
        image_array = np.zeros((1080, 1920), dtype=np.uint8)
        cv2.imwrite(os.path.join(output_path, filename), image_array)
        # Explicitly delete large objects
        del image_array
```

And the output didn't change.
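As an aside, the `get_ram_usage` helper used in both snippets is not shown in the thread; a typical implementation (an assumption on my part, using `psutil`) would look like this:

```python
import os

import psutil

def get_ram_usage() -> float:
    """Return the resident set size (RSS) of the current process in MB."""
    process = psutil.Process(os.getpid())
    return process.memory_info().rss / (1024 * 1024)
```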
> When we remove cv2.imwrite the output was like that: Initial RAM usage: 311.91 MB

This seems to be OK, am I correct? Can you check whether the writing finishes its work and whether the queue still contains any objects?
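One simple way to check both points; `image_queue` follows the snippets above, while the `save_thread` name and this helper are hypothetical:

```python
import queue
import threading

def report_writer_state(image_queue: queue.Queue, save_thread: threading.Thread) -> None:
    """Wait for the writer thread to finish, then report what is left in the queue."""
    save_thread.join()  # blocks until save_images() returns
    print(f"writer alive: {save_thread.is_alive()}, items left in queue: {image_queue.qsize()}")
```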
Yes, that is the expected memory usage. We have made some changes to the queue consumption logic: the consumer thread now waits 5 seconds before stopping, in case an image has been grabbed but not yet added to the queue. It didn't fix the issue, though.

```python
while True:
    try:
        item = image_queue.get(block=True, timeout=5)
    except queue.Empty:
        break
    if not item:
        continue
    image_array, filename, capture_time, image_number = item
    cv2.imwrite(os.path.join(output_path, filename), image_array)
    # Explicitly delete large objects
    del image_array
```

We also logged the queue size at the end of the program.
Would you test this code, please?
If you replace the `cv2.imwrite` line with `PIL.Image.save()`, you will not see the issue. So you can use the pylon image saving or PIL Image as a workaround. Maybe OpenCV uses cache memory to speed up the writing process. Please test both of these alternative libraries and give us your feedback.
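A sketch of the two alternatives mentioned here, roughly following Pillow's `Image.save` and pypylon's image-saving pattern (treat it as an illustration, not the exact code the maintainer has in mind; the file name arguments are hypothetical):

```python
import os

from PIL import Image
from pypylon import pylon

def save_with_pil(image_array, output_path, filename):
    # Pillow writes the NumPy array returned by grab.Array directly.
    Image.fromarray(image_array).save(os.path.join(output_path, filename))

def save_with_pylon(grab_result, output_path, filename):
    # pypylon's own persistence: attach the grab result buffer and save it.
    img = pylon.PylonImage()
    img.AttachGrabResultBuffer(grab_result)
    img.Save(pylon.ImageFileFormat_Tiff, os.path.join(output_path, filename))
    img.Release()  # detach so the buffer can be reused for grabbing
```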
Using `Image.fromarray(image_array).save(os.path.join(output_path, filename))`, the output was:

So, the memory consumption is lower but it is still not released after grabbing is completed.

```
● data-acq.service - Data acquisition service for pad system
     Loaded: loaded (/etc/systemd/system/data-acq.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2024-09-13 14:36:01 +03; 2min 3s ago
   Main PID: 179295 (python)
      Tasks: 29 (limit: 154020)
     Memory: 16.3G
        CPU: 5min 51.595s
     CGroup: /system.slice/data-acq.service
             └─179295 python main.py
```

We prepared a simple script to monitor memory consumption of our service.
What could be the reason for the significant difference between the RSS reported by `ps` and the memory reported by systemd? (See the note after the script below.)

Here's the script we used to monitor memory usage:

```bash
#!/bin/bash
SERVICE_NAME="data-acq.service"
INTERVAL=5  # Time in seconds between each check

# Function to convert KB to MB
kb_to_mb() {
    echo "scale=2; $1 / 1024" | bc
}

# Function to convert human-readable sizes to MB
to_mb() {
    local size=$1
    case ${size: -1} in
        K|k) echo "scale=2; ${size%?} / 1024" | bc ;;
        M|m) echo "${size%?}" ;;
        G|g) echo "scale=2; ${size%?} * 1024" | bc ;;
        *)   echo "scale=2; $size / 1024 / 1024" | bc ;;  # Assume bytes if no unit
    esac
}

# Function to get memory usage
get_memory_usage() {
    PID=$(systemctl show -p MainPID -q $SERVICE_NAME | cut -d= -f2)
    if [ -z "$PID" ] || [ "$PID" -eq 0 ]; then
        echo "Service $SERVICE_NAME is not running."
        return 1
    fi
    # Get RSS and VSZ in kilobytes
    read RSS VSZ <<< $(ps -o rss=,vsz= -p $PID)
    # Convert to megabytes
    RSS_MB=$(kb_to_mb $RSS)
    VSZ_MB=$(kb_to_mb $VSZ)
    # Get systemd reported memory
    SYSTEMD_MEM=$(systemctl status $SERVICE_NAME | grep Memory | awk '{print $2}')
    SYSTEMD_MB=$(to_mb $SYSTEMD_MEM)
    echo "$(date '+%Y-%m-%d %H:%M:%S') - RSS: ${RSS_MB} MB, VSZ: ${VSZ_MB} MB, Systemd: ${SYSTEMD_MB} MB"
}

echo "Monitoring RAM usage of $SERVICE_NAME every $INTERVAL seconds. Press Ctrl+C to stop."
echo "Timestamp - RSS (ps command), VSZ (ps command), Systemd reported memory"

while true; do
    get_memory_usage
    sleep $INTERVAL
done
```
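A note on the gap between the `ps` numbers and the systemd figure: systemd's `Memory:` value comes from the service's cgroup accounting, which also counts page cache charged to the cgroup (for example, cached pages of the image files being written), while RSS only counts the process's resident pages, so a large difference is plausible when many files are written. A small sketch for inspecting the split, assuming cgroup v2 and the `data-acq.service` unit shown above:

```python
# Sketch: compare anonymous memory vs. page cache charged to the service's cgroup.
# Assumes cgroup v2 mounted at /sys/fs/cgroup and the unit name shown above.
CGROUP = "/sys/fs/cgroup/system.slice/data-acq.service"

def read_memory_stat(path: str = CGROUP) -> dict:
    stats = {}
    with open(f"{path}/memory.stat") as f:
        for line in f:
            key, value = line.split()
            stats[key] = int(value)
    return stats

if __name__ == "__main__":
    stats = read_memory_stat()
    with open(f"{CGROUP}/memory.current") as f:
        total = int(f.read())
    print(f"memory.current (cgroup total): {total / 2**20:.1f} MB")
    print(f"anon (heap-like memory):       {stats['anon'] / 2**20:.1f} MB")
    print(f"file (page cache):             {stats['file'] / 2**20:.1f} MB")
```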
Hi,

When we grab images using the Python API and save the grabbed images to disk, memory is not released after completing all tasks. The memory consumption is proportional to the number of images we are grabbing. We have tried saving the images with Pillow, manually calling the garbage collector, and explicitly deleting the `image_array`, but none of these approaches had an effect on memory consumption. What could be the reason for this memory leak?
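A self-contained sketch of the grab-and-save loop described here, including the attempted mitigations; the camera setup, timeout, and file names are assumptions, not taken from the actual service code:

```python
import gc
import os

from PIL import Image
from pypylon import pylon

def grab_and_save(num_images: int, output_path: str) -> None:
    cam = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
    cam.Open()
    cam.StartGrabbing()
    try:
        for i in range(num_images):
            with cam.RetrieveResult(1000) as grab:
                if not grab.GrabSucceeded():
                    continue
                image_array = grab.Array
                # Save via Pillow, then try to free the array explicitly.
                Image.fromarray(image_array).save(os.path.join(output_path, f"img{i}.tiff"))
                del image_array
                gc.collect()
    finally:
        cam.StopGrabbing()
        cam.Close()
```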
Example output:
Is your camera operational in Basler pylon viewer on your platform
Yes
Hardware setup & camera model(s) used
CPU architecture: x86_64
Operating System: Ubuntu 22.04
RAM: 128GB
Runtime information: