Unable to create MMALISPResizer object #582
What errors are you getting related to the port format?
Have you tried OMX.broadcom.resize, which only has one output?
Sorry for the delay - just got back from holiday. I now have a clean install of Raspbian Stretch, updated/upgraded and configured with 160M of video memory. As a relative noob I have so far just been trying simple Python programming using the mmalobj library, e.g. MMALCamera(), MMALVideoEncoder() - mostly very successful, but I've stumbled trying to get MMALISPResizer() to work.

Initially I get the error 'picamera.exc.PiCameraRuntimeError: Expected 1 outputs but found 2 on component b'vc.ril.isp'' when instantiating the object. I can sidestep this by changing line 2481 in mmalobj.py from "opaque_output_subformats = (None,)" to either "opaque_output_subformats = (None,) * 2" or "opaque_output_subformats = ('OPQV-single',) * 2". Either option then throws an identical error when I try to connect the resizer to the camera. The error messages are:

Traceback (most recent call last):

Any suggestions gratefully received :-)
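As an aside, the same two-output override can be applied at runtime by patching the class attribute before instantiation, rather than editing the installed mmalobj.py. A minimal sketch of the pattern - using a stand-in class here, since the real picamera.mmalobj.MMALISPResizer needs Pi firmware to instantiate:

```python
# Sketch: override a class-level port-format tuple at runtime instead of
# editing mmalobj.py in site-packages. MMALISPResizer is stood in for by a
# dummy class; on a Pi you would patch picamera.mmalobj.MMALISPResizer.

class MMALISPResizer:                       # stand-in for the real component
    opaque_output_subformats = (None,)      # upstream default: one output

# Patch the class attribute to declare two outputs, mirroring the manual
# edit to line 2481 of mmalobj.py described above.
MMALISPResizer.opaque_output_subformats = (None,) * 2

print(MMALISPResizer.opaque_output_subformats)  # (None, None)
```

The patch must run before the component is constructed, since the tuple is read during instantiation.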
I must have closed this by mistake :-\ Any additional suggestions would be welcome.
Yes, the ISP currently has two outputs as it supports producing two images simultaneously (the second has to be at a lower resolution than the first). It'll be gaining a third port soon to pass out image statistics.

At a guess the Python library isn't passing in a large enough structure for the mmal_port_parameter_get of MMAL_PARAMETER_SUPPORTED_ENCODINGS to hold all the encodings that the ISP can support. https://github.com/waveform80/picamera/blob/master/picamera/mmal.py#L1818 would appear to define MMAL_PARAMETER_ENCODING_T as having 30 slots. I thought the ISP only had about 22 supported encodings, but I'd guess that it is now more, and MMAL is returning "Out of resources" as the supplied structure is too small.
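The size mismatch described above can be illustrated with ctypes, which is how picamera declares its MMAL structures. This is a simplified standalone sketch, not the library's actual definitions - the field names merely mirror the style of picamera's mmal.py:

```python
import ctypes as ct

# Simplified stand-ins for picamera's MMAL structures: a parameter header
# followed by a fixed-size array of FourCC encoding codes.
class MMAL_PARAMETER_HEADER_T(ct.Structure):
    _fields_ = [('id', ct.c_uint32), ('size', ct.c_uint32)]

def encoding_struct(slots):
    """Build a MMAL_PARAMETER_ENCODING_T-like struct with `slots` entries."""
    class MMAL_PARAMETER_ENCODING_T(ct.Structure):
        _fields_ = [
            ('hdr', MMAL_PARAMETER_HEADER_T),
            ('encoding', ct.c_uint32 * slots),
        ]
    return MMAL_PARAMETER_ENCODING_T

# With 30 slots the buffer cannot hold the 62 input encodings the ISP now
# reports, so mmal_port_parameter_get fails with "Out of resources";
# 63 slots (as tested later in this thread) leaves room to spare.
small = encoding_struct(30)
large = encoding_struct(63)
print(ct.sizeof(small))   # 8-byte header + 30 * 4 bytes = 128
print(ct.sizeof(large))   # 8-byte header + 63 * 4 bytes = 260
```

MMAL writes the actual required size back into the header's `size` field, which is how a caller can detect that a larger buffer is needed.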
Did anyone figure out why MMALISPResizer can't be used? I'm getting the same error as reported initially - the ISP resizer has too many output ports when using picamera in pure-mmal style. A shame, because I think it does exactly what I want in a single block: two differently encoded (one large JPEG, one small RGB) and resized images from one camera port. That is assuming it is in the Pi Zero and Zero W firmware for the latest Buster Raspberry Pi OS. I tried increasing the slots at line 1818 but still no luck. I can probably use a splitter, a resizer and two encoders, and I will try that, but the single-block approach seems ideal.

OK, so with that change the initial error goes away, but there is something a bit deeper:

print(ispresizer.inputs[0].supported_formats)
Traceback (most recent call last):
https://github.com/waveform80/picamera/blob/master/picamera/mmalobj.py#L2552 wants to be updated - the ISP component supports even more formats now: 62 input formats, 18 formats on output[0], and 11 on output[1].
Not quite. The low res output can only be a YUV format, not RGB.
Yeah, spent some time messing around. YUV is cool - the Y channel is better than taking the green channel of RGB, and the picamera trick described in the docs of only supplying a buffer the size of the Y channel is also cool; using the 'I' input type to PIL for the Y channel makes a great grayscale image with way less memory use. picamera has some of the best documentation I have seen, with excellent examples (a most excellent library generally - a video-port capture to a JPEG file at the actual speed of the camera, set to 10fps at half the v1 sensor resolution).

Got the mmal encoder working OK from a YUV numpy array, so I can do the ISP-resizer thing using a longer capture chain anyway: just get a large YUV capture and, if I want to save it, use the mmal encoder on the numpy buffer. Maybe I'll try changing it to 70 then, and see what happens... but having tested out YUV, I've still got to test the basic resizer - where to invest time, eh! I think perhaps trying to use mmal callbacks in Python is not as rewarding as diving into the raspistillyuv C code, eh!
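The Y-channel trick mentioned above comes down to YUV420 plane geometry: the picamera docs describe raw captures being padded (width to a multiple of 32, height to a multiple of 16), the Y plane is one byte per padded pixel, and the full YUV420 frame is 1.5x that. A quick sketch of the sizes involved, assuming those padding rules:

```python
def yuv420_sizes(width, height):
    """Return (y_plane_bytes, full_yuv420_bytes) for a raw capture,
    assuming picamera's padding: width to a multiple of 32, height to 16."""
    fw = (width + 31) // 32 * 32      # padded width
    fh = (height + 15) // 16 * 16     # padded height
    y = fw * fh                       # Y plane: 1 byte per padded pixel
    return y, y * 3 // 2              # + U and V planes, quarter size each

# Half of the v1 sensor's 2592x1944 resolution, as used above:
y, full = yuv420_sizes(1296, 972)
print(y, full)   # 1312*976 = 1280512 for Y alone, 1920768 for full YUV420
```

Supplying a buffer of only `y` bytes is what lets a grayscale capture use a third less memory than the full YUV frame.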
Setting MMAL_PARAMETER_ENCODING_T to 63 slots works OK - tested using dummy buffers with one output captured, against the basic resizer, and actually the basic resizer is faster. Since I can feed dummy buffers to the JPEG encoder, and the pipeline I want is a big image (maybe saved as JPEG) and a small image (definitely, for testing), just using the resizer on a picamera YUV capture - which works anyway - is probably the best route. (Though a big JPEG image, a medium-sized JPEG image and a small test YUV image would be best; I could probably do that with a splitter.) I'm weighing that up against the hassle of no longer being able to just pip install picamera into a new venv if I want the extra features, given there's not a lot of release activity here.

Is there anywhere that explains what the 3rd output of the ISP resizer is - what the stats are? It is a 10k binary buffer of something or other... perhaps worth trying to reverse engineer, if it's useful for motion detection? How an AI would do it... f@ck knows what it is... does it work for this purpose... LOL. Funny - the problem of wifi bandwidth from a Pi Zero actually running real-time motion detection on a busy road is similar to the problem of transmission from Mars, eh! Did that rock just move???? What resolution image is available? Only transmit on necessity, eh! LOL!

Maybe I have not noticed the benefit of dropping picamera completely, setting up an mmal pipeline and calling a single "blit" function from capture to several outputs. I may have time to mess around some more; maybe a fork of just the mmal stuff might be simplest... picamera is cool for what it is already. Anyway... cool! Thanks.
Well, I forked it, and if I'm just using a splitter, ISP resizer and two encoders it does work: 10fps solid, producing two JPEG files on disk or in a memory buffer plus a smaller test buffer. 15fps-ish max, but I was only aiming for 10fps (on a Pi Zero W). Uploaded a simple test prog to the fork.
Isn't it time ls and rsync did not fail on a simple wildcard? No transfer speed problems, but... flipping ls and rsync fail... no problem with the data quantity, but now too many filenames... (OK, there are workarounds, but...) Actually I will first try reducing the stupid-long filenames I am using, LOL.

I mean it (the raw mmal pipeline speed) is better than OK - it is great in lots of ways, in daylight. But trawling through the reams of picamera code that produce the great Python user interface, I noticed that definitions will be missing for the massive number of format conversions the ISP resizer provides (you can list them OK with an existing function) - they are not defined as constants, so that is something that also needs doing... but you can just pass an integer rather than the named typedef lookup, based on the output of seeing what it supports.
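The ls/rsync failure on a wildcard with many files is usually the kernel's argument-length limit ("Argument list too long") - the shell expands the glob before the command ever runs. The standard workaround is to let find do the matching instead of the shell; a sketch using a throwaway directory (all paths here are placeholders):

```shell
# Create a throwaway directory with many files to demonstrate the pattern.
mkdir -p /tmp/many_frames
for i in $(seq 1 200); do touch "/tmp/many_frames/frame_$i.jpg"; done

# Instead of `ls /tmp/many_frames/*.jpg` (which can hit the argument
# limit when there are tens of thousands of files), let find do the
# matching; the file list travels over a pipe, not the argv array.
find /tmp/many_frames -name '*.jpg' | wc -l

# rsync can likewise read the list from find rather than the command
# line, sidestepping the limit entirely (destination is a placeholder):
# find /tmp/many_frames -name '*.jpg' -printf '%f\n' \
#     | rsync -a --files-from=- /tmp/many_frames/ /tmp/dest/
```

Shortening the filenames helps the interactive case, but the pipe-based form scales regardless of name length.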
Funny, regarding how good the manual is: I was messing with trying to set "sports" etc. mode and just used the mmal brightness parameter example, and at 3fps at night I am getting images as good as (actually better than) I would have expected from picamera at a 1 second exposure (there always seemed to be something competing when setting parameters in picamera). Also funny: 3fps is too fast for this road with just a 32GB SD card, LOL. I mean, I was estimating 11,000 cars per day passed this window... and yes, there does seem to be more traffic after "lockdown" than before... quite possibly fewer lorries.

What I have noticed are some long-delay dropouts. I was wondering if some wifi send-loads-of-crap attack could hang up the processor???? Something is!!!! At indeterminate intervals... well, soon to be determinate, as I added some time-logged warnings about loop-time delays. Anyway, jaw-dropping levels of awesomeness! Thanks again.
In my motion detect prog, based on my blitbuffer example in my fork: determinate-wise, it is the rsync causing some of the dropouts - running the rsync daemon may help, I guess. Indeterminate-wise (as yet), up to 2 second dropouts in a loop, not very often. Running at nice -18 maybe broadly helped, as did maybe switching the default CPU governor from powersave to ondemand, but still, on occasion, massive dropouts (clucking mandb or some other clucking stuff perhaps???). Actually, maybe it is just the nature of Python deciding to do a bit of its own thing??? Nice, like a tiger or lion! Constrictor, eh! But generally it is chuffing awesome... heard this car-tornado sound earlier (not the a-hole boy racers that reckon the loudest exhaust is best) and was able to determine an unmarked police car with hidden police lights flashing going past, followed by a marked police car flying past the window. I mean, this was in low light, 9.30ish pm - precise model of car... maybe with a 100W IR illuminator...
Last addition to the rant... perhaps... Motion detection is very scene specific - always trade-offs. The blitbuffer seems stable; I reckon the trouble comes when writing the buffers to file, especially writing one image back. So, starting in this scene, what is usually a long chain of cars going past - roughly 6 frames per car when they are doing 30mph-ish - with a double write to SD... and then doing stuff also on that capture node over ssh, like rsync or looking at the log file, or both at the same time... I reckon I'll have a crack at getting Gentoo up. I see Raspbian Lite has voluntary preemption; I essentially need to put wifi at a lower priority than the Python app. Using threads in Python to write the file may also help... a lot more work trying to push a single-core Pi Zero to its limits. I also wondered if a problem I encountered in my own meanderings has happened with picamera: some aspects are workarounds for an old version of the firmware which has improved since... like I do not seem to need to run the preview port with a real or null sink. Balances are reasonably set for this scene anyway.
I am trying to create the more efficient MMALISPResizer object using the VPU. The documentation indicates that this is only available on more recent firmware, but since that note is dated 2017 and my Pi 3B is fairly recent and fully updated, I am assuming that this is not the problem.
The initial problem is that creating the object fails with an error message saying it expected 1 output but found 2 in the hardware. This seems to be an inconsistency between mmalobj.py (which declares 1 output) and the underlying OMX.broadcom.isp, which indicates that there are two outputs. I can bypass the initial error by simply changing the port count in mmalobj.py; however, I then get further errors related to negotiating the port format.
This seems to be a bug - are there any updates in the pipeline to address this?