Capturing a static image from the local MediaStream #116
There is no easy way to do that, and given that this plugin mimics the official WebRTC API, I don't want a custom API for capturing images from the video. The way to go should be adding [...]. I'm sorry, I cannot point you in the right direction :) |
Thanks for the update, I understand your approach. And yes, implementing the [...] |
@ibc If you were to add an internal API to grab the image, I can work on integrating it into canvas so you can use the same code you would in a browser |
@contra I'm so sorry, but currently I cannot get the time to work on that. |
@ibc Can you leave the issue open so somebody (ie me) can work on it? |
Sure. |
@oliverlukesch I had to do the same in my application and I've extended the plugin on the JS and Swift side. What you can do: take a snapshot of the UIView element. You need the ID of the MediaStreamRenderer. In cordova-plugin.iosrtc.js do (for example) the following trick: [...]
The ID refers to your specific view element. It's mapped to the Swift dictionary
which contains the specific UIView. Do a local snapshot in Swift with this code: [...]
and return it to the function as base64. This should do the trick. Regards (greetings to Berlin) |
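The JS and Swift snippets that went with this comment were lost in the thread export. Purely as an illustration of the described approach, here is a minimal JS-side sketch; the native action name (Snapshot_request), the iosrtcPlugin exec service name, and the renderer id exposed on video.render are assumptions, and the Swift counterpart (snapshotting the UIView behind that id and returning it as base64) is not shown.

```js
// Hypothetical helper for cordova-plugin.iosrtc.js (sketch only, not the
// original code from this comment). It asks the native side to snapshot the
// UIView behind a MediaStreamRenderer and returns the result as base64.
function takeSnapshot(videoElement, callback) {
	// Renderer created by the plugin for this <video> element; its id is the
	// key into the Swift dictionary holding the corresponding UIView.
	var renderer = videoElement.render;
	if (!renderer) {
		throw new Error('no MediaStreamRenderer attached to this video element');
	}
	cordova.exec(
		function (base64) {
			// Hand the snapshot back as a data URL.
			callback('data:image/jpg;base64,' + base64);
		},
		function (error) {
			console.error('snapshot failed: ' + error);
		},
		'iosrtcPlugin',      // assumed exec service name of the plugin
		'Snapshot_request',  // hypothetical native action implemented in Swift
		[renderer.id]        // renderer id, assumed to be exposed on the JS object
	);
}
```

It would then be called as takeSnapshot(localVideoElement, function (dataUrl) { img.src = dataUrl; }).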
Hey Rolf, that's great, many thanks for sharing! I think I'll have time to implement this solution next week; will keep you updated. Greetings to Osnabrück :) |
@derMani @oliverlukesch Awesome! |
Guys, this is exactly what I am missing: the ability to write the current stream to a canvas element. I want to use the camera as a motion detector (see this super simple and awesome example: https://github.com/ReallyGood/js-motion-detection). If you manage to get this to work, that would be awesome! Greetings from Stuttgart |
@gefangenimnetz Pull Requests are welcome. |
@derMani I would be interested in capturing the image, but the only "limit" I see in your method is that the resolution of the image will depend on the size of the UIView. So this means you get an image of 320 or 640 pixels (HxW)... Am I wrong? |
@SirSeymour The resolution will depend on what you decide to use with the plugin; from there you can use the base64 image data to resize, crop, etc. Is anyone on this thread actually working on a PR? @oliverlukesch, are you intending to send in a PR? |
@1N50MN14 If the local video UIView is 240x240, you'll get a 240x240 image, don't you? I'm saying that if I want to capture the full-resolution image from the local camera, I can't with the method stated above (e.g. the video source is set to 1280x960 but the UIView is 640x320, so the image I'll get is 640x320). |
@SirSeymour You're actually right, this won't serve, say, as a full 12 MP image capture, but it's good for stuff such as "choose your profile photo". |
Anyway, I just need to know whether anyone on this thread is working on a PR so I can decide whether to write it myself, as it's not clear whether "implementing the solution" translates to submitting a PR. |
At least for me, I am not working on a PR. Please note that the screenshot approach is kind of a heavy task (at least, that's what the documentation says). For anything live / canvas related, it might be better to somehow utilize an AVCaptureConnection and AVCaptureStillImageOutput on the video connection. |
OK, cool, thanks @derMani. I'll squeeze this onto my to-do list next Friday and submit a PR. @SirSeymour I'll try to generate a full-resolution image; I'll have to see how that works out in relation to the constraints the plugin receives. |
@1N50MN14 Nope, haven't had any time to touch this issue so far. Great that you're taking a look! |
I am currently working with the iosrtc plugin and I can test a PR when available. |
@1N50MN14 Hi, I am just wondering if you had any luck creating the solution. Did you make any progress? Many thanks. |
@ikostic Apologies for not updating this; unfortunately I lost interest in this project / plugin. I'll no longer be participating here and will focus on my own implementation instead. I have some catching up to do, but once I get there I can revisit the repo and send a PR if no one has implemented this by then... sorry about that. |
@1N50MN14 No problem, I understand. I will try to find a solution. |
Is there any chance that someone will make this possible with iosrtc? |
@scottmahr I'm sending a PR for this one on Friday |
Wow, that is amazing timing. In your implementation, what sort of format will the image come out of the library in? Right now I use a canvas and canvas.toDataURL to pull a JPEG of the current video output. Let me know if you want help testing whether it works on different devices. Thanks! Scott |
@scottmahr Similar to [...] |
You might have more joy with it than me! I am sure the theory is sound. Let me know if you have any success on it. |
Unfortunately after I added the plugin to my project it won't build (compiler error, a missing type). But if I figure out why that is and get it working I'll let you know! |
Has anyone already got this working? |
No. |
Related #253 (comment) |
Done here: [...] Example: [...]
Test Source: [...] Implementation: [...] |
Can the video element be hidden? I only want to show the frames in the canvas. Is this possible? |
6.0.15 supports CanvasRenderingContext2D.drawImage on a video element with an iosrtc MediaStream:

```js
// Apply CanvasRenderingContext2D.drawImage monkey patch
var drawImage = CanvasRenderingContext2D.prototype.drawImage;
CanvasRenderingContext2D.prototype.drawImage = function (arg) {
	var args = Array.prototype.slice.call(arguments);
	var context = this;
	if (arg instanceof HTMLVideoElement && arg.render) {
		// iosrtc video: ask the renderer for a base64 frame, load it into an
		// Image and draw that instead of the <video> element (asynchronously).
		arg.render.save(function (data) {
			var img = new window.Image();
			img.addEventListener("load", function () {
				args.splice(0, 1, img);
				drawImage.apply(context, args);
			});
			img.setAttribute("src", "data:image/jpg;base64," + data);
		});
	} else {
		// Regular element: fall back to the native drawImage.
		return drawImage.apply(context, args);
	}
};
```
|
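As a usage illustration (not part of the original comment): with the patch above applied, a still frame can be grabbed either through a canvas or straight from the renderer's save() callback that the patch relies on. Note that under the patch the draw for an iosrtc video happens asynchronously, once the base64 frame has loaded, so the canvas has to be read a moment later. The element lookup and dimensions below are placeholders.

```js
// Assumes the monkey patch above is installed and `video` is a <video>
// element attached to an iosrtc MediaStream (so video.render exists).
var video = document.querySelector('video');
var canvas = document.createElement('canvas');
canvas.width = 640;   // placeholder size
canvas.height = 480;

// Route 1: draw through the patched drawImage, then read the canvas later,
// since the patched call paints the frame asynchronously.
canvas.getContext('2d').drawImage(video, 0, 0);
setTimeout(function () {
	var still = canvas.toDataURL('image/png');
	console.log('captured still, length: ' + still.length);
}, 200);

// Route 2: skip the canvas and use the renderer's save() callback directly,
// exactly as the patch itself does.
if (video.render) {
	video.render.save(function (data) {
		var still = 'data:image/jpg;base64,' + data;
		console.log('captured still, length: ' + still.length);
	});
}
```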
I just tried that, but in Safari it gives me a TypeError on "drawImage.apply(context, args);". Any idea why? |
@gefangenimnetz This is not made for Safari but for Cordova; the img is not a string but a loaded image tag, and I have no idea whether Safari supports drawing a video. If you really want to help with this issue, please test with Cordova. You need to pass the proper arguments (videoEl, 0, 0), but since you did not provide any details on what exactly you called, it's impossible to know what to expect. The script above is not made for Safari. |
I am using Cordova, debugging the WKWebView with Safari. This is my code:

```js
if (navigator.mediaDevices.getUserMedia) {
	navigator.mediaDevices.getUserMedia({ video: true, audio: false })
		.then((stream) => {
			const video = this.videoRef.current; // This is the <video> element, taken from the DOM via React
			video.onloadedmetadata = (event) => {
				this.canvasRef.current.width = event.srcElement.videoWidth;
				this.canvasRef.current.height = event.srcElement.videoHeight;
			};
			video.srcObject = stream;
			const onFrame = () => {
				window.requestAnimationFrame(onFrame);
				const canvasContext = this.canvasRef.current.getContext('2d');
				canvasContext.drawImage(video, 0, 0);
			};
			onFrame();
		})
		.catch((error) => {
			console.log(error);
		});
}
```
|
@gabrielenosso looking into it one moment. |
To complete the information: I am using version 6.0.15. Changing the line [...] EDIT: in your comment the line is [...], but in the repository code the line is [...]. |
@gabrielenosso First, this issue was originally closed. Please comment on #582, not this issue. Note: regarding the background thread warning, I have a fix, but it also creates another error, and requestAnimationFrame will not be efficient with the current implementation. |
#582 will be closed in 6.0.16 with the img fix. It's on master already. |
Note: 6.0.21 fixes a memory leak on video capture. |
Hi Iñaki, first I'd like to thank you for this great plugin. I'm currently building a Cordova-based video-chat solution using OpenTok.js, Crosswalk and your plugin, and most things are working out great. I'll happily share the project once it's done.
Now to the issue at hand: I need to be able to capture static images from the MediaStream of the local client before the stream gets sent to the remote client and therefore gets compressed and loses quality. The stream needs to continue while capturing the static images.
The common WebRTC solution is to draw the content of the <video> element onto a <canvas> element and use <canvas>.toDataURL('image/png') to get the static image. However, due to the specifics of iosrtc, this is obviously not possible. My guess is that this requires some sort of native implementation. While my Swift/Objective-C skills are limited, I'll happily give it a shot and create a pull request if you can point me in the right direction. Or maybe you already have a solution at hand.
Many thanks!
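For reference, the standard in-browser version of what is being asked for (draw the <video> onto a <canvas> and call toDataURL) looks roughly like the sketch below; it works in a regular browser, while under iosrtc it needs the drawImage support discussed earlier in the thread. The element lookup is a placeholder.

```js
// Plain-browser sketch of the <video> -> <canvas> -> data URL approach.
const video = document.querySelector('video'); // element playing the local stream
const canvas = document.createElement('canvas');

function captureStill() {
	canvas.width = video.videoWidth;        // use the stream's native resolution
	canvas.height = video.videoHeight;
	canvas.getContext('2d').drawImage(video, 0, 0);
	return canvas.toDataURL('image/png');   // static PNG snapshot as a data URL
}
```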