
add support for sending speech over remote desktop protocol channels #3564

Open
nvaccessAuto opened this issue Oct 8, 2013 · 75 comments
Labels: enhancement; p4 (https://github.com/nvaccess/nvda/blob/master/projectDocs/issues/triage.md#priority); triaged (has been triaged; issue is waiting for implementation)

Comments


Reported by camlorn on 2013-10-08 00:52
The remote desktop protocol is Microsoft's protocol for remote access. I believe it is also used by Cytrix, and possibly some of VMWare's products. Regardless, it is used by Microsoft's own implementation which comes with windows. As part of RDP, applications can ask for RDP channels, a method by which they can send arbitrary data back and forth.
Here's some documentation: http://msdn.microsoft.com/en-us/library/windows/desktop/aa383509%28v=vs.85%29.aspx
Anyhow, more and more places are moving to remote-only access. A good chunk of my university does this. A lot of businesses do it. There are threads from time to time on audiogames.net by people who aren't even working in the computer industry needing some sort of remote access solution. RDP is possibly the easiest to add. Currently, one needs to purchase a even more expensive jaws license for the functionality there, and I believe window-eyes is likewise. There is nothing a blind person can point at for this type of situation that doesn't require someone, usually the blind person, paying extra money. Remote access has become a common thing now, and it is my opinion that it is important that NVDA look into adding it soon.
I am almost certain the Jaws implementation uses RDP channels to send speech strings, which is why I mentioned the protocol. Obviously, implementing an NVDA-specific protocol that requires more open ports won't fly with a lot of IT people-the first thing I thought of was just opening a plane text socket to send speech over, but that's pretty easy to hack. RDP can, in some cases, work without any extra setup beyond installing NVDA on the server.
Blocking #3694


Comment 2 by camlorn on 2013-10-13 18:19
For some reason, I marked this as defect. My bad.


Comment 4 by nvdakor on 2014-04-19 08:48
Hi,
I'm doing some research on it at the moment: reading articles about RDP, reading jFW documentation on remote desktop, and so on. I think if speech can be sent over a remote link, then we may have an easier time coming up with solutions. The next feature request would be remote braille support.
From what I can tell from reading the JFW help system, the way JFW does it is by sending speech messages, not the audio itself. This has a huge advantage: with the message protocol, one can listen to and read what remote NVDA is saying via local speech and braille, respectively. Of course, there should be an option to listen to the actual audio from the remote end.
I propose that we come up with a global plugin first for our initial implementation, then, after testing and reviewing the core integration route, integrate this into NVDA core in the end. I'd be happy to start on some initial work in the summer when I get my new desktop (so the desktop can serve as the remote server while my development computer, a laptop, serves as the client).
Comments are appreciated.


Comment 5 by jteh on 2014-04-19 08:58
Yes, you would push just the speech (and potentially braille) data, both text and speech commands. You do this using RDP virtual channels. Once you get the basic architecture in place, speech shouldn't be too hard.

Braille is quite a bit harder than speech because you need to handle input from the client as well (not just pushing output from the server), but every braille display has different input bindings, which doesn't work well for the "virtual" display on the remote side.
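The split described here, plain text plus speech commands in one sequence, can be sketched as a flattening step before the data hits the virtual channel. The command classes and `(kind, value)` tuple format below are simplified stand-ins invented for illustration, not NVDA's real speech command API:

```python
# Hypothetical sketch: flattening a speech sequence (text plus commands)
# into plain tuples that can be serialized for a virtual channel.
# PitchCommand/LangChangeCommand are simplified stand-ins here.

class PitchCommand:
    def __init__(self, offset):
        self.offset = offset

class LangChangeCommand:
    def __init__(self, lang):
        self.lang = lang

def flatten_sequence(sequence):
    """Turn a mixed list of strings and command objects into
    (kind, value) tuples suitable for serialization."""
    out = []
    for item in sequence:
        if isinstance(item, str):
            out.append(("text", item))
        elif isinstance(item, PitchCommand):
            out.append(("pitch", item.offset))
        elif isinstance(item, LangChangeCommand):
            out.append(("lang", item.lang))
        else:
            raise TypeError("unsupported speech item: %r" % (item,))
    return out

seq = ["Hello", PitchCommand(20), "world", LangChangeCommand("fr_FR"), "bonjour"]
print(flatten_sequence(seq))
```

The receiving side would do the reverse mapping, turning tuples back into whatever command objects its local synthesizer understands.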


Comment 6 by nvdakor on 2014-04-20 04:30
Hi,
Just tested it with a virtualized Windows Server 2012 R2 Standard as the host. For now, the easiest way is copying NVDA settings over to the roaming AppData folder for the remote user. When the remote user logs in, they can start NVDA by pressing the shortcut key. Also, the remote user needs to set audio playback to be done on the local computer.
Thanks.


Comment 7 by nvdakor on 2014-04-26 04:17
Hi,
I'm trying to run a trace on various speech-related functions, starting with ui.message, to see when NVDA sends the message to the synthesizer to be spoken.
Another idea might be to utilize speech viewer as an example of remote channel output - use its methods to send speech message data over to the remote connection to be spoken by the synthesizer at the local machine.


Comment 8 by nvdakor on 2014-04-26 06:35
Hi,
Throwing out some implementation ideas:

  • Create a speech synthesizer named "remote speech" that'll send the speech sequence over a virtual channel when the synth.speak(speechSequence) method is invoked. This should run on the remote NVDA.
  • Allow remote and local NVDA to detect that we're indeed in an RDP session, so that the remote NVDA can initialize the remote speech synth. The initialize() method should find out if local NVDA is running, and if so, send a handshake request to allow speech output via virtual channels. The local NVDA should receive this handshake and reply "true" to receive remote speech data. The handshake would be needed for security if we're communicating over the Internet.
  • In remote speech, synth.speak(sequence) will send the speech data to the local NVDA, which in turn will speak remote NVDA's speech data. We must find a way to differentiate between local and remote data (if possible).
  • From local NVDA, if we're focusing away from the RDP client (either when switching to windowed mode or using other apps), local NVDA should send a speech pause command to remote NVDA to "close" the speech virtual channel (to reclaim bandwidth and to avoid security issues). Then when we're back in RDP, local NVDA should send a resume command so the local user can listen to remote NVDA's speech output.
  • When the remote NVDA exits, remote speech's terminate() should close the virtual channel connection.

The above protocol could be extended with the braille.handler.message() method to allow remote braille access (might be a separate ticket).

A few possible issues to solve:
  • What if remote NVDA's remote speech fails? If so, revert to eSpeak, but before that, pop up a dialog saying that remote speech has failed to start and offer to retry or use eSpeak.
  • What if the user exits local NVDA? If so, we need a way to tell remote NVDA to stop sending data.

Since this whole ticket deals with a feature that might be used by very few users, I suggest starting out with a global plugin/speech synth package, then think about integrating into core later. Thanks.
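The handshake and pause/resume flow proposed above can be sketched with an in-memory call standing in for the virtual channel. Every message name ("HELLO", "ACCEPT", "SPEAK") is invented for illustration:

```python
# Sketch of the handshake/pause/resume protocol idea, with direct method
# calls standing in for a real RDP virtual channel. Message names invented.

class LocalEndpoint:
    """The client-side NVDA: accepts the handshake and speaks remote text."""
    def __init__(self):
        self.accepted = False
        self.spoken = []

    def receive(self, message, payload=None):
        if message == "HELLO":
            self.accepted = True
            return "ACCEPT"
        if message == "SPEAK" and self.accepted:
            self.spoken.append(payload)
        return None

class RemoteEndpoint:
    """The server-side NVDA: speaks only after the handshake, honours pause."""
    def __init__(self, local):
        self.local = local
        self.paused = False
        self.connected = False

    def connect(self):
        self.connected = self.local.receive("HELLO") == "ACCEPT"

    def speak(self, text):
        if self.connected and not self.paused:
            self.local.receive("SPEAK", text)

local = LocalEndpoint()
remote = RemoteEndpoint(local)
remote.connect()
remote.speak("Desktop")
remote.paused = True          # local user switched away from the RDP window
remote.speak("ignored")
remote.paused = False
remote.speak("back again")
print(local.spoken)
```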


Comment 9 by nvdakor on 2014-04-26 13:34
Hi,
Beqa told me (via Twitter) to take a look at the NVDA Remote Control branch. Apparently Aleskis has implemented a Tandem-like feature. I think this is a good start, although it may not provide the answer to this ticket.
At the moment I'm laying the foundation for this ticket as an add-on package: I implemented a small app module for MSTSC (MS Terminal Services Client) that contains a script for Control+Alt+Break (toggle full-screen mode) and told NVDA to announce that we're entering a remote session. The next task would be to write a global plugin whose initialize() detects whether we're under a remote session or not (the code would be windll.user32.GetSystemMetrics(0x1000); when run, this returns nonzero if we're in a remote session, and with that, we can log it at the info level).
Thanks.
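The detection check mentioned above, wrapped in a small helper: 0x1000 is the documented SM_REMOTESESSION metric, and the platform guard is only there so the sketch also loads on non-Windows machines:

```python
import ctypes
import sys

# GetSystemMetrics(SM_REMOTESESSION) returns nonzero inside an RDP session.
SM_REMOTESESSION = 0x1000

def is_remote_session():
    if sys.platform != "win32":
        return False  # not Windows, so certainly not an RDP session
    return bool(ctypes.windll.user32.GetSystemMetrics(SM_REMOTESESSION))

print(is_remote_session())
```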


Comment 10 by nvdakor on 2014-04-27 09:15
Hi,
A repo for the community add-on is now ready:
https://bitbucket.org/nvdaaddonteam/remoteaccess
Thanks.


Comment 11 by nvdakor on 2014-06-17 12:46
Hi,
It appears that remote access is harder than I thought:
• You need two modules: a client library (DLL) and a server application. We have a server app (nvda.exe running on a remote computer), but we need to build a client DLL that can interface with the server app, monitor the server NVDA for activities such as exits, and receive speech text.
• At least the client DLL needs to be registered with the local computer, which defeats the add-on proposal (or at least requires an installed version of NVDA in order for it to work). This applies to the client NVDA; we'll strictly assume that the remote NVDA is an installed copy.
Other than that, using virtual channels to send speech text would be feasible (we need to build the DLL and monitor virtual channel text traffic). Thanks.
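Whatever the client DLL and server app end up looking like, both ends need to agree on how speech text is delimited on the raw channel byte stream. A minimal length-prefixed framing, shown purely as an assumed illustration rather than any documented NVDA or RDP format:

```python
import struct

# Assumed framing: a 4-byte little-endian length prefix before each
# UTF-8 payload, so partial reads from the channel can be reassembled.

def pack_frame(text):
    data = text.encode("utf-8")
    return struct.pack("<I", len(data)) + data

def unpack_frames(buffer):
    """Extract complete messages; return (messages, leftover bytes)."""
    messages = []
    while len(buffer) >= 4:
        (length,) = struct.unpack_from("<I", buffer)
        if len(buffer) < 4 + length:
            break  # wait for more data from the channel
        messages.append(buffer[4:4 + length].decode("utf-8"))
        buffer = buffer[4 + length:]
    return messages, buffer

stream = pack_frame("hello") + pack_frame("wörld")
msgs, rest = unpack_frames(stream)
print(msgs, rest)
```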


Comment 12 by nvdaksh (in reply to comment description) on 2014-07-11 19:29
Replying to camlorn:
The method of creating another speech synthesizer to channel speech through a remote connection is the method that Window-Eyes uses. You have to open the virtual channel and then select the virtual speech synthesizer, all of which is done from the system that is accessing the remote desktop.
I have used this feature with Window-eyes for several years to access the server at the company where I work.
Many people who need to access their workspaces remotely could use this feature.



Comment 13 by nvdakor (in reply to comment 12) on 2014-07-11 19:46
Replying to nvdaksh:


Hi,
But we're talking about more than speech here: a client NVDA DLL should be able to pass gestures over to the server NVDA executable. This gets interesting when we talk about braille, which needs the foundation provided by virtual channel communication for passing speech strings back and forth (see subsequent comments as to why this is needed). Another potential issue with text sockets is handling of Unicode, especially for East Asian languages and others which require Unicode text, and interpreting speech commands such as changing languages for the synthesizer in use by the client (this can be negotiated in the startup phase, where the client DLL and server executable exchange key information such as the capabilities of the synthesizer at the client end).
Hope this helps.
Thanks.


Comment 14 by Brendon22 on 2014-07-12 07:56
Hi nvdakor,

Don't forget about the MS Window-Eyes offer. As far as I am aware, with this offer you get remote access for free. But yes, if you buy a license of Window-Eyes, it is not free.

Thank you.


Comment 15 by camlorn on 2014-07-14 18:30
This is being overcomplicated by an order of magnitude. NVDA has full control over the defined protocol and need only talk to NVDA; there is no negotiation. Either the client is capable of supporting the server's speech commands, or it isn't. At most, this is an "I am NVDA 2013.1" or something at the beginning of the stream. Encoding is also a nonissue, so long as both sides standardize on something; in this case, probably the encoding which is a superset of all of them (I want to say it's UTF-32, or something, but it's escaping me for the moment; in all honesty, UTF-8 may be enough). A first implementation need not be concerned with anything save respecting two commands at all times: say string and braille string. The latter shall not translate, as the server can do this for us beforehand.
More things can come later, i.e. respecting settings on the client's computer and being silent when the remote session isn't the focused window. Braille gestures are indeed important, but simply getting the keyboard to work should be the first step; getting this far should only require the client to be a dumb terminal. Make the client a smart terminal only after the client works as a dumb terminal, and then you can throw in all the bells and whistles. Doing as much as possible on the server is a hard requirement here: add-ons can do a lot of things, and the server may have different add-ons than us.
And I don't know for sure, but I suspect this does not require changing synths on the client; I see no reason that it can't funnel into the one you're already using. Being able to only use the remote session would be pretty lame, to be very honest, especially in situations where you've got to do something on the client after or during configuring something on the server, or where you're moving data from a program on the server to a program on the client and back (yes, really; lots of remote access software offers shared clipboards now).
If no one manages to get this working, I may have the time and inclination to revisit this in the near future. I've got too much going on in terms of programming to add something else, however. But it doesn't look as hard to do as it used to.


Comment 16 by ragb (in reply to comment 15) on 2014-07-14 22:30
Hi,
Replying to camlorn:

This is being overcomplicated by an order of magnitude. NVDA has full control over the defined protocol and need only talk to NVDA-there is no negotiation.

I agree. Moreover, using virtual channels or whatever, people are asking for every single feature before anything is done, even a simple dummy channel between two NVDA instances. I'd say that remote speech would solve 90% of all current use cases, and that one is a good start.

Regarding implementation, I'd go for dynamic virtual channels as Jamie proposed. My only concern is how we can register a terminal services plugin so that it can communicate with NVDA.

If it is possible to directly expose a COM server from NVDA implementing the IWTSPlugin interface, we may be able to do all the implementation in Python/comtypes on the client side. I'm not really sure how to do this, nor where to find a TLB with this and the other interfaces (they are defined in Tsvirtualchannels.idl). If possible, we just need to listen for connections, open a virtual channel, wait for the remote side and receive speech commands. IMO speech commands should be passed to the current client speech synthesizer as they come (whatever the remote configuration is).

On the server side (NVDA running in a remote session) things seem easier, since one can find out whether it is in a remote session and act appropriately. We just need to implement a remote speech driver and use that to write commands over a dynamic virtual channel.

I'm really still not concerned about the data protocol used; it is a matter of choosing something flexible or using some existing binary format (protobuf, etc.).


Comment 17 by jteh on 2014-07-15 01:09
I suggested virtual channels, but not dynamic virtual channels. From what I recall, static virtual channels looked to be easier to implement.

For dynamic virtual channels, I doubt you could have the client COM server in NVDA itself, as it probably needs to be an in-process COM server, which means it has to be a dll. I could be wrong, though.


Comment 18 by ragb (in reply to comment 17) on 2014-07-16 13:41
Hi,
Replying to jteh:

I suggested virtual channels, but not dynamic virtual channels. From what I recall, static virtual channels looked to be easier to implement.

Ah, ok. Dynamic virtual channels have all the COM baggage associated :). I'm not sure, but for our use case static channels might be enough, and one can use the already existing DLL infrastructure to export the needed functions for the remote desktop services client DLL plugin.

For dynamic virtual channels, I doubt you could have the client COM server in NVDA itself, as it probably needs to be an in-process COM server, which means it has to be a dll. I could be wrong, though.

Don't know that much about the subject. In

http://starship.python.net/crew/theller/comtypes/server.html

You can find a COM server written in Python there, but it probably doesn't apply to our situation.

BTW, I guess it would be harder to implement this as an add-on than modifying core....


Comment 19 by mdcurran (in reply to comment description) on 2014-07-17 05:20
Replying to camlorn:
From what I can tell from reading various things from Citrix and otherwise, Citrix is implemented on top of RDP, but much of its own functionality is done within its own protocol (ICA). Therefore, if we did implement support for RDP, this would only provide improved access to Microsoft Remote Desktop Connection (Terminal Services, Remote Assistance, etc.) but not Citrix products. To support Citrix products we would need Citrix-specific code which made use of virtual channels over ICA.

The question is: What would be the most important implementation for NV Access to work on?

camlorn: You said your University has remote solutions. But do they use Microsoft Remote Desktop (Windows Terminal Services), or do they use Citrix?

We really need to hear from NVDA users as to technical requirements for this.


Comment 20 by ragb (in reply to comment 19) on 2014-07-17 11:59
Hi,
Replying to mdcurran:

We really need to hear from NVDA users as to technical requirements for this.

At least here, all the people I contacted about this are using Windows Terminal Services (Microsoft), or trying to do that by also routing audio and such.

I'd say if we abstract the transport protocol as much as possible, that is, use RDP virtual channels or whatever just to pass data around, we may be able to accommodate different solutions.

Anyway, I've been thinking about this for Windows Terminal Services/static virtual channels, and it doesn't seem as complex as it seemed, in implementation details.

For the server application module, I believe we can open, read and write to a virtual channel directly from Python using ctypes, which is more or less comfortable :).

On the client side, we must implement a DLL to run in the mstsc process. This should communicate received data and other events to NVDA, and allow NVDA to send data to the remote side and probably control some other aspects. Correct me if I'm wrong, but I think the RPC infrastructure used in nvdaHelperRemote may be reused for this, or we could even extend nvdaHelperRemote with the needed methods and use it as the terminal services DLL plugin.
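For the server-side piece, the ctypes calls would target the real wtsapi32 functions WTSVirtualChannelOpen/Write/Close. The sketch below is untested and hedged: the channel name is invented (static channel names are limited to 7 characters), and it can only actually succeed inside a real RDP session on Windows; elsewhere it raises OSError.

```python
import ctypes
import sys

# Server-side sketch: write one UTF-8 string over a static virtual channel
# using wtsapi32. The channel name "NVDASPK" is an invented example.

WTS_CURRENT_SERVER_HANDLE = None  # NULL means "this server"
WTS_CURRENT_SESSION = 0xFFFFFFFF  # (DWORD)-1 means "the current session"

def send_over_channel(text, channel_name=b"NVDASPK"):
    if sys.platform != "win32":
        raise OSError("RDP virtual channels require Windows")
    wtsapi32 = ctypes.windll.wtsapi32
    handle = wtsapi32.WTSVirtualChannelOpen(
        WTS_CURRENT_SERVER_HANDLE, WTS_CURRENT_SESSION, channel_name)
    if not handle:
        raise ctypes.WinError()  # e.g. not running inside an RDP session
    try:
        data = text.encode("utf-8")
        written = ctypes.c_ulong(0)
        if not wtsapi32.WTSVirtualChannelWrite(
                handle, data, len(data), ctypes.byref(written)):
            raise ctypes.WinError()
        return written.value
    finally:
        wtsapi32.WTSVirtualChannelClose(handle)
```

The client end would mirror this from inside the mstsc-side DLL plugin, which is where the pure-Python approach stops working.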


Comment 21 by nvdakor (in reply to comment 20) on 2014-07-17 12:11
Replying to ragb:


Right. The hardest is that the client DLL must be registered in the system, which limits its use to installed copies (see my comment above).
A few points to consider:

  • What if the server NVDA runs with higher privileges and the user uses a program with admin rights?
  • What if we encounter UAC and other secure screens in the server environment?

I do agree that at this stage, passing strings within the virtual channel would be the priority (see the link in one of the comments above for a global plugin which logs when NVDA detects that it's running under a remote user). As for further questions down the road (minimum Windows version; I prefer Vista and up), let's hold off (I say).

BTW, @ragb: there is a member on the NVDA add-ons list who has experience with network programming and has offered to implement support for this ticket. Also, there are others on the development list who are willing to assist you and Mick. I'll do some more reading and help out with testing the feature. Thanks.


Comment 22 by ragb (in reply to comment 21) on 2014-07-17 13:23
Hi,
Replying to nvdakor:

Right. The hardest is that the client DLL must be registered in the system, which limits its use to installed copies (see my comment above).

This is the same as add-on extension registration. One can try to temporarily register the DLL or whatever, but if we don't have privileges it's impossible. Anyway, if we really need an installed copy, this should not block the implementation that much. In the end this is a very specific feature.

A few points to consider:

  • What if the server NVDA runs with higher privileges and if the user uses a program with admin rights?
  • What if we encounter UAC and other secure screens in the server environment?

I think these two questions are somewhat the same. I haven't thought much about this, but I believe we will need to account for various remote NVDA instances communicating with the same client. Moreover, I don't know if there are any restrictions for remote apps in UAC contexts, e.g. being restricted in virtual channel usage or something.

I do agree that at this stage, passing strings within the virtual channel would be the priority (see the link in one of the comments for a global plugin which logs when NVDA detects that it's running under remote user). As for further questions down the road (minimum version; I prefer to use Vista and up), let's hold off (I say).

If we can pass data around, we can build all the rest over it at a much higher abstraction level, say speech or whatever. As per MSDN documentation, the code in the plugin doesn't account for some specific remote sessions in Windows 8 and above; we will need some registry checks and so on, but there are clear code samples for that.

I found some c++ code that might be of use here:

http://www.codeproject.com/Articles/87875/User-Mode-Transport-of-the-Library-Via-Virtual-Cha

At least gives some idea how to use the windows API calls in context.

BTW, @ragb: there is a member on the NVDA add-ons list who has experience with network programming and has offered to implement support for this ticket. Also, there are others on the development list who are willing to assist you and Mick. I'll do some more reading and help out with testing the feature. Thanks.

As I will surely need this in a future job, I'm trying to help where I can, but any more help is appreciated, as always :).


Comment 23 by camlorn on 2014-07-17 18:09
I can't find out what my college is using until the fall. I believe it is VMWare, actually. I think, but don't quote me on this, that they can enable RDP.
As for my thoughts, if you can run a network connected windows VM, you can enable remote access to it with Microsoft Remote Desktop: this means that, in effect, it's probably possible for employers to turn it on. The problem with literally every other solution, given that we know more about Citrix now, is that they're proprietary and thus require deals with or knowledge provided by 3rd-party companies.
I'd like to see this abstracted: an interface of some sort for connecting two NVDA instances, and a specific implementation of that for RDP. The advantage of this is that such implementations can do whatever they need: if the only way to support a given service is to communicate via a third VPS running an NVDA speech bridge, then someone can implement it as an add-on or something. The point about not knowing which services are most important is indeed a good one, and I don't have a good answer for it.
As for message format, just use JSON or XML. We don't need to tightly pack it unless there's some low-level requirement Microsoft imposes. Using Google protocol buffers would bring in more dependencies; JSON doesn't. Alternatively, and this is horrid in terms of security, pickle classes and send those across.
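A sketch of what the JSON option might look like on the wire, newline-delimited so either side can parse incrementally; the "type"/"sequence" field names are invented for illustration. Note that ensure_ascii=False also sidesteps the Unicode concerns raised earlier, since json handles arbitrary text natively:

```python
import json

# Newline-delimited JSON framing for speech/braille messages.
# Field names are illustrative, not a documented protocol.

def encode_message(msg_type, sequence):
    return json.dumps({"type": msg_type, "sequence": sequence},
                      ensure_ascii=False) + "\n"

def decode_messages(stream):
    return [json.loads(line) for line in stream.splitlines() if line]

wire = encode_message("speak", [["text", "Hello"], ["pitch", 20]])
wire += encode_message("braille", [["text", "Hello"]])
print(decode_messages(wire))
```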


Comment 24 by k_kolev1985 on 2014-07-17 19:31
Hello,

Since you want input from other NVDA users about the usage cases for this feature (if I understood correctly), I'll tell you mine, based on my own experience and observations.

To begin with, NVDA should be able to control another NVDA copy on another computer via an established connection. It should be able to send speech and braille feedback from the controlled machine to the controlling one. The speech should be sent as text and read aloud or shown on a braille display. The settings used for speech and other NVDA stuff should be from the controlling copy of NVDA and not taken from the controlled one.

My idea is to be able, if needs be, to help people like us who have computer problems: we log on to their machine via an established connection and start controlling it; the things read by the controlled copy of NVDA are sent to the controlling copy of NVDA, which speaks them or shows them on braille (or maybe both, if desired). I'm asking for this because, in most cases, less experienced blind computer users have problems with their machine and software, don't know how to solve them, and need help from somebody else. I personally have been helping such people, but it wasn't easy. The controlling was done via TeamViewer, and the speech audio from the controlled copy of NVDA was transferred via a Skype call to me. But that solution is far from the best, and one of its smallest problems is the inevitable audio latency.

And speaking of TeamViewer, I've thought of 2 more feature suggestions: sending not only speech and braille but video as well from the controlled to the controlling machine, for partially sighted users like me who help but also use a screen reader; and voice and text/chat communication, in order for the people on both sides to be able to communicate with each other if needs be, without needing to rely on other VoIP programs for that.
For the 2nd one: yes, I know there's Skype for that, but what if the person's problem is that he can't use Skype for some reason and wants help from us just for that (?).

Well, those are my 2 cents on the matter :). Thanks for reading!


Comment 25 by ragb (in reply to comment 23) on 2014-07-18 11:58
Hi,
Replying to camlorn:

I can't find out what my college is using until the fall. I believe it is VMWare, actually. I think, but don't quote me on this, that they can enable RDP.

I'd like to see this abstracted: an interface of some sort for connecting two NVDA instances and a specific implementation of that for RDP. The advantage of this is that said implementations can do whatever--if the only way to support it is to communicate via a 3rd VPS running the NVDA speech bridge, then someone can implement it as an add-on or something. The point about not knowing which services are most important is indeed a good one, and I don't have a good answer for it.

I agree with this. However, as nice as it may seem, we should not try to account for every sort of possibility, that is, abstract too much. I'm saying this because I myself sometimes tend to try to solve everything early on, and then things get far too complex for no reason.

As for message format, just use Json or xml. We don't need to tightly pack it unless there's some low-level requirement Microsoft imposes. Using Google protocol buffers will bring in more dependencies; Json doesn't. Alternatively, and this is horrid in terms of security, pickle classes and send those across.

I thought about a binary format due to the amount of data we need to pass, plus the need for the lowest latency possible. I believe JSON, and especially XML, have some overhead, but that's just a guess; in practice I don't know whether it would be noticeable. Protobuf was just the first thing that came to mind.
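To make the JSON option concrete, here is a minimal sketch of length-prefixed JSON framing for speech messages over a channel. The message fields (`type`, `text`, `priority`) and the 4-byte little-endian length prefix are illustrative assumptions, not an agreed protocol:

```python
import json
import struct

def encode_message(msg: dict) -> bytes:
    """Serialize a message dict to a length-prefixed JSON frame."""
    payload = json.dumps(msg, separators=(",", ":")).encode("utf-8")
    return struct.pack("<I", len(payload)) + payload

def decode_messages(buffer: bytes):
    """Yield complete messages from a byte buffer.

    A channel may deliver partial frames; anything incomplete is simply
    left for the next read.
    """
    offset = 0
    while offset + 4 <= len(buffer):
        (length,) = struct.unpack_from("<I", buffer, offset)
        if offset + 4 + length > len(buffer):
            break  # incomplete frame; wait for more data
        payload = buffer[offset + 4 : offset + 4 + length]
        yield json.loads(payload.decode("utf-8"))
        offset += 4 + length

frame = encode_message({"type": "speak", "text": "Hello", "priority": 0})
print(list(decode_messages(frame)))
```

Whether the JSON overhead matters in practice would need measuring; for speech strings the payloads are tiny, so the framing cost is likely dominated by channel latency rather than encoding.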


Comment 26 by ragb on 2014-07-19 11:57
Just remembered something that might be important to ask.

When we send, say, speech from the remote NVDA to the client, where should we pre-process that speech with dictionaries and so on? I'd say on the client side, since we want to use the client's settings (I guess). This makes the implementation a bit trickier, though, as one can't rely solely, or perhaps at all, on a remote speech synth driver.
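To illustrate the client-side option: since the speech arrives as plain text, the controlling client could run its own dictionary substitutions before handing the text to the local synthesizer. This is a toy sketch; the entries and the function name are hypothetical, not NVDA's actual speech dictionary API:

```python
import re

# Hypothetical client-side dictionary: (compiled pattern, replacement) pairs,
# standing in for the user's default/voice/temporary dictionaries.
SPEECH_DICT = [
    (re.compile(r"\bNVDA\b"), "N V D A"),
    (re.compile(r"(?i)\bdr\.\s"), "doctor "),
]

def apply_client_dictionaries(text: str) -> str:
    """Apply the controlling client's speech dictionaries to remote text."""
    for pattern, replacement in SPEECH_DICT:
        text = pattern.sub(replacement, text)
    return text

print(apply_client_dictionaries("NVDA says: Dr. Smith"))
```

The trade-off noted above is real: if processing happens on the client, the remote side must ship raw text (not synthesized audio), so a conventional remote synth driver alone isn't enough.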


Comment 27 by ragb on 2014-07-19 16:42
I started some of this here:

https://bitbucket.org/ragb/nvda

See branch T3564.

I coded the stub for the client DLL plugin that remote desktop calls. I decided to create a standalone DLL with this functionality; I don't think it is correct, or even suitable, to use nvdaHelperRemote for this.

My plan is to implement another interface in NVDA helper (call it nvdaRdp) to be called over RPC by the client DLL, at least for control messages (connecting, disconnecting, etc.). I'm not sure this is suitable for passing data to and from NVDA; we'd probably need an RPC server running in the DLL too. For passing data, a more efficient alternative might be a named pipe between processes, so the client DLL just works as a bridge between the client and remote NVDA instances, with no RPC marshalling and such. Even then, I believe we still need RPC for control.
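The bridging idea can be sketched independently of the Win32 details: the client DLL just pumps bytes between the virtual channel and a local named pipe. A hypothetical Python sketch, with in-memory endpoints standing in for the channel and pipe handles:

```python
import io

def pump(read_fn, write_fn, chunk_size=4096):
    """Relay bytes from one endpoint to the other until EOF.

    In a real client plugin, read_fn/write_fn would wrap the virtual
    channel and named pipe read/write calls; here they are plain
    callables so the bridging logic can be shown on its own.
    Returns the number of bytes relayed.
    """
    total = 0
    while True:
        data = read_fn(chunk_size)
        if not data:
            break
        write_fn(data)
        total += len(data)
    return total

# In-memory stand-ins: "channel" carries data from the remote NVDA
# instance, "pipe" represents the local pipe to the controlling NVDA.
channel = io.BytesIO(b"speech and braille data from the remote NVDA instance")
pipe = io.BytesIO()
pump(channel.read, pipe.write)
print(pipe.getvalue())
```

A real bridge would run two such pumps concurrently (one per direction) and keep RPC for the control path, as described above; the appeal of the pipe approach is precisely that the data path needs no marshalling.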


Comment 28 by manish on 2014-09-07 01:39
Someone in the thread above asked for user input on the kinds of remote solutions people use. In my day job, I mostly need mstsc (Remote Desktop). I also frequently need Citrix and, most recently, my-pc.
If we can start by implementing support for Remote Desktop (mstsc) and abstract out the transport so it can be extended to Citrix and others later, that'll be a good start.


Comment 29 by blindbhavya on 2014-09-07 08:10
Hi.
Somewhere, in a comment I can't find right now, someone said that this feature would be very specific and used by very few users.
In my opinion, though, this feature would be extremely useful when helping a less experienced NVDA user.
As for input from NVDA users on use cases, I more or less agree with what has already been said.
One thing, though: if I were controlling another copy of NVDA, I would want to use the settings of the copy I am controlling rather than my own configuration.


Comment 31 by nvdakor on 2014-09-07 11:06
Hi,
You might be thinking of Tandem. It is completely different from remote access, which is usually meant for people to use remote applications or log onto a work computer from home. Thanks.


Comment 32 by blindbhavya on 2014-09-07 12:19
Hi.
Yes, I certainly thought this ticket was about a feature in NVDA similar to JAWS Tandem.
So nvdakor, could you tell me off this issue tracker whether there is a ticket for an NVDA Tandem? Since I don't know your e-mail address, I am asking here. My e-mail address is [email protected]
Sorry for asking such stuff here; everyone apart from nvdakor may ignore this comment.


Comment 33 by nvdakor on 2014-09-07 12:53
Hi,
Done via email. Thanks.

@LeonarddeR
Collaborator

@jcsteh commented on 10 aug. 2017 02:13 CEST:

What error do you get?

Honestly, it was quite a long time ago that I tested this, so I don't recall. Implementing this using comtypes isn't the way to go anyway, since you would have to implement COM interfaces by hand, whereas there are perfectly good C++ headers for this. For the server side of things, hwIo would be a good starting point.

@florianionascu7

Hello everyone!
As we know, the current version of NVDA Remote is incompatible with NVDA. As I've heard, a new version of this add-on will be available once NVDA 2018.3 is released. In order to avoid this kind of issue in the future, I suggest including this add-on in NVDA. This suggestion comes from the entire Romanian NVDA community. I, like all its members, think it would be a great thing for NV Access to do. Maybe it's hard, but it's very useful for NVDA users.

@josephsl
Collaborator

josephsl commented Sep 16, 2018 via email

@florianionascu7

florianionascu7 commented Sep 16, 2018 via email

@Adriani90
Collaborator

Replying to @michaelDCurran

Comment 19 by mdcurran (in reply to comment description) on 2014-07-17 05:20
Replying to camlorn:
From what I can tell from reading various things from Citrix and elsewhere, Citrix is implemented on top of RDP, but much of its own functionality is done within its own protocol (ICA). Therefore, if we implemented support for RDP, this would only provide improved access to Microsoft Remote Desktop Connection (terminal services, Remote Assistance, etc.), not Citrix products. To support Citrix products we would need Citrix-specific code which made use of virtual channels over ICA.

The question is: What would be the most important implementation for NV Access to work on?

There is also basic RDP support in Citrix; see
https://support.citrix.com/article/CTX129184

In my opinion this is not a question of costs. I think if we started a specific campaign, there would be thousands of donations related to this functionality, perhaps enough to implement both RDP and ICA. The problem is manpower: this would be a project where many developers have to work together, and it needs communication with Microsoft and Citrix, maybe separately, to find the best way to implement both protocols. Thousands of users deal with this at work. In terms of priority, RDP and ICA are on the same level: Citrix is used in almost every bank and almost every state-owned company where data must be centralized, while RDP and VMware are mostly used in document management and system administration domains. So the question is rather: is NV Access willing to start such a worldwide campaign for this purpose and start collecting donations and sponsorship? I think NV Access could coordinate this project, but I honestly don't think NV Access will be able to do all the work alone. Still, a campaign started by the creators of NVDA is much more credible than one started by a user group or a motivated NVDA team from a certain country.

@Adriani90
Collaborator

I see this discussion going on in almost every community, and it is a debated topic at NVDACon every year. We have to start somewhere, because it really affects a lot of people.

@michaelDCurran
Member

michaelDCurran commented Dec 13, 2018 via email

@Adriani90
Collaborator

Adriani90 commented Dec 14, 2018 via email

@elliott94

I would be more than happy to contribute towards development costs for this; I'm finding that more and more employers are relying on access to multiple terminal servers, and to have at least RDP functionality integrated into NVDA's core would be a huge starting point.

@LeonarddeR
Collaborator

I'm currently building an open source solution for this, as Unicorn is a paid product and lacks several essential features, particularly the ability to operate multiple remote sessions at the same time.
The backend is written in Rust; see https://github.com/leonardder/rd_pipe-rs
The NVDA side of things will use a remote braille display driver and a remote speech synthesizer, similar to how JAWS does this.

seanbudd pushed a commit that referenced this issue Feb 14, 2023
…14531)

Related to #3564, #2315

Summary of the issue:
Currently, USB and Bluetooth devices are supported for braille display detection; devices using other protocols, as well as software devices, aren't. This PR adds support for them.

Description of user facing changes
None. Users shouldn't experience anything different for now.

Description of development approach
Added Chain, a new extension point type that allows registering handlers that return iterables (mainly generators). Calling iter on the Chain returns a generator that iterates over all the handlers.
The braille display detector now relies on a new Chain. By default, functions are registered on the chain that yield USB and Bluetooth devices. A custom provider can yield its own driver names and device matches supported by that particular driver. A potential use case would be implementing automatic detection for displays using BRLTTY, for example. It will also be used to fix Braille does not work on secure Windows screens while normal copy of NVDA is running #2315 (see below).
Added a moveToEnd method on HandlerRegistrar, which allows changing the order of registered handlers. This lets add-ons give priority to their handlers, which is especially helpful for both Chain and Filter. NVDA Remote should come before the braille viewer; otherwise, controlling a system with the braille viewer on, using an 80-cell display connected to the controller, would unnecessarily lower the display size to 40. This will also be used to register a custom handler on bdDetect.scanForDevices to support auto detection of the user's display on the secure screen instance of NVDA, which should come before USB and Bluetooth.
As a bonus, added type hints to extension points. For Filters and Chains, you can provide the value type so a type checker can verify validity.
As another bonus, all features are covered by new tests: there are tests for the Chain extension point and for the specific use case in bdDetect.
Testing strategy:
As this touches braille display auto detection quite a lot, it should probably be merged early in the 2023.2 cycle.

Known issues with pull request:
bdDetect.Detector no longer takes constructor parameters; instead, queueBgScan should be called explicitly. This is because if we queued a scan in the constructor of Detector, the detector could switch to a display and disable detection before _detector was set on the braille handler. Ideally we should use a lock as well, but that might be a follow-up for both this PR and #14524. Note that although we're changing the constructor of a public class in bdDetect, the doc string of the class explicitly states that the Detector class should be used by the braille module only. That should be enough warning for users not to use this class, so I don't consider this API breaking.
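To illustrate the Chain and moveToEnd mechanics described in the PR summary above, here is a toy sketch. The class and the two provider functions are simplified illustrations, not NVDA's actual bdDetect or extensionPoints API:

```python
class Chain:
    """Toy extension point whose handlers return iterables.

    Iterating the Chain yields items from every registered handler in
    registration order, loosely modelled on the Chain type described above.
    """

    def __init__(self):
        self._handlers = []

    def register(self, handler):
        self._handlers.append(handler)

    def moveToEnd(self, handler, last=True):
        # Reorder handlers, e.g. so a custom provider runs before (or after)
        # the default USB/Bluetooth detection.
        self._handlers.remove(handler)
        if last:
            self._handlers.append(handler)
        else:
            self._handlers.insert(0, handler)

    def __iter__(self):
        for handler in self._handlers:
            yield from handler()

def usbDevices():
    # Stand-in for the default USB device provider.
    yield ("usbDriver", "USB\\VID_1234")

def customProvider():
    # Stand-in for e.g. a BRLTTY-based provider.
    yield ("brlttyDriver", "brltty:0")

detector = Chain()
detector.register(usbDevices)
detector.register(customProvider)
detector.moveToEnd(customProvider, last=False)  # custom provider gets priority
print(list(detector))
```

In NVDA itself, moveToEnd lives on HandlerRegistrar (the base of all extension points), so the same reordering works for Filter and Action handlers too.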
@alexstine

Any movement on this issue? My employer is looking to shift our team to using RDP into Windows boxes, and I'm wondering whether I'll have any options.

Thanks.

@LeonarddeR
Collaborator

Sure, RDAccess is now a stable add-on. I have no plans to integrate it into core, especially now that there's an add-on store that lets you install it pretty easily.

@alexstine

@LeonarddeR Ever see a future where NVDA will just work on a client machine and have no requirement to be installed on the server? I highly doubt our security team approves this.

@lukaszgo1
Contributor

Ever see a future where NVDA will just work on a client machine and have no requirement to be installed on the server?

That is not technically possible: NVDA has to be running on the server side to know what is happening there, since the server is a completely separate machine which just sends an image stream to the client.

I highly doubt our security team approves this.

That is, IMHO, the biggest disadvantage of keeping the support as an add-on. There will always be companies where employees are not allowed to install add-ons, and they would be forced to use a commercial screen reader to do their job.

@alexstine

The company is not against installing add-ons; it would object to the screen reader in general. These servers run web apps, and policy states that only the software required to run the app/service is allowed. Got to love the healthcare world. I have a feeling it would be a long battle to change that. SSH is so much nicer.

@XLTechie
Collaborator

XLTechie commented Dec 28, 2023 via email

@ahicks92
Contributor

It may not violate the ADA, because the ADA is limited to reasonable accommodations, and if they are in the U.S., in this case they're probably sandwiched between the ADA and HIPAA in some fashion. Which one would win? That's a question for lawyers, not this GitHub thread. But I do second the general point that there are social advantages to having such things in core, because it is much easier to get one piece of software approved than two.

@Adriani90 Adriani90 mentioned this issue Jul 24, 2024
@seanbudd seanbudd added the triaged Has been triaged, issue is waiting for implementation. label Nov 8, 2024