
QSRV 2 #37

Merged: 17 commits into master from qsrv on May 10, 2023

Conversation

@mdavidsaver (Member) commented Mar 27, 2023

The rewrite of QSRV using PVXS.

Current Status: Alpha. Testable, but lacking some features, principally PVA Links.

  • Single PV access (aka. RSRV equivalence)
  • Group PV access
  • PVA Links

My compatibility target for the QSRV functionality is Base >= 3.15 for Single PV, and >= 7.0 for Group PV. The lack of Group PV support with 3.16 is only a matter of my not having spent the time to add #ifdefs and testing.

Following the pattern of pvget -> pvxget, this branch adds a softIocPVX executable.

QSRV functionality is added to the existing libpvxsIoc.so library when available (i.e. with Base > 3.14), so the build-time instructions don't change. However, while the PVXS server is started during iocInit, at the moment the database interactions must be explicitly enabled by setting:

export PVXS_QSRV_ENABLE=YES

If all goes well, iocInit() will print:

INFO: PVXS QSRV2 is loaded and ENABLED.
Starting iocInit
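
For example, a complete invocation might look like the following sketch (the database file name and architecture path are hypothetical, and softIocPVX is assumed to accept the usual softIoc -d option):

# enable QSRV2, then start the soft IOC with a test database
export PVXS_QSRV_ENABLE=YES
./bin/linux-x86_64/softIocPVX -d demo.db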

Open Questions:

  • What should interoperability with QSRV be? My current goal is to allow both to be linked and run without ill effect (apart from duplicate PV warnings). Is this the way to go?

TODO:

pva2pva issues already addressed:

Added features:

  • dbLoadGroup() accepts macros (see the sketch below).
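
A sketch of how this might be used (the file name, group name, and macro names are hypothetical; the JSON follows the usual QSRV group file syntax):

# st.cmd: load a group definition file, expanding $(P) within it
dbLoadGroup("groups.json", "P=DEMO:")

# groups.json: map a record's VAL into a group member
{
    "$(P)grp": {
        "value": {"+channel": "$(P)ai.VAL"}
    }
}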

@mdavidsaver added the "enhancement" label on Mar 27, 2023
@mdavidsaver self-assigned this on Mar 27, 2023
@AppVeyorBot: Build pvxs 1.0.851 failed (commit 606e2a77a3 by @mdavidsaver)

@AppVeyorBot: Build pvxs 1.0.856 failed (commit 5d8737676b by @mdavidsaver)

@AppVeyorBot: Build pvxs 1.0.859 failed (commit b8d6bcddf0 by @mdavidsaver)

@AppVeyorBot: Build pvxs 1.0.861 failed (commit 71e3ea4120 by @mdavidsaver)

@AppVeyorBot: Build pvxs 1.0.869 failed (commit 1aba651cea by @mdavidsaver)

@AppVeyorBot: Build pvxs 1.0.869 completed (commit 1aba651cea by @mdavidsaver)

@ericonr (Contributor) commented Apr 27, 2023

Hi!
I got to this PR from the EPICS Collab Meeting :)
I am using it in one of our new IOCs, which I'm currently developing and testing, and it seems to be working fine. I have tested puts, gets, and monitors on scalars (long* and a* records) and waveforms (aa* records with FTVL being SHORT, LONG, or DOUBLE).
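
For reference, a minimal database covering similar record types might look like this sketch (record and macro names are hypothetical):

# a scalar and an array record to exercise puts, gets, and monitors
record(longout, "$(P)lo") {
}
record(waveform, "$(P)wf") {
    field(FTVL, "DOUBLE")
    field(NELM, "128")
}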

Are there any attention points I should focus on? Things to stress test?

@mdavidsaver force-pushed the qsrv branch 2 times, most recently from 93136b5 to 4ee3edd on May 8, 2023 at 17:47
@mdavidsaver (Member, Author) commented:

> Are there any attention points I should focus on? Things to stress test?

This is a good question for which I don't have a simple answer. The areas where I can imagine problems include:

  • Feature completeness.

Especially the odd corners of group mappings and other info(...) tags, some of which I may not have intended. (A sketch of a basic group mapping follows this list.)

  • Performance with many clients.

I expect that performance with a single client will be similar to QSRV1. However, the PVXS client and server don't use as many threads as pvAccessCPP, so there is a potential for performance regression with a large number of clients.

  • Resource leaks

This was (and is) a common bug with the pv*CPP modules, particularly leaks which are cleaned up on exit but grow without bound in a long-running process: the kind which are "invisible" to tools like valgrind.
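
For context, a basic group mapping declared via an info() tag looks something like the following sketch (record and group names are hypothetical; +channel and +trigger are part of the documented QSRV group syntax). The "odd corners" are the less common option keys and their interactions, not this basic form:

record(ai, "$(P)x") {
    # publish this record's VAL as field "x" of the group,
    # and post a group update whenever it changes
    info(Q:group, {
        "$(P)grp": {
            "x": {"+channel": "VAL", "+trigger": "*"}
        }
    })
}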

@AppVeyorBot: Build pvxs 1.0.879 completed (commit 4bf6e46f5a by @mdavidsaver)

@mdavidsaver (Member, Author) commented:

>   • Resource leaks
>
> This was (and is) a common bug with the pv*CPP modules, particularly leaks which are cleaned up on exit but grow without bound in a long-running process: the kind which are "invisible" to tools like valgrind.

I have added a set of three IOC shell commands to help with monitoring long-term resource usage: pvxrefshow, pvxrefsave, and pvxrefdiff. (Similarly named commands are provided by pva2pva.) These commands read and compare a set of per-class C++ instance counters. Checking these counters should make it apparent whether certain types of slow memory leak are occurring.
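
The intended workflow is to save a baseline, exercise the server, and then print the deltas. A sketch, assuming argument-free invocations matching the example below:

epics> pvxrefsave
# ... connect a client, create a monitor, then disconnect ...
epics> pvxrefdiff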

For example, the delta after a client monitoring one PV has disconnected:

epics> pvxrefdiff 
MonitorOp       = -1
ServerChan      = -1
ServerConn      = -1
ServerMonitorControl    = -1
SingleInfo      = -1
SingleSourceSubscriptionCtx     = -1
StructTop       = -3

@mdavidsaver (Member, Author) commented May 10, 2023

At this point I am satisfied that, with QSRV2 disabled at runtime (the default), current users will not see a regression. This holds even in the pathological case where both libpvxsIoc and libqsrv (from pva2pva) are loaded at the same time.

So this meets my criteria for merging and releasing this code as a "feature preview".

@mdavidsaver marked this pull request as ready for review on May 10, 2023 at 01:10

Commits:
ioc: check for mis-matched onStartSubscription()/onDisableSubscription()

ioc: fix subscription lifetime

ioc: catch exceptions in dbEvent callbacks

ioc: avoid unnecessary virtual

ioc: minor

ioc: fix qsrv -S

ioc: qsrvGroupSourceInit() catch+log

ioc: runOnServer avoid std::function

ioc: cleanup and simplifications.

Avoid some redundant std::map lookups.
Make Group partially const to prevent implicit ctor.

ioc: avoid typedefs only used once

ioc: overhaul Group::show().  shows triggers

ioc: MappingType

ioc: pvxsgl -> pvxgl

ioc: separate group config singleton from server singleton

ioc: remove unnecessary forward declarations

ioc: restructure pvxsInitHook

ioc: qsrv runtime disable by default

ioc: compat w/ older Base

ioc: link pvxsIoc w/ DB libs

ioc: Channel proper detection of invalid PV

ioc: no need to keep vector<dbCommon*> around

ioc: fix initial group update for mappings w/o dbChannel

ioc: redo testing

split out group tests, only run with Base >= 7.0

ioc: minor

ioc: loc_bad_alloc

ioc: avoid symbol/DTYP clash with pva2pva

ioc: test record alias in group json

ioc: test put failure when SPC_NOMOD and DISP=1

ioc: test channel filters

ioc: unnecessary capture

ioc: avoid sharing Value between multiple subscriptions

It is possible to create two subscriptions through the same channel.

ioc: group subscription include queueSize

ioc: eliminate unused atomicMonitor

ioc: consolidate GroupSource::get()

avoid some indirection

ioc: pvRequest override of atomicPutGet

ioc: fix group non-atomic put

ioc: test asTrap hooks

ioc: test putOrder also sets field order

ioc: simplify GroupConfigProcessor::loadConfigFiles()

Also ensure that groupMapMutex is held

ioc: testqgroup cover JSON def.

ioc: dbLoadGroup() use macros

ioc: pvxsl() take integer argument

ioc: display.form and info(Q:form

ioc: "NO_ALARM" -> ""

ioc: use dbServer

at least for informational callbacks.

ioc: consolidate createRequestAndSubscriptionHandlers()

ioc: eliminate ChannelAndLock

properties dbChannel doesn't need a separate DBManyLock

ioc: test that putOrder also controls field order

ioc: MappingType -> MappingInfo

Handle info(Q:time:tag
Add +type:"const"

ioc: cleanup includes

ioc: test dbNotifyCancel()

ioc: inline checkForTrailingCommentsAtEnd()
Allow for use by pvxsIoc
inline runOnServer(), which shouldn't re-throw anyway
@AppVeyorBot: Build pvxs 1.0.886 completed (commit 1a05e2338f by @mdavidsaver)

@mdavidsaver merged commit 752b2f7 into master on May 10, 2023
@mdavidsaver (Member, Author) commented:

Merged.

Please continue to report test results here. Open issues for any bugs encountered.
