Improve handling of platform config #724

Closed · jphickey opened this issue May 28, 2020 · 11 comments · Fixed by #751 or #765

@jphickey (Contributor)
Is your feature request related to a problem? Please describe.
Almost every app, including the CFE core apps, has some sort of "platform scope" internal config options, and the way we handle this for apps and external entities currently differs from the way we handle it for the CFE core. To move forward we need to consolidate this into a single, consistent method that can be applied to both external apps and core apps.

Describe the solution you'd like
CMake already generates the cfe_platform_cfg.h file, so with some tweaks we can get it to work for everything.
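
For context, a minimal sketch of the generation mechanism being referred to, assuming a hypothetical template cfe_platform_cfg.h.in with @VAR@ placeholders; the actual cFE scripts differ in detail:

```cmake
# Illustrative only: produce a platform config header from a template.
# "cfe_platform_cfg.h.in" and the CFE_PLATFORM_* cache value are assumptions
# for this sketch, not the real cFE build internals.
set(CFE_PLATFORM_ES_MAX_APPLICATIONS 32 CACHE STRING "Max apps on this platform")

configure_file(
    ${CMAKE_CURRENT_SOURCE_DIR}/cfe_platform_cfg.h.in   # contains @CFE_PLATFORM_ES_MAX_APPLICATIONS@
    ${CMAKE_BINARY_DIR}/inc/cfe_platform_cfg.h
    @ONLY)

include_directories(${CMAKE_BINARY_DIR}/inc)
```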

There are several possible approaches to consider:

Option 1: Do we generate a single "monolithic" platform header file and let all apps include it?

  • Advantage: Would look basically like the current "cfe_platform_cfg.h", and we could even keep the name, preserving backward compatibility.
  • Disadvantage: Would contain the configs for every app/module on the platform, exposing all sorts of out-of-scope info. There would be no way to force apps to use only their own config items, so they could inadvertently break ABI consistency by using config items they don't own.

Option 2: Do we generate a per-app "focused" platform header file which is only used by that app?

  • Advantage: Cleaner, with better scoping. Each app/module gets a header file containing only its own config items; apps can't use what they can't see, and thereby can't introduce unexpected ABI dependencies.

  • Disadvantage: Would probably need a different name, as we can't call everything "cfe_platform_cfg.h" (too confusing), and would probably (eventually) require breaking up the current cfe_platform_cfg.h into a config file per core app (es_platform_cfg.h, evs_platform_cfg.h, etc.). The current CFE core also has examples of cross-pollination, where EVS uses data structures defined by ES which are based on platform config; these become undocumented/uncontrolled ABI dependencies that we'd have to fix.
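
A rough sketch of how Option 2 might look in CMake, where each app gets a private generated header; the helper name and directory layout here are hypothetical, not an agreed design:

```cmake
# Hypothetical: give one app a "focused" platform config header on its own
# private include path, so other apps cannot see (or depend on) it.
function(generate_app_platform_cfg APP_NAME)
  set(GEN_DIR ${CMAKE_BINARY_DIR}/cfg/${APP_NAME})
  file(MAKE_DIRECTORY ${GEN_DIR})
  configure_file(
      ${CMAKE_SOURCE_DIR}/apps/${APP_NAME}/fsw/platform_inc/${APP_NAME}_platform_cfg.h
      ${GEN_DIR}/${APP_NAME}_platform_cfg.h
      COPYONLY)
  # PRIVATE: the header is visible to this target only, enforcing the scoping
  # described above. Call this after add_library(${APP_NAME} ...).
  target_include_directories(${APP_NAME} PRIVATE ${GEN_DIR})
endfunction()
```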

Additional context
Option 2 is cleaner but arguably more work: it might take a little longer to implement and would have a bigger impact on apps.

This type of issue is coming more to the forefront when considering things like #554, but there have also been periodic issues posted in the past regarding the "weirdness" of the way cfe_platform_cfg.h is handled, so it would be good to fix that in general too. We need some sort of community consensus before implementing anything, though.

Requester Info
Joseph Hickey, Vantage Systems, Inc.

@skliper (Contributor) commented May 28, 2020

Does this have to be done? I don't think the current pattern is all that unreasonable. Apps each have their own config; the core has its config.

@skliper (Contributor) commented May 28, 2020

If we start actively changing config options often, it seems like it will lead to more breakage... I'd prefer reasonable stability (keep things as-is), with the expectation that cfe_platform_cfg should typically just work (except when we actually change the core). If a mission does want to go off on their own they certainly can, but the core framework doesn't seem to me like it needs a "fix".

@jphickey (Contributor, Author) commented May 28, 2020

Not changing anything may be an option, but there are some concerns:

  1. The idea of replacing core apps such as TBL or maybe EVS with a user-supplied alternative implementation of that core app. Does this alternative get access to cfe_platform_cfg.h or would it have its own platform config?

This isn't currently a problem, but it will become one if we really want to modularize the CFE core components further and allow missions to replace them.

  2. How to customize an app config without forking it. The current recommendation is to put platform-scope configs into a loadable table, which I still recommend whenever possible, but in some cases this isn't as easy.

The "sample_app" is possibly a bad example as it only has a single perf ID in its mission config, but still, it is something that the users are expected to customize (because each mission individually manages perf IDs) but currently the only way to do so is by editing the sample_app/fsw/mission_inc/sample_app_perfids.h file in place, which is exactly the type of direct modification that we try to avoid.

Something like the "FM" CFS app is possibly a better example; it has lots of platform-scope options here: https://github.com/nasa/FM/blob/master/fsw/platform_inc/fm_platform_cfg.h

These include things related to its structure sizes, which are not always easy to put into a loadable config table. If a user wants to customize them, currently the only option is to edit the file directly. Furthermore, we don't have the ability to use a different app config for each mission config, which might be a logical expectation.

  3. The current inconsistency in how cfe_platform_cfg.h is exposed (or not) to apps. The current implementation has a rather ugly workaround that provides the path to this platform file if and only if it is being built for a single CPU. Merely adding SIMULATION=native can change this, and suddenly users find the file is unavailable and the build fails.

Obviously the better/preferred solution is to not include cfe_platform_cfg.h at all, but a similar issue occurs with cfe_msgids.h too, which goes back to concern (2) above.

The workaround that's currently there was only ever supposed to be temporary, for backward compatibility until apps could get updated, but it's been about 5-6 years and some apps still aren't updated. For msgids the goal was to get them out of a database of some type, but that probably won't happen near term either. In the meantime users are getting caught by this, and we don't have a real solid solution/alternative.

@jphickey (Contributor, Author)

Some of these concerns can also be addressed by changing the patterns used in app CMake files, such as replacing direct refs like these:

https://github.com/nasa/sample_app/blob/4b6d54ff6216cc8dd6cceab1dbcd1db21f456f69/CMakeLists.txt#L4-L5

The direct inclusion of the default mission_inc and platform_inc dirs could potentially be changed to a call to generate_include_configfile, allowing use of a file from the user's config instead. That might be enough for concern (2) above, but it wouldn't help the others.
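
A hedged sketch of what that could look like; since generate_include_configfile is not defined anywhere yet, a minimal stand-in is written out here, and MISSION_DEFS (the user's config area) is an assumption of this sketch:

```cmake
# Minimal stand-in for the generate_include_configfile idea: prefer a file
# from the user's config area, fall back to the app-provided default.
function(generate_include_configfile FILE_NAME FALLBACK_DIR)
  if(EXISTS ${MISSION_DEFS}/${FILE_NAME})
    set(SRC ${MISSION_DEFS}/${FILE_NAME})
  else()
    set(SRC ${CMAKE_CURRENT_SOURCE_DIR}/${FALLBACK_DIR}/${FILE_NAME})
  endif()
  configure_file(${SRC} ${CMAKE_BINARY_DIR}/inc/${FILE_NAME} COPYONLY)
endfunction()

# Instead of:
#   include_directories(fsw/mission_inc)
#   include_directories(fsw/platform_inc)
# the app's CMakeLists.txt would do something like:
generate_include_configfile(sample_app_platform_cfg.h fsw/platform_inc)
include_directories(${CMAKE_BINARY_DIR}/inc)
```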

@skliper (Contributor) commented May 29, 2020

How about implementing a general pattern where any header file prefixed with the target name gets copied into a target-specific include directory early in the search path (with the name updated as needed), and anything prefixed with the mission name goes to the mission include directory? I'd think it would be useful to still fall back on the default if there isn't a specific one in the user config. The advantage is that a change like this happens under the hood: it doesn't require updating all the app CMake files, and no user action is required to get the default behavior. (See the sketch after this comment.)

This might also be a possible solution for the mission-configured message header: if we broke up ccsds.h to separate out the actual definition, no defines would need to clutter the code... I like this approach way better than adding MESSAGE_FORMAT_IS_CUSTOM.
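
A minimal sketch of the copy-with-fallback pattern described above; TARGET_NAME, MISSION_DEFS, and the directory layout are assumptions for illustration:

```cmake
# For each header named <target>_*.h in the user's defs area, copy it into a
# target-specific include dir with the prefix stripped, so apps keep using
# the generic #include name. Apps fall back to their defaults otherwise.
set(TGT_INC ${CMAKE_BINARY_DIR}/${TARGET_NAME}/inc)
file(MAKE_DIRECTORY ${TGT_INC})

file(GLOB TGT_HDRS ${MISSION_DEFS}/${TARGET_NAME}_*.h)
foreach(HDR ${TGT_HDRS})
  get_filename_component(BASE ${HDR} NAME)
  string(REPLACE "${TARGET_NAME}_" "" BASE "${BASE}")   # strip the prefix
  configure_file(${HDR} ${TGT_INC}/${BASE} COPYONLY)
endforeach()

include_directories(BEFORE ${TGT_INC})   # early in the search path
```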

@jphickey (Contributor, Author)

Yes, I will try this out....

I'm not sure that simply catching every header file is the right way, but if we accept a general naming convention of:

${MISSION_NAME}_${APP_NAME}_${FILENAME}.h
${CONFIG_NAME}_${APP_NAME}_${FILENAME}.h

Then we can generate wrapper headers and provide an include path to them, alleviating the need for apps to include their own platform_inc/mission_inc dirs, although we could still allow that as a default/fallback.
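
A sketch of that wrapper generation under the naming convention above; the variables and paths are illustrative assumptions:

```cmake
# Resolve ${CONFIG_NAME}_${APP_NAME}_platform_cfg.h to the user's file when
# one exists, else to the app's default, then emit a wrapper so app source
# code can keep a stable #include name.
set(WRAP_DIR ${CMAKE_BINARY_DIR}/wrappers/${APP_NAME})
file(MAKE_DIRECTORY ${WRAP_DIR})

set(USER_HDR ${MISSION_DEFS}/${CONFIG_NAME}_${APP_NAME}_platform_cfg.h)
if(EXISTS ${USER_HDR})
  set(REAL_HDR ${USER_HDR})
else()
  set(REAL_HDR ${${APP_NAME}_SOURCE_DIR}/fsw/platform_inc/${APP_NAME}_platform_cfg.h)
endif()

file(WRITE ${WRAP_DIR}/${APP_NAME}_platform_cfg.h
     "/* Generated wrapper - do not edit */\n#include \"${REAL_HDR}\"\n")
```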

@skliper (Contributor) commented May 29, 2020

Then we can generate wrapper headers and provide an include path to them, alleviating the need for apps to include their own platform_inc/mission_inc dirs, although we could still allow that as a default/fallback.

Sounds good... could you make it APP_NAME or CFE_MODULE_NAME (or whatever) so I can customize the msg header using the same mechanism?

@jphickey (Contributor, Author)

OK, after digging in a bit, I think it can work for mission-scope stuff, but the problem (mismatch?) remains that we don't actually build apps on a per-config basis; we build apps on a per-toolchain basis. CFE core, OTOH, is built per-config, so (potentially) more than once per toolchain.

We could move apps to work the same way, but it'll involve moving some logic around and it will affect the resulting build tree; in some ways it might make sense to do that. It would reduce the modularity of apps to some degree, but maybe that's not a big concern.

@jphickey (Contributor, Author)

It would mean baking the config name into the target name, like is done for CFE (that's why it's called e.g. libcfe_core_default.a rather than just libcfe_core.a). Because apps were built independent of config, the name just matched, which was nice and simple (sample_app is just sample_app), but now it would be more like sample_app_default_cpu1 or something. That can be renamed on install, but it will affect anyone who has done target_link_libraries or similar assuming that the target name matches the app name.
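
For illustration, roughly what that naming change would mean in an app's CMake (the target and variable names here are assumptions):

```cmake
# Bake the platform config into the CMake target name, as CFE core already
# does (libcfe_core_default.a), but keep the installed file name plain.
set(APP_TGT sample_app_${TGT_PLATFORM})          # e.g. sample_app_default_cpu1
add_library(${APP_TGT} MODULE fsw/src/sample_app.c)
set_target_properties(${APP_TGT} PROPERTIES
    OUTPUT_NAME sample_app    # installed object keeps the plain app name
    PREFIX "")
# Anyone who previously wrote target_link_libraries(sample_app ...) would
# now have to reference ${APP_TGT} instead.
```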

@jphickey (Contributor, Author)

The other option is to totally turn the build around and create a sub-build per config rather than per toolchain. While this would be a major change, it might actually be more backward compatible/less disruptive than the former.
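
A sketch of the turned-around structure, with all variables assumed for illustration: the top-level script would configure one child build tree per platform config instead of per toolchain:

```cmake
foreach(CONFIG ${MISSION_PLATFORM_CONFIGS})      # e.g. default_cpu1;default_cpu2
  file(MAKE_DIRECTORY ${CMAKE_BINARY_DIR}/${CONFIG})
  execute_process(
      COMMAND ${CMAKE_COMMAND}
              -DPLATFORM_CONFIG=${CONFIG}
              -DCMAKE_TOOLCHAIN_FILE=${TOOLCHAIN_FOR_${CONFIG}}
              ${CMAKE_SOURCE_DIR}
      WORKING_DIRECTORY ${CMAKE_BINARY_DIR}/${CONFIG})
endforeach()
```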

@skliper (Contributor) commented May 29, 2020

I don't have strong opinions on any of the options, but building per config like the CFE core, for a consistent pattern, sounds reasonable. Would this approach simplify/address any other user concerns?

jphickey added a commit to jphickey/cFE that referenced this issue Jun 2, 2020
The existing build system built target executables grouped by toolchain
as a proxy for CPU architecture + machine options/flags.  The app binaries
would be built once and copied to any/all targets sharing that toolchain.

The side effect of doing this is that the application needs to be written
in an CPU-agnostic manner, performing its subscriptions and configurations
from runtime table data rather than hardcoded/fixed values.  Unfortunately
most apps are not coded that way, so workarounds were needed.

This changes the top level process to include the "platform" within this
target build logic, effectively treating different platform configs as
entirely different builds, even if they share the same toolchain file.

As a result, binaries will only be shared between targets that explicitly
set the "TGTx_PLATFORM" setting in targets.cmake to the same value.
jphickey added a commit to jphickey/cFE that referenced this issue Jun 19, 2020 (same message as above)

jphickey added a commit to jphickey/cFE that referenced this issue Jun 26, 2020 (same message as above)

jphickey linked a pull request Jun 26, 2020 that will close this issue

astrogeco pushed a commit that referenced this issue Jul 1, 2020 (same message as above)

astrogeco pushed a commit that referenced this issue Jul 26, 2020 (same message as above)

skliper added this to the 6.8.0 milestone Sep 24, 2021