
Fix #724, implement config-based target builds #728

Closed
jphickey wants to merge 1 commit from the fix-724-config-based-builds branch

Conversation

@jphickey (Contributor) commented Jun 2, 2020

Describe the contribution
The existing build system built target executables grouped by toolchain as a proxy for CPU architecture + machine options/flags. The app binaries would be built once and copied to any/all targets sharing that toolchain.

The side effect of doing this is that the application needs to be written in a CPU-agnostic manner, performing its subscriptions and configurations from runtime table data rather than hardcoded/fixed values. Unfortunately, most apps are not coded that way, so workarounds were needed.

This changes the top-level process to include the "platform" within this target build logic, effectively treating different platform configs as entirely different builds, even if they share the same toolchain file.

As a result, binaries will only be shared between targets that explicitly set the TGTx_PLATFORM setting in targets.cmake to the same value.
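
For illustration, a minimal targets.cmake sketch of how that sharing rule reads (the target and platform names here are invented):

```cmake
# Hypothetical targets.cmake excerpt: cpu1 and cpu2 name the same
# platform config, so they share one set of binaries; cpu3 names a
# different one and gets a fully separate build pass.
SET(TGT1_NAME cpu1)
SET(TGT1_PLATFORM default_cpu1)    # same value as TGT2 -> binaries shared

SET(TGT2_NAME cpu2)
SET(TGT2_PLATFORM default_cpu1)

SET(TGT3_NAME cpu3)
SET(TGT3_PLATFORM special_cpu3)    # different value -> built separately
```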

Fixes #724

Testing performed
  • Build CFE, run unit tests + sanity check (default config).
  • Add a CPU2 configuration using its own (different) default platform settings; unit test + sanity check. CPU2 is built separately from CPU1; all OSAL, PSP, CFE, and app binaries are re-built.
  • Add a CPU2 configuration sharing the same platform config settings; unit test + sanity check. CPU2 is built like the previous implementation, where binaries are built once and copied to both installations.

Expected behavior changes
No impact to run-time behavior; only the build is affected.
This does change the build tree, adding an extra subdirectory at the architecture level: for instance, rather than build/native it becomes build/native/default_cpu1. The sample makefile is updated so make test works with the default.

System(s) tested on
Ubuntu 20.04

Additional context
As a side effect, this makes all platform configs available to all apps - there is no real scope separation anymore. So including cfe_platform_cfg.h from an app will no longer trigger an error, even though I'd still recommend against actually doing that.

A future improvement might be to build OSAL + PSP separately and only do the CFE + APPS per config, using an imported target.
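
A minimal sketch of that imported-target idea, applied to OSAL (the library name and install paths are assumptions, not part of this change; PSP would follow the same pattern):

```cmake
# Hypothetical: wrap a prebuilt OSAL as an IMPORTED library so each
# per-config build links against it instead of rebuilding it from source.
add_library(osal STATIC IMPORTED)
set_target_properties(osal PROPERTIES
    IMPORTED_LOCATION "${OSAL_INSTALL_DIR}/lib/libosal.a"
    INTERFACE_INCLUDE_DIRECTORIES "${OSAL_INSTALL_DIR}/include"
)
```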

Contributor Info - All information REQUIRED for consideration of pull request
Joseph Hickey, Vantage Systems, Inc.

@jphickey added the CCB:Ready label (Ready for discussion at the Configuration Control Board) on Jun 2, 2020
@skliper (Contributor) commented Jun 2, 2020

I see the mission_inc and platform_inc override support now - are those the only two new categories of headers that can be overridden? I don't have any problem with that; I just wanted to make sure I didn't miss anything. Likely worth some documentation in the app developer's guide, since this seems like a significant new feature. Ping @ejtimmon @tngo67 for awareness: there is no longer any need to change files within app directories for platform_inc/mission_inc customization; they can be overridden from the *_defs directory.

@astrogeco (Contributor) commented Jun 3, 2020

CCB-2020-06-03:

  • @jphickey work with Tam to see how their team sets things up
  • @tngo67 will set up separate meeting with interested parties

edit - tagged the right Tam

@astrogeco added the CCB-20200610 label and removed the CCB:Ready label on Jun 5, 2020
@astrogeco (Contributor) commented Jun 10, 2020

CCB 2020-06-10: @jphickey, @skliper, and @tngo67 will tag up to discuss further implementation changes to the architecture in this PR. I'll convert this to a draft until there's an agreement.

@astrogeco marked this pull request as draft on June 10, 2020 17:03
@skliper (Contributor) commented Jun 10, 2020

My take on the discussion - the suggestion is to remove the logic for app include-name matching, etc., and just include directories from within the target defs directory as part of the user config. Eventually also remove the mission "special" headers that get moved in, to transition to this single model. Note I'll show an example of this related to #726.

@ghost commented Jun 11, 2020

I definitely agree that a toolchain-specific build would ideally have all configuration items in tables, but as @jphickey mentioned, that is not possible with defines that set sizes.
With this change, how would we name target-specific headers for apps and the cFE core, and where are they located?
I could probably use a higher-level overview of how we intend to do platform-specific configuration; please include me in a tagup.

@skliper (Contributor) commented Jun 11, 2020

> I definitely agree that a toolchain-specific build would ideally have all configuration items in tables, but as @jphickey mentioned, that is not possible with defines that set sizes.
> With this change, how would we name target-specific headers for apps and the cFE core, and where are they located?
> I could probably use a higher-level overview of how we intend to do platform-specific configuration; please include me in a tagup.

In my example at https://github.com/skliper/cFS/tree/test_module_override I just put mission config elements in sample_defs/mission_inc and included it from the arch-level cmake. I'd likely do something similar to what JSC does for target-specific headers (name a directory for each target's header files and include it from the target-level cmake).
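
A rough sketch of that pattern (the directory names are illustrative, not taken from the linked branch):

```cmake
# Hypothetical arch/target-level cmake fragment: pull override headers
# from the defs tree so they are found ahead of the in-tree defaults.
include_directories(${MISSION_DEFS}/mission_inc)   # mission-wide overrides
include_directories(${MISSION_DEFS}/cpu1_inc)      # per-target overrides
```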

@ghost commented Jun 11, 2020

Does this separate the toolchain file from the CPU/target (i.e. toolchain-cpu1.cmake vs. toolchain-i686-rtems4.11.cmake)? Although I guess the toolchain-cpuX.cmake files were not actually tied to the cpu1 build, but provided as a convenience, correct?

I'm not sure it is possible, but it seems logical to me that if we had targets (cpu1, cpu2, backup_processor, etc.), then each target would require a toolchain file.
If we had such a separation, would we still need native vs. SIMULATION= in a build? If we always have to specify a toolchain file for a configuration, I wouldn't think we would need SIMULATION=:

Example:
  build TARGET=cpu1 TOOLCHAIN=x86_64-linux-gcc
  build TARGET=cpu1 TOOLCHAIN=aarch64-linux-gcc
  build TARGET=cpu2 TOOLCHAIN=rtems5-leon3
  etc.

or even something like:
  build TARGET_CFG=cpu1 TARGET_ARCH=rtems5-leon3-rcc1.3
  build TARGET_CFG=cpu1 TARGET_ARCH=rtems5-leon3-gcc
and so on (showing the public RTEMS tools versus the Gaisler toolchain release, RCC).

I used TARGET_ARCH, since TOOLCHAIN includes the OSAL and PSP selection.
Would it even make more sense to break out TOOLCHAIN from OSAL/PSP, or are they always tied together?

@ghost commented Jun 11, 2020

>> I definitely agree that a toolchain-specific build would ideally have all configuration items in tables, but as @jphickey mentioned, that is not possible with defines that set sizes.
>> With this change, how would we name target-specific headers for apps and the cFE core, and where are they located?
>> I could probably use a higher-level overview of how we intend to do platform-specific configuration; please include me in a tagup.
>
> In my example at https://github.com/skliper/cFS/tree/test_module_override I just put mission config elements in sample_defs/mission_inc and included it from the arch-level cmake. I'd likely do something similar to what JSC does for target-specific headers (name a directory for each target's header files and include it from the target-level cmake).

Makes sense. Does JSC have duplicate filenames under each target-specific directory? This was one of the complaints about the original build system: having many files with the same name scattered throughout the tree.

@skliper (Contributor) commented Jun 11, 2020

>>> I definitely agree that a toolchain-specific build would ideally have all configuration items in tables, but as @jphickey mentioned, that is not possible with defines that set sizes.
>>> With this change, how would we name target-specific headers for apps and the cFE core, and where are they located?
>>> I could probably use a higher-level overview of how we intend to do platform-specific configuration; please include me in a tagup.
>>
>> In my example at https://github.com/skliper/cFS/tree/test_module_override I just put mission config elements in sample_defs/mission_inc and included it from the arch-level cmake. I'd likely do something similar to what JSC does for target-specific headers (name a directory for each target's header files and include it from the target-level cmake).
>
> Makes sense. Does JSC have duplicate filenames under each target-specific directory? This was one of the complaints about the original build system: having many files with the same name scattered throughout the tree.

For platform configuration header files, yes - basically a full "set" of the configuration headers (same names) for each target within target_defs. They actually had a depot with the common headers and just linked whatever didn't need to change. In terms of the rest of the tree, there's just the platform_inc/mission_inc version... which they may have removed from the include path. Hopefully it's not as scattered as before.

@tngo67 commented Jun 11, 2020

>>>> I definitely agree that a toolchain-specific build would ideally have all configuration items in tables, but as @jphickey mentioned, that is not possible with defines that set sizes.
>>>> With this change, how would we name target-specific headers for apps and the cFE core, and where are they located?
>>>> I could probably use a higher-level overview of how we intend to do platform-specific configuration; please include me in a tagup.
>>>
>>> In my example at https://github.com/skliper/cFS/tree/test_module_override I just put mission config elements in sample_defs/mission_inc and included it from the arch-level cmake. I'd likely do something similar to what JSC does for target-specific headers (name a directory for each target's header files and include it from the target-level cmake).
>>
>> Makes sense. Does JSC have duplicate filenames under each target-specific directory? This was one of the complaints about the original build system: having many files with the same name scattered throughout the tree.
>
> For platform configuration header files, yes - basically a full "set" of the configuration headers (same names) for each target within target_defs. They actually had a depot with the common headers and just linked whatever didn't need to change. In terms of the rest of the tree, there's just the platform_inc/mission_inc version... which they may have removed from the include path. Hopefully it's not as scattered as before.

To clarify, we don't have a full set of config headers, only the ones we actually need to customize. If a header is not found in target_defs, cmake will use the one in the app's fsw directory. And it is the same file name, via symlink, so we can tell which version of the header file is used for a particular build.

@jphickey (Contributor, Author)

> Makes sense. Does JSC have duplicate filenames under each target-specific directory? This was one of the complaints about the original build system: having many files with the same name scattered throughout the tree.

This is one of my remaining concerns about not doing the generate_config_includefile step. One important thing this function does is consolidate several differently-named files (each adding a prefix to a common base name) into a single include file that the FSW can use. This importantly avoids the situation where we have many copies of files with the same name and it is not clear which one is used.

For instance, consider "cfe_mission_cfg.h" and "cfe_platform_cfg.h" -- in a configured build there should be just one copy of each file. An automated tool (such as an IDE) can find the file, parse its content, and know exactly which definitions are in use; it's completely unambiguous.

In contrast, if we have multiple different versions of the same file name within the same source tree, it becomes very unclear which one is being used. One has to look at the include path supplied to the compiler to figure out which is active, and that is neither obvious nor automatic. IDEs and other tools will be much more confused when parsing the code.

So for this reason I'm still thinking we should keep the generate_config_includefile() function as it is, because it avoids the duplicate-name problem.
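
To make the consolidation idea concrete, here is a conceptual sketch of the effect (this is not the actual implementation of generate_config_includefile(), and the variable and file names are illustrative):

```cmake
# Conceptual only: pick the highest-priority prefixed variant that
# exists and emit a single, unambiguous wrapper header for the FSW.
set(WRAPPER "${CMAKE_BINARY_DIR}/inc/cfe_mission_cfg.h")
foreach(PREFIX ${MISSIONCONFIG} cfe)
    set(CANDIDATE "${MISSION_DEFS}/${PREFIX}_mission_cfg.h")
    if(EXISTS "${CANDIDATE}")
        file(WRITE "${WRAPPER}" "#include \"${CANDIDATE}\"\n")
        break()
    endif()
endforeach()
```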

@jphickey (Contributor, Author)

Clarification on the above: there is an editor_inc directory which has a separate copy of the same file name. This one is a true duplicate, so either copy is OK to reference, but I think with this change we can get rid of that duplicate too. Ideally there should be just one file name with one content, particularly for headers. Source files are a slightly different paradigm, I think - identically-named sources chosen by the build system don't cause quite the same level of confusion as identically-named headers.

@skliper (Contributor) commented Jun 18, 2020

What would the approach be for users to customize core header files? Or to override default app configurations without removing search paths and requiring clone-and-own of every header file, even ones you don't want to customize?

@jphickey (Contributor, Author)

> What would the approach be for users to customize core header files? Or to override default app configurations without removing search paths and requiring clone-and-own of every header file, even ones you don't want to customize?

I just verified/validated that adding a line such as this to arch_build_custom.cmake works just fine:

include_directories(${MISSION_DEFS}/custom_inc)

You can then add whatever header files you want to custom_inc, and since this file is processed early, the path(s) here become the first -I options to the compiler, thereby superseding any other header files within the project itself.

Obviously, this creates exactly the problem/confusion described above: multiple copies of the same header file name with different content. But if users are OK with that, there is nothing to stop them from using this approach; it works today, and you can have multiple directories and choose among them.
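
As a hedged sketch of that multiple-directory idea (the cache variable name and directory layout here are invented, not part of this changeset):

```cmake
# Hypothetical: let the user pick one of several override sets at
# configure time; only add it to the include path if it exists.
set(CUSTOM_CFG_SET "flight" CACHE STRING "Override header set to use")
if(EXISTS "${MISSION_DEFS}/${CUSTOM_CFG_SET}_inc")
    include_directories(${MISSION_DEFS}/${CUSTOM_CFG_SET}_inc)
endif()
```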

As I've said before, I'd caution anyone against overriding just a single arbitrary header file. Event IDs, message IDs, and configuration files are always expected to be customized, so that should be easy to do, but anything beyond this is power-user territory and all bets are off. For updating an arbitrary header, it seems you'd at least have to also update a .c file to provide an implementation for whatever you changed.

The way I see it, changing a file which isn't part of the designated customization set is effectively creating a fork. Even if you just tweak the include path to find your file before the standard file, it's still changing the app in unexpected ways and thereby becomes a fork for all intents and purposes - just a less obvious fork, hidden behind an include-path trick, which I don't see as a good thing.

I'd rather see users accept the reality that in doing this they are in fact changing the app, and use git to track it in their own repo. Or clone it under a different name and replace it at the module level.

If one really does not want to track their own fork or clone-and-own, then I'd suggest keeping the changes as a patch. Build systems like Yocto use this approach extensively: clone a specific commit from a git repo, then apply a local patch within the build script/makefile before actually building the code. This lets you update not only header files but also C files or any other file, and is more appropriate IMO for what you are actually doing - locally patching the app/module for your own purposes.

So I think there are lots of options that "power users" can use without really needing any specific support in the framework.

@jphickey (Contributor, Author)

Specifically, quilt is a very robust tool for managing customizations to a 3rd-party project that you cannot fork. It allows one to keep a set of patches at the top level and selectively apply them. You can then commit only the patches to your local repo, and the 3rd-party apps can still come from the official source even though you've locally modified them.

If you think that could be acceptable for the problem at hand, I can provide an example of how quilt can be invoked, e.g. as part of the "prep" step in the top-level makefile.
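
For illustration only, a minimal sketch of what such a prep hook could look like if driven from cmake rather than the makefile (the app path and target name are invented; quilt push -a applies every patch in the series):

```cmake
# Hypothetical: apply local quilt patches to a vendored app before the
# build; assumes the app directory contains a patches/ dir with a
# quilt series file.
add_custom_target(apply_local_patches
    COMMAND quilt push -a
    WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}/apps/sample_app
    COMMENT "Applying local quilt patches"
)
```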

@skliper (Contributor) commented Jun 18, 2020

> I just verified/validated that adding a line such as this to arch_build_custom.cmake works just fine...

Right. It's exactly what my sample did, which I have shared previously as a power-user option (and it works fine). Sounds like we are on the same page - add to the include path, or clone and own (or apply changes on top, whatever you want to call it). As you said, in the end users can really do whatever they want, and I think that's the key point.

I definitely agree with avoiding multiple headers of the same name in the framework (except for unit-test overrides).

@jphickey (Contributor, Author)

OK then - I'm fine with power users adding their own include_directories to override arbitrary headers, so long as they understand the risks and implications of doing that. I'd propose that as the method for doing so (I don't want to say the "preferred" method, but it works and is simple enough).

Independently, I was also investigating ways to push the calls to generate_config_includefile into the apps themselves rather than the mission_build.cmake file, which would remove another spot where certain names have special meaning (e.g. fsw/mission_inc and fsw/platform_inc). I'd prefer that the application build scripts handle all of this explicitly.

But the problem is that the "generate" call has to be done at mission scope, not arch scope. We do have an existing hook for this in the form of a mission_build.cmake file, but it hasn't really been used to date. This would be the place for generate_config_includefile calls, though.

Does it sound worthwhile to make this change? It would avoid a little more of the "magic" name-based logic in the mission_build.cmake file, which I know was a concern.
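
For concreteness, a hedged sketch of what an app-provided mission_build.cmake might contain (the keyword-argument shape and file names here are assumptions for illustration, not a confirmed signature):

```cmake
# Hypothetical app-scope mission_build.cmake: the app declares its own
# consolidated config header instead of relying on name-based magic.
generate_config_includefile(
    FILE_NAME    "sample_app_mission_cfg.h"   # single output header
    MATCH_SUFFIX "mission_cfg.h"              # suffix of per-config variants
    PREFIXES     ${MISSIONCONFIG} sample_app  # highest priority first
)
```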

@jphickey (Contributor, Author)

Update - I submitted a draft PR of what I came up with, which not only pushes the generate_config_includefile calls into the app-specific cmake scripts but also does some further cleanup of the logic in general.

Please see/review PR #751. It basically combines this PR with the one in #740, then does some additional cleanup/simplification and adds more status output to make it obvious what's coming from where.

@jphickey (Contributor, Author)

This has been rebased and is now part of #751. Closing this PR.

@jphickey closed this on Jun 26, 2020
@jphickey deleted the fix-724-config-based-builds branch on December 3, 2020 18:02
Linked issue: #724 "Improve handling of platform config"