tnum_scast #7883

Open

wants to merge 40 commits into base: bpf-next_base

Conversation

dkanaliev

No description provided.
theihor and others added 30 commits September 24, 2024 22:42
test_skb_cgroup_id.sh was deleted in
https://git.kernel.org/bpf/bpf-next/c/f957c230e173

It has to be removed from the TEST_PROGS variable in
tools/testing/selftests/bpf/Makefile, otherwise the install target fails.

Signed-off-by: Ihor Solodrai <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Tested-by: Björn Töpel <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
Link: https://lore.kernel.org/bpf/Q3BN2kW9Kgy6LkrDOwnyY4Pv7_YF8fInLCd2_QA3LimKYM3wD64kRdnwp7blwG2dI_s7UGnfUae-4_dOmuTrxpYCi32G_KTzB3PfmxIerH8=@pm.me/
Signed-off-by: Alexei Starovoitov <[email protected]>
Auto-dependencies generated for %.test.o files refer to skels using
filenames as opposed to full paths. This requires make to be able to
link this name to an actual path, because not all generated skels are
put in the working directory.

In the original patch [1], this was mitigated by this target:

$(notdir %.skel.h): $(TRUNNER_OUTPUT)/%.skel.h
	@true

This turned out to be insufficient.

First, %.lskel.h and %.subskel.h were missed, because a typical
selftests/bpf build could find these files in the working directory.
This error was detected by an out-of-tree build [2].

Second, even with missing rules added, this target causes unnecessary
rebuilds in the out-of-tree case, as X.skel.h is searched for in the
working directory, and not in the $(OUTPUT).

Using the vpath directive [3] is a better solution. Instead of introducing
a separate target (X.skel.h in addition to $(TRUNNER_OUTPUT)/X.skel.h),
make is instructed to search for skels in the output directory, which
allows make to correctly detect that a skel has already been generated.

[1]: https://lore.kernel.org/bpf/VJihUTnvtwEgv_mOnpfy7EgD9D2MPNoHO-MlANeLIzLJPGhDeyOuGKIYyKgk0O6KPjfM-MuhtvPwZcngN8WFqbTnTRyCSMc2aMZ1ODm1T_g=@pm.me/
[2]: https://lore.kernel.org/bpf/CIjrhJwoIqMc2IhuppVqh4ZtJGbx8kC8rc9PHhAIU6RccnWT4I04F_EIr4GxQwxZe89McuGJlCnUk9UbkdvWtSJjAsd7mHmnTy9F8K2TLZM=@pm.me/
[3]: https://www.gnu.org/software/make/manual/html_node/Selective-Search.html

Reported-by: Björn Töpel <[email protected]>
Signed-off-by: Ihor Solodrai <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Tested-by: Björn Töpel <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>
With the newly merged code the uprobe behaviour is slightly different
and affects the uprobe consumer test.

We no longer need to check if the uprobe object is still preserved
after removing the last uretprobe, because it stays around as long as
there is a pending/installed uretprobe instance.

This allows running uretprobe consumers registered 'after' the uprobe
was hit, even if the previous uretprobe got unregistered before being hit.

The uprobe object will now be removed after the last uprobe ref is
released; in that case it is held by ri->uprobe (the return instance),
which is released after the uretprobe is hit.

Reported-by: Ihor Solodrai <[email protected]>
Signed-off-by: Jiri Olsa <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Tested-by: Ihor Solodrai <[email protected]>
Closes: https://lore.kernel.org/bpf/w6U8Z9fdhjnkSp2UaFaV1fGqJXvfLEtDKEUyGDkwmoruDJ_AgF_c0FFhrkeKW18OqiP-05s9yDKiT6X-Ns-avN_ABf0dcUkXqbSJN1TQSXo=@pm.me/
Let's bail out of the consumer test after we hit the first failure,
so we don't pollute the log with many instances of possibly the same
error.

Signed-off-by: Jiri Olsa <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
struct btf_kind_operations are not modified in BTF.

Constifying these structures moves some data to a read-only section and
so increases overall security, especially when a structure holds
function pointers.
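
As a minimal illustration of the idea (a hypothetical ops table, not the
kernel's btf_kind_operations), a const-qualified table of function
pointers can be placed by the compiler in a read-only section:

    struct ops {
            int (*check)(int v);
            void (*show)(const char *s);
    };

    static int check_impl(int v) { return v > 0; }
    static void show_impl(const char *s) { (void)s; }

    /* 'const' lets this table live in .rodata instead of .data */
    static const struct ops example_ops = {
            .check = check_impl,
            .show  = show_impl,
    };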

On x86_64, with allmodconfig:

Before:
======
   text	   data	    bss	    dec	    hex	filename
 184320	   7091	    548	 191959	  2edd7	kernel/bpf/btf.o

After:
=====
   text	   data	    bss	    dec	    hex	filename
 184896	   6515	    548	 191959	  2edd7	kernel/bpf/btf.o

Signed-off-by: Christophe JAILLET <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Link: https://lore.kernel.org/r/9192ab72b2e9c66aefd6520f359a20297186327f.1726417289.git.christophe.jaillet@wanadoo.fr
There is no va_end() after va_copy(); just add it.
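
For reference, a minimal sketch of the required pairing (hypothetical
logging helper, not the patched code):

    #include <stdarg.h>
    #include <stdio.h>

    static void vlog(const char *fmt, va_list args)
    {
            va_list tmp;

            va_copy(tmp, args);
            vfprintf(stderr, fmt, tmp);
            va_end(tmp);    /* every va_copy() needs a matching va_end() */
    }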

Signed-off-by: Zhang Jiao <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Eduard Zingerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Reduce log level of BTF loading error to INFO if BTF is not required.

Andrii says:

  Nowadays the expectation is that the BPF program will have a valid
  .BTF section, so even though .BTF is "optional", I think it's fine
  to emit a warning for that case (any reasonably recent Clang will
  produce valid BTF).

  Ihor's patch is fixing the situation with an outdated host kernel
  that doesn't understand BTF. libbpf will try to "upload" the
  program's BTF, but if that fails and the BPF object doesn't use
  any features that require having BTF uploaded, then it's just an
  information message to the user, but otherwise can be ignored.
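
As a rough userspace sketch of the conditional-severity idea (hypothetical
helper; libbpf's actual pr_warn()/pr_info() call sites differ):

    #include <stdbool.h>
    #include <stdio.h>

    static void report_btf_load_error(bool btf_required, int err)
    {
            if (btf_required)
                    fprintf(stderr, "warn: BTF loading failed: %d\n", err);
            else
                    fprintf(stderr, "info: BTF is optional, load error %d ignored\n", err);
    }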

Suggested-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Ihor Solodrai <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
The referenced commit broke the logic of resetting expected_attach_type
to zero for allowed program types if the kernel doesn't yet support this
field. We do need to overwrite and preserve expected_attach_type for
multi-uprobe though, but that can be done explicitly in
libbpf_prepare_prog_load().
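
A minimal sketch of the intended behaviour (hypothetical struct and flags,
not libbpf's internals):

    #include <stdbool.h>

    struct prog_load_opts {
            unsigned int prog_type;
            unsigned int expected_attach_type;
    };

    static void fixup_attach_type(struct prog_load_opts *opts,
                                  bool kernel_supports_field,
                                  bool is_uprobe_multi)
    {
            /* multi-uprobe keeps its expected_attach_type; for other
             * allowed program types, reset it when the kernel is too
             * old to understand the field */
            if (!kernel_supports_field && !is_uprobe_multi)
                    opts->expected_attach_type = 0;
    }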

Fixes: 5902da6 ("libbpf: Add uprobe multi link support to bpf_program__attach_usdt")
Suggested-by: Jiri Olsa <[email protected]>
Signed-off-by: Tao Chen <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
danobi/vmtest is going to migrate from using 9p to using virtio_fs to
mount the local rootfs: danobi/vmtest#88

BPF CI uses danobi/vmtest to run bpf selftests and will need to support
VIRTIO_FS.

This change enables the new kconfigs needed to support the upcoming
danobi/vmtest.

Tested by building a new kernel with those configs and confirming it
runs successfully with 9p (what vmtest currently uses) and with
virtio_fs (using a local build of vmtest).

  $ vmtest -k arch/x86/boot/bzImage "findmnt /"
  => bzImage
  ===> Booting
  ===> Setting up VM
  ===> Running command
  TARGET SOURCE    FSTYPE OPTIONS
  /      /dev/root 9p     rw,relatime,cache=5,access=client,msize=512000,trans=virtio
  $ /home/chantra/local/danobi-vmtest/target/debug/vmtest -k arch/x86/boot/bzImage "findmnt /"
  => bzImage
  ===> Initializing host environment
  ===> Booting
  ===> Setting up VM
  ===> Running command
  TARGET SOURCE FSTYPE   OPTIONS
  /      rootfs virtiofs rw,relatime

Changes in v2:
* Sorted configs alphabetically

Signed-off-by: Manu Bretelle <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Acked-by: Daniel Xu <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
This variable is never referenced in the code; just remove it.

Signed-off-by: Zhu Jun <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
Remove unneeded semicolon in zip_archive_open().

Signed-off-by: Chen Ni <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
A kfree() call is always made at the end of this function
implementation. Thus call it only once there instead of duplicating it
in a preceding if branch.

This issue was detected by using the Coccinelle software.
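
The pattern, as a minimal userspace sketch with free() standing in for
kfree() (hypothetical function, not the patched code):

    #include <stdlib.h>
    #include <string.h>

    static int process(const char *src, char **out)
    {
            char *buf = strdup(src);
            int err = 0;

            if (!buf)
                    return -1;

            if (buf[0] == '\0') {
                    err = -1;       /* error path also reaches the single free */
                    goto out;
            }

            *out = strdup(buf);
            if (!*out)
                    err = -1;
    out:
            free(buf);              /* called exactly once, at the end */
            return err;
    }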

Signed-off-by: Markus Elfring <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Acked-by: Eduard Zingerman <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
When building selftests, the following was seen:

uprobe_multi.c: In function ‘trigger_uprobe’:
uprobe_multi.c:108:40: error: ‘MADV_PAGEOUT’ undeclared (first use in this function)
  108 |                 madvise(addr, page_sz, MADV_PAGEOUT);
      |                                        ^~~~~~~~~~~~
uprobe_multi.c:108:40: note: each undeclared identifier is reported only once for each function it appears in
make: *** [Makefile:850: bpf-next/tools/testing/selftests/bpf/uprobe_multi] Error 1

...even with updated UAPI headers. It seems the above value is
defined in UAPI <linux/mman.h>, but including that file triggers
other redefinition errors. The simplest solution is to add a
guarded definition, as was done for MADV_POPULATE_READ.
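
A sketch of the guarded definition (the value matches UAPI <linux/mman.h>;
the exact placement in the selftest may differ):

    #ifndef MADV_PAGEOUT
    #define MADV_PAGEOUT 21     /* reclaim these pages */
    #endif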

Fixes: 3c217a1 ("selftests/bpf: add build ID tests")
Signed-off-by: Alan Maguire <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Acked-by: Eduard Zingerman <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
Fix missing newlines and extraneous terminal spaces in messages.

Signed-off-by: Tony Ambardar <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/086884b7cbf87e524d584f9bf87f7a580e378b2b.1726475448.git.tony.ambardar@gmail.com
Mention struct btf_ext_info_sec rather than non-existent btf_sec_func_info
in BTF.ext struct documentation.

Signed-off-by: Tony Ambardar <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/cde65e01a5f2945c578485fab265ef711e2daeb6.1726475448.git.tony.ambardar@gmail.com
Object linking output data uses the default ELF_T_BYTE type for '.symtab'
section data, which disables any libelf-based translation. Explicitly set
the ELF_T_SYM type for output to restore libelf's byte-order conversion,
noting that input '.symtab' data is already correctly translated.
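
A minimal libelf sketch of the idea (hypothetical helper, not the linker's
actual code path):

    #include <stddef.h>
    #include <libelf.h>

    static int emit_symtab(Elf_Scn *scn, void *syms, size_t size)
    {
            Elf_Data *data = elf_newdata(scn);

            if (!data)
                    return -1;

            data->d_buf = syms;
            data->d_size = size;
            data->d_version = EV_CURRENT;
            /* ELF_T_SYM (not ELF_T_BYTE) lets libelf translate symbol
             * records to the output byte-order on elf_update() */
            data->d_type = ELF_T_SYM;
            return 0;
    }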

Fixes: faf6ed3 ("libbpf: Add BPF static linker APIs")
Signed-off-by: Tony Ambardar <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/87868bfeccf3f51aec61260073f8778e9077050a.1726475448.git.tony.ambardar@gmail.com
Support for handling BTF data of either endianness was added in [1], but
did not include BTF.ext data for lack of use cases. Later, support for
static linking [2] provided a use case, but this feature and later ones
were restricted to native-endian usage.

Add support for BTF.ext handling in either endianness. Convert BTF.ext data
to native endianness when read into memory for further processing, and
support raw data access that restores the original byte-order for output.
Add internal header functions for byte-swapping func, line, and core info
records.
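
As an illustration, byte-swapping a single func info record looks roughly
like this (sketch only; the internal helpers also cover line and core-relo
records):

    #include <byteswap.h>
    #include <linux/bpf.h>      /* struct bpf_func_info */

    static void func_info_bswap(struct bpf_func_info *rec)
    {
            rec->insn_off = bswap_32(rec->insn_off);
            rec->type_id  = bswap_32(rec->type_id);
    }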

Add new API functions btf_ext__endianness() and btf_ext__set_endianness()
for querying and setting byte-order, as already exist for BTF data.

[1] 3289959 ("libbpf: Support BTF loading and raw data output in both endianness")
[2] 8fd27bf ("libbpf: Add BPF static linker BTF and BTF.ext support")

Signed-off-by: Tony Ambardar <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/133407ab20e0dd5c07cab2a6fa7879dee1ffa4bc.1726475448.git.tony.ambardar@gmail.com
Allow bpf_object__open() to access files of either endianness, and convert
included BPF programs to native byte-order in memory for introspection.
Loading BPF objects of non-native byte-order is still disallowed, however.

Signed-off-by: Tony Ambardar <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/26353c1a1887a54400e1acd6c138fa90c99cdd40.1726475448.git.tony.ambardar@gmail.com
Allow static linking object files of either endianness, checking that input
files have consistent byte-order, and setting output endianness from input.

Linking requires in-memory processing of programs, relocations, sections,
etc. in native endianness, and output conversion to target byte-order. This
is enabled by built-in ELF translation and recent BTF/BTF.ext endianness
functions. Further add local functions for swapping byte-order of sections
containing BPF insns.
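
A minimal sketch of swapping one BPF instruction (illustrative only; the
local functions added here may differ in detail):

    #include <byteswap.h>
    #include <linux/bpf.h>      /* struct bpf_insn */

    static void insn_bswap(struct bpf_insn *insn)
    {
            __u8 tmp = insn->dst_reg;

            /* the 4-bit register fields trade places because bitfield
             * layout within the byte also depends on endianness */
            insn->dst_reg = insn->src_reg;
            insn->src_reg = tmp;
            insn->off = bswap_16(insn->off);
            insn->imm = bswap_32(insn->imm);
    }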

Signed-off-by: Tony Ambardar <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/b47ca686d02664843fc99b96262fe3259650bc43.1726475448.git.tony.ambardar@gmail.com
Track target endianness in 'struct bpf_gen' and process in-memory data in
native byte-order, but on finalization convert the embedded loader BPF
insns to target endianness.

The light skeleton also includes a target-accessed data blob which is
heterogeneous and thus difficult to convert to target byte-order on
finalization. Add support functions to convert data to target endianness
as it is added to the blob.
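
The supporting conversion can be as simple as the following sketch
(hypothetical helper; the actual code tracks endianness in 'struct bpf_gen'):

    #include <byteswap.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* convert a host-order value to the target byte-order before it is
     * appended to the loader's data blob */
    static uint32_t to_target_u32(uint32_t v, bool swap_endian)
    {
            return swap_endian ? bswap_32(v) : v;
    }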

Also add additional debug logging for data blob structure details and
skeleton loading.

Signed-off-by: Tony Ambardar <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Link: https://lore.kernel.org/bpf/569562e1d5bf1cce80a1f1a3882461ee2da1ffd5.1726475448.git.tony.ambardar@gmail.com
Update Makefile build rules to compile BPF programs with target endianness
rather than host byte-order. With recent changes, this allows building the
full selftests/bpf suite hosted on x86_64 and targeting s390x or mips64eb
for example.

Signed-off-by: Tony Ambardar <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Acked-by: Yonghong Song <[email protected]>
Link: https://lore.kernel.org/bpf/880ccc6342cfc4d3c48b44f581e87adfbce2876e.1726475448.git.tony.ambardar@gmail.com
Tony Ambardar says:

====================
libbpf, selftests/bpf: Support cross-endian usage

Hello all,

This patch series targets a long-standing BPF usability issue - the lack
of general cross-compilation support - by enabling cross-endian usage of
libbpf and bpftool, as well as supporting cross-endian build targets for
selftests/bpf.

Benefits include improved BPF development and testing for embedded systems
based on e.g. big-endian MIPS, more build options, e.g. for s390x systems,
and better accessibility to the very latest test tools, e.g. 'test_progs'.

The series touches many functional areas: BTF.ext handling; object access,
introspection, and linking; generation of normal and "light" skeletons.

Initial development and testing used mips64, since this arch makes
switching the build byte-order trivial and is thus very handy for A/B
testing. However, it lacks some key features (bpf2bpf calls, kfuncs, etc.),
making for poor selftests/bpf coverage.

Final testing takes the kernel and selftests/bpf cross-built from x86_64
to s390x, and runs the result under QEMU/s390x. That same configuration
could also be used on kernel-patches/bpf CI for regression testing endian
support or perhaps load-sharing s390x builds across x86_64 systems.

This thread includes some background regarding testing on QEMU/s390x and
the generally favourable results:
    https://lore.kernel.org/bpf/ZsEcsaa3juxxQBUf@kodidev-ubuntu/

Earlier versions and related discussion of the series are here:

v1: https://lore.kernel.org/bpf/[email protected]/
v2: https://lore.kernel.org/bpf/[email protected]/
v3: https://lore.kernel.org/bpf/[email protected]/
v4: https://lore.kernel.org/bpf/[email protected]/
v5: https://lore.kernel.org/bpf/[email protected]/

Feedback and suggestions are welcome!

Best regards,
Tony

Changelog:
---------
v5 -> v6: (comments from Andrii, Alexei, Eduard)
 - clarify info_blob_bswap() by making it explicitly conditional on
   non-native target endianness, and merge a pair of related debug
   statements
 - reformat debug statement in bpf_object_bswap_progs() on single line
 - update existing info setup functions to validate and parse info
   section metadata prior to any byte-swapping, and drop earlier added
   validation checks
 - rework cross-endian BTF.ext handling by using callback functions to
   byte-swap different types of info records, but after initial parsing
 - fix a bug always outputting BTF.ext raw data in native endianness
 - include v5 "Acked-by:" from Alexei, Yonghong

v4 -> v5: (feedback from Andrii and Eduard)
 - add separate functions to byte-swap info metadata and records, and
   ensure ordering so record bswaps occur when metadata is native endian
 - use new and existing macros to iterate through info sections/records,
   and check embedded record sizes match that of info structs used
 - drop use of <cough> evil callbacks
 - move setting swapped_endian flag to after byte-swapping functions are
   called during initialization, allowing funcs to infer endianness and
   drop a 'bool native' call parameter
 - simplify byte-swapping macro used to generate light skeleton, and use
   internal lib funcs to swap info records instead of assuming all __u32
 - change info bswap library funcs to void return
 - rework/consolidate new debug statements to reduce their number
 - remove some unneeded handling of impossible errors, and drop a safety
   check already handled elsewhere
 - add and clarify some comments

v3 -> v4:
 - fix a use-after-free ELF data-handling error causing rare CI failures
 - move bswap functions for func/line/core-relo records to internal header
 - use bswap functions also for info blobs in light skeleton

v2 -> v3: (feedback from Andrii)
 - improve some log and commit message formatting
 - restructure BTF.ext endianness safety checks and byte-swapping
 - use BTF.ext info record definitions for swapping, require BTF v1
 - follow BTF API implementation more closely for BTF.ext
 - explicitly reject loading non-native endianness program into kernel
 - simplify linker output byte-order setting
 - drop redundant safety checks during linking
 - simplify endianness macro and improve blob setup code for light skel
 - no unexpected test failures after cross-compiling x86_64 -> s390x

v1 -> v2:
 - fixed a light skeleton bug causing test_progs 'map_ptr' failure
 - simplified some BTF.ext related endianness logic
 - remove an 'inline' usage related to CI checkpatch failure
 - improve some formatting noted by checkpatch warnings
 - unexpected 'test_progs' failures drop 3 -> 2 (x86_64 to s390x cross)
====================

Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Andrii Nakryiko <[email protected]>
Allow a new optional 'Attributes' section to be specified for helper
function descriptions, e.g.:

 * u32 bpf_get_smp_processor_id(void)
 * 		...
 * 	Return
 * 		...
 * 	Attributes
 * 		__bpf_fastcall
 *

Generated header for the example above:

  #ifndef __bpf_fastcall
  #if __has_attribute(__bpf_fastcall)
  #define __bpf_fastcall __attribute__((bpf_fastcall))
  #else
  #define __bpf_fastcall
  #endif
  #endif
  ...
  __bpf_fastcall
  static __u32 (* const bpf_get_smp_processor_id)(void) = (void *) 8;

The following rules apply:
- when present, the section must follow the 'Return' section;
- attribute names are specified on the line following the 'Attributes'
  keyword;
- attribute names are separated by spaces;
- the section ends with an "empty" line (" *\n").

Valid attribute names are recorded in the ATTRS map, which maps each
shortcut attribute name to the correct C syntax.

Signed-off-by: Eduard Zingerman <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
Since [1] the kernel supports the __bpf_fastcall attribute for the helper
function bpf_get_smp_processor_id(). Update the uapi definition for this
helper in order to have this attribute in the generated bpf_helper_defs.h.

[1] commit 91b7fbf ("bpf, x86, riscv, arm: no_caller_saved_registers for bpf_get_smp_processor_id()")

Signed-off-by: Eduard Zingerman <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
In order to allow pahole to add btf_decl_tag("bpf_fastcall") for kfuncs
supporting bpf_fastcall, mark such functions with KF_FASTCALL in
id_set8 objects.

Signed-off-by: Eduard Zingerman <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
Generate __attribute__((bpf_fastcall)) for kfuncs marked with
"bpf_fastcall" decl tag. E.g. for the following BTF:

    $ bpftool btf dump file vmlinux
    ...
    [A] FUNC 'bpf_rdonly_cast' type_id=...
    ...
    [B] DECL_TAG 'bpf_kfunc' type_id=A component_idx=-1
    [C] DECL_TAG 'bpf_fastcall' type_id=A component_idx=-1

Generate the following vmlinux.h:

    #ifndef __VMLINUX_H__
    #define __VMLINUX_H__
    ...
    #ifndef __bpf_fastcall
    #if __has_attribute(bpf_fastcall)
    #define __bpf_fastcall __attribute__((bpf_fastcall))
    #else
    #define __bpf_fastcall
    #endif
    #endif
    ...
    __bpf_fastcall extern void *bpf_rdonly_cast(...) ...;

The "bpf_fastcall" / "bpf_kfunc" tags pair would generated by pahole
when constructing vmlinux BTF.

While at it, sort printed kfuncs by name for better vmlinux.h
stability.
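
For reference, a sketch of how a BPF program would pick up the generated
declaration (assuming vmlinux.h contains the declaration shown above; the
program and btf id value are purely illustrative):

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>

    char LICENSE[] SEC("license") = "GPL";

    SEC("tc")
    int use_fastcall_kfunc(struct __sk_buff *skb)
    {
            /* bpf_rdonly_cast() is declared __bpf_fastcall in vmlinux.h,
             * so clang can avoid spilling caller-saved registers around
             * the call when the attribute is supported */
            void *p = bpf_rdonly_cast(skb, 0 /* illustrative btf id */);

            return p ? 0 : 1;
    }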

Signed-off-by: Eduard Zingerman <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
Eduard Zingerman says:

====================
'bpf_fastcall' attribute in vmlinux.h and bpf_helper_defs.h

The goal of this patch-set is to reflect the bpf_fastcall attribute
for supported helpers and kfuncs in generated header files.
For helpers this requires a tweak to scripts/bpf_doc.py and an update
to the uapi/linux/bpf.h doc-comment.
For kfuncs this requires:
- introduction of a new KF_FASTCALL flag;
- modification to pahole to read kfunc flags and generate
  DECL_TAG "bpf_fastcall" for marked kfuncs;
- modification to bpftool to scan for DECL_TAG "bpf_fastcall"
  presence.

In both cases the following helper macro is defined in the generated
header:

    #ifndef __bpf_fastcall
    #if __has_attribute(bpf_fastcall)
    #define __bpf_fastcall __attribute__((bpf_fastcall))
    #else
    #define __bpf_fastcall
    #endif
    #endif

And is used to mark appropriate function prototypes. More information
about the bpf_fastcall attribute can be found in [1] and [2].

Modifications to pahole are submitted separately.

[1] LLVM source tree commit:
    64e464349bfc ("[BPF] introduce __attribute__((bpf_fastcall))")

[2] Linux kernel tree commit (note: feature was renamed from
    no_caller_saved_registers to bpf_fastcall after this commit):
    52839f3 ("Merge branch 'no_caller_saved_registers-attribute-for-helper-calls'")
====================

Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Andrii Nakryiko <[email protected]>