
Support for L3 interfaces in connected endpoints (M&E and AI Fabric) #4970

Open
1 task done
sarunac opened this issue Feb 3, 2025 · 0 comments
Open
1 task done
Labels
type: enhancement New feature or request

Comments

@sarunac
Copy link

sarunac commented Feb 3, 2025

Enhancement summary

Support L3 capabilities on connected endpoints. AVD currently supports only L2 for connected endpoints. This is beneficial for both M&E and AI deployments.
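For context, the existing connected_endpoints model in eos_designs is L2-only; an adapter today looks roughly like the following sketch (values illustrative):

servers:
  - name: server01
    adapters:
      - endpoint_ports: [Nic1]
        switch_ports: [Ethernet1]
        switches: [leaf1-red-rk1012-7]
        mode: access
        vlans: "110"

There is no routed/L3 option at the adapter level, which is the gap this request targets.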

Which component of AVD is impacted

eos_designs

Use case example

M&E and AI deployments currently use the following configurations for L3 endpoints.

For the M&E fabric:

interface Ethernet{{ interface }}
   description {{ description }}
   no shutdown
   mtu {{ mtu }}
   no switchport
   ip address {{ ipv4_address }}/{{ prefix }}   # prefix: /31, /30, or /29
   ptp enable
   ptp sync-message interval -3
   ptp announce interval 0
   ptp transport ipv4
   ptp announce timeout 3
   ptp delay-req interval -3
   ptp role {{ roleName }}
   multicast ipv4 static   # or: pim ipv4 sparse-mode
   no error-correction encoding   # should not be the default config
   ip helper-address 10.246.32.8   # DHCP IP helper
   ip helper-address 10.247.32.8
   ip igmp last-member-query-count 0
   speed {{ speed }}

For the AI fabric, the following sub-configuration is also beneficial:

service-profile QOS-CPU-PROFILE
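In EOS this would be applied under the routed endpoint interface together with the rest of the config, e.g. (a minimal sketch, assuming QOS-CPU-PROFILE is a QoS profile defined elsewhere on the switch):

interface Ethernet{{ interface }}
   no switchport
   ip address {{ ipv4_address }}/{{ prefix }}
   service-profile QOS-CPU-PROFILE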

Describe the solution you would like

  • Ability to define routed endpoints; a sketch of what this could look like follows this list. (It may be useful to keep them all in one file to support large deployments.)
  • Ability to apply this to a range of interfaces.
  • Ability to set this for a range of interfaces and also group them where possible.
    Example: EtX-Y all belong to EndpointA, with a dedicated pool of IP addresses and a QoS profile.
  • Ability to allocate a pool of IP addresses and distribute it as /31, /30, or /29 subnets.
  • Ability to handle breakouts.
  • Optional: ability to pull endpoint data (interface description, IP address, etc.) from NetBox.
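A minimal sketch of what such a model could look like. This is a hypothetical schema: none of these keys (l3_connected_endpoints, ip_pool, prefix_size, qos_profile, ptp_profile) exist in AVD today; the names are illustrative only.

# Hypothetical schema sketch - no such keys exist in AVD today
l3_connected_endpoints:
  - name: EndpointA
    adapters:
      - switches: [leaf1-red-rk1012-7]
        switch_ports: [Ethernet3-10]     # apply to a range of interfaces
        ip_pool: RedLeaf1_pool           # dedicated pool for this endpoint
        prefix_size: 31                  # carve the pool into /31 (or /30, /29) subnets
        mtu: 9214
        qos_profile: QOS-CPU-PROFILE     # would render "service-profile QOS-CPU-PROFILE"
        ptp_profile: media               # PTP settings from the M&E template above
        raw_eos_cli: |
          multicast ipv4 static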

Describe alternatives you have considered

Using profiles and separate YAML files in host_vars for every switch that has a connected endpoint.

l3_edge:
  p2p_links_ip_pools:
    - name: RedLeaf1_pool
      ipv4_pool: 10.91.196.0/24
      prefix_size: 30
  p2p_links_profiles:
    - name: HostProfile
      mtu: 1500
      ptp_enable: true
      raw_eos_cli: |
        ptp role master
        multicast ipv4 static
  p2p_links:
    # far-end "node" is the endpoint itself, not a managed switch
    - id: 1
      ip_pool: RedLeaf1_pool
      nodes: [leaf1-red-rk1012-7, RedLeaf1_HOST_NOT_USED]
      interfaces: [Ethernet1, NicRed]
      profile: HostProfile
    - id: 2
      ip_pool: RedLeaf1_pool
      nodes: [leaf1-red-rk1012-7, "AUD CORE PRI AES67 5"]
      interfaces: [Ethernet2, NicRed]
      profile: HostProfile
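With the pool's prefix_size of 30, link id 1 would render roughly like this (address and description format are illustrative and depend on AVD's allocation order and version):

interface Ethernet1
   description P2P_LINK_TO_RedLeaf1_HOST_NOT_USED_NicRed
   mtu 1500
   no switchport
   ip address 10.91.196.1/30
   ptp enable
   ptp role master
   multicast ipv4 static

This works, but it requires one p2p_links entry per endpoint interface plus a dummy far-end node, which does not scale well for large deployments.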

Additional context

No response

Contributing Guide

  • I agree to follow this project's Code of Conduct
sarunac added the type: enhancement label on Feb 3, 2025