
Introduce resource attribute faas.max_memory #1296

Merged
merged 4 commits into open-telemetry:master on Jan 4, 2021

Conversation

@z1c0 (Contributor) commented Dec 17, 2020

Fixes #

Changes

This PR introduces the new attribute max_memory to the faas resource attributes. The maximum memory allocated to a serverless function is useful to capture since, for example, too little memory can easily prevent a Java AWS Lambda function from working correctly.
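
For illustration only (not part of this PR): a minimal sketch of how a Lambda resource detector might populate the new attribute, assuming the OpenTelemetry Python SDK and the AWS_LAMBDA_FUNCTION_MEMORY_SIZE environment variable that the Lambda runtime sets (its value is the configured memory limit in MB).

import os

from opentelemetry.sdk.resources import Resource


def detect_faas_resource() -> Resource:
    """Build a resource carrying faas.max_memory, if the limit is known."""
    attributes = {}
    memory_size = os.environ.get("AWS_LAMBDA_FUNCTION_MEMORY_SIZE")
    if memory_size is not None:
        # The unit (MB vs. MiB vs. bytes) is exactly what the review below
        # debates; here the value is passed through as the runtime reports it.
        attributes["faas.max_memory"] = int(memory_size)
    return Resource.create(attributes)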

@z1c0 requested review from a team on December 17, 2020 14:08
@arminru added the area:semantic-conventions, release:allowed-for-ga, and spec:resource labels on Dec 17, 2020
@arminru (Member) left a comment

Thanks, Wolfgang!

@arminru requested a review from a team on December 17, 2020 15:21
- id: max_memory
  type: number
  brief: >
    The amount of memory available to the serverless function in MB.
Member

I think we should use bytes here. AWS probably uses MiB (1024 based) instead of MB (1000 based), but using bytes would avoid such confusion and may be more exact for other FaaS environments.

I think we also use bytes instead of a derived unit for all other byte-related attributes (like message size, HTTP body size).

Even if we decide to keep using mega-, I would prefer writing MiB, since it makes clear that the value is 1024-based (even though that is expected anyway for everything but filesystem/storage-related stuff).
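
(Illustrative arithmetic, not from the thread.) The gap being pointed at here: a nominal "128" memory setting differs by roughly 6 million bytes depending on whether it is read as MB (1000-based) or MiB (1024-based):

configured = 128
print(configured * 1000**2)  # 128 MB  -> 128000000 bytes
print(configured * 1024**2)  # 128 MiB -> 134217728 bytes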

@z1c0 (Contributor, Author) Dec 17, 2020

Bytes is the wrong granularity here.
The suggestion to use MiB is legit though. I changed that in b3669dc

Member

Bytes are admittedly unusual as something shown or configured in the UI. But MiB is also somewhat arbitrary. Maybe some other FaaS provider will let you configure the memory in KiB? Or it only lets you configure MiB, but the API to query the max memory returns bytes.

I would be happy if we just used the unit B (bytes) for all memory-related stuff, instead of mixing GiB, MiB, KiB, B, MB, KB, GB. IMHO these derived units are better used only for display purposes.
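
A minimal sketch of the "store bytes, derive units only for display" idea suggested above (illustrative Python with a hypothetical format_bytes helper; not part of the spec):

def format_bytes(num_bytes):
    """Render a byte count with a 1024-based unit, for display only."""
    value = float(num_bytes)
    for unit in ("B", "KiB", "MiB", "GiB"):
        if value < 1024 or unit == "GiB":
            return f"{value:.1f} {unit}"
        value /= 1024

print(format_bytes(134217728))  # -> "128.0 MiB"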

@z1c0 (Contributor, Author) Dec 17, 2020

It's not arbitrary; it's the most reasonable unit here. It's the lowest resolution needed regarding this metric.

The other examples listed (request/response size, ...) actually have byte granularity; this one does not.

Member

"It's the lowest resolution needed regarding this metric."

If we call it aws_lambda.max_memory, then yes. And I agree that it is unlikely we will come across another FaaS provider that provides finer granularity (it would not make sense).

Still, this will be the first attribute we introduce with MB/MiB as its unit.

@z1c0 (Contributor, Author)

Still no good reason not to use it in this case IMHO. Having different units based on the use case is quite common.

Member

FYI: #1307 (comment) suggests always using the smallest practical unit.

@arminru requested review from anuraaga and a team on December 17, 2020 15:36
@anuraaga (Contributor)

Just a general question, mostly for other spec approvers: I think containers can also have max_memory or CPU set, e.g. as k8s pod limits. Would we want to have a general location for these that applies to all types, or have faas- and container-specific attributes?

@Oberon00 (Member)

@anuraaga I was thinking about this too. But I could imagine, for example, that faas.max_memory for a container-based AWS Lambda might differ from what we would detect as container.max_memory: a FaaS implementation might allow short memory surges if the host has enough memory free, or it might use a process limit instead of a container limit. I think there is merit in having an attribute that is (more or less) defined to show what was configured as "Max Memory" for the Lambda.
I don't hold a strong opinion on this, though.

@carlosalberto (Contributor)

Merging, as a) it seems a non-polemical addition and b) the PR has enough reviews and has already been waiting through the whole break to get merged.

@carlosalberto carlosalberto merged commit fee9daf into open-telemetry:master Jan 4, 2021
carlosalberto pushed a commit to carlosalberto/opentelemetry-specification that referenced this pull request Oct 31, 2024
Labels
area:semantic-conventions (Related to semantic conventions)
release:allowed-for-ga (Editorial changes that can still be added before GA since they don't require action by SIGs)
spec:resource (Related to the specification/resource directory)

6 participants