Add structured data in code evaluation logs #3077
Merged · +161 −24
Conversation
The rationale is that one would want an audit log only for people "manually" executing code in an app server. If the code being evaluated is from a deployed app, I'm assuming they reviewed the code before. And if they want to log something there, they can use a logger inside their app's source code.

This will be easier to process from an external log aggregator. Useful when auditing code evaluation in an app server connected to a prod environment.
hugobarauna commented on Oct 3, 2025
josevalim reviewed on Oct 4, 2025
No need to check if cell.source is a string, it's always a string inside a session process.
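A minimal sketch of the simplification being suggested (illustrative only; not the PR's actual diff):

```elixir
# Before: defensive check that is redundant inside a session process
code = if is_binary(cell.source), do: cell.source, else: ""

# After: cell.source is guaranteed to be a string here, so use it directly
code = cell.source
```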
@jonatanklosko I think I addressed all comments. Can you take one last look to see if it's good to go?
jonatanklosko approved these changes on Oct 6, 2025
I started working on docs for our "audit logs" feature, and I noticed we could make some improvements.
Structured log messages when evaluating code
The goal is to emit structured data instead of strings whenever possible, making it easier to filter code evaluation log entries and parse their data in an external log aggregator.
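As a rough sketch of the idea (the variable names and call site are illustrative, not Livebook's actual internals), the structured fields ride along as Logger metadata instead of living only in the message string:

```elixir
require Logger

code = "7 + 13"
session_mode = :default
users = [%{id: "1", name: "Hugo Baraúna"}]

# The human-readable message stays, while the same data is attached as
# metadata so a JSON formatter can emit it as filterable fields.
Logger.info("Evaluating code\n Session mode: #{session_mode}\n Code: #{inspect(code)}",
  event: "code.evaluate",
  code: code,
  session_mode: session_mode,
  users: users
)
```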
With this PR in place, one could run Livebook with these env vars:

```shell
MIX_ENV=prod LIVEBOOK_LOG_LEVEL=info \
  LIVEBOOK_LOG_METADATA="users,session_mode,code,event" \
  LIVEBOOK_LOG_FORMAT=json \
  mix phx.server
```
To get a nice structured log message when code is evaluated:
{"message":"Evaluating code\n Session mode: default\n Code: \"7 + 13\"","time":"2025-10-03T17:48:22.238Z","metadata":{"code":"7 + 13","session_mode":"default","event":"code.evaluate","users":[{"id":"1","name":"Hugo Baraúna"}]},"severity":"info"}
Notice that, on purpose, I kept "duplicated" data inside the "message" part of the log, even though it is now also available in the `session_mode` and `code` metadata. That's to avoid a "breaking change": someone may already be relying on the value of "message" to extract code evaluation logs. We can deprecate that from the message and later move to metadata only.

Conditionally log code evaluation
Another change is that code evaluation is now only logged when the code is being evaluated in the context of a regular notebook session or an app preview session.
The rationale is that the purpose of logging code evaluation is to have a way to see who ran what code and when. This is relevant when someone is running code in a Livebook app server against a production environment.
However, logging code evaluation is arguably not relevant when the code is inside a deployed Livebook app used by an internal user, since that user is not choosing which code to run; they are using an app or code deployed by someone else.
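A minimal sketch of how such a guard could look (the module, function, and mode names are illustrative assumptions, not the PR's actual implementation):

```elixir
defmodule EvaluationAudit do
  require Logger

  # Hypothetical guard: only regular notebook sessions and app previews
  # produce an audit log entry; deployed apps skip it.
  def maybe_log(session_mode, code, users) when session_mode in [:default, :app_preview] do
    Logger.info("Evaluating code",
      event: "code.evaluate",
      code: code,
      session_mode: session_mode,
      users: users
    )
  end

  def maybe_log(_session_mode, _code, _users), do: :ok
end
```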