Log env packages #62
Changes from all commits: 7021e8a, 47d2c7c, efb12fe, 646f62f, fbce37d
`@@ -1,4 +1,11 @@`

```python
import json
import logging
import os
import platform
import subprocess
import sys
from importlib.metadata import distributions
from unittest.mock import MagicMock, patch

import pytest
from rich.logging import RichHandler
```
`@@ -206,3 +213,215 @@ def test_named_logger_doesnt_propagate(tmp_path, capsys):`

```python
    assert "PQ&*" not in captured.out, (
        "logger writing to stdout through root handler"
    )


@pytest.mark.parametrize("boolean, operator", [(True, True), (False, False)])
def test_python_version_header(boolean, operator, tmp_path):
    ver_header = f"{lateral_separator} PYTHON VERSION {lateral_separator}\n"

    fancylog.start_logging(tmp_path, fancylog, write_python=boolean)

    log_file = next(tmp_path.glob("*.log"))

    with open(log_file) as file:
        assert (ver_header in file.read()) == operator


def test_correct_python_version_logged(tmp_path):
    """The logged Python version should equal
    the output of platform.python_version().
    """
    fancylog.start_logging(tmp_path, fancylog, write_python=True)

    log_file = next(tmp_path.glob("*.log"))

    # Test that the logged Python version equals platform.python_version()
    with open(log_file) as file:
        assert f"Python version: {platform.python_version()}" in file.read()


@pytest.mark.parametrize("boolean, operator", [(True, True), (False, False)])
def test_environment_header(boolean, operator, tmp_path):
    ver_header = f"{lateral_separator} ENVIRONMENT {lateral_separator}\n"

    fancylog.start_logging(tmp_path, fancylog, write_env_packages=boolean)

    log_file = next(tmp_path.glob("*.log"))

    with open(log_file) as file:
        assert (ver_header in file.read()) == operator


def test_correct_pkg_version_logged(tmp_path):
    """Logged package versions should equal
    the output of `conda list` or `pip list`.
    """
    fancylog.start_logging(tmp_path, fancylog, write_env_packages=True)

    log_file = next(tmp_path.glob("*.log"))

    with open(log_file) as file:
        file_content = file.read()

    try:
        # If there is a conda environment, assert that the correct
        # version is logged for all packages
        conda_exe = os.environ["CONDA_EXE"]
        conda_list = subprocess.run(
            [conda_exe, "list", "--json"], capture_output=True, text=True
        )

        conda_pkgs = json.loads(conda_list.stdout)
        for pkg in conda_pkgs:
            assert f"{pkg['name']:20} {pkg['version']:15}\n" in file_content

    except KeyError:
        # If there is no conda environment, assert that the correct
        # version is logged for all packages listed by pip

        # Test local environment versions
        local_site_packages = next(
            p for p in sys.path if "site-packages" in p
        )

        for dist in distributions():
            if str(dist.locate_file("")).startswith(local_site_packages):
                assert (
                    f"{dist.metadata['Name']:20} {dist.version}"
                    in file_content
                )


def test_mock_pip_pkgs(tmp_path):
    """Mock the `pip list` subprocess
    and test that packages are logged correctly.
    """
    # Simulated `pip list --json` output
    fake_pip_output = json.dumps(
        [
            {"name": "fancylog", "version": "1.1.1", "location": "fake_env"},
            {"name": "pytest", "version": "1.1.1", "location": "global_env"},
        ]
    )

    # Patch the environment and subprocess
    with (
        patch.dict(os.environ, {}, clear=False),
        patch("os.getenv") as mock_getenv,
        patch("subprocess.run") as mock_run,
    ):
        # Removing the conda environment variables triggers `pip list` logging
        os.environ.pop("CONDA_PREFIX", None)
        os.environ.pop("CONDA_EXE", None)

        mock_getenv.return_value = "fake_env"

        # Mocked subprocess result
        mock_run.return_value = MagicMock(stdout=fake_pip_output, returncode=0)

        fancylog.start_logging(tmp_path, fancylog, write_env_packages=True)

        log_file = next(tmp_path.glob("*.log"))

        # Log should contain the pip subheader and the mocked package versions
        with open(log_file) as file:
            file_content = file.read()

        assert (
            "No conda environment found, reporting pip packages"
        ) in file_content

        assert f"{'fancylog':20} {'1.1.1'}" in file_content
        assert f"{'pytest':20} {'1.1.1'}" in file_content


def test_mock_conda_pkgs(tmp_path):
    """Mock conda environment variables
    and test that packages are logged correctly.
    """
    fake_conda_env_name = "test_env"
    fake_conda_prefix = os.path.join(
        "path", "conda", "envs", fake_conda_env_name
    )
    fake_conda_exe = os.path.join("fake", "conda")

    # Simulated `conda list --json` output
    fake_conda_output = json.dumps(
        [
            {"name": "fancylog", "version": "1.1.1"},
            {"name": "pytest", "version": "1.1.1"},
        ]
    )

    # Patch the environment and subprocess
    with (
        patch.dict(
            os.environ,
            {"CONDA_PREFIX": fake_conda_prefix, "CONDA_EXE": fake_conda_exe},
        ),
        patch("subprocess.run") as mock_run,
    ):
        # Mocked subprocess result
        mock_run.return_value = MagicMock(
            stdout=fake_conda_output, returncode=0
        )

        fancylog.start_logging(tmp_path, fancylog, write_env_packages=True)

        log_file = next(tmp_path.glob("*.log"))

        # Log should contain the conda subheaders and the mocked versions
        with open(log_file) as file:
            file_content = file.read()

        assert "Conda environment:" in file_content
        assert "Environment packages (conda):" in file_content
        assert f"{'fancylog':20} {'1.1.1'}" in file_content
        assert f"{'pytest':20} {'1.1.1'}" in file_content


def test_mock_no_environment(tmp_path):
    """Mock the lack of any environment
    and test that packages are logged correctly.
    """
    # Simulated `pip list --json` output
    fake_pip_output = json.dumps(
        [
            {"name": "fancylog", "version": "1.1.1", "location": "fake_env"},
            {"name": "pytest", "version": "1.1.1", "location": "global_env"},
        ]
    )

    # Patch the environment and subprocess
    with (
        patch.dict(os.environ, {}, clear=False),
        patch("os.getenv") as mock_getenv,
        patch("subprocess.run") as mock_run,
    ):
        # Removing the conda environment variables triggers `pip list` logging
        os.environ.pop("CONDA_PREFIX", None)
        os.environ.pop("CONDA_EXE", None)

        # Mock the lack of any local environment
        mock_getenv.return_value = None

        # Mocked subprocess result
        mock_run.return_value = MagicMock(stdout=fake_pip_output, returncode=0)

        fancylog.start_logging(tmp_path, fancylog, write_env_packages=True)

        log_file = next(tmp_path.glob("*.log"))

        # Log should contain the global pip subheader and the mocked versions
        with open(log_file) as file:
            file_content = file.read()

        assert (
            "No environment found, reporting global pip packages"
        ) in file_content

        assert f"{'fancylog':20} {'1.1.1'}" in file_content
        assert f"{'pytest':20} {'1.1.1'}" in file_content
```

Review comments on lines +267 to +269 (the `subprocess.run([conda_exe, "list", "--json"], ...)` call):

Reviewer: Can this be tested in a non-conda env somehow? It won't currently be tested in our GitHub Actions. Perhaps the return values could be mocked?

Author: I have tested this in both conda and venv. In my hands it works well in both cases (i.e. when it is run in a conda env it logs the conda env packages, and when it is run outside a conda env, such as in a venv, the appropriate packages are also logged). I will look into mocking return values and come back with an implementation.

Reviewer: Sorry, I should have been clearer: the tests run fine in all environments, but the issue is that if you're in a conda environment, the pip version won't be tested (and vice versa). This is particularly important for our automated tests that run in GitHub Actions, which don't use conda, so the conda version of the code could end up broken for a long time without us noticing. Currently the tests run some kind of system command and then make sure the right section is added to the log. We don't really care if e.g. […] This may not be that clear, so let me know if I can help any further.

Author: Thank you! I am currently working on this; I hope to have a new version with all the changes tomorrow or on Thursday.

Author: I just committed changes adding two new tests: one mocks the existence of a conda environment and asserts the presence of the appropriate text in the log, and the other mocks the lack of a conda environment.
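The mocking approach discussed in the review thread boils down to replacing `subprocess.run` with a canned result, so the conda code path executes even on CI runners without conda. A minimal, self-contained sketch, where `list_conda_packages` is a hypothetical stand-in and not fancylog's actual internals:

```python
import json
import subprocess
from unittest.mock import patch


def list_conda_packages(conda_exe="conda"):
    """Hypothetical stand-in for the code under test: shells out to
    `conda list --json` and returns the parsed package list."""
    result = subprocess.run(
        [conda_exe, "list", "--json"], capture_output=True, text=True
    )
    return json.loads(result.stdout)


# Patch subprocess.run so the conda branch runs on machines
# that have no conda installation at all.
fake_output = json.dumps([{"name": "fancylog", "version": "1.1.1"}])
with patch("subprocess.run") as mock_run:
    mock_run.return_value = subprocess.CompletedProcess(
        args=["conda", "list", "--json"], returncode=0, stdout=fake_output
    )
    pkgs = list_conda_packages()

# conda was never actually invoked; the parsed output is the canned payload
assert pkgs == [{"name": "fancylog", "version": "1.1.1"}]
```

Returning a real `subprocess.CompletedProcess` (rather than a bare `MagicMock`) keeps the mocked result's attributes honest: only `stdout`, `returncode`, and friends exist, so typos in the code under test still fail loudly.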
Reviewer: Nitpicking, but this could be a pytest fixture to reduce duplication.

Author: I was not familiar with fixtures, thank you for the suggestion. I will try to combine the two tests this way.
Reviewer: They can take some getting used to, but they're very helpful for creating "things" to be reused.
Author: By using `@pytest.mark.parametrize` decorators I can now combine the two pairs of header tests.
Author: The updated version uses parametrization to combine the tests for the presence/absence of the headers.
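That combination could look roughly like the following self-contained sketch, where `write_log` is a hypothetical stand-in for `fancylog.start_logging` so the example runs on its own:

```python
import pytest

lateral_separator = "*" * 20  # stand-in for the separator used in the tests


def write_log(path, write_python=False, write_env_packages=False):
    """Hypothetical stand-in for fancylog.start_logging that writes only
    the section headers relevant to this sketch."""
    lines = []
    if write_python:
        lines.append(f"{lateral_separator} PYTHON VERSION {lateral_separator}\n")
    if write_env_packages:
        lines.append(f"{lateral_separator} ENVIRONMENT {lateral_separator}\n")
    (path / "out.log").write_text("".join(lines))


# Stacked parametrize decorators give the cross product: two headers
# times two flag values, i.e. four cases from a single test body.
@pytest.mark.parametrize("flag", [True, False])
@pytest.mark.parametrize(
    "kwarg, header_text",
    [
        ("write_python", "PYTHON VERSION"),
        ("write_env_packages", "ENVIRONMENT"),
    ],
)
def test_header_written(kwarg, header_text, flag, tmp_path):
    header = f"{lateral_separator} {header_text} {lateral_separator}\n"
    write_log(tmp_path, **{kwarg: flag})
    log_file = next(tmp_path.glob("*.log"))
    assert (header in log_file.read_text()) == flag
```

One test body thus covers both the presence case (`flag=True`) and the absence case (`flag=False`) for each header, which is the deduplication described above.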