I think we should come up with some guidelines on how analyzers should be tested. My proposal would be:
- Unit tests should exist to obtain as much code coverage as possible. Correctness of each exercise analyzer should be tested that way.
- Each exercise should have at least two smoke tests: one with an optimal solution that receives no feedback, and one with a solution that receives at least one exercise-specific comment.
- Next to that, a few smoke tests should also cover exercises for which no analyzer is implemented, to make sure the analyzer handles every exercise gracefully. By that I mean that it shouldn't crash when it encounters one.
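To make the proposal concrete, here is a minimal sketch of the three smoke-test kinds. Everything in it is hypothetical: the `analyze(exercise, solution)` entry point, the `two-fer` rule, and the comment id are made up to illustrate the structure, not taken from any actual analyzer.

```python
# Sketch of the three proposed smoke-test kinds, assuming a hypothetical
# analyze(exercise, solution) entry point that returns a list of comment ids.

ANALYZED_EXERCISES = {"two-fer"}  # exercises with a dedicated analyzer (toy set)


def analyze(exercise, solution):
    """Stub standing in for the real analyzer entry point."""
    if exercise not in ANALYZED_EXERCISES:
        return []  # no dedicated analyzer: return gracefully, never crash
    comments = []
    if "%" in solution:  # toy rule: flag old-style string formatting
        comments.append("python.two-fer.use_f_string")  # made-up comment id
    return comments


def test_optimal_solution_receives_no_feedback():
    solution = 'def two_fer(name="you"): return f"One for {name}, one for me."'
    assert analyze("two-fer", solution) == []


def test_suboptimal_solution_receives_comment():
    solution = 'def two_fer(name="you"): return "One for %s, one for me." % name'
    assert "python.two-fer.use_f_string" in analyze("two-fer", solution)


def test_exercise_without_analyzer_does_not_crash():
    solution = "def leap_year(year): return year % 4 == 0"
    assert analyze("leap", solution) == []
```

Unit tests would then exercise the individual analyzer rules directly for coverage, while these smoke tests only guard the end-to-end contract.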
Once we come up with some concrete guidelines, we should probably write them down in the docs.
Originally posted by @sanderploegsma in #122 (comment)