
Conversation

@y0shi (Member) commented Nov 1, 2025

No description provided.

github-actions bot commented Nov 1, 2025

AI Code Review

Okay, here's a review of the pull request. Overall, this looks like a great start to integrating vision and a swerve drive! The code is well-structured and follows a lot of good practices.

Summary: This PR introduces a vision subsystem and integrates it with a CTRE Phoenix 6-based swerve drive. The vision subsystem uses PhotonVision to estimate robot pose, and this information is fed into the drivetrain's odometry. The code is well-organized and makes good use of WPILib and CTRE libraries.

Positive Highlights:

  • The use of CommandSwerveDrivetrain to wrap the Phoenix 6 swerve drive and make it compatible with the command-based framework is excellent.
  • The DriveState singleton is a good way to manage the flow of vision data between the vision subsystem and the drivetrain.
  • The use of WPILib's Units system throughout the TunerConstants is great for type safety.

Suggestions:

  • Architecture Standards:

    • Command Composition Pattern: Consider creating commands to handle the vision pipeline switching and pose reset functionality. This will make it easier to integrate these actions into autonomous routines or driver control.
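      For example, a minimal sketch (assuming the PR's VisionSubsystem exposes setPipeline(int); the factory name here is just an example):
      public Command setPipelineCommand(VisionSubsystem vision, int pipelineIndex) {
          // Runs once and finishes immediately, so it composes cleanly into
          // autonomous sequences or driver-triggered bindings.
          return Commands.runOnce(() -> vision.setPipeline(pipelineIndex), vision);
      }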
    • Subsystem Organization: Great Job!
  • IgniteDrivetrain.java:

    • Consider renaming IgniteDrivetrain to DrivetrainSubsystem to better reflect its purpose in the project.
    • Consider using WPILib's built-in pose fusion (addVisionMeasurement with measurement standard deviations) instead of fusing manually:
      public void useVision(EstimatedRobotPose estimatedPose) {
          // Convert the 3D PhotonVision estimate to a 2D field pose.
          var visionMeasurement = estimatedPose.estimatedPose.toPose2d();
          var visionTime = estimatedPose.timestampSeconds;
          // x/y std devs in meters, heading std dev in radians.
          drivetrain.addVisionMeasurement(visionMeasurement, visionTime,
                  VecBuilder.fill(0.5, 0.5, Units.degreesToRadians(10)));
      }
    • DriveState is polled for vision estimates; consider instead passing the vision data to the drivetrain via a Supplier<List<VisionMeasurement>>, in the same spirit as the applyRequest method.
  • DriveState.java:

    • Consider making the DriveState class immutable. This will help with thread safety and prevent accidental modification of the state.
    • The current implementation of grabVisionEstimateList replaces the list with a new empty list. This could lead to a race condition if the vision subsystem is writing to the list at the same time. Consider using a lock or a concurrent data structure to prevent this.
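      One option, sketched here with a concurrent queue (this assumes the estimates are PhotonVision EstimatedRobotPose objects; adjust the element type to match DriveState):
      private final Queue<EstimatedRobotPose> visionEstimates = new ConcurrentLinkedQueue<>();

      public void addVisionEstimate(EstimatedRobotPose pose) {
          visionEstimates.add(pose); // safe to call from the vision subsystem
      }

      public List<EstimatedRobotPose> grabVisionEstimateList() {
          // Drain element-by-element; concurrent adds are never lost, they
          // simply land in the next drain.
          List<EstimatedRobotPose> drained = new ArrayList<>();
          EstimatedRobotPose pose;
          while ((pose = visionEstimates.poll()) != null) {
              drained.add(pose);
          }
          return drained;
      }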
  • CameraConstants.java:

    • The AprilTagFieldLayout is loaded statically. This is good for performance, but it means that the field layout cannot be changed at runtime. If you need to support different field layouts, you'll need to find a way to load the layout dynamically.
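      If that flexibility is ever needed, one sketch is lazy loading behind an accessor (kDefaultField is just an example; swap in whichever layout applies):
      private static AprilTagFieldLayout fieldLayout;

      public static synchronized AprilTagFieldLayout getFieldLayout() {
          // Loaded on first use instead of in a static initializer, so tests
          // or sim code can substitute a different layout via setFieldLayout.
          if (fieldLayout == null) {
              fieldLayout = AprilTagFieldLayout.loadField(AprilTagFields.kDefaultField);
          }
          return fieldLayout;
      }

      public static synchronized void setFieldLayout(AprilTagFieldLayout layout) {
          fieldLayout = layout;
      }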
    • The photonCameraTransform1 and photonCameraTransform2 are currently set to identity transforms. Make sure to update these with the actual camera poses relative to the robot.
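      For reference, a sketch with placeholder numbers (measure the real offsets on the robot, and check signs against WPILib's coordinate conventions):
      // Placeholder: camera 0.30 m forward of robot center, 0.20 m up,
      // pitched 15 degrees. Replace with measured values.
      public static final Transform3d photonCameraTransform1 =
              new Transform3d(
                      new Translation3d(0.30, 0.0, 0.20),
                      new Rotation3d(0.0, Units.degreesToRadians(-15.0), 0.0));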
    • These constants could be renamed to drop the numeric suffix; since the class is a singleton, there is no risk of a name collision.
  • VisionSubsystem.java:

    • The setPipeline method sets the pipeline index for both cameras to the same value. Is this intentional? If not, you should have separate methods for setting the pipeline index for each camera.
    • In calculateVisionMeasurement, the xyStds and thetaStd values are hardcoded. These values should be configurable, either through constants or preferences. Also, consider using AutoLog to log these values.
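      For example (the constant names are just suggestions):
      // Tunable defaults instead of magic numbers inside the method.
      public static final double kVisionXYStdDevMeters = 0.5;
      public static final double kVisionThetaStdDevRadians = Units.degreesToRadians(10.0);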
    • Consider adding a method to reset the pose estimator to a known pose. This can be useful for initializing the robot's position at the start of a match.
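      A minimal sketch, assuming a recent Phoenix 6 where the drivetrain exposes resetPose (older versions used seedFieldRelative):
      public void resetPoseEstimate(Pose2d pose) {
          // Re-seed odometry at a known field position, e.g. at auto init.
          drivetrain.resetPose(pose);
      }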
  • Performance and Efficiency:

    • Ensure that the periodic methods in VisionSubsystem and IgniteDrivetrain are not doing any unnecessary computations. These methods are called frequently, so any performance bottlenecks here can have a significant impact on robot performance.
  • Type-Safe Units:

    • The code makes great use of type-safe units, which is fantastic!
  • Safety & Error Handling:

    • Consider adding error handling to the vision subsystem to handle cases where the camera is disconnected or the vision processing fails.
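      For instance, a guard like this (PhotonLib's PhotonCamera exposes isConnected(); this assumes the estimation method returns an Optional):
      // Skip a disconnected camera rather than feeding stale or empty
      // results into the pose estimator.
      if (!camera.isConnected()) {
          return Optional.empty();
      }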

Questions:

  • What is the intended use of the AllianceState class? It currently only returns the alliance color, but it could be expanded to include other alliance-specific information.
  • How are you planning to handle the case where the vision system loses track of the robot's pose? Will you switch to a different localization method, or will you simply stop updating the odometry?

This is a solid start! Keep up the great work, and I'm excited to see how this vision system improves the robot's performance!


This review was automatically generated by AI. Please use your judgment and feel free to discuss any suggestions!

@y0shi (Member, Author) commented Nov 14, 2025

NOICE!

One thing stood out to me from the code review bot. There's an unlikely, but statistically possible, race condition.

The current implementation of grabVisionEstimateList replaces the list with a new empty list. This could lead to a race condition if the vision subsystem is writing to the list at the same time. Consider using a lock or a concurrent data structure to prevent this.

We should discuss a way to eliminate that.

And also, can you run spotlessApply to clean up any formatting?

github-actions bot commented

✓ Build successful and code formatting check passed!
