Brief
Anomalies in a video's quality, framerate, and visual elements (such as compression artifacts) could pose a problem when it comes to false positives.
Publicly shaming someone on a positive match for cheating is not going to end well if that match turns out to be the detection algorithm mistaking some anomaly in the submitted video for evidence of cheating.
I'm certain this question has been raised many times by now, but I wasn't able to find any public mention of it in any repository belonging to this organization. In hindsight, I should have spent more time looking around before opening this discussion, because I would have found an existing discussion related to video quality and the handling of inadequate evidence. So I'll restate the obvious here for the sake of discussing how we can overcome this early on, rather than dealing with it in hindsight after someone has been wrongfully shamed.
For those who aren't familiar with the terminology or the potential issue at hand, please allow me to elaborate...
Elaboration
A False Positive is when a test result comes back wrongly claiming a positive match.
In this case, a false positive would mean the AI flagged something in the footage as cheating when, in reality, the footage it was given was of inadequate quality to accurately determine whether anyone was cheating at all.
Weaponized Shame is a term for when one or more people target someone with the intention of publicly defaming them, usually to draw more people into joining in on the bullying.
While we can't guarantee that everyone who uses this resource will abide by any "good faith" agreements, we can do our part by presenting this tool in a way that makes clear we won't stand for it being used to stir up pointless animosity for the sake of clout, schadenfreude, catharsis, monetary incentive, etc.
Suggestions
We could make use of a content ID-style system to determine whether footage given to the bot is adequate to serve as evidence of cheating. The bot would then either mark inadequate footage accordingly or outright reject footage that doesn't meet the quality standards needed to minimize false positives.
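As a rough sketch of what that pre-screening could look like (the use of OpenCV and the specific thresholds here are assumptions for illustration, not anything the project has settled on):

```python
# Hypothetical pre-screening gate: flag or reject footage that is too
# low-quality to be judged fairly. All thresholds are placeholders.
import cv2  # pip install opencv-python

MIN_WIDTH, MIN_HEIGHT = 1280, 720   # assumed minimum resolution
MIN_FPS = 30                        # assumed minimum framerate
MIN_DURATION_SEC = 10               # assumed minimum clip length

def screen_footage(path: str) -> tuple[bool, str]:
    """Return (accepted, reason) based on container-level properties only."""
    cap = cv2.VideoCapture(path)
    if not cap.isOpened():
        return False, "could not open video"
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = cap.get(cv2.CAP_PROP_FPS) or 0.0
    frames = cap.get(cv2.CAP_PROP_FRAME_COUNT) or 0.0
    cap.release()

    if width < MIN_WIDTH or height < MIN_HEIGHT:
        return False, f"resolution {width}x{height} is below the minimum"
    if fps < MIN_FPS:
        return False, f"framerate {fps:.1f} is below the minimum"
    if frames / fps < MIN_DURATION_SEC:
        return False, "clip is too short to judge"
    return True, "passed basic quality screening"

print(screen_footage("submission.mp4"))
```

Note that this only catches container-level problems; heavy compression artifacts would still need a separate heuristic or model to detect.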
Another way of alleviating this is the "closet hacker" system that's been discussed before, wherein one or more cheaters willingly donate footage of their cheats in action. That footage can be used to calibrate the bot, improving the accuracy of its detection whenever it's given more footage of the same cheats in action.
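One way such donated footage could feed into calibration is by using it, alongside known-clean clips, to pick a decision threshold that keeps the false-positive rate under a chosen target. This is only a sketch under assumed names and numbers; the score values, the target rate, and the made-up distributions are all hypothetical:

```python
# Hypothetical threshold calibration from labelled clips.
# Scores stand in for whatever suspicion value (0..1) the model produces.
import numpy as np

def calibrate_threshold(clean_scores, target_fpr=0.001):
    """Pick the lowest cutoff whose false-positive rate on known-clean
    clips stays at or below target_fpr."""
    return float(np.quantile(np.asarray(clean_scores), 1.0 - target_fpr))

def recall_on_cheats(cheat_scores, threshold):
    """Fraction of donated cheat clips that the cutoff would catch."""
    return float((np.asarray(cheat_scores) >= threshold).mean())

# Made-up score distributions purely for illustration:
clean_scores = np.random.beta(2, 8, size=5000)   # clean clips skew low
cheat_scores = np.random.beta(8, 2, size=200)    # donated cheat clips skew high

t = calibrate_threshold(clean_scores, target_fpr=0.001)
print(f"threshold={t:.3f}, "
      f"recall on donated cheat footage={recall_on_cheats(cheat_scores, t):.1%}")
```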
If there were some way to work in tandem with existing anti-cheat systems without exposing the methodology to anyone who would use it to strengthen their cheats, that could come in handy, since the work has already been done for us. All we would have to do is compare the submitted footage against footage already known to show cheating and quickly return the pre-determined result, instead of wasting CPU time on the same footage over and over ad infinitum.
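A minimal sketch of that "don't re-analyse what we've already judged" idea, assuming nothing fancier than a fingerprint of the file contents (a real system would want a perceptual hash so re-encoded copies still match):

```python
# Hypothetical verdict cache keyed by a fingerprint of the submitted clip.
# SHA-256 of the raw bytes only matches bit-identical files; this is just
# to illustrate the caching flow, not a production fingerprint.
import hashlib
import json
from pathlib import Path

CACHE_FILE = Path("verdict_cache.json")

def fingerprint(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def get_verdict(path: str, analyse) -> dict:
    """Return a cached verdict for footage we've seen before; otherwise run
    the expensive `analyse` callable (e.g. the AI pipeline) and cache it."""
    cache = json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}
    key = fingerprint(path)
    if key in cache:
        return cache[key]          # pre-determined result, no CPU time spent
    verdict = analyse(path)
    cache[key] = verdict
    CACHE_FILE.write_text(json.dumps(cache, indent=2))
    return verdict
```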
Discussion Points
How accurate should the AI detection be before we can consider it solid evidence of cheating? (A rough worked example follows this list.)
How should we handle evidence that is of poor or inadequate quality?
How will we be able to determine false positives?
How long will it be before this project yields its first controversial case and inevitably gets someone riled up?
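To put the accuracy question in perspective, here is a back-of-the-envelope calculation with entirely made-up numbers (the accuracy figures and the 1-in-200 cheater rate are assumptions for illustration only): when genuine cheaters are rare, even a detector that sounds accurate produces a surprising share of false accusations.

```python
# Hypothetical base-rate illustration: how often a "positive" is actually wrong.
cheater_rate = 0.005        # assume 1 in 200 submitted clips shows real cheating
true_positive_rate = 0.99   # assume the detector catches 99% of real cheats
false_positive_rate = 0.01  # assume it wrongly flags 1% of clean clips

p_flagged = (cheater_rate * true_positive_rate
             + (1 - cheater_rate) * false_positive_rate)
p_cheating_given_flag = cheater_rate * true_positive_rate / p_flagged

print(f"Probability a flagged player is actually cheating: {p_cheating_given_flag:.1%}")
# With these assumed numbers, only about a third of flagged players would
# actually be cheating; the other two thirds would be wrongly accused.
```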
Conclusion
This project is mostly aimed at detecting cheaters who elude most forms of anti-cheat by using AI assistance that looks "human enough" to pass our collective scrutiny undetected. But since the goal of this project is also to store the results of these detections publicly, those results could be weaponized if they are mishandled.
The obvious targets for this are streamers who are successful enough to make it worthwhile for someone to blackmail them with the prospect of their entire career being dragged down by a false positive. If that happens often enough, people could write this project off as yet another failed anti-cheat mechanism. All someone would have to do is alter recorded footage in a way that makes the AI suspicious; something as basic as frame skipping might do the trick if it ends up looking like an aimbot snapping onto an enemy player.
There's no easy answer, nor a single answer, to this issue, but the potential fallout makes it worth serious consideration in the long run.