Need for Better Documentation with Sample Examples, Tutorials in Infosys Responsible AI Toolkit #36
Replies: 3 comments 1 reply
-
Thanks for your valuable feedback. We really appreciate your effort in installing and trying out the toolkit. We will address all of your queries in an upcoming release; in the meantime, we request that you share a few details to help us address your requirements: a) confirm the model you would like to load and test. Kindly share these details by e-mail (infosysraitoolkit@infosys.com) so we can provide precise Jupyter notebook samples and documentation.
-
Hi Sagar, thanks for sharing the requested details. We are working on code samples and documentation and will reply to your e-mail with these details very soon.
-
Hi Sagar, as mentioned in our latest email response, while analyzing your requirements we observed that this functionality is not in the current scope of the toolkit's features. We will assess its priority and consider it for an upcoming quarterly release; the roadmap will be updated once we finalize. We would appreciate it if you could point out any discrepancies in the information we have published on GitHub. Please find the currently released toolkit details below for your reference. The Responsible-AI-Security module consists of the following two sub-modules, which need to be integrated for effective use. In the meantime, we are happy to provide some public references that may help you build your model.
-
Dear @InfosysResponsibleAI Team,
Thank you for open-sourcing the Responsible AI Security Toolkit — it's a promising and much-needed initiative to advance AI robustness and trustworthiness. However, during hands-on usage of the toolkit (in my case, evaluating a YOLOv8 model with responsible-ai-security), I encountered several challenges that significantly hinder the first-time user experience.
Challenges Observed
1. Lack of a Clear Getting Started Guide:
While installation steps are provided, there is no structured tutorial or walkthrough guiding users through an end-to-end evaluation process. It's unclear what steps should follow once the server is running.
2. Absence of Sample Projects and Tutorials:
The repository lacks example configurations, datasets, and Python notebooks demonstrating how to use the toolkit with various model types. Having sample use cases would significantly help new users understand the flow.
3. Unclear Execution Flow:
The intended sequence of actions is ambiguous. For example, after launching the application, there is no clear direction on how to load a model, execute evaluations, or analyze the results. The current Swagger interface leaves many questions unanswered for new users.
✅ Suggested Improvements
Provide end-to-end sample use cases (e.g., object detection, tabular classification, NLP) with working folder structures, configuration files, and supported model formats.
Include Jupyter notebook examples to demonstrate how to:
Load models and datasets
Run specific attacks
Evaluate metrics and interpret outputs
Create short tutorial videos explaining each workflow right from installation to running a sample example and analyzing the results.
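To make the notebook request concrete, here is a minimal, self-contained sketch of the kind of end-to-end example that would help: load a model and dataset, run an attack, and compare clean vs. adversarial accuracy. This deliberately does not use the toolkit's API (which is what the documentation is missing); it uses a toy NumPy linear classifier and an FGSM-style perturbation, and the `fgsm_attack` helper is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary "model": predict 1 when w . x > 0 (stand-in for a real model)
w = np.array([1.0, -2.0, 0.5])

def predict(x):
    return (x @ w > 0).astype(int)

def fgsm_attack(x, y, eps):
    """Illustrative FGSM for a linear scorer: perturb each input in the
    sign direction that pushes its score toward the wrong label."""
    # For label 1 we want the score to drop (gradient -w), for label 0 to rise (+w)
    grad = np.where(y == 1, -1.0, 1.0)[:, None] * w
    return x + eps * np.sign(grad)

# "Dataset": points labelled by the model itself, so clean accuracy is 100%
X = rng.normal(size=(200, 3))
y = predict(X)

X_adv = fgsm_attack(X, y, eps=0.5)

clean_acc = (predict(X) == y).mean()
adv_acc = (predict(X_adv) == y).mean()
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

A tutorial notebook following this shape — substituting the toolkit's actual model-loading, attack, and metric calls for the toy pieces above — would answer most of the "what comes after the server is running?" questions.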