This project analyzes VTOM logs with AI. Several LLM providers are supported. The script retrieves logs via the API, extracts job instructions and context, then uses an LLM to analyze errors and propose solutions. The results are sent by email to the specified recipients. Both Azure AD and SMTP are supported.
- Automatic analysis of VTOM logs
- Support for multiple LLM providers: Groq, OpenAI, Anthropic Claude, Google Gemini, Mistral AI, Together AI, Cohere
- Structured analysis with identification of errors, causes and solutions
- French or English summary for quick understanding
- Robust error handling with fallback
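As a minimal sketch of the fallback idea, the analyzer can try each configured provider in order and return the first successful result. The function and provider names below are illustrative, not the project's actual implementation:

```python
# Hypothetical sketch of LLM provider fallback: try each configured
# provider in turn and return the first successful analysis.
# The provider callables here are stand-ins for real API clients.

def analyze_with_fallback(prompt, providers):
    """Try each (name, analyze_fn) pair; return the first result."""
    errors = {}
    for name, analyze in providers:
        try:
            return name, analyze(prompt)
        except Exception as exc:  # e.g. network errors, quota limits
            errors[name] = exc
    raise RuntimeError(f"All providers failed: {errors}")

# Example with stub providers:
def flaky(prompt):
    raise ConnectionError("quota exceeded")

def working(prompt):
    return f"analysis of: {prompt}"

name, result = analyze_with_fallback(
    "job failed", [("groq", flaky), ("openai", working)]
)
# name == "openai"; the groq failure was recorded and skipped
```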
No Support and No Warranty are provided by Absyss SAS for this project and related material. The use of this project's files is at your own risk.
Absyss SAS assumes no liability for damage caused by the usage of any of the files offered here via this Github repository.
Consulting days can be requested to assist with the implementation.
- Visual TOM 7.1.2 or greater
- API Token for an LLM provider
- Python 3.10 or greater on Visual TOM server
The project now supports 7 different LLM providers.
- Install the library related to the provider you want to use (see requirements.txt)
- Configure the API key in the .env file
The following parameters are optional and can be configured in the .env file:
- Model
- Temperature
- Maximum number of tokens
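As a sketch, the provider configuration in `.env` might look like the following; the variable names are illustrative, so check the project's own `.env` template for the actual keys:

```shell
# Hypothetical .env sketch -- variable names are illustrative
LLM_PROVIDER=openai      # groq | openai | anthropic | gemini | mistral | together | cohere
LLM_API_KEY=your-api-key-here

# Optional tuning parameters
LLM_MODEL=gpt-4o
LLM_TEMPERATURE=0.2
LLM_MAX_TOKENS=2048
```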
The script supports two different methods of sending emails:
- Azure AD + Microsoft Graph
- Classic SMTP
You can configure the method to use in the .env file.
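Selecting the sending method from the environment can be sketched as below; the `EMAIL_METHOD` variable name is an assumption for illustration, and the project's actual key may differ:

```python
import os

def pick_email_method(env=os.environ):
    """Return the configured email method ('graph' or 'smtp').

    Hypothetical sketch: the EMAIL_METHOD key is illustrative;
    the real .env variable name may differ.
    """
    method = env.get("EMAIL_METHOD", "smtp").lower()
    if method not in ("graph", "smtp"):
        raise ValueError(f"Unknown email method: {method}")
    return method

# pick_email_method({"EMAIL_METHOD": "graph"}) -> "graph"
# pick_email_method({}) -> "smtp" (default)
```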
You can configure the VTOM connection in the .env file:
- VTOM server
- VTOM port
- VTOM API key
- VTOM Domain API version
- VTOM Monitoring API version
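A sketch of the corresponding `.env` entries follows; the variable names and placeholder values are illustrative only:

```shell
# Hypothetical VTOM connection sketch -- variable names are illustrative
VTOM_HOST=vtom.example.com
VTOM_PORT=30080
VTOM_API_KEY=your-vtom-api-key
VTOM_DOMAIN_API_VERSION=...
VTOM_MONITORING_API_VERSION=...
```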
Create an alarm in VTOM to trigger the script in case of error.
python vtom_api_analyzer.py -f {VT_JOB_LOG_OUT_NAME} -e {VT_ENVIRONMENT_NAME} -a {VT_APPLICATION_NAME} -j {VT_JOB_NAME} --to {VT_EMAIL_RECIPIENTS} --agent {VT_JOB_HOSTS_ERROR}
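The command line above can be sketched with `argparse`. The short flags mirror the example command; the long option names and help strings below are assumptions, since the actual script may define them differently:

```python
import argparse

# Sketch of the CLI shown above; short flags come from the example
# command, long names and help text are illustrative.
parser = argparse.ArgumentParser(
    description="Analyze a VTOM job log with an LLM"
)
parser.add_argument("-f", "--file", required=True, help="job log file")
parser.add_argument("-e", "--environment", required=True, help="VTOM environment name")
parser.add_argument("-a", "--application", required=True, help="VTOM application name")
parser.add_argument("-j", "--job", required=True, help="VTOM job name")
parser.add_argument("--to", required=True, help="email recipients")
parser.add_argument("--agent", required=True, help="agent(s) in error")

# Parse a sample invocation instead of sys.argv for demonstration:
args = parser.parse_args(
    ["-f", "job.log", "-e", "PROD", "-a", "APP", "-j", "JOB1",
     "--to", "ops@example.com", "--agent", "agent1"]
)
# args.environment == "PROD"
```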
It is possible to configure the language of the analysis and the email in the .env file (optional).
The email contains the error analysis, the job's instruction (or an external link if the instruction is external), the job context (variables, etc.), and the job logs as attachments. See the email_example.html file for an example of the email.
Multi-agent jobs are not supported.
If the instruction is external, the LLM will not be able to analyze it.
This project is licensed under the Apache 2.0 License - see the LICENSE file for details
Absyss SAS has adopted the Contributor Covenant as its Code of Conduct, and we expect project participants to adhere to it. Please read the full text so that you can understand what actions will and will not be tolerated.