Defining and measuring quality across diverse agent platforms is essential as the line between developer-written code and agent-generated code blurs. Today, guardrails built for one agent platform rarely transfer to another.
This session uses an `AGENTS.md` file that persists across chat sessions and acts as a universal agent README, providing context, instructions, and references to integrated static analysis tools (Checkstyle, PMD, SpotBugs). This case study presentation includes a live demonstration of the pipeline and quantitative data analysis.
The demonstration shows initial agent-generated code failing static analysis checks. After the `AGENTS.md` file is introduced, subsequent generation yields roughly an 85% reduction in violations and a 60% increase in unit test coverage. Attendees learn to craft an `AGENTS.md` that ensures enterprise-ready, standards-compliant Java code and to leverage static analysis tools as verifiable quality guardrails.
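For illustration, a minimal `AGENTS.md` along the lines the session describes might look like the sketch below. The project layout, coverage threshold, and Maven goals here are assumptions for this example, not the presenter's actual file.

```markdown
# AGENTS.md (hypothetical example)

## Project context
- Java service built with Maven; enterprise coding standards apply.

## Quality guardrails
- Generated code must pass Checkstyle, PMD, and SpotBugs with zero new violations.
- New or changed classes require unit tests; target at least 80% line coverage.

## Verification commands
Run before declaring any change complete:

    mvn checkstyle:check pmd:check spotbugs:check
    mvn verify

## Rule references
- Checkstyle config: config/checkstyle/checkstyle.xml
- PMD ruleset: config/pmd/ruleset.xml
```

Listing the verification commands in the file lets the agent self-check its output against the same tools a CI pipeline would enforce, rather than relying on after-the-fact review.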
Type: Learning Session (50 min)
Track: Application Performance, Manageability, and Tooling
Audience Level: Intermediate
Speaker: Shuchita Prasad