“The era of humans writing code is over” —Ryan Dahl.
LLMs are getting better at generating code that works, but they still introduce vulnerabilities at a troubling rate. This session addresses the security risks that emerge when Java code is generated by GenAI assistants and shipped at scale: injection vulnerabilities that slip through without proper input validation, authorization bypasses, data leaks, and deserialization gadgets. In today's SDLC reality, code generation and ownership are fragmented. This session makes the case for runtime-first security:
If Java code is generated by prompts, then IAST and RASP are mandatory controls, not niche tools.
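As a hypothetical illustration of the kind of injection flaw the session covers (the class and method names here are invented for the sketch, not from the session), generated Java code often concatenates user input directly into a SQL string, whereas the safer shape keeps the query fixed and passes input as a bound parameter:

```java
// Hypothetical sketch: an injection pattern commonly seen in generated code.
public class InjectionSketch {

    // Vulnerable: user input is concatenated straight into the SQL text,
    // so input can rewrite the statement itself.
    static String unsafeQuery(String userId) {
        return "SELECT * FROM accounts WHERE id = '" + userId + "'";
    }

    // Safer shape: the query text is fixed; input travels as a bound
    // parameter (with JDBC this would be a PreparedStatement placeholder).
    static String safeQueryTemplate() {
        return "SELECT * FROM accounts WHERE id = ?";
    }

    public static void main(String[] args) {
        String malicious = "' OR '1'='1";
        String q = unsafeQuery(malicious);
        System.out.println(q);
        // The injected tautology becomes part of the SQL statement:
        System.out.println(q.contains("OR '1'='1") ? "INJECTED" : "safe");
        System.out.println(safeQueryTemplate());
    }
}
```

A runtime control such as IAST or RASP can flag the first pattern when tainted input reaches the SQL sink at execution time, regardless of who, or what, wrote the code.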
Type: Learning Session (50 min)
Track: Machine Learning and Artificial Intelligence
Audience Level:
Speaker: Doug Ennis