JavaOne 2026 Session

Writing GPU-Ready AI Models in Pure Java with Babylon

Summary

Imagine building AI models such as LLMs and image classifiers directly in Java and running them efficiently on your GPU. Project Babylon introduces Code Reflection, an experimental technology that lets you express machine-learning logic in plain Java, without Python or external model files.
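To make "machine-learning logic in plain Java" concrete, here is a minimal sketch of the kind of code involved: a single dense layer with a ReLU activation, written as an ordinary Java method. The class, method, and parameter names are illustrative assumptions, not Babylon API; in Babylon, a method like this could additionally be marked for Code Reflection so its code model can be captured and transformed.

```java
// Illustrative model logic in plain Java: y = relu(W * x + b).
// Names and shapes are assumptions for this sketch, not Babylon API.
public class DenseLayer {
    static double[] forward(double[][] w, double[] bias, double[] x) {
        double[] y = new double[w.length];
        for (int i = 0; i < w.length; i++) {
            double sum = bias[i];
            for (int j = 0; j < x.length; j++) {
                sum += w[i][j] * x[j];   // dot product of row i with x
            }
            y[i] = Math.max(0.0, sum);   // ReLU activation
        }
        return y;
    }

    public static void main(String[] args) {
        double[][] w = {{1.0, -1.0}, {0.5, 0.5}};
        double[] bias = {0.0, -1.0};
        double[] x = {2.0, 1.0};
        // -> [1.0, 0.5]
        System.out.println(java.util.Arrays.toString(forward(w, bias, x)));
    }
}
```

Because this is plain Java, the same method remains debuggable and testable with ordinary tooling, which is precisely the appeal of expressing model logic without external model files.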

Babylon uses the Foreign Function & Memory (FFM) API to link your code to native runtimes such as ONNX Runtime for fast inference and GPU acceleration. For broader parallel workloads, Babylon’s Heterogeneous Accelerator Toolkit (HAT) offers a programming model for writing and dispatching compute kernels, enabling Java libraries to tap GPU power for high-performance parallel computing.
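The FFM linkage mentioned above can be sketched with a small self-contained example against the standard C library rather than ONNX Runtime (whose bindings are far larger): creating a downcall handle for `strlen` using the final FFM API from JDK 22+. The class and method names here are my own; only the `java.lang.foreign` calls are real API.

```java
import java.lang.foreign.*;
import java.lang.invoke.MethodHandle;

// Minimal FFM sketch: bind a native function (C's strlen) to a Java
// MethodHandle. Babylon relies on the same FFM mechanism to reach
// native runtimes such as ONNX Runtime; this demo only needs libc.
public class FfmDemo {
    static long nativeStrlen(String s) throws Throwable {
        Linker linker = Linker.nativeLinker();
        // Find strlen among the default (C library) symbols and
        // describe its signature: size_t strlen(const char *s).
        MethodHandle strlen = linker.downcallHandle(
                linker.defaultLookup().find("strlen").orElseThrow(),
                FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));
        try (Arena arena = Arena.ofConfined()) {
            // Copy the Java string into off-heap memory as a C string.
            MemorySegment cstr = arena.allocateFrom(s);
            return (long) strlen.invoke(cstr);
        }
    }

    public static void main(String[] args) throws Throwable {
        System.out.println(nativeStrlen("Babylon")); // prints 7
    }
}
```

The confined `Arena` scopes the native allocation to the try block, so the off-heap memory is freed deterministically when the call returns.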

In this session, we’ll explore Babylon’s upcoming features and how they connect Java with modern AI workloads.

Profile

Type: Learning Session (50 min)

Track: Machine Learning and Artificial Intelligence

Audience Level: Expert

Speakers:
  • Lize Raes
  • Ana-Maria Mihalceanu

Session: Tuesday, March 17th at 5:00 PM in Auditorium