Imagine building AI models like LLMs and image classifiers directly in Java and running them efficiently on your GPU. Project Babylon introduces the experimental Code Reflection technology that lets you express machine-learning logic in plain Java, without Python or external model files.
It uses the Foreign Function & Memory (FFM) API to link your code to native runtimes such as ONNX Runtime for fast inference and GPU acceleration. For broader parallel workloads, Babylon’s Heterogeneous Accelerator Toolkit (HAT) offers a programming model for writing and dispatching compute kernels, enabling Java libraries to tap GPU power for high-performance parallel computing.
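To give a flavor of the FFM side of this story, here is a minimal, self-contained sketch (plain JDK 22+, no Babylon required) that binds a native C function, libc's `strlen`, from Java. The class name and helper method are illustrative choices, not part of any Babylon API; Babylon builds on this same FFM mechanism to reach runtimes like ONNX Runtime.

```java
import java.lang.foreign.Arena;
import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.Linker;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.MethodHandle;

// Illustrative FFM-only sketch (JDK 22+); Babylon itself is not required here.
public class FfmDemo {
    // Bind C's strlen(const char*) once via the native linker.
    private static final MethodHandle STRLEN = Linker.nativeLinker().downcallHandle(
            Linker.nativeLinker().defaultLookup().find("strlen").orElseThrow(),
            FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));

    // Copy the Java string into native memory and call strlen on it.
    static long nativeStrlen(String s) throws Throwable {
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment cString = arena.allocateFrom(s); // NUL-terminated UTF-8
            return (long) STRLEN.invoke(cString);
        }
    }

    public static void main(String[] args) throws Throwable {
        System.out.println(nativeStrlen("Hello, Babylon!")); // prints 15
    }
}
```

The same downcall machinery, scaled up to an inference runtime's C API, is what lets Babylon hand model execution off to native, GPU-accelerated code without leaving Java.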
In this session, we’ll explore Babylon’s upcoming features and how they connect Java with modern AI workloads.
Type: Learning Session (50 min)
Track: Machine Learning and Artificial Intelligence
Audience Level: Expert