Enterprise and government document systems hold terabytes of valuable unstructured information, yet most still rely on keyword and metadata search with little semantic context. Retrieval-Augmented Generation (RAG) promises a breakthrough, but tutorials rarely prepare you for regulated, large-scale environments.
In this talk, we'll share lessons from building a RAG stack with Spring Boot, Elasticsearch, LangChain4j, Docker, and ActiveMQ, using both Azure OpenAI and Ollama. Expect concrete insights on document chunking, enforcing access control, and keeping LLMs grounded in facts: practical takeaways for anyone bringing RAG from demo to production.
Type: Learning Session (50 min)
Track: Machine Learning and Artificial Intelligence
Audience Level: Intermediate
Speaker: Susanne Pieterse