Join us for a day of technical talks and hands-on workshops about Google Cloud and Generative AI
Mar 13, 2025
09:00–19:00 GMT+3
Huzur, Maslak Ayazağa Cd. No:4, Sarıyer
34396, İstanbul, Türkiye
Get started building with expert guidance from the Google Cloud Advocacy team.
Join the Google Cloud Advocacy team for a day of technical sessions, hands-on demos, live coding, networking, workshops, and more.
Register now to join us in person at UNIQ Hall in Istanbul for a day of learning and connection. Don’t forget to bring your laptop!
Whether you're an experienced developer or just getting started, you'll learn how to build on Google Cloud with generative AI and get the chance to dive deep with our product experts.
Important: All attendees should bring their own laptops for the hands-on labs.
Register now to reserve your spot. Limited seats available.
Photographs and videos will be taken at this event and may be used for promotional purposes, such as on the event's website, social media, and other marketing materials.
By registering for this event, you consent to the use of your image and likeness in our marketing materials. If you do not wish to be included in such materials, please inform your Google Cloud representative as soon as possible.
Senior Director and Chief Evangelist at Google Cloud
Google Cloud Country Manager
Developer Advocate at Google
Developer Advocate at Google
Developer Relations Engineer at Google
Cloud Architect at Google
Developer Advocate at Google
EMEA Technical Practice Lead
Strategic Cloud Engineer at Google
Cloud Developer Advocate at Google
Developer Advocate at Google
Developer Advocate at Google
Time | Session |
---|---|
09:00–10:00 | Registration & Breakfast |
10:00–10:15 | Welcome & Intro |
10:15–11:00 | Developer Keynote: What's the future of software development? Nobody knows what the future holds, but it's obvious that software development is changing. How? In what ways? In this talk filled with live demos, Richard will explore seven areas that are impacting the way we will build software. From AI to platforms to open source, we'll explore what's changing, and how you can be ready for it. |
11:00–11:45 | Gemini 2.0 for developers Discover Gemini 2.0, Google's new experimental generative AI model. Learn to build real-time voice and video apps with the Multimodal Live API, integrate Google Search to create advanced workflows, and detect objects in images and video using natural language prompts. Explore Gemini 2.0's improved multimodal understanding, coding, and complex instruction following – making it ideal for developing AI agents. Get a preview of unreleased, allowlist-only features like customizable speech generation (control tone, pace, accent, and intonation) and image generation. Plus, learn where to find the notebooks and tutorials for going further. After this session, you'll be ready to build with Gemini 2.0 on Google Cloud and other platforms. |
11:45–12:15 | A practical introduction to retrieval augmented generation (RAG) This talk introduces Retrieval-Augmented Generation (RAG), a technique that grounds Large Language Model (LLM) responses in retrieved data to enhance accuracy and relevance. Through practical examples with LangChain4j in Java, the session covers RAG fundamentals, including architecture, implementation, and basic chunking. It then progresses to advanced techniques: sophisticated chunking, query compression, routing, metadata filtering, content aggregation, and re-ranking. (A short illustrative code sketch follows this morning block.) |
11:45–12:15 | The ultimate Cloud Run guide: From zero to production Cloud Run is a fully managed platform that enables you to run your code directly on top of Google's scalable infrastructure. Cloud Run is simple, automated, and designed to make you more productive. Learn how to get started, then dive deep into even the most advanced features to make the most of Cloud Run. (A minimal example service is also sketched after this block.) |
13:00–14:00 | Lunch |
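
For readers who want a concrete picture of the RAG session's core idea before attending, here is a minimal, framework-free Python sketch of the retrieve-then-generate loop. It uses a toy bag-of-words similarity in place of real embeddings and stops at printing the augmented prompt; the session itself uses LangChain4j in Java, so every name here is illustrative only.

```python
# Minimal, framework-free sketch of Retrieval-Augmented Generation (RAG).
# The real session uses LangChain4j in Java; this toy Python version only
# illustrates the flow: chunk -> index -> retrieve -> augment prompt -> generate.
import math
from collections import Counter

def chunk(text: str, size: int = 40) -> list[str]:
    """Naive fixed-size word chunking (real pipelines use smarter splitters)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real system would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Ground the model's answer in retrieved context instead of its memory alone."""
    joined = "\n---\n".join(context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

if __name__ == "__main__":
    document = "Cloud Run is a fully managed platform. It scales containers automatically."
    question = "How does Cloud Run scale?"
    prompt = build_prompt(question, retrieve(question, chunk(document)))
    print(prompt)  # In a real app, send this prompt to an LLM such as Gemini.
```
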
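As a companion to the Cloud Run session, here is a minimal sketch of the shape of service Cloud Run expects: an HTTP server listening on the port passed in the `PORT` environment variable, which Cloud Run injects at runtime. It uses only the Python standard library; the response text is made up for illustration, and a real service would typically use a web framework and be packaged as a container.

```python
# A minimal HTTP service of the shape Cloud Run runs: listen on $PORT, answer requests.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from a Cloud Run-style service!\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Cloud Run sets PORT; default to 8080 for local testing.
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```

From there, deployment is typically a single `gcloud run deploy` from source or from a container image; the session covers the full path to production.
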
Main stage | |
14:00–14:30 | Code Your Way Around Istanbul with Gemini in Google Cloud So much to do, so little time! In this session, we'll build a web application for exploring new and exciting places in Istanbul. But with so many attractions, restaurants, and hidden gems, creating a comprehensive and engaging app can be a real challenge. In a live coding session, we'll demonstrate how Gemini Code Assist helps us generate code for data fetching, add some AI magic, and, of course, write unit tests. See how Gemini in Google Cloud helps us bring our ideas to life! Whether you're a local or just visiting for the event, get ready for a whirlwind tour of Istanbul – through the lens of code! This session is perfect for developers of all levels who are curious about the future of coding with AI. |
Break out rooms | |
14:00–16:00 | Workshop 1: Gemini with Vertex AI and LangChain This hands-on workshop gets you building with Gemini in Java or Python. You’ll learn how to send prompts to Gemini, handle streaming and synchronous responses, build a chatbot, process text and image inputs, extract structured data from raw text, use prompt templates, perform sentiment analysis, and implement Retrieval-Augmented Generation (RAG). You’ll even learn how to use Gemma, Google’s open model, on your local machine with Ollama. You'll leave with the skills to use Gemini in your Java or Python projects immediately with LangChain and LangChain4j. (A short starter sketch follows this workshop listing.) Bring your laptop 💻 Limited capacity: please apply here |
14:00–16:00 | Workshop 2: Cloud Runner 2049 Synth City is under threat. A powerful corporation's sinister plans have been uncovered by Synthia, a friendly AI with a rebellious spirit. In Cloud Runner 2049, you'll join Synthia to save the city. Immerse yourself in a neon-drenched world, master Google Cloud skills like Pub/Sub, Cloud Run, and Cloud Storage through engaging quests, and thwart the corporation's catastrophic plot. Bring your laptop 💻 Limited capacity: please apply here |
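
To give a feel for where Workshop 1 starts, here is a small sketch of prompting Gemini through the Vertex AI Python SDK: one synchronous call and one streamed response. The project ID, region, and model name are placeholders, and the SDK surface evolves, so treat this as an illustrative sketch rather than the workshop's exact code (the workshop also covers LangChain and LangChain4j).

```python
# Sketch: synchronous and streaming calls to Gemini via the Vertex AI Python SDK.
# Placeholders: project ID, region, and model name.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="your-project-id", location="us-central1")
model = GenerativeModel("gemini-1.5-flash")

# Synchronous: one prompt in, one full response out.
response = model.generate_content("Suggest three things to see in Istanbul in one day.")
print(response.text)

# Streaming: print tokens as they arrive, useful for chat-style UIs.
for chunk in model.generate_content("Now summarize that plan in one sentence.", stream=True):
    print(chunk.text, end="")
print()
```
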
Main stage | |
14:30–15:00 | Gemini’s 2 million token context window in action Large language models (LLMs) have revolutionized natural language problem-solving, but most operate with limited memory – struggling to recall details from even short conversations. Google’s Gemini has an unprecedented context window of 2 million tokens. What does this mean in practice? In this session, we’ll answer that question through illustrated examples and demos. |
15:00–15:30 | Hack your pod! Ever wondered how easy it is to turn your Kubernetes cluster into a playground for malicious actors? Join me as we embark on a thrilling (and slightly terrifying) journey to demonstrate how a single vulnerable pod can be compromised and become a springboard for a full-blown security nightmare, hopping from pod to pod, like a caffeinated rabbit on a mission. But don't despair! This isn't just a horror show. We'll then explore the powerful security and networking features of Google Kubernetes Engine, transforming our vulnerable landscape into an impenetrable fortress (or at least something much harder to crack). Prepare for an engaging session with live demos, attack scenarios, and practical solutions to keep your Kubernetes deployments safe and sound. Because let's face it, nobody wants their cluster to become the next headline. |
16:00–16:30 | Why does your next Generative AI app need multimodal embeddings? This fast-paced session dives into how seamlessly blending text, images, audio, and more can revolutionize your GenAI applications. We'll explore the key advantages – from slashing costs and boosting accuracy to delivering lightning-fast response times. Through compelling live demos, you'll witness firsthand why incorporating multimodal embeddings isn't just a good idea – it's essential for building truly groundbreaking AI. Don't get left behind; discover the future of Generative AI. (See the short conceptual sketch below.) |
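
To make the "why multimodal embeddings" argument concrete, here is a small conceptual sketch of cross-modal retrieval: text and images embedded into one shared vector space, then ranked by cosine similarity. The `embed_text` / `embed_image` helpers are hypothetical stand-ins for a real multimodal embedding model; they return hard-coded vectors so the ranking logic runs as-is.

```python
# Conceptual sketch: retrieve images for a text query using shared-space embeddings.
# embed_text / embed_image are hypothetical stand-ins for a real multimodal model;
# they return fixed vectors so the ranking below is runnable without any API calls.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def embed_text(text: str) -> list[float]:
    return {"sunset over the Bosphorus": [0.9, 0.1, 0.2]}.get(text, [0.3, 0.3, 0.3])

def embed_image(name: str) -> list[float]:
    fake_index = {
        "bosphorus_sunset.jpg": [0.88, 0.15, 0.25],
        "grand_bazaar.jpg": [0.10, 0.85, 0.30],
        "simit_breakfast.jpg": [0.20, 0.25, 0.90],
    }
    return fake_index[name]

query = "sunset over the Bosphorus"
images = ["bosphorus_sunset.jpg", "grand_bazaar.jpg", "simit_breakfast.jpg"]
ranked = sorted(images, key=lambda img: cosine(embed_text(query), embed_image(img)), reverse=True)
print(ranked)  # Text and images share one vector space, so no captions are needed to match them.
```
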
Break out rooms | |
16:00–17:30 | Workshop 3: Gemini Multimodality and Long Context 2025 is the year of AI agents. Advanced LLMs like Gemini are enabling natural, live interaction through multimodal interfaces and bidirectional APIs. In this workshop, learn how to interact with AI agents in real time (using audio and video), analyze videos with transcription and speaker identification, and build a knowledge graph from massive text data (1M tokens). No experience or special setup needed—just bring your computer, a browser, and your curiosity! Bring your laptop 💻 Limited capacity: please apply here |
16:00–17:30 | Workshop 4: Breathing new life into a legacy application Your boss wants you to revive an old application. The original developer left the company, and the programming language used is five years past its end of life. However, the application continues to have strong business value. You’ve heard that the application has clear-text passwords in the code (apparently the developer was in a hurry to change roles and didn't anticipate the application’s success), and it was deployed manually to production – nobody remembers how… It's time to modernize the app with the latest technologies and make it 12-factor compliant, but you don't want to learn an old and unsupported language. What can you do? Let’s find out! Bring your laptop 💻 Limited capacity: please apply here |
Main stage | |
16:30–17:00 | Generative AI with AlloyDB Omni and open models AlloyDB Omni is Google's PostgreSQL-compatible database that can be deployed anywhere. Similarly, open models, with their downloadable weights, allow you to run inference anywhere. Both are a natural fit for a standard containerized environment like Kubernetes, and together they provide a great solution for restricted environments with limited cloud access, strict data locality needs, or a need for complete control over your AI infrastructure. In this talk, you'll learn how to deploy AlloyDB Omni alongside Hugging Face's TEI (Text Embeddings Inference), an open inference server for embedding models, within a GKE cluster on Google Cloud. You'll see how to integrate these components, enabling AI-powered features like semantic search and content generation directly within your database, without relying on additional cloud services. (A short semantic-search sketch follows the agenda below.) |
17:00–17:30 | Beyond the Prompt: Evaluating, Testing, and Securing LLM Applications When you change prompts or modify the Retrieval-Augmented Generation (RAG) pipeline in your LLM applications, how do you know it’s making a difference? You don’t—until you measure. But what should you measure, and how? Similarly, how can you ensure your LLM app is resilient against prompt injections or avoids providing harmful responses? More robust guardrails on inputs and outputs are needed beyond basic safety settings. In this talk, we’ll explore evaluation frameworks such as Vertex AI Evaluation, DeepEval, and Promptfoo to assess LLM outputs, understand the types of metrics they offer, and see how these metrics are useful. We’ll also dive into testing and security frameworks like LLM Guard to ensure your LLM apps are safe and limited to precisely what you need. (An evaluation-loop sketch also follows the agenda below.) |
17:30–18:00 | Wrap up, survey, raffle |
18:00–19:00 | Networking |
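
To preview the AlloyDB Omni session's end-to-end idea, here is a small Python sketch of semantic search against a PostgreSQL-compatible database with pgvector, using embeddings served by a Text Embeddings Inference (TEI) container. The hostnames, table and column names, connection string, and the exact shape of the TEI endpoint are assumptions for illustration only; the session shows the real deployment on GKE.

```python
# Sketch: semantic search against AlloyDB Omni (pgvector) using embeddings from a
# TEI container. Hostnames, DSN, schema, and the TEI endpoint shape are assumed.
import psycopg2
import requests

TEI_URL = "http://tei-service:8080/embed"  # in-cluster TEI service (assumed name/port)
DB_DSN = "host=alloydb-omni dbname=app user=app password=secret"  # placeholder DSN

def embed(text: str) -> list[float]:
    # TEI-style request: send the text, get back a list of embedding vectors.
    resp = requests.post(TEI_URL, json={"inputs": text}, timeout=10)
    resp.raise_for_status()
    return resp.json()[0]

def search(query: str, k: int = 5) -> list[str]:
    vector = "[" + ",".join(str(x) for x in embed(query)) + "]"  # pgvector literal
    with psycopg2.connect(DB_DSN) as conn, conn.cursor() as cur:
        # pgvector's <=> operator orders rows by cosine distance to the query vector.
        cur.execute(
            "SELECT title FROM documents ORDER BY embedding <=> %s::vector LIMIT %s",
            (vector, k),
        )
        return [row[0] for row in cur.fetchall()]

if __name__ == "__main__":
    print(search("how do I rotate database credentials?"))
```
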
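And for the evaluation talk, here is a framework-free sketch of the loop it motivates: a small golden dataset, a metric, and a pass threshold, so that prompt or RAG changes are measured rather than guessed at. The `call_llm` function and the keyword-based metric are deliberately crude placeholders; real setups would plug in Vertex AI Evaluation, DeepEval, Promptfoo, or similar for richer metrics and guardrails.

```python
# Framework-free sketch of an LLM evaluation loop: golden dataset -> metric -> gate.
# call_llm is a placeholder for the model/RAG pipeline under test.
from dataclasses import dataclass

@dataclass
class Case:
    question: str
    must_mention: list[str]  # crude stand-in for relevancy/groundedness metrics

GOLDEN = [
    Case("What city is this event in?", ["Istanbul"]),
    Case("Which database is PostgreSQL-compatible?", ["AlloyDB"]),
]

def call_llm(question: str) -> str:
    """Placeholder: swap in your actual model or RAG pipeline under test."""
    return "The event is in Istanbul and features AlloyDB Omni."

def score(answer: str, case: Case) -> float:
    hits = sum(1 for term in case.must_mention if term.lower() in answer.lower())
    return hits / len(case.must_mention)

def run_eval(threshold: float = 0.8) -> bool:
    scores = [score(call_llm(c.question), c) for c in GOLDEN]
    mean = sum(scores) / len(scores)
    print(f"mean score = {mean:.2f} over {len(GOLDEN)} cases")
    return mean >= threshold

if __name__ == "__main__":
    assert run_eval(), "Quality gate failed: investigate before shipping the change."
```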