DevOps & Cloud

In this track, industry experts explored the methodologies and technologies that are redefining how cloud applications are built and operated, revealing the principles, practices, and latest trends of the DevOps and Cloud Computing world.

Room host

Andrea Stefani
General Manager
Appfactory
4 JUNE (08:30)
5 JUNE (08:30)
6 JUNE (08:30)
06 Jun 11:50 - 12:30
40 min
Starting with a blank IDE, I will build out a working app from scratch before the session ends. I will be using my trusted AI coding assistant to help me, and will show you how I use it to create code that I can deploy. There may be bumps along the way, and I will use the same AI coding assistant to help me fix any problems that come up. I will share the resulting code with attendees, and provide resources showing how they too can achieve the same results. I have spent over a year working with various tools, and will share essential tips to help you get started.
06 Jun 12:40 - 13:20
40 min
In this session, we will explore how Azure API Management (APIM) plays a key role in managing and optimizing AI architectures. As AI-based applications increasingly rely on services such as machine learning models and Azure OpenAI, an efficient, secure, and scalable API management layer becomes essential. We will dive into the main API Management policies, such as load balancing, throttling, and caching, showing how they improve the performance of AI workloads and help control costs. The session will also cover best practices for securing AI endpoints, applying quotas, and using monitoring tools to gain insights. By the end of the session, you will have a clear understanding of the architectural and cost-saving benefits that API Management can bring to AI solutions, ensuring robust, scalable, and economically sustainable AI services. This session is ideal for developers, architects, and cloud professionals who want to optimize AI services while keeping control over performance and costs.
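To make the throttling policy mentioned in the abstract concrete: rate limiting in API gateways is commonly built on the token-bucket idea. The following is a minimal, illustrative Python sketch of that idea, not APIM's actual implementation; the class name and parameters are hypothetical.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter, the concept behind
    gateway throttling policies. Illustrative sketch only."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity          # maximum burst size
        self.tokens = float(capacity)     # bucket starts full
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # a gateway would answer 429 Too Many Requests here

# A burst of 3 calls is allowed, then requests are refused until refill.
bucket = TokenBucket(capacity=3, refill_per_sec=1.0)
print([bucket.allow() for _ in range(5)])  # → [True, True, True, False, False]
```

In a managed gateway this logic runs per subscription key or per client IP, which is why quota and rate-limit policies pair naturally with the cost-control goals described above.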
06 Jun 14:00 - 14:40
40 min
In the rapidly evolving AI landscape, organizations and developers often encounter challenges when using cloud-based LLM services like ChatGPT, Claude, and Gemini. While these platforms offer powerful capabilities, they can present scaling cost considerations and raise important questions around data security and privacy governance. Many businesses require complete control over where their sensitive information is processed and stored as part of their compliance frameworks. Additionally, SaaS solutions are not one-size-fits-all, and there are emerging use cases where specialized, lightweight models can deliver satisfactory results for specific domains. Businesses increasingly seek models optimized for particular tasks, requiring flexibility that generic cloud services don't always efficiently provide. The combination of high costs, security governance requirements, and the need for targeted solutions has created momentum toward exploring alternative approaches. This talk will showcase the potential of local LLM usage during development as a viable alternative to cloud-based services. We'll share performance insights from locally-run models and explore how some models can perform well with limited resources in real-world conditions. The presentation will highlight the growing ecosystem of specialized models designed for specific domains or programming languages, demonstrating how these purpose-built systems can effectively address targeted applications. Through practical examples, we'll illustrate a development lifecycle that leverages local LLMs, giving developers a clear path to building, testing, and deploying AI-powered applications with full control over their data and infrastructure.
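As a taste of the local-first workflow the abstract describes, the sketch below queries an Ollama-style HTTP endpoint on localhost using only the Python standard library. The URL, model name, and prompt are assumptions about the reader's local setup, not part of the talk.

```python
import json
import urllib.request

# Assumed local endpoint: Ollama's default HTTP API on this machine.
# Assumes a model (e.g. "llama3") has already been pulled locally.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> bytes:
    # Non-streaming request body for the /api/generate endpoint.
    return json.dumps({"model": model, "prompt": prompt,
                       "stream": False}).encode("utf-8")

def ask_local_llm(model: str, prompt: str) -> str:
    # Send the prompt to the local server and return the generated text.
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running local server):
#   print(ask_local_llm("llama3", "Say hello in one word."))
```

Because the data never leaves the machine, this pattern directly addresses the compliance and data-residency concerns raised above.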
06 Jun 14:50 - 15:30
40 min
What if you could 20x your productivity? If you write software, there is a way to ship faster, better code that scales to millions of users: making strategic decisions around your architecture and design, and around whether they create technical debt, something free now that creates a cost for you in the future. In this talk, you will see how tech debt is a tool at your disposal, and how you can make the right decisions to ship 20 times faster.
06 Jun 15:40 - 16:20
40 min
Distributed cloud systems represent the ongoing evolution of cloud computing, decentralizing cloud services across multiple locations, including on-premises data centers, regional cloud zones, and edge environments. This talk will cover the architecture of decentralized and edge-based systems, data challenges, and infrastructure options. It will show how cloud providers enable managed distributed architectures via edge cloud devices, how those devices are built, and how they enable edge AI. The combination of distributed cloud architecture, edge computing, and AI not only optimizes resource utilization and network efficiency but also opens new avenues for innovation in secure, scalable, and intelligent systems.

Early Bird Expo: take advantage of the current offer and secure your spot at WMF 2026