About This Series
Apache Kafka, Apache Spark, Apache Flink, and Apache Iceberg are state-of-the-art technologies for data engineering. This series focuses on using them to build real-time, data-intensive applications.
What You Will Learn
- Essential elements of data infrastructure theory
- Hands-on end-to-end streaming data pipelines
Who This Is For
Engineers who are:
- Working in data engineering, platform engineering, or a related role
- Comfortable with the basics and ready to move to intermediate or advanced topics
- Interested in understanding system internals, not just APIs and configuration
- Looking to build production-grade knowledge in streaming and data infrastructure
What You Will Bring
- A laptop with Docker Compose installed (fully charged)
- Lots of enthusiasm to follow along and learn
Topics
Across all three technologies:
- Theory grounded in system internals: the reasoning behind architectural decisions
- Hands-on labs that participants follow along with in real time
- Use cases from fintech, logistics, e-commerce, and analytics
Schedule
| Month | Dates | Session | Timing |
| --- | --- | --- | --- |
| May | 16 May & 23 May | 1/4 & 2/4 | Evening, 5-7 pm |
| June | 6 June & 13 June | 3/4 & 4/4 | Evening, 5-7 pm |
All sessions are in-person at 43 Workspace & Events, Bengaluru, on weekend evenings, 5-7 pm. Free to attend.
Facilitators
- Pavan Keshavamurthy
Co-founder & CTO, Platformatory
Pavan has 15+ years of experience in product and platform engineering, technology architecture, and building large-scale distributed systems. He works across Apache Kafka, Apache Flink, Kong, and cloud-native infrastructure.
- Avinash Upadhyaya K R
Platform Engineer, Platformatory
Avinash specialises in platform engineering with a focus on Apache Kafka, Kubernetes, and Kong. Multi-cloud certified across Azure, Google Cloud, Linux Foundation, and Kong.