

7-9 OCT. 2026
BERLIN


( SPEAKER )
Vivien Dollinger
ObjectBox - we empower the edge
( SESSION )
2025: On‑Device AI Goes Mainstream on Android
The “small” LLM revolution has accelerated: today’s Gemma 3 4B, quantized to 4‑bit, needs just ≈3.2 GB of RAM to run entirely offline on phones. Combined with an on‑device vector database (aka a semantic index), powerful local AI apps are now easy to realize. No cloud required. This is also called “Edge AI.”
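The ≈3.2 GB figure can be sanity-checked with back-of-envelope arithmetic. The numbers below are illustrative assumptions, not official figures: roughly 4.5 effective bits per weight (4-bit weights plus quantization scale metadata) and an assumed ~0.9 GB for KV cache, activations, and runtime buffers at a modest context length.

```python
# Back-of-envelope RAM estimate for a 4-bit quantized 4B-parameter model.
# All constants here are illustrative assumptions, not measured values.
params = 4_000_000_000
bits_per_weight = 4.5  # 4-bit weights + per-block scale metadata (assumed)
weights_gb = params * bits_per_weight / 8 / 1e9  # quantized weights: 2.25 GB
overhead_gb = 0.9  # assumed KV cache + activations + runtime buffers
total_gb = weights_gb + overhead_gb  # roughly 3.2 GB, matching the claim
print(f"~{total_gb:.1f} GB")
```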
In this talk, we’ll have a look at:
* The current Edge AI market (numbers, stats, trends, examples)
* Small language models (SLMs) and state‑of‑the‑art benchmarks
* Semantic indices/vector databases (definition, landscape, on‑device options)
* Hands‑on examples: what’s possible in Edge AI with vibe coding, the general tech stack, and tooling options (e.g. the Google AI Edge RAG SDK)
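The SLM-plus-semantic-index stack above can be sketched in miniature. This is a toy, assumption-laden illustration: the bag-of-words "embedding" stands in for a real on-device embedding model, and the in-memory index stands in for a real on-device vector database; a production app would feed the retrieved context into the local LLM's prompt.

```python
# Minimal sketch of on-device RAG retrieval: embed documents into a local
# "semantic index" and rank them against a query by cosine similarity.
# Everything here (names, the toy embedding) is illustrative only.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words counts (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class LocalVectorIndex:
    """In-memory semantic index; a real app would use an on-device vector DB."""
    def __init__(self):
        self.entries = []  # list of (embedding, original text) pairs

    def add(self, text: str) -> None:
        self.entries.append((embed(text), text))

    def query(self, question: str, k: int = 1) -> list:
        q = embed(question)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[0]), reverse=True)
        return [text for _, text in ranked[:k]]

index = LocalVectorIndex()
index.add("The device syncs notes when a network is available")
index.add("Battery saver mode limits background model inference")
context = index.query("how does battery saver affect inference", k=1)
print(context[0])
```

The retrieval step runs entirely on-device, which is what makes the privacy and offline properties listed below possible: no document or query ever leaves the phone.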
Offline‑first/on‑device matters - again!
--> Privacy & Trust: All inference happens locally - no data sharing
--> Latency & Cost: Instant responses, zero cloud costs
--> Resilience: Works offline - ideal for mission‑critical mobile use cases and remote or low‑connectivity environments