Abstract
Artificial intelligence is reshaping the startup journey end to end, from ideation and product development to scaling, governance, and exit strategies. For founders and investors alike, AI-native tools promise unprecedented speed, automation, and capital efficiency. But this same velocity introduces new risks: opaque decision-making, fragile dependencies, security gaps, and ethical blind spots that can surface only after scale.
This session will examine what it means to build resilient, responsible companies in the AI era. As code increasingly writes code and experimentation outpaces regulation, leaders will share how they balance innovation with governance, manage AI-driven risk, and make investment decisions in a landscape where traditional due diligence models are rapidly evolving.

The session will cover:
- Designing AI-native products and workflows without embedding long-term security, bias, or compliance debt
- Managing the new risk categories introduced by automation, third-party models, and rapid iteration cycles
- How investors evaluate AI startups on governance, resilience, and ethical readiness