Welcome to HackerNoon’s Building with AI interview series, where we learn how developers around the world are adopting, shaping, and experimenting with AI in their local ecosystems.


Today, we’re speaking with Val Garnaga, Staff ML Engineer and Lead of the ML Platform at Suki, working at the forefront of AI in Silicon Valley.

1. Tell us the story behind your journey into AI — what first drew you to it, and what inspired the project you’re currently building?

My journey into AI began during my PhD research in 2000, where I developed a hybrid statistical and neural network forecasting model to predict snow avalanches. The work combined statistical ARIMA models with recurrent neural networks that selected and optimized parameters autonomously. This research introduced a new method of partially supervised neural network training and categorical parameter encoding, which laid the foundation for my long-term interest in combining classical statistical modeling with adaptive AI systems.
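The general shape of such a hybrid model can be sketched in a few lines. The code below is a minimal illustration, not the original research model: a simple AR(2) least-squares fit stands in for a full ARIMA model, and a tiny feedforward network (rather than a recurrent one) learns the nonlinear structure left in the residuals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic series: AR(2) dynamics plus a mild nonlinearity the linear model misses.
n = 500
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t-1] - 0.2 * y[t-2] + 0.3 * np.sin(y[t-1]) + rng.normal(0, 0.1)

# Stage 1: AR(2) fit by least squares (a stand-in for a full ARIMA model).
X = np.column_stack([y[1:-1], y[:-2]])   # lags 1 and 2
target = y[2:]
coef, *_ = np.linalg.lstsq(X, target, rcond=None)
linear_pred = X @ coef
residuals = target - linear_pred

# Stage 2: a one-hidden-layer net learns the nonlinear residual structure.
H = 8
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = np.zeros((H, 1));           b2 = np.zeros(1)   # zero init: hybrid starts equal to linear
lr = 0.01
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)
    out = (h @ W2 + b2).ravel()
    err = out - residuals
    # Backprop through both layers (mean-squared-error loss).
    g_out = 2 * err[:, None] / len(err)
    g_h = g_out @ W2.T * (1 - h**2)
    W2 -= lr * h.T @ g_out; b2 -= lr * g_out.sum(0)
    W1 -= lr * X.T @ g_h;   b1 -= lr * g_h.sum(0)

hybrid_pred = linear_pred + (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
mse_linear = np.mean((target - linear_pred) ** 2)
mse_hybrid = np.mean((target - hybrid_pred) ** 2)
print(f"linear MSE: {mse_linear:.4f}  hybrid MSE: {mse_hybrid:.4f}")
```

The division of labor is the point: the statistical model captures the linear dynamics cheaply and interpretably, while the network only has to model what the statistical model cannot.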

Today, I lead the Machine Learning Platform at Suki AI, focusing on large-scale medical AI systems that apply deep learning and natural language understanding to assist clinicians. In parallel, I have been exploring Quantum Neural Networks (QNNs), where quantum circuits are used to detect subtle biomedical patterns. In a recent project, I implemented a quantum-classical hybrid model that identifies early-stage Parkinson’s disease from voice data with 99% diagnostic accuracy, outperforming classical baselines such as Random Forests and standard neural networks. This research marks a step toward integrating computation, biology, and quantum theory to push the boundaries of medical AI.

2. What are some of the biggest challenges or limitations you’ve faced while building with AI in your local ecosystem (and how are you working around them)?

In the Bay Area, the greatest challenges include balancing scalability, cost, and precision while ensuring that AI systems remain secure and interpretable. Healthcare AI adds further complexity, as models must maintain strict compliance and reliability under real-world variability. Transitioning from GPU-based systems to specialized ASIC-based AI accelerators such as TPUs required rethinking training orchestration and large-scale optimization.

To solve these challenges, I designed modular and fault-tolerant pipelines that automatically monitor data quality, retrain models as needed, and optimize execution. In parallel research, I am exploring quantum-inspired hybrid architectures for domains where classical learning plateaus, leveraging the expressive power and robustness of quantum layers for modeling complex biological and linguistic data.
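The monitor-then-retrain pattern can be sketched as a simple quality gate. Everything below is hypothetical and simplified (the field names, thresholds, and `retrain` callback are illustrative stand-ins, not Suki’s actual pipeline): bad batches are quarantined, drifted batches trigger retraining, and healthy batches pass through.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class QualityReport:
    null_rate: float     # fraction of missing values in the batch
    drift_score: float   # distance of the batch mean from a reference mean

def check_quality(batch: Sequence[dict], reference_mean: float) -> QualityReport:
    """Compute simple data-quality signals for one incoming batch."""
    values = [r.get("value") for r in batch]
    nulls = sum(v is None for v in values)
    present = [v for v in values if v is not None]
    mean = sum(present) / len(present) if present else reference_mean
    return QualityReport(null_rate=nulls / len(values),
                         drift_score=abs(mean - reference_mean))

def maybe_retrain(report: QualityReport,
                  retrain: Callable[[], None],
                  null_threshold: float = 0.05,
                  drift_threshold: float = 0.5) -> str:
    """Gate: quarantine bad data, retrain on drift, otherwise proceed as usual."""
    if report.null_rate > null_threshold:
        return "quarantine"          # too many missing values: don't train on this
    if report.drift_score > drift_threshold:
        retrain()                    # data moved: kick off a retraining job
        return "retrained"
    return "ok"

# Example: a drifted batch (mean ~2.1 vs. reference 1.0) triggers the stubbed retrain.
batch = [{"value": 2.1}, {"value": 2.3}, {"value": 1.9}]
events = []
status = maybe_retrain(check_quality(batch, reference_mean=1.0),
                       retrain=lambda: events.append("retrain"))
print(status, events)
```

In a production system each gate would be a pipeline stage with its own alerting, but the control flow (measure, compare against thresholds, act) is the same.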

3. How would you describe the AI ecosystem where you live — in terms of talent, community, education, or investment?

From my perspective, what makes the Bay Area unique is how quickly ideas move from research papers to prototypes and startups. It brings together scientists, engineers, and entrepreneurs who continuously push boundaries and share research across academia and industry.

The educational foundations from Stanford and Berkeley feed a constant flow of innovation, while community programs such as Google’s AI design partner initiatives and OpenAI research collaborations encourage practical experimentation.

Investors are increasingly focused on efficiency and specialization, funding solutions such as domain-tuned healthcare LLMs and cost-optimized inference systems. This collaborative environment continuously challenges and inspires my own work, especially in translating research into production-grade healthcare AI systems.

4. What tools, frameworks, or models have been most useful in your work — and why do they fit your approach?

My core ecosystem includes TensorFlow, PyTorch, and Google Vertex AI for orchestration, paired with AI accelerators of differing architectures, from GPUs to TPUs. For speech and language tasks, I use OpenAI Whisper and Google Gemini to build scalable, multimodal pipelines. These tools align with my approach of rapidly experimenting while maintaining reproducible, production-grade ML pipelines.

Beyond classical ML frameworks, I explore quantum-enhanced architectures aimed at solving problems that require higher-order representations, such as biomedical pattern detection. I integrate hybrid models that combine classical neural controllers with quantum circuits capable of learning entangled representations. The controller network learns to prepare data for quantum encoding, while the quantum layer extracts high-dimensional relationships that classical models often miss. Together, these hybrid systems extend deep learning’s expressive power by integrating quantum principles that improve generalization and sensitivity to subtle data patterns.
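The controller-plus-quantum-layer idea can be sketched with a plain NumPy state-vector simulation. This is a toy two-qubit circuit with fixed illustrative weights, not the production model: a classical linear "controller" maps raw features to rotation angles, an RY layer angle-encodes them, a CNOT entangles the qubits, and the readout is the Z expectation on the first qubit.

```python
import numpy as np

rng = np.random.default_rng(1)

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],   # control = qubit 0, target = qubit 1
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def quantum_layer(angles):
    """Angle-encode two features as RY rotations, entangle, read out <Z> on qubit 0."""
    state = np.zeros(4); state[0] = 1.0                      # start in |00>
    state = np.kron(ry(angles[0]), ry(angles[1])) @ state    # data encoding
    state = CNOT @ state                                     # entangling layer
    z0 = np.kron(np.diag([1.0, -1.0]), I2)                   # Z observable on qubit 0
    return state @ z0 @ state                                # expectation value

# Classical "controller": a linear map from raw features to encoding angles.
# (Weights are fixed here for illustration; in training they would be optimized.)
W = rng.normal(0, 1.0, (2, 2))

def hybrid_forward(x):
    angles = W @ x            # controller prepares the data for quantum encoding
    return quantum_layer(angles)

x = np.array([0.5, -0.3])
out = hybrid_forward(x)
print(f"<Z0> = {out:.4f}")    # an expectation value, so it lies in [-1, 1]
```

Real systems replace the controller with a trained network, use more qubits and circuit layers, and run on a quantum simulator or hardware backend, but the interface is the same: classical features in, expectation values out, gradients flowing through both halves.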

5. Looking ahead, what excites you most about the future of AI — and what advice would you give to developers just starting their journey?

The future of AI lies in cross-domain fusion, where quantum computation, symbolic reasoning, and generative intelligence converge. I am particularly excited about quantum-enhanced learning, an area I explore through hybrid quantum neural networks that can manage uncertainty, noise, and complex correlations in biomedical data. These systems could enable new diagnostic tools, adaptive assistants, and models that reason with both physical and informational constraints.

From my experience leading large-scale ML platforms, I have found that developers who start by mastering the fundamentals of mathematics, statistics, and data structures build a much stronger intuition before using high-level frameworks. Focus on understanding how models learn, how data quality shapes outcomes, and how reproducibility builds trust. The strongest AI professionals are those who combine analytical rigor with creativity, always ready to adapt as technology evolves.