I am Kevin, a software engineer and AI researcher who works across mechanical engineering, software, and artificial intelligence. I do not start with a technology. I start with a problem that matters, then figure out what it takes to fix it.
Every project I take on starts the same way. I see something broken, something that is hurting people, and I cannot look away. The tools change. The disciplines change. The reason never does.
I watched OxyContin tear through communities. People I knew, families around me. When I got the chance to do research at Michigan, I chose to work on electrochemical THC detection not because biosensing was trendy, but because I believed in building the scientific foundation for cannabis as a legitimate, testable alternative to opioids. If law enforcement and policymakers could detect and regulate it reliably, it could be legalized responsibly. That was the point. Give people a safer option and give society the tools to manage it.
Social media promised to connect us and instead optimized for addiction, polarization, and anxiety. I see AI heading toward the same crossroads. Immense potential, but only if we get the deployment right. That is why I moved into AI safety and certification at NASA. Not to slow AI down, but to build the frameworks that make sure when we put AI on a spacecraft carrying humans, or into any safety-critical system, it actually earns the trust we are placing in it. I want to be part of making sure this technology is a net contribution to society, not a repeat of what social media did to us.
At NASA, I saw teams wrestling with SharePoint workflows that were never designed for the complexity of human spaceflight operations. Data was scattered, processes were manual, and people were spending hours on formatting instead of engineering. So I built Cortex. Not because I love enterprise software, but because the people doing the hardest work in aerospace deserved better tools. I architected a flexible platform that lets non-technical users define their own data structures and automated away the busywork so engineers could focus on the mission.
I have worked in metallurgical labs, automotive factories, university research labs, and NASA mission control rooms. That range is not random. It is how I operate. The best solutions come from pulling the right knowledge from different worlds.
Most technical problems are actually people problems, process problems, or incentive problems wearing a technical disguise. I dig past the surface to understand what is actually broken before I write a line of code.
I have led lab teams of 20 at Ford, coordinated across NASA programs, and collaborated with cross-functional supplier teams. I know that the person who solves the problem is often not the person who found it. My job is to connect the right people and give them the right tools.
Mechanical engineering taught me how physical systems fail. Software engineering taught me how to build systems that scale. AI research taught me what these systems can, and cannot, be trusted to do. I bring all three to every problem.
I did not take a straight line to get here, and I think that is the point. I started in metallurgical labs at Hyundai-Kia, GM, and Quaker Chemicals, learning how materials fail, how quality works at scale, and how to run a lab. That hands-on foundation changed how I think about engineering.
A mechanical engineering degree at Michigan gave me the theory. Research on THC detection gave me the first taste of work that actually mattered to me. Science driven by wanting to help people, not just publish papers.
Ford taught me how to deploy AI in production. Real models, real stakes, real people depending on the output. And now at NASA, I get to work on the hardest version of that question: how do you trust AI with human lives?
I am finishing my M.S. in AI at Michigan because I believe in understanding the tools deeply enough to know their limits. And honestly, because I still want to be an astronaut someday.