I find problems worth solving and build what's needed to solve them. Systems engineer and AI researcher, currently building Safety Critical Labs and working on human spaceflight systems at NASA JSC.
Traditional software assurance assumes deterministic, logic-based systems with stable behavior after deployment. AI doesn't work that way. Safety Critical Labs certifies AI and ML systems against the AI Requirements Framework: requirements and verification methods designed specifically for data-driven, probabilistic systems, covering bias, drift, hallucination, explainability, and human-AI interaction. Applicable across aerospace, aviation, automotive, and medical domains.
Designing and building the EHP Program Hub Dashboard, cataloging 100+ tools across 9 offices to consolidate tool visibility for the Extravehicular Activity and Human Surface Mobility Program, with planned extension to CLDP, ISS, and other programs. Delivered as a single-file SPA in vanilla HTML/CSS/JS to satisfy deliberate, audit-friendly deployment constraints, coordinating with IT staff on structured data collection across offices.
Architected and built Cortex, an enterprise data platform with integrated LLM capabilities (chat-based CRUD, inline suggestions, and data extraction) for Intuitive Machines, on Django with an entity-attribute-value (EAV) schema and a MySQL backend. Deployed internally at Barrios for dev/test validation and delivered to customer-readiness stage. Continued part-time development of the CLDP AI requirements framework during this period, maintaining continuity of the safety-critical AI certification effort.
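The EAV pattern behind Cortex stores data as entity/attribute/value triples rather than one column per field, so new attributes can be added without schema migrations. A minimal sketch of the idea in SQLite (table and helper names are hypothetical; the actual platform uses Django models over MySQL):

```python
import sqlite3

# Minimal EAV (entity-attribute-value) sketch: one generic value table
# instead of a dedicated column per field.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE entity (id INTEGER PRIMARY KEY, kind TEXT);
CREATE TABLE attribute (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE eav_value (
    entity_id INTEGER REFERENCES entity(id),
    attribute_id INTEGER REFERENCES attribute(id),
    value TEXT,
    PRIMARY KEY (entity_id, attribute_id)
);
""")

def set_attr(entity_id, name, value):
    # Attributes are created on first use -- no ALTER TABLE needed.
    conn.execute("INSERT OR IGNORE INTO attribute(name) VALUES (?)", (name,))
    attr_id = conn.execute(
        "SELECT id FROM attribute WHERE name = ?", (name,)
    ).fetchone()[0]
    conn.execute(
        "INSERT OR REPLACE INTO eav_value VALUES (?, ?, ?)",
        (entity_id, attr_id, value),
    )

conn.execute("INSERT INTO entity(id, kind) VALUES (1, 'part')")
set_attr(1, "serial", "SN-001")
set_attr(1, "mass_kg", "3.2")

row = conn.execute("""
    SELECT a.name, v.value
    FROM eav_value v JOIN attribute a ON a.id = v.attribute_id
    WHERE v.entity_id = 1 ORDER BY a.name
""").fetchall()
print(row)
```

The tradeoff is queries like the join above in exchange for a schema that absorbs arbitrary new fields, which is what makes the pattern a fit for LLM-driven data extraction where attribute sets aren't known up front.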
Served as NASA SE&I reviewer across four CLDP commercial partners, conducting ~40 reviews in 9 months, including CDPO-delegated lead review authority for Blue Origin, and generating RFAs and RIDs reported directly to the Program Office that drove partner-implemented design changes. Led requirements traceability for CLDP, tracing 300+ requirements to CLDP-REQ-1130 and delivering a unified baseline adopted by the Program Office, NASA, and all three partners. Authored 70 requirements for a new AI requirements framework with SME consultation from the University of Michigan, FAA, ISO, and NASA, co-founding the AI-in-Flight working group. Earlier: evaluated Flight Test Objectives across 18 Orion subsystems toward the Artemis I Flight Analysis Report; traced SSP 51074 CERD requirements for Axiom Space docking integration with ISS through SRR.
Launched and led a 20-person eMotor materials laboratory (12 hourly, 8 salary) on 24/7 shift coverage, testing stators and rotors at hundreds of parts per week for F-150 Lightning and Maverick Hybrid programs. Owned full lab buildup (equipment acquisition, hiring, procedure development, calibration) through ISO certification and production readiness. Self-initiated a PyTorch computer vision pipeline (regression with Kalman-filtered segmentation) for varnish surface area measurement on eMotor stators, achieving 99% accuracy and saving 1+ hour per quality check. Conducted failure analysis and supplier investigations driving DFMEA/PFMEA iterations.
Developed novel electrochemical detection method for Δ9-THC (1–20 μM) achieving 0.13 μM limit of detection with R² = 0.995, third-lowest detection limit among comparable SPCE devices. Published peer-reviewed findings demonstrating viability for in-field applications.
Metallurgical analysis, material validation, and quality testing across automotive OEMs and suppliers. Delivered QA protocol training at Quaker Chemical (2012–2015). Established the metallurgical lab supporting the 2017 SGE crankshaft line at GM Flint Engine (2015–2016). Led supplier corrective actions, material validation, and quality testing at Hyundai-Kia (2017).
Open standard for certifying AI in safety-critical systems. Ten requirements (AI-1 through AI-10) covering operational and data foundations, bias, ML test coverage, continuous validation, hallucination, out-of-distribution detection, adversarial robustness, explainability, human-AI teaming, and privacy. Gap analysis confirmed no existing standard adequately addresses AI-specific failure modes in human spaceflight.
Adversarial evaluation pipeline studying model vulnerability to membership inference attacks: confidence-based, label-only, and shadow model variants. Benchmarked three defense strategies (DP-SGD via Opacus, Model Confidence Exclusion, and a fusion defense), sweeping ε values to empirically map privacy vs. utility tradeoffs.
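The confidence-based variant of membership inference exploits the gap between a model's confidence on training data and on unseen data. A toy threshold attack on synthetic confidences (the distributions and threshold here are illustrative assumptions, not the project's measured values):

```python
import numpy as np

# Toy confidence-thresholding membership inference attack: overfit models
# assign higher confidence to training members than to non-members, so a
# simple threshold on confidence separates the two populations.
rng = np.random.default_rng(0)
member_conf = rng.beta(8, 2, 1000)     # confident on training data
nonmember_conf = rng.beta(4, 4, 1000)  # less confident on unseen data

threshold = 0.7
scores = np.concatenate([member_conf, nonmember_conf])
labels = np.concatenate([np.ones(1000), np.zeros(1000)])  # 1 = member
pred = (scores > threshold).astype(int)
attack_acc = (pred == labels).mean()
print(f"attack accuracy: {attack_acc:.2f}")
```

Defenses like DP-SGD work by shrinking exactly this confidence gap; sweeping ε maps how much utility must be traded to push attack accuracy back toward the 0.5 coin-flip baseline.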
Ensemble classifier combining CLIP zero-shot inference with FAISS similarity-weighted voting across 6,000+ micrographs and 16 steel microstructural phases. Productionized with FastAPI and Docker on Hugging Face Spaces.
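Similarity-weighted voting means each retrieved neighbor's vote counts in proportion to its similarity to the query, rather than one vote per neighbor. A brute-force sketch with random embeddings standing in for CLIP features (a real pipeline would query a FAISS index instead of a dense matrix product; all names here are illustrative):

```python
import numpy as np

# Similarity-weighted k-NN voting over a labeled embedding gallery.
rng = np.random.default_rng(1)
gallery = rng.normal(size=(100, 32))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)  # unit-normalize
gallery_labels = rng.integers(0, 4, size=100)  # 4 hypothetical phase classes

def classify(query, k=5, n_classes=4):
    q = query / np.linalg.norm(query)
    sims = gallery @ q                 # cosine similarity to every gallery item
    top = np.argsort(sims)[-k:]        # indices of the k nearest neighbors
    votes = np.zeros(n_classes)
    for i in top:
        votes[gallery_labels[i]] += sims[i]  # vote weighted by similarity
    return int(votes.argmax())

query = gallery[7] + 0.05 * rng.normal(size=32)  # perturbed gallery item
print(classify(query))
```

Fusing this retrieval vote with CLIP's zero-shot logits lets the two signals cover for each other: retrieval anchors predictions to known micrographs, while zero-shot inference handles queries far from the gallery.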
CNN-based binary classifier for infrastructure damage assessment in post-event aerial and ground imagery. Empirical evaluation of classification confidence thresholds for first-responder deployment readiness.
Working on machines that behave human pulled me into the philosophy of being one. I see us as spiritual creatures with intellect, manifest physically. Conscience is the first-person experience of that. Love is the quiet recognition of that when it's shared. Faith in something of value sets the roots, and belief and morals branch from there.
I train every day, produce music, and still want to be an astronaut. I don't have social media. I'd rather be known for what I build and how I treat people than for a curated presence.