Alex Lin Wang 王帅 Resume

Education

Stanford University

Expected June 2027

Graduate Technical Coursework:

  • Programming Languages (CS 242)
  • Deep Learning for NLP (CS 224N)
  • Machine Learning (CS 229)
  • Reinforcement Learning (CS 234)
  • ML with Graphs (CS 224W)
  • Fundamentals of Blockchain Infrastructure (EE 374)
  • Parallel Computing with CUDA/OpenMP/MPI (CME 213)

Undergraduate Technical Coursework:

  • Parallel Computing (CS 149)
  • Groups and Rings (MATH 120)
  • Real Analysis (MATH 171)
  • Linear Algebra and Matrix Theory (MATH 113)
  • Operating Systems (CS 111)
  • Algorithms (CS 161)

Work Experience

Software Engineering Intern at Stealth

Oct 2025 - Present Mountain View, CA

  • Working on stealth projects.

Researcher at Stanford NLP

Sep 2025 - Present Stanford, CA

  • Advised by Chris Potts and Zhengxuan Wu. Developed BoundBench and formalized the PRBO objective to measure and lower-bound how steering techniques alter LLM behavior; combined concept-incorporation and distributional-shift metrics with IWAE-style, logit-based estimators for fast probability estimates without LLM judges.
  • Designed a benchmarking plan across common steering methods (Rank-1 ReFT, activation patching, steering vectors, DiffMean, probes, SAEs, LoRA/FT), with criteria that reward eliciting the target behavior while preserving base-model propensities and that link scores to downstream tasks.
  • Reference: BoundBench Presentation

Software Engineering Intern at Meta

Jun 2025 - Sep 2025 Menlo Park, CA

  • Disaster Recovery team. Built an automated Python reporting system in a large-scale Linux environment that quantitatively analyzed disaster-recovery test outcomes using time-series analysis and pattern recognition, saving dozens of engineering hours per week.
  • Architected a data-analysis pipeline to evaluate operational risk, implementing modules for time-series analysis of execution latency, error-rate tracking with ML classification, and quantitative assessment of system-failure events to inform data-driven engineering priorities.
  • Deployed the analytics engine to production, building a full CI/CD pipeline with automated model validation and scheduled job execution to ensure reliable, periodic delivery of quantitative risk insights to downstream systems.

AI/ML Engineer Intern at Biostate AI

Nov 2024 - Mar 2025 Palo Alto, CA

  • Developed an end-to-end ML pipeline using bulk RNA-seq expression data from proprietary and public datasets to train a 100M+ parameter transformer, achieving state-of-the-art performance in autoregressive generation of "future" RNA-seq profiles with biologically viable expression patterns.
  • Established comprehensive internal benchmarking protocols and implemented robust data tagging to prevent contamination during large-model pretraining, while curating specialized datasets for performance testing.
  • Built and deployed an automated bioinformatics platform that integrated omics data-analysis pipelines with LLMs fine-tuned via DPO, optimized for generating scientific abstracts and publication-quality figures, streamlining research workflows while maintaining 95% expert-rated accuracy.

Research Assistant at Stanford Autonomous Systems Laboratory (ASL)

Aug 2024 - Mar 2025 Stanford, CA

  • Student researcher under Professor Marco Pavone (Director of Nvidia's Autonomous Systems Division), working on trajectory optimization for autonomous systems and applying on-board VLMs/LLMs/CV for anomaly detection and response.
  • Engineered a unified software application integrating Gazebo/Robot Operating System 2 (ROS 2) simulation with PX4, using an Nvidia Jetson Orin Nano and a motion-capture pub-sub model for real-time autonomous-system testing.
  • Developed trajectory-optimization and obstacle-avoidance algorithms for kinodynamic motion planning in an indoor environment as part of a 3-person team. Project demo can be found here.

Data Science Intern at Air Force Research Laboratory (AFRL)

Jun 2023 - Sep 2023 Dayton, OH

  • Designed and implemented a novel group-theoretic MCMC algorithm that significantly improved sampling efficiency for systems with discrete symmetries, demonstrated through application to dielectric polymers.
  • Created clustering algorithms that achieved up to 50% faster convergence compared to standard and umbrella sampling methods by leveraging symmetry properties in potential energy landscapes.
  • Leveraged UMAP/t-SNE for feature extraction on MCMC data and built a PyTorch autoencoder to flag anomalous polymer characteristics from reconstruction error, boosting detection accuracy by 15%. Read more here.