Abbreviated CV (complete CV here)
Hillsboro, Oregon USA
University of California, San Diego, La Jolla, CA
Ph.D., M.S., Neurosciences with Computational specialization
Thesis committee: John T. Serences (advisor), Ed Vul, Timothy Gentner, Ed Callaway, Douglas Nitz
2013 - 2019
Swarthmore College, Swarthmore, PA
B.A. with High Honors, Biology and Cognitive Science double major
Neuroscience research: Kathleen K. Siwicki; Psychology research: Frank Durgin
2007 - 2011
The Naturalistic Free Recall Dataset: four stories, hundreds of participants, and high-fidelity transcriptions. Raccah, O., Chen, P., Gureckis, T., Poeppel, D., Vo, V., PsyArXiv, 2024 [link]
A unifying computational account of temporal context effects in language across the human cortex. Vo, V.A.*, Jain, S.*, Beckage, N., Chien, H.Y.S., Obinwa, C., Huth, A.G., bioRxiv, 2023 [link]
Domain-specific code language models: Unraveling the potential for HPC codes and tasks. Kadosh, T., Hasabnis, N., Vo, V.A., Schneider, N., Krien, N., Capota, M., Wasay, A., Tamir, G., Willke, T., Ahmed, N., Pinter, Y., Mattson, T., Oren, G., arXiv, 2023 [link]
Scope is all you need: Transforming LLMs for HPC code. Kadosh, T., Hasabnis, N., Vo, V.A., Schneider, N., Krien, N., Capota, M., Wasay, A., Tamir, G., Willke, T., Ahmed, N., Pinter, Y., Mattson, T., Oren, G., arXiv, 2023 [link]
MPIrigen: MPI Code Generation through Domain-Specific Language Models. Schneider, N., Hasabnis, N., Vo, V.A., Kadosh, T., Krien, N., Capotă, M., Tamir, G., Willke, T., Ahmed, N., Pinter, Y., Mattson, T., Oren, G., AI4Sys Workshop, High Performance Parallel and Distributed Computing (HPDC), 2024 [link]
OMPGPT: A Generative Pre-trained Transformer Model for OpenMP. Chen, L., Bhattacharjee, A., Ahmed, N., Hasabnis, N., Oren, G., Vo, V.A., Jannesari, A., International European Conference on Parallel and Distributed Computing (Euro-Par), 2024 [link]
Memory-Augmented Graph Neural Networks: A Brain-Inspired Review. Ma, G., Vo, V.A., Willke, T.L., Ahmed, N.A., IEEE Transactions on Artificial Intelligence, 2023 [link]
Brain encoding models based on multimodal transformers can transfer across language and vision. Tang, J., Du, M., Vo, V.A., Lal, V., Huth, A.G., Neural Information Processing Systems (NeurIPS), 2023 [link]
Augmenting recurrent graph neural networks with a cache. Ma, G., Vo, V.A., Willke, T.L., Ahmed, N.A., Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2023 [link]
Computational language modeling and the promise of in silico experimentation. Jain, S., Vo, V.A., Wehbe, L., Huth, A.G., Neurobiology of Language, 2023 [link]
Cache-memory gated graph neural networks. Ma, G., Vo, V.A., Ahmed, N., Willke, T.L., MemARI Workshop, Neural Information Processing Systems (NeurIPS), 2022 [link]
Memory in humans and deep language models: Linking hypotheses for model augmentation. Raccah, O., Chen, P., Willke, T., Poeppel, D., Vo, V.A., MemARI Workshop, Neural Information Processing Systems (NeurIPS), 2022 [link]
Shared representational formats for information maintained in working memory and information retrieved from long-term memory. Vo, V.A., Sutterer, D.W., Foster, J.J., Sprague, T.C., Awh, E., Serences, J.T., Cerebral Cortex, 2022 [link]
Low-Dimensional Structure in the Space of Language Representations is Reflected in Brain Responses. Antonello, R., Turek, J.S., Vo, V.A., Huth, A., Neural Information Processing Systems (NeurIPS), 2021 [link]
BrainIAK: The Brain Imaging Analysis Kit. Kumar, M., Anderson, M., Antony, J.W., Baldassano, C., Brooks, P., Cai, M., Chen, P-H.C., Ellis, C., Henselman-Petrusek, G., Huberdeau, D., Hutchinson, J.B., Li, Y.P., Lu, Q., Manning, J., Mennen, A., Nastase, S., Richard, H., Shapiro, A.C., Schuck, N., Shvartsman, M., Sundaram, N., Suo, D., Turek, J.S., Vo, V., Wallace, G., Wang, Y., Zhang, H., Zhu, X., Capotă, M., Cohen, J., Hasson, U., Li, K., Ramadge, P.J., Turk-Browne, N., Willke, T., Norman, K.A., Aperture, 2021 [link]
Long short-term memory with slower information decay. Chien, H-Y.S., Beckage, N.M., Vo, V.A., Turek, J.S., Honey, C., Willke, T.L., LatinX in AI workshop, International Conference on Learning Representations (ICLR), 2021 [link]
Multi-timescale representation learning in LSTM language models. Mahto, S., Vo, V.A., Turek, J.S., Huth, A.G., International Conference on Learning Representations (ICLR), 2021 [link]
Interpretable multi-timescale models for predicting fMRI responses to continuous natural speech. Jain, S., Vo, V., Mahto, S., LeBel, A., Turek, J., Huth, A., Neural Information Processing Systems (NeurIPS), 2020 [link]
Approximating stacked and bidirectional recurrent architectures with the delayed recurrent neural network. Turek, J., Jain, S., Vo, V., Capotă, M., Huth, A., Willke, T., International Conference on Machine Learning (ICML), 2020 [link]
Value-driven attentional capture enhances distractor representations in early visual cortex. Itthipuripat, S.I.*, Vo, V.A.*, Sprague, T.C., Serences, J.T., PLOS Biology, 2019 [link]
Multivariate analysis of BOLD activation patterns recovers graded depth representations in human visual and parietal cortex. Henderson, M.H.*, Vo, V.A.*, Chunharas, C., Sprague, T.C., Serences, J.T., eNeuro, 2019 [link]
Inverted encoding models assay population-level stimulus representations, not single-unit neural tuning. Sprague, T.C.*, Adam, K.C.S.*, Foster, J.J.*, Rahmati, M.*, Sutterer, D.W.*, Vo, V.A.*, eNeuro, 2018 [link]
Dissociable signatures of visual salience and behavioral relevance across attentional priority maps in human cortex. Sprague, T.C., Itthipuripat, S., Vo, V.A., and Serences, J.T., Journal of Neurophysiology, 2018 [link]
Spatial tuning shifts increase the discriminability and fidelity of population codes in visual cortex. Vo, V.A., Sprague, T.C., and Serences, J.T., Journal of Neuroscience, 2017 [link]
Young children bet on their numerical skills: Metacognition in the numerical domain. Vo, V.A., Li, R., Kornell, N., Pouget, A., Cantlon, J.F., Psychological Science, 2014 [link]
Workshop program committee, ICLR "Representational Alignment" (2024)
Workshop co-organizer, NeurIPS "Memory in Real and Artificial Intelligence (MemARI)" (2022)
Workshop program committee, NeurIPS "Gaze Meets ML" (2022)
Project mentor, NeuroMatch Academy - computational neuroscience (2021)
Workshop co-organizer, ICLR "How can findings about the brain improve AI systems?" (2021)
Workshop committee, NeurIPS "Context and compositionality in biological and artificial systems" (2019)
Ad hoc reviewer, NeurIPS, ICLR, ICML, PLOS Computational Biology, Journal of Cognitive Neuroscience, NeuroImage, PNAS
Guest lecturer, Fundamentals in Statistics and Computation for Neuroscientists (graduate), Data Analysis in MATLAB (graduate), Sensation & Perception (undergraduate) (2015-2016)
Teaching assistant, Computational neuroscience workshops/labs for Neurosciences Graduate Program boot camp, Data Analysis in MATLAB (2014-2015)