Vy Ai Vo, Ph.D.
AI/ML Research Scientist

About Me

I am a research scientist in the Brain-Inspired Computing Lab, part of Emergent AI research at Intel Labs.

I currently work on deep neural networks, with a focus on sequential data, natural language processing (NLP), and memory models. You can find links to my publications on the CV page.

Previously, as a cognitive computational neuroscientist, I studied the human visual system, focusing on tasks with attention and memory demands.

Publication Updates
  • 11/2022: NeurIPS MemARI workshop, accepted. Two short papers accepted for poster presentation: one on language models and one on graph neural networks.
  • 09/2022: arXiv:2209.10818, preprint. Survey paper on memory-augmented graph neural networks from a neuroscience perspective.
  • 03/2022: Cerebral Cortex, published. Work on the representational format of memories in short- and long-term memory.
  • 02/2022: Aperture (OHBM journal), published. A large collaborative paper describing neuroimaging analysis methods (e.g., the inverted encoding model), with code examples demonstrating the open-source software library BrainIAK.
  • 12/2021: NeurIPS, published. Work with UT Austin on transfer learning between language representations.

Conference Updates
  • 12/2022 NeurIPS: Organizing in-person Memory in Artificial and Real Intelligence (MemARI) workshop [website]
  • 05/2022 Context and Episodic Memory Symposium: Collaborator Mariya Toneva (Princeton, Max Planck SWS) presented joint work on 'Language models that can remember'
  • 04/2022 From Neuroscience to Artificially Intelligent Systems (NAISys) at CSHL: Poster summarizing our neuroscience-inspired efforts to improve the long-range capabilities of language models
  • 03/2022 Computational and Systems Neuroscience (COSYNE) workshop: Virtual talk at the 'Mechanisms, functions, and methods for diversity of neuronal and network timescales' workshop