Killian Laboratory for Vision and Memory Research


Nathaniel Killian, PhD | Assistant Professor

Department of Neurological Surgery | Nathaniel.killian@einsteinmed.edu

Dr. Killian is a co-founder of Raven Neuro, Inc., and is supported by the Philip V. and Anna S. Brown Foundation and a grant from the Sarah K. de Coizart Perpetual Charitable Trust.

 

We accept applications for post-doctoral and technician positions and welcome Einstein graduate students for rotations. If interested, please email: Nathaniel.killian@einsteinmed.edu

 

We aim to enhance our understanding of the neuronal underpinnings of behavior in complex brain networks, focusing on brain systems involved in visuospatial learning and memory. In addition, we seek to transform the knowledge gained from our research into clinical neural interface applications. To support these goals, we develop technologies and techniques to accelerate experimental work and anticipate translation.

 

We have made significant advances in the research areas of neural prostheses (Front Psychol. 2016, Sci Rep. 2016), bidirectional neuronal interfaces (Front Neurosci. 2016), visuospatial cognition (Nature 2012, PNAS 2015), and the relationship between cortical connectivity and recognition memory. Working with Dr. Elizabeth Buffalo at Emory, we advanced our understanding of the representation of visual space in the entorhinal cortex (EC) through single-unit and field potential electrophysiology in awake, behaving monkeys. We have also reported on the interlaminar functional connectivity of the EC as it relates to the novelty of visual stimuli and the strength of memory formation (Killian and Buffalo, submitted). With Dr. Steve Potter at Georgia Tech, we developed an in vitro brain-machine interface testbed for thick neuronal tissue constructs.

As an NIH NRSA (F32) recipient at MGH working with Dr. John Pezaris, I advanced our understanding of non-classical receptive fields in the lateral geniculate nucleus (LGN) and established the non-human primate artificial vision paradigm. Over the course of years, we trained monkeys to use a visual prosthesis, teaching them associations between discrete points of visual activation and the letters of the Roman alphabet (Sci Rep. 2016). I then designed, built, and chronically implanted a prosthetic device composed of bundles of 64 to 128 platinum-iridium microwires in each LGN; the wires evoked distinct, reproducible artificial visual percepts called phosphenes.

Our research team is currently focused on the visuospatial representations that subserve learning and memory.

 

Publications