Computational Neuroscience

This course introduces graduate students to basic concepts and theories of computational neuroscience, to current research questions and focal areas in the field, and to how ideas from the field may help solve difficult problems in medicine and biomedical engineering.

Monday 15 July: Introduction

09:00  A crash course on the biophysics of the brain
10:30  Coffee break
11:00  Models of a single neuron
12:30  Lunch
14:00  Student presentation session 1
15:30  Coffee break
16:00  Student presentation session 2
17:30  Dinner
19:00  Evening tutorial

Tuesday 16 July: Neural codes

09:00  Models of the visual system
10:30  Coffee break
11:00  Neural coding in the visual system
12:30  Lunch
14:00  Analyzing big data in neuroscience
15:30  Coffee break
16:00  Data analysis in neuroscience
17:30  Dinner
19:00  Evening tutorial

Wednesday 17 July: Motor system and dynamics of neural circuits

09:00  Models of the motor system
10:30  Coffee break
11:00  Brain-machine interfaces and robotics
12:30  Lunch
14:00  Dynamics of neuronal networks
15:30  Coffee break
16:00  Excitation-inhibition balance and chaos
17:30  Dinner
19:00  Evening problem sets

Thursday 18 July: Plasticity and learning

09:00  Synaptic plasticity and learning
10:30  Coffee break
11:00  Dynamics of neuronal networks in the hippocampus
12:30  Lunch
14:00  Outing to Lantau Island
19:00  Evening problem sets

Friday 19 July: Emerging themes in the field

09:00  Whole-brain activity data and circuits in small animals (part I)
10:30  Coffee break
11:00  Whole-brain activity data and circuits in small animals (part II)
12:30  Lunch
14:00  Large-scale brain models (part I)
15:30  Coffee break
16:00  Large-scale brain models (part II)
18:00  Dinner

Speakers

  • Prof. Rosa H. M. Chan, Associate Professor, City University of Hong Kong. Dr. Rosa H. M. Chan is currently an Associate Professor in the Department of Electronic Engineering at City University of Hong Kong. She received her B.Eng (1st Hon.) degree in Automation and Computer-Aided Engineering from the Chinese University of Hong Kong in 2003. Her undergraduate studies brought her to New York University (US) to study computer animation and visual effects, and to Kyushu University (Japan) to research microfluidics for astronautics applications. She was awarded the Croucher Scholarship and the Sir Edward Youde Memorial Fellowship for Overseas Studies in 2004 to pursue graduate studies at the University of Southern California (USC). In the summer of 2010, she was supported by a Google Scholarship to participate in the Singularity University Graduate Studies Program at NASA Ames. Dr. Chan received her Ph.D. degree in Biomedical Engineering from USC in 2011, where she also received M.S. degrees in Electrical Engineering and Aerospace Engineering. Her research interests include computational neuroscience, neural prostheses, and brain-computer interface applications. She was co-recipient of the 2013 Outstanding Paper Award of the IEEE Transactions on Neural Systems and Rehabilitation Engineering for research breakthroughs in mathematical modelling for hippocampal cognitive prostheses and memory facilitation. Dr. Chan was Chair of the Hong Kong-Macau Joint Chapter of the IEEE Engineering in Medicine and Biology Society (EMBS) in 2014 and was recently elected to the IEEE EMBS AdCom as Asia-Pacific Representative (2018-2020).
  • Prof. Quan Wen, School of Life Sciences, University of Science and Technology of China. Quan received his BS in Physics from Fudan University (复旦大学) and his PhD in Physics from Stony Brook University and Cold Spring Harbor Laboratory on Long Island, New York. After a short stay at the HHMI Janelia Farm Research Campus, he did postdoctoral research in the Department of Physics and the Center for Brain Science at Harvard University. He is now a Bairen Professor in the School of Life Sciences at the University of Science and Technology of China.
  • Prof. Xiao-Jing Wang, Global Professor of Neuroscience; Professor of Physics; Co-Director of the Swartz Center for Theoretical Neuroscience, New York University. Research in my group aims at understanding the dynamical behavior and function of neural circuits. Using theoretical and modeling approaches, in close collaboration with experimentalists, we investigate the neural mechanisms and computational principles of cognitive processes such as decision-making (how we make a choice among multiple options) and working memory (how our brain holds and manipulates information "online" in the absence of sensory stimulation). I obtained my Ph.D. degree in Theoretical Physics from the Free University of Brussels in 1987, when I switched to the then-nascent field of computational neuroscience. I was on the faculty at the University of Pittsburgh, Brandeis University, and Yale University; I was also a visiting professor at École Normale Supérieure in Paris and Tsinghua University in Beijing. Recently, I moved from Yale to join the Center for Neural Science at New York University. My group has focused on the prefrontal cortex (PFC), often called "the CEO of the brain". I am interested in identifying circuit properties that enable the PFC to subserve higher cognitive functions, in contrast to early sensory processing. We found that a local circuit endowed with strong but slow recurrent dynamics ("reverberation") is well suited for both decision-making and working memory, suggesting a canonical "cognitive-type" neural circuit. Mathematically, such circuits are described as "attractor networks" characterized by powerful feedback mechanisms, long transients, and self-sustained persistent activity. This finding led us to investigate many kinds of decision processes, including reward-based economic choice behavior, categorization, inhibitory control of action selection, attention switching, and probabilistic inference.
A recent collaborative work offers a theoretical explanation, supported by single-neuron data from behaving monkeys, of the common and perplexing observation that neural activities in the PFC display a high degree of mixed selectivity and heterogeneity. Furthermore, we are keen to learn why the brain exhibits such a rich diversity of inhibitory interneuron subtypes, and what their roles are in tuning, normalization, competition, and rhythmic synchronization. Finally, in a new field called "Computational Psychiatry", and in collaboration with psychiatrists, we use our models to examine cellular and circuit abnormalities that may be causally linked to the cognitive deficits associated with mental disorders such as schizophrenia. Recently, we have begun to investigate large-scale brain circuit models, with the long-term goal of developing a theoretical framework and computational platform to explore how brain sub-networks dedicated to different "building blocks of cognition" (perceptual judgment, valuation, representations of task rule and uncertainty, inhibitory control of response, etc.) work together to underlie flexible behavior.
  • Prof. Vincent Chi Kwan Cheung, Assistant Professor, Chinese University of Hong Kong. See above (course director).
  • Prof. Yu Hu, Assistant Professor, Hong Kong University of Science and Technology. Research interests: theoretical and computational neuroscience; graph motifs and network dynamics; modeling and analysis of large-scale neuroscience data. Education: Ph.D. in Applied Mathematics, University of Washington, 2014; B.S. in Mathematics, Peking University, 2009. Professional positions: Postdoctoral Fellow, Harvard University, 2014-2018; HKUST, Dept. Math and Life Science, 2018-present. Webpage: http://mahy.people.ust.hk/
  • Prof. Julie Semmelhack, Assistant Professor, Hong Kong University of Science and Technology. Julie Semmelhack received her PhD in 2009, working on the Drosophila olfactory system in Jing Wang's lab at UCSD. She began working on the zebrafish visual system as a postdoc with Herwig Baier, first at UCSF and then at the Max Planck Institute for Neurobiology in Martinsried, Germany. Julie started her lab at HKUST in early 2017 and is using functional imaging and quantitative analysis of behavior to understand the neural circuits underlying innate visual behaviors.
  • Prof. Karthik Devarajan, Associate Professor, Temple University. Educational background: PhD, Northern Illinois University, 2000; MSc (Tech), Birla Institute of Technology & Science, India, 1992. Industry experience: Statistical Scientist, Cancer Bioinformatics, AstraZeneca R&D Boston, Waltham, MA, 2002-2005; Biostatistician, Bristol-Myers Squibb Pharmaceutical Research Institute, Princeton, NJ, 1999-2002. Lab overview: Advances in high-throughput technologies in the past decade have given rise to large-scale biological data measured on a variety of scales. Gene expression studies enable the simultaneous measurement of the expression profiles of tens of thousands of genes and proteins, often from only a handful of biological samples. Data are typically presented as a two-way numeric table in which rows represent genes, columns represent samples, and each entry is the expression level of a given gene in a given sample. The samples may represent a phenotype such as tissue type, experimental condition, or time point. Traditionally these studies have used microarray technology to measure mRNA expression and, more recently, SNP arrays to measure allele-specific expression and DNA copy-number variation, methylation arrays to quantify DNA methylation, and next-generation sequencing technologies, such as RNA-Seq and ChIP-Seq, to measure digital gene expression. In addition, high-throughput compound and siRNA screening assays are specifically designed to detect interactions with compounds by directly measuring inhibition of siRNA or kinase activity. These studies have generated massive amounts of data requiring analysis and interpretation, while offering tremendous potential for growth in our understanding of the pathophysiology of many diseases. The focus of my research is the development of novel statistical methodology for the analysis of data stemming from such high-throughput studies. It includes methods for dimension reduction and molecular pattern discovery, as well as for correlating a qualitative or quantitative outcome variable (such as tissue type, presence of disease, patient response to treatment, or survival time) with large numbers of covariates (genes, SNPs, or sequence tags) based on supervised and unsupervised learning. The primary focus of my research activities consists of two problems from statistical learning theory: nonnegative matrix factorization and continuum regression.
  • Dr. Robert Ajemian, Research Scientist, Massachusetts Institute of Technology. I'm a research scientist at MIT studying the neural control of movement, the representation of learned skills as motor memories and, more recently, the relationship between motor memories and declarative memories. It appears that information storage via these two distinct modalities has more in common than previously thought. My main interests are: 1) assessing, through psychophysical experimentation, movement behavior in individuals with neurological disorders; 2) deciphering the cortical codes by which movement commands are represented in the brain through analysis of neurophysiological data; 3) elucidating neural principles of learning and self-organization as they apply throughout the brain generally and the motor system specifically; 4) using knowledge of neural representation and learning to build neuroprosthetic devices (brain-machine interfaces); and 5) developing a unifying framework capable of explaining how both motor memories and declarative memories are formed and persist through time. Recently, my efforts have focused on two more specific problems. The first is developing a science of human athletic performance based on systems neuroscience principles, a "sports neuroscience", if you will. For several decades, kinesiologists and sports scientists have made numerous observations on the practice and performance habits of athletes, and these observations have resulted in a set of heuristics for general sensorimotor skill enhancement. Yet these heuristics are relatively sparse and ill-defined, leaving most performance improvements, ultimately, to the whim of trial and error. If a codifying theory could be formulated at the systems level, our understanding of athletic performance could be deepened, leading, hopefully, to more effective teaching and coaching interventions.
So far, I have focused on a few specific heuristics, such as the puzzling problem of why a professional athlete (or musician, or other expert practitioner of fine motor skills) needs to practice so extensively immediately prior to performance on the day of a competition, regardless of how much practice has occurred in the preceding days (article on practice effects). Here I am drawing on my own experience as a collegiate tennis player, because I couldn't play worth a damn without rallying from the baseline for at least half an hour before a match. The second problem is how to train a brain-machine or brain-body interface device to perform at or near the level of an unimpaired human. Lately, there has been a lot of hype in the media (and even in scientific journals) about the long-term potential of recent developments in motor neuroprosthetics. However, none of these devices actually work very well. Some might say that tweaking the current methodologies will ultimately lead to better performance. I would disagree, arguing that there are fundamental flaws in the current paradigm. In particular, all current devices rely on a "decoding" stage, whereby the brain is assumed to represent movement commands in a certain way, and control algorithms are employed to find the best-fit parameters for this presumed representation. However, because we don't yet understand enough scientifically about how the brain generates movement commands, any representation we impose on the system for decoding purposes will lead to fundamental performance limitations. It makes far more sense to allow the brain to interact autonomously with the peripheral actuators without the interposition of a "decoding stage", so that the brain figures out its own means of control (just as a baby has to engage in motor babbling to develop coordination).
Basically, I am saying that the brain is smarter than we are at understanding movement control, so let it solve the problem. Of course, this approach has the downside of significantly increased learning time, but that is a limitation which can be addressed, unlike the fundamental limitations of the alternative approach.
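The attractor-network mechanism Prof. Wang describes, strong but slow recurrent excitation with mutual inhibition, can be sketched as a toy two-population rate model. This is a minimal illustration with made-up parameters, not his published model; the point is that the population favored by the stimulus wins the competition (decision-making) and its activity persists after the stimulus is switched off (working memory).

```python
import numpy as np

def simulate_decision(I1, I2, T=3.0, dt=1e-3, tau=0.1, seed=0):
    """Toy two-population attractor model: self-excitation + mutual inhibition.

    All parameters are illustrative, not fitted to any published circuit.
    """
    rng = np.random.default_rng(seed)
    w_self, w_inh = 2.0, 2.5                                # recurrent / cross weights
    f = lambda x: 1.0 / (1.0 + np.exp(-4.0 * (x - 1.0)))   # saturating gain function
    r = np.zeros(2)                                         # firing rates of the two pools
    for k in range(int(T / dt)):
        t = k * dt
        stim = np.array([I1, I2]) if t < 1.0 else np.zeros(2)  # stimulus off at 1 s
        drive = w_self * r - w_inh * r[::-1] + stim            # reverberation - inhibition
        drive += 0.01 * rng.standard_normal(2)                 # small noise
        r += dt / tau * (-r + f(drive))                        # Euler step of rate dynamics
    return r  # rates at t = T, two seconds after stimulus offset

r = simulate_decision(1.0, 0.9)  # pool 1, with the stronger input, wins and persists
```

Because the recurrent feedback is strong enough to sustain the winner's activity without input, the network behaves as a bistable attractor rather than a passive filter.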
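The nonnegative matrix factorization problem at the center of Prof. Devarajan's research can be made concrete with the classic multiplicative-update rules for the Frobenius-norm objective. This is a textbook sketch on synthetic data standing in for a genes-by-samples expression table, not his own methodology.

```python
import numpy as np

def nmf(V, rank, n_iter=500, seed=0):
    """Factor a nonnegative matrix V (genes x samples) as V ~ W @ H.

    Uses Lee-Seung-style multiplicative updates for the Frobenius objective;
    columns of W act as 'metagenes', rows of H as their loadings per sample.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 1e-3       # nonnegative random initialization
    H = rng.random((rank, n)) + 1e-3
    eps = 1e-12                            # guard against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update preserves nonnegativity of H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update preserves nonnegativity of W
    return W, H

# synthetic rank-2 'expression table': 20 genes, 10 samples
rng = np.random.default_rng(1)
V = rng.random((20, 2)) @ rng.random((2, 10))
W, H = nmf(V, rank=2)
```

Because both factors stay elementwise nonnegative, the recovered patterns are additive parts rather than the signed components a PCA would return, which is what makes the decomposition interpretable for expression data.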
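The "decoding stage" Dr. Ajemian critiques can likewise be illustrated with a toy linear decoder: assume firing rates represent cursor velocity linearly, then regress for the best-fit parameters. Everything below (neuron count, tuning model, noise level, function names) is a hypothetical sketch of the paradigm, not any lab's actual device.

```python
import numpy as np

def fit_linear_decoder(rates, velocity):
    """Least-squares 'decoding stage': map firing rates to cursor velocity.

    A linear velocity representation is assumed up front; regression then
    finds its best-fit parameters, exactly the imposed-representation step
    the text argues limits performance.
    """
    X = np.hstack([rates, np.ones((rates.shape[0], 1))])  # append a bias column
    W, *_ = np.linalg.lstsq(X, velocity, rcond=None)      # solve min ||X W - velocity||
    return W

def decode(W, rates):
    X = np.hstack([rates, np.ones((rates.shape[0], 1))])
    return X @ W  # predicted velocities, one row per time bin

# synthetic session: 40 neurons whose rates are noisy linear functions of velocity
rng = np.random.default_rng(0)
true_tuning = rng.standard_normal((2, 40))           # hypothetical tuning weights
vel = rng.standard_normal((500, 2))                  # 2-D cursor velocity over 500 bins
rates = vel @ true_tuning + 0.1 * rng.standard_normal((500, 40))
W = fit_linear_decoder(rates, vel)
vel_hat = decode(W, rates)
```

When the data really are generated by the assumed linear model, as in this synthetic example, the decoder works well; the argument in the bio is that real motor cortex does not honor any such imposed representation, which is where the performance ceiling comes from.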