
Biomedical and Electrical Engineer with interests in information theory, evolution, genetics, abstract mathematics, microbiology, big history, and the entertainment industry, including finance, distribution, and representation

boffosocko.com

www.twitter.com/chrisaldrich

www.facebook.com/chrisaldrich

+13107510548

chris@boffosocko.com

plus.google.com/u/0/+ChrisAldrich1/about

stream.boffosocko.com

www.boffosockobooks.com

www.instagram.com/chrisaldrich

pnut.io/@chrisaldrich

Reposted J.K. Rowling's tweet

Bloody Professors of Classics at Cambridge University, with their 'facts' and their books that they SELL for MONEY. pic.twitter.com/vovgJF5kdy


Real-time MRI for precise and predictable intra-arterial stem cell delivery to the central nervous system

An MRI shows stem cells labeled with iron oxide nanoparticles being injected into an animal’s brain. (Credit: Piotr Walczak/Johns Hopkins Medicine)

Working with animals, a team of scientists reports it has delivered stem cells to the brain with unprecedented precision by threading a catheter through an artery and infusing the cells under real-time MRI guidance.

In a description of the work, published online Sept. 12 in the Journal of Cerebral Blood Flow and Metabolism, they express hope that the tests in anesthetized dogs and pigs are a step toward human trials of a technique to treat Parkinson’s disease, stroke, and other brain-damaging disorders.

“Although stem cell-based therapies seem very promising, we’ve seen many clinical trials fail. In our view, what’s needed are tools to precisely target and deliver stem cells to larger areas of the brain,” says Piotr Walczak, M.D., Ph.D., associate professor of radiology at the Johns Hopkins University School of Medicine’s Institute for Cell Engineering. The therapeutic promise of human stem cells is derived from their ability to develop into any kind of cell and, in theory, regenerate injured or diseased tissues ranging from the insulin-making islet cells of the pancreas that are lost in type 1 diabetes to the dopamine-producing brain cells that die off in Parkinson’s disease.

Ten years ago, Shinya Yamanaka’s research group in Japan raised hopes further when it developed a technique for “resetting” mature cells, such as skin cells, to become so-called induced pluripotent stem cells. That gave researchers an alternative to embryonic stem cells that could allow the creation of therapeutic stem cells that matched the genetic makeup of each patient, greatly reducing the chances of cell rejection after they were infused or transplanted. But while induced pluripotent stem cells have enabled great strides forward in research, Walczak says they are not yet approved for any treatment, and barriers to success remain.

In a bid to address one such barrier – how to get the cells exactly where needed and no place else – Walczak and his colleague Miroslaw Janowski, M.D., Ph.D., assistant professor of radiology, sought a way around strategies that require physicians to puncture patients’ skulls or inject them intravenously. The former, Walczak says, is not only unpleasant, but also only allows delivery of stem cells to one limited place in the brain. In contrast, injecting cells intravenously scatters the cells throughout the body, with few likely to land where they’re most needed, says Walczak.

“Our idea was to do something in between,” says Janowski, using intra-arterial injection, which involves threading a catheter, or hollow tube, into a blood vessel, usually in a leg, and guiding it to a vessel in a hard-to-reach spot like the brain. The technique currently is used mainly to repair large vessels in the brain, says Janowski, but the research team hoped it might also be used to get stem cells to the exact place where they were needed. To do that, they would need a way of monitoring the catheter placement and movement of implanted cells in real time.

Walczak and Janowski teamed with colleagues including Monica Pearl, M.D., an associate professor of radiology practicing in the Division of Interventional Neuroradiology, who specializes in intra-arterial procedures. Usually the procedure is performed using an X-ray image as a guide, but that approach ruled out watching injected stem cells’ movements and making adjustments in real time.

In their experiments, after placing the catheter under X-ray guidance, they transferred anesthetized dog and pig subjects to an MRI machine, where images were taken every few seconds throughout the procedure. Once the catheter was in the brain, Pearl pre-injected small amounts of a harmless contrast agent that included iron oxide and could be detected on the MRI. “By using MRI to see in real time where the contrast agent went, we could predict where injected stem cells would go and make adjustments to the catheter placement, if needed,” says Janowski. Adds Jeff Bulte, Ph.D., a professor of radiology who participated in the study, “It’s like having GPS guidance in your car to help you stay on the right route, instead of only finding out you’re lost when you arrive at the wrong place.”

The team then injected both small stem cells (glial progenitor cells from the brain) and large mesenchymal stem cells from bone marrow into the animals under MRI, and found that in both cases, the pre-injected contrast agent and MRI allowed them to accurately predict where the cells would end up. They could also tell whether clumps of cells were forming in arteries and, if so, possibly intervene to avoid letting the clumps grow large enough to cut off blood flow and pose a danger. “If further research confirms our progress, we think this procedure could be a big step forward in precision medicine, allowing doctors to deliver stem cells or medications exactly where they’re needed for each patient,” says Walczak. The research team is planning to test the procedure in animals as a treatment for stroke and cancer, delivering both medications and stem cells while the catheter is in place.

Other authors on the paper are Joanna Wojtkiewicz, Aleksandra Habich, Piotr Holak, Zbigniew Adamiak and Wojciech Maksymowicz of the University of Warmia and Mazury in Poland; Adam Nowakowski and Barbara Lukomska of the Mossakowski Medical Research Center in Poland; Jiadi Xu of the Kennedy Krieger Institute; and Moussa Chehade and Philippe Gailloud of the Johns Hopkins University.

The study was funded by the National Institute of Neurological Disorders and Stroke (grant numbers NS076573, NS045062, NS081544), the Maryland Stem Cell Research Fund, the Department of Defense (grant number PT120368), the Polish National Science Centre (grant number NCN 2012/07/B/NZ4/01427), the National Centre for Research and Development, and a Mobility Plus Fellowship from the Polish Ministry of Science and Higher Education.


@jlizier @Sydney_Library This sounds a lot like the functionality of LibX: http://libx.org/ which is described pretty well by the @Chronicle http://chronicle.com/blogs/profhacker/your-research-variable-solved-libx/35811

Anand Sarwate built something similar, but far more rudimentary: https://ergodicity.net/2014/02/11/a-bookmarklet-for-the-rutgers-university-library-proxy-server/


There aren't a lot out there, but here are the ones I'm aware of:
* Thomas Cover (YouTube): https://www.youtube.com/user/classxteam
* Raymond Yeung (Coursera): https://www.coursera.org/course/informationtheory (may require an account to see the 3 or more archived versions)
* Andrew Eckford/York University (YouTube): Coding and Information Theory: https://www.youtube.com/channel/UCEFL7YLqmfa8fMW8iExYF2g
* NPTEL: Electronics & Communication Engineering http://nptel.ac.in/courses/117101053/

Fortunately, most are pretty reasonable, though they vary in their coverage of topics. I'd be glad to hear about others, good or bad, if people are aware of them. The top two are from professors who've written two of the most common textbooks on the subject. If I recall correctly, a version of the Yeung text is available for download through his course interface.


17w5131: Statistical & Computational Challenges in Large Scale Molecular Biology Workshop @BIRS_Math 3/2017 #ITBio

Arriving in Banff, Alberta Sunday, March 26 and departing Friday March 31, 2017

Organizers

  • Barbara Engelhardt (Princeton University)
  • Anna Goldenberg (University of Toronto)
  • Manolis Kellis (Massachusetts Institute of Technology)
  • Jacob Laurent (Centre national de la recherche scientifique)
  • Jeff Leek (Johns Hopkins University)
  • Stephen Montgomery (Stanford University)

Objectives

Over the past few years, an increasing number of large scale data sets have been made available in molecular biology. GTEx, for example, produced more than 18,000 RNA-Seq assays for multiple tissues in 900 individuals, Mindact generated gene expression data from about 7000 breast tumors in a single study, and 23andMe claims to have sequenced about 900,000 genomes. This growth in the available genomic data is expected to increase our capacity to identify cancer subtypes, regulatory genes, SNPs associated with phenotypes of interest, and biomarkers for many human traits. It also suggests exploring more complex feature representations when analyzing these datasets.

However, increasing the number of samples and features leads to a set of interrelated statistical and computational problems. Accordingly, the objectives of our workshop will be to:

  • Systematically identify the statistical and computational problems arising during the analysis of large scale data in molecular biology;
  • Bring together experts in computational biology, molecular biology, computer science, and statistics to propose innovative solutions to these problems, by leveraging recent advances in each of these fields.

Relevance, importance and timeliness

A number of studies generating high throughput molecular data for a large number of biological samples have been completed over the past five years. Our workshop is important because the availability of these datasets holds great promise for improving health and for understanding molecular biology. First of all, if exploited correctly, larger sample sizes should improve our ability to predict phenotypes of interest from molecular data. This enables very important applications such as improving the survival of cancer patients by better predicting which treatment they should receive, or decreasing bacterial resistance by predicting which antibiotic is effective against a new strain. Correctly exploiting large scale datasets should also allow us to better identify genetic and epigenetic determinants of these phenotypes, yielding a better understanding of human diseases and potentially guiding the development of new treatments and prevention policies. In particular, more samples should allow the detection of less frequent variants in the human genome, or of more complex features involving several modalities (copy number, expression, methylation, etc.) associated with diseases. Finally, larger sample sizes should help with essential unsupervised tasks such as the inference of regulation networks, or the identification of cancer subtypes.

Our workshop is relevant because all of these promises are conditioned on solving new statistical and computational challenges. First (Challenge 1), we need to build new feature spaces and estimators whose complexity is adapted to these larger sample sizes, which involves designing novel, potentially more complex descriptors of the samples while still controlling the bias/variance trade-off. Second (Challenge 2), we need to build models which correctly integrate different modalities, such as copy number variation and gene expression. Third (Challenge 3), larger scale studies are more prone to unwanted variation, because they typically involve different labs and technical changes which can affect the measurements and become confounders in retrospective analyses. Similar or worse problems arise when trying to combine several existing datasets. We need methods which take this unwanted variation into account. Finally (Challenge 4), we need new algorithms that make existing statistical tools scalable to the new sample sizes, and that make estimation over the larger and more complex features of Challenge 1 tractable.
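
As a minimal, self-contained illustration of the unwanted variation in Challenge 3 (simulated data and a hypothetical two-lab batch structure; this is only a sketch of the simplest possible correction, not a method proposed by the workshop), a known batch label can be regressed out of every feature before downstream analysis:

```python
import numpy as np

def regress_out_batch(X: np.ndarray, batch: np.ndarray) -> np.ndarray:
    """X: (samples x features) expression matrix; batch: integer label per sample.
    Returns X with the per-batch mean removed from every feature."""
    design = np.zeros((X.shape[0], batch.max() + 1))
    design[np.arange(X.shape[0]), batch] = 1.0         # one-hot batch design
    beta, *_ = np.linalg.lstsq(design, X, rcond=None)  # per-batch, per-feature means
    return X - design @ beta                           # residual expression

rng = np.random.default_rng(0)
batch = rng.integers(0, 2, size=200)                     # 200 samples from two labs
X = rng.normal(size=(200, 1000)) + 3.0 * batch[:, None]  # lab "1" shifted by +3 everywhere
corrected = regress_out_batch(X, batch)
print(X[batch == 1].mean() - X[batch == 0].mean())                  # ~3.0 before correction
print(corrected[batch == 1].mean() - corrected[batch == 0].mean())  # ~0.0 after correction
```

Real batch effects are of course not simple mean shifts, and naive correction can remove biology that is confounded with batch, which is exactly why this challenge needs care.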

We also believe our workshop is very timely because some of these statistical and computational challenges are starting to be addressed in other application fields of statistics. It is crucial to recognize that the orders of magnitude are still very different in molecular biology and other data science fields because of the cost and complexity of the data generation process: current large scale high throughput sequencing datasets typically contain a few thousand samples but millions of features, while computer vision, web, or astronomy datasets can involve billions or trillions of samples and relatively fewer features. A first consequence is that not all recent developments in machine learning are immediately transferable to computational biology. For example, so-called deep learning methods have gained a lot of popularity and now represent the state of the art in computer vision, but may not be the most appropriate tool for predicting cancer outcome from molecular data. However, the fact that other fields already have much larger sample sizes also means that they have had to develop efficient and scalable algorithms for basic tasks like feature selection, classification or clustering. These recent developments are a great source of inspiration for computational biology, where large scale computation is still an emerging challenge.

We believe having a small scale workshop involving international experts in machine learning, statistics, computational biology and molecular biology is of utmost importance for three main reasons. The first reason is that the technical advances we are referring to are very recent, often unknown to computational biologists, and involve paradigms with which they are sometimes unfamiliar, such as online optimization, accelerated gradient methods and network flow optimization. The second reason is that it is not always obvious to non-statisticians which novel methods are appropriate given the current n/p regime. Conversely, the third reason is that statisticians do not know what the recent challenges are in molecular biology. Having them work on abstract versions of the problems is often not satisfactory, as it is necessary to be aware of the technical realities and of the underlying biology of the problem to come up with useful solutions.


17w5104: Mathematical Approaches to Evolutionary Trees and Networks Workshop @BIRS_math 2/12/17 #ITBio

Arriving in Banff, Alberta Sunday, February 12 and departing Friday February 17, 2017

Organizers

  • Leonid Chindelevitch (Simon Fraser University)
  • Caroline Colijn (Imperial College London)
  • Amaury Lambert (University Pierre and Marie Curie, Paris)
  • Marta Luksza (Institute for Advanced Study, Princeton)
  • Vincent Moulton (University of East Anglia)
  • Tandy Warnow (University of Illinois)

Objectives

The objectives of the workshop are to bring together mathematicians working in three key areas to make progress on these problems. We will also invite several biologists who are keen to engage with mathematicians on the challenges posed by new data on evolutionary processes. Key challenges in the field at the moment centre on the following emerging, interrelated areas, each of which is raising mathematically interesting problems:

1. Inference with evolutionary trees and networks: Ultimately it is necessary not just to obtain evolutionary trees from data using standard methods, but to infer aspects of an underlying biological process. This requires understanding the likelihood of an evolutionary tree or network, or at least some of its informative features, using some stochastic process as the underlying ecological model. In principle, this approach allows simultaneous inference of both evolutionary trees and parameters of the ecological model. Coalescent theory has made considerable progress, for example, in obtaining tree likelihoods for sparsely sampled populations with geographical structure or with known past demographics (see [5] for just one example). In some simplified cases, epidemiological inference methods can estimate transmission trees [2], branching rates through time [5] and other aspects of epidemic spread [7]. However, none of these approaches is currently applicable if there is non-tree-like evolution, or where datasets are large. Furthermore, the range of models for which we can write down a tree likelihood is very limited. This raises nice new problems in probability, statistical inference and ecological modelling. Recently, more general processes (e.g. Lambda-coalescents, which allow multiple rather than strictly pairwise coalescent events) have begun to be used to model populations with large offspring variance, or even to model selection in a non-parametric fashion [3]. This is potentially a powerful tool, particularly for bacteria, which may acquire resistance to antibiotics and spread rapidly as a consequence, yielding both highly variable effective offspring numbers and a need to model selection carefully.
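
As a point of reference for readers outside the area, the simplest instance of such a tree likelihood is the textbook constant-size Kingman coalescent (given here only for orientation; it is not one of the structured, demographic or multiple-merger models discussed above). While the sample has k ancestral lineages, the waiting time t_k until the next coalescence is exponential with rate k(k−1)/2, with time measured in units of N_e generations, so the joint density of the waiting times is

p(t_n, \ldots, t_2) = \prod_{k=2}^{n} \frac{k(k-1)}{2} \exp\!\left(-\frac{k(k-1)}{2}\, t_k\right).

The models mentioned above replace these simple rates with structured, time-varying or multiple-merger analogues, which is precisely where the likelihoods become hard to write down.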

2. Understanding spaces of evolutionary trees: There are a large number of possible labelled, rooted binary trees for a given set of n tips (i.e. for a given set of sequence data): (2n − 3)!! = (2n − 3)(2n − 5)···(3)(1). This works out to roughly 10^184 trees on 100 tips; in contrast, current datasets for evolving bacteria contain thousands of tips. Not even the tools of Bayesian inference, the natural approach in such situations, can systematically explore spaces this big. This motivates the development of mathematical approaches for the exploration of tree space. These include new approaches to continuous tree spaces, including those from tropical geometry [8], and the use of tree metrics [1]. These in turn can lead to tools for averaging trees, and for navigating tree space in efficient ways [6] -- with profound applications in statistical inference from sequence data. Generalizing metrics to the case of evolutionary networks (for example tree-based networks) is another natural and important question.
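
For readers who want to check the arithmetic, here is a quick sketch (an illustration, not part of the workshop abstract) of the double-factorial count of labelled, rooted binary tree topologies:

```python
# Count labelled, rooted binary tree topologies on n tips via the double
# factorial (2n - 3)!! = (2n - 3)(2n - 5)...(3)(1).

def rooted_binary_tree_count(n: int) -> int:
    """Number of labelled, rooted binary tree topologies on n >= 2 tips."""
    count = 1
    for k in range(3, 2 * n - 2, 2):  # multiply the odd numbers 3, 5, ..., 2n - 3
        count *= k
    return count

if __name__ == "__main__":
    n = 100
    trees = rooted_binary_tree_count(n)
    # prints a 185-digit number, on the order of 10^184
    print(f"{n} tips: {len(str(trees))} digits, about {float(trees):.2e} trees")
```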

3. Summarising trees and networks using combinatorial tools: Uncovering shape features, spectral features and other ways to describe trees using quantities that are mathematically tractable will be of considerable interest [4]. As one example, where likelihoods are truly intractable, rapid tools for likelihood-free inference can be used to infer evolutionary processes from sequence data, but only where there are informative ways to summarize key features of the data. Trees are natural combinatorial structures with connections to data: for example, a binary tree is a sequence of partitions of the set of tips (the sequences in a dataset), where each partition is one block smaller than the previous one, moving back through time from the partition with each tip in its own block at the tips of the tree to the partition with all tips in a single block at the root. If the tree is not binary (i.e. it allows multifurcations), more than two blocks can combine at a branching event. Because of the natural link to partitions, the study of tree shapes links to the enumeration of partitions and to lattice path combinatorics. These in turn allow the characterization and enumeration of possible tree shapes. Meanwhile, the study of motifs in other biological networks has been fruitful, and could be extended to tree and evolutionary network shapes. Trees and evolutionary networks are of course also graphs (with an added time dimension); the tools of algebraic graph theory are now finding application in this area of mathematical biology.
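
To make the partition-sequence view concrete, here is a toy sketch (a made-up example on four tips A, B, C, D, not taken from the abstract). Reading the tree ((A,B),(C,D)) from its tips toward its root, each branching event merges two blocks of the current partition:

```python
def partition_sequence(tips, merge_order):
    """Start with every tip in its own block; at each event, merge the two
    blocks containing the named tips, recording the partition after each step."""
    blocks = [frozenset({t}) for t in tips]
    sequence = [list(blocks)]
    for a, b in merge_order:
        block_a = next(blk for blk in blocks if a in blk)
        block_b = next(blk for blk in blocks if b in blk)
        blocks = [blk for blk in blocks if blk not in (block_a, block_b)]
        blocks.append(block_a | block_b)
        sequence.append(list(blocks))
    return sequence

# The tree ((A,B),(C,D)): A and B coalesce, then C and D, then the two cherries.
for partition in partition_sequence("ABCD", [("A", "B"), ("C", "D"), ("A", "C")]):
    print([set(blk) for blk in partition])
# Each successive partition has exactly one block fewer than the previous one.
```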

The community's response to the idea for this workshop has been very positive. A * beside a participant's name indicates that they have expressed enthusiasm for the workshop, and plan to attend. 

References

[1] Louis J Billera, Susan P Holmes, and Karen Vogtmann. Geometry of the space of phylogenetic trees. Adv. Appl. Math., 27(4):733–767, November 2001.
[2] Xavier Didelot, Jennifer Gardy, and Caroline Colijn. Bayesian inference of infectious disease transmission from whole-genome sequence data. Mol. Biol. Evol., 31(7):1869–1879, July 2014.
[3] Alison M Etheridge, Robert C Griffiths, and Jesse E Taylor. A coalescent dual process in a Moran model with genic selection, and the Lambda coalescent limit. Theor. Popul. Biol., 78(2):77–92, September 2010.
[4] Fanny Gascuel, Regis Ferriere, Robin Aguilee, and Amaury Lambert. How ecology and landscape dynamics shape phylogenetic trees. Syst. Biol., 64(4):590–607, July 2015.
[5] Amaury Lambert and Tanja Stadler. Birth–death models and coalescent point processes: The shape and probability of reconstructed phylogenies. Theor. Popul. Biol., 90:113–128, December 2013.
[6] Tom M W Nye. An algorithm for constructing principal geodesics in phylogenetic treespace. IEEE/ACM Trans. Comput. Biol. Bioinform., 11(2):304–315, March 2014.
[7] David A Rasmussen, Erik M Volz, and Katia Koelle. Phylodynamic inference for structured epidemiological models. PLoS Comput. Biol., 10(4):e1003570, April 2014.
[8] David Speyer and Bernd Sturmfels. The tropical Grassmannian. Adv. Geom., 4(3):389–411, 2004.