Bio-Synergy Analysis with a Virtual Human System CODA
Bio-Synergy National Research Center/KAIST, Korea
Recently, there has been growing interest in combinational bio-agents that interact with multiple targets to overcome the limitations of current single-target approaches. Many drug development efforts based on Paul Ehrlich's magic-bullet principle, in which a single therapeutic agent with ideal selectivity successfully regulates a single target causing a particular disease, have suffered critical hindrances including unwanted off-target effects and degraded efficacy. Synergistic regulation of multiple targets with multiple agents is expected to remedy those hindrances. Furthermore, recent trends toward 4P healthcare require a more comprehensive spectrum of bio-agents for disease prevention as well as treatment. Functional foods and their ingredients have been drawing increasing attention, especially for preventive medicine and life-long healthcare. As they are inherently composed of multiple components, their interactions with human physiology are thought to be synergistic regulation of multiple targets with multiple agents. This talk introduces a national initiative in which multiple-agent, multiple-target systems biology technology for natural product-based healthcare is being developed. The core components of the technology platform are virtual cell and human systems, which are computational models of molecular, cellular, and organ-level physiological mechanisms. The synergistic effects of multiple agents on multiple targets are simulated and predicted with those virtual systems and validated in real systems, including model cells and animals.
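The abstract does not specify how synergy between agents is scored, but one standard way to quantify whether two agents acting on their targets are synergistic is the Bliss independence model. A minimal sketch (the function name, example effect values, and choice of the Bliss model are illustrative assumptions, not the center's actual method):

```python
def bliss_synergy(effect_a, effect_b, effect_combined):
    """Bliss independence: if two agents act independently, the expected
    combined effect is E_a + E_b - E_a * E_b (effects as fractions in
    [0, 1]). A positive excess over that expectation suggests synergy."""
    expected = effect_a + effect_b - effect_a * effect_b
    return effect_combined - expected

# Illustrative numbers: each agent alone inhibits 30% of target activity,
# while the combination inhibits 70%.
score = bliss_synergy(0.3, 0.3, 0.7)
# expected = 0.3 + 0.3 - 0.09 = 0.51, so the excess 0.19 indicates synergy
```

In practice such a score would be computed over many agent pairs and dose levels from the virtual-system simulations before selecting combinations for validation in model cells and animals.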
Doheon Lee received the B.S., M.S., and Ph.D. degrees in computer science from the Korea Advanced Institute of Science and Technology (KAIST), Korea, in 1990, 1992, and 1995, respectively. Currently, he is a professor in the Department of Bio and Brain Engineering, KAIST, and the director of the Bio-Synergy Research Center (BSRC), a Korean national research project in which over 30 principal investigators collaborate on natural product bioinformatics and systems biology. He was a visiting professor at Stanford University, Indiana University, the Translational Genomics Research Institute (TGen), and the University of Texas at Austin, USA. He is also a technical advisor for CRS Diogenes SRL, Italy. He was an Associate Editor of ACM Transactions on Internet Technology for nine years, and serves on the editorial boards of Computers in Biology and Medicine, the International Journal of Data Mining and Bioinformatics, and Healthcare Informatics Research. He is a co-founder of the ACM International Workshop on Data and Text Mining for Biomedical Informatics. He has published over 100 academic journal papers in bioinformatics, systems biology, and data mining, and holds around 20 technical patents.
Machine Learning: Status and Perspectives
Department of Computer Science & Technology, Nanjing University, China
Machine learning has achieved great success in both research and application during the past decade. It originated as a research branch of artificial intelligence (AI) and has become the mainstream of current AI research. In this talk, we will briefly introduce the progress and status of machine learning and discuss some future perspectives. We will comment on the strengths and weaknesses of deep learning. Then, we will talk about the challenges and opportunities introduced by open-environment machine learning tasks. Moreover, considering that in its current form of “data + algorithm”, machine learning suffers from many weaknesses or even bottlenecks, such as the need for large amounts of training data, the difficulty of adapting to environmental change, and incomprehensibility, we advocate exploring the form of learnware: a well-performing pre-trained learning model with a specification explaining its purpose and/or specialty. Learnwares can be put into a market, such that when one is going to tackle a machine learning task, rather than building a model from scratch, one can proceed in this way: figure out one's own requirement, then browse/search the market, and identify and adopt a good learnware whose specification matches that requirement. In some cases the learnware can be used directly, whereas in more cases one may need to use one's own data to adapt/polish it. Nevertheless, the whole process can be much less expensive and more efficient than building a model from scratch. If learnwares come to reality, strong machine learning models can be achieved even for tasks with small data, and data privacy will become a less serious issue for machine learning tasks. More importantly, it will enable common end users to achieve tricky learning performance that previously could only be achieved by machine learning experts.
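The browse/search step of the learnware workflow can be sketched as matching a user requirement against each model's specification. In this toy version (the `Learnware` class, the vector-valued specification, and cosine similarity as the matching rule are all illustrative assumptions, not the actual learnware formalism), the market returns the best-matching pre-trained models:

```python
from dataclasses import dataclass
from math import sqrt

@dataclass
class Learnware:
    name: str
    spec: tuple           # hypothetical: task description encoded as a vector
    model: object = None  # the pre-trained model itself would live here

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def search_market(market, requirement, top_k=1):
    """Rank learnwares by how well their specification matches the
    user's requirement, returning the top_k candidates."""
    ranked = sorted(market, key=lambda lw: cosine(lw.spec, requirement),
                    reverse=True)
    return ranked[:top_k]

market = [
    Learnware("sentiment-en", (1.0, 0.0, 0.2)),
    Learnware("spam-filter",  (0.1, 1.0, 0.0)),
    Learnware("topic-news",   (0.3, 0.2, 1.0)),
]
best = search_market(market, requirement=(0.9, 0.1, 0.3))[0]
# best.name == "sentiment-en"; the user may then use it directly
# or adapt/polish it with their own small dataset
```

The real proposal's specifications are richer than a similarity lookup, but the shape of the workflow — requirement, search, adopt, adapt — is the same.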
Zhi-Hua Zhou is a Professor and Founding Director of the LAMDA Group at Nanjing University. His main research interests are in artificial intelligence, machine learning, and data mining. He authored the books “Ensemble Methods: Foundations and Algorithms” and “Machine Learning” (in Chinese), and has published more than 100 papers in top-tier international journals and conference proceedings. His work has received more than 23,000 citations, with an h-index of 74. He also holds 18 patents and has extensive experience in industrial collaboration. He has received various awards, including the National Natural Science Award of China (a premium science award in China), the PAKDD Distinguished Contribution Award, the Microsoft Professorship Award, and 12 international paper/competition awards. He serves as Executive Editor-in-Chief of Frontiers of Computer Science, Associate Editor-in-Chief of Science China, and Associate Editor of ACM TIST, IEEE TNNLS, etc. He founded ACML (the Asian Conference on Machine Learning) and served as General co-chair of IEEE ICDM 2016, Program co-chair of the IJCAI 2015 Machine Learning track, etc. He also chairs the IEEE CIS Data Mining and Big Data Analytics Technical Committee, the CCF Artificial Intelligence Technical Committee, etc. He is a Fellow of the ACM, AAAI, AAAS, IEEE, IAPR, IET/IEE, and CCF.
Computational Methods for Large-Scale Microbiome Data Analysis
College of Computing & Informatics, Drexel University, USA
We still know little about microbes. Recently, huge amounts of data have been generated by microbiome projects such as the Human Microbiome Project (HMP) and Metagenomics of the Human Intestinal Tract (MetaHIT). These datasets provide opportunities to study the mysteries of the microbial world, and analyzing them will help us better understand the function and structure of the microbial communities of the human body, the earth, and other environmental ecosystems. However, the huge data volume, the complexity of microbial communities, and the intricate properties of the data create many opportunities and challenges for data analysis and mining. In this talk, I will discuss a computational framework to tackle these challenging issues, focusing on three tasks: 1) visualization approaches for microbiome data and for inferring microbial interactions and relations; 2) computational methods for identifying and visualizing higher-order microbial interactions and relations from three types of microbiome data sources: metagenomes, bacterial genomes, and the literature, respectively; and 3) integration of the interactions and relations extracted from different knowledge sources into a knowledge graph. Statistical and machine learning approaches for consistency checking of the inferred microbial interactions and relations will also be discussed.
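The integration and consistency-checking steps can be illustrated with a toy knowledge graph. Here each extracted relation is a (microbe, relation, microbe, source) tuple; the species names, relation labels, and conflict rule below are invented for illustration and are not the talk's actual data or method:

```python
from collections import defaultdict

# Hypothetical triples extracted from different knowledge sources
# (metagenomes, bacterial genomes, and the literature).
triples = [
    ("B. fragilis",    "promotes", "E. coli",       "metagenome"),
    ("B. fragilis",    "inhibits", "E. coli",       "literature"),
    ("F. prausnitzii", "inhibits", "C. difficile",  "genome"),
    ("F. prausnitzii", "inhibits", "C. difficile",  "literature"),
]

def build_graph(triples):
    """Integrate triples into a simple knowledge graph: each directed
    microbe pair maps to the set of (relation, source) claims about it."""
    graph = defaultdict(set)
    for a, rel, b, src in triples:
        graph[(a, b)].add((rel, src))
    return graph

def find_conflicts(graph):
    """Flag pairs whose sources disagree on the relation type --
    a minimal consistency check before accepting an edge."""
    return [pair for pair, claims in graph.items()
            if len({rel for rel, _ in claims}) > 1]

graph = build_graph(triples)
conflicts = find_conflicts(graph)
# conflicts contains ("B. fragilis", "E. coli"): the metagenome and the
# literature disagree, so that edge needs statistical adjudication.
```

A real pipeline would replace the set-disagreement rule with statistical and machine learning models that weigh source reliability, but the structure — integrate, then check consistency — matches the framework described above.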
Xiaohua Tony Hu is a professor and the founding Co-Director of the NSF Center (I/UCRC) on Visual and Decision Informatics (NSF CVDI), Chair of the IEEE Computer Society Bioinformatics and Biomedicine Steering Committee, and Chair of the IEEE Computer Society Big Data Steering Committee. He joined Drexel University in 2002. Earlier, he worked as a research scientist at the Nortel Research Center and Verizon Labs (the former GTE Labs). In 2001, he founded DMW Software in Silicon Valley, California. Tony's current research interests are in data/text/web mining, big data, and bioinformatics. He has published more than 270 peer-reviewed research papers in various journals, conferences, and books. His research projects are funded by the National Science Foundation (NSF), the US Dept. of Education, the PA Dept. of Health, and the Natural Science Foundation of China (NSFC). He has obtained more than US$8.0 million in research grants in the past 8 years as PI or Co-PI (PI of 7 NSF grants and 1 IMLS grant), has graduated 18 Ph.D. students from 2006 to 2016, and is currently supervising 10 Ph.D. students.