Data Fusion, Applied Algebraic Topology, Structural Recurrent Neural Networks, and Deep Reinforcement Learning for Autonomous Network-Centric OODA Loop Game Theory

Alex Alaniz, PhD, 21 Dec 2017

"Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world," Vladimir Putin, 2017.

Autonomous Network-Centric Game Theory, Applied Algebraic Topology, Structural Recurrent Neural Networks (S-RNNs), and Deep Reinforcement Learning (DRL). A preeminent goal of game theoretic cyberspace operations is data fusion: integrating local information to detect emergent topological structures in static or dynamic networks, a task at which algebraic topology excels. Elementary methods of applied algebraic topology are inherent in S-RNNs [1-3] and have recently begun to spread into many areas of deep learning, enabling one-shot robot programming and the capture of predictive graph topologies [2]. The goal of DRL is to maximize rewards, e.g., to maximize video game scores. DRL and applied algebraic topology, however, have yet to merge. It is argued below that methods combining DRL and S-RNNs with applied topology, in what might be called deep topological reinforcement learning (DTRL), should be developed for the execution of autonomous network-centric game theoretic strategies.

DRL. As a technique based on learning from experience, DRL is one of the leading approaches for producing fully autonomous agents that interact with their environments and learn optimal behaviors, improving over time through trial and error [4]. DRL has been in the news lately with the defeat of Go champion Ke Jie on 23 May 2017 by DeepMind's AlphaGo. Trained and tuned by human experts using DRL methods, AlphaGo was itself defeated one hundred games to zero by AlphaGo Zero in October 2017; AlphaGo Zero trained itself using DRL, without human intervention, over a span of days [5]. Acquired by Google in 2014, DeepMind had already used DRL in 2015 to master forty-nine classic Atari 2600 games without recourse to big data, highlighting that algorithms which learn from experience are, in certain situations, more fundamental than large data sets [6]. Accordingly, researchers at DeepMind believe that DRL will become an important component in the development of autonomous artificial general intelligence [7].

Applied Algebraic Topology. Originating with Henri Poincaré in the late nineteenth and early twentieth centuries, algebraic topology uses abstract algebra to study topological spaces and the mappings between them. Applied algebraic topology (or computational topology) is currently advancing into many disciplines. Leading methods in topological data analysis (TDA) include homology [8] and persistent homology [9]. In physics, these methods are being used to study Hamiltonian dynamics [10] and phase transitions in statistical physics [11]. In neuroscience, homological methods are becoming indispensable for mapping brain networks [12-20]. The Blue Brain Project recently determined that neocortical networks contain directed simplices of dimensions up to seven, with as many as eighty million directed 3-simplices [13]; structures of up to eleven dimensions have been reported in the popular scientific press following these developments.
In other fields, homology is being applied to sensor networks [21], social networks [22], viral evolution [23], autonomous driving [24], automata, languages, and programming [25], game theory [26], protein folding [27], and target enumeration [28]. A July 2017 paper applies deep learning and homology to predict biomolecular properties [29]. Another paper from July 2017 explores the application of deep learning and homology to classifying two-dimensional shapes and social networks [30]. No papers to date link DRL to algebraic topology.
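To make the homological machinery referenced above concrete, the following minimal Python sketch (an illustration only, not code from any of the cited works) computes the Betti numbers of a tiny simplicial complex, a hollow triangle, directly from its boundary matrices; B0 counts connected components and B1 counts one-dimensional holes. Persistent homology [9] extends this idea by tracking how such features appear and disappear across a filtration of complexes built from data; open-source packages such as GUDHI and Ripser automate that computation at scale.

```python
# Minimal illustration (not from the cited works): Betti numbers of a hollow
# triangle (vertices a, b, c; edges ab, bc, ca; no filled 2-simplex) computed
# from boundary matrices over the rationals.
import numpy as np

# Boundary operator d1: edges -> vertices (columns ab, bc, ca; rows a, b, c).
d1 = np.array([[-1,  0,  1],
               [ 1, -1,  0],
               [ 0,  1, -1]], dtype=float)

# No triangles are filled in, so the boundary operator d2 is the zero map.
rank_d1 = np.linalg.matrix_rank(d1)
rank_d2 = 0

n_vertices, n_edges = d1.shape
betti_0 = n_vertices - rank_d1              # connected components
betti_1 = (n_edges - rank_d1) - rank_d2     # independent 1-dimensional holes

print(betti_0, betti_1)  # -> 1 1: one component enclosing one loop
```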

Algebraic Topology and Games. Topology is not absent from agent-based game theoretic applications in econometric modeling [31]. The machine methods that defeated the world champions at chess and Go, however, did so without explicit topological methods, as these games have relatively simple topologies. In a doctoral dissertation on building an intelligent player for the game of Risk, the word topology is not used [32]; the meaning of geographical position is reduced largely to simple topological adjacency. In the Game of War devised by Guy Debord, the topology is more complex: evaluating lines of communication, geographical position, logistics, and the relative speed and strength of different units all factor into the outcome [33]. The image below, from WikiGalaxy, is a topologized version of Wikipedia linking one hundred thousand clickable articles by link distance, with flyby capability over humanity's concept space.

Figure 1. Screenshot of WikiGalaxy centered on the 2014 Wikipedia article on calculus.

Framework for DTRL and Autonomous Network-Centric Game Theoretic Strategies. Autonomous execution of game theoretic strategies in network-centric environments will require a deep learning framework for artificial general intelligence, a goal that has so far eluded the field since Alan Turing's seminal 1950 paper [34]. Effective exploitation of topologized data structures and their spatiotemporal dynamics in such a framework will require DTRL methods capable of extracting topological invariants with deep neural net machine learning [35-37] over topologized data structures, coupled to machine learning methods for mapping contexts and ontologies [38], as well as new neural network layer types designed with topological considerations in mind [39]. A high-level conceptual framework for stitching these dynamical topological concepts together is the predictor-corrector observe, orient, decide, and act (OODA) loop that Air Force Colonel John Boyd introduced to the combat operations process [40]. Reduced to a first-order system, the DTRL predictor-corrector OODA-loop framework would take the form

$$\tau_i(t_i + \Delta t_i) \leftrightarrow \hat{T}(\tau_i, \tau_{ij}, \ldots, \tau_{n\text{-tuple}})\,\tau_i\,\Delta t_i + W(\tau_i, \tau_{ij}, \ldots, \tau_{n\text{-tuple}})\,\Delta t_i,$$

where $\hat{T}$ is a collection of time-evolution operators, the $\tau_i$ are a set of dynamical graph topologies in the presence of noise with clock rates $t_i$, the $\tau_{ij}$ are any two-way couplings between the $i$th and $j$th network topologies, the $\tau_{ijk}$ are any three-way couplings, and so forth. Such methods are already in use in network systems biology [41] (see Figures 2 and 3 below, from [41]). The accretive process would develop a rich representation of the world, providing a platform for artificial general intelligence in the spirit of Jeff Hawkins' premise that the accretion of new synapses (here, the accretion of new nodes and/or topologies) is a more powerful form of learning than deep learning weight tuning alone [42]. Work toward a topological OODA-loop framework has already begun within cognitive science [43-45]. Optimizing a large set of reward functions, humans develop OODA-loop experiential graph theoretic predictor-corrector representations of the dynamics of subsets of the universe; these representations steadily interlace into increasingly larger graphs across emergent boundaries, enabling autodidactic learning, discovery, and increasingly sophisticated narrow- and broad-scale game theoretic strategies. The unsupervised and supervised tasks would be fivefold: discover topologies; determine their ontological dynamics and reward functions; apply Monte Carlo predictor-corrector DTRL to tune parameters against real-time data; identify state functions indicative of phase transformations and other dynamical catastrophes; and search for related linkages across topologies, e.g., relating methods in epidemiology to botnet-driven denial-of-service attacks. Computational burdens will be eased with the arrival of topological quantum field theory computers [46], allowing for significant "many worlds" Monte Carlo studies. Dynamic, real-world rabbit-hole graph topologies for training DTRL OODA-loop predictor-corrector AI systems are available from Wikipedia, news aggregators, social media feeds, and many other online resources, the purpose being to extract human reward functions in order to beat humans and their nation states at their own games. Low-hanging problems will include expanding on systems capable of machine-based attention [47], inference [48], and generalization [49]. Low-hanging applications would extend network percolation studies [50-51] and mean-field approximations of Earth's online topological data structures and data flows to highlight correlated risks threatening phase changes and other dynamical catastrophes in the sense of modern catastrophe theory [52], e.g., the bursting of the housing bubble in 2007-2008, the Arab Spring, and the 2016 American presidential election.
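As an illustration only, the first-order relation above can be read as a discrete-time predictor-corrector (Heun-type) update applied to a set of coupled network states. In the sketch below, each topology $\tau_i$ is represented by a weighted adjacency matrix; the evolution operator $\hat{T}$, the coupling (here, a simple drift toward the mean of the coupled topologies), and the forcing term $W$ are placeholder choices, not prescriptions from the text. In a full DTRL system, $\hat{T}$ and $W$ would themselves be learned, e.g., by an S-RNN conditioned on the observed graph dynamics.

```python
# Hedged sketch: Heun-type predictor-corrector steps of the first-order
# OODA-loop relation above, with each topology tau_i represented as a weighted
# adjacency matrix. T_hat, the couplings, and W are placeholders for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_topologies = 5, 3
taus = [rng.random((n_nodes, n_nodes)) for _ in range(n_topologies)]
dts = [0.10, 0.05, 0.20]  # per-topology clock increments Delta t_i

def T_hat(taus, i):
    """Placeholder evolution operator: drift toward the mean of the coupled
    topologies (standing in for the tau_ij, tau_ijk, ... n-tuple couplings)."""
    return 0.1 * (np.mean(taus, axis=0) - taus[i])

def W(taus, i, rng):
    """Placeholder noise/forcing term."""
    return 0.01 * rng.standard_normal(taus[i].shape)

def rhs(taus, i, rng):
    """Right-hand side of the update: T_hat acting on tau_i plus forcing W."""
    return T_hat(taus, i) @ taus[i] + W(taus, i, rng)

def step(taus, dts, rng):
    # Predictor: explicit Euler estimate of each topology at t_i + Delta t_i.
    k1 = [rhs(taus, i, rng) for i in range(len(taus))]
    predicted = [taus[i] + k1[i] * dts[i] for i in range(len(taus))]
    # Corrector: average the slopes at the current and predicted states (Heun).
    k2 = [rhs(predicted, i, rng) for i in range(len(taus))]
    return [taus[i] + 0.5 * (k1[i] + k2[i]) * dts[i] for i in range(len(taus))]

for _ in range(100):
    taus = step(taus, dts, rng)
print(np.round(taus[0], 3))  # evolved adjacency matrix for the first topology
```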

Figure 2 – Systems biology [41].

Mathematical modeling methods span ordinary differential equations; linear, quadratic, and nonlinear programming; singular value decomposition (SVD); statistical methods; probabilistic methods; combinatorial optimization; machine learning; and (topological) graph theory [41]. SVD [53] and other n-way network reconstruction methods, e.g., neural net point process topology methods [54-56], currently automate the extraction of gene networks and their dynamical interactions.
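As a minimal sketch in the spirit of the SVD-based reverse engineering of [53] (not the authors' actual algorithm, which adds robust regression to select sparse solutions), a linear network model $\dot{x} = A x$ can be fit to time-series measurements with an SVD-based pseudoinverse; the recovered matrix $A$ is the reconstructed interaction network.

```python
# Hedged sketch: SVD-based reconstruction of a linear interaction network
# x_dot = A @ x from synthetic time-series data, in the spirit of [53].
# Real pipelines add robust regression and sparsity constraints; this is
# illustration only, with made-up data.
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_samples = 6, 200

# Sparse "true" connectivity matrix to recover.
A_true = rng.standard_normal((n_genes, n_genes)) * (rng.random((n_genes, n_genes)) < 0.3)

X = rng.standard_normal((n_genes, n_samples))              # measured states
X_dot = A_true @ X + 0.01 * rng.standard_normal(X.shape)   # noisy derivatives

# Least-squares solution via the Moore-Penrose pseudoinverse (computed by SVD).
A_hat = X_dot @ np.linalg.pinv(X)

print("max reconstruction error:", np.max(np.abs(A_hat - A_true)))
```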

Figure 3 – Methods in systems biology [41].

References

1. Ashesh Jain, Amir R. Zamir, Silvio Savarese, and Ashutosh Saxena, Structural-RNN: Deep Learning on Spatio-Temporal Graphs, arXiv:1511.05298, 11 April 2016
2. Bing Yu, Haoteng Yin, and Zhanxing Zhu, Spatio-temporal Graph Convolutional Neural Network: A Deep Learning Framework for Traffic Forecasting, arXiv:1709.04875v2, 25 Sep 2017
3. Yan Duan, Marcin Andrychowicz, Bradly C. Stadie, Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, and Wojciech Zaremba, One-Shot Imitation Learning, arXiv:1703.07326v2, 22 Mar 2017
4. Kai Arulkumaran, Marc Peter Deisenroth, Miles Brundage, and Anil Anthony Bharath, A Brief Survey of Deep Reinforcement Learning, IEEE Signal Processing Magazine, Special Issue on Deep Learning for Image Understanding, arXiv:1708.05866v2, 2017
5. David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, and Demis Hassabis, Mastering the Game of Go without Human Knowledge, Nature 550, 354–359, 19 October 2017
6. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis, Human-level Control through Deep Reinforcement Learning, Nature 518, 529–533, 2015
7. Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman, Building Machines That Learn and Think Like People, Behavioral and Brain Sciences, 2016
8. M. Nakahara, Geometry, Topology and Physics, Graduate Student Series in Physics, Institute of Physics Publishing, Bristol and Philadelphia, 1990
9. Robert Ghrist, Elementary Applied Topology, CreateSpace Independent Publishing Platform, 2014
10. Leonid Polterovich and Egor Shelukhin, Autonomous Hamiltonian Flows, Hofer's Geometry and Persistence Modules, arXiv:1412.8277v2, 2015
11. Irene Donato, Matteo Gori, Marco Pettini, Giovanni Petri, Sarah De Nigris, Roberto Franzosi, and Francesco Vaccarino, Persistent Homology Analysis of Phase Transitions, arXiv:1601.03641v1, 2016
12. Ann Sizemore, Chad Giusti, Ari Kahn, Richard F. Betzel, and Danielle S. Bassett, Cliques and Cavities in the Human Connectome, arXiv:1608.03520v2, 2016
13. Michael W. Reimann, Max Nolte, Martha Scolamiero, Katharine Turner, Rodrigo Perin, Giuseppe Chindemi, Pawel Dlotko, Ran Levi, Kathryn Hess, and Henry Markram, Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function, Frontiers in Computational Neuroscience, 2017
14. Chad Giusti, Robert Ghrist, and Danielle S. Bassett, Two's Company, Three (or More) Is a Simplex: Algebraic-topological Tools for Understanding Higher-order Structure in Neural Data, J Comput Neurosci, 2016

15. Helmut Schmidt, George Petkov, Mark P. Richardson, and John R. Terry, Dynamics on Networks: The Role of Local Dynamics and Global Networks on the Emergence of Hypersynchronous Neural Activity, PLOS Computational Biology, 2014
16. Dileep George and Jeff Hawkins, A Hierarchical Bayesian Model of Invariant Pattern Recognition in the Visual Cortex, Proceedings of the International Joint Conference on Neural Networks, IEEE, 2005
17. Gagan S. Wig, Bradley L. Schlaggar, and Steven E. Petersen, Concepts and Principles in the Analysis of Brain Networks, Annals of the New York Academy of Sciences, 2011
18. Eric Goles and Gonzalo A. Ruz, Dynamics of Neural Networks over Undirected Graphs, Neural Networks, Volume 63, Issue C, 2015
19. H. Lee, H. Kang, M. K. Chung, B. N. Kim, and D. S. Lee, Persistent Brain Network Homology from the Perspective of Dendrogram, IEEE Trans Med Imaging, 2012
20. H. Lee, H. Kang, M. K. Chung, B. N. Kim, and D. S. Lee, Computing the Shape of Brain Networks Using Graph Filtration and Gromov-Hausdorff Metric, Med Image Comput Assist Interv, 2011
21. Vin de Silva and Robert Ghrist, Homological Sensor Networks, Notices of the American Mathematical Society, Volume 54, Number 1, 2007
22. C. J. Carstens and K. J. Horadam, Persistent Homology of Collaboration Networks, Mathematical Problems in Engineering, Volume 2013, 2013
23. Joseph Minhow Chan, Gunnar Carlsson, and Raul Rabadan, Topology of Viral Evolution, PNAS, Vol. 110, No. 46, 2013
24. Florian T. Pokorny, Ken Goldberg, and Danica Kragic, Topological Trajectory Clustering with Relative Persistent Homology, IEEE International Conference on Robotics and Automation (ICRA), 2016
25. Jérémy Dubut, Eric Goubault, and Jean Goubault-Larrecq, Natural Homology, in Automata, Languages, and Programming: 42nd International Colloquium, ICALP 2015, Kyoto, Japan, Proceedings, Part II, 2015
26. Bistra Dilkina, Carla P. Gomes, and Ashish Sabharwal, The Impact of Network Topology on Pure Nash Equilibria in Graphical Games, Association for the Advancement of Artificial Intelligence, 2007
27. Kelin Xia and Guo-Wei Wei, Persistent Homology Analysis of Protein Structure, Flexibility and Folding, International Journal for Numerical Methods in Biomedical Engineering, 2014
28. Yuliy Baryshnikov and Robert Ghrist, Target Enumeration via Euler Characteristic Integrals, SIAM J. Appl. Math. 70, 825, 2009
29. Zixuan Cang and Guo-Wei Wei, TopologyNet: Topology Based Deep Convolutional and Multi-task Neural Networks for Biomolecular Property Predictions, PLOS Computational Biology, July 2017
30. Christoph Hofer, Roland Kwitt, and Marc Niethammer, Deep Learning with Topological Signatures, arXiv:1707.04041v1, July 2017

31. Stefan Thurner, Rudolf Hanel, and Stefan Pichler, Risk Trading, Network Topology and Banking Regulation, Quantitative Finance, Vol. 3, Issue 4, 2003
32. Michael Wolf, An Intelligent Artificial Player for the Game of Risk, Department of Computer Science, Darmstadt University of Technology, 2005
33. Keith Sanborn, Postcard from Berezina, in Napoleon, How to Make War, edited by Yann Cloarec, translated by Keith Sanborn, Ediciones La Calavera, New York, pp. 103-104, 1998
34. Alan M. Turing, Computing Machinery and Intelligence, Mind, 59, 433-460, 1950
35. Alex Alaniz, Method and System for Detecting Correlation in Data Sets, United States Patent 8,229,866, US 20110060703 A1, 24 July 2012
36. Pengfei Zhang, Huitao Shen, and Hui Zhai, Machine Learning Topological Invariants with Neural Networks, arXiv:1708.09401v2, 7 September 2017
37. Juan Carrasquilla, Neural Networks Identify Topological Phases, American Physical Society, Physics, 22 May 2017
38. Aviv Segev and Avigdor Gal, Putting Things in Context: A Topological Approach to Mapping Contexts and Ontologies, American Association for Artificial Intelligence, 2005
39. Anonymous, Deep Function Machines: Generalized Neural Networks for Topological Layer Expression, International Conference on Learning Representations (ICLR) 2018 Conference Blind Submission, 3 Nov 2017
40. Robert Coram, Boyd: The Fighter Pilot Who Changed the Art of War, Back Bay Books, 2002
41. Luonan Chen, Rui-Sheng Wang, and Xiang-Sun Zhang, Biomolecular Networks: Methods and Applications in Systems Biology, Wiley, 2009
42. Jeff Hawkins, On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines, St. Martin's Griffin, 2005
43. Nicholas Watters, Andrea Tacchetti, Théophane Weber, Razvan Pascanu, Peter Battaglia, and Daniel Zoran, Visual Interaction Networks, arXiv:1706.01433v1, 5 Jun 2017
44. Adam Santoro, David Raposo, David G. T. Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap, A Simple Neural Network Module for Relational Reasoning, arXiv:1706.01427v1, 5 Jun 2017
45. Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum, Human-level Concept Learning through Probabilistic Program Induction, Science, 350:1332–1338, 2015
46. Chetan Nayak, Steven H. Simon, Ady Stern, Michael Freedman, and Sankar Das Sarma, Non-Abelian Anyons and Topological Quantum Computation, Reviews of Modern Physics, 80, 1083, September 2008
47. Denny Britz, Attention and Memory in Deep Learning and NLP, http://www.wildml.com/2016/01/attention-and-memory-in-deep-learning-and-nlp/, 2016

48. Omid Askari Sichani and Mahdi Jalili, Inference of Hidden Social Power Through Opinion Formation in Complex Networks, IEEE Transactions on Network Science and Engineering, Vol. 4, No. 3, 2017
49. Pedro Domingos, A Few Useful Things to Know about Machine Learning, Communications of the ACM, 2012
50. Takashi Ichinomiya, Ippei Obayashi, and Yasuaki Hiraoka, Persistent Homology Analysis of Craze Formation, Phys. Rev. E 95, 012504, 2017
51. Eric Babson and Itai Benjamini, Cut Sets and Normed Cohomology with Applications to Percolation, Proceedings of the American Mathematical Society, Volume 127, Number 2, 1999
52. Robert Gilmore, Catastrophe Theory for Scientists and Engineers, John Wiley & Sons, 1981
53. M. K. Stephen Yeung, Jesper Tegnér, and James J. Collins, Reverse Engineering Gene Networks Using Singular Value Decomposition and Robust Regression, PNAS, 2002
54. D. Yogeshwaran and Robert J. Adler, On the Topology of Random Complexes Built over Stationary Point Processes, arXiv:1211.0061v3, 27 October 2015
55. Shuai Xiao, Junchi Yan, Stephen M. Chu, Xiaokang Yang, and Hongyuan Zha, Modeling the Intensity Function of Point Process via Recurrent Neural Networks, arXiv:1705.08982v1, 24 May 2017
56. Hongyuan Mei and Jason Eisner, The Neural Hawkes Process: A Neurally Self-Modulating Multivariate Point Process, arXiv:1612.09328v2, 23 May 2017