
Morphogenetic Patterns and the Theory of Deep Learning

Among the many recent advances in deep learning (DL), one set of applications stands out: Generative Adversarial Networks (GANs) [1], a framework that uses existing data to generate new, lifelike data [2]. Our goal is to propose a model of morphogenesis derived from the theory and mechanisms behind deep neural networks. Relying upon the principle of computational equivalence [3], we can potentially uncover the universal elements of self-organization in a morphogenetic system [4]. Attempts at this using tools such as Cellular Automata (CAs) have yielded a means to generate patterns [5], but largely in a qualitative fashion. Connecting Neural Networks (NNs) to pattern-generating cellular automata [6] allows us to identify and classify the patterns that are generated. More generally, machine learning methods enable image segmentation [7] and other types of data-driven discovery [8]. There is also a role for such methods in simulating development and other life-like phenomena [9, 10]. Deep generative networks thus hold much potential for understanding how these patterns form and self-organize across a wide range of naturalistic contexts.
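To make the CA side of this concrete, the sketch below runs a standard 1-D elementary cellular automaton (Wolfram rule 110), where a purely local update rule produces a complex global pattern. It is an illustrative example of qualitative pattern generation by CAs, not a model taken from the cited works.

```python
# Minimal 1-D elementary cellular automaton, illustrating how a simple
# local rule generates complex global patterns (a sketch, not a model
# from the cited literature).

def step(cells, rule=110):
    """Apply one synchronous update; 'rule' is the Wolfram rule number."""
    n = len(cells)
    table = [(rule >> i) & 1 for i in range(8)]  # rule bit i = output for neighborhood i
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        out.append(table[(left << 2) | (center << 1) | right])
    return out

def run(width=64, steps=32, rule=110):
    cells = [0] * width
    cells[width // 2] = 1  # single seed cell
    history = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        history.append(cells)
    return history

if __name__ == "__main__":
    for row in run(width=64, steps=16):
        print("".join("#" if c else "." for c in row))
```

The space-time diagram printed here is exactly the kind of qualitative output that, in the proposed approach, a neural network would learn to recognize and classify.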

To see how deep learning architectures might advance this agenda, we will focus on three aspects of the theory behind deep learning networks: feature discovery, network depth, and network homogeneity. In a GAN, feature discovery proceeds in part through an adversarial approach: a generator and a discriminator are trained against one another, which yields greater recognition ability than training on random or irrelevant images. Aside from features, NNs provide us with both circuits (network subgraphs that serve as a set of solutions) and universality (features and circuits that generalize across different systems and contexts) [11]. Because the GAN framework uses deep neural networks, it can, unlike CAs and standard NNs, exploit many layers of processing, specifying the generation and recognition of classes of specific features within a complex pattern. The depth of a DL network allows for so-called cheap learning [12], which can be analogized to mechanisms of facilitated variation [13] and to biological adaptation more generally. Finally, future DL models may allow for network heterogeneity, which has been identified as a factor in making artificial networks more like intelligent biological systems [14]. In conclusion, we will consider how deep networks might be incorporated into embodied agents (morphogenetic agents) [15] that couple pattern generation with perception.
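The adversarial loop at the heart of the GAN framework can be sketched in a few lines. Below, a one-parameter-pair generator G(z) = a*z + b tries to match samples from N(3, 1), while a logistic discriminator D(x) = sigmoid(w*x + c) tries to tell real from generated samples; the gradients are derived by hand from the standard GAN losses (non-saturating variant for the generator). The 1-D data, the linear/logistic parameterizations, and all hyperparameters are our own illustrative assumptions, not the models from the cited works.

```python
import numpy as np

# Toy adversarial training on 1-D data (an illustrative sketch only).
# Generator G(z) = a*z + b; discriminator D(x) = sigmoid(w*x + c).

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_toy_gan(steps=2000, batch=128, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    a, b = 1.0, 0.0   # generator parameters
    w, c = 0.1, 0.0   # discriminator parameters
    for _ in range(steps):
        xr = rng.normal(3.0, 1.0, batch)   # real samples
        z = rng.normal(0.0, 1.0, batch)    # latent noise
        xf = a * z + b                     # generated (fake) samples

        # Discriminator step: minimize -log D(xr) - log(1 - D(xf)).
        dr = sigmoid(w * xr + c)
        df = sigmoid(w * xf + c)
        grad_w = np.mean((dr - 1.0) * xr) + np.mean(df * xf)
        grad_c = np.mean(dr - 1.0) + np.mean(df)
        w -= lr * grad_w
        c -= lr * grad_c

        # Generator step (non-saturating loss): minimize -log D(G(z)).
        df = sigmoid(w * xf + c)
        grad_a = np.mean((df - 1.0) * w * z)
        grad_b = np.mean((df - 1.0) * w)
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b, w, c

if __name__ == "__main__":
    a, b, w, c = train_toy_gan()
    print(f"generator: G(z) = {a:.2f}*z + {b:.2f} (target mean 3.0)")
```

The point of the sketch is the structure of the loop: the discriminator performs feature discovery (here, a single threshold on x), and the generator shifts its output distribution toward whatever the discriminator currently accepts as real.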

References:
[1] Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial nets. arXiv, 1406.2661.

[2] Deb, Mayukh (2020). Torch Dreams. link and Deb, Mainak (2020). DigitGAN. link

[3] Zenil, H. (2012). Irreducibility and Computational Equivalence: 10 years after Wolfram's "A New Kind of Science". Springer, Berlin.

[4] Osokin, A., Chessel, A., Carazo Salas, R.E., and Vaggi, F. (2017). Generative Adversarial Networks for Biological Image Synthesis. arXiv, 1708.04692.

[5] Ishida, T. (2020). Emergence of Turing Patterns in a Simple Cellular Automata-Like Model via Exchange of Integer Values between Adjacent Cells. Discrete Dynamics in Nature and Society, 2308074. doi:10.1155/2020/2308074.

[6] Portegys, T., Pascualy, G., Gordon, R., McGrew, S., and Alicea, B. (2016). Morphozoic: cellular automata with nested neighborhoods as a metamorphic representation of morphogenesis. In “Multi-Agent Based Simulations Applied to Biological and Environmental Systems“. IGI Press, Hershey, PA.

[7] Villoutreix, P. (2021). What machine learning can do for developmental biology. Development, 148, dev.188474.

[8] Feltes, B.C., Grisci, B.I., Poloni, J.dF., and Dorn, M. (2018). Perspectives and applications of machine learning for evolutionary developmental biology. Molecular Omics, 14(5), 289-306. doi:10.1039/c8mo00111a.

[9] Levin, M. (2018). What Bodies Think About: Bioelectric Computation Beyond the Nervous System as Inspiration for New Machine Learning Platforms. Proceedings of Neural Information Processing, 31.

[10] Mordvintsev, A., Randazzo, E., Niklasson, E., and Levin, M. (2020). Growing Neural Cellular Automata: Differentiable Model of Morphogenesis. Distill, doi:10.23915/distill.00023.

[11] Olah, C., Cammarata, N., Schubert, L., Goh, G., Petrov, M., and Carter, S. (2020). Zoom In: An Introduction to Circuits. Distill, doi:10.23915/distill.00024.001.

[12] Lin, H.W., Tegmark, M., and Rolnick, D. (2017). Why Does Deep and Cheap Learning Work So Well? Journal of Statistical Physics, 168, 1223–1247.

[13] Parter, M., Kashtan, N., and Alon, U. (2008). Facilitated Variation: How Evolution Learns from Past Environments To Generalize to New Environments. PLoS Computational Biology, 4(11), e1000206.

[14] Marblestone, A.H., Wayne, G., and Kording, K. (2016). Towards an Integration of Deep Learning and Neuroscience. Frontiers in Computational Neuroscience, 10, 94.

[15] Alicea, B. (2020). Observer-dependent Models. Figshare, https://doi.org/10.6084/m9.figshare.13340306.v6.