  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
581

Uso do potencial evocado auditivo de tronco encefálico como indicador de integridade neural em neonatos e lactentes submetidos à cirurgia cardíaca de alta complexidade / The use of brainstem auditory evoked response as an indicator of neural integrity in newborns and infants submitted to high-complexity cardiac surgery

Ramos, Elaine Cristina 07 August 2018 (has links)
Objective: To determine changes in the waves of the brainstem evoked response audiometry (BERA) as an indicator of neural integrity during events of cerebral hypoxia-ischemia in children undergoing cardiac surgery with cardiopulmonary bypass (CPB). Method: Nine pediatric patients aged 0 to 2 years took part in this study, all submitted to highly complex cardiac surgery at the Hospital das Clínicas of the Medical School of Ribeirão Preto. The electrophysiological assessment of hearing was performed in the surgical center of the same hospital at defined moments of the cardiac surgery: sedation, cardiopulmonary bypass (CPB) with hypothermia, and the end of surgery. The brainstem auditory evoked response was recorded using Nicolet Endeavor CR equipment connected to a laptop, with two foam probes and four needle electrodes positioned on the right and left mastoids, cranial vertex, and forehead. Results: The mean age of the children was 10 months. Of these 9 children, 3 were female and 6 were male, and their mean weight was 6.0 kg. The pathologies and surgical procedures were: correction of aortic and tricuspid valve dysfunction by valvuloplasty, correction of total anomalous pulmonary venous drainage, correction of cor triatriatum, Glenn procedure, implantation of a dual-chamber intracavitary cardiac pacemaker, correction of mitral valve dysfunction by valvuloplasty, closure of interatrial communication, closure of interventricular communication, banding, and correction of persistent ductus arteriosus. Two of these children have Down syndrome. Changes were observed in the latencies of waves I, III and V and their interpeak intervals in the right and left ears when baseline values were compared with CPB and hypothermia, and when the beginning of surgery was compared with its end, but the differences were not statistically significant (p > 0.05). Conclusion: No alterations were found in the absolute latencies or in the interpeak interval latencies of the BERA under hypothermia and cardiopulmonary bypass.
582

Alpha Matting via Residual Convolutional Grid Network

Zhang, Huizhen 23 July 2019 (has links)
Alpha matting is an important topic in computer vision. It has various applications, such as virtual reality, digital image and video editing, and image synthesis. Conventional approaches to alpha matting perform unsatisfactorily when they encounter complicated backgrounds and foregrounds, and they struggle to extract an accurate alpha matte when the foreground objects are transparent, semi-transparent, perforated, or hairy. Fortunately, the rapid development of deep learning techniques brings new possibilities for solving alpha matting problems. In this thesis, we propose a residual convolutional grid network for alpha matting, which is based on convolutional neural networks (CNNs) and can learn the alpha matte directly from the original image and its trimap. Our grid network consists of horizontal residual convolutional computation blocks and vertical upsampling/downsampling convolutional computation blocks. By choosing different paths along which to pass information, our network can both retain the rich details of the image and extract its high-level abstract semantic information. The experimental results demonstrate that our method can solve matting problems that have plagued conventional matting methods for decades, and it outperforms nearly all other state-of-the-art matting methods in quality and visual evaluation. The only matting method that performs slightly better than ours is the current best matting method; however, it requires three times as many trainable parameters as ours. Hence, our matting method is the best overall considering computational complexity, memory usage, and matting performance.
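All matting methods, learned or conventional, try to invert the standard compositing model I = αF + (1 − α)B. A minimal NumPy sketch (illustrative constant colors, not the thesis's network) shows why the problem is easy when foreground F and background B are known, and ill-posed when, as in practice, they are not:

```python
import numpy as np

# Compositing model: each pixel of image I is an alpha-weighted blend of
# a foreground F and a background B:  I = alpha*F + (1 - alpha)*B.
h, w = 4, 4
F = np.full((h, w, 3), 0.8)                        # illustrative foreground color
B = np.full((h, w, 3), 0.2)                        # illustrative background color
alpha = np.linspace(0, 1, h * w).reshape(h, w, 1)  # ground-truth matte in [0, 1]

I = alpha * F + (1 - alpha) * B                    # observed composite

# With F and B known, the matte is recoverable in closed form per channel;
# in real matting F and B are unknown per pixel, which is why learned
# priors from image + trimap help.
recovered = ((I - B) / (F - B)).mean(axis=2, keepdims=True)
print(np.allclose(recovered, alpha))               # True
```

The trimap mentioned in the abstract narrows this ambiguity by marking pixels as definitely foreground, definitely background, or unknown, so the network only has to resolve α in the unknown band.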
583

Neural Dynamics and the Geometry of Population Activity

Russo, Abigail Anita January 2019 (has links)
A growing body of research indicates that much of the brain’s computation is invisible from the activity of individual neurons, but instead instantiated via population-level dynamics. According to this ‘dynamical systems hypothesis’, population-level neural activity evolves according to underlying dynamics that are shaped by network connectivity. While these dynamics are not directly observable in empirical data, they can be inferred by studying the structure of population trajectories. Quantification of this structure, the ‘trajectory geometry’, can then guide thinking on the underlying computation. Alternatively, modeling neural populations as dynamical systems can predict trajectory geometries appropriate for particular tasks. This approach of characterizing and interpreting trajectory geometry is providing new insights in many cortical areas, including regions involved in motor control and areas that mediate cognitive processes such as decision-making. In this thesis, I advance the characterization of population structure by introducing hypothesis-guided metrics for the quantification of trajectory geometry. These metrics, trajectory tangling in primary motor cortex and trajectory divergence in the Supplementary Motor Area, abstract away from task-specific solutions and toward underlying computations and network constraints that drive trajectory geometry. Primate motor cortex (M1) projects to spinal interneurons and motoneurons, suggesting that motor cortex activity may be dominated by muscle-like commands. Observations during reaching lend support to this view, but evidence remains ambiguous and much debated. To provide a different perspective, we employed a novel behavioral paradigm that facilitates comparison between time-evolving neural and muscle activity. We found that single motor cortex neurons displayed many muscle-like properties, but the structure of population activity was not muscle-like. 
Unlike muscle activity, neural activity was structured to avoid ‘trajectory tangling’: moments where similar activity patterns led to dissimilar future patterns. Avoidance of trajectory tangling was present across tasks and species. Network models revealed a potential reason for this consistent feature: low trajectory tangling confers noise robustness. We were able to predict motor cortex activity from muscle activity by leveraging the hypothesis that muscle-like commands are embedded in additional structure that yields low trajectory tangling. The Supplementary Motor Area (SMA) has been implicated in many higher-order aspects of motor control. Previous studies have demonstrated that SMA might track motor context. We propose that this computation necessitates that neural activity avoids ‘trajectory divergence’: moments where two similar neural states become dissimilar in the future. Indeed, we found that population activity in SMA, but not in M1, reliably avoided trajectory divergence, resulting in fundamentally different geometries: cyclical in M1 and helix-like in SMA. Analogous structure emerged in artificial networks trained without versus with context-related inputs. These findings reveal that the geometries of population activity in SMA and M1 are fundamentally different, with direct implications regarding what computations can be performed by each area. The characterization and statistical analysis of trajectory geometry promises to advance our understanding of neural network function by providing interpretable, cohesive explanations for observed population structure. Commonality between individuals and networks can be uncovered and more generic, task-invariant, fundamental aspects of neural response can be explored.
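The tangling idea described above has a natural operationalization: for each time point, compare how different the derivatives are against how different the states are, over all other time points. A hedged NumPy sketch on synthetic trajectories (the regularizer's scaling constant is an assumption, not the thesis's exact choice):

```python
import numpy as np

def tangling(X):
    """Trajectory tangling Q(t): large when similar states lead to
    dissimilar derivatives. X is a (T, N) array of states over time."""
    dX = np.gradient(X, axis=0)                    # finite-difference derivatives
    d_state = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    d_deriv = ((dX[:, None, :] - dX[None, :, :]) ** 2).sum(-1)
    eps = 0.1 * d_state.mean()                     # regularizer scaled to state spread (assumed)
    return (d_deriv / (d_state + eps)).max(axis=1)

t = np.linspace(0, 2 * np.pi, 100)
circle = np.c_[np.cos(t), np.sin(t)]     # smooth loop, never crosses itself
eight = np.c_[np.sin(2 * t), np.sin(t)]  # figure-eight, crosses itself at the origin

print(tangling(circle).max() < tangling(eight).max())  # True
```

At the figure-eight's self-crossing, nearly identical states have very different derivatives, so Q spikes there; the smooth circle stays low-tangled throughout, mirroring the cyclical, low-tangling geometry reported for M1.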
584

Deep learning and SVM methods for lung diseases detection and direction recognition

Li, Lei January 2018 (has links)
University of Macau / Faculty of Science and Technology. / Department of Computer and Information Science
585

Localization of chemical and electrical synapses in the retina

Unknown Date (has links)
The amphibian retina is commonly used as a model system for studying the function and mechanisms of the visual system in electrophysiology, since the neural structure and synaptic mechanisms of the amphibian retina are similar to those of higher vertebrate retinas. I determined the specific subtypes of receptors and channels that are involved in chemical and electrical synapses in the amphibian retina. My study indicates that the glycine receptor subunits GlyRα1, 3 and 4 and the glutamate receptor subunit GluR4 are present in bipolar and amacrine dendrites and axons to conduct chemical synapses in the retinal circuit. I also found that the gap junction channel pannexin 1a (panx1a) is present in cone-dominated On-bipolar cells and rod-dominated amacrine processes, possibly connecting the rod and cone pathways in the inner retina. In addition, panx1a may form hemichannels that pass ATP and Ca2+ signals. The findings of my study fill a gap in our knowledge about the subtypes of neurotransmitter receptors and gap junction channels conducting visual information in particular cell types and synaptic areas. / by Yufei Liu. / Thesis (M.S.)--Florida Atlantic University, 2011. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2011. Mode of access: World Wide Web.
586

Molecular Mechanisms Of CaV2.1 Expression and Functional Organization at the Presynaptic Terminal

Unknown Date (has links)
Neuronal circuit output is dependent on the embedded synapses' precise regulation of Ca2+-mediated release of neurotransmitter-filled synaptic vesicles (SVs) in response to action potential (AP) depolarizations. A key determinant of SV release is the specific expression, organization, and abundance of voltage-gated calcium channel (VGCC) subtypes at presynaptic active zones (AZs). In particular, the relative distance at which SVs are coupled to VGCCs at AZs results in two different modes of SV release that dramatically impact synapse release probability and, ultimately, neuronal circuit output: "Ca2+ microdomain" release, due to the cooperative action of many VGCCs loosely coupled to SVs, and "Ca2+ nanodomain" release, due to fewer VGCCs tightly coupled to SVs. VGCCs are multi-subunit complexes in which the pore-forming α1 subunit (CaV2.1) is the critical determinant of VGCC subtype kinetics, abundance, and organization at the AZ. Although in central synapses CaV2.2 and CaV2.1 mediate synchronous transmitter release, neurons express multiple VGCC subtypes with differential expression patterns between the cell body and the presynapse. The calyx of Held, a giant axosomatic glutamatergic presynaptic terminal that arises from the globular bushy cells (GBCs) in the cochlear nucleus, exclusively uses CaV2.1 VGCCs to support the early stages of auditory processing. Owing to its experimental accessibility, the calyx provides unparalleled opportunities to gain mechanistic insight into CaV2.1 expression, organization, and SV release modes at the presynaptic terminal. Although many molecules are implicated in mediating CaV2.1 expression and SV-to-VGCC coupling through multiple binding domains on the C-terminus of the CaV2.1 α1 subunit, the underlying fundamental molecular mechanisms remain poorly defined.
Here, using viral vector-mediated approaches in combination with CaV2.1 conditional knockout transgenic mice, we demonstrate that there are two independent pathways that control CaV2.1 expression and SV-to-VGCC coupling at the calyx of Held. The implications for the regulation of synaptic transmission at CNS synapses are discussed. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2016. / FAU Electronic Theses and Dissertations Collection
587

Leannet : uma arquitetura que utiliza o contexto da cena para melhorar o reconhecimento de objetos

Silva, Leandro Pereira da 27 March 2018 (has links)
Computer vision is the science that aims to give computers the capability of seeing the world around them. Among its tasks, object recognition intends to classify objects and to identify where each object is in a given image. As objects tend to occur in particular environments, their contextual association can be useful to improve the object recognition task. To exploit this contextual awareness, the proposed approach identifies the scene context separately from the object and fuses both sources of information to improve object detection. To do so, we propose a novel architecture composed of two convolutional neural networks running in parallel: one for object identification and the other for identifying the context in which the object is located. Finally, the information from the two-stream architecture is concatenated to perform the object classification. The evaluation is performed on the public PASCAL VOC 2007 and MS COCO datasets, comparing the performance of our approach with architectures that do not use scene context to classify objects. Results show that our approach raises in-context object scores and reduces out-of-context object scores.
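The late-fusion step described above, concatenating object-stream and context-stream features before classification, can be sketched minimally (all shapes and weights here are hypothetical, not the thesis architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
obj_feat = rng.standard_normal(256)   # hypothetical object-stream feature vector
ctx_feat = rng.standard_normal(128)   # hypothetical scene-context feature vector

# Two-stream late fusion: concatenate, then apply a linear classification head.
fused = np.concatenate([obj_feat, ctx_feat])
W = rng.standard_normal((20, fused.size)) * 0.01   # illustrative weights, 20 classes
logits = W @ fused
probs = np.exp(logits - logits.max())
probs /= probs.sum()                               # softmax over class scores

print(fused.shape)                                 # (384,)
```

The design choice is that context only enters at the classification head, so each stream can be trained on its own signal; earlier or learned fusion points are possible variants not covered by this sketch.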
588

Development of a fault location method based on fault induced transients in distribution networks with wind farm connections

Lout, Kapildev January 2015 (has links)
Electrical transmission and distribution networks are prone to short circuit faults since they span over long distances to deliver the electrical power from generating units to where the energy is required. These faults are usually caused by vegetation growing underneath bare overhead conductors, large birds short circuiting the phases, mechanical failure of pin-type insulators or even insulation failure of cables due to wear and tear, resulting in creepage current. Short circuit faults are highly undesirable for distribution network companies since they cause interruption of supply, thus affecting the reliability of their network, leading to a loss of revenue for the companies. Therefore, accurate offline fault location is required to quickly tackle the repair of permanent faults on the system so as to improve system reliability. Moreover, it also provides a tool to identify weak spots on the system following transient fault events such that these future potential sources of system failure can be checked during preventive maintenance. With these aims in mind, a novel fault location technique has been developed to accurately determine the location of short circuit faults in a distribution network consisting of feeders and spurs, using only the phase currents measured at the outgoing end of the feeder in the substation. These phase currents are analysed using the Discrete Wavelet Transform to identify distinct features for each type of fault. To achieve better accuracy and success, the scheme firstly uses these distinct features to train an Artificial Neural Network based algorithm to identify the type of fault on the system. Another Artificial Neural Network based algorithm dedicated to this type of fault then identifies the location of the fault on the feeder or spur. Finally, a series of Artificial Neural Network based algorithms estimate the distance to the point of fault along the feeder or spur. 
The impact of wind farm connections consisting of doubly-fed induction generators and permanent magnet synchronous generators on the accuracy of the developed algorithms has also been investigated using detailed models of these wind turbine generator types in Simulink. The results obtained showed that the developed scheme allows the accurate location of the short circuit faults in an active distribution network. Further sensitivity tests such as the change in fault inception angle, fault impedance, line length, wind farm capacity, network configuration and white noise confirm the robustness of the novel fault location technique in active distribution networks.
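The feature-extraction stage of the scheme above can be illustrated with the simplest member of the Discrete Wavelet Transform family, a one-level Haar decomposition; the signal, sampling, fault instant, and energy feature here are illustrative assumptions, not the thesis's actual configuration:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: low-pass approximation and
    high-pass detail coefficients of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

# A clean 50 Hz phase current vs. the same current with a fault-induced transient.
t = np.linspace(0, 0.1, 512, endpoint=False)
clean = np.sin(2 * np.pi * 50 * t)
faulty = clean.copy()
faulty[300] += 2.0                       # hypothetical impulsive transient at the fault

# Detail-coefficient energy is a typical DWT feature: transients inflate it,
# giving the downstream classifiers (ANNs, in the thesis) a distinct signature.
_, d_clean = haar_dwt(clean)
_, d_fault = haar_dwt(faulty)
print((d_fault ** 2).sum() > (d_clean ** 2).sum())   # True
```

In the full scheme such features, computed per phase current, would feed the cascade of networks: one for fault type, one for the faulted feeder or spur, and one for distance to the fault.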
589

The Induced topology of local minima with applications to artificial neural networks.

January 1992 (has links)
by Yun Chung Chu. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1992. / Includes bibliographical references (leaves 163-[165]). / Chapter 1 --- Background --- p.1 / Chapter 1.1 --- Introduction --- p.1 / Chapter 1.2 --- Basic notations --- p.4 / Chapter 1.3 --- Object of study --- p.6 / Chapter 2 --- Review of Kohonen's algorithm --- p.22 / Chapter 2.1 --- General form of Kohonen's algorithm --- p.22 / Chapter 2.2 --- r-neighborhood by matrix --- p.25 / Chapter 2.3 --- Examples --- p.28 / Chapter 3 --- Local minima --- p.34 / Chapter 3.1 --- Theory of local minima --- p.35 / Chapter 3.2 --- Minimizing the number of local minima --- p.40 / Chapter 3.3 --- Detecting the success or failure of Kohonen's algorithm --- p.48 / Chapter 3.4 --- Local minima for different graph structures --- p.59 / Chapter 3.5 --- Formulation by geodesic distance --- p.65 / Chapter 3.6 --- Local minima and Voronoi regions --- p.67 / Chapter 4 --- Induced graph --- p.70 / Chapter 4.1 --- Formalism --- p.71 / Chapter 4.2 --- Practical way to find the induced graph --- p.88 / Chapter 4.3 --- Some examples --- p.95 / Chapter 5 --- Given mapping vs induced mapping --- p.102 / Chapter 5.1 --- Comparison between given mapping and induced mapping --- p.102 / Chapter 5.2 --- Matching the induced mapping to given mapping --- p.115 / Chapter 6 --- A special topic: application to determination of dimension --- p.131 / Chapter 6.1 --- Theory --- p.133 / Chapter 6.2 --- Advanced examples --- p.151 / Chapter 6.3 --- Special applications --- p.156 / Chapter 7 --- Conclusion --- p.159 / Bibliography --- p.163
590

A specification-based design tool for artificial neural networks.

January 1992 (has links)
Wong Wai. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1992. / Includes bibliographical references (leaves 78-80). / Chapter 1. --- Introduction --- p.1 / Chapter 1.1. --- Specification Environment --- p.2 / Chapter 1.2. --- Specification Analysis --- p.2 / Chapter 1.3. --- Outline --- p.3 / Chapter 2. --- Survey --- p.4 / Chapter 2.1. --- Concurrence Specification --- p.4 / Chapter 2.1.1. --- Sequential Approach --- p.5 / Chapter 2.1.2. --- Mapping onto Concurrent Architecture --- p.6 / Chapter 2.1.3. --- Automatic Concurrence Introduction --- p.7 / Chapter 2.2. --- Specification Analysis --- p.8 / Chapter 2.2.1. --- Motivation --- p.8 / Chapter 2.2.2. --- Cyclic Dependency --- p.8 / Chapter 3. --- The Design Tool --- p.11 / Chapter 3.1. --- Specification Environment --- p.11 / Chapter 3.1.1. --- Framework --- p.11 / Chapter 3.1.1.1. --- Formal Neurons --- p.12 / Chapter 3.1.1.2. --- Configuration --- p.12 / Chapter 3.1.1.3. --- Control Neuron --- p.13 / Chapter 3.1.2. --- Dataflow Specification --- p.14 / Chapter 3.1.2.1. --- Absence of Control Information --- p.14 / Chapter 3.1.2.2. --- Single-Valued Variables & Explicit Time Indices --- p.14 / Chapter 3.1.2.3. --- Explicit Notations --- p.15 / Chapter 3.1.3. --- User Interface --- p.15 / Chapter 3.2. --- Specification Analysis --- p.16 / Chapter 3.2.1. --- Data Dependency Analysis --- p.16 / Chapter 3.2.2. --- Attribute Analysis --- p.16 / Chapter 4. --- BP-Net Specification --- p.18 / Chapter 4.1. --- BP-Net Paradigm --- p.18 / Chapter 4.1.1. --- Neurons of a BP-Net --- p.18 / Chapter 4.1.2. --- Configuration of BP-Net --- p.20 / Chapter 4.2. --- Constant Declarations --- p.20 / Chapter 4.3. --- Formal Neuron Specification --- p.21 / Chapter 4.3.1. --- Mapping the Paradigm --- p.22 / Chapter 4.3.1.1. --- Mapping Symbols onto Parameter Names --- p.22 / Chapter 4.3.1.2. --- Mapping Neuron Equations onto Internal Functions --- p.22 / Chapter 4.3.2. --- Form Entries --- p.23 / Chapter 4.3.2.1. 
--- Neuron Type Entry --- p.23 / Chapter 4.3.2.2. --- Input, Output and Internal Parameter Entries --- p.23 / Chapter 4.3.2.3. --- Initial Value Entry --- p.25 / Chapter 4.3.2.4. --- Internal Function Entry --- p.25 / Chapter 4.4. --- Configuration Specification --- p.28 / Chapter 4.4.1. --- Form Entries --- p.29 / Chapter 4.4.1.1. --- Neuron Label Entry --- p.29 / Chapter 4.4.1.2. --- Neuron Character Entry --- p.30 / Chapter 4.4.1.3. --- Connection Pattern Entry --- p.31 / Chapter 4.4.2. --- Characteristics of the Syntax --- p.33 / Chapter 4.5. --- Control Neuron Specification --- p.34 / Chapter 4.5.1. --- Form Entries --- p.35 / Chapter 4.5.1.1. --- Global Input, Output, Parameter & Initial Value Entries --- p.35 / Chapter 4.5.1.2. --- Input & Output File Entries --- p.36 / Chapter 4.5.1.3. --- Global Function Entry --- p.36 / Chapter 5. --- Data Dependency Analysis --- p.40 / Chapter 5.1. --- Graph Construction --- p.41 / Chapter 5.1.1. --- Simplification and Normalization --- p.41 / Chapter 5.1.1.1. --- Removing Non-Essential Information --- p.41 / Chapter 5.1.1.2. --- Removing File Record Parameters --- p.42 / Chapter 5.1.1.3. --- Rearranging Temporal Offset --- p.42 / Chapter 5.1.1.4. --- Conservation of Temporal Relationship --- p.43 / Chapter 5.1.1.5. --- Zero/Negative Offset for Determining Parameters --- p.43 / Chapter 5.1.2. --- Internal Dependency Graphs (IDGs) --- p.43 / Chapter 5.1.3. --- IDG of Control Neuron (CnIDG) --- p.45 / Chapter 5.1.4. --- Global Dependency Graphs (GDGs) --- p.45 / Chapter 5.2. --- Cycle Detection --- p.48 / Chapter 5.2.1. --- BP-Net --- p.48 / Chapter 5.2.2. --- Other Examples --- p.49 / Chapter 5.2.2.1. --- The Perceptron --- p.50 / Chapter 5.2.2.2. --- The Boltzmann Machine --- p.51 / Chapter 5.2.3. --- Number of Cycles --- p.52 / Chapter 5.2.3.1. --- Different Number of Layers --- p.52 / Chapter 5.2.3.2. --- Different Network Types --- p.52 / Chapter 5.2.4. --- Cycle Length --- p.53 / Chapter 5.2.4.1.
--- Different Number of Layers --- p.53 / Chapter 5.2.4.2. --- Comparison Among Different Networks --- p.53 / Chapter 5.2.5. --- Difficulties in Analysis --- p.53 / Chapter 5.3. --- Dependency Cycle Analysis --- p.54 / Chapter 5.3.1. --- Temporal Index Analysis --- p.54 / Chapter 5.3.2. --- Non-Temporal Index Analysis --- p.55 / Chapter 5.3.2.1. --- A Simple Example --- p.55 / Chapter 5.3.2.2. --- Single Parameter --- p.56 / Chapter 5.3.2.3. --- Multiple Parameters --- p.57 / Chapter 5.3.3. --- Combined Method --- p.58 / Chapter 5.3.4. --- Scheduling --- p.58 / Chapter 5.3.4.1. --- Algorithm --- p.59 / Chapter 5.3.4.2. --- Schedule for the BP-Net --- p.59 / Chapter 5.4. --- Symmetry in Graph Construction --- p.60 / Chapter 5.4.1. --- Basic Approach --- p.60 / Chapter 5.4.2. --- Construction of the BP-Net GDG --- p.61 / Chapter 5.4.3. --- Limitation --- p.63 / Chapter 6. --- Attribute Analysis --- p.64 / Chapter 6.1. --- Parameter Analysis --- p.64 / Chapter 6.1.1. --- Internal Dependency Graphs (IDGs) --- p.65 / Chapter 6.1.1.1. --- Correct Properties of Parameters in IDGs --- p.65 / Chapter 6.1.1.2. --- Example --- p.65 / Chapter 6.1.2. --- Combined Internal Dependency Graphs (CIDG) --- p.66 / Chapter 6.1.2.1. --- Tests on Parameters of CIDG --- p.66 / Chapter 6.1.2.2. --- Example --- p.67 / Chapter 6.1.3. --- Finalized Neuron Obtained --- p.67 / Chapter 6.1.4. --- CIDG of the BP-Net --- p.68 / Chapter 6.2. --- Constraint Checking --- p.68 / Chapter 6.2.1. --- Syntactic, Semantic and Simple Checkings --- p.68 / Chapter 6.2.1.1. --- The Syntactic & Semantic Techniques --- p.68 / Chapter 6.2.1.2. --- Simple Matching --- p.70 / Chapter 6.2.2. --- Constraints --- p.71 / Chapter 6.2.2.1. --- Constraints on Formal Neuron --- p.71 / Chapter 6.2.2.2. --- Constraints on Configuration --- p.72 / Chapter 6.2.2.3. --- Constraints on Control Neuron --- p.73 / Chapter 6.3. --- Complete Checking Procedure --- p.73 / Chapter 7. --- Conclusions --- p.75 / Chapter 7.1.
--- Limitations --- p.76 / Chapter 7.1.1. --- Exclusive Conditional Dependency Cycles --- p.76 / Chapter 7.1.2. --- Maximum Parallelism --- p.77 / Reference --- p.78 / Appendix --- p.1 / Chapter I. --- Form Syntax --- p.1 / Chapter A. --- Syntax Conventions --- p.1 / Chapter B. --- Form Definition --- p.1 / Chapter 1. --- Form Structure --- p.1 / Chapter 2. --- Constant Declaration --- p.1 / Chapter 3. --- Formal Neuron Declaration --- p.1 / Chapter 4. --- Configuration Declaration --- p.2 / Chapter 5. --- Control Neuron --- p.2 / Chapter 6. --- Supplementary Definition --- p.3 / Chapter II. --- Algorithms --- p.4 / Chapter III. --- Deadlock & Dependency Cycles --- p.14 / Chapter A. --- Deadlock Prevention --- p.14 / Chapter 1. --- Necessary Conditions for Deadlock --- p.14 / Chapter 2. --- Resource Allocation Graphs --- p.15 / Chapter 3. --- Cycles and Blocked Requests --- p.15 / Chapter B. --- Deadlock in ANN Systems --- p.16 / Chapter 1. --- Shared resources --- p.16 / Chapter 2. --- Presence of the Necessary Conditions for Deadlocks --- p.16 / Chapter 3. --- Operation Constraint for Communication --- p.16 / Chapter 4. --- Checkings Required --- p.17 / Chapter C. --- Data Dependency Graphs --- p.17 / Chapter 1. --- Simplifying Resource Allocation Graphs --- p.17 / Chapter 2. --- Expanding into Parameter Level --- p.18 / Chapter 3. --- Freezing the Request Edges --- p.18 / Chapter 4. --- Reversing the Edge Directions --- p.18 / Chapter 5. --- Mutual Dependency Cycles --- p.18 / Chapter IV. --- Case Studies --- p.19 / Chapter A. --- BP-Net --- p.19 / Chapter 1. --- Specification Forms --- p.19 / Chapter 2. --- Results After Simple Checkings --- p.21 / Chapter 3. --- Internal Dependency Graphs Construction --- p.21 / Chapter 4. --- Results From Parameter Analysis --- p.21 / Chapter 5. --- Global Dependency Graphs Construction --- p.21 / Chapter 6. --- Cycles Detection --- p.21 / Chapter 7. --- Time Subscript Analysis --- p.21 / Chapter 8. 
--- Subscript Analysis --- p.21 / Chapter 9. --- Scheduling --- p.21 / Chapter B. --- Perceptron --- p.21 / Chapter 1. --- Specification Forms --- p.22 / Chapter 2. --- Results After Simple Checkings --- p.24 / Chapter 3. --- Internal Dependency Graphs Construction --- p.24 / Chapter 4. --- Results From Parameter Analysis --- p.25 / Chapter 5. --- Global Dependency Graph Construction --- p.25 / Chapter 6. --- Cycles Detection --- p.25 / Chapter 7. --- Time Subscript Analysis --- p.25 / Chapter 8. --- Subscript Analysis --- p.25 / Chapter 9. --- Scheduling --- p.25 / Chapter C. --- Boltzmann Machine --- p.26 / Chapter 1. --- Specification Forms --- p.26 / Chapter 2. --- Results After Simple Checkings --- p.35 / Chapter 3. --- Graphs Construction --- p.35 / Chapter 4. --- Results From Parameter Analysis --- p.36 / Chapter 5. --- Global Dependency Graphs Construction --- p.36 / Chapter 6. --- Cycle Detection --- p.36 / Chapter 7. --- Time Subscript Analysis --- p.36 / Chapter 8. --- Subscript Analysis --- p.36 / Chapter 9. --- Scheduling --- p.36
