481

Neural Bursting Activity Mediates Subtype-Specific Neural Regeneration by an L-type Calcium Channel

Ruppell, Kendra Takle 02 April 2019 (has links)
Axons are injured after stroke, spinal cord injury, or neurodegenerative diseases such as ALS. Most axons do not regenerate. A recent report suggests that not all neurons are poor regenerators; rather, a small subset can regenerate robustly. What intrinsic property allows these neurons, but not their neighbors, to regenerate remains a mystery. This subtype-specific regeneration has also been observed in Drosophila larval sensory neurons. We exploited this powerful genetic system to unravel the intrinsic mechanism of subtype-specific neuron regeneration. We found that neuronal bursting activity after axotomy correlates with regeneration ability. Furthermore, bursting activity is necessary for regeneration of a regenerative neuron subtype and sufficient for regeneration of a non-regenerative neuron subtype. This optogenetically induced regeneration depends on a bursting pattern, not simply an overall increase in activity. We conclude that neuronal bursting activity is an intrinsic mechanism of subtype-specific regeneration. We then discovered through a reverse genetic screen that an L-type voltage-gated calcium channel (VGCC) promotes neuronal bursting and subsequent regeneration. This VGCC is highly expressed in the regenerative neuron and weakly expressed in the non-regenerative neuron, suggesting that VGCC expression level is the molecular mechanism of subtype-specific neuron regeneration. Together, our findings identify a cellular and molecular intrinsic mechanism of subtype-specific regeneration, explaining why some neurons are able to regenerate while the majority do not. Perhaps VGCC activation or modulation of neuronal activity patterns could be used therapeutically for patients with nerve injury.
482

Efficient image based localization using machine learning techniques

Elmougi, Ahmed 23 April 2021 (has links)
Localization is critical for the self-awareness of any autonomous system and is an important part of the autonomous system stack, which consists of many phases including sensing, perceiving, planning, and control. In the sensing phase, data from onboard sensors are collected, preprocessed, and passed to the next phase. The perceiving phase is responsible for self-awareness, or localization, and situational awareness, which includes multi-object detection and scene understanding. Once the autonomous system is aware of where it is and what is around it, it can use this knowledge to plan its path and send control commands to pursue that path. In this proposal, we focus on the localization part of the autonomous stack using camera images. We approach the localization problem from different perspectives, including single images and videos. Starting with single-image pose estimation, our goal is to propose systems that not only have good localization accuracy but also low space and time complexity. First, we propose SurfCNN, a low-cost indoor localization system that uses SURF descriptors instead of the original images to reduce the complexity of training convolutional neural networks (CNNs) for indoor localization. Given a single input image, the strongest SURF feature descriptors are fed to 5 convolutional layers to find the image's absolute position and orientation in an arbitrary reference frame. The proposed system achieves performance comparable to the state of the art using only 300 features, without the need for the full image or complex neural network architectures. Next, we propose SURF-LSTM, an extension of the idea of using SURF descriptors instead of the original images. However, instead of the CNN used in SurfCNN, we use a long short-term memory (LSTM) network, a type of recurrent neural network (RNN), to extract the sequential relations between SURF descriptors.
Using SURF-LSTM, we need only 50 features to reach results comparable to or better than SurfCNN, which needs 300 features, and other works that use full images with large neural networks. In the next research phase, instead of using SURF descriptors as image features to reduce training complexity, we study the effect of using features extracted from other CNN models pretrained on other image tasks, such as image classification, without further training or fine-tuning. To learn the pose from pretrained features, graph neural networks (GNNs) are adopted to solve the single-image localization problem (Pose-GNN), using these feature representations either as features of nodes in a graph (image as a node) or converted into a graph (image as a graph). The proposed models outperform the state-of-the-art methods on an indoor localization dataset and have comparable performance for outdoor scenes. In the final stage of the single-image pose estimation research, we study whether we can achieve good localization results without training a complex neural network. We propose Linear-PoseNet, which achieves results similar to other neural-network-based methods by training a single linear regression layer on image features from a pretrained ResNet50 in less than one second on a CPU. Moreover, for outdoor scenes, we propose Dense-PoseNet, which has only 3 fully connected layers, trains in a few minutes, and reaches performance comparable to other, more complex methods. The second localization perspective is to find the relative poses between images in a video instead of absolute poses. We extend the idea used in the SurfCNN and SURF-LSTM systems and use SURF descriptors as the feature representation of the images in the video. Two systems are proposed to find the relative poses between images in the video, using a 3D-CNN and a 2D-CNN-RNN. We show that using the 3D-CNN is better than the combination of CNN and RNN for relative pose estimation. / Graduate
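Fitting a single linear regression layer on frozen pretrained features, as in the Linear-PoseNet stage above, has a closed-form (ridge regression) solution. A minimal NumPy sketch, assuming the image features have already been extracted by a pretrained backbone (the feature matrix here is a stand-in, not the thesis's actual ResNet50 pipeline):

```python
import numpy as np

def fit_linear_pose(features, poses, reg=1e-3):
    """Closed-form ridge regression from image features (N, D) to camera
    poses (N, 7), e.g. xyz position plus an orientation quaternion."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # append bias column
    A = X.T @ X + reg * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ poses)                      # weights (D + 1, 7)

def predict_pose(W, features):
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    return X @ W
```

Because the only trainable part is one linear solve, "training" takes well under a second on a CPU for feature dimensions in the hundreds, which is the point of the approach.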
483

Restoring Sensation in Human Upper Extremity Amputees using Chronic Peripheral Nerve Interfaces

Tan, Daniel 02 September 2014 (has links)
No description available.
484

Machine Learning Approaches to Data-Driven Transition Modeling

Zafar, Muhammad-Irfan 15 June 2023 (has links)
Laminar-turbulent transition has a strong impact on aerodynamic performance in many practical applications. Hence, there is a practical need for reliable and efficient transition prediction models, which form a critical element of the CFD process for aerospace vehicles across multiple flow regimes. This dissertation explores machine learning approaches to developing transition models using data from computations based on linear stability theory. Such data provide a strong correlation with the underlying physics governed by the linearized disturbance equations. In the proposed transition model, a convolutional neural network-based model encodes information from boundary layer profiles into integral quantities. This automated feature extraction capability enables generalization of the proposed model to multiple instability mechanisms, even those for which physically defined shape factor parameters cannot be defined or determined in a consistent manner. Furthermore, sequence-to-sequence mapping is used to predict the transition location from the mean boundary layer profiles. Such an end-to-end transition model provides a significantly simplified workflow. Although the proposed model has been analyzed for two-dimensional boundary layer flows, the embedded feature extraction capability enables its generalization to other flows as well. Neural network-based nonlinear functional approximation is also presented in the context of transport equation-based closure models. Such models have been examined for their computational complexity and invariance properties based on the transport equation of a general scalar quantity. The data-driven approaches explored here demonstrate the potential for improved transition prediction models. / Doctor of Philosophy / Surface skin friction and aerodynamic heating caused by the flow over a body increase significantly with the transition from laminar to turbulent flow.
Hence, efficient and reliable prediction of the transition onset location is a critical component of simulating fluid flows in engineering applications. Currently available transition prediction tools do not provide a good balance between computational efficiency and accuracy. This dissertation explores machine learning approaches to developing efficient and reliable models for predicting transition in a significantly simplified manner. A convolutional neural network is used to extract features from the state of the boundary layer flow at each location along the body. These extracted features are then processed sequentially by a recurrent neural network to predict the amplification of instabilities in the flow, which is directly correlated with the onset of transition. The automated nature of this feature extraction enables the generalization of the model to multiple transition mechanisms associated with different flow conditions and geometries. Furthermore, an end-to-end mapping from flow data to transition prediction requires no user expertise in stability theory and provides a significantly simplified workflow compared to traditional stability-based computations. Another category of neural network-based models, known as neural operators, is also examined; these can learn functional mappings from input variable fields to output quantities. Such models can learn directly from data for complex sets of problems without knowledge of the underlying governing equations. This attribute can be leveraged to develop a transition prediction model that integrates seamlessly into flow solvers. While further development is needed, such data-driven models demonstrate the potential for improved transition prediction.
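As a toy illustration of the profile-encoding step described above (not the dissertation's actual architecture), a small bank of 1D convolutional kernels can reduce a wall-normal boundary-layer profile to a few integral-style scalar features; the kernels here are hypothetical stand-ins for learned filters:

```python
import numpy as np

def conv1d_encode(profile, kernels):
    """Cross-correlate each kernel with the profile (valid positions only),
    apply a ReLU, and global-average-pool to one scalar feature per kernel."""
    feats = []
    for k in kernels:
        # np.convolve flips its second argument, so flip back for cross-correlation
        conv = np.convolve(profile, k[::-1], mode='valid')
        feats.append(np.maximum(conv, 0.0).mean())
    return np.array(feats)

# A forward-difference kernel responds to the profile's slope (a shear-like feature)
velocity_profile = np.linspace(0.0, 1.0, 11)
features = conv1d_encode(velocity_profile, [np.array([-1.0, 1.0])])
```

In the model described above, features of this kind would then be processed along the streamwise direction by a sequence model to predict instability amplification.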
485

Studying Perturbations on the Input of Two-Layer Neural Networks with ReLU Activation

Alsubaihi, Salman 07 1900 (has links)
Neural networks have been shown to be very susceptible to small and imperceptible perturbations of their input. In this thesis, we study perturbations on two-layer piecewise linear networks. Such studies are essential for training neural networks that are robust to noisy input. One type of perturbation we consider is ℓ∞ norm bounded perturbations. Training deep neural networks (DNNs) that are robust to norm bounded perturbations, or adversarial attacks, remains an elusive problem. While verification-based methods are generally too expensive to robustly train large networks, it was demonstrated in [1] that bounded input intervals can be inexpensively propagated layer by layer through large networks. This interval bound propagation (IBP) approach led to high robustness and was the first to be employed on large networks. However, due to the very loose nature of the IBP bounds, particularly for large networks, the required training procedure is complex and involved. In this work, we closely examine the bounds of a block of layers composed of an affine layer followed by a ReLU nonlinearity followed by another affine layer. In doing so, we propose probabilistic bounds (true bounds with overwhelming probability) that are provably tighter than IBP bounds in expectation. We then extend this result to deeper networks through blockwise propagation and show that we can achieve bounds that are orders of magnitude tighter than IBP. With such tight bounds, we demonstrate that a simple standard training procedure can achieve the best robustness-accuracy tradeoff across several architectures on both MNIST and CIFAR10. We also consider Gaussian perturbations, where we build on previous work that derives the first and second output moments of a two-layer piecewise linear network [2]. Here, we derive an exact expression for the second moment by dropping the zero-mean assumption of [2].
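The plain IBP baseline that the thesis tightens can be sketched in a few lines. A minimal NumPy version for the affine-ReLU-affine block analyzed above (this shows standard interval propagation only, not the tighter probabilistic bounds the thesis proposes):

```python
import numpy as np

def ibp_affine(l, u, W, b):
    """Propagate elementwise bounds [l, u] through x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ l + W_neg @ u + b, W_pos @ u + W_neg @ l + b

def ibp_relu(l, u):
    # ReLU is monotone, so it maps interval bounds to interval bounds directly
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

def ibp_block(x, eps, W1, b1, W2, b2):
    """Output bounds for an affine-ReLU-affine block over an
    l-infinity ball of radius eps around the input x."""
    l, u = x - eps, x + eps
    l, u = ibp_affine(l, u, W1, b1)
    l, u = ibp_relu(l, u)
    return ibp_affine(l, u, W2, b2)
```

Any perturbed output is guaranteed to lie inside the returned interval; the looseness of that interval as networks deepen is exactly what motivates the tighter bounds studied in the thesis.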
486

Learning Compact Architectures for Deep Neural Networks

Srinivas, Suraj January 2017 (has links) (PDF)
Deep neural networks with millions of parameters are at the heart of many state-of-the-art computer vision models. However, recent works have shown that models with a much smaller number of parameters can often perform just as well. A smaller model has the advantage of being faster to evaluate and easier to store, both of which are crucial for real-time and embedded applications. While prior work on compressing neural networks has looked at methods based on sparsity, quantization, and factorization of neural network layers, we look at the alternate approach of pruning neurons. Training neural networks is often described as a kind of 'black magic', as successful training requires setting the right hyper-parameter values (such as the number of neurons in a layer, the depth of the network, etc.). It is often not clear what these values should be, and these decisions often end up being either ad hoc or driven by extensive experimentation. It would be desirable to set some of these hyper-parameters automatically for the user so as to minimize trial and error. Combining this objective with our earlier preference for smaller models, we ask the following question: for a given task, is it possible to come up with small neural network architectures automatically? In this thesis, we propose methods to achieve this. The work is divided into four parts. First, given a neural network, we look at the problem of identifying important and unimportant neurons. We consider this problem in a data-free setting, i.e., assuming that the data the neural network was trained on is not available. We propose two rules for identifying wasteful neurons and show that these suffice in such a data-free setting. By removing neurons based on these rules, we are able to reduce model size without significantly affecting accuracy. Second, we propose an automated learning procedure to remove neurons during the process of training.
We call this procedure 'Architecture-Learning', as it automatically discovers the optimal width and depth of neural networks. We empirically show that this procedure is preferable to trial-and-error-based Bayesian optimization procedures for selecting neural network architectures. Third, we connect 'Architecture-Learning' to a popular regularizer called 'Dropout', and propose a novel regularizer which we call 'Generalized Dropout'. From a Bayesian viewpoint, this method corresponds to a hierarchical extension of the Dropout algorithm. Empirically, we observe that Generalized Dropout is a more flexible version of Dropout and works in scenarios where Dropout fails. Finally, we apply our procedure for removing neurons to the problem of removing weights in a neural network, and achieve state-of-the-art results in sparsifying neural networks.
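A sketch of one plausible data-free rule in the spirit described above (the similarity-merge criterion here is an illustrative assumption, not necessarily one of the thesis's two rules): if two hidden neurons have nearly identical incoming weights, one can be removed and its outgoing weights folded into the survivor, leaving the network's function almost unchanged without touching any training data.

```python
import numpy as np

def prune_similar_neuron(W1, b1, W2):
    """Find the most similar pair of hidden neurons (rows of W1, entries of b1),
    delete one, and add its outgoing weights (column of W2) to the survivor's."""
    H = W1.shape[0]
    best, pair = np.inf, (0, 1)
    for i in range(H):
        for j in range(i + 1, H):
            d = np.sum((W1[i] - W1[j]) ** 2) + (b1[i] - b1[j]) ** 2
            if d < best:
                best, pair = d, (i, j)
    i, j = pair
    W2 = W2.copy()
    W2[:, i] += W2[:, j]                 # reroute neuron j's contribution through i
    keep = [k for k in range(H) if k != j]
    return W1[keep], b1[keep], W2[:, keep]
```

For exactly duplicated neurons the merge is lossless; in practice one would prune greedily until the accumulated approximation error becomes unacceptable.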
487

Multimodal Deep Learning for Multi-Label Classification and Ranking Problems

Dubey, Abhishek January 2015 (has links) (PDF)
In recent years, deep neural network models have shown to outperform many state of the art algorithms. The reason for this is, unsupervised pretraining with multi-layered deep neural networks have shown to learn better features, which further improves many supervised tasks. These models not only automate the feature extraction process but also provide with robust features for various machine learning tasks. But the unsupervised pretraining and feature extraction using multi-layered networks are restricted only to the input features and not to the output. The performance of many supervised learning algorithms (or models) depends on how well the output dependencies are handled by these algorithms [Dembczy´nski et al., 2012]. Adapting the standard neural networks to handle these output dependencies for any specific type of problem has been an active area of research [Zhang and Zhou, 2006, Ribeiro et al., 2012]. On the other hand, inference into multimodal data is considered as a difficult problem in machine learning and recently ‘deep multimodal neural networks’ have shown significant results [Ngiam et al., 2011, Srivastava and Salakhutdinov, 2012]. Several problems like classification with complete or missing modality data, generating the missing modality etc., are shown to perform very well with these models. In this work, we consider three nontrivial supervised learning tasks (i) multi-class classification (MCC), (ii) multi-label classification (MLC) and (iii) label ranking (LR), mentioned in the order of increasing complexity of the output. While multi-class classification deals with predicting one class for every instance, multi-label classification deals with predicting more than one classes for every instance and label ranking deals with assigning a rank to each label for every instance. All the work in this field is associated around formulating new error functions that can force network to identify the output dependencies. 
Aim of our work is to adapt neural network to implicitly handle the feature extraction (dependencies) for output in the network structure, removing the need of hand crafted error functions. We show that the multimodal deep architectures can be adapted for these type of problems (or data) by considering labels as one of the modalities. This also brings unsupervised pretraining to the output along with the input. We show that these models can not only outperform standard deep neural networks, but also outperform standard adaptations of neural networks for individual domains under various metrics over several data sets considered by us. We can observe that the performance of our models over other models improves even more as the complexity of the output/ problem increases.
488

Determining Properties of Synaptic Structure in a Neural Network through Spike Train Analysis

Brooks, Evan 05 1900 (has links)
A "complex" system typically has a relatively large number of dynamically interacting components and tends to exhibit emergent behavior that cannot be explained by analyzing each component separately. A biological neural network is one example of such a system. A multi-agent model of such a network is developed to study the relationships between a network's structure and its spike train output. Using this model, inferences are made about the synaptic structure of networks through cluster analysis of spike train summary statistics. A complexity measure for the network structure is also presented, which has a one-to-one correspondence with the standard time series complexity measure, sample entropy.
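Sample entropy, the complexity benchmark mentioned above, can be sketched directly from its definition (a simplified NumPy illustration, not the thesis's implementation):

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn(m, r): -log(A/B), where B counts pairs of length-m templates and
    A pairs of length-(m+1) templates within Chebyshev distance r, excluding
    self-matches; r is taken as a fraction of the series' standard deviation."""
    x = np.asarray(x, dtype=float)
    r = r_frac * np.std(x)

    def match_count(length):
        n = len(x) - m  # same number of templates for both lengths
        t = np.array([x[i:i + length] for i in range(n)])
        count = 0
        for i in range(n):
            d = np.max(np.abs(t - t[i]), axis=1)
            count += np.sum(d <= r) - 1  # exclude the self-match
        return count

    B, A = match_count(m), match_count(m + 1)
    return -np.log(A / B) if A > 0 else np.inf
```

A perfectly periodic signal yields a value near zero, while an irregular signal yields a larger value, which is what makes SampEn a useful reference point for a structural complexity measure.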
489

Molecules involved in the regulation of enteric neural crest cell migration

January 2014 (has links)
The enteric nervous system (ENS) is the most complex part of the peripheral nervous system and is composed of a vast number of neurons and glial cells. The enteric neurons and glial cells arise from vagal and sacral neural crest cells (NCCs), which migrate along the gastrointestinal tract to colonize the whole gut during embryonic development. The molecular mechanisms regulating NCC migration are poorly characterized despite the importance of this migration process in ENS formation. Therefore, identification and characterization of molecules involved in the modulation of NCC migration are essential to understanding ENS development and could provide potential therapeutic targets for the treatment of human ENS disorders.
/ The present study aimed to identify and characterize the molecules involved in modulating NCC migration during ENS development, and was divided into two parts. The first part focused on semaphorin3A (Sema3A) signaling. Sema3A was found to be expressed in the hindgut epithelium and also in the regions adjacent to the pelvic ganglia, while its receptor, neuropilin-1, was expressed by sacral NCCs before they entered the hindgut. Sacral NCC migration and neuronal fiber extension in vitro were retarded in culture medium containing Sema3A. When a hindgut segment expressing Sema3A was co-cultured with sacral NCCs, sacral NCC migration and neuronal fiber extension were also suppressed by the hindgut segment. These findings provide evidence for the repulsive activity of Sema3A before the entry of sacral NCCs into the hindgut. / The second part focused on the potential target genes of the transcription factor Sox10, which is expressed by migrating NCCs. A naturally occurring mouse mutant, Dominant megacolon (Sox10Dom), which expresses a mutant Sox10, was used to identify candidate molecules that may affect NCC migration. After 24 hours in culture, vagal NCCs from Sox10Dom/Dom embryos showed retarded migration, abnormal cell differentiation, and excessive cell death in vitro compared to Sox10⁺/⁺ vagal NCCs. Microarray analyses revealed differentially expressed genes in Sox10Dom/Dom vagal NCCs as compared to Sox10⁺/⁺ vagal NCCs after 24 hours in culture. Among these genes, Sox10 was able to bind to the promoters of Itga4, Lama4, and Gfra2 to induce their expression. Sox10 activated the Gfra2 promoter by direct binding to the critical region located between -116 bp and -58 bp upstream of the Gfra2 transcription start site. Finally, re-expression of Gfra2 in Sox10Dom/Dom vagal NCCs resulted in decreased cell death, suggesting that down-regulation of Gfra2 in the mutant mice plays an important role in the early cell death of vagal NCCs.
/ In conclusion, before sacral NCCs entered the hindgut, Sema3A inhibited sacral NCC migration, and the spatiotemporal change in the Sema3A distribution regulated the entry of sacral NCCs into the hindgut. Furthermore, retarded cell migration, abnormal cell differentiation, increased cell death, and differential gene expression were found in Sox10Dom/Dom vagal NCCs compared with those from Sox10⁺/⁺ embryos in vitro. The expression of Gfra2, a potential target gene of Sox10, promoted the viability of vagal NCCs. / Wang, Cuifang. / Thesis (Ph.D.) Chinese University of Hong Kong, 2014. / Includes bibliographical references (leaves 180-196). / Abstracts also in Chinese.
490

Artificial neural networks applied to the characterization and prediction of promoter regions

Silva, Scheila de Avila e 11 January 2007 (has links)
The promoter region is a DNA sequence located just upstream of a gene region. It is responsible for initiating the transcription of a gene or set of genes, and thus also acts as a regulator of gene expression. The study of gene expression regulation is relevant because it is essential for understanding the vital machinery of living organisms: the difference between two species has more to do with how and when their genes are "active" or "inactive" than with the structure of the genes themselves. Although computational methods can predict genes with good accuracy, the same has not been achieved for promoters. This difficulty is due to the short and poorly conserved pattern of promoter sequences, which leads to results with a high number of false positives. Besides consensus motifs, promoters have physical characteristics that distinguish them from non-promoter sequences; however, these are not yet widely used in the in silico prediction problem.
/ This work employs neural networks for promoter prediction and recognition in Escherichia coli using two approaches: orthogonal codification and stability values of the promoter sequence. For characterization, rules of the type "if ... then" are extracted. The results in this
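The "orthogonal codification" mentioned above is the standard one-hot encoding of DNA bases, in which each base maps to a unit vector so that all base codes are mutually orthogonal. A minimal sketch (the exact input layout used in the thesis may differ):

```python
import numpy as np

BASE_INDEX = {'A': 0, 'C': 1, 'G': 2, 'T': 3}

def orthogonal_encode(seq):
    """One-hot ('orthogonal') encoding: each base becomes a 4-component unit
    vector, so a length-L sequence yields a flat 4L input vector for the network."""
    out = np.zeros((len(seq), 4))
    for i, base in enumerate(seq.upper()):
        out[i, BASE_INDEX[base]] = 1.0
    return out.ravel()
```

A promoter window of, say, 80 bp thus becomes a 320-dimensional binary input; the stability-value approach instead feeds physical quantities computed from the sequence.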
