381

Ressuage des matériaux cimentaires : origine physique et changement d'échelle / Bleeding of cementitious materials

Massoussi, Nadia 10 October 2017 (has links)
Due to the density differences between the solid mineral components and the suspending water, gravity can induce phase separation in concrete. This phase separation is at the origin of the formation of a film of water on the upper surface of fresh concrete, commonly known as bleeding. Although bleeding is known to directly or indirectly affect the final properties of hardened concrete, the existing knowledge does not allow for the prediction of this phenomenon or its correlation to mix proportions. The objective of this thesis, therefore, is to identify the physics behind the bleeding phenomenon in order to propose both an adapted measurement methodology and a predictive theoretical framework. The approach adopted is to start from the study of a simple model material, a cement paste in the laboratory, and upscale to the more complex scale of concrete poured into a real foundation on site.

In the first part, our experimental results on cement paste suggest that bleeding cannot be simply described as the consolidation of a soft porous material but is, in fact, of a heterogeneous nature, leading to the formation of preferential water extraction channels within the cement paste. We thus show the existence of three bleeding regimes: an induction period, an acceleration period, and a consolidation period. Only the last two regimes had previously been observed and discussed in the literature. Our results suggest that the formation of these preferential channels is initiated by defects in the system (air bubbles, at first order).

In the second part, the two industrial standard tests used for the measurement of bleeding on site, the ASTM test and the Bauer test, are studied. We show that these tests capture different aspects of bleeding and therefore cannot be correlated. We also show the limits of these tests' capacity to capture the risk of bleeding for a given concrete. Changes and improvements are proposed in order to enable these tests to provide the data necessary for the prediction of bleeding at the concrete foundation scale.

Finally, in the last part, we study the differences between the bleeding of a cement paste and the bleeding of a concrete, as well as the influence of the total height of material subjected to bleeding. The strong dependence of the bleeding rate on the depth of the foundation is captured in the case of concretes. A model is proposed to extrapolate the bleeding rate in a foundation from a bleeding measurement using the ASTM test. This model is validated on laboratory tests and on on-site measurements of real concrete foundations.

Keywords: bleeding, concrete, cement paste, consolidation, scale effect
382

Taxonomy and ecology of the deep-pelagic fish family Melamphaidae, with emphasis on interactions with a mid-ocean ridge system

Unknown Date (has links)
Much of the world's oceans lie below a depth of 200 meters, but very little is known about the creatures that inhabit these deep-sea environments. The deep-sea fish family Melamphaidae (Stephanoberyciformes) is one such example of an understudied group of fishes. Samples from the MAR-ECO (www.mar-eco.no) project represent one of the largest melamphaid collections, providing an ideal opportunity to gain information on this important, but understudied, family of fishes. The key to the family presented here is the first updated, comprehensive key since those produced by Ebeling and Weed (1963) and Keene (1987). Samples from the 2004 MAR-ECO cruise and the U.S. National Museum of Natural History provided an opportunity to review two possible new species, the Scopelogadus mizolepis subspecies, and a Poromitra crassiceps species complex. Results show that Scopeloberyx americanus and Melamphaes indicoides are new species, while the two subspecies of Scopelogadus mizolepis are most likely only one species and the Poromitra crassiceps complex is actually several different species of Poromitra. Data collected from the MAR-ECO cruise provided an opportunity to study the distribution, reproductive characteristics and trophic ecology of the family Melamphaidae along the Mid-Atlantic Ridge (MAR). Cluster analysis showed that there are five distinct groups of melamphaid fishes along the MAR. This analysis also supported the initial observation that the melamphaid assemblage changes between the northern and southern edges of an anti-cyclonic anomaly that could be indicative of a warm-core ring. Analysis of the reproductive characteristics of the melamphaid assemblage revealed that many of the female fishes have a high gonadosomatic index (GSI) consistent with values found for other species of deep-sea fishes during their spawning seasons. / This may indicate that melamphaids use this ridge as a spawning ground. Diets of the melamphaid fishes were composed primarily of ostracods, amphipods, copepods and euphausiids. Scopelogadus was the only genus shown to have a high percent of gelatinous prey in their digestive system, while Melamphaes had the highest concentration of chaetognaths. This work presents data on the ecology and taxonomy of the family Melamphaidae and provides a strong base for any future work on this biomass-dominant family of fishes. / by Kyle Allen Bartow. / Thesis (Ph.D.)--Florida Atlantic University, 2010. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2010. Mode of access: World Wide Web.
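
The gonadosomatic index mentioned above has a standard definition: gonad mass as a percentage of total body mass. A minimal sketch (the function and example values are illustrative, not data from the thesis):

    # Standard gonadosomatic index (GSI); argument names are illustrative.
    def gonadosomatic_index(gonad_mass_g: float, body_mass_g: float) -> float:
        """Gonad mass expressed as a percentage of total body mass."""
        return 100.0 * gonad_mass_g / body_mass_g

    # Example: a 2.3 g gonad in a 40 g fish gives a GSI of ~5.8%.
    print(gonadosomatic_index(2.3, 40.0))
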
383

3D Visualization of MPC-based Algorithms for Autonomous Vehicles

Sörliden, Pär January 2019 (has links)
The area of autonomous vehicles is an interesting research topic, popular in both research and industry worldwide. Linköping University is no exception, and some of its research is based on using Model Predictive Control (MPC) to plan a path for and control autonomous vehicles. Additionally, different methods (for example deep learning or likelihood) are used to calculate collision probabilities for obstacles. These are very complex algorithms, and it is not always easy to see how they work. It is therefore interesting to study whether a visualization tool, in which the algorithms are presented in a three-dimensional way, can be useful for understanding them and for developing them. This project consisted of implementing such a visualization tool using a 3D library and evaluating it both analytically and empirically. The evaluation showed positive results: the proposed tool is helpful when developing algorithms for autonomous vehicles, although some aspects of the algorithms would need more research into how they could be visualized. This concerns the neural networks, which were shown to be difficult to visualize, especially given the available data. It was found that more information about the internal variables of the network would be needed to visualize them better.
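
The abstract does not give the MPC formulation itself; the following is a minimal, illustrative receding-horizon sketch (the toy 1D double-integrator dynamics, cost, and horizon are all assumptions), showing the planned trajectory such a 3D tool would visualize at each step:

    import itertools

    DT, HORIZON, CONTROLS = 0.1, 5, (-1.0, 0.0, 1.0)   # acceleration choices

    def step(state, u):
        pos, vel = state
        return (pos + DT * vel, vel + DT * u)

    def cost(traj, target=1.0):
        # Penalize distance to the target position and residual speed.
        return sum((p - target) ** 2 + 0.1 * v ** 2 for p, v in traj)

    def mpc_plan(state):
        # Exhaustive search over short control sequences stands in for a
        # real optimizer; it returns the best predicted trajectory.
        candidates = (
            (list(itertools.accumulate(seq, step, initial=state)), seq)
            for seq in itertools.product(CONTROLS, repeat=HORIZON)
        )
        return min(candidates, key=lambda c: cost(c[0]))

    state = (0.0, 0.0)
    for _ in range(50):
        traj, seq = mpc_plan(state)   # `traj` is what a visualizer would draw
        state = step(state, seq[0])   # receding horizon: apply first control only
    print(tuple(round(s, 2) for s in state))
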
384

Skin lesion segmentation and classification using deep learning

Unknown Date (has links)
Melanoma, a severe and life-threatening skin cancer, is commonly misdiagnosed or left undiagnosed. Advances in artificial intelligence, particularly deep learning, have enabled the design and implementation of intelligent solutions for skin lesion detection and classification from visible-light images, which are capable of performing early and accurate diagnosis of melanoma and other types of skin diseases. This work presents solutions to the problems of skin lesion segmentation and classification. The proposed classification approach leverages convolutional neural networks and transfer learning. Additionally, the impact of segmentation (i.e., isolating the lesion from the rest of the image) on the performance of the classifier is investigated, leading to the conclusion that there is an optimal region between "dermatologist segmented" and "not segmented" that produces the best results, suggesting that the context around a lesion is helpful as the model is trained and built. Generative adversarial networks are also explored, in the context of extending limited datasets by creating synthetic samples of skin lesions. The robustness and security of skin lesion classifiers using convolutional neural networks are examined and stress-tested with adversarial examples. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
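
As a rough sketch of the transfer-learning setup described (the ResNet-18 backbone, two-class head, and hyperparameters are assumptions; the abstract does not specify them):

    import torch
    import torch.nn as nn
    from torchvision import models

    # Reuse an ImageNet-pretrained CNN and retrain only a new classification
    # head (torchvision >= 0.13 for the `weights` argument).
    backbone = models.resnet18(weights="IMAGENET1K_V1")
    for p in backbone.parameters():
        p.requires_grad = False                     # freeze pretrained features
    backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # e.g. benign vs melanoma

    optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    x = torch.randn(4, 3, 224, 224)                 # stand-in lesion batch
    y = torch.tensor([0, 1, 0, 1])
    loss = criterion(backbone(x), y)
    loss.backward()
    optimizer.step()
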
385

Parallel Distributed Deep Learning on Cluster Computers

Unknown Date (has links)
Deep Learning is an increasingly important subdomain of artificial intelligence. Deep Learning architectures, artificial neural networks characterized by having both a large breadth of neurons and a large depth of layers, benefit from training on Big Data. The size and complexity of the model, combined with the size of the training data, make the training procedure very computationally and temporally expensive. Accelerating the training procedure of Deep Learning using cluster computers faces many challenges, ranging from distributed optimizers to the large communication overhead specific to a system with off-the-shelf networking components. In this thesis, we present a novel synchronous data-parallel distributed Deep Learning implementation on HPCC Systems, a cluster computer system. We discuss research that has been conducted on the distribution and parallelization of Deep Learning, as well as the concerns relating to cluster environments. Additionally, we provide case studies that evaluate and validate our implementation. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
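
The HPCC Systems implementation itself is not shown in the abstract; a platform-agnostic simulation of the synchronous data-parallel idea (each worker computes a gradient on its shard, gradients are averaged — the allreduce step — and one synchronized update is applied) might look like:

    import numpy as np

    # Synchronous data-parallel SGD for linear regression, simulated in-process.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1024, 8))
    true_w = rng.normal(size=8)
    y = X @ true_w + 0.01 * rng.normal(size=1024)

    K = 4                                        # simulated workers
    shards = list(zip(np.array_split(X, K), np.array_split(y, K)))
    w = np.zeros(8)

    def local_gradient(w, Xs, ys):
        # Mean-squared-error gradient on one worker's data shard.
        return 2.0 * Xs.T @ (Xs @ w - ys) / len(ys)

    for _ in range(200):
        grads = [local_gradient(w, Xs, ys) for Xs, ys in shards]
        w -= 0.05 * np.mean(grads, axis=0)       # "allreduce": average, then update

    print(np.allclose(w, true_w, atol=0.05))     # True once converged
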
386

Using Deep Learning Semantic Segmentation to Estimate Visual Odometry

Unknown Date (has links)
In this research, image segmentation and visual odometry estimation in real time are addressed, and two main contributions were made to this field. First, a new image segmentation and classification algorithm named DilatedU-NET is introduced. This deep learning based algorithm is able to process seven frames per second and achieves over 84% accuracy on the Cityscapes dataset. Secondly, a new method to estimate visual odometry is introduced. Using the KITTI benchmark dataset as a baseline, the visual odometry error was more significant than could be accurately measured; however, the method's robust processing speed of 15 frames per second made up for this. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
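
The DilatedU-NET architecture is not detailed in the abstract; a minimal sketch of the dilated-convolution idea its name suggests (channel counts and dilation rates are assumptions) — dilation enlarges the receptive field without reducing spatial resolution:

    import torch
    import torch.nn as nn

    block = nn.Sequential(
        nn.Conv2d(3, 64, kernel_size=3, padding=1, dilation=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2),  # wider view
        nn.ReLU(inplace=True),
        nn.Conv2d(64, 19, kernel_size=1),    # 19 Cityscapes classes, per pixel
    )

    x = torch.randn(1, 3, 256, 512)          # Cityscapes-like aspect ratio
    print(block(x).shape)                    # spatial size preserved: (1, 19, 256, 512)
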
387

Learning representations for speech recognition using artificial neural networks

Swietojanski, Paweł January 2016 (has links)
Learning representations is a central challenge in machine learning. For speech recognition, we are interested in learning robust representations that are stable across different acoustic environments, recording equipment and irrelevant inter- and intra-speaker variabilities. This thesis is concerned with representation learning for acoustic model adaptation to speakers and environments, construction of acoustic models in low-resource settings, and learning representations from multiple acoustic channels. The investigations are primarily focused on the hybrid approach to acoustic modelling based on hidden Markov models and artificial neural networks (ANN).

The first contribution concerns acoustic model adaptation. This comprises two new adaptation transforms operating in ANN parameter space. Both operate at the level of activation functions and treat a trained ANN acoustic model as a canonical set of fixed-basis functions, from which one can later derive variants tailored to the specific distribution present in adaptation data. The first technique, termed Learning Hidden Unit Contributions (LHUC), depends on learning distribution-dependent linear combination coefficients for hidden units. This technique is then extended to altering groups of hidden units with parametric and differentiable pooling operators. We found that the proposed adaptation techniques have many desirable properties: they are relatively low-dimensional, do not overfit and can work in both a supervised and an unsupervised manner. For LHUC we also present extensions to speaker adaptive training and environment factorisation. On average, depending on the characteristics of the test set, 5-25% relative word error rate (WERR) reductions are obtained in an unsupervised two-pass adaptation setting.

The second contribution concerns building acoustic models in low-resource data scenarios. In particular, we are concerned with insufficient amounts of transcribed acoustic material for estimating acoustic models in the target language, while assuming that resources like lexicons or texts to estimate language models are available. First, we propose an ANN with a structured output layer which models both context-dependent and context-independent speech units, with the context-independent predictions used at runtime to aid the prediction of context-dependent states. We also propose to perform multi-task adaptation with a structured output layer. We obtain consistent WERR reductions of up to 6.4% in low-resource speaker-independent acoustic modelling. Adapting those models in a multi-task manner with LHUC gives an additional 13.6% WERR, compared to 12.7% for non-multi-task LHUC. We then demonstrate that one can build better acoustic models with unsupervised multi- and cross-lingual initialisation, and find that pre-training is largely language-independent. Up to 14.4% WERR reductions are observed, depending on the amount of transcribed acoustic data available in the target language.

The third contribution concerns building acoustic models from multi-channel acoustic data. For this purpose we investigate various ways of integrating and learning multi-channel representations. In particular, we investigate channel concatenation and the applicability of convolutional layers for this purpose. We propose a multi-channel convolutional layer with cross-channel pooling, which can be seen as a data-driven non-parametric auditory attention mechanism.
We find that for unconstrained microphone arrays, our approach is able to match the performance of the comparable models trained on beamform-enhanced signals.
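
Following the LHUC formulation in the published papers (the layer sizes here are assumptions), a minimal sketch: a trained hidden layer is kept fixed and each speaker learns one amplitude per hidden unit, with a 2·sigmoid re-parameterization keeping amplitudes in (0, 2) and initialised at the identity:

    import torch
    import torch.nn as nn

    class LHUCLayer(nn.Module):
        def __init__(self, hidden: nn.Linear, num_speakers: int):
            super().__init__()
            self.hidden = hidden                    # canonical, frozen model
            for p in self.hidden.parameters():
                p.requires_grad = False
            # One amplitude vector per speaker; 2*sigmoid(0) = 1 at init.
            self.r = nn.Parameter(torch.zeros(num_speakers, hidden.out_features))

        def forward(self, x, speaker: int):
            h = torch.sigmoid(self.hidden(x))       # fixed basis functions
            return 2.0 * torch.sigmoid(self.r[speaker]) * h

    layer = LHUCLayer(nn.Linear(40, 512), num_speakers=10)
    print(layer(torch.randn(8, 40), speaker=3).shape)   # torch.Size([8, 512])
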
388

Diversidade de hidroides (Cnidaria) do Atlântico profundo sob uma perspectiva macroecológica / Diversity of deep-sea Atlantic hydroids (Cnidaria) under a macroecological perspective

Fernandez, Marina de Oliveira 13 December 2017 (has links)
The bathymetric variation in the oceans and the associated environmental changes impose limits on the distribution of species, modulating the occurrence of individuals with different forms, functions and life histories according to depth; it is therefore important for the understanding of marine biodiversity patterns. This study aims to infer patterns of hydroid distribution in the Atlantic Ocean and adjacent Arctic and Antarctic seas at more than 50 m deep, seeking to contribute to the understanding of the diversification and structuring, associated with bathymetric variation, that favored the occupation of the different environments by the group. We present for the first time inferences on the bathymetric ranges of distribution of the species, on the variation of functional traits of individuals and species with depth, and on the distribution of species composition along depth and latitude.

Together, the results indicate that the distribution of hydroids in the deep Atlantic is related to historical factors and to the environmental gradients associated with latitudinal and bathymetric variations. Reduced sizes and low fertility in the deep sea suggest that the colonization and evolution of hydroids along depth are mainly influenced by food availability and low population densities. Also, the greater proportion of solitary species and individuals in the deep sea and the greater use of unconsolidated substrates suggest an influence of substrate availability. The proportion of species capable of releasing medusae below 50 m deep is generally lower than in shallow coastal waters, but the proportion increases with depth, especially below 1,500 m. The release of medusae would be disadvantageous in an environment with low population densities, as it increases the uncertainty of fertilization caused by gamete dispersal and expends more energy on reproduction in a scenario of scarce food resources. Wide bathymetric distributions suggest vertical dispersal capacity and high tolerance to the environmental changes associated with bathymetric variation. The results also indicate that the colonization of the deep sea by hydroids occurs in a source-sink system, in which deep-sea populations would be sustained by immigration from shallower waters. We show in this study that hydroids are important inhabitants of the deep sea and that the understanding of the group's diversity in this environment will benefit from investigations in areas still poorly sampled, such as southern tropical latitudes and depths below 1,000 m.
389

Encoder-decoder neural networks

Kalchbrenner, Nal January 2017 (has links)
This thesis introduces the concept of an encoder-decoder neural network and develops architectures for the construction of such networks. Encoder-decoder neural networks are probabilistic conditional generative models of high-dimensional structured items such as natural language utterances and natural images. Encoder-decoder neural networks estimate a probability distribution over structured items belonging to a target set conditioned on structured items belonging to a source set. The distribution over structured items is factorized into a product of tractable conditional distributions over individual elements that compose the items. The networks estimate these conditional factors explicitly. We develop encoder-decoder neural networks for core tasks in natural language processing and natural image and video modelling.

In Part I, we tackle the problem of sentence modelling and develop deep convolutional encoders to classify sentences; we extend these encoders to models of discourse. In Part II, we go beyond encoders to study the longstanding problem of translating from one human language to another. We lay the foundations of neural machine translation, a novel approach that views the entire translation process as a single encoder-decoder neural network. We propose a beam search procedure to search over the outputs of the decoder to produce a likely translation in the target language. Besides known recurrent decoders, we also propose a decoder architecture based solely on convolutional layers. Since the publication of these new foundations for machine translation in 2013, encoder-decoder translation models have been richly developed and have displaced traditional translation systems both in academic research and in large-scale industrial deployment. In services such as Google Translate these models process in the order of a billion translation queries a day. In Part III, we shift from the linguistic domain to the visual one to study distributions over natural images and videos. We describe two- and three-dimensional recurrent and convolutional decoder architectures and address the longstanding problem of learning a tractable distribution over high-dimensional natural images and videos, where the likely samples from the distribution are visually coherent.

The empirical validation of encoder-decoder neural networks as state-of-the-art models of tasks ranging from machine translation to video prediction has a two-fold significance. On the one hand, it validates the notions of assigning probabilities to sentences or images and of learning a distribution over a natural language or a domain of natural images; it shows that a probabilistic principle of compositionality, whereby a high-dimensional item is composed from individual elements at the encoder side and whereby a corresponding item is decomposed into conditional factors over individual elements at the decoder side, is a general method for modelling cognition involving high-dimensional items; and it suggests that the relations between the elements are best learnt in an end-to-end fashion as non-linear functions in distributed space. On the other hand, the empirical success of the networks on the tasks characterizes the underlying cognitive processes themselves: a cognitive process as complex as translating from one language to another that takes a human a few seconds to perform correctly can be accurately modelled via a learnt non-linear deterministic function of distributed vectors in high-dimensional space.
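
The factorization described above — p(y | x) as a product of tractable per-element conditionals — is what makes beam search over the decoder's outputs possible. A generic sketch (the toy scoring function stands in for a real decoder network, which would also condition on the source item x):

    import math

    VOCAB = ["<eos>", "a", "b", "c"]

    def next_log_probs(prefix):
        # Toy conditional p(y_t | y_<t); favours continuing early, stopping late.
        scores = [0.5 if t == "<eos>" else 1.0 / (1 + len(prefix)) for t in VOCAB]
        z = sum(scores)
        return [math.log(s / z) for s in scores]

    def beam_search(beam_width=3, max_len=5):
        beams = [([], 0.0)]                      # (tokens, cumulative log prob)
        for _ in range(max_len):
            candidates = []
            for tokens, logp in beams:
                if tokens and tokens[-1] == "<eos>":
                    candidates.append((tokens, logp))    # finished hypothesis
                    continue
                for tok, lp in zip(VOCAB, next_log_probs(tokens)):
                    candidates.append((tokens + [tok], logp + lp))
            beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
        return beams[0]

    print(beam_search())    # most probable hypothesis under the toy model
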
390

Deep neural networks in computer vision and biomedical image analysis

Xie, Weidi January 2017 (has links)
This thesis proposes different models for a variety of applications, such as semantic segmentation, in-the-wild face recognition, microscopy cell counting and detection, standardized re-orientation of 3D ultrasound fetal brain volumes, and Magnetic Resonance (MR) cardiac video segmentation. Our approach is to employ large-scale machine learning models, in particular deep neural networks. Expert knowledge is either mathematically modelled as a differentiable hidden layer in the artificial neural networks, or the complex task is broken into several small and easy-to-solve tasks.

Multi-scale contextual information plays an important role in pixel-wise prediction, e.g. semantic segmentation. To capture the spatial contextual information, we present a new block for learning receptive fields adaptively by within-layer recurrence. While interleaving with the convolutional layers, receptive fields are effectively enlarged, reaching across the entire feature map or image. The new block can be initialized as identity and inserted into any pre-trained networks, therefore benefiting from the "pre-train and fine-tuning" paradigm. Current face recognition systems are mostly driven by the success of image classification, where the models are trained via identity classification. We propose multi-column deep comparator networks for face recognition. The architecture takes two sets of images or frames as inputs (each containing an arbitrary number of faces); facial part-based (e.g. eyes, noses) representations of each set are pooled out, dynamically calibrated based on the quality of the input images, and further compared with local "experts" in a pairwise way.

Unlike in computer vision applications, collecting data and annotations is usually more expensive in biomedical image analysis. Therefore, models that can be trained with fewer data and weaker annotations are of great importance. We approach microscopy cell counting and detection based on density estimation, where only central dot annotations are needed. The proposed fully convolutional regression networks are first trained on a synthetic dataset of cell nuclei, later fine-tuned and shown to generalize to real data. In 3D fetal ultrasound neurosonography, establishing a coordinate system over the fetal brain serves as a precursor for subsequent tasks, e.g. localization of anatomical landmarks, extraction of standard clinical planes for biometric assessment of fetal growth, etc. To align brain volumes into a common reference coordinate system, we decompose the complex transformation into several simple ones, which can be easily tackled with Convolutional Neural Networks. The model is therefore designed to leverage the closely related tasks by sharing low-level features, and the task-specific predictions are then combined to reproduce the transformation matrix as the desired output. Finally, we address the problem of MR cardiac video analysis, in which we are interested in assisting clinical diagnosis based on fine-grained segmentation. To facilitate segmentation, we present one end-to-end trainable model that achieves multi-view structure detection, alignment (standardized re-orientation), and fine-grained segmentation simultaneously. This is motivated by the fact that CNNs are not in essence rotation-equivariant or rotation-invariant; therefore, adding pre-alignment into the end-to-end trainable pipeline can effectively decrease the complexity of segmentation for the later stages of the model.
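
For the density-based counting mentioned above, the predicted count is the integral (sum) of a non-negative density map regressed by a fully convolutional network; training regresses against dot annotations blurred so each nucleus integrates to one. A minimal sketch (the tiny architecture is an assumption, not the networks from the thesis):

    import torch
    import torch.nn as nn

    fcrn = nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 1), nn.ReLU(),      # one non-negative density per pixel
    )

    image = torch.randn(1, 1, 256, 256)      # stand-in for a microscopy image
    density = fcrn(image)
    print(float(density.sum()))              # predicted cell count
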
