  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Stochastic Multimedia Modelling of Watershed-Scale Microbial Transport in Surface Water

Safwat, Amr M. 10 October 2014 (has links)
No description available.
72

Scaling Analytics via Approximate and Distributed Computing

Chakrabarti, Aniket 12 December 2017 (has links)
No description available.
73

Mapping and localization for extraterrestrial robotic explorations

Xu, Fengliang 01 December 2004 (has links)
No description available.
74

Email Thread Summarization with Conditional Random Fields

Shockley, Darla Magdalene 23 August 2010 (has links)
No description available.
75

Named Entity Recognition for Search Queries in the Music Domain

Liljeqvist, Sandra January 2016 (has links)
This thesis addresses the problem of named entity recognition (NER) in music-related search queries. NER is the task of identifying keywords in text and classifying them into predefined categories. Previous work in the field has mainly focused on longer documents of editorial texts. However, in recent years, the application of NER for queries has attracted increased attention. This task is, however, acknowledged to be challenging due to queries being short, ungrammatical and containing minimal linguistic context. The usage of NER for queries is especially useful for the implementation of natural language queries in domain-specific search applications. These applications are often backed by a database, where the query format otherwise is restricted to keyword search or the usage of a formal query language. In this thesis, two techniques for NER for music-related queries are evaluated; a conditional random field based solution and a probabilistic solution based on context words. As a baseline, the most elementary implementation of NER, commonly applied on editorial text, is used. Both of the evaluated approaches outperform the baseline and demonstrate an overall F1 score of 79.2% and 63.4% respectively. The experimental results show a high precision for the probabilistic approach and the conditional random field based solution demonstrates an F1 score comparable to previous studies from other domains. / Denna avhandling redogör för identifiering av namngivna enheter i musikrelaterade sökfrågor. Identifiering av namngivna enheter innebär att extrahera nyckelord från text och att klassificera dessa till någon av ett antal förbestämda kategorier. Tidigare forskning kring ämnet har framför allt fokuserat på längre redaktionella dokument. Däremot har intresset för tillämpningar på sökfrågor ökat de senaste åren. Detta anses vara ett svårt problem då sökfrågor i allmänhet är korta, grammatiskt inkorrekta och innehåller minimal språklig kontext. 
Identifiering av namngivna enheter är framför allt användbart för domänspecifika sökapplikationer där målet är att kunna tolka sökfrågor skrivna med naturligt språk. Dessa applikationer baseras ofta på en databas där formatet på sökfrågorna annars är begränsat till att enbart använda nyckelord eller användande av ett formellt frågespråk. I denna avhandling har två tekniker för identifiering av namngivna enheter för musikrelaterade sökfrågor undersökts; en metod baserad på villkorliga slumpfält (eng. conditional random field) och en probabilistisk metod baserad på kontextord. Som baslinje har den mest grundläggande implementationen, som vanligtvis används för redaktionella texter, valts. De båda utvärderade metoderna presterar bättre än baslinjen och ges ett F1-värde på 79,2% respektive 63,4%. De experimentella resultaten visar en hög precision för den probabilistiska implementationen och metoden ba- serad på villkorliga slumpfält visar på resultat på en nivå jämförbar med tidigare studier inom andra domäner.
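The probabilistic context-word technique evaluated in this thesis can be illustrated with a toy sketch: words seen next to labelled tokens in training queries vote for the labels of tokens they appear next to at tagging time. All tokens, labels and data below are invented for illustration; this is not the thesis implementation.

```python
# Toy context-word tagger for short music queries (illustrative only).
from collections import defaultdict

def train_context_model(tagged_queries):
    """tagged_queries: list of [(token, label), ...] per query."""
    counts = defaultdict(lambda: defaultdict(int))
    for query in tagged_queries:
        tokens = [t for t, _ in query]
        for i, (_, label) in enumerate(query):
            # The words directly left and right of a token vote for its label.
            for ctx in tokens[max(0, i - 1):i] + tokens[i + 1:i + 2]:
                counts[ctx][label] += 1
    return counts

def tag(tokens, counts, default="O"):
    out = []
    for i, _tok in enumerate(tokens):
        scores = defaultdict(int)
        for ctx in tokens[max(0, i - 1):i] + tokens[i + 1:i + 2]:
            for label, c in counts[ctx].items():
                scores[label] += c
        out.append(max(scores, key=scores.get) if scores else default)
    return out

train = [
    [("play", "O"), ("madonna", "ARTIST")],
    [("play", "O"), ("prince", "ARTIST")],
]
model = train_context_model(train)
print(tag(["play", "beyonce"], model))  # → ['O', 'ARTIST']
```

The unseen token "beyonce" gets the ARTIST label purely because its context word "play" co-occurred with ARTIST tokens in training, which is what makes the approach workable on short, ungrammatical queries.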
76

Integrative Modeling and Analysis of High-throughput Biological Data

Chen, Li 21 January 2011 (has links)
Computational biology is an interdisciplinary field that focuses on developing mathematical models and algorithms to interpret biological data so as to understand biological problems. With current high-throughput technology development, different types of biological data can be measured in a large scale, which calls for more sophisticated computational methods to analyze and interpret the data. In this dissertation research work, we propose novel methods to integrate, model and analyze multiple biological data, including microarray gene expression data, protein-DNA interaction data and protein-protein interaction data. These methods will help improve our understanding of biological systems. First, we propose a knowledge-guided multi-scale independent component analysis (ICA) method for biomarker identification on time course microarray data. Guided by a knowledge gene pool related to a specific disease under study, the method can determine disease relevant biological components from ICA modes and then identify biologically meaningful markers related to the specific disease. We have applied the proposed method to yeast cell cycle microarray data and Rsf-1-induced ovarian cancer microarray data. The results show that our knowledge-guided ICA approach can extract biologically meaningful regulatory modes and outperform several baseline methods for biomarker identification. Second, we propose a novel method for transcriptional regulatory network identification by integrating gene expression data and protein-DNA binding data. The approach is built upon a multi-level analysis strategy designed for suppressing false positive predictions. With this strategy, a regulatory module becomes increasingly significant as more relevant gene sets are formed at finer levels. 
At each level, a two-stage support vector regression (SVR) method is utilized to reduce false positive predictions by integrating binding motif information and gene expression data; a significance analysis procedure follows to assess the significance of each regulatory module. The resulting performance on simulation data and yeast cell cycle data shows that the multi-level SVR approach outperforms other existing methods in identifying both regulators and their target genes. We have further applied the proposed method to breast cancer cell line data to identify condition-specific regulatory modules associated with estrogen treatment. Experimental results show that our method can identify biologically meaningful regulatory modules related to estrogen signaling and action in breast cancer. Third, we propose a bootstrapping Markov Random Field (MRF)-based method for subnetwork identification on microarray data by incorporating protein-protein interaction data. Methodologically, an MRF-based network score is first derived by considering the dependency among genes to increase the chance of selecting hub genes. A modified simulated annealing search algorithm is then utilized to find optimal or suboptimal subnetworks with maximal network score. A bootstrapping scheme is finally implemented to generate confident subnetworks. Experimentally, we have compared the proposed method with other existing methods; the resulting performance on simulation data shows that the bootstrapping MRF-based method outperforms the others in identifying the ground-truth subnetworks and hub genes. We have then applied our method to breast cancer data to identify significant subnetworks associated with drug resistance. The identified subnetworks not only show good reproducibility across different data sets, but also indicate several pathways and biological functions potentially associated with the development of breast cancer and drug resistance.
In addition, we propose network-constrained support vector machines (SVMs) for cancer classification and prediction, which take the network structure into account when constructing classification hyperplanes. A simulation study demonstrates the effectiveness of the proposed method, and experiments on real microarray data sets show that our network-constrained SVM, together with the bootstrapping MRF-based subnetwork identification approach, achieves better classification performance than conventional biomarker selection approaches and SVMs. We believe the research presented in this dissertation not only provides novel and effective methods to model and analyze different types of biological data; the extensive experiments on several real microarray data sets also show its potential to improve the understanding of biological mechanisms related to cancer by generating novel hypotheses for further study. / Ph. D.
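The bootstrapping scheme described in the abstract (repeat a stochastic selection step on resampled data and keep only the genes chosen in a large fraction of replicates) can be sketched as follows. The trivial `select_genes` stand-in, the thresholds and the toy data are all hypothetical; the dissertation's actual selection step is an MRF-based simulated annealing search.

```python
# Sketch of bootstrapping for "confident" gene selection (illustrative).
import random
from collections import Counter

def select_genes(samples):
    # Hypothetical stand-in for one MRF-based subnetwork search:
    # keep genes whose mean expression in the resample exceeds 0.5.
    n_genes = len(samples[0])
    return {g for g in range(n_genes)
            if sum(s[g] for s in samples) / len(samples) > 0.5}

def bootstrap_confident(data, n_boot=200, freq=0.8, seed=0):
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_boot):
        resample = [rng.choice(data) for _ in data]  # sample with replacement
        votes.update(select_genes(resample))
    # Keep genes selected in at least `freq` of the bootstrap replicates.
    return {g for g, v in votes.items() if v / n_boot >= freq}

# Gene 0 is consistently high, gene 1 is 50/50 noise, gene 2 is low.
data = [[0.9, 0.9 if i % 2 else 0.1, 0.1] for i in range(20)]
print(bootstrap_confident(data))
```

The noisy gene is selected in roughly half the replicates and so falls below the frequency cutoff, while the consistently high gene survives — the same filtering effect the bootstrapping scheme provides for subnetworks.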
77

Influence of random fields and long-range interactions on the critical behavior of the Ising model: an approach by the nonperturbative renormalization group

Baczyk, Maxime 23 June 2014 (has links)
Nous étudions l’influence du champ magnétique aléatoire et des interactions à longue portée sur le comportement critique du modèle d’Ising ; notre approche est basée sur une version non perturbative et fonctionnelle du groupe de renormalisation. Les concepts du groupe de renormalisation non perturbatif sont tout d’abord introduits, puis illustrés dans le cadre simple d’une théorie classique d’un champ scalaire. Nous discutons ensuite les propriétés critiques de cette dernière en présence d’un champ magnétique aléatoire gelé qui traduit le désordre dans le système. Celui-ci est distribué comme un bruit blanc gaussien dans l’espace. Nous insistons principalement sur la propriété de réduction dimensionnelle qui prédit un comportement critique identique pour le modèle en champ aléatoire à d dimensions et le modèle pur (c’est à dire sans champ aléatoire) en dimension d − 2. Bien que cette propriété soit démontrée à tous les ordres par la théorie de perturba- tion, on montre que celle-ci est brisée en dessous d’une dimension critique dDR = 5.13. La réduction dimensionnelle et sa brisure sont alors reliées aux caractéristiques d’échelle des grandes avalanches intervenant dans le système à température nulle. Nous considérons, dans un second temps, une généralisation du modèle d’Ising dans laquelle l’interaction ferromagnétique décroit désormais à longue portée comme r^−(d+σ) avec σ > 0 (d désigne toujours la dimension de l’espace). Dans un tel système, il est possible de travailler en dimension fixée (incluant la dimension d = 1) et de varier l’exposant σ afin de parcourir une gamme de comportements critiques similaire à celle obtenue entre les dimensions critiques inférieure et supérieure de la version à courte portée du modèle. Nous avons caractérisé la transition de phase dans le plan (σ, d), et notamment calculé les exposants critiques en fonction du paramètre σ pour les dimensions physiquement intéressantes d = 1, 2 et 3. 
Finalement, on s’intéresse aussi à la théorie en présence d’un champ magnétique aléatoire dont les corrélations décroissent à grande distance comme r^−d+ρ avec ρ > −d. Dans le cas particulier où ρ = 2 − σ, on montre que la propriété de réduction dimensionnelle est vérifiée lorsque σ est suffisamment petit, mais brisée à grand σ (en dimension inférieure à dDR ). En particulier, concernant le modèle tridimensionnel, nos résultats prédisent une brisure de réduction dimensionnelle lorsque σ > σDR = 0.71 / We study the influence of the presence of a random magnetic field and of long-ranged interactions on the critical behavior of the Ising model. Our approach is based on a nonperturbative and functional version of the renormalization group. The bases of the nonperturbative renormalization group are introduced first and then illustrated in the simple case of the classical scalar field theory. We next discuss the critical properties of the latter in the presence of a random magnetic field, which is associated with frozen disorder in the system. The distribution of the random field in space is taken as that of a gaussian white noise. We focus on the property of dimensional reduction that predicts identical critical behavior for the random-field model in dimension $d$ and the pure model, \textit{i.e.} in the absence of random field, in dimension d-2. Although this property is found at all orders of the perturbation theory, it is violated below a critical dimension $d_{DR} \approx 5.13$. We show that the dimensional reduction and its breakdown are related to the large-scale properties of the avalanches that are present in the system at zero temperature. We next consider a generalization of the Ising model in which the ferromagnetic interaction varies at large distance like $r^{-(d+\sigma)}$ with $\sigma > 0$ ($d$ being the spatial dimension). 
In this system, it is possible to obtain a range of critical behavior similar to that encountered in the short-ranged version of the model between the lower and the upper critical dimensions by varying the exponent $\sigma$ while keeping the dimension $d$ fixed (including the case $d=1$).We have characterized the phase transition of this long-ranged model in the plane $(\sigma,d)$ and computed the critical exponents as a function of the parameter $\sigma$ for the physically interesting dimensions, $d=1,2$ and $3$. Finally, we have also studied the long-ranged random-field Ising model when the correlations of the random magnetic field decrease at large distance as $r^{-d+\rho}$ with $\rho > -d$. In the special case where $\rho=2-\sigma$, we have shown that the dimensional-reduction property is satisfied when $\sigma$ is small enough but breaks down above a critical value (when the spatial dimension $d$ is less than $d_{DR}$). In particular, for $d=3$, we predict a breakdown of dimensional reduction for $\sigma_{DR}\approx 0.71$.
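As a rough illustration of the long-ranged model discussed above, a Metropolis Monte Carlo sweep for a 1D Ising chain with couplings decaying as $r^{-(1+\sigma)}$ can be sketched in a few lines. The thesis itself uses the nonperturbative renormalization group, not Monte Carlo, and the chain length, temperature and $\sigma$ below are illustrative choices only.

```python
# Toy Metropolis dynamics for a 1D long-range Ising chain with
# couplings J(r) = r**-(1 + sigma); parameters are illustrative.
import math
import random

def local_field(spins, i, sigma):
    return sum(s / abs(i - j) ** (1 + sigma)
               for j, s in enumerate(spins) if j != i)

def metropolis_sweep(spins, sigma, T, rng):
    n = len(spins)
    for _ in range(n):
        i = rng.randrange(n)
        dE = 2.0 * spins[i] * local_field(spins, i, sigma)  # cost of a flip
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i] = -spins[i]

rng = random.Random(1)
spins = [1] * 64
for _ in range(50):
    metropolis_sweep(spins, sigma=0.5, T=0.5, rng=rng)
m = abs(sum(spins)) / len(spins)
print(m)  # deep in the ordered phase the chain stays strongly magnetized
```

With $\sigma=0.5$ the interaction is long-ranged enough that even the 1D chain orders at finite temperature, which is precisely why fixing $d=1$ and varying $\sigma$ gives access to a whole range of critical behavior.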
78

Structured patterns and random fields in complex networks

Doria, Felipe França January 2016 (has links)
This work focuses on the study of two complex networks. The first is a random field Ising model, with the random field drawn from either a Gaussian or a bimodal distribution. A finite-connectivity technique was used to solve it, and a Monte Carlo method was applied to verify the results. Our results indicate that for the Gaussian distribution the phase transition is always second order. For the bimodal distribution there is a tricritical point that depends on the value of the connectivity; below a certain minimum connectivity, only a second-order transition exists. The second is a metric attractor neural network. More precisely, we study the ability of this model to store structured patterns.
In particular, the chosen patterns were taken from fingerprints, which present some local features. Our results show that the lower the activity of the fingerprint patterns, the higher the load ratio and retrieval quality. A theoretical framework was also developed as a function of five parameters: the load ratio, the connectivity, the density degree of the network, the randomness ratio and the spatial pattern correlation.
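A minimal sketch of the Monte Carlo check mentioned in the abstract, for a random-field Ising model on a sparse random graph of fixed connectivity with Gaussian or bimodal fields. The graph construction, temperature and field strengths are illustrative assumptions, not the parameters studied in the thesis.

```python
# Monte Carlo sketch of a random-field Ising model on a sparse random
# graph with fixed connectivity c; Gaussian vs bimodal field distributions.
import math
import random

def random_graph(n, c, rng):
    adj = [set() for _ in range(n)]
    for i in range(n):
        while len(adj[i]) < c:  # crude stand-in for fixed-connectivity graphs
            j = rng.randrange(n)
            if j != i:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def sweep(spins, adj, fields, T, rng):
    n = len(spins)
    for _ in range(n):
        i = rng.randrange(n)
        h = sum(spins[j] for j in adj[i]) + fields[i]
        dE = 2.0 * spins[i] * h
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i] = -spins[i]

rng = random.Random(0)
n, c = 200, 4
adj = random_graph(n, c, rng)
distributions = {
    "gaussian": [rng.gauss(0.0, 0.3) for _ in range(n)],
    "bimodal": [rng.choice((-0.3, 0.3)) for _ in range(n)],
}
mags = {}
for name, fields in distributions.items():
    spins = [1] * n
    for _ in range(100):
        sweep(spins, adj, fields, T=0.8, rng=rng)
    mags[name] = abs(sum(spins)) / n
print(mags)  # both field distributions stay ordered at this low temperature
```

Locating the actual transitions (and the bimodal tricritical point) would require scanning temperature and field strength; this sketch only shows the simulation loop such a check is built on.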
79

Assessment and mitigation of liquefaction seismic risk: numerical modeling of its effects on soil-structure interaction (SSI)

Montoya Noguera, Silvana 29 January 2016 (has links)
Strong ground motions can trigger soil liquefaction that alters the propagating signal and induces ground failure. Important damage to structures and lifelines was evidenced after recent earthquakes such as Christchurch, New Zealand and Tohoku, Japan in 2011. Accurate prediction of a structure's seismic risk requires careful modeling of the nonlinear behavior of soil-structure interaction (SSI) systems. In general, seismic risk analysis is described as the convolution between the natural hazard and the vulnerability of the system. This thesis is a contribution to the numerical modeling of liquefaction evaluation and mitigation. For this purpose, the finite element method (FEM) in the time domain is used as the numerical tool. The main numerical model consists of a reinforced concrete building with a shallow rigid foundation standing on saturated cohesionless soil. As the initial step in the seismic risk analysis, the first part of the thesis is devoted to the characterization of the soil behavior and its constitutive modeling. Results of the model's validation against a real site for 1D wave propagation in dry conditions are then presented; these come from participation in the international benchmark PRENOLIN and concern the PARI site in Sendai, Japan. Even though very little laboratory and in-situ data were available, the model agrees well with the recordings in a blind prediction.
The second part concerns the numerical modeling of the coupling between excess pore pressure (Δpw) and soil deformation. The effects were evaluated on the ground motion and on the settlement and seismic performance of two structures. This part contains material from an article published in Acta Geotechnica (Montoya-Noguera and Lopez-Caballero, 2016). The applicability of the models was found to depend on both the liquefaction level and the SSI effects.
In the last part, a method is proposed to model the spatial variability added to the soil deposit by improvement techniques used to strengthen soft soils and mitigate liquefaction. Innovative treatment processes such as bentonite permeation and biogrouting, among others, have recently emerged. However, some uncertainty remains concerning the degree of spatial variability introduced in the design and its effect on the system's performance. This added variability can differ significantly from the inherent or natural variability; in this thesis it is modeled by coupling the FEM with a binary random field. The efficiency in improving the soil behavior was analyzed in relation to the effectiveness of the method, measured by the amount of soil changed, over spatial fractions ranging from untreated to entirely treated. Two cases were studied: the bearing capacity of a shallow foundation on cohesive soil and the liquefaction-induced settlement of a structure on loose cohesionless soil. The latter, in part, contains material published in the GeoRisk journal (Montoya-Noguera and Lopez-Caballero, 2015). Due to the interaction between the two soils, an important variability is evidenced in the response of the structure. Additionally, traditional and advanced homogenization theories were used to predict the relation between the average efficiency and the effectiveness. Because of the nonlinear soil behavior, the traditional theories fail to predict the response, while some advanced theories that include percolation theory can provide a good estimate. Concerning the effect of added spatial variability on the reduction of the structure's settlement, different input motions were tested and the overall response was found to depend on the ratio of PHV to PHA of the input motion.
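One simple way to realize a binary (treated / untreated) random field of the kind described above is to threshold a spatially correlated Gaussian field at the quantile matching the target treated fraction. The moving-average correlation model, grid sizes and kernel width below are assumptions for illustration, not the thesis' field model.

```python
# Binary (treated / untreated) random field: smooth white noise to get
# spatial correlation, then threshold so a target fraction is "treated".
import random

def binary_field(nx, ny, treated_fraction, corr=2, seed=0):
    rng = random.Random(seed)
    noise = [[rng.gauss(0.0, 1.0) for _ in range(nx)] for _ in range(ny)]
    smooth = [[0.0] * nx for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):
            window = [noise[j][i]
                      for j in range(max(0, y - corr), min(ny, y + corr + 1))
                      for i in range(max(0, x - corr), min(nx, x + corr + 1))]
            smooth[y][x] = sum(window) / len(window)  # moving-average smoothing
    # Threshold at the quantile that yields the requested treated fraction.
    flat = sorted(v for row in smooth for v in row)
    cut = flat[int((1.0 - treated_fraction) * len(flat))]
    return [[1 if v >= cut else 0 for v in row] for row in smooth]

field = binary_field(nx=40, ny=20, treated_fraction=0.3)
frac = sum(map(sum, field)) / (40 * 20)
print(frac)  # ≈ 0.3 by construction
```

Because the underlying field is spatially correlated, the treated cells form clusters rather than isolated pixels, which is what makes percolation-type homogenization arguments relevant to the mixture's overall behavior.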
80

Image analysis and representation for textile design classification

Jia, Wei January 2011 (has links)
A good image representation is vital for image comparison and classification; it may affect both classification accuracy and efficiency. The purpose of this thesis was to explore novel and appropriate image representations, to investigate these representations for image classification, and to examine novel features for improving classification accuracy. The images of interest to this thesis were textile design images. The motivation for analysing textile design images is to help designers browse images, fuel their creativity, and improve their design efficiency. In recent years, the bag-of-words model has been shown to be a good basis for image representation, and there have been many attempts to go beyond it. Bag-of-words models have been used frequently in the classification of image data, due to their good performance and simplicity. "Words" in images can have different definitions and are obtained through steps of feature detection, feature description, and codeword calculation. The model represents an image as an orderless collection of local features. However, discarding the spatial relationships of local features limits the power of this model. This thesis explored novel image representations, the bag of shapes and region label graphs models, both based on the bag-of-words model. In both models, an image was represented by a collection of segmented regions, and each region was described by shape descriptors. In the latter model, graphs were constructed to capture the spatial information between groups of segmented regions, and graph features were calculated using graph theory. Novel elements include the use of MRFs to extract printed designs and woven patterns from textile images, the utilisation of the extractions to form bag of shapes models, and the construction of region label graphs to capture spatial information. The extraction of textile designs was formulated as a pixel labelling problem.
Algorithms for MRF optimisation and re-estimation were described and evaluated. A method for quantitative evaluation was presented and used to compare the performance of MRFs optimised using alpha-expansion and iterated conditional modes (ICM), both with and without parameter re-estimation. The results were used in the formation of the bag of shapes and region label graphs models. Bag of shapes model was a collection of MRFs' segmented regions, and the shape of each region was described with generic Fourier descriptors. Each image was represented as a bag of shapes. A simple yet competitive classification scheme based on nearest neighbour class-based matching was used. Classification performance was compared to that obtained when using bags of SIFT features. To capture the spatial information, region label graphs were constructed to obtain graph features. Regions with the same label were treated as a group and each group was associated uniquely with a vertex in an undirected, weighted graph. Each region group was represented as a bag of shape descriptors. Edges in the graph denoted either the extent to which the groups' regions were spatially adjacent or the dissimilarity of their respective bags of shapes. Series of unweighted graphs were obtained by removing edges in order of weight. Finally, an image was represented using its shape descriptors along with features derived from the chromatic numbers or domination numbers of the unweighted graphs and their complements. Linear SVM classifiers were used for classification. Experiments were implemented on data from Liberty Art Fabrics, which consisted of more than 10,000 complicated images mainly of printed textile designs and woven patterns. Experimental data was classified into seven classes manually by assigning each image a text descriptor based on content or design type. The seven classes were floral, paisley, stripe, leaf, geometric, spot, and check. 
The results showed that reasonable and interesting regions were obtained from MRF segmentation, in which alpha-expansion with parameter re-estimation performed better than alpha-expansion without re-estimation or ICM. This result is promising not only for textile CAD (Computer-Aided Design), to redesign the textile image, but also for image representation. It was also found that the bag of shapes model based on MRF segmentation can achieve classification accuracy comparable to bags of SIFT features within the nearest neighbour class-based matching framework. Finally, the results indicated that incorporating graph features extracted by constructing region label graphs improves classification accuracy compared to both the bag of shapes and bag of SIFT models.
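The graph-feature construction described in this abstract (remove edges in order of weight and record a chromatic-number-based feature for each resulting unweighted graph) can be sketched as follows. Greedy colouring only gives an upper bound on the chromatic number, and the example vertices and weights are invented; the thesis' exact feature computation may differ.

```python
# Region-label-graph features: greedy-colouring bound on the chromatic
# number of each unweighted graph obtained by deleting edges heaviest-first.
def greedy_colors(n, edges):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    color = {}
    for v in range(n):
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(n) if c not in used)
    return len(set(color.values()))  # upper bound on the chromatic number

def chromatic_features(n, weighted_edges):
    edges = sorted(weighted_edges, key=lambda e: -e[2])  # heaviest first
    feats = []
    while edges:
        feats.append(greedy_colors(n, [(u, v) for u, v, _ in edges]))
        edges.pop(0)  # delete the heaviest remaining edge
    return feats

# 4 region-label groups; weights encode spatial adjacency between groups.
wedges = [(0, 1, 0.9), (1, 2, 0.7), (0, 2, 0.5), (2, 3, 0.2)]
print(chromatic_features(4, wedges))  # → [3, 2, 2, 2]
```

The feature vector shrinks from 3 colours to 2 as the triangle among groups 0, 1 and 2 is broken, showing how the sequence of graph invariants encodes the strength of spatial adjacency between region groups.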
