About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Sparsity and Group Sparsity Constrained Inversion for Spectral Decomposition of Seismic Data

Bonar, Christopher David Unknown Date
No description available.
12

New tools for unsupervised learning

Xiao, Ying 12 January 2015 (has links)
In an unsupervised learning problem, one is given an unlabelled dataset and hopes to find some hidden structure; the prototypical example is clustering similar data. Such problems often arise in machine learning and statistics, but also in signal processing, theoretical computer science, and any number of quantitative scientific fields. The distinguishing feature of unsupervised learning is that there are no privileged variables or labels which are particularly informative, and thus the greatest challenge is often to differentiate between what is relevant or irrelevant in any particular dataset or problem. In the course of this thesis, we study a number of problems which span the breadth of unsupervised learning. We make progress in Gaussian mixtures, independent component analysis (where we solve the open problem of underdetermined ICA), and we formulate and solve a feature selection/dimension reduction model. Throughout, our goal is to give finite sample complexity bounds for our algorithms -- these are essentially the strongest type of quantitative bound that one can prove for such algorithms. Some of our algorithmic techniques turn out to be very efficient in practice as well. Our major technical tool is tensor spectral decomposition: tensors are generalisations of matrices, and often allow access to the "fine structure" of data. Thus, they are often the right tools for unravelling the hidden structure in an unsupervised learning setting. However, naive generalisations of matrix algorithms to tensors run into NP-hardness results almost immediately, and thus to solve our problems, we are obliged to develop two new tensor decompositions (with robust analyses) from scratch. Both of these decompositions are polynomial time, and can be viewed as efficient generalisations of PCA extended to tensors.
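The workhorse the abstract describes, tensor spectral decomposition, is easiest to see in its classical special case: a symmetric 3-tensor with orthonormal components, recovered by tensor power iteration with deflation. The sketch below illustrates only that baseline tool on made-up data; the thesis's two new decompositions are more general and robust.

```python
import numpy as np

# Illustrative baseline only: recover an orthogonally decomposable symmetric
# 3-tensor T = sum_i lam_i * a_i (x) a_i (x) a_i by tensor power iteration.
# Dimensions and weights are made up; this is not the thesis's algorithm.

rng = np.random.default_rng(0)
d, k = 8, 3
A, _ = np.linalg.qr(rng.standard_normal((d, k)))  # orthonormal components a_i
lam = np.array([3.0, 2.0, 1.0])                   # positive weights

T = np.einsum('i,ji,ki,li->jkl', lam, A, A, A)    # build the tensor

def top_component(T, n_restarts=10, n_iters=100):
    """Find the robust eigenpair of T with the largest weight."""
    best_w, best_v = -np.inf, None
    for _ in range(n_restarts):
        v = rng.standard_normal(T.shape[0])
        v /= np.linalg.norm(v)
        for _ in range(n_iters):
            v = np.einsum('jkl,k,l->j', T, v, v)  # v <- T(I, v, v)
            v /= np.linalg.norm(v)
        w = np.einsum('jkl,j,k,l->', T, v, v, v)  # weight = T(v, v, v)
        if w > best_w:
            best_w, best_v = w, v
    return best_w, best_v

for _ in range(k):
    w, v = top_component(T)
    print(f"recovered weight {w:.3f}")
    T = T - w * np.einsum('j,k,l->jkl', v, v, v)  # deflate and repeat
```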
13

Line outage vulnerabilities of power systems: models and indicators

Ha, Dinh Truc 06 March 2018 (has links)
The vulnerability of electrical systems is one of the problems related to their complexity, and it has received increasing attention from researchers in recent decades. Despite this, the fundamental phenomena that govern the vulnerability of power systems are still not well understood. Understanding how that vulnerability emerges from network topology is therefore the main motivation of the present work, which proposes a new method to assess the vulnerability of power systems and identify their most critical elements. The method enables a better understanding of the links between the topology of the grid and its vulnerability to line outages. The first part of this work offers a critical review of graph-theory-based approaches from the literature for assessing system vulnerability. The results provided by these approaches for four IEEE test systems are compared against a reference contingency analysis based on AC power flow calculations. From the pros and cons of each approach, an improved method for assessing system vulnerability to line outages is derived; it is based on the DC approximation of the power flow. The second part proposes a new approach based on spectral graph theory and DC power flow, which gives better insight into how the vulnerability of power networks and their critical components emerge from the topology of the underlying graph.
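The DC load flow underpinning both parts reduces the network to a linear system B·θ = P, where B is the graph Laplacian weighted by line susceptances. A generic N-1 outage scan in that approximation might look like the following sketch (network, limits, and injections are invented for illustration, not taken from the thesis):

```python
import numpy as np

# Generic DC power-flow N-1 line-outage scan on an invented 4-bus network.
# This illustrates the DC approximation used in the thesis, not its
# vulnerability indicators.

# lines: (from_bus, to_bus, reactance, thermal limit in p.u.)
lines = [(0, 1, 0.2, 1.5), (0, 2, 0.4, 1.0), (1, 2, 0.25, 1.0),
         (1, 3, 0.3, 1.2), (2, 3, 0.5, 1.0)]
P = np.array([2.0, -0.5, -0.5, -1.0])   # net injections; bus 0 is the slack

def dc_flows(lines, P, n_bus=4, slack=0):
    """Solve B theta = P with the slack angle fixed to 0; return line flows."""
    B = np.zeros((n_bus, n_bus))         # susceptance-weighted graph Laplacian
    for i, j, x, _ in lines:
        b = 1.0 / x
        B[i, i] += b; B[j, j] += b
        B[i, j] -= b; B[j, i] -= b
    keep = [k for k in range(n_bus) if k != slack]
    theta = np.zeros(n_bus)
    theta[keep] = np.linalg.solve(B[np.ix_(keep, keep)], P[keep])
    return [(theta[i] - theta[j]) / x for i, j, x, _ in lines]

for out in range(len(lines)):            # N-1 scan: drop one line at a time
    remaining = [l for k, l in enumerate(lines) if k != out]
    flows = dc_flows(remaining, P)
    overloaded = [k for k, (f, l) in enumerate(zip(flows, remaining))
                  if abs(f) > l[3]]
    print(f"outage of line {out}: overloaded lines {overloaded}")
```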
14

Application of Random Matrix Theory to High Dimensional Statistics

Bun, Joël 06 September 2016 (has links)
Nowadays it is easy to collect large amounts of quantitative or qualitative data in many different fields. This access to new data has brought new challenges for data processing, and many numerical tools have been created recently to exploit very large databases. From a theoretical standpoint, this framework calls for new or refined results, since most results of classical multivariate statistics become inaccurate in this era of "Big Data". The aim of this thesis is twofold: first, to understand theoretically the so-called curse of dimensionality, which describes phenomena that arise in high-dimensional spaces; then, to show how these tools can be used to extract signals that are consistent with the dimension of the problem. We study the statistics of the eigenvalues, and especially the eigenvectors, of large symmetric random matrices. We highlight that some universal properties of these eigenvectors can be extracted, and that these properties help us construct estimators that are optimal, observable, and consistent with the high-dimensional framework.
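A standard entry point to these ideas is the Marchenko-Pastur law, which delimits the eigenvalue bulk that pure noise produces in a sample covariance matrix. The sketch below shows the classical "eigenvalue clipping" baseline built on that law; it is a generic illustration, not the optimal rotationally invariant estimators developed in the thesis.

```python
import numpy as np

# Illustrative sketch: clean a sample covariance matrix by clipping
# eigenvalues inside the Marchenko-Pastur bulk, the classical random-matrix
# baseline for high-dimensional covariance estimation.

rng = np.random.default_rng(1)
N, T = 100, 400                     # dimension N, sample size T
q = N / T                           # aspect ratio of the data matrix
X = rng.standard_normal((T, N))     # pure-noise data: true covariance is I
E = X.T @ X / T                     # sample covariance matrix

lam, V = np.linalg.eigh(E)
lam_plus = (1 + np.sqrt(q)) ** 2    # upper edge of the Marchenko-Pastur bulk

# Clipping: eigenvalues below the bulk edge carry no signal, so replace them
# by their average; eigenvalues above the edge are kept as "signal".
noise = lam < lam_plus
lam_clean = lam.copy()
lam_clean[noise] = lam[noise].mean()
E_clean = (V * lam_clean) @ V.T     # reconstruct the cleaned matrix

print("raw eigenvalue spread:    ", lam.min(), lam.max())
print("cleaned eigenvalue spread:", lam_clean.min(), lam_clean.max())
```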
15

Constitutive compatibility based identification of spatially varying elastic parameters distributions

Moussawi, Ali 12 1900 (has links)
The experimental identification of mechanical properties is crucial in mechanics for understanding material behavior and for the development of numerical models. Classical identification procedures employ standard-shaped specimens, assume that the mechanical fields in the object are homogeneous, and recover global properties. Thus, multiple tests are required for the full characterization of a heterogeneous object, leading to a time-consuming and costly process. The development of non-contact, full-field measurement techniques, from which complex kinematic fields can be recorded, has opened the door to a new way of thinking. From the identification point of view, suitable methods can be used to process these complex kinematic fields in order to recover multiple spatially varying parameters through one test or a few tests. The requirement is the development of identification techniques that can process these complex experimental data. This thesis introduces a novel identification technique called the constitutive compatibility method. The key idea is to define stresses as compatible with the observed kinematic field through the chosen class of constitutive equation, making possible the uncoupling of the identification of stress from the identification of the material parameters. This uncoupling leads to parametrized solutions in cases where the solution is non-unique (due to unknown traction boundary conditions), as demonstrated on 2D numerical examples. First, the theory is outlined and the method is demonstrated in 2D applications. Second, the method is implemented within a domain decomposition framework in order to reduce the cost of processing very large problems. Finally, it is extended to 3D numerical examples. Promising results are shown for 2D and 3D problems.
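The uncoupling idea (recover stress first, then read the parameters off the constitutive law) is easiest to see in a statically determinate 1D case, where equilibrium alone fixes the stress. The sketch below is a deliberately simplified illustration under that assumption, with invented numbers, and is not the constitutive compatibility method itself:

```python
import numpy as np

# Simplified 1D illustration of uncoupling stress identification from
# parameter identification (not the thesis's method). An axial bar under a
# known end force F is statically determinate, so sigma(x) = F/A regardless
# of the material; the modulus field then follows pointwise from Hooke's law
# E(x) = sigma(x) / eps(x), using a measured displacement field.

rng = np.random.default_rng(2)
n, L, A, F = 200, 1.0, 1e-4, 500.0            # nodes, length, area, end force
x = np.linspace(0.0, L, n)

# Ground-truth spatially varying modulus: stiff inclusion in the middle
E_true = 70e9 * (1.0 + 0.5 * np.exp(-((x - 0.5) / 0.1) ** 2))

# Synthetic "full-field measurement": integrate du/dx = F / (A * E_true)
strain_true = F / (A * E_true)
du = 0.5 * (strain_true[1:] + strain_true[:-1]) * np.diff(x)
u = np.concatenate(([0.0], np.cumsum(du)))
u = u + rng.normal(0.0, 1e-9, n)              # add measurement noise

sigma = F / A                                 # step 1: stress from equilibrium
eps = np.diff(u) / np.diff(x)                 # step 2: strain from displacements
E_identified = sigma / eps                    # pointwise constitutive inversion

E_mid = 0.5 * (E_true[1:] + E_true[:-1])
print("max relative error:", np.max(np.abs(E_identified - E_mid) / E_mid))
```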
16

Identifying Complex Fluvial Sandstone Reservoirs Using Core, Well Log, and 3D Seismic Data: Cretaceous Cedar Mountain and Dakota Formations, Southern Uinta Basin, Utah.

Hokanson, William H. 10 March 2011 (has links)
The Cedar Mountain and Dakota Formations are significant gas producers in the southern Uinta Basin of Utah. To date, however, predicting the stratigraphic distribution and lateral extent of potential gas-bearing channel sandstone reservoirs in these fluvial units has proven difficult due to their complex architecture and the limited spacing of wells in the region. A new strategy to correlate the Cedar Mountain and Dakota Formations has been developed using core, well-log, and 3D seismic data. The detailed stratigraphy and sedimentology of the interval were interpreted using descriptions of a near-continuous core of the Dakota Formation from the study area. The gamma-ray and density-porosity log signatures of interpreted mud-dominated overbank, coal-bearing overbank, and channel sandstone intervals from the cored well were used to identify the same lithologies in nearby wells and correlate similar stratal packages across the study area. Data from three 3D seismic surveys covering approximately 140 mi² (225 km²) of the study area were utilized to generate spectral decomposition, waveform classification, and percent less-than-threshold attributes of the Dakota-Cedar Mountain interval. These individual attributes were combined to create a composite attribute that was merged with interpreted lithological data from the well-log correlations. The overall process resulted in a high-resolution correlation of the Dakota-Cedar Mountain interval that permitted the identification and mapping of fluvial-channel reservoir fairways and channel belts throughout the study area. In the future, the strategy employed in this study may result in improved well-success rates in the southern Uinta Basin and assist in more detailed reconstructions of the Cedar Mountain and Dakota Formation depositional systems.
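Of the three attributes combined here, spectral decomposition is the most algorithmic: the seismic trace is sliced into narrow frequency bands, where thin channels "tune" and stand out. A generic short-time-Fourier-transform sketch on a synthetic trace (not tied to the study's actual surveys or software) is:

```python
import numpy as np
from scipy.signal import stft

# Generic sketch of the spectral-decomposition attribute on one synthetic
# seismic trace. All parameters are made up for illustration.

fs = 500.0                                  # sampling rate, Hz (2 ms samples)
t = np.arange(0, 2.0, 1.0 / fs)             # 2 s trace

def ricker(t, t0, f):
    """Ricker wavelet of peak frequency f centered at time t0."""
    a = (np.pi * f * (t - t0)) ** 2
    return (1 - 2 * a) * np.exp(-a)

# Synthetic trace: two reflections with different dominant frequencies
trace = ricker(t, 0.6, 25.0) + 0.7 * ricker(t, 1.3, 40.0)

# Short-time Fourier transform: time-frequency panel of the trace
f, tau, Z = stft(trace, fs=fs, nperseg=64, noverlap=48)
amp = np.abs(Z)

# Single-frequency "slice" near 30 Hz -- the kind of volume that gets
# combined with other attributes into a composite channel indicator
i30 = np.argmin(np.abs(f - 30.0))
print("30 Hz amplitude peaks near t =", tau[np.argmax(amp[i30])], "s")
```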
17

Optimizing Linear Queries Under Differential Privacy

Li, Chao 01 September 2013 (has links)
Private analysis of statistical data has been addressed in much recent literature. The goal of such analysis is to measure statistical properties of a database without revealing information about the individuals who participate in it. Differential privacy is a rigorous privacy definition that protects individual information using output perturbation: a differentially private algorithm produces statistically indistinguishable outputs whether or not the database contains a tuple corresponding to any given individual. It is straightforward to construct differentially private algorithms for many common tasks, and there are published algorithms to support various tasks under differential privacy. However, methods to design error-optimal algorithms for most non-trivial tasks are still unknown. In particular, we are interested in error-optimal algorithms for sets of linear queries. A linear query is a sum of counts of tuples that satisfy a certain condition; this covers many aggregation tasks, including count, sum, and histogram. We present the matrix mechanism, a novel mechanism for answering sets of linear queries under differential privacy. The matrix mechanism makes a clear distinction between the set of queries submitted by users, called the query workload, and an alternative set of queries to be answered under differential privacy, called the query strategy. The answer to the query workload can then be computed from the answer to the query strategy. Given a query workload, the query strategy determines the distribution of the output noise, and the power of the matrix mechanism comes from adaptively choosing a query strategy that minimizes the output noise. Our analyses also provide a theoretical measure of the quality of different strategies for a given workload. This measure is then used in exact and approximate formulations of the optimization problem whose solution is the error-optimal strategy. We present a lower bound on the error of answering each workload under the matrix mechanism. The bound reveals that the hardness of a query workload is related to the spectral properties of the workload when it is represented in matrix form. In addition, we design an approximation algorithm whose generated strategies outperform state-of-the-art mechanisms under (epsilon, delta)-differential privacy; these strategies lead to more accurate data analysis while preserving a rigorous privacy guarantee. Moreover, we combine the matrix mechanism with a novel data-dependent algorithm, which achieves differential privacy by adding noise that is adapted to the input data and to the given query workload.
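The mechanism's pipeline is concrete enough to sketch: pick a strategy A, add Laplace noise calibrated to A's sensitivity, and reconstruct the workload answers via the pseudoinverse. The workload (all prefix sums) and strategy (binary-tree counts) below are illustrative choices, not the optimized strategies of the thesis; the printed errors simply show how the strategy changes accuracy at a fixed privacy level.

```python
import numpy as np

# Toy sketch of the matrix mechanism for a workload W of linear queries
# under epsilon-differential privacy. Workload and strategies are invented.

rng = np.random.default_rng(3)
n, eps = 64, 1.0
x = rng.integers(0, 100, n).astype(float)     # private histogram of counts

W = np.tril(np.ones((n, n)))                  # workload: all prefix sums

def matrix_mechanism(W, A, x, eps):
    """Answer W x through noisy answers to strategy A under eps-DP."""
    sensitivity = np.abs(A).sum(axis=0).max()  # max column L1 norm of A
    y = A @ x + rng.laplace(0.0, sensitivity / eps, A.shape[0])
    return W @ (np.linalg.pinv(A) @ y)         # least-squares reconstruction

# Strategy 1: identity -- answer each count directly
ans_id = matrix_mechanism(W, np.eye(n), x, eps)

# Strategy 2: binary tree -- counts over all dyadic ranges
rows, span = [], n
while span >= 1:
    for start in range(0, n, span):
        r = np.zeros(n)
        r[start:start + span] = 1.0
        rows.append(r)
    span //= 2
H = np.array(rows)
ans_tree = matrix_mechanism(W, H, x, eps)

true = W @ x
print("identity strategy RMSE:", np.sqrt(np.mean((ans_id - true) ** 2)))
print("tree strategy RMSE:   ", np.sqrt(np.mean((ans_tree - true) ** 2)))
```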
18

Studies of PF Resole / Isocyanate Hybrid Adhesives

Zheng, Jun 09 January 2003 (has links)
Phenol-formaldehyde (PF) resole and polymeric diphenylmethane diisocyanate (PMDI) are two commonly used exterior thermosetting adhesives in the wood-based composites industry. There is interest in combining these two adhesives in order to benefit from their positive attributes while neutralizing some of the negative ones. Although this novel adhesive system has reportedly been utilized in some limited cases, a fundamental understanding of it is lacking. This research addresses that gap by investigating some of the important aspects of the system. The adhesive rheological and viscometric properties were investigated with an advanced rheometer. The resole/PMDI blends exhibited non-Newtonian flow behavior, and blend viscosity and stability depended on the blend ratio, mixing rate, and mixing time. The adhesive penetration into wood was found to depend on the blend ratio and to correlate with the blend viscosity. By dynamic mechanical analysis, the blend cure speed was found to increase with the PMDI content. Mode I fracture testing of wood specimens bonded with the resole/PMDI hybrid adhesive indicated that bondline fracture energy depends on the blend ratio. The 75/25 PF/PMDI blend exhibited high fracture energy with a fast cure speed and a processable viscosity. Exposure to water-boil weathering severely deteriorated the fracture energies of the hybrid adhesive bondlines. More detailed chemical and morphological studies were performed with cross-polarization nuclear magnetic resonance and 13C, 15N-doubly labeled PMDI. A spectral decomposition method was used to obtain the concentrations and relaxation behavior of the chemical species contributing to the major nitrogen resonance. Different urethane concentrations were present in the cured blend bondlines. Water-boil weathering and thermal treatment at elevated temperatures (e.g. > 200°C) reduced the urethane concentrations in the bondline. Solid-state relaxation parameters revealed a heterogeneous structure in the non-weathered blends, whereas water-boil weathering produced more uniform relaxation behavior in the blend bondline. This research makes more fundamental information about PF/PMDI hybrid adhesives available, which will aid in evaluating and improving their potential use in wood-based composites. / Ph. D.
19

Ergodic theory of multidimensional random dynamical systems

Hsieh, Li-Yu Shelley 13 November 2008 (has links)
Given a random dynamical system T constructed from Jablonski transformations, consider its Perron-Frobenius operator P_T. We prove a weak form of the Lasota-Yorke inequality for P_T and thereby prove the existence of BV-invariant densities for T. Using the Spectral Decomposition Theorem, we prove that the support of an invariant density is open a.e. and give conditions under which the invariant density for T is unique. We study the asymptotic behavior of the Markov operator P_T, especially when T has a unique absolutely continuous invariant measure (ACIM). Under the assumption of uniqueness, we obtain spectral stability in the sense of Keller. As an application, we can use Ulam's method to approximate the invariant density of P_T.
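Ulam's method, mentioned at the end of the abstract, discretizes the Perron-Frobenius operator into a row-stochastic matrix whose fixed vector approximates the invariant density. A sketch on a simple 1D expanding map follows; the map is a stand-in, since the thesis treats multidimensional Jablonski transformations.

```python
import numpy as np

# Ulam's method: partition [0,1] into n cells, estimate the transition matrix
# of the Perron-Frobenius operator by sampling, and take its fixed density.

rng = np.random.default_rng(4)

def T(x):
    # Doubling map x -> 2x mod 1, whose invariant density is Lebesgue (== 1)
    return (2.0 * x) % 1.0

n_cells, n_samples = 50, 2000
P = np.zeros((n_cells, n_cells))
edges = np.linspace(0.0, 1.0, n_cells + 1)

# Row i of P: where does cell i get mapped? Estimated by Monte Carlo sampling.
for i in range(n_cells):
    xs = rng.uniform(edges[i], edges[i + 1], n_samples)
    img = np.minimum((T(xs) * n_cells).astype(int), n_cells - 1)
    P[i] = np.bincount(img, minlength=n_cells) / n_samples

# Invariant density = left fixed vector of P, found by power iteration
rho = np.ones(n_cells) / n_cells
for _ in range(500):
    rho = rho @ P
rho_density = rho * n_cells           # convert cell masses to a density
print("density range:", rho_density.min(), rho_density.max())  # ~1 everywhere
```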
20

Texture image segmentation using multispectral analysis and dimensionality reduction

Θεοδωρακόπουλος, Ηλίας 16 June 2010 (has links)
Texture segmentation is the process of partitioning an image into multiple segments (regions) based on their texture, with many applications in computer vision, image retrieval, robotics, satellite imagery, etc. The objective of this thesis is to investigate the ability of non-linear dimensionality reduction algorithms, and especially of the Laplacian Eigenmaps (LE) algorithm, to produce an efficient representation of data derived from multi-spectral image analysis using Gabor filters, for solving the texture segmentation problem. For this purpose, we introduce a new supervised texture segmentation algorithm, which exploits a low-dimensional representation of the feature vectors together with well-known clustering methods, such as Fuzzy C-means and K-means, to produce the final segmentation. The effectiveness of this method was compared to that of similar methods proposed in the literature, which use the initial high-dimensional representation of the feature vectors. Experiments were performed on the Brodatz texture database. During the evaluation stage, the Rand index was used as a similarity measure between each segmentation and the corresponding ground-truth segmentation.
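The pipeline the abstract describes — a Gabor filter bank for multispectral features, a Laplacian-Eigenmaps-style embedding to reduce dimension, then K-means — can be sketched on a tiny synthetic two-texture image. Filter parameters, the image, and the library choices below are illustrative; the thesis's exact settings and its supervised step are not reproduced.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.filters import gabor_kernel
from sklearn.cluster import KMeans
from sklearn.manifold import SpectralEmbedding

# Illustrative pipeline: Gabor filter bank -> Laplacian-Eigenmaps-style
# spectral embedding -> K-means, on a synthetic two-texture image.

rng = np.random.default_rng(5)

# Synthetic image: horizontal stripes on the left half, vertical on the right
h, w = 32, 64
yy, xx = np.mgrid[0:h, 0:w]
img = np.where(xx < w // 2, np.sin(yy * 1.5), np.sin(xx * 1.5))
img = img + 0.1 * rng.standard_normal((h, w))

# Gabor filter bank over a few orientations and frequencies; the feature for
# each pixel is the magnitude of the complex filter response
feats = []
for theta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
    for freq in (0.15, 0.25):
        k = gabor_kernel(freq, theta=theta)
        re = convolve(img, np.real(k))
        im = convolve(img, np.imag(k))
        feats.append(np.sqrt(re ** 2 + im ** 2))
X = np.stack(feats, axis=-1).reshape(-1, len(feats))   # one row per pixel

# Non-linear dimensionality reduction (Laplacian Eigenmaps on a kNN graph)
emb = SpectralEmbedding(n_components=2, n_neighbors=10).fit_transform(X)

# Cluster the embedded pixels and reshape back into a segmentation map
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)
seg = labels.reshape(h, w)
print("left-half majority label: ", np.bincount(seg[:, :w // 2].ravel()).argmax())
print("right-half majority label:", np.bincount(seg[:, w // 2:].ravel()).argmax())
```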
