1

Temporally Correct Algorithms for Transaction Concurrency Control in Distributed Databases

Tuck, Terry W. 05 1900
Many activities comprise temporally dependent events that must be executed in a specific chronological order. Supportive software applications must preserve these temporal dependencies. Whenever the processing of such an application includes transactions submitted to a database that is shared with other such applications, the transaction concurrency control mechanisms within the database must also preserve the temporal dependencies. A basis for preserving temporal dependencies is established by using (within the applications and databases) real-time timestamps to identify and order events and transactions. Optimistic approaches to transaction concurrency control can be undesirable in such situations, as they allow incorrect results for database read operations. Although the incorrectness is detected prior to transaction committal and the corresponding transaction(s) restarted, the impact on the application or entity that submitted the transaction can be too costly.

Three transaction concurrency control algorithms are proposed in this dissertation. These algorithms are based on timestamp ordering and are designed to preserve temporal dependencies existing among data-dependent transactions. The algorithms produce execution schedules that are equivalent to temporally ordered serial schedules, where the temporal order is established by the transactions' start times. The algorithms provide this equivalence while supporting concurrency to the extent of out-of-order commits and reads. With respect to the stated concern with optimistic approaches, two of the proposed algorithms are risk-free and return only committed data-item values to read operations. Risk with the third algorithm is greatly reduced by its conservative bias. All three algorithms avoid deadlock while providing risk-free or reduced-risk operation. The performance of the algorithms is determined analytically and with experimentation. Experiments are performed using functional database management system models that implement the proposed algorithms and the well-known Conservative Multiversion Timestamp Ordering algorithm.
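For readers unfamiliar with the baseline technique, the sketch below shows the classic timestamp-ordering (TO) rule that such algorithms build on. It is illustrative only, not one of the dissertation's proposed risk-free variants, and the class and function names are hypothetical.

```python
# Minimal sketch of the classic timestamp-ordering rule; illustrative only.
# Names (DataItem, Abort, read, write) are hypothetical.

class DataItem:
    def __init__(self, value):
        self.value = value
        self.read_ts = 0   # largest timestamp of any transaction that read this item
        self.write_ts = 0  # largest timestamp of any transaction that wrote this item

class Abort(Exception):
    """Raised when an operation arrives 'too late' and its transaction must restart."""

def read(item, ts):
    # A transaction may not read a value written by a "future" transaction.
    if ts < item.write_ts:
        raise Abort(f"read by ts={ts} conflicts with write_ts={item.write_ts}")
    item.read_ts = max(item.read_ts, ts)
    return item.value

def write(item, value, ts):
    # A transaction may not overwrite a value already read or written
    # by a transaction with a later timestamp.
    if ts < item.read_ts or ts < item.write_ts:
        raise Abort(f"write by ts={ts} arrives too late")
    item.value, item.write_ts = value, ts
```

Under this basic rule a late-arriving operation forces a restart; the dissertation's contribution is to deliver the same timestamp-based ordering guarantees while avoiding uncommitted reads, excessive restarts, and deadlock.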
2

Enforcing Temporal and Ontological Dependencies Over Graphs

Alipourlangouri, Morteza January 2022
Graphs provide powerful abstractions and are widely used in different areas. There has been an increasing demand for the graph data model to represent data in many applications, such as network management, web page analysis, knowledge graphs, and social networks. These graphs are usually dynamic and represent the time-evolving relationships between entities. Enforcing and maintaining data quality in graphs is a critical task for decision making, operational efficiency, and accurate data analysis, as recent studies have shown that data scientists spend 60-80% of their time cleaning and organizing data [2]. This effort motivates the need for effective data cleaning tools to reduce the user burden. The study of data quality management focuses on a set of dimensions, including data consistency, data deduplication, information completeness, data currency, and data accuracy. Achieving all these data characteristics is often not possible in practice due to personnel costs and for performance reasons. In this thesis, we focus on tackling three problems in two data quality dimensions: data consistency and data deduplication.

To address the problem of data consistency over temporal graphs, we present a new class of data dependencies called Temporal Graph Functional Dependencies (TGFDs). TGFDs generalize functional dependencies to temporal graphs, viewed as a sequence of graph snapshots induced by time intervals, and enforce both topological constraints and attribute value dependencies that must be satisfied by these snapshots. We establish complexity results for the satisfiability and implication problems of TGFDs, propose a sound and complete axiomatization system for TGFDs, and present efficient parallel algorithms to detect inconsistencies in temporal graphs as violations of TGFDs (a simplified violation check is sketched after this abstract).

To address the data deduplication problem, we first address the problem of key discovery for graphs. Keys for graphs use topology and value constraints to uniquely identify entities in a graph database, and they are the main tools for data deduplication in graphs. We present two properties that define a key, minimality and support, and an algorithm to mine keys over graphs via frequent subgraph expansion. However, existing key constraints identify entities by enforcing label equality on node types. These constraints can be too restrictive to characterize structures and node labels that are syntactically different but semantically equivalent. Lastly, we propose a new class of key constraints, Ontological Graph Keys (OGKs), which extend conventional graph keys by ontological subgraph matching between entity labels and an external ontology. We study the entity matching problem with OGKs and develop efficient algorithms to perform entity matching based on a Chase procedure.

The proposed dependencies and algorithms in this thesis improve consistency detection in temporal graphs, automate the discovery of keys in graphs, and enrich the semantic expressiveness of graph keys. / Dissertation / Doctor of Science (PhD)
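As a rough illustration of the semantics (not the thesis's algorithms or data model), the sketch below checks a simplified TGFD-style constraint over timestamped graph snapshots: within a time window, nodes matching a pattern that agree on one attribute must agree on another. The snapshot format, attribute names, window size, and the constraint itself are all hypothetical.

```python
# Illustrative sketch of a simplified TGFD-style violation check.
# Constraint (hypothetical): within a window of DELTA days, any two "person"
# nodes that agree on 'employer' must also agree on 'workplace_city'.

from itertools import combinations

DELTA = 30  # days; stands in for the TGFD's time interval

def violations(snapshots):
    """snapshots: list of (day, nodes), where nodes is a list of attribute dicts."""
    found = []
    # Collect every (day, node) occurrence of the pattern, then compare
    # pairs whose timestamps fall within the constraint's interval.
    occurrences = [(day, n) for day, nodes in snapshots
                   for n in nodes if n.get("type") == "person"]
    for (d1, a), (d2, b) in combinations(occurrences, 2):
        if abs(d1 - d2) <= DELTA and a.get("employer") \
                and a.get("employer") == b.get("employer"):
            if a.get("workplace_city") != b.get("workplace_city"):
                found.append((d1, a, d2, b))
    return found
```

A naive pairwise scan like this is quadratic in the number of pattern matches, which is exactly why efficient parallel detection algorithms, such as those developed in the thesis, matter on large temporal graphs.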
3

Speech Enhancement Using Nonnegative Matrix Factorization and Hidden Markov Models

Mohammadiha, Nasser January 2013
Reducing interference noise in a noisy speech recording has been a challenging task for many years, and it has a variety of applications, for example in hands-free mobile communications, in speech recognition, and in hearing aids. Traditional single-channel noise reduction schemes, such as Wiener filtering, do not work satisfactorily in the presence of non-stationary background noise. Alternatively, supervised approaches, where the noise type is known in advance, lead to higher-quality enhanced speech signals. This dissertation proposes supervised and unsupervised single-channel noise reduction algorithms. We consider two classes of methods for this purpose: approaches based on nonnegative matrix factorization (NMF) and methods based on hidden Markov models (HMM).

The contributions of this dissertation can be divided into three main (overlapping) parts. First, we propose NMF-based enhancement approaches that use temporal dependencies of the speech signals. In a standard NMF, the important temporal correlations between consecutive short-time frames are ignored. We propose both continuous and discrete state-space nonnegative dynamical models. These approaches are used to describe the dynamics of the NMF coefficients or activations. We derive optimal minimum mean squared error (MMSE) or linear MMSE estimates of the speech signal using the probabilistic formulations of NMF. Our experiments show that using temporal dynamics in NMF-based denoising systems greatly improves performance. Additionally, this dissertation proposes an approach to learn the noise basis matrix online from the noisy observations. This relaxes the assumption of an a priori specified noise type and enables us to use the NMF-based denoising method in an unsupervised manner. Our experiments show that the proposed approach with online noise basis learning considerably outperforms state-of-the-art methods in different noise conditions.

Second, this thesis proposes two methods for NMF-based separation of sources with similar dictionaries. We suggest a nonnegative HMM (NHMM) for babble noise that is derived from a speech HMM. In this approach, the speech and babble signals share the same basis vectors, whereas the activations of the basis vectors differ for the two signals over time. We derive an MMSE estimator for the clean speech signal using the proposed NHMM. Objective evaluations and a subjective listening test show that the proposed babble model and the final noise reduction algorithm noticeably outperform the conventional methods. Moreover, the dissertation proposes another solution to separate a desired source from a mixture with arbitrarily low artifacts.

Third, an HMM-based algorithm to enhance the speech spectra using super-Gaussian priors is proposed. Our experiments show that speech discrete Fourier transform (DFT) coefficients have super-Gaussian rather than Gaussian distributions, even if we limit the speech data to come from a specific phoneme. We derive a new MMSE estimator for the speech spectra that uses super-Gaussian priors. The results of our evaluations using the developed noise reduction algorithm support the super-Gaussianity hypothesis.
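To make the baseline concrete, here is a minimal sketch of supervised NMF-based denoising, the standard setting that the dissertation's dynamical models extend. It assumes speech and noise basis matrices W_speech and W_noise trained offline (e.g., by factorizing clean training spectrograms); the function names and the plain Euclidean multiplicative update are illustrative choices, not the dissertation's estimators.

```python
# Sketch of supervised NMF denoising under a fixed, pre-trained dictionary.
import numpy as np

def infer_activations(V, W, n_iter=200, eps=1e-10):
    """Multiplicative updates for H minimizing ||V - W H||_F with W held fixed."""
    H = np.random.rand(W.shape[1], V.shape[1])
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
    return H

def enhance(V_noisy, W_speech, W_noise, eps=1e-10):
    """V_noisy: magnitude spectrogram (freq x frames). Returns a speech estimate."""
    W = np.hstack([W_speech, W_noise])   # joint dictionary factorizes the mixture
    H = infer_activations(V_noisy, W)
    k = W_speech.shape[1]
    V_speech = W_speech @ H[:k]          # speech component of the factorization
    V_total = W @ H
    mask = V_speech / (V_total + eps)    # Wiener-like soft mask in [0, 1]
    return mask * V_noisy
```

The soft mask here is the simple counterpart of the MMSE estimators discussed above; the dissertation's key departure is replacing this frame-independent inference of H with dynamical models over the activations.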
4

Predicting stock market trends using time-series classification with dynamic neural networks

Mocanu, Remus 09 1900
The objective of this research was to evaluate the efficacy of a classification setting for predicting stock market trends. Traditional forecasting-based methods, which target the immediate next time step, often encounter challenges due to non-stationary data, compromising model accuracy and stability. In contrast, our classification approach predicts broader stock price movements over multiple time steps, aiming to reduce data non-stationarity. Our dataset, derived from various NASDAQ-100 stocks and informed by multiple technical indicators, was used with a Mixture of Experts composed of a soft gating mechanism and a transformer-based architecture. Although the main method of this experiment did not prove as successful as we had initially hoped, the methodology was capable of surpassing all baselines in certain instances within a few epochs, demonstrating the lowest false discovery rate while maintaining an acceptable, non-zero recall rate. Given these results, our approach not only encourages further research in this direction, in which finer tuning of the model can be implemented, but also offers traders a different tool for predicting stock market trends, using a classification setting and a problem defined differently from the norm.
It is important to note, however, that our study is based on NASDAQ-100 data, limiting the model's immediate applicability to other stock markets or varying economic conditions. Future research could enhance performance by integrating company fundamentals and conducting sentiment analysis on stock-related news, as our current work considers only technical indicators and stock-specific numerical features.
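As an illustration of the reframing described above (next-step forecasting replaced by multi-step trend classification), the sketch below derives three-class trend labels from a price series. The horizon, threshold, and function name are hypothetical; the thesis's actual labeling scheme is not specified here.

```python
# Hedged sketch: turning a price series into multi-step trend labels.
import pandas as pd

def trend_labels(close: pd.Series, horizon: int = 10, band: float = 0.01) -> pd.Series:
    """Label each time step: 1 = uptrend, -1 = downtrend, 0 = sideways.

    horizon: how many steps ahead the trend is measured (multi-step target);
    band:    minimum relative move to count as a trend rather than noise.
    """
    future_return = close.shift(-horizon) / close - 1.0
    labels = pd.Series(0, index=close.index)
    labels[future_return > band] = 1
    labels[future_return < -band] = -1
    # The last `horizon` rows have no future price; drop them from training.
    return labels.iloc[:-horizon]
```

Labels produced this way can then feed any classifier, such as the transformer-based Mixture of Experts described in the abstract, and the wider horizon is what dampens the step-to-step non-stationarity the abstract mentions.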
