431

Using a Hidden Markov Model as a Financial Advisor

Lindqvist, Emil, Andersson, Robert January 2021 (has links)
People have been trying to predict the stock market since its inception, and financial investors have made it their profession. What makes predicting the stock market such a hard task is its seemingly random dependency on everything from Elon Musk's tweets to future earnings. Machine learning handles this apparent randomness with ease, and we try it out by implementing a Hidden Markov Model. We model two different stocks, Tesla, Inc. and The Coca-Cola Company, and use the forecasted prices as a template for a simple trading algorithm. Our approach calculates the log-likelihood of the preceding observations and correlates it with the log-likelihoods of all preceding subsequences of the same length, shifting the time window into the past one day at a time. The results show that modeling two stocks of different volatility is possible, but using the result as a template for trading was inconclusive, with less than 50 percent successful trades for both of the modelled stocks. / Bachelor's thesis in electrical engineering 2021, KTH, Stockholm
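The sliding-window likelihood comparison described in this abstract can be sketched as follows. The two-state model, its probabilities, and the toy up/down return sequence are hypothetical stand-ins for the thesis's fitted models and real price data:

```python
import math

# Hypothetical 2-state discrete HMM over daily moves (0 = down day, 1 = up day).
# All parameters are illustrative, not the thesis's fitted values.
START = [0.5, 0.5]
TRANS = [[0.8, 0.2], [0.3, 0.7]]   # transition probabilities between hidden states
EMIT = [[0.7, 0.3], [0.2, 0.8]]    # emission probabilities per hidden state

def log_likelihood(obs):
    """Scaled forward algorithm: log P(obs | model)."""
    alpha = [START[s] * EMIT[s][obs[0]] for s in range(2)]
    z = sum(alpha)
    ll = math.log(z)
    alpha = [a / z for a in alpha]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * TRANS[p][s] for p in range(2)) * EMIT[s][o]
                 for s in range(2)]
        z = sum(alpha)
        ll += math.log(z)
        alpha = [a / z for a in alpha]
    return ll

# Toy sequence of up/down days; the thesis used real Tesla and Coca-Cola prices.
returns = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1]
w = 5
target = log_likelihood(returns[-w:])
# Slide the window one day back at a time and find the most similar past subsequence.
closest = min(range(len(returns) - w),
              key=lambda i: abs(log_likelihood(returns[i:i + w]) - target))
```

The past window whose log-likelihood is closest to the latest window's would then seed the trading rule, e.g. by assuming the day that followed that historical window repeats.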
432

Generating Learning Algorithms: Hidden Markov Models as a Case Study

Szymczak, Daniel 04 1900 (has links)
This thesis presents the design and implementation of a source code generator for dealing with Bayesian statistics. The specific focus of this case study is to produce usable source code for handling Hidden Markov Models (HMMs) from a Domain Specific Language (DSL). Domain specific languages are used to allow domain experts to design their source code from the perspective of the problem domain. The goal of designing in such a way is to increase development productivity without requiring extensive programming knowledge. / Master of Applied Science (MASc)
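The abstract does not show the DSL's syntax, but the generate-code-from-specification idea can be illustrated with a toy example: a declarative model description rendered into runnable source. The dict-based spec and template below are invented for illustration only:

```python
# Toy declarative specification of an HMM (the thesis's DSL syntax is not
# given in the abstract; this dict form is purely illustrative).
SPEC = {
    "states": ["Rainy", "Sunny"],
    "start": {"Rainy": 0.6, "Sunny": 0.4},
}

TEMPLATE = '''def initial_distribution():
    """Generated from the model specification."""
    return {start!r}
'''

def generate(spec):
    """Render the specification into runnable Python source."""
    return TEMPLATE.format(start=spec["start"])

namespace = {}
exec(generate(SPEC), namespace)   # compile and load the generated function
dist = namespace["initial_distribution"]()
```

A real generator would emit the full parameter set plus inference routines (forward, Viterbi, Baum-Welch), but the pipeline — domain-level spec in, source code out — is the same.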
433

IDENTIFICATION OF PROTEIN PARTNERS FOR NIBP, A NOVEL NIK- AND IKKB-BINDING PROTEIN, THROUGH EXPERIMENTAL, COMPUTATIONAL AND BIOINFORMATICS TECHNIQUES

Adhikari, Sombudha January 2013 (has links)
NIBP is a prototype member of a novel protein family. It forms a novel NIK-NIBP-IKKB subcomplex and enhances cytokine-induced, IKKB-mediated NFKB activation. It is also named TRAPPC9, a key member of transport protein particle (TRAPP) complex II, which is essential in the trans-Golgi network (TGN). The signaling pathways and molecular mechanisms of NIBP action remain largely unknown. The aim of this research is to identify proteins that potentially interact with NIBP and thereby regulate NFKB and other, as yet unknown, signaling pathways. At the laboratory of Dr. Wenhui Hu in the Department of Neuroscience, Temple University, sixteen partner proteins were experimentally identified as potential NIBP binders. NIBP is a novel protein with no entry in the Protein Data Bank. From a computational and bioinformatics standpoint, we use secondary-structure and protein-disorder prediction, together with homology-based structural modeling, to form hypotheses about protein-protein interactions between NIBP and the partner proteins. Structurally, NIBP contains three distinct regions. The first region, consisting of 200 amino acids, forms a hybrid helix- and beta-sheet-based domain possibly similar to the Sybindin domain. The second region, comprising approximately 310 residues, forms a tetratricopeptide repeat (TPR) zone. The third region is a 675-residue-long zone of beta sheets and loops, with as many as 35 strands and only 2 helices, shared by Gryzun-domain-containing proteins; it is likely to form two or three beta-sheet sandwiches. The TPR regions of many proteins tend to bind peptides from the disordered regions of other proteins, and many of the 16 potential binding proteins have high levels of disorder. These data suggest that the TPR region of NIBP most likely binds many of these 16 proteins through peptides and other domains.
It is also possible that the Sybindin-like domain and the beta-sheet sandwiches of the Gryzun-like domain bind some of these proteins. / Bioengineering
434

Automated Interpretation of Abnormal Adult Electroencephalograms

Lopez de Diego, Silvia Isabel January 2017 (has links)
Interpretation of electroencephalograms (EEGs) is a process that is still dependent on the subjective analysis of the examiner. The interrater agreement, even for relevant clinical events such as seizures, can be low. For instance, the differences between interictal, ictal, and post-ictal EEGs can be quite subtle. Before making such low-level interpretations of the signals, neurologists often classify EEG signals as either normal or abnormal. Even though the characteristics of a normal EEG are well defined, there are some factors, such as benign variants, that complicate this decision. However, neurologists can make this classification accurately by examining only the initial portion of the signal. Therefore, in this thesis, we explore the hypothesis that high-performance machine classification of an EEG signal as abnormal can approach human performance using only the first few minutes of an EEG recording. The goal of this thesis is to establish a baseline for automated classification of abnormal adult EEGs using state-of-the-art machine learning algorithms and a big data resource – The TUH EEG Corpus. A demographically balanced subset of the corpus was used to evaluate performance of the systems. The data was partitioned into a training set (1,387 normal and 1,398 abnormal files), and an evaluation set (150 normal and 130 abnormal files). A system based on hidden Markov Models (HMMs) achieved an error rate of 26.1%. The addition of a Stacked Denoising Autoencoder (SdA) post-processing step (HMM-SdA) further decreased the error rate to 24.6%. The overall best result (21.2% error rate) was achieved by a deep learning system that combined a Convolutional Neural Network and a Multilayer Perceptron (CNN-MLP).
Even though the performance of our algorithm still lags human performance, which approaches a 1% error rate for this task, we have established an experimental paradigm that can be used to explore this application and have demonstrated a promising baseline using state-of-the-art deep learning technology. / Electrical and Computer Engineering
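An HMM-based normal/abnormal classifier of the kind described here reduces, at decision time, to a two-model likelihood comparison: score the recording under a model trained on each class and pick the higher scorer. The sketch below uses simple Gaussian log-likelihoods as stand-ins for trained HMM scores; the means and variances are hypothetical, not values from the thesis:

```python
import math

def gaussian_loglik(xs, mu, sigma):
    """Log-likelihood of samples under N(mu, sigma^2) -
    a stand-in for a trained class HMM's forward score."""
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in xs)

def classify(segment, models):
    """Label the EEG segment with whichever class model scores it higher."""
    return max(models, key=lambda label: models[label](segment))

# Hypothetical per-class scorers; a real system would use HMMs trained
# on cepstral or spectral features of the first minutes of the recording.
models = {
    "normal":   lambda xs: gaussian_loglik(xs, mu=0.0, sigma=1.0),
    "abnormal": lambda xs: gaussian_loglik(xs, mu=3.0, sigma=2.0),
}
print(classify([0.1, -0.2, 0.3], models))   # prints "normal"
```

The deep-learning variants in the thesis replace the likelihood scorers but keep the same argmax-over-classes decision rule.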
435

Estimation of Probability of Failure for Damage-Tolerant Aerospace Structures

Halbert, Keith January 2014 (has links)
The majority of aircraft structures are designed to be damage-tolerant such that safe operation can continue in the presence of minor damage. It is necessary to schedule inspections so that minor damage can be found and repaired. It is generally not possible to perform structural inspections prior to every flight. The scheduling is traditionally accomplished through a deterministic set of methods referred to as Damage Tolerance Analysis (DTA). DTA has proven to produce safe aircraft but does not provide estimates of the probability of failure of future flights or the probability of repair of future inspections. Without these estimates maintenance costs cannot be accurately predicted. Also, estimation of failure probabilities is now a regulatory requirement for some aircraft. The set of methods concerned with the probabilistic formulation of this problem are collectively referred to as Probabilistic Damage Tolerance Analysis (PDTA). The goal of PDTA is to control the failure probability while holding maintenance costs to a reasonable level. This work focuses specifically on PDTA for fatigue cracking of metallic aircraft structures. The growth of a crack (or cracks) must be modeled using all available data and engineering knowledge. The length of a crack can be assessed only indirectly through evidence such as non-destructive inspection results, failures or lack of failures, and the observed severity of usage of the structure. The current set of industry PDTA tools are lacking in several ways: they may in some cases yield poor estimates of failure probabilities, they cannot realistically represent the variety of possible failure and maintenance scenarios, and they do not allow for model updates which incorporate observed evidence. A PDTA modeling methodology must be flexible enough to estimate accurately the failure and repair probabilities under a variety of maintenance scenarios, and be capable of incorporating observed evidence as it becomes available. 
This dissertation describes and develops new PDTA methodologies that directly address the deficiencies of the currently used tools. The new methods are implemented as a free, publicly licensed and open source R software package that can be downloaded from the Comprehensive R Archive Network. The tools consist of two main components. First, an explicit (and expensive) Monte Carlo approach is presented which simulates the life of an aircraft structural component flight-by-flight. This straightforward MC routine can be used to provide defensible estimates of the failure probabilities for future flights and repair probabilities for future inspections under a variety of failure and maintenance scenarios. This routine is intended to provide baseline estimates against which to compare the results of other, more efficient approaches. Second, an original approach is described which models the fatigue process and future scheduled inspections as a hidden Markov model. This model is solved using a particle-based approximation and the sequential importance sampling algorithm, which provides an efficient solution to the PDTA problem. Sequential importance sampling is an extension of importance sampling to a Markov process, allowing for efficient Bayesian updating of model parameters. This model updating capability, the benefit of which is demonstrated, is lacking in other PDTA approaches. The results of this approach are shown to agree with the results of the explicit Monte Carlo routine for a number of PDTA problems. Extensions to the typical PDTA problem, which cannot be solved using currently available tools, are presented and solved in this work. These extensions include incorporating observed evidence (such as non-destructive inspection results), more realistic treatment of possible future repairs, and the modeling of failure involving more than one crack (the so-called continuing damage problem). 
The described hidden Markov model / sequential importance sampling approach to PDTA has the potential to improve aerospace structural safety and reduce maintenance costs by providing a more accurate assessment of the risk of failure and the likelihood of repairs throughout the life of an aircraft. / Statistics
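The particle-based updating described above can be sketched in a few lines: propagate crack-size particles through a growth model, then reweight them by the probability that an inspection which found nothing would have missed each particle's crack. The growth law, probability-of-detection curve, and all thresholds below are hypothetical placeholders, not the dissertation's models:

```python
import random

random.seed(0)
N = 2000
# Hypothetical spread of initial crack sizes in mm; equal initial weights.
particles = [{"a": random.uniform(0.1, 1.0), "w": 1.0 / N} for _ in range(N)]

def grow(p, flights):
    """Toy exponential growth per flight (stand-in for a Paris-law model)."""
    p["a"] *= 1.001 ** flights

def pod(a):
    """Hypothetical probability of detection for a crack of size a mm."""
    return min(1.0, a / 10.0)

def update_no_detection(particles):
    """Sequential importance sampling step after a clean inspection:
    reweight each particle by the chance its crack would have been missed."""
    for p in particles:
        p["w"] *= 1.0 - pod(p["a"])
    z = sum(p["w"] for p in particles)
    for p in particles:
        p["w"] /= z

for p in particles:
    grow(p, flights=2000)
update_no_detection(particles)

# Weighted probability that the crack exceeds a hypothetical critical size.
a_crit = 5.0
p_exceed = sum(p["w"] for p in particles if p["a"] > a_crit)
```

Repeating grow/update steps flight block by flight block gives the per-inspection repair probabilities and per-flight failure probabilities the dissertation is after, with each clean inspection shifting weight toward small-crack particles.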
436

Recognition of off-line printed Arabic text using Hidden Markov Models.

Al-Muhtaseb, Husni A., Mahmoud, Sabri A., Qahwaji, Rami S.R. January 2008 (has links)
This paper describes a technique for automatic recognition of off-line printed Arabic text using Hidden Markov Models. In this work different sizes of overlapping and non-overlapping hierarchical windows are used to generate 16 features from each vertical sliding strip. Eight different Arabic fonts were used for testing (viz. Arial, Tahoma, Akhbar, Thuluth, Naskh, Simplified Arabic, Andalus, and Traditional Arabic). It was experimentally shown that different fonts achieve their highest recognition rates at different numbers of states (5 or 7) and codebook sizes (128 or 256). Arabic text is cursive, and each character may have up to four different shapes based on its location in a word. This work considered each shape as a different class, resulting in a total of 126 classes (compared to 28 Arabic letters). The achieved average recognition rates were between 98.08% and 99.89% for the eight experimental fonts. The main contributions of this work are the novel hierarchical sliding window technique using only 16 features for each sliding window, considering each shape of Arabic characters as a separate class, bypassing the need for segmenting Arabic text, and its applicability to other languages.
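The hierarchical-window feature idea can be sketched as follows: for each vertical strip of the binarized page image, sum the ink pixels inside windows of several heights, both non-overlapping and overlapping, and concatenate the sums into one fixed-length vector. The strip size and window hierarchy below are invented (a 14-feature toy variant of the paper's 16-feature scheme):

```python
# A vertical strip of a binarized text image: rows of 0/1 pixels.
# Hypothetical 8-row, 3-column strip for illustration.
strip = [
    [0, 1, 1],
    [0, 1, 0],
    [1, 1, 0],
    [1, 0, 0],
    [0, 0, 1],
    [0, 1, 1],
    [1, 1, 0],
    [0, 0, 0],
]

def window_sums(strip, height, step):
    """Ink-pixel count in each vertical window of the given height and step."""
    return [
        sum(sum(row) for row in strip[i:i + height])
        for i in range(0, len(strip) - height + 1, step)
    ]

# Hierarchy: two non-overlapping scales, one overlapping scale, one global sum.
features = (
    window_sums(strip, height=2, step=2)     # 4 coarse, non-overlapping
    + window_sums(strip, height=4, step=4)   # 2 coarser, non-overlapping
    + window_sums(strip, height=2, step=1)   # 7 overlapping (half-step)
    + [sum(sum(row) for row in strip)]       # 1 global count -> 14 features
)
```

Sliding the strip across the line image yields the observation sequence the HMM is trained on, which is what lets the method skip explicit character segmentation.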
437

The complete Heyting algebra of subsystems and contextuality

Vourdas, Apostolos January 2013 (has links)
The finite set of subsystems of a finite quantum system with variables in Z(n) is studied as a Heyting algebra. The physical meaning of the logical connectives is discussed. It is shown that disjunction of subsystems is a more general concept than superposition. Consequently, the quantum probabilities related to commuting projectors in the subsystems are incompatible with associativity of the join in the Heyting algebra, unless the variables belong to the same chain. This leads to contextuality, which in the present formalism has the chains in the Heyting algebra as its contexts. Logical Bell inequalities, which contain "Heyting factors," are discussed. The formalism is also applied to the infinite set of all finite quantum systems, which is appropriately enlarged in order to become a complete Heyting algebra.
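For reference, the Heyting connectives on a lattice of subsystems can be written out. These are the standard lattice-theoretic definitions, not necessarily the paper's exact construction over Z(n):

```latex
% Standard Heyting-algebra connectives on a lattice (L, \preceq) with bottom O.
% The relative pseudocomplement (\rightarrow) is what distinguishes a Heyting
% algebra from a Boolean one: \neg A need not satisfy A \vee \neg A = 1.
A \wedge B = A \sqcap B, \qquad
A \vee B = A \sqcup B, \qquad
A \rightarrow B = \bigvee \{\, C \in L \mid C \wedge A \preceq B \,\}, \qquad
\neg A = A \rightarrow O.
```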
438

Hidden Markov models and alert correlations for the prediction of advanced persistent threats

Ghafir, Ibrahim, Kyriakopoulos, K.G., Lambotharan, S., Aparicio-Navarro, F.J., Assadhan, B., Binsalleeh, H., Diab, D.M. 24 January 2020 (has links)
Cyber security has become a matter of global interest, and several attacks target industrial companies and governmental organizations. Advanced persistent threats (APTs) have emerged as a new and complex version of multi-stage attacks (MSAs), targeting selected companies and organizations. Current APT detection systems focus on raising detection alerts rather than predicting APTs. Forecasting the APT stages not only reveals the APT life cycle in its early stages but also helps to understand the attacker's strategies and aims. This paper proposes a novel intrusion detection system for APT detection and prediction. The system comprises two main phases; the first achieves attack scenario reconstruction, using a correlation framework to link the elementary alerts that belong to the same APT campaign. The correlation is based on matching the attributes of the elementary alerts that are generated over a configurable time window. The second phase is attack decoding, which uses a hidden Markov model (HMM) to determine the most likely sequence of APT stages for a given sequence of correlated alerts. Moreover, a prediction algorithm is developed to predict the next step of the APT campaign after computing the probability of each APT stage being the attacker's next step. The proposed approach estimates the sequence of APT stages with a prediction accuracy of at least 91.80%. In addition, it predicts the next step of the APT campaign with an accuracy of 66.50%, 92.70%, and 100% based on two, three, and four correlated alerts, respectively. / Funded by the Gulf Science, Innovation and Knowledge Economy Programme of the U.K. Government under UK-Gulf Institutional Link Grant IL 279339985 and in part by the Engineering and Physical Sciences Research Council (EPSRC), U.K., under Grant EP/R006385/1.
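The attack-decoding phase — mapping a sequence of correlated alerts to the most likely sequence of APT stages — is the classic Viterbi decoding problem. A minimal sketch, with stage names, alert types, and all probabilities invented for illustration (the paper's APT life-cycle model has more stages):

```python
import math

# Hypothetical APT stages and alert types; all probabilities are illustrative.
STATES = ["recon", "delivery", "exfiltration"]
START = {"recon": 0.8, "delivery": 0.15, "exfiltration": 0.05}
TRANS = {
    "recon":        {"recon": 0.5, "delivery": 0.4, "exfiltration": 0.1},
    "delivery":     {"recon": 0.1, "delivery": 0.5, "exfiltration": 0.4},
    "exfiltration": {"recon": 0.1, "delivery": 0.2, "exfiltration": 0.7},
}
EMIT = {
    "recon":        {"scan": 0.7, "malware": 0.2, "dns_tunnel": 0.1},
    "delivery":     {"scan": 0.2, "malware": 0.7, "dns_tunnel": 0.1},
    "exfiltration": {"scan": 0.1, "malware": 0.1, "dns_tunnel": 0.8},
}

def viterbi(alerts):
    """Most likely stage sequence for a sequence of correlated alerts."""
    V = [{s: (math.log(START[s]) + math.log(EMIT[s][alerts[0]]), [s])
          for s in STATES}]
    for a in alerts[1:]:
        row = {}
        for s in STATES:
            lp, path = max(
                (V[-1][p][0] + math.log(TRANS[p][s]), V[-1][p][1])
                for p in STATES
            )
            row[s] = (lp + math.log(EMIT[s][a]), path + [s])
        V.append(row)
    return max(V[-1].values())[1]

print(viterbi(["scan", "malware", "dns_tunnel"]))
# -> ['recon', 'delivery', 'exfiltration']
```

Next-step prediction then follows from the decoded final stage: multiply its transition row into the current state distribution and take the most probable successor.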
439

Intent Recognition Of Rotation Versus Translation Movements In Human-Robot Collaborative Manipulation Tasks

Nguyen, Vinh Q 07 November 2016 (has links) (PDF)
The goal of this thesis is to enable a robot to actively collaborate with a person to move an object in an efficient, smooth and robust manner. For a robot to actively assist a person, it is key that the robot recognizes the actions or phases of a collaborative task. This requires the robot to have the ability to estimate a person's movement intent. A hurdle in collaboratively moving an object is determining whether the partner is trying to rotate or translate the object (the rotation versus translation problem). In this thesis, Hidden Markov Models (HMMs) are used to recognize human intent of rotation or translation in real-time. Based on this recognition, an appropriate impedance control mode is selected to assist the person. The approach is tested on a seven-degree-of-freedom industrial robot, KUKA LBR iiwa 14 R820, working with a human partner during manipulation tasks. Results show that the HMMs can estimate human intent with an accuracy of 87.5% by using only haptic data recorded from the robot. Integrated with impedance control, the robot is able to collaborate smoothly and efficiently with a person during the manipulation tasks. The HMMs are compared with a switching-function-based approach that uses interaction force magnitudes to recognize rotation versus translation. The results show that HMMs predict correctly when fast rotation or slow translation is desired, whereas the switching function based on force magnitudes performs poorly.
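The switching-function baseline that the HMMs are compared against can be sketched as a simple threshold on measured interaction wrench magnitudes: call the intent "rotation" when the torque norm dominates. The threshold value and function names below are hypothetical, not the thesis's tuned parameters:

```python
import math

def wrench_magnitudes(forces, torques):
    """Euclidean norms of the measured force (N) and torque (Nm) vectors."""
    return (math.sqrt(sum(f * f for f in forces)),
            math.sqrt(sum(t * t for t in torques)))

def switching_function(forces, torques, torque_threshold=2.0):
    """Baseline intent classifier: rotation when torque magnitude is large.
    The 2.0 Nm threshold is a hypothetical placeholder; the thesis found
    HMMs over sequences of haptic data more reliable than such rules."""
    _, tau = wrench_magnitudes(forces, torques)
    return "rotation" if tau > torque_threshold else "translation"

print(switching_function(forces=[10.0, 0.0, 0.0], torques=[0.1, 0.0, 0.2]))
# -> translation
```

A fixed threshold ignores temporal context, which is a plausible reason it mislabels fast rotations and slow translations, exactly the cases where the sequence-aware HMMs did well.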
440

A Methodology to Assess and Rank the Effects of Hidden Failures in Protection Schemes based on Regions of Vulnerability and Index of Severity

Elizondo, David C. 21 April 2003 (has links)
Wide-area disturbances are power outages occurring over large geographical regions that dramatically affect power system reliability, causing interruptions of the electric supply to residential, commercial, and industrial users. Historically, wide-area disturbances have greatly affected societies. Virginia Tech directed a research project on the causes of major disturbances in electric power systems. Research results showed that the role of the power system's protection schemes in wide-area disturbances is critical: incorrect operations of protection schemes have contributed to the spread of disturbances. This research defined hidden failures of protection schemes and showed that such failures contributed to the degradation of 70-80 percent of wide-area disturbances. During a wide-area disturbance analysis, it was found that hidden failures in protection schemes caused the disconnection of power system elements in an incorrect and undesirable manner, contributing to the disturbance's degradation. This dissertation presents a methodology to assess and rank the effects of unwanted disconnections caused by hidden failures, based on Regions of Vulnerability and an index of severity for the protection schemes. The developed methodology for evaluating the Region of Vulnerability found that kilometers is the indicator that most accurately relates the Region of Vulnerability to the single-line diagram. In representing the Region of Vulnerability in the power system, we identified transmission-line segments in which the occurrence of a fault causes the relay to operate, producing the unwanted disconnection associated with a hidden failure. The results on the test system show that infeed currents restrain the Region of Vulnerability from spreading along power system elements. Finally, the methodology to compute the index of severity is developed.
The index of severity has the objective of ranking the protection schemes, considers the dynamics of the protection schemes, and evaluates the overall disturbance consequence under the static and dynamic perspectives. / Ph. D.
