21

Essays on stock liquidity

Haykir, Ozkan January 2017 (has links)
This thesis consists of three main empirical chapters on the effect of stock liquidity on exchange markets. The first (Chapter 2) investigates the pricing ability of an illiquidity measure, namely the Amihud measure (Amihud, 2002), in different sample periods. The second (Chapter 3) establishes the causal link between two well-known market quality factors, liquidity and idiosyncratic volatility, adopting a two-stage least squares (2SLS) methodology. The last empirical chapter (Chapter 4) revisits the limits-to-arbitrage theory and studies the link between stock liquidity and momentum anomaly profit, employing a difference-in-differences approach. The overall contribution of this thesis is to employ causal techniques in the context of asset pricing in order to eliminate potential endogeneity problems while investigating the relation between stock liquidity and exchange markets. Chapter 2 investigates whether the Amihud measure is priced differently depending on whether investors are optimistic or pessimistic about the future of the stock markets. The results show that the Amihud measure is priced in the low-sentiment period and that there is an illiquidity premium when investor sentiment is low. Chapter 3 studies whether a change in stock liquidity has an impact on idiosyncratic volatility, employing causal techniques. Prior studies investigate the link between liquidity and idiosyncratic volatility, but none focus on the potential problem of reverse causality. To overcome this problem, I use the exogenous event of decimalisation as an instrumental variable and employ a two-stage least squares approach to identify the impact of liquidity on idiosyncratic volatility. The results suggest that an increase in illiquidity causes an increase in idiosyncratic volatility. As an additional result, my study shows that the reduction in tick size brought about by decimalisation improves firm-level stock liquidity. Chapter 4 examines whether liquid stocks earn more momentum anomaly profit compared to illiquid stocks, using the implementation of different tick sizes for different price ranges on the American Stock Exchange (AMEX) between February 1995 and April 1997. This programme provides plausibly exogenous variation that disentangles the endogeneity issue and allows me to examine the impact of liquidity on momentum within a difference-in-differences framework. The results show that liquid stocks earn more momentum profit than illiquid stocks.
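The Amihud (2002) measure referenced in Chapter 2 is the average ratio of absolute daily return to daily dollar volume. A minimal sketch of its computation (the column names are illustrative assumptions, not from the thesis):

```python
import pandas as pd

def amihud_illiquidity(daily: pd.DataFrame) -> pd.Series:
    """Amihud (2002) illiquidity: mean of |return| / dollar volume per stock.

    Expects columns 'stock', 'ret' (daily return) and 'dollar_volume';
    these names are assumed for illustration.
    """
    daily = daily[daily["dollar_volume"] > 0].copy()
    daily["illiq"] = daily["ret"].abs() / daily["dollar_volume"]
    # Average the daily ratios within each stock (often scaled by 10^6).
    return daily.groupby("stock")["illiq"].mean() * 1e6
```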
22

Featured anomaly detection methods and applications

Huang, Chengqiang January 2018 (has links)
Anomaly detection is a fundamental research topic that has been widely investigated. From critical industrial systems, e.g., network intrusion detection systems, to people's daily activities, e.g., mobile fraud detection, anomaly detection has become the first vital resort to protect and secure public and personal property. Although anomaly detection methods have been under consistent development over the years, the explosive growth of data volume and the continued dramatic variation of data patterns pose great challenges to anomaly detection systems and fuel the demand for more intelligent anomaly detection methods with distinct characteristics to cope with various needs. To this end, this thesis starts by presenting a thorough review of existing anomaly detection strategies and methods, elaborating their advantages and disadvantages. Afterward, four distinctive anomaly detection methods, especially for time series, are proposed, each aimed at a specific need of anomaly detection under different scenarios, e.g., enhanced accuracy, interpretable results, and self-evolving models. Experiments are presented and analysed to offer a better understanding of the performance of the methods and their distinct features. The key contributions of this thesis are as follows: 1) Support Vector Data Description (SVDD) is investigated as a primary method to achieve accurate anomaly detection. The applicability of SVDD over noisy time series datasets is carefully examined, and it is demonstrated that relaxing the decision boundary of SVDD consistently results in better accuracy in network time series anomaly detection. Theoretical analysis of the parameter used in the model is also presented to ensure the validity of relaxing the decision boundary. 2) To support a clear explanation of detected time series anomalies, i.e., anomaly interpretation, the periodic pattern of the time series is treated as contextual information and integrated into SVDD. The formulation of SVDD with contextual information maintains multiple discriminants, which help in distinguishing the root causes of the anomalies. 3) To further analyse a dataset for anomaly detection and interpretation, Convex Hull Data Description (CHDD) is developed to perform one-class classification together with data clustering. CHDD approximates the convex hull of a given dataset with the extreme points, which constitute a dictionary of data representatives. Using this dictionary, CHDD can represent and cluster all the normal data instances, so that anomaly detection comes with a degree of interpretation. 4) Besides better accuracy and interpretability, better solutions for anomaly detection over streaming data with evolving patterns are also researched. Under the framework of Reinforcement Learning (RL), a time series anomaly detector that is continually trained to cope with evolving patterns is designed. Because the detector is trained with labelled time series, it avoids the cumbersome work of threshold setting and the uncertain definitions of anomalies in time series anomaly detection tasks.
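SVDD itself is not in scikit-learn, but a One-Class SVM with an RBF kernel is a closely related boundary method and gives a feel for how such a detector is trained and applied to time series windows; the window length and nu value below are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import OneClassSVM

def detect_anomalies(series: np.ndarray, window: int = 24, nu: float = 0.05):
    """Flag anomalous windows of a time series with a one-class boundary model.

    A stand-in for SVDD: with an RBF kernel, One-Class SVM learns a similar
    closed decision boundary around the normal data.
    """
    # Embed the series as overlapping windows (rows = candidate instances).
    X = np.lib.stride_tricks.sliding_window_view(series, window)
    model = OneClassSVM(kernel="rbf", nu=nu, gamma="scale").fit(X)
    return model.predict(X) == -1  # True where a window falls outside the boundary
```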
23

Lightweight Network Intrusion Detection

Chen, Ya-lin 26 July 2005 (has links)
Exploit codes based on system vulnerabilities are often used by attackers to attack target computers or services. Such exploit programs often send attack packets within the first few packets after a connection is established with the target machine or service, and such attacks are often launched via the Telnet service. A lightweight network-based intrusion detection system is proposed for detecting such attacks in Telnet traffic. The proposed system examines only the first few packets after each Telnet connection is established and uses only part of each packet's data rather than the whole packet, a design that greatly reduces system load. The approach is anomaly detection: the system characterizes normal traffic behaviour and builds a normal model from the filtered normal traffic. In the detection phase, the system measures the deviation of each filtered packet from the normal model via an anomaly score function, so a packet that deviates more receives a higher anomaly score. Finally, the 1999 DARPA Intrusion Detection Evaluation Data Set, which contains 5 days of training data, 10 days of testing data, and 44 attack instances of 16 attack types, is used to evaluate the proposed system. The system achieves a detection rate of 73% at a low false alarm rate of 2 false alarms per day, and 80% on the hard-to-detect attacks that were poorly detected in the 1999 DARPA evaluation.
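The abstract does not spell out the score function, but a common lightweight choice for payload-based anomaly detection is a byte-frequency model scored by deviation from the learned distribution; the sketch below is an assumption in that spirit, not the system's actual algorithm:

```python
import numpy as np

class ByteFrequencyModel:
    """Toy normal model: average byte-value histogram of packet payload prefixes.

    A hypothetical stand-in for the thesis's anomaly score function.
    """

    def __init__(self, prefix_len: int = 64):
        self.prefix_len = prefix_len
        self.mean = None
        self.std = None

    def fit(self, payloads: list[bytes]):
        hists = np.array([np.bincount(np.frombuffer(p[: self.prefix_len], dtype=np.uint8),
                                      minlength=256) for p in payloads], dtype=float)
        self.mean = hists.mean(axis=0)
        self.std = hists.std(axis=0) + 1e-6  # avoid division by zero
        return self

    def anomaly_score(self, payload: bytes) -> float:
        """Higher score = larger deviation from the normal byte distribution."""
        h = np.bincount(np.frombuffer(payload[: self.prefix_len], dtype=np.uint8),
                        minlength=256).astype(float)
        return float(np.abs((h - self.mean) / self.std).mean())
```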
24

Style Analysis of Stock Mutual Fund in Taiwan

Wang, Yen-Ming 26 July 2001 (has links)
none
25

Anomaly Detection Through Statistics-Based Machine Learning For Computer Networks

Zhu, Xuejun January 2006 (has links)
Intrusion detection in computer networks is a complex research problem that requires an understanding of computer networks and the mechanisms of intrusions, the configuration of sensors and the collected data, the selection of relevant attributes, and monitoring algorithms for online detection. It is critical to develop general methods for data dimension reduction, effective monitoring algorithms for intrusion detection, and means for improving their performance. This dissertation is motivated by the timely need to develop statistics-based machine learning methods for effective detection of computer network anomalies. Three fundamental research issues, related to data dimension reduction, control chart design, and performance improvement, are addressed accordingly. The major research activities and corresponding contributions are summarized as follows: (1) Filter and Wrapper models are integrated to extract a small number of informative attributes for computer network intrusion detection. A two-phase analysis method is proposed for the integration of the Filter and Wrapper models. The proposed method successfully reduced the original 41 attributes to 12 informative attributes while increasing the accuracy of the model, and the comparison of results in each phase shows its effectiveness. (2) Supervised kernel-based control charts for anomaly intrusion detection. We propose to construct control charts in a feature space. The first contribution is the use of a multi-objective genetic algorithm for parameter pre-selection in SVM-based control charts; the second is the performance evaluation of supervised kernel-based control charts. (3) Unsupervised kernel-based control charts for anomaly intrusion detection. Two types are investigated: kernel PCA control charts and Support Vector Clustering (SVC) based control charts. Applications of SVC-based control charts to computer network audit data are also discussed to demonstrate the effectiveness of the proposed method. Although the methodologies developed in this dissertation are demonstrated on computer network intrusion detection applications, they are also expected to apply to the monitoring of other complex systems whose databases consist of high-dimensional data with non-Gaussian distributions.
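To give a flavour of the unsupervised variant, a kernel PCA control chart can be sketched as follows: fit kernel PCA on normal data, monitor the reconstruction error of new observations, and alarm when it exceeds a control limit derived from the training errors. The kernel, component count, and quantile below are illustrative assumptions, not the dissertation's design choices:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def fit_kpca_chart(X_normal: np.ndarray, n_components: int = 5, quantile: float = 0.99):
    """Fit kernel PCA on normal data and set a control limit from the
    empirical distribution of training reconstruction errors."""
    kpca = KernelPCA(n_components=n_components, kernel="rbf",
                     fit_inverse_transform=True)  # needed to reconstruct inputs
    Z = kpca.fit_transform(X_normal)
    errors = np.linalg.norm(X_normal - kpca.inverse_transform(Z), axis=1)
    return kpca, np.quantile(errors, quantile)

def monitor(kpca, limit, X_new: np.ndarray) -> np.ndarray:
    """True where reconstruction error exceeds the control limit (alarm)."""
    err = np.linalg.norm(X_new - kpca.inverse_transform(kpca.transform(X_new)), axis=1)
    return err > limit
```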
26

Parallel Stochastic Estimation on Multicore Platforms

Rosén, Olov January 2015 (has links)
The main part of this thesis concerns parallelization of recursive Bayesian estimation methods, both linear and nonlinear. Recursive estimation deals with the problem of extracting information about parameters or states of a dynamical system from noisy measurements of the system output, and it plays a central role in signal processing, system identification, and automatic control. Solving the recursive Bayesian estimation problem is known to be computationally expensive, which often makes the methods infeasible for real-time applications and problems of large dimension. Since the computational power of hardware today is increased by adding more processors on a single chip rather than by raising the clock frequency and shrinking the logic circuits, parallelization is one of the most powerful ways of improving the execution time of an algorithm. The work in this thesis has found that several of the optimal filtering methods are suitable for parallel implementation within certain ranges of problem sizes. For many of the suggested parallelizations, a linear speedup in the number of cores has been achieved, providing up to 8 times speedup on a dual quad-core computer. As the evolution of parallel computer architectures unfolds rapidly, many more processors on the same chip will soon become available. The developed methods do not, of course, scale infinitely, but they can certainly exploit and harness some of the computational power of the next generation of parallel platforms, allowing for optimal state estimation in real-time applications. / CoDeR-MP
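The linear special case of recursive Bayesian estimation is the Kalman filter, whose predict/update recursion (standard equations, not code from the thesis) is the kind of kernel such work parallelizes across cores:

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """One predict/update cycle of the Kalman filter.

    x, P : prior state estimate and covariance
    y    : new measurement
    A, C : state transition and measurement matrices
    Q, R : process and measurement noise covariances
    """
    # Time update (predict): propagate the state and its uncertainty.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Measurement update (correct): weight prediction against measurement.
    S = C @ P_pred @ C.T + R             # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```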
27

FPGA Based Binary Heap Implementation: With an Application to Web Based Anomaly Prioritization

Alam, Md Monjur 09 May 2015 (has links)
This thesis is devoted to the investigation of a prioritization mechanism for web-based anomaly detection. We propose a hardware realization of a parallel binary heap as an application of web-based anomaly prioritization. The heap is implemented in a pipelined fashion on an FPGA platform. The proposed design takes O(1) time for all operations by ensuring minimum waiting time between two consecutive operations. We present the various design issues and the hardware complexity, and we explicitly analyze the design trade-offs of the proposed priority queue implementations.
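For reference, the software baseline such a design improves on is the array-based binary heap, where insert and extract-min each cost O(log n) sequential sift steps; a pipelined FPGA design overlaps these level-by-level traversals to sustain O(1) per operation. A minimal software sketch of the baseline:

```python
class BinaryMinHeap:
    """Array-based binary min-heap: the O(log n)-per-operation software
    baseline that a pipelined hardware design parallelizes."""

    def __init__(self):
        self.a = []

    def insert(self, key):
        self.a.append(key)
        i = len(self.a) - 1
        while i > 0 and self.a[(i - 1) // 2] > self.a[i]:  # sift up
            self.a[(i - 1) // 2], self.a[i] = self.a[i], self.a[(i - 1) // 2]
            i = (i - 1) // 2

    def extract_min(self):
        root, last = self.a[0], self.a.pop()
        if self.a:
            self.a[0] = last
            i = 0
            while True:  # sift down toward the smaller child
                child = 2 * i + 1
                if child + 1 < len(self.a) and self.a[child + 1] < self.a[child]:
                    child += 1
                if child >= len(self.a) or self.a[i] <= self.a[child]:
                    break
                self.a[i], self.a[child] = self.a[child], self.a[i]
                i = child
        return root
```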
28

Modeling and Detection of Content and Packet Flow Anomalies at Enterprise Network Gateway

Lin, Sheng-Ya 02 October 2013 (has links)
This dissertation investigates modeling techniques and computing algorithms for detection of anomalous contents and traffic flows in ingress Internet traffic at an enterprise network gateway. Anomalous contents refer to a large volume of ingress packets whose contents are not wanted by enterprise users, such as unsolicited electronic messages (UNE). UNE are often sent by botnet farms for network resource exploitation and information stealing, and they incur high costs in wasted bandwidth. Many products have been designed to block UNE, but most rely on signature databases for matching and cannot recognize unknown attacks. To address this limitation, I propose a Progressive E-Message Classifier (PEC) to classify, in a timely manner, message patterns that are commonly associated with UNE. On the basis of a scoring and aging engine, a real-time scoreboard keeps track of detected instances of the detection features until the messages are classified as either UNE or normal. A mathematical model has been designed to precisely describe system behaviour and to set detection parameters. PEC performance is studied extensively in several experiments with different parameters. The objective of anomalous traffic flow detection is to detect selfish Transmission Control Protocol (TCP) flows that do not conform to one of the handful of congestion control protocols in adjusting their packet transmission rates in the face of network congestion. Given that none of the operational parameters of congestion control are carried in the transmitted packets, a gateway can only use packet arrival times to recover the states of end-to-end congestion control rules, if any. We develop new techniques to estimate the round-trip time (RTT) using an EWMA Lomb-Scargle periodogram, to detect changes in the congestion window with the CUSUM algorithm, and finally to predict detected congestion flow states using a prioritized decision chain. A high-level finite state machine (FSM) takes the predictions as inputs to determine whether a TCP flow follows a particular congestion control protocol. Multiple experiments show promising outcomes in classifying flows of different protocols based on the ratio of aberrant to normal transition counts generated by the FSM.
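CUSUM, used above for congestion-window change detection, accumulates small shifts of a statistic away from its expected level and alarms when the accumulation crosses a threshold. A generic two-sided sketch (the drift and threshold values are assumptions, not the dissertation's tuning):

```python
import numpy as np

def cusum(x: np.ndarray, target: float, drift: float = 0.5, threshold: float = 5.0):
    """Two-sided CUSUM change detector.

    Accumulates deviations of x from `target` beyond an allowed `drift`;
    returns the indices where either cumulative sum exceeds `threshold`.
    """
    g_pos, g_neg = 0.0, 0.0
    alarms = []
    for i, xi in enumerate(x):
        g_pos = max(0.0, g_pos + (xi - target) - drift)  # upward shifts
        g_neg = max(0.0, g_neg - (xi - target) - drift)  # downward shifts
        if g_pos > threshold or g_neg > threshold:
            alarms.append(i)
            g_pos, g_neg = 0.0, 0.0  # restart after an alarm
    return alarms
```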
29

Rare category detection using hierarchical mean shift

Vatturi, Pavan Kumar. January 1900 (has links)
Thesis (M.S.)--Oregon State University, 2009. / Printout. Includes bibliographical references (leaves 45-46). Also available on the World Wide Web.
30

"Inversão por etapas de anomalias magnéticas bi-dimensionais" / Stepped inversion of magnetic data

Soraya Ivonne Lozada Tuma 27 April 2006 (has links)
This work presents a three-step magnetic inversion procedure in which invariant quantities related to the source parameters are sequentially inverted to recover i) the cross-section of the two-dimensional source, ii) the intensity of the source magnetization, and iii) the inclination of the source magnetization. The first inverted quantity (called the geometrical function) is obtained as the ratio of the intensity of the gradient of the total-field anomaly to the intensity of the anomalous field vector. For homogeneous sources, the geometrical function depends only on the source geometry, which allows the shape of the body to be reconstructed using arbitrary values for the magnetization. In the second step, the source shape is fixed and the magnetization intensity is estimated by fitting the intensity of the gradient of the total-field anomaly, a quantity invariant with respect to the magnetization direction and equivalent to the amplitude of the analytic signal. In the last step, the source shape and magnetization intensity are fixed and the magnetization inclination is determined by fitting the magnetic anomaly itself. Besides recovering the shape and magnetization of homogeneous two-dimensional sources, this technique allows one, in some cases, to check whether the causative sources are homogeneous. This is possible because the geometrical function of an inhomogeneous source can be fitted by a homogeneous model, but the model so obtained fits neither the amplitude of the analytic signal nor the magnetic anomaly; this criterion appears effective in recognizing strongly inhomogeneous sources. The stepped inversion method is tested in numerical experiments and used to interpret a magnetic anomaly produced by intrusive basic rocks of the Paraná Basin, Brazil.
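The quantity fitted in the second step, the analytic signal amplitude, can be computed numerically from a sampled profile using the standard result that, for two-dimensional sources, the vertical derivative of the anomaly is the Hilbert transform of its horizontal derivative. A minimal sketch, assuming an evenly sampled total-field profile:

```python
import numpy as np
from scipy.signal import hilbert

def analytic_signal_amplitude(T: np.ndarray, dx: float) -> np.ndarray:
    """Amplitude of the analytic signal of a 2-D total-field profile T(x).

    For two-dimensional sources, dT/dz is the Hilbert transform of dT/dx,
    so |A| = sqrt((dT/dx)^2 + (dT/dz)^2) needs only the measured profile.
    """
    dTdx = np.gradient(T, dx)
    dTdz = np.imag(hilbert(dTdx))  # Hilbert transform gives the vertical derivative
    return np.hypot(dTdx, dTdz)
```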
