81

An Explanation to Hot-Issue Anomaly of IPOs in Taiwan

Lu, Cheng-shou 26 January 2007 (has links)
ABSTRACT Ibbotson and Ritter (1995) identify three anomalies in IPO markets: IPO underpricing, IPO long-run underperformance, and the hot-issue market. In contrast to most previous studies, which focus on IPO underpricing or long-run performance, this dissertation examines the hot-issue phenomenon in Taiwan's IPO market. The "hot-issue anomaly" refers to cycles in both IPO volume and IPO underpricing: issuers tend to issue new equity to the public when the average initial returns of recent IPOs are high. Ritter (1984) argues that IPOs in hot markets earn higher initial returns, and that young IPOs are more underpriced in hot markets. A possible explanation for the hot-issue phenomenon is the positive feedback hypothesis: investors respond positively to IPO underpricing. When investors earn initial returns from IPOs, they are more likely to subscribe to future IPOs, so issuers tend to issue when preceding IPOs are more underpriced. Hot issues are considered an anomaly because underpricing, or the initial return, is an indirect cost of issuance; issuers should try to reduce underpricing in order to raise IPO proceeds. From the standpoint of proceeds maximization, previous studies fail to explain the hot-issue anomaly. Ibbotson, Sindelar, and Ritter (1994) show that the hot-issue phenomenon exists not only in the U.S. market but also in Germany, South Korea, and the U.K. To fill this gap, this dissertation first tests whether a hot-issue anomaly exists in Taiwan and then examines why it exists. I find that IPO initial returns lead IPO issuance. However, IPO issuance does not reduce the underpricing of subsequent IPOs, and IPO initial returns are unrelated to the initial returns of subsequent IPOs. IPO issuance therefore cannot be attributed to information contained in the initial returns of preceding IPOs. Rather, market information arriving between the IPO filing date and the issuance date drives the lead-lag relation between IPO initial returns and IPO issuance. The IPO offer price fully reflects recent negative market returns but only partially reflects recent positive market returns. Most of the IPO initial return can be explained by information revealed after the offer price has been set, implying that the offer price is set efficiently. The large IPO volume that follows high initial returns can be attributed to the positive market reaction to preceding IPOs rather than to how those IPOs were priced at filing. These findings explain the hot-issue anomaly: during hot markets, investors' excess demand for IPOs leads to high initial returns, and, faced with this excess demand, issuers issue IPOs to exploit investor sentiment and maximize proceeds.
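The core empirical question above, whether average IPO initial returns lead subsequent IPO volume, can be illustrated with a simple lagged regression. The sketch below uses synthetic monthly series (`initial_ret` and `ipo_count` are invented placeholders, not the dissertation's Taiwan data) and plain OLS; the dissertation's actual tests are not reproduced here.

```python
# Hypothetical sketch: test whether average IPO initial returns lead IPO volume.
import numpy as np

rng = np.random.default_rng(0)
initial_ret = rng.normal(0.15, 0.10, 120)                             # avg. initial return per month (toy)
ipo_count = 5 + 20 * np.roll(initial_ret, 1) + rng.normal(0, 1, 120)  # toy lead-lag structure

def ols_slope(y, x):
    """Slope and t-statistic from a univariate OLS regression with intercept."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1], beta[1] / np.sqrt(cov[1, 1])

# Regress this month's IPO volume on last month's average initial return.
slope, t_stat = ols_slope(ipo_count[1:], initial_ret[:-1])
print(f"lagged initial return -> volume: slope={slope:.2f}, t={t_stat:.2f}")
```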
82

Malicious Web Page Detection Based on Anomaly Behavior

Tsai, Wan-yi 04 February 2009 (has links)
Because of its convenience, we rely heavily on the Internet for information searching and sharing, forum discussion, and online services. However, many of the websites we visit are developed by people with limited security knowledge, which leaves many vulnerabilities in web applications. Hackers have successfully exploited these vulnerabilities to inject malicious JavaScript into compromised web pages and trigger drive-by download attacks. Based on our long-term observation, malicious web pages exhibit unusual behavior to evade detection, which distinguishes them from normal pages. We therefore propose a client-side detection mechanism named Web Page Checker (WPC), which traces and analyzes anomalous behavior to identify malicious web pages. The experimental results show that our method can identify malicious web pages and efficiently alert website visitors.
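As an illustration of anomaly-behavior scoring in the spirit described above, the sketch below assigns a page a z-score against a profile of normal pages. The feature names, profile statistics, and threshold are all assumptions for illustration; the abstract does not specify WPC's actual features or decision rule.

```python
# Illustrative sketch only: feature names, normal-profile statistics, and the
# z-score rule are assumptions, not WPC's actual mechanism.
import numpy as np

# Assumed behavioral features, in the order used by the arrays below.
FEATURES = ["script_count", "hidden_iframe_count", "eval_calls", "redirect_chain_len"]

# Hypothetical profile of normal pages (mean, std per feature), e.g. learned
# from a corpus of benign pages.
normal_mean = np.array([8.0, 0.1, 1.0, 1.2])
normal_std = np.array([5.0, 0.3, 1.5, 0.8])

def anomaly_score(feature_vector):
    """Maximum absolute z-score across the behavioral features."""
    z = np.abs((np.asarray(feature_vector, float) - normal_mean) / normal_std)
    return z.max()

page = [35, 3, 12, 6]              # feature vector of a suspicious page (toy)
if anomaly_score(page) > 3.0:      # assumed alert threshold
    print("flag page as potentially malicious")
```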
83

Monitoring of biomedical systems using non-stationary signal analysis

Musselman, Marcus William 18 February 2014 (has links)
Monitoring of engineered systems consists of characterizing the normal behavior of the system and tracking departures from it. Monitoring techniques can be split into two classes based on their use of the system's inputs and outputs. Systems-based monitoring refers to the case in which both inputs and outputs are available and utilized; symptomatic monitoring refers to the case in which only outputs are available. This thesis extends symptomatic and systems-based monitoring of biomedical systems through non-stationary signal processing and advanced monitoring methods. Monitoring the various systems of the human body faces several key hurdles. First, current biomedical knowledge may not fully describe the extent of a particular system's inputs and outputs. In addition, regardless of current knowledge, inputs may not be accessible, and outputs may be, at best, indirect measurements of the underlying biological process. Finally, even when inputs and outputs are measurable, their relationship may be highly nonlinear and convoluted. These hurdles require advanced signal processing and monitoring approaches. Whether one pursues symptomatic or systems-based monitoring, the hurdles can be partially overcome by using non-stationary signal analysis to reveal how the frequency content of biomedical signals changes over time. Furthermore, advanced classification and monitoring methods enable reliable differentiation between conditions of the monitored system based on the information from non-stationary signal analysis. The human brain was targeted for the advancement of symptomatic monitoring, as it is a system responding to a plethora of internal and external stimuli. The brain's complexity makes it infeasible to realize systems-based monitoring that utilizes all the relevant inputs and outputs. Furthermore, measurement of brain activity (outputs), in the indirect form of the electroencephalogram (EEG), remains a workhorse of brain-disorder diagnosis. In this thesis, advanced signal processing and pattern recognition methods are employed to devise and study an epilepsy detection and localization algorithm that outperforms those reported in the literature. The thesis also extends systems-based monitoring of human biomedical systems via advanced input-output modeling and sophisticated monitoring techniques based on information from non-stationary signal analysis. Explorations of systems-based monitoring in the NMS system were driven by the fact that joint velocities and torques can be seen as NMS responses to electrical inputs provided by the central nervous system (CNS), and the electromyogram (EMG) provides an indirect measurement of the CNS excitations delivered to the muscles. Thus, both inputs and outputs of this system are more or less available, and its monitoring can be approached with systems-based methods.
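A minimal illustration of the kind of non-stationary analysis described above, tracking how a signal's frequency content changes over time, is sketched below using a short-time spectrogram and band-power features on a synthetic signal. The sampling rate, band limits, and signal are assumptions; this is not the thesis's epilepsy detection algorithm.

```python
# Sketch: time-frequency (spectrogram) features from a non-stationary signal.
import numpy as np
from scipy.signal import spectrogram

fs = 256.0                                   # assumed EEG-like sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
# Toy signal: background rhythm plus a burst whose frequency content changes.
sig = np.sin(2 * np.pi * 10 * t) + (t > 5) * np.sin(2 * np.pi * 3 * t)

f, seg_times, Sxx = spectrogram(sig, fs=fs, nperseg=256, noverlap=128)

def band_power(Sxx, f, lo, hi):
    """Mean power in the [lo, hi) frequency band for each time segment."""
    mask = (f >= lo) & (f < hi)
    return Sxx[mask].mean(axis=0)

# Track how delta-band (0.5-4 Hz) power evolves over time; a classifier could
# take such per-segment band powers as its feature vector.
delta = band_power(Sxx, f, 0.5, 4.0)
print(delta.round(3))
```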
84

Recovery of the local gravity field by spherical regularization wavelets approximation and its numerical implementation

Shuler, Harrey Jeong 29 April 2014 (has links)
As an alternative to spherical harmonics for modeling the Earth's gravity field, we built a multiresolution gravity model by employing spherical regularization wavelets to solve the inverse problem, i.e. downward propagation of the gravity signal to the Earth's surface. A scale-discrete Tikhonov spherical regularization scaling function and wavelet packets were used to decompose and reconstruct the signal. We recovered the local gravity anomaly using only localized gravity measurements at the observing satellite's altitude of 300 km. When the gravity anomaly upward-continued to the satellite altitude with a resolution of 0.5° was used as the simulated measurement input, our model could recover the local surface gravity anomaly at a spatial resolution of 1° with an RMS error between 1 and 10 mGal, depending on the topography of the gravity field. Our study of the effect of varying the data volume and the maximum degree of the Legendre polynomials on the accuracy of the recovered gravity solution suggests that short-wavelength signals and regions with high-magnitude gravity gradients respond more strongly to such changes. When tested with simulated SGG measurements, i.e. the second-order radial derivative of the gravity anomaly, at an altitude of 300 km with a 0.7° spatial resolution as input data, our model could obtain the gravity anomaly with an RMS error of 1 to 7 mGal at a surface resolution of 0.7° (< 80 km). The study of the impact of measurement noise on the recovered gravity anomaly implies that solutions from SGG measurements are less susceptible to measurement errors than those recovered from the upward-continued gravity anomaly, indicating that an SGG-type mission such as GOCE would be an ideal choice for implementing our model. Our simulation results demonstrate the model's potential for determining the local gravity field at a finer scale than can be achieved with spherical harmonics, i.e. less than 100 km, with excellent performance in edge detection.
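The regularized inversion at the heart of this approach can be illustrated, in a much-simplified form, with ordinary Tikhonov regularization on a generic ill-posed linear problem. The operator, noise level, and regularization parameter below are invented for illustration; the thesis's scale-discrete spherical regularization wavelets are not reproduced.

```python
# Sketch of plain Tikhonov regularization for an ill-posed linear problem A x = b.
import numpy as np

rng = np.random.default_rng(1)
n = 50
# Hypothetical smoothing operator standing in for upward continuation:
# the surface signal x is blurred into satellite-altitude observations b.
A = np.array([[np.exp(-0.05 * (i - j) ** 2) for j in range(n)] for i in range(n)])
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
b = A @ x_true + rng.normal(0, 1e-3, n)      # noisy observations

def tikhonov(A, b, lam):
    """Solve (A^T A + lam * I) x = A^T b."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

x_hat = tikhonov(A, b, lam=1e-3)             # assumed regularization parameter
print("RMS error:", np.sqrt(np.mean((x_hat - x_true) ** 2)))
```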
85

Disparities in infant health in Winnipeg, Manitoba: an ecological approach to maternal circumstances affecting infant health

Kosowan, Leanne 31 August 2015 (has links)
Infant health is an important and comprehensive measure of a society's health. Experiences during infancy can create durable and heritable patterns of social deprivation and illness, ultimately producing health disparities in a population. This thesis sought to determine the relationship between maternal circumstances and rates of infant mortality, morbidity, and congenital anomaly in Winnipeg, Manitoba, Canada. Using logistic regression models, the study explored provincial program screening data and administrative data. The study found higher rates of congenital anomalies among two-parent families and male infants. There was a relationship between hospital readmission rates and social and economic factors: newborn hospital readmissions were associated with social support factors, while post-neonatal hospital readmissions were associated with contextual factors. Understanding the odds of infant mortality, morbidity, and congenital anomaly in relation to different maternal socioeconomic factors may contribute to future health planning and the development of interventions that can improve health equity.
86

Quantum Jump Spectroscopy of a Single Electron in a New and Improved Apparatus

Dorr, Joshua Charles 15 October 2013 (has links)
The 2008 measurement of the electron magnetic moment is the most precise measurement of any property of an elementary particle, with an astonishing precision of 0.28 parts per trillion. It makes possible the most precise determination of the fine structure constant and the most precise test of quantum electrodynamics and the Standard Model of particle physics. This thesis describes the installation of a new apparatus designed to provide improved stability, more optimal control over the radiation field, inhibited spontaneous emission, and narrower resonance lines.
87

Anomaly-based Self-Healing Framework in Distributed Systems

Kim, Byoung Uk January 2008 (has links)
One of the important design criteria for distributed systems and their applications is reliability and robustness to hardware and software failures. The increase in complexity, interconnectedness, and dependency, together with the asynchronous interactions between components that include hardware resources (computers, servers, network devices) and software (application services, middleware, web services, etc.), makes fault detection and tolerance a challenging research problem. In this dissertation, we present a self-healing methodology based on the principles of autonomic computing and on statistical and data mining techniques to detect faults (hardware or software) and identify their source. In our approach, we monitor and analyze in real time all interactions between the components of a distributed system using two software modules: a Component Fault Manager (CFM), which monitors the full set of measurement attributes for applications and nodes, and an Application Fault Manager (AFM), which is responsible for several activities such as monitoring, anomaly analysis, root cause analysis, and recovery. We use a three-dimensional array of features to capture spatial and temporal characteristics; an anomaly analysis engine uses these features to immediately generate an alert when an abnormal behavior pattern is detected due to a software or hardware failure. We use several fault tolerance metrics (false positive rate, false negative rate, precision, recall, missed alarm rate, detection accuracy, latency, and overhead) to evaluate the effectiveness of our self-healing approach compared to other techniques. We applied our approach to an industry-standard web e-commerce application to emulate a complex e-commerce environment, evaluated its effectiveness and performance in detecting software faults injected asynchronously, and compared the results for different noise levels. Our experimental results showed that applying our anomaly-based approach significantly improves the false positive rate, false negative rate, missed alarm rate, and detection accuracy. For example, evaluating the effectiveness of this approach for detecting faults injected asynchronously shows a detection rate above 99.9% with no false alarms for a wide range of faulty and normal operational scenarios.
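The evaluation metrics listed above can be computed directly from per-interval fault labels; the sketch below shows one way to do so. The labels are invented, and the CFM/AFM monitoring pipeline itself is not shown.

```python
# Sketch: fault-detection metrics from actual vs. predicted labels per interval.
def detection_metrics(actual, predicted):
    """actual/predicted: sequences of 0 (normal) or 1 (fault) per monitoring interval."""
    tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
    fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))
    fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))
    tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))
    return {
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,            # detection rate
        "missed_alarm_rate": fn / (tp + fn) if tp + fn else 0.0,
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
        "accuracy": (tp + tn) / len(actual),
    }

print(detection_metrics([0, 1, 1, 0, 1], [0, 1, 1, 0, 0]))
```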
88

Two Essays on the Low Volatility Anomaly

Riley, Timothy B 01 January 2014 (has links)
I find the low volatility anomaly is present in all but the smallest of stocks. Portfolios can be formed on either total or idiosyncratic volatility to take advantage of this anomaly, but I show measures of idiosyncratic volatility are key. Standard risk-adjusted returns suggest that there is no low volatility anomaly from 1996 through 2011, but I find this result arises from model misspecification. Caution must be taken when analyzing high volatility stocks because their returns have a nonlinear relationship with momentum during market bubbles. I then find that mutual funds with low return volatility in the prior year outperform those with high return volatility by about 5.4% during the next year. After controlling for heterogeneity in fund characteristics, I show that a one standard deviation decrease in fund volatility in the prior year predicts an increase in alpha of about 2.5% in the following year. My evidence suggests that this difference in performance is not due to manager skill but is instead caused by the low volatility anomaly. I find no difference in performance or skill between low and high volatility mutual funds after accounting for the returns on low and high volatility stocks.
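A toy version of the volatility sort underlying this kind of study is sketched below: rank assets on trailing return volatility and compare next-period returns of the low- and high-volatility quintiles. The simulated returns are placeholders and do not reflect the essays' data or risk adjustments.

```python
# Toy illustration of a volatility sort (not the essays' actual methodology or data).
import numpy as np

rng = np.random.default_rng(2)
n_assets, n_months = 500, 13
# Simulated monthly returns with heterogeneous volatility across assets.
returns = rng.normal(0.01, rng.uniform(0.02, 0.15, n_assets)[:, None],
                     (n_assets, n_months))

trailing_vol = returns[:, :-1].std(axis=1)        # volatility over months 1-12
next_ret = returns[:, -1]                         # month-13 return

quintile = np.digitize(trailing_vol, np.quantile(trailing_vol, [0.2, 0.4, 0.6, 0.8]))
low_q, high_q = next_ret[quintile == 0].mean(), next_ret[quintile == 4].mean()
print(f"low-vol quintile: {low_q:.4f}, high-vol quintile: {high_q:.4f}")
```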
89

Structural Analysis of Large Networks: Observations and Applications

McGlohon, Mary 01 December 2010 (has links)
Network data (also referred to as relational data, social network data, or real graph data) has become ubiquitous, and understanding patterns in this data has become an important research problem. We investigate how interactions in social networks are formed and how these interactions facilitate diffusion, model these behaviors, and apply these findings to real-world problems. We examined graphs of up to 16 million nodes across many domains, from academic citation networks to campaign contributions and actor-movie networks. We also performed several case studies in online social networks such as blogs and message board communities. Our major contributions are the following: (a) we discover several surprising patterns in network topology and interactions, such as the Popularity Decay power law (in-links to a blog post decay as a power law with a -1.5 exponent) and the oscillating size of connected components; (b) we propose generators, such as the Butterfly generator, that reproduce both established and new properties found in real networks; (c) we present several case studies, including a proposed method for detecting misstatements in accounting data, where using network effects gave a significant boost in detection accuracy.
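A pattern such as the Popularity Decay power law can be checked with a log-log regression of in-links against post age; the sketch below does this on synthetic data generated with a -1.5 exponent to mirror the reported value. It illustrates only the fitting step, not the thesis's measurement procedure.

```python
# Sketch: estimate a power-law decay exponent via log-log regression.
import numpy as np

rng = np.random.default_rng(3)
days = np.arange(1, 101)                                  # days since the post appeared
# Synthetic in-link counts generated with exponent -1.5 plus multiplicative noise.
inlinks = 1000 * days ** -1.5 * np.exp(rng.normal(0, 0.1, days.size))

slope, intercept = np.polyfit(np.log(days), np.log(inlinks), 1)
print(f"estimated power-law exponent: {slope:.2f}")        # close to -1.5
```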
90

Anomaly detection in the surveillance domain

Brax, Christoffer January 2011 (has links)
In the post-September 11 era, the demand for security has increased in virtually all parts of society. The need for increased security originates from the emergence of new threats which differ from traditional ones in such a way that they cannot be easily defined and are sometimes unknown or hidden in the "noise" of daily life. When the threats are known and definable, methods based on situation recognition can be used to find them. However, when the threats are hard or impossible to define, other approaches must be used. One such approach is data-driven anomaly detection, where a model of normalcy is built and used to find anomalies, that is, things that do not fit the normal model. Anomaly detection has been identified as one of many enabling technologies for increasing security in society. In this thesis, the problem of how to detect anomalies in the surveillance domain is studied. This is done through a characterisation of the surveillance domain and a literature review that identifies a number of weaknesses in previous anomaly detection methods used in the surveillance domain, for example in the handling of contextual information, the inclusion of expert knowledge, and the handling of joint attributes. Based on the findings from this study, a new anomaly detection method is proposed. The proposed method is evaluated with respect to detection performance and computational cost on a number of datasets, recorded from real-world sensors, in different application areas of the surveillance domain. Additionally, the method is compared to two other commonly used anomaly detection methods. Finally, the method is evaluated on a dataset with anomalies developed together with maritime subject matter experts. The conclusion of the thesis is that the proposed method has a number of strengths compared to previous methods and is suitable for use in operative maritime command and control systems. (Christoffer Brax also does research at the University of Skövde, Informatics Research Centre.)
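A minimal data-driven normalcy model of the kind described above can be sketched as a single multivariate Gaussian over track features, with alerts raised at a Mahalanobis-distance threshold. The features, data, and threshold below are assumptions for illustration; the thesis's proposed method, including its handling of context and expert knowledge, is not reproduced here.

```python
# Sketch: Gaussian normalcy model over track features with a distance-based alert.
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical "normal" vessel tracks: [speed (knots), heading change (deg/min)]
normal = rng.normal([12.0, 2.0], [2.0, 1.0], (1000, 2))

mean = normal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))

def mahalanobis(x):
    """Distance of a track's feature vector from the learned normal model."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

track = np.array([25.0, 15.0])            # fast, sharply turning vessel (toy)
if mahalanobis(track) > 3.0:              # assumed alert threshold
    print("anomalous track: raise alert for operator review")
```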
