611 |
AN INVESTIGATION INTO LONG-RUN ABNORMAL RETURNS USING PROPENSITY SCORE MATCHING. Acharya, Sunayan. 01 January 2012.
This is a study in two parts. In part 1, I identify several methods of estimating long-run abnormal returns prevalent in the finance literature and present an alternative using propensity score matching. I first demonstrate the concept with a simple simulation using generated data. I then employ historical returns from CRSP and randomly select events from the dataset using various alternative criteria. I test the efficacy of the different methods, in terms of type-I and type-II errors, in detecting abnormal returns over 12-, 36-, and 60-month periods. I use various forms of propensity score matching: one to five nearest neighbors within a caliper, with distance defined alternatively by propensity scores and by the Mahalanobis metric, as well as caliper matching. I show that, overall, propensity score matching with two nearest neighbors performs much better than traditional methods, especially when the occurrence of events is dictated by the presence of certain firm characteristics.
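A minimal sketch of the kind of propensity-score matching described above, using scikit-learn; the caliper value, the two-nearest-neighbor choice and the generic firm characteristics are illustrative assumptions rather than the thesis's exact specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_controls(X_event, X_pool, n_neighbors=2, caliper=0.05):
    """Match each event firm to its nearest non-event firms on the propensity score.

    X_event, X_pool: arrays of firm characteristics (e.g. size, book-to-market).
    Returns, for each event firm, the indices of matched pool firms
    (an empty list if no pool firm falls within the caliper).
    """
    X = np.vstack([X_event, X_pool])
    y = np.r_[np.ones(len(X_event)), np.zeros(len(X_pool))]

    # Propensity score: probability of being an event firm given characteristics.
    ps = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    ps_event, ps_pool = ps[:len(X_event)], ps[len(X_event):]

    nn = NearestNeighbors(n_neighbors=n_neighbors).fit(ps_pool.reshape(-1, 1))
    dist, idx = nn.kneighbors(ps_event.reshape(-1, 1))

    # Keep only neighbors whose score difference lies within the caliper.
    return [list(idx[i][dist[i] <= caliper]) for i in range(len(X_event))]
```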
In part 2, I demonstrate an application of propensity score matching in the context of open-market share repurchase announcements. I show that traditional methods are ill-suited to calculating long-run abnormal returns following such events, and I improve upon them on two fronts. First, I improve upon traditional matching methods by providing better matches on multiple dimensions and by retaining a larger sample of firms from the dataset. Second, I eliminate much of the bias inherent in Fama-French-type methods for this particular application, which I show using simulations on samples of firms that resemble a typical repurchasing firm. As a result, I obtain statistically significant 1-, 3-, and 5-year abnormal returns of about 2%, 5%, and 10%, respectively, which are much lower than what prior literature has found using traditional methods. Further investigation reveals that much of these returns is unique to small and unprofitable firms.
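For illustration, a buy-and-hold abnormal return against matched control firms could be computed as sketched below; this is a generic benchmark construction, not necessarily the exact abnormal-return definition used in the thesis.

```python
import numpy as np

def bhar(event_returns, control_returns):
    """Buy-and-hold abnormal return: the event firm's compounded return minus
    the average compounded return of its matched control firms.

    event_returns   : 1-D array of the event firm's monthly returns.
    control_returns : 2-D array, one row of monthly returns per matched control.
    """
    bh_event = np.prod(1.0 + np.asarray(event_returns)) - 1.0
    bh_controls = np.prod(1.0 + np.asarray(control_returns), axis=1) - 1.0
    return bh_event - bh_controls.mean()

# Example: a 12-month window with two matched controls (made-up returns).
print(bhar([0.01] * 12, [[0.008] * 12, [0.012] * 12]))
```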
|
612 |
On the design of fast handovers in mobile WiMAX networks. Ray, Sayan Kumar. January 2012.
This thesis embodies research carried out towards achieving faster and more reliable handover techniques in a Mobile WiMAX (Worldwide Interoperability for Microwave Access) network. Handover, also called handoff, is the critical mechanism that allows an ongoing session in a cellular mobile network such as WiMAX to be maintained seamlessly, without any call drop, as the Mobile Station (MS) moves out of the coverage area of one base station (BS) into that of another. Mobile WiMAX supports three different handover mechanisms: hard handover, Fast Base Station Switching (FBSS) and Macro Diversity Handover (MDHO). Of these, hard handover is the default mechanism, whereas the other two are optional schemes. FBSS and MDHO perform better than hard handover when dealing with high-speed multimedia applications, but they require a complex architecture and are very expensive to implement. Hard handover is therefore the technique commonly used by the mobile broadband wireless user community, including Mobile WiMAX users.
The existing Mobile WiMAX hard handover mechanism suffers from multiple shortcomings when it comes to providing fast and reliable handovers. These include a lengthy handover decision process, a lengthy and unreliable procedure for selecting the next BS, i.e. the target BS (TBS), the occurrence of frequent and unwanted handovers, long connection disruption times (CDT), and wastage of channel resources. Of these, reducing handover latency and improving handover reliability are the two issues on which the present work focuses. While the process of selecting the TBS adds to the overall delay in completing a handover, choosing a wrong TBS increases the chance of further unwanted handovers, or even of a call drop; the latter greatly hampers handover reliability.
In order to contribute to the solution of the above two problems of slow and unreliable handovers, this thesis proposes and investigates three handover techniques, referred to as Handover Techniques 1, 2 and 3. Of these, the first two are fully MS-controlled while the third is predominantly controlled by the serving BS (SBS). In Handover Techniques 1 and 2, which share some common ideas, the MS not only determines the need for a handover itself but also tracks its own movement relative to the locations of the (static) neighboring BSs (NBSs). In both techniques the MS estimates its distance to each NBS from the signal strength it receives from that NBS, but the two techniques employ different kinds of "lookahead" to independently choose, as the TBS, the NBS to which the MS is most likely to come nearest in the future. Being MS-controlled, both Handover Techniques 1 and 2 place minimal handover-related workload on their respective SBSs, which thus remain free to serve many more MSs; this capability can considerably increase the scalability of a WiMAX network.
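As a rough illustration of estimating MS-to-NBS distance from received signal strength, the log-distance path-loss model can be inverted as sketched below; the reference values and path-loss exponent are assumptions, and the thesis's own estimation and lookahead procedures are not reproduced here.

```python
def distance_from_rssi(rssi_dbm, rssi_at_ref_dbm=-40.0, ref_dist_m=1.0, path_loss_exp=3.5):
    """Estimate distance from RSSI via the log-distance path-loss model:
    rssi(d) = rssi(d0) - 10 * n * log10(d / d0), so
    d = d0 * 10 ** ((rssi(d0) - rssi(d)) / (10 * n))."""
    return ref_dist_m * 10 ** ((rssi_at_ref_dbm - rssi_dbm) / (10.0 * path_loss_exp))

# Example: a weaker received signal implies a farther neighboring BS.
print(distance_from_rssi(-80.0), distance_from_rssi(-95.0))
```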
In Handover Technique 3, which is BS-controlled with some assistance from the MS, the SBS employs three criteria to select the TBS. The first criterion, a novel one, is the orientation matching between the MS's direction of motion and the geolocation of each NBS. The other two criteria are the current load of each NBS (the load gives an indication of a BS's current QoS capability) and the signal strength the MS receives from each NBS. The SBS assigns each NBS a score against each of the three independent parameters and selects as the TBS the NBS that obtains the highest weighted average score.
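A minimal sketch of the weighted-average scoring just described; the normalisation of the three criteria and the weights used below are illustrative assumptions, not the thesis's actual scoring functions.

```python
def select_target_bs(neighbours, weights=(0.5, 0.3, 0.2)):
    """Pick the neighbouring BS with the highest weighted average score.

    Each neighbour is a dict with:
      'orientation_match' : 0..1, how well the BS direction matches the MS's motion
      'load'              : 0..1 fraction of capacity in use (lower is better)
      'rssi'              : 0..1 normalised received signal strength
    """
    w_orient, w_load, w_rssi = weights

    def score(nb):
        return (w_orient * nb['orientation_match']
                + w_load * (1.0 - nb['load'])   # lighter load scores higher
                + w_rssi * nb['rssi'])

    return max(neighbours, key=score)

# Example with three candidate neighbouring base stations.
nbs = [
    {'id': 'BS1', 'orientation_match': 0.9, 'load': 0.7, 'rssi': 0.6},
    {'id': 'BS2', 'orientation_match': 0.4, 'load': 0.2, 'rssi': 0.8},
    {'id': 'BS3', 'orientation_match': 0.8, 'load': 0.3, 'rssi': 0.5},
]
print(select_target_bs(nbs)['id'])
```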
All three handover techniques are validated by simulation. Handover Techniques 1 and 2 are simulated using the QualNet network simulator; for Handover Technique 3 we had to design our own minimal simulation environment in Python. The simulation results showed that Handover Techniques 1 and 2 can achieve roughly a 45% improvement in overall handover time. The emphasis in the simulation of Handover Technique 3 was on studying its reliability in producing correct handovers rather than on handover speed. Five different arbitrary pre-defined movement paths of the MS were studied. The results showed that with orientation matching alone, or orientation matching together with signal strength, reliability was extremely good, provided the pre-defined paths were reasonably linear; reliability fell considerably, however, when relatively large loads were also considered along with orientation matching and signal strength. Finally, a comparison between the handover techniques proposed in this thesis and a few similar Mobile WiMAX techniques proposed by other researchers showed that our techniques are better at providing fast, reliable and intelligent handovers in Mobile WiMAX networks, with scalability as an added feature.
|
613 |
公務機關比對個人資料對資訊隱私權之可能影響 / Information Privacy Issues under the Data-matching Programs in Public Sector. 王怡人, Wang, Yi-Ren. Unknown Date.
As the government's benefit-administration functions grow ever heavier, the public sector's opportunities and capacity to hold personal information have expanded rapidly. With today's information technology, files held separately by individual government agencies can easily be transmitted, linked, and matched within a very short time and consolidated into fairly complete personal data files, giving the government the chance to become the owner of the largest personal database in society. If used prudently, such matching can not only improve administrative efficiency and project an image of unified government, but also actively realise citizens' rights and interests, creating a win-win for government and the people. Conversely, unrestricted data matching may infringe citizens' right to information privacy.
The "right to information privacy" refers to an individual's right to decide whether his or her own data may be disclosed or used for particular purposes. Taiwan's Justices of the Constitutional Court gave it a complete exposition in Interpretation No. 603 of 28 September 2005, holding that it is a fundamental right protected by Article 22 of the Constitution. Constitutional protection, however, is not absolute: where necessary for the public interest, the state may restrict the right, provided the restriction satisfies the principles of statutory reservation, legal clarity, and proportionality, and provided the necessary organizational and procedural safeguards for personal data are adopted. With respect to data-matching programs carried out by public agencies, the questions worth examining are whether the principles of statutory reservation and legal clarity are implemented, and whether the organizational and procedural safeguards are adequate.
This thesis uses historical interpretation to analyse the meaning of the right to information privacy, and comparative legal analysis to discuss its constitutional basis and the data-matching statutes other countries have enacted to protect this right. Finally, after reviewing academic papers, books, statutes, official interpretations, and practical cases through documentary analysis, it offers recommendations on the legal framework, organization, and procedures for data matching by Taiwan's public agencies. The concrete recommendations include: specifying in the Personal Data Protection Act the procedural duties that public agencies must fulfil when matching personal data; ensuring that data use complies with the purpose-limitation principle and rests, wherever possible, on substantive (conduct-regulating) statutes rather than on organizational statutes alone; establishing a dedicated internal unit in each participating agency to operate a "self-review" mechanism; setting up an external supervisory body to oversee agencies' matching operations; and harnessing information technology to support the implementation of due process.
|
614 |
Evaluating the Use of Ridge Regression and Principal Components in Propensity Score Estimators under Multicollinearity. Gripencrantz, Sarah. January 2014.
Multicollinearity can be present in the propensity score model when estimating average treatment effects (ATEs). In this thesis, logistic ridge regression (LRR) and principal components logistic regression (PCLR) are evaluated as alternatives to ML estimation of the propensity score model. ATE estimators based on weighting (IPW), matching and stratification are assessed in a Monte Carlo simulation study to evaluate LRR and PCLR, and an empirical example of using LRR and PCLR on real data under multicollinearity is also provided. Results from the simulation study reveal that under multicollinearity and in small samples, the use of LRR reduces bias in the matching estimator compared to ML. In large samples PCLR yields the lowest bias and typically the lowest MSE across all estimators. Under IPW estimation, PCLR matched ML in bias and in some cases had lower bias. The stratification estimator was heavily biased compared to matching and IPW, but both its bias and MSE improved when PCLR was applied, and in some cases under LRR. In the empirical example, the specification with PCLR was usually the most sensitive to the inclusion of a strongly correlated covariate in the propensity score model.
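A minimal sketch of an IPW estimate of the ATE with a ridge-penalised (L2) logistic propensity model, in the spirit of the LRR alternative evaluated here; scikit-learn's L2-penalised LogisticRegression stands in for logistic ridge regression, and the penalty strength is an assumed value.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(X, treated, y, ridge_c=1.0):
    """Inverse-probability-weighted ATE with a ridge (L2) logistic propensity model.

    X       : covariate matrix (possibly multicollinear)
    treated : 0/1 treatment indicator
    y       : observed outcome
    ridge_c : inverse L2 penalty strength (smaller = heavier shrinkage)
    """
    model = LogisticRegression(penalty='l2', C=ridge_c, max_iter=1000).fit(X, treated)
    e = model.predict_proba(X)[:, 1]      # estimated propensity scores
    w1 = treated / e                      # weights for the treated
    w0 = (1 - treated) / (1 - e)          # weights for the controls
    # Normalised (Hajek) IPW estimate of the ATE.
    return np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)
```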
|
615 |
Segmentation Based Depth Extraction for Stereo Image and Video Sequence. Zhang, Yu. 24 August 2012.
3D representation nowadays attracts much more public attention than ever before, and one of the most important techniques in this field is depth extraction. In this thesis, we first introduce a well-known stereo matching method using color segmentation and belief propagation, and implement this framework. The color-segmentation-based stereo matching method has performed well in recent work because it keeps object boundaries accurate, which is very important for the depth map. Based on the implemented framework of segmentation-based stereo matching, we propose a color-segmentation-based 2D-to-3D video conversion method using high-quality motion information. In our proposed scheme, the original depth map is generated from motion parallax via optical flow calculation. We then employ color segmentation and plane estimation to optimize the original depth map and obtain an improved depth map with sharp object boundaries. We also make some adjustments to the optical flow calculation to improve its efficiency and accuracy: by using the motion vectors extracted from the compressed video as initial values for the optical flow calculation, the calculated motion vectors are more accurate and are obtained in a shorter time than with the same process without initial values. The experimental results show that our proposed method indeed gives much more accurate depth maps with high-quality edge information. Optical flow with initial values provides a good original depth map, and color segmentation with plane estimation further improves the depth map by sharpening its boundaries.
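A hedged sketch of seeding dense optical flow with externally supplied motion vectors and converting motion parallax into a coarse depth map, using OpenCV's Farneback flow; the thesis's segmentation and plane-estimation stages are not reproduced, and the parameter values and depth scaling below are assumptions.

```python
import cv2
import numpy as np

def depth_from_parallax(prev_gray, next_gray, init_flow=None):
    """Estimate a coarse depth map from motion parallax between two frames.

    init_flow, if given, is a float32 HxWx2 array of motion vectors (e.g. decoded
    from the compressed bitstream) used as the starting point for Farneback flow.
    """
    flags = cv2.OPTFLOW_USE_INITIAL_FLOW if init_flow is not None else 0
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, init_flow,
                                        0.5, 3, 15, 3, 5, 1.2, flags)
    # Motion parallax: larger apparent motion roughly means a nearer object
    # (for a translating camera); invert the flow magnitude as a crude depth cue.
    magnitude = np.linalg.norm(flow, axis=2)
    depth = 1.0 / (magnitude + 1e-6)
    # Normalise to an 8-bit depth map for display.
    return cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```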
|
616 |
Scarf's Theorem and Applications in Combinatorics. Rioux, Caroline. January 2006.
A theorem due to Scarf in 1967 is examined in detail. Several versions of
this theorem exist, some of which appear at first unrelated. Two versions
can be shown to be equivalent to a result due to Sperner in 1928: for
a proper labelling of the vertices in a simplicial subdivision of an n-simplex,
there exists at least one elementary simplex which carries all labels {0, 1, ..., n} (stated formally below).
A third version is more akin to Dantzig's simplex method and is also examined.
In recent years many new applications in combinatorics have been found,
and we present several of them. Two applications are in the area of fair division: cake cutting
and rent partitioning. Two others are graph theoretic: showing the existence
of a fractional stable matching in a hypergraph and the existence of a fractional kernel in a
directed graph. For these last two, we also show the second implies the first.
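For concreteness, the Sperner result referred to above can be written out as follows (our notation, not the thesis's):

```latex
% Sperner's lemma (1928), as used above.
% T: a simplicial subdivision of the n-simplex \Delta^n = conv(v_0, ..., v_n).
% A labelling \lambda of the vertices of T with labels in {0, ..., n} is
% "proper" if every vertex lying in the face spanned by {v_i : i in S}
% receives a label belonging to S.
\[
  \lambda \text{ proper} \;\Longrightarrow\;
  \exists\, \sigma \in T \text{ elementary such that }
  \lambda(\text{vertices of } \sigma) = \{0, 1, \dots, n\}.
\]
```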
|
617 |
Face pose estimation in monocular images. Shafi, Muhammad. January 2010.
People use the orientation of their faces to convey rich, interpersonal information. For example, a person will direct his face to indicate who the intended target of a conversation is. Similarly, in a conversation, face orientation is a non-verbal cue telling the listener when to switch roles and start speaking, and a nod indicates that a person has understood, or agrees with, what is being said. Furthermore, face pose estimation plays an important role in human-computer interaction, virtual reality applications, human behaviour analysis, pose-independent face recognition, driver vigilance assessment, gaze estimation, etc. Robust face recognition has been a focus of research in the computer vision community for more than two decades. Although substantial research has been done and numerous methods have been proposed for face recognition, challenges remain in this field. One of these is face recognition under varying poses, which is why face pose estimation is still an important research area. In computer vision, face pose estimation is the process of inferring the face orientation from digital imagery. It requires a series of image processing steps to transform a pixel-based representation of a human face into a high-level concept of direction. An ideal face pose estimator should be invariant to a variety of image-changing factors such as camera distortion, lighting conditions, skin colour, projective geometry, facial hair, facial expressions, the presence of accessories like glasses and hats, etc. Face pose estimation has been a focus of research for about two decades and numerous research contributions have been presented in this field. Nonetheless, face pose estimation techniques in the literature still have shortcomings and limitations in terms of accuracy, applicability to monocular images, being autonomous, identity and lighting variations, image resolution variations, range of face motion, computational expense, presence of facial hair, presence of accessories like glasses and hats, etc. These shortcomings of existing face pose estimation techniques motivated the research work presented in this thesis. The main focus of this research is to design and develop novel face pose estimation algorithms that improve automatic face pose estimation in terms of processing time, computational expense, and invariance to different conditions.
|
618 |
Framework to manage labels for e-assessment of diagrams. Jayal, Ambikesh. January 2010.
Automatic marking of coursework has many advantages in terms of resource savings and consistency. Diagrams are quite common in many domains, including computer science, but marking them automatically is a challenging task. There has been previous research to accomplish this, but results to date have been limited. Much of the meaning of a diagram is contained in its labels, and in order to mark diagrams automatically the labels need to be understood. However, the choice of labels used by students in a diagram is largely unrestricted, and the diversity of labels is a problem when matching them. This thesis has measured the extent of the diagram label matching problem and proposed and evaluated a configurable, extensible framework to solve it. A new hybrid syntax-matching algorithm, based on multiple existing syntax algorithms, has also been proposed and evaluated. Experiments were conducted on a corpus of coursework which was large-scale, realistic and representative of UK HEI students. The results show that diagram label matching is a substantial problem and cannot easily be avoided in the e-assessment of diagrams. The results also show that the hybrid approach was better than the three existing syntax algorithms, and that the framework has been effective, but only to a limited extent, and needs to be refined further for the semantic stage. The framework proposed in this thesis is configurable and extensible: it can be extended to include other algorithms and sets of parameters, and it uses configuration XML, dynamic loading of classes and two design patterns, namely the strategy and facade design patterns. A software prototype implementation of the framework has been developed in order to evaluate it. Finally, this thesis also contributes the corpus of coursework and an open-source software implementation of the proposed framework. Since the framework is configurable and extensible, its software implementation can be extended and used by the research community.
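A hedged sketch of what a hybrid syntax-matching score for diagram labels could look like; the two measures combined here (normalised Levenshtein similarity and token-set Jaccard) and the rule of taking their maximum are illustrative assumptions, not the framework's actual algorithms or configuration.

```python
def levenshtein(a, b):
    """Classic edit distance via dynamic programming (two rolling rows)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def hybrid_label_similarity(student_label, model_label):
    """Score two diagram labels in [0, 1] by combining two syntax measures."""
    a, b = student_label.lower().strip(), model_label.lower().strip()
    lev_sim = 1.0 - levenshtein(a, b) / max(len(a), len(b), 1)
    tokens_a, tokens_b = set(a.split()), set(b.split())
    jaccard = len(tokens_a & tokens_b) / max(len(tokens_a | tokens_b), 1)
    return max(lev_sim, jaccard)

# Example: a student's label versus the model answer's label.
print(hybrid_label_similarity("customer places order", "order placed by customer"))
```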
|
619 |
Applied logic: its use and implementation as a programming tool. Warren, David H. D. January 1978.
The first part of the thesis explains from first principles the concept of "logic programming" and its practical application in the programming language Prolog. Prolog is a simple but powerful language which encourages rapid, error-free programming and clear, readable, concise programs. The basic computational mechanism is a pattern matching process ("unification") operating on general record structures ("terms" of logic). The ideas are illustrated by describing in detail one sizable Prolog program which implements a simple compiler, and the advantages and practicability of using Prolog for "real" compiler implementation are discussed. The second part of the thesis describes techniques for implementing Prolog efficiently. In particular it is shown how to compile the patterns involved in the matching process into instructions of a low-level language. This idea has actually been implemented in a compiler (written in Prolog) from Prolog to DECsystem-10 assembly language; however, the principles involved are explained more abstractly in terms of a "Prolog Machine". The code generated is comparable in speed with that produced by existing DEC10 Lisp compilers. Comparison is possible since pure Lisp can be viewed as a (rather restricted) subset of Prolog. It is argued that structured data objects, such as lists and trees, can be manipulated by pattern matching using a "structure sharing" representation as efficiently as by conventional selector and constructor functions operating on linked records in "heap" storage. Moreover, the pattern matching formulation actually helps the implementor to produce a better implementation.
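A toy illustration of the unification idea described above, written here in Python rather than in Prolog or DEC-10 assembly; the term representation (nested tuples, capitalised strings as variables) is an assumption, and the occurs check is omitted for brevity.

```python
def is_var(t):
    """Treat capitalised strings as logic variables, e.g. 'X', 'Tail'."""
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow variable bindings until a non-variable or an unbound variable."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(x, y, subst=None):
    """Return an extended substitution unifying x and y, or None on failure."""
    subst = {} if subst is None else subst
    x, y = walk(x, subst), walk(y, subst)
    if x == y:
        return subst
    if is_var(x):
        return {**subst, x: y}
    if is_var(y):
        return {**subst, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            subst = unify(xi, yi, subst)
            if subst is None:
                return None
        return subst
    return None  # clash between distinct constants or functors

# Unifying two compound terms: point(X, 2) with point(1, Y).
print(unify(('point', 'X', 2), ('point', 1, 'Y')))   # {'X': 1, 'Y': 2}
```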
|
620 |
Analysis of the Chinese college admission system. Zhang, Haibo. January 2010.
This thesis focuses on the problems of the Chinese University Admission (CUA) system. Within the field of education, the system of university admissions involves all of Chinese society and causes much concern amongst all social classes. University admissions have been researched since the middle of the last century as an issue with economic impact; however, little attention has been paid to the CUA system from the perspective of economics. This thesis explores a number of interesting aspects of the system. As a special case of the priority-based matching mechanism, the CUA system shares most properties of the Boston mechanism, another example of a priority-based matching mechanism, but it also has some unique and interesting characteristics. The first chapter introduces the main principles of the CUA system in detail and discusses stability, efficiency, strategy-proofness, and other properties under different informational assumptions. There is a heated debate about whether the CUA system should be abandoned, and educational corruption is one of the issues that has been raised. Corruption is a major issue in the CUA system, as it is in university admission systems elsewhere in the world, e.g. India and Russia. We contrast the performance of markets and exams under the assumption that corruption exists in the admission process, analysing the problem under perfect capital markets and also under borrowing constraints. We use auction theory to obtain equilibria of the market system and the exam system and analyse the effects of corruption on the efficiency of the two systems, concluding that the exam system is superior to the market system if only the issue of corruption is considered. In the third chapter, we construct a model to reveal the forces that positively sort students into universities of different quality in a free-choice system, under the assumption of supermodular utility and production functions. Given a distribution of student ability and resources, we analyse the planner's decisions on the number of universities and the design of the "task level" for each university, as well as the allocation of resources between universities. Students gain from completing requirements (tasks) in universities, while having to incur the cost of exerting effort. In contrast to previous literature, our model includes qualifications as well as cost in the student's utility function, and educational outputs depend on qualification, ability and resources per capita. Our main focus is on the design of task levels. Our result differs from the literature as regards the optimal number of colleges: a zero fixed cost of establishing new colleges does not necessarily result in perfect tailoring of tasks to students, and if the fixed cost is not zero, the planner has to take fixed costs into account when deciding the number of universities.
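As a point of reference for the mechanism discussion above, a minimal sketch of the Boston (immediate-acceptance) mechanism is given below; the preference lists, priorities and capacities are illustrative assumptions, and this is a sketch of the textbook Boston mechanism, not of the CUA procedure itself.

```python
def boston_mechanism(preferences, priorities, capacities):
    """Immediate-acceptance (Boston) assignment.

    preferences : {student: [school, ...]} ordered best-first
    priorities  : {school: [student, ...]} ordered highest-priority-first
    capacities  : {school: number of seats}
    """
    remaining = dict(capacities)
    assignment = {}
    unassigned = set(preferences)

    rounds = max(len(p) for p in preferences.values())
    for k in range(rounds):
        # Students still unassigned apply to their (k+1)-th choice.
        applicants = {}
        for s in list(unassigned):
            if k < len(preferences[s]):
                applicants.setdefault(preferences[s][k], []).append(s)
        # Schools accept applicants immediately, in priority order, up to capacity;
        # accepted seats are permanent, which is what makes the mechanism manipulable.
        for school, apps in applicants.items():
            apps.sort(key=priorities[school].index)
            for s in apps[:remaining[school]]:
                assignment[s] = school
                unassigned.discard(s)
            remaining[school] -= min(len(apps), remaining[school])
    return assignment

# Example: two schools with one seat each, three students.
prefs = {'s1': ['A', 'B'], 's2': ['A', 'B'], 's3': ['B', 'A']}
prios = {'A': ['s2', 's1', 's3'], 'B': ['s1', 's3', 's2']}
print(boston_mechanism(prefs, prios, {'A': 1, 'B': 1}))   # {'s2': 'A', 's3': 'B'}
```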
|