431 |
Daugiamačio pasiskirstymo tankio neparametrinis įvertinimas naudojant stebėjimų klasterizavimą / The nonparametric estimation of multivariate distribution density applying clustering procedures
Ruzgas, Tomas, 15 March 2007
The paper is devoted to statistical nonparametric estimation of multivariate distribution density. The influence of data pre-clustering on the estimation accuracy of multimodal density is analysed by means of the Monte-Carlo method.
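As a hedged illustration of the pre-clustering idea, the sketch below partitions the sample with k-means and combines per-cluster Gaussian kernel density estimates into a mixture; the clustering procedure and the estimator actually studied in the thesis may differ.

```python
# Sketch: nonparametric multivariate density estimation with data pre-clustering.
# Assumption: k-means clusters + per-cluster Gaussian KDE, mixed by cluster weights.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.cluster import KMeans

def clustered_kde(data, n_clusters=3, random_state=0):
    """Fit one KDE per cluster; return a callable mixture density f(x)."""
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=random_state).fit_predict(data)
    parts = []
    for k in range(n_clusters):
        cluster = data[labels == k]
        weight = len(cluster) / len(data)
        parts.append((weight, gaussian_kde(cluster.T)))  # gaussian_kde expects (dim, n)
    return lambda x: sum(w * kde(np.atleast_2d(x).T) for w, kde in parts)

# Example: bimodal 2D sample
rng = np.random.default_rng(1)
sample = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(5, 1, (200, 2))])
density = clustered_kde(sample, n_clusters=2)
print(density([0.0, 0.0]), density([5.0, 5.0]))
```

A plain KDE tends to oversmooth well-separated modes; estimating each mode's component on its own cluster and mixing by cluster weights is the effect the Monte Carlo study above evaluates.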
|
432 |
Projector-Camera Calibration Using Gray Code Patterns
Jordan, Samuel James, 30 June 2010
A parameter-free solution is presented for data projector calibration using a single camera and Gray coded structured light patterns. The proposed method assumes that both camera and projector exhibit significant non-linear distortion, and that projection surfaces can be either planar or freeform. The camera is calibrated first through traditional methods, and the calibrated images are then used to detect Gray coded patterns displayed on a surface by the data projector. Projector to camera correspondences are created by decoding the patterns in the camera images to form a 2D correspondence map. Calibrated systems produce geometrically correct, extremely short throw projections, while maintaining or exceeding the projection size of a standard configuration. Qualitative experiments are performed on two baseline images, while quantitative data is recovered from the projected image of a chessboard pattern. A typical throw ratio of 0.5 can be achieved with a pixel distance error below 1. / Thesis (Master, Electrical & Computer Engineering), Queen's University, 2010.
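A minimal sketch of the Gray-code decoding step that builds the 2D correspondence map, assuming already-thresholded on/off pattern images; real pipelines typically also project inverse patterns and apply per-pixel thresholds, so this illustrates the general technique rather than the thesis's implementation.

```python
# Sketch: Gray-code structured-light decoding to recover projector column indices
# per camera pixel, i.e. one half of a projector<->camera correspondence map.
import numpy as np

def gray_encode(n: int) -> int:
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def decode_column_index(bit_images):
    """bit_images: list of HxW binary arrays, most significant bit first.
    Returns an HxW array of projector column indices per camera pixel."""
    gray = np.zeros(bit_images[0].shape, dtype=np.int64)
    for bits in bit_images:
        gray = (gray << 1) | bits.astype(np.int64)
    return np.vectorize(gray_decode)(gray)

# Example: 8 patterns encode 256 projector columns
width = 256
columns = np.arange(width)
patterns = [((np.vectorize(gray_encode)(columns) >> b) & 1) for b in range(7, -1, -1)]
# A camera pixel observing projector column 100 records these bits:
bits_at_pixel = [np.array([[p[100]]]) for p in patterns]
print(decode_column_index(bits_at_pixel))  # -> [[100]]
```

Repeating the same decoding for row patterns gives a full (column, row) projector coordinate at every camera pixel, which is the 2D correspondence map described above.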
|
433 |
High energy broad bandwidth optical parametric chirped pulse amplification / Didelės išvadinės energijos plataus spektro čirpuotų impulsų optinis parametrinis stiprinimas
Antipenkov, Roman, 07 March 2011
Rapidly developing areas of high-field physics, such as the generation of high-order harmonics or isolated attosecond pulses, require high-peak-power few-cycle pulse sources. Optical parametric chirped pulse amplification (OPCPA) has shown the potential to satisfy these requirements, and at present OPCPA is the leading technology for high-energy few-cycle table-top systems.
The main objectives of this thesis were to investigate optical parametric amplification of broadband seed pulses in the femtosecond and picosecond regimes, and to develop and optimize a compact TW-scale OPCPA system intended for various applications in high-field physics. In this thesis the main concept of such a system is discussed, the advantages and disadvantages of the proposed approach are analysed, and the setup is compared with other systems reported worldwide.
In this thesis an original approach to power scaling of a regenerative amplifier by implementing several active elements in a prolonged resonator has been proposed and investigated. Femtosecond pulse amplification in a dual-active-element Yb:KGW regenerative amplifier has been demonstrated, boosting the average output power to 30 W.
Broadband pulse generation, parametric amplification and compression to transform-limited durations were analysed both numerically and experimentally. White-light continuum generation in bulk material for broadband seed formation and its further optical parametric amplification in a noncollinear scheme were investigated, and a Yb:KGW-driven... [to full text] / Research in high-field physics, such as the generation of high-order harmonics and isolated attosecond pulses, requires compact laser systems delivering few-optical-cycle output pulses with terawatt peak power. Optical parametric chirped pulse amplification is one of the principal methods for reaching the laser-system parameters these applications demand.
The aim of this dissertation was to investigate the amplification of femtosecond- and picosecond-duration pulses in optical parametric amplifiers seeded with an extremely broadband signal, and to develop and optimize a chirped pulse parametric amplification system ensuring reliable formation of terawatt-peak-power pulses. The dissertation discusses the overall architecture of such a system, analyses its advantages and drawbacks, and compares it with other systems existing worldwide.
This work proposes and investigates a method for increasing the average output power of lasers by using several active elements in a single resonator, and demonstrates femtosecond pulse amplification in a regenerative amplifier with two Yb:KGW active elements built on this principle, thereby raising the laser output power to 30 W.
In the course of the work, a system combining white-light continuum generation pumped by a Yb:KGW femtosecond laser with noncollinearly pumped optical parametric amplification was built and investigated; its output pulse energy reaches 20 microjoules, and the pulses... [see full text]
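For context, the textbook small-signal relations behind any optical parametric amplifier design (general background in the undepleted-pump approximation, not results from this thesis) are, for pump, signal and idler waves:

```latex
\omega_p = \omega_s + \omega_i, \qquad
\Delta k = k_p - k_s - k_i \approx 0 \quad\text{(phase matching)},
\qquad
G = 1 + \frac{\Gamma^2}{g^2}\,\sinh^2(gL), \quad
g = \sqrt{\Gamma^2 - \left(\frac{\Delta k}{2}\right)^2},
```

where L is the crystal length and Γ² is proportional to the pump intensity and to the square of the effective nonlinear coefficient; broadband amplification relies on keeping Δk small across the whole seed spectrum, for example via noncollinear phase matching as used here.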
|
434 |
Goodness-of-Fit Test Issues in Generalized Linear Mixed Models
Chen, Nai-Wei, December 2011
Linear mixed models and generalized linear mixed models are random-effects models widely applied to analyze clustered or hierarchical data. Random effects are typically assumed to be normally distributed in mixed models. In the mixed-effects logistic model, however, violating this normality assumption may lead to inconsistent estimates of some fixed effects and of the random-effects variance component when the variance of the random-effects distribution is large. Moreover, summary statistics used for assessing goodness of fit in ordinary logistic regression models may not be directly applicable to mixed-effects logistic models. In this dissertation, we present two independent studies related to goodness-of-fit tests in generalized linear mixed models.
First, we consider a semi-nonparametric density representation for the random effects distribution and provide a formal statistical test for testing normality of the random-effects distribution in the mixed-effects logistic models. We obtain estimates of parameters by using a non-likelihood-based estimation procedure. Additionally, we not only evaluate the type I error rate of the proposed test statistic through asymptotic results, but also carry out a bootstrap hypothesis testing procedure to control the inflation of the type I error rate and to study the power performance of the proposed test statistic. Further, the methodology is illustrated by revisiting a case study in mental health.
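The semi-nonparametric density representation referred to here is commonly written (in the Gallant and Nychka form; the dissertation's exact parameterization is an assumption on my part) as a squared-polynomial perturbation of the normal density:

```latex
h_K(z) \;=\;
\frac{\bigl(\sum_{k=0}^{K} a_k z^k\bigr)^{2}\,\phi(z)}
     {\displaystyle\int_{-\infty}^{\infty}\bigl(\sum_{k=0}^{K} a_k u^k\bigr)^{2}\,\phi(u)\,du},
```

where φ is the standard normal density; K = 0 recovers the normal model, so testing normality of the random effects amounts to testing whether the higher-order coefficients a_1, ..., a_K vanish.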
Second, to improve assessment of model fit in mixed-effects logistic models, we apply nonparametric local polynomial smoothing of residuals over within-cluster continuous covariates to the unweighted sum-of-squares statistic for assessing the goodness of fit of logistic multilevel models. We perform a simulation study to evaluate the type I error rate and the power for detecting a missing quadratic or interaction term among the fixed effects using the kernel-smoothed unweighted sum-of-squares statistic based on local polynomial smoothed residuals over x-space. We also use a real data set from clinical trials to illustrate this application.
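A hedged sketch of the underlying statistic, assuming fitted probabilities from the mixed-effects logistic model are already available; the local-constant smoothing and toy data below are simplifications, not the dissertation's exact local polynomial construction or null calibration.

```python
# Sketch: unweighted sum of squares of kernel-smoothed residuals over a covariate x.
# Assumes fitted probabilities p_hat from a (mixed-effects) logistic model are given.
# Local-constant (Nadaraya-Watson) smoothing is used for brevity; the dissertation
# uses local polynomial smoothing.
import numpy as np

def smoothed_residual_statistic(x, y, p_hat, bandwidth):
    r = y - p_hat                                # raw residuals
    d = (x[:, None] - x[None, :]) / bandwidth    # pairwise scaled distances
    w = np.exp(-0.5 * d**2)                      # Gaussian kernel weights
    r_smooth = (w @ r) / w.sum(axis=1)           # smoothed residual at each x_i
    return np.sum(r_smooth**2)                   # unweighted sum of squares

# Toy example where the fitted mean model omits a quadratic term
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 500)
true_p = 1 / (1 + np.exp(-(0.5 * x + 0.8 * x**2)))   # truth contains x^2
y = rng.binomial(1, true_p)
p_hat = 1 / (1 + np.exp(-(0.5 * x)))                  # fitted model omits it
print(smoothed_residual_statistic(x, y, p_hat, bandwidth=0.3))
```

Smoothing the residuals over the covariate lets systematic lack of fit (such as the missing quadratic term) accumulate in the statistic instead of averaging out.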
|
435 |
Essays in Financial Econometrics
De Lira Salvatierra, Irving, January 2015
The main goal of this work is to explore the effects of time-varying extreme jump tail dependencies in asset markets. Consequently, a lot of attention has been devoted to understanding the extremal tail dependencies between assets. As pointed out by Hansen (2013), the estimation of tail-risk dependence is a challenging task, and its implications for several sectors of the economy are of great importance. One of the principal challenges is to provide a measure of systemic risk that is, in principle, statistically tractable and has an economic meaning. There is therefore a need for standardized dependence measures, or at least for a methodology that can capture the complexity behind global distress in the economy. These measures should be able to explain not only the dynamics of the most recent financial crisis but also prior episodes of distress in the world economy, which is the motivation of this work. In order to explore the tail dependencies I exploit the information embedded in option prices and intra-daily high-frequency data.
The first chapter, co-authored with Andrew Patton, proposes a new class of dynamic copula models for daily asset returns that exploits information from high-frequency (intra-daily) data. We augment the generalized autoregressive score (GAS) model of Creal et al. (2013) with high-frequency measures such as realized correlation to obtain a "GRAS" model. We find that the inclusion of realized measures significantly improves the in-sample fit of dynamic copula models across a range of U.S. equity returns. Moreover, we find that out-of-sample density forecasts from our GRAS models are superior to those from simpler models. Finally, we consider a simple portfolio choice problem to illustrate the economic gains from exploiting high frequency data for modeling dynamic dependence.
In the second chapter, using information from option prices, I construct two new measures of dependence between assets and industries, the Jump Tail Implied Correlation and the Tail Correlation Risk Premia. The main contribution of this chapter is the construction of a systemic risk factor from daily financial measures using a quantile-regression-based methodology. In this direction, I fill the existing gap between downturns in the financial sector and the real economy. I find that this new index performs well in forecasting in-sample and out-of-sample quarterly macroeconomic shocks. In addition, I analyze whether the tail risk of the correlation may be priced. I find that for the S&P 500 and its sectors there is an ex ante premium to hedge against systemic risks and changes in the aggregate market correlation. Moreover, I provide evidence that the tails of the implied correlation have remarkable predictive power for future stock market returns. / Dissertation
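A hedged sketch of the mechanism of augmenting a GAS-type recursion with a realized measure, here for a single time-varying correlation parameter; the forcing term, the tanh mapping and the parameter values are simplified illustrations, not the GRAS specification estimated in the chapter, where the parameters would be obtained by maximum likelihood.

```python
# Sketch: a GAS-style recursion for a time-varying copula correlation, augmented
# with a realized correlation measure ("GRAS"-flavoured). The innovation term
# (z1*z2 - rho) is a simplified, score-like driver, not the exact scaled score of
# Creal et al. (2013); parameter values are illustrative only.
import numpy as np

def gras_correlation_path(z1, z2, rc, omega=0.005, beta=0.95, alpha=0.04, gamma=0.04):
    """z1, z2: standardized return innovations; rc: lagged realized correlations."""
    f = np.arctanh(0.3)                  # initial value on the unrestricted scale
    rho_path = np.empty(len(z1))
    for t in range(len(z1)):
        rho = np.tanh(f)                 # map back into (-1, 1)
        rho_path[t] = rho
        innovation = z1[t] * z2[t] - rho
        f = omega + beta * f + alpha * innovation + gamma * np.arctanh(rc[t])
    return rho_path

# Toy example with simulated data
rng = np.random.default_rng(42)
T, true_rho = 1000, 0.5
cov = [[1.0, true_rho], [true_rho, 1.0]]
z = rng.multivariate_normal([0, 0], cov, size=T)
rc = np.clip(true_rho + 0.1 * rng.standard_normal(T), -0.95, 0.95)  # noisy realized corr
rho_hat = gras_correlation_path(z[:, 0], z[:, 1], rc)
print(rho_hat[-5:])
```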
|
436 |
Depth-Assisted Semantic Segmentation, Image Enhancement and Parametric Modeling
Zhang, Chenxi, 01 January 2014
This dissertation addresses the problem of employing 3D depth information to solve a number of traditionally challenging computer vision/graphics problems. Humans perceive depth in the 3D world, which enables them to reconstruct layouts, recognize objects, and understand the geometric structure and semantic meaning of the visual world. It is therefore worthwhile to explore how 3D depth information can be utilized by computer vision systems to mimic these abilities. This dissertation employs 3D depth information to solve vision/graphics problems in three areas: scene understanding, image enhancement, and 3D reconstruction and modeling.
In addressing the scene understanding problem, we present a framework for semantic segmentation and object recognition on urban video sequences using only dense depth maps recovered from the video. Five view-independent 3D features that vary with object class are extracted from the dense depth maps and used for segmenting and recognizing the different object classes in street-scene images. We demonstrate that a scene parsing algorithm using only dense 3D depth information outperforms approaches based on sparse 3D or 2D appearance features.
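The abstract does not name the five features, so as a hedged illustration the sketch below computes two depth features that are commonly used and view-independent in the sense described (surface normal orientation and height above an assumed ground level); the camera intrinsics and the ground-plane assumption are placeholders, not the dissertation's actual feature set.

```python
# Sketch: two example 3D features from a dense depth map
# (surface normal z-component and height above an assumed ground level).
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Pinhole back-projection of an HxW depth map to an HxWx3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

def depth_features(depth, fx=500.0, fy=500.0, cx=None, cy=None):
    h, w = depth.shape
    cx = w / 2 if cx is None else cx
    cy = h / 2 if cy is None else cy
    pts = backproject(depth, fx, fy, cx, cy)
    # Surface normals from cross products of horizontal/vertical point differences
    dx = np.gradient(pts, axis=1)
    dy = np.gradient(pts, axis=0)
    n = np.cross(dx, dy)
    n = n / (np.linalg.norm(n, axis=-1, keepdims=True) + 1e-9)
    # Height above "ground": assume the largest camera-frame y defines the ground
    height = pts[..., 1].max() - pts[..., 1]    # camera y axis points downward
    return n[..., 2], height                    # normal z-component, height map

# Toy example: a slanted plane as the "scene"
depth = 2.0 + 0.01 * np.arange(100)[None, :] * np.ones((80, 1))
normal_z, height = depth_features(depth)
print(normal_z.shape, height.shape)
```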
In addressing image enhancement problem, we present a framework to overcome the imperfections of personal photographs of tourist sites using the rich information provided by large-scale internet photo collections (IPCs). By augmenting personal 2D images with 3D information reconstructed from IPCs, we address a number of traditionally challenging image enhancement techniques and achieve high-quality results using simple and robust algorithms.
In addressing 3D reconstruction and modeling problem, we focus on parametric modeling of flower petals, the most distinctive part of a plant. The complex structure, severe occlusions and wide variations make the reconstruction of their 3D models a challenging task. We overcome these challenges by combining data driven modeling techniques with domain knowledge from botany. Taking a 3D point cloud of an input flower scanned from a single view, each segmented petal is fitted with a scale-invariant morphable petal shape model, which is constructed from individually scanned 3D exemplar petals. Novel constraints based on botany studies are incorporated into the fitting process for realistically reconstructing occluded regions and maintaining correct 3D spatial relations.
The main contribution of the dissertation is the intelligent use of 3D depth information to solve traditionally challenging vision/graphics problems. By developing advanced algorithms that run either automatically or with minimal user interaction, this dissertation demonstrates that the computed 3D depth behind multiple images contains rich information about the visual world and can therefore be intelligently utilized to recognize and understand the semantic meaning of scenes, efficiently enhance and augment single 2D images, and reconstruct high-quality 3D models.
|
437 |
Using a Catenary Equation in Parametric Representation for Minimizing Stress Concentrations at Notches / Minimierung der Spannungskonzentration an Kerben mittels einer Kettenlinie in parametrischer Darstellung
Jakel, Roland, 26 June 2015
The presentation describes how to reduce the stress concentration factor at cross-section transitions to nearly 1 by using a catenary curve as the notch geometry (catenary fillet). With the help of global sensitivity studies performed in the p-FEM code Creo Simulate on a CAD model of the catenary curve in parametric representation built in Creo Parametric, a normalized stress concentration factor diagram is created. This diagram allows the notch to be dimensioned, i.e. its exact geometry and the associated stress concentration factor Kt to be determined, without further use of an FEM code.
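For reference, the catenary and one standard parametric representation of it (the exact parameterization used for the CAD model in the presentation is not given here and may differ):

```latex
y(x) = a \cosh\!\left(\frac{x}{a}\right), \qquad
x(t) = a\,t, \quad y(t) = a \cosh(t), \quad t \in [-t_{\max},\, t_{\max}],
\qquad s(t) = a \sinh(t),
```

where a equals the radius of curvature at the vertex and s(t) is the arc length measured from it; scaling a therefore scales the entire fillet geometry.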
|
438 |
A general theory of electronic parametric instability of relativistically intense laser light in plasma
Parr, David Michael, January 2000
No description available.
|
439 |
Computer Aided Engineering in the Foot Orthosis Development Process
Lochner, Samuel Jewell, 22 August 2013
An orthosis, or orthotic device, is used to straighten or correct the posture of part of the body. The foot orthosis (FO) is the subject of this dissertation. A FO sits between the foot and the midsole of the shoe, replacing the insole. Foot orthoses (FOs) are intended to prevent injury or aid recovery by redistributing the pressure experienced by the plantar surface of the foot and by adjusting the relative positions of the foot's bones during standing and gait.
Traditional methods for developing a FO require extensive skilled manual labour and are highly dependent on subjective input. Modern FO development methods have sought to address these issues through the use of computer driven technological advancements. Foot scanners record geometry, computer aided design (CAD) software is used to develop the FO geometry, and automated manufacturing tools are used to either fabricate the FO or fabricate a mould about which the FO can be formed.
A variety of modern solutions have successfully automated the process; however, it remains highly subjective, as skilled manual labour has merely been replaced with equally subjective skilled computer labour. In particular, adjustments to the foot are made with basic deformation functions applied to the static surface foot models generated by modern digitizers. To improve upon this, a model that describes the mechanics and properties of the various tissues of the foot is required. Such a model will also be useful for validating and optimizing FO designs prior to fabrication through simulation of weight-bearing conditions.
Given the deformable characteristics of the tissues of the foot, the finite element (FE) modeling method is appropriate. The FE foot model has become a common medical and engineering tool in recent years. Its application, however, has primarily been limited to research, as few clinical applications warrant the development cost. The high cost stems from the MRI or CT scan and the skilled labour required to assemble the model for FE analysis. Consequently, the FE modeling approach has previously been out of reach for FO development.
The solution proposed and implemented was to map a detailed generic FE foot model to an inexpensive surface scan obtained from a modern digitizer. The mapping accurately predicted anatomical geometry and resulted in simulation models that can be used in the FO development process first to carry out postural adjustments prescribed by a practitioner and second in a validation step where a FO design can be tested prior to fabrication. In addition to simulation tools, novel complementary tools were developed for designing and fabricating FOs. The simulation, design, and fabrication tools were incorporated into a novel, seven step FO development process. The proposed process is beneficial to FO development as it reduces the required subjective input from practitioners and lab technicians and allows for the validation of potential FO designs prior to fabrication. Future work is required to improve computational efficiency of the FE foot models and to fully automate the process to make it commercially viable. In addition to FOs, the proposed approach also presents opportunities for improving other orthoses and prostheses for the human body.
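A hedged sketch of the mapping idea, warping a generic model into the coordinate frame of a subject's surface scan from corresponding landmark points; the landmark correspondences and the thin-plate-spline interpolator are illustrative assumptions standing in for the dissertation's actual mapping procedure.

```python
# Sketch: warp the nodes of a generic foot model toward a subject's surface scan
# using corresponding landmarks and a thin-plate-spline deformation.
import numpy as np
from scipy.interpolate import RBFInterpolator

def warp_generic_model(generic_nodes, generic_landmarks, scan_landmarks):
    """Map every node of the generic FE model into the scan's coordinate frame."""
    warp = RBFInterpolator(generic_landmarks, scan_landmarks,
                           kernel="thin_plate_spline", smoothing=0.0)
    return warp(generic_nodes)

# Toy example: 3D landmarks related by a known scaling plus small noise
rng = np.random.default_rng(3)
generic_landmarks = rng.uniform(0, 1, (20, 3))
A = np.diag([1.1, 0.9, 1.05])
scan_landmarks = generic_landmarks @ A.T + 0.002 * rng.standard_normal((20, 3))
generic_nodes = rng.uniform(0, 1, (5000, 3))        # stand-in for FE mesh nodes
warped_nodes = warp_generic_model(generic_nodes, generic_landmarks, scan_landmarks)
print(warped_nodes.shape)                            # (5000, 3)
```

In this simplified picture, the warped node coordinates would then carry the generic model's tissue assignments and connectivity into the subject-specific geometry for simulation.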
|
440 |
Robust polynomial controller design
Wellstead, Kevin, January 1991
The work presented in this thesis was motivated by the desire to establish an alternative approach to the design of robust polynomial controllers. The procedure of pole placement forms the basis of the design, and for polynomial systems this generally involves the solution of a diophantine equation. This equation has many possible solutions, which leads directly to the idea of determining the most appropriate solution for improved performance robustness. A thorough review of many aspects of the diophantine equation is presented, which helps to gain an understanding of this extremely important equation. A basic investigation into selecting a more robust solution is carried out, but it is shown that, in the polynomial framework, it is difficult to relate decisions in the design procedure to their effect on performance robustness.
This leads to the approach of using a state-space based design and transforming the resulting output feedback controller to polynomial form. The state-space design is centred around parametric output feedback, which explicitly represents a set of possible feedback controllers in terms of arbitrary free parameters. The aim is then to select these free parameters such that the closed-loop system has improved performance robustness. Two parametric methods are considered and compared, one being well established and the other a recently proposed scheme. Although the well-established method performs slightly better for general systems, it is shown to fail when applied to this type of problem.
For performance robustness, the shape of the transient response in the presence of model uncertainty is of interest. It is well known that the eigenvalues and eigenvectors play an important role in determining the transient behaviour, and as such the sensitivities of these factors to model uncertainty form the basis on which the free parameters are selected. Numerical optimisation is used to select the free parameters such that the sensitivities are at a minimum. It is shown both in a simple example and in a more realistic application that a significant improvement in the transient behaviour in the presence of model uncertainty can be achieved using the proposed design procedure.
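A hedged sketch of the pole-placement step described above, solving the polynomial Diophantine equation A(z)R(z) + B(z)S(z) = A_cl(z) through its Sylvester-matrix form; the second-order plant and target polynomial are arbitrary illustrations, and this minimum-degree solution does not exploit the equation's non-uniqueness, which is the thesis's focus.

```python
# Sketch: minimum-degree pole placement for a polynomial controller by solving
# A(z)R(z) + B(z)S(z) = A_cl(z) via its Sylvester matrix.
import numpy as np

def convolution_matrix(p, n_cols):
    """C such that C @ x equals np.convolve(p, x) for a length-n_cols vector x."""
    C = np.zeros((len(p) + n_cols - 1, n_cols))
    for j in range(n_cols):
        C[j:j + len(p), j] = p
    return C

def solve_diophantine(a, b, a_cl):
    """Coefficients highest power first; deg A = n, deg B < n, deg A_cl = 2n - 1.
    Returns (R, S), each of degree n - 1 (unique when A and B are coprime)."""
    n = len(a) - 1
    b_pad = np.concatenate([np.zeros(n + 1 - len(b)), b])   # treat B as degree n
    M = np.hstack([convolution_matrix(a, n), convolution_matrix(b_pad, n)])
    x = np.linalg.solve(M, a_cl)
    return x[:n], x[n:]

# Example: A(z) = z^2 + 0.5z + 0.1, B(z) = z + 0.2, closed-loop poles at 0.4, 0.5, 0.6
a = np.array([1.0, 0.5, 0.1])
b = np.array([1.0, 0.2])
a_cl = np.poly([0.4, 0.5, 0.6])
R, S = solve_diophantine(a, b, a_cl)
print("R:", R, "S:", S)
print("check:", np.convolve(a, R) + np.convolve(np.concatenate([[0.0], b]), S))
```

Allowing R and S of higher degree makes the linear system underdetermined, and it is exactly that freedom which the robust selection procedures discussed in the thesis set out to exploit.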
|