141

Essays on Computational Problems in Insurance

Ha, Hongjun 31 July 2016 (has links)
This dissertation consists of two chapters. The first chapter establishes an algorithm for calculating capital requirements. The calculation of capital requirements for financial institutions usually entails a reevaluation of the company's assets and liabilities at some future point in time for a (large) number of stochastic forecasts of economic and firm-specific variables. The complexity of this nested valuation problem leads many companies to struggle with the implementation. The current chapter proposes and analyzes a novel approach to this computational problem based on least-squares regression and Monte Carlo simulations. Our approach is motivated by a well-known method for pricing non-European derivatives. We study convergence of the algorithm and analyze the resulting estimate for practically important risk measures. Moreover, we address the problem of how to choose the regressors, and show that an optimal choice is given by the left singular functions of the corresponding valuation operator. Our numerical examples demonstrate that the algorithm can produce accurate results at relatively low computational costs, particularly when relying on the optimal basis functions. The second chapter discusses another application of regression-based methods, in the context of pricing variable annuities. Advanced life insurance products with exercise-dependent financial guarantees present challenging problems in view of pricing and risk management. In particular, due to the complexity of the guarantees and since practical valuation frameworks include a variety of stochastic risk factors, conventional methods that are based on the discretization of the underlying (Markov) state space may not be feasible. As a practical alternative, this chapter explores the applicability of Least-Squares Monte Carlo (LSM) methods familiar from American option pricing in this context. 
Unlike the previous literature, we consider optionality beyond surrendering the contract, focusing on popular withdrawal benefits (so-called GMWBs) within variable annuities. We introduce different LSM variants, particularly the regression-now and regression-later approaches, and explore their viability and potential pitfalls. We commence our numerical analysis in a basic Black-Scholes framework, where we compare the LSM results to those from a discretization approach. We then extend the model to include various relevant risk factors and compare the results to those from the basic framework.
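The regression-based valuation idea described in this entry can be illustrated with a small least-squares Monte Carlo sketch in Python. All parameters and the polynomial basis below are illustrative assumptions, not values from the thesis, and the chapter's optimal singular-function basis is not reproduced here. Instead of a nested (simulation-within-simulation) valuation, the time-t1 value of a European claim is approximated by regressing one discounted payoff sample per outer scenario on basis functions of the outer state:

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)
norm_cdf = np.vectorize(lambda v: 0.5 * (1.0 + erf(v / np.sqrt(2.0))))

# Illustrative Black-Scholes parameters (assumptions, not from the thesis).
S0, K, r, sigma, t1, T = 100.0, 100.0, 0.02, 0.2, 1.0, 2.0
n = 50_000

# Outer scenarios: the risk factor at the valuation horizon t1.
S1 = S0 * np.exp((r - 0.5 * sigma**2) * t1
                 + sigma * np.sqrt(t1) * rng.standard_normal(n))

# One inner sample per outer scenario instead of a full nested simulation.
ST = S1 * np.exp((r - 0.5 * sigma**2) * (T - t1)
                 + sigma * np.sqrt(T - t1) * rng.standard_normal(n))
payoff = np.exp(-r * (T - t1)) * np.maximum(ST - K, 0.0)

# Regression-now: project discounted payoffs onto a polynomial basis in S1.
basis = np.vander(S1 / S0, N=5)                 # columns x^4 ... x^0
coef, *_ = np.linalg.lstsq(basis, payoff, rcond=None)
V1_hat = basis @ coef                           # scenario-wise value estimate

# Benchmark: the exact Black-Scholes value at t1 in each outer scenario.
tau = T - t1
d1 = (np.log(S1 / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
V1_exact = (S1 * norm_cdf(d1)
            - K * np.exp(-r * tau) * norm_cdf(d1 - sigma * np.sqrt(tau)))
rmse = float(np.sqrt(np.mean((V1_hat - V1_exact) ** 2)))
```

A risk measure such as value-at-risk could then be read off the empirical distribution of `V1_hat`; with a well-chosen basis this avoids the inner simulation loop entirely, which is the computational point of the chapter.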
142

The relationship between lean service, activity-based costing and business strategy and their impact on performance

Hadid, Wael January 2014 (has links)
The lean system has drawn the attention of researchers and practitioners since its emergence in the 1950s. This is reflected in the increasing number of companies attempting to implement its practices and the large number of researchers investigating its effectiveness and identifying important contextual factors which affect its implementation. The rising level of interest in the lean system has led to the emergence of three distinctive streams of literature. The first stream has focused on the effectiveness of the lean system. However, this literature was limited as it mainly examined the additive impact of lean practices on operational performance in the manufacturing context. The second stream has focused on the role of the accounting system in the lean context. In this body of literature, there was agreement among researchers on the superiority of the activity-based costing (ABC) system over the traditional accounting system in supporting the implementation of lean practices. However, most studies in this strand of literature were either conceptual or case-based. The third stream has focused on the fit between business strategy and the lean system. However, inconclusive results were reported in relation to the suitability of the lean system to firms adopting the differentiation strategy and others adopting the cost leadership strategy. The aim of this study is to develop and empirically test a conceptual model which integrates the three distinctive streams of literature to extend their focus and overcome their limitations. More specifically, the model developed in the current study highlights not only the additive impact of lean practices but also the possible synergy among those practices in improving both the operational and financial performance of service firms. In addition, the model brings to light the potential intervening role of ABC in the strategy-lean association.
After identifying and reviewing the relevant literature, socio-technical system theory and contingency theory were used to develop the conceptual model and associated hypotheses. A questionnaire instrument was designed to collect empirical data, which was supplemented by objective data from the Financial Analysis Made Easy database in order to empirically test the conceptual model using partial least squares structural equation modelling (PLS-SEM). The findings of this study indicated that while the technical practices of lean service improved only the operational performance of service firms, the social practices enhanced both operational and financial performance. In addition, the two sets of practices positively interacted to improve firm performance over and above the improvement achieved from each set separately. Moreover, ABC was found to have a positive association with lean practices, and consequently an indirect positive relation with firm operational performance. Finally, both the differentiation and cost leadership strategies had a direct positive relationship with lean practices. However, while ABC was found to partially mediate the differentiation-lean association, it suppressed the cost leadership-lean association, leading to a case of inconsistent mediation. The current study contributes to the literature at different levels. First, at the theoretical level, this study develops a conceptual framework which crosses different streams of literature, mainly the lean system literature, the management accounting literature (with a focus on ABC), and the business strategy literature.
Unlike previous studies, by integrating the perspectives of socio-technical system theory and contingency theory, the model (i) highlights not only the additive but also the synergistic effect of lean service practices on firm performance, (ii) brings to light the direct impact of ABC and business strategy on lean service practices and the intervening role of ABC, due to which business strategy is assumed to have also an indirect influence on lean practices, and (iii) offers an alternative view on how ABC can improve firm performance by enhancing other organisational capabilities (lean practices) which are expected to improve performance. Second, at the methodological level, unlike previous studies, this study includes a large number of lean service practices and contextual variables to report more precisely on the lean-performance association. In addition, the inclusion of the financial performance dimension (measured by secondary data) in the model, besides operational performance, is critical to understanding the full capability of lean service in improving firm performance. Further, employing a powerful statistical technique (PLS-SEM) lends more credibility to the results reported in this study. Third, at the empirical level, this study is conducted in the UK service sector. As such, it is one of the very few studies to report on lean service and examine how the adoption of ABC and a specific type of business strategy can affect its implementation using empirical survey data from this context.
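The mediation structure tested in this thesis (strategy, ABC as mediator, lean practices as outcome) can be sketched numerically. The code below is a plain OLS stand-in rather than PLS-SEM, and all coefficients and variable names are invented for illustration; it only demonstrates the effect decomposition (total = direct + indirect) that underlies the reported inconsistent mediation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Invented synthetic variables: strategy emphasis, ABC adoption, lean adoption.
strategy = rng.standard_normal(n)
abc = 0.5 * strategy + rng.standard_normal(n)               # mediator path a
lean = 0.3 * strategy + 0.4 * abc + rng.standard_normal(n)  # paths c' and b

def ols(y, *cols):
    """Return OLS coefficients [intercept, col1, col2, ...]."""
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(abc, strategy)[1]                  # strategy -> ABC
_, b, c_direct = ols(lean, abc, strategy)  # ABC -> lean, and direct effect
c_total = ols(lean, strategy)[1]           # total strategy -> lean effect
indirect = a * b                           # mediated (indirect) effect
```

For linear OLS the decomposition `c_total == c_direct + indirect` holds exactly; inconsistent mediation (as found for the cost leadership path) is the case where the direct and indirect effects carry opposite signs, so the total effect can be smaller than the direct effect.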
143

Supervised Descent Method

Xiong, Xuehan 01 September 2015 (has links)
In this dissertation, we focus on solving Nonlinear Least Squares problems using a supervised approach. In particular, we developed a Supervised Descent Method (SDM), performed a thorough theoretical analysis, and demonstrated its effectiveness on optimizing analytic functions and on four real-world applications: Inverse Kinematics, Rigid Tracking, Face Alignment (frontal and multi-view), and 3D Object Pose Estimation. In Rigid Tracking, SDM was able to take advantage of more robust features, such as HoG and SIFT. Such non-differentiable image features were out of reach for previous work, which relied on gradient-based methods for optimization. In Inverse Kinematics, where we minimize a non-convex function, SDM achieved significantly better convergence than gradient-based approaches. In Face Alignment, SDM achieved state-of-the-art results. Moreover, it was extremely computationally efficient, which makes it applicable to many mobile applications. In addition, we provided a unified view of several popular methods, including SDM, on sequential prediction, and reformulated them as a sequence of function compositions. Finally, we suggested some future research directions for SDM and sequential prediction.
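The core of SDM, learning generic descent directions by least-squares regression instead of computing gradients, can be sketched on a scalar toy problem. The function, training range, and number of stages below are assumptions chosen for illustration, not the dissertation's experiments:

```python
import numpy as np

rng = np.random.default_rng(2)

def g(x):
    """Toy nonlinear observation model (monotone on the training range)."""
    return np.sin(x) + 0.5 * x

# Training set: ground-truth parameters and their observations.
x_true = rng.uniform(-2.0, 2.0, 5000)
y_obs = g(x_true)

x = np.zeros_like(x_true)      # shared initial estimate, as in SDM
stages = []                    # one learned (gain, bias) pair per stage
for _ in range(4):
    resid = y_obs - g(x)       # feature: the current residual
    A = np.column_stack([resid, np.ones_like(resid)])
    # Least-squares fit of the ideal update x_true - x as a linear map.
    r, *_ = np.linalg.lstsq(A, x_true - x, rcond=None)
    stages.append(r)
    x = x + A @ r              # apply the learned descent step

train_rmse = float(np.sqrt(np.mean((x - x_true) ** 2)))

# Test time: no gradients at all, just the cascade of learned linear maps.
xt_true, xt = 1.3, 0.0
for gain, bias in stages:
    xt = xt + gain * (g(xt_true) - g(xt)) + bias
```

Because each stage's regression can always choose a zero map, the training error is non-increasing across stages; at test time the same fixed cascade is applied, which is what makes the method fast enough for mobile use.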
144

Sparse Linear Modeling of Speech from EEG / Gles Linjära Modellering av Tal från EEG

Tiger, Mattias January 2014 (has links)
For people with hearing impairments, attending to a single speaker against a multi-talker background can be very difficult, and something with which current hearing aids can barely help. Recent studies have shown that the audio stream a human focuses on can be identified among the surrounding audio streams using EEG and linear models. This raises the possibility of using EEG to unconsciously control future hearing aids so that the attended sounds are enhanced while the rest are damped. For such hearing aids to be practical for everyday use, they will need something other than a motion-sensitive, precisely placed EEG cap. This could possibly be achieved by placing the electrodes together with the hearing aid in the ear. Oticon, one of the leading hearing aid manufacturers, and its research lab Eriksholm Research Centre have recorded an EEG data set of people listening to sentences, in which electrodes were placed in and closely around the ears. We have analyzed the data set by applying a range of signal processing approaches, mainly in the context of audio estimation from EEG. Two different types of linear sparse models based on L1-regularized least squares are formulated and evaluated, providing automatic dimensionality reduction in that they significantly reduce the number of channels needed. The first model is based on linear combinations of spectrograms and the second is based on linear temporal filtering. We have investigated the usefulness of the in-ear electrodes and found some positive indications. All models explored consider the in-ear electrodes to be the most important, or among the more important, of the 128 electrodes in the EEG cap. This could be a positive indication of the future possibility of using only electrodes in the ears for future hearing aids.
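The channel-selection effect of L1-regularized least squares can be sketched with a small iterative soft-thresholding (ISTA) solver on synthetic data. The 32 "channels", the sparsity pattern, and the regularization weight below are invented; real EEG regressors are strongly correlated across channels, which this sketch ignores:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for EEG-to-audio regression: 32 channels, 3 informative.
n, p = 400, 32
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[[2, 11, 25]] = [1.5, -2.0, 1.0]
y = X @ w_true + 0.1 * rng.standard_normal(n)

def ista(X, y, lam, n_iter=500):
    """L1-regularised least squares via iterative soft-thresholding."""
    L = np.linalg.norm(X, 2) ** 2        # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)
        z = w - grad / L                 # gradient step on the smooth part
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return w

w_hat = ista(X, y, lam=20.0)
selected = np.flatnonzero(np.abs(w_hat) > 1e-3)   # surviving "channels"
```

The soft-thresholding step drives uninformative coefficients exactly to zero, which is the automatic dimensionality reduction the abstract refers to: only the channels that actually help predict the audio keep nonzero weights.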
145

The role of the human nasal cavity in patterns of craniofacial covariation and integration

Lindal, Joshua 18 January 2016 (has links)
Climate has a selective influence on nasal cavity morphology. Due to the constraints of cranial integration, naturally selected changes in one structure necessitate changes in others in order to maintain structural and functional cohesion. The relationships between climate and skull/nasal cavity morphology have been explored, but the integrative role of nasal variability within the skull as a whole has not. This thesis presents two hypotheses: 1) patterns of craniofacial integration observed in 2D can be reproduced using 3D geometric morphometric techniques; 2) the nasal cavity exhibits a higher level of covariation with the lateral cranial base than with other parts of the skull, since differences in nasal morphology and basicranial breadth have both been linked to climatic variables. The results support the former hypothesis, but not the latter; covariation observed between the nasal cavity and other cranial modules may suggest that these relationships are characterized by a unique integrative relationship. / February 2016
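Covariation between modules of the kind measured in this thesis is typically quantified with two-block partial least squares: a singular value decomposition of the cross-covariance matrix between two blocks of landmark coordinates. A minimal sketch on invented data (no Procrustes superimposition, and the "blocks" are synthetic, not craniometric measurements):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two measurement blocks (e.g. nasal vs. basicranial variables) sharing one
# latent factor; loadings and noise level are arbitrary illustration values.
n = 200
latent = rng.standard_normal(n)
block1 = np.outer(latent, rng.standard_normal(6)) + 0.5 * rng.standard_normal((n, 6))
block2 = np.outer(latent, rng.standard_normal(8)) + 0.5 * rng.standard_normal((n, 8))

# Two-block PLS: SVD of the cross-covariance matrix between centred blocks.
b1 = block1 - block1.mean(axis=0)
b2 = block2 - block2.mean(axis=0)
U, s, Vt = np.linalg.svd(b1.T @ b2 / (n - 1))

# Scores on the first pair of singular axes; their correlation measures the
# strength of covariation between the two blocks.
scores1, scores2 = b1 @ U[:, 0], b2 @ Vt[0]
r = float(np.corrcoef(scores1, scores2)[0, 1])
```

A high correlation between the paired scores indicates strong integration between the two blocks; comparing such correlations across different block pairings is one way to test hypotheses like the nasal-basicranial one above.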
146

Aspects of model development using regression quantiles and elemental regressions

Ranganai, Edmore 03 1900 (has links)
Dissertation (PhD)--University of Stellenbosch, 2007. / ENGLISH ABSTRACT: It is well known that ordinary least squares (OLS) procedures are sensitive to deviations from the classical Gaussian assumptions (outliers) as well as data aberrations in the design space. The two major data aberrations in the design space are collinearity and high leverage. Leverage points can also induce or hide collinearity in the design space. Such leverage points are referred to as collinearity-influential points. As a consequence, over the years, many diagnostic tools to detect these anomalies, as well as alternative procedures to counter them, were developed. To counter deviations from the classical Gaussian assumptions, many robust procedures have been proposed. One such class of procedures is the Koenker and Bassett (1978) Regression Quantiles (RQs), which are natural extensions of order statistics to the linear model. RQs can be found as solutions to linear programming problems (LPs). The basic optimal solutions to these LPs (which are RQs) correspond to elemental subset (ES) regressions, which consist of subsets of minimum size to estimate the necessary parameters of the model. On the one hand, some ESs correspond to RQs. On the other hand, the literature shows that many OLS statistics (estimators) are related to ES regression statistics (estimators). There is therefore an inherent relationship among the three sets of procedures. The relationship between the ES procedure and the RQ one has been noted almost “casually” in the literature, while the latter has been fairly widely explored. Using these existing relationships between the ES procedure and the OLS one, as well as new ones, collinearity, leverage and outlier problems in the RQ scenario were investigated. Also, a lasso procedure was proposed as a variable selection technique in the RQ scenario and some tentative results were given for it. These results are promising.
Single case diagnostics were considered as well as their relationships to multiple case ones. In particular, multiple cases of the minimum size to estimate the necessary parameters of the model were considered, corresponding to a RQ (ES). In this way regression diagnostics were developed for both ESs and RQs. The main problems that affect RQs adversely are collinearity and leverage, due to the nature of the computational procedures and the fact that RQs' influence functions are unbounded in the design space but bounded in the response variable. As a consequence, RQs have a high affinity for leverage points and a high exclusion rate of outliers. The influential picture exhibited in the presence of both leverage points and outliers is the net result of these two antagonistic forces. Although RQs are bounded in the response variable (and therefore fairly robust to outliers), outlier diagnostics were also considered in order to have a more holistic picture. The investigations comprised analytic means as well as simulation. Furthermore, applications were made to artificial computer-generated data sets as well as standard data sets from the literature. These revealed that the ES based statistics can be used to address problems arising in the RQ scenario with some degree of success. However, due to the interdependence between the different aspects, viz. the one between leverage and collinearity and the one between leverage and outliers, “solutions” are often dependent on the particular situation. In spite of this complexity, the research did produce some fairly general guidelines that can be fruitfully used in practice.
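The ES-RQ correspondence at the heart of this thesis can be demonstrated directly: for a straight-line model, an exact regression quantile fit can be found by enumerating all two-point elemental regressions and picking the one that minimises the check (pinball) loss, since an optimal basic solution of the RQ linear program always passes through p data points. The data and the chosen quantile below are illustrative:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)

# Illustrative data: a straight line with asymmetric (exponential) errors.
n = 60
x = rng.uniform(0.0, 10.0, n)
y = 2.0 + 0.7 * x + rng.exponential(1.0, n)

def check_loss(res, tau):
    """Koenker-Bassett check (pinball) loss."""
    return float(np.sum(res * (tau - (res < 0.0))))

tau, best = 0.25, None
for i, j in combinations(range(n), 2):
    if x[i] == x[j]:
        continue
    slope = (y[j] - y[i]) / (x[j] - x[i])   # elemental regression through i, j
    intercept = y[i] - slope * x[i]
    loss = check_loss(y - (intercept + slope * x), tau)
    if best is None or loss < best[0]:
        best = (loss, intercept, slope)

loss, b0, b1 = best
```

The winning elemental regression is the 0.25-regression-quantile line: roughly a quarter of the residuals fall below it. Enumeration is only feasible for tiny problems; the LP formulation solves the same problem efficiently.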
147

Analysis of a Combined GLONASS/Compass-I Navigation Algorithm

Peng, Song, Xiao-yu, Chen, Jian-zhong, Qi 10 1900 (has links)
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / The Compass-I system is a satellite navigation system built by China. It is a regional positioning system based on the double-star positioning principle. Compass-I normally requires active positioning; in this paper, several passive positioning methods are put forward. A combined navigation mode based on GLONASS and Compass-I passive navigation is proposed. The differences between the coordinate and time systems of the two navigation systems are analyzed, and the user position is calculated by the least squares method. The combined navigation algorithm improves the visible satellite constellation structure and the positioning precision, so as to ensure the reliability and continuity of the positioning result.
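The least-squares position fix mentioned in the abstract can be sketched as an iterative (Gauss-Newton) solve for position and receiver clock bias from pseudoranges. The satellite geometry and receiver location below are purely illustrative, not real GLONASS or Compass-I ephemerides, and no coordinate- or time-frame conversion between the two systems is modelled:

```python
import numpy as np

# Synthetic satellite positions and receiver truth, in km (assumed geometry).
sats = np.array([[15600.0,  7540.0, 20140.0],
                 [18760.0,  2750.0, 18610.0],
                 [17610.0, 14630.0, 13480.0],
                 [19170.0,   610.0, 18390.0],
                 [12000.0, 18000.0, 15000.0]])
truth = np.array([1917.0, 6029.0, 770.0])
clock_bias = 3.0                                 # receiver clock error, in km
rho = np.linalg.norm(sats - truth, axis=1) + clock_bias   # pseudoranges

# Iterative least squares for the four unknowns x, y, z and clock bias.
est = np.zeros(4)
for _ in range(20):
    d = np.linalg.norm(sats - est[:3], axis=1)           # predicted ranges
    H = np.column_stack([(est[:3] - sats) / d[:, None],  # range partials
                         np.ones(len(sats))])            # clock partial
    dx, *_ = np.linalg.lstsq(H, rho - (d + est[3]), rcond=None)
    est += dx
```

With five (or more) satellites from the combined constellation, the four unknowns are overdetermined; the extra redundancy is what improves the constellation geometry and the reliability of the fix, as the paper argues.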
148

Archeologinių duomenų analizė. Sukimo ašies radimas / Analysis of archaeological data. Estimation of the axis of rotation

Misiukevičius, Ramūnas 30 June 2014 (has links)
Rapidly developing information technology (IT) has not bypassed archaeology. Archaeologists increasingly use computer programs not only for documenting, depicting, and reconstructing archaeological material, but also for reconstructing and modelling human activity, daily life, and the living environment. This task requires a multi-stage analysis to establish the origin, type, originality, and purpose of the finds. With this information we can learn a great deal about the people who used these objects: their knowledge, the tools they possessed, their customs, their migrations, and much else. The amount of knowledge about antiquity depends on the finds and on our ability to analyse them. This work presents one method for analysing pottery sherds: finding the axis of rotation. It is the first and essential stage of this type of analysis, because subsequent analyses depend on its results: extraction of the profile line, symmetry checking, segmentation, object typology, reconstruction, and finally the analysis of people's lives. Errors at this stage are decisive for the later stages, and the resulting knowledge can mislead the study of ancient cultures, their distribution, and their migration. The work discusses methods for finding the axis of rotation, their advantages and disadvantages, and provides examples.
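One standard way to estimate a sherd's rotation axis is line-geometric: every surface normal of a surface of revolution intersects the axis, which yields one linear homogeneous constraint per oriented point. The vase profile, patch size, and noise-free normals below are assumptions for illustration, not a claim about the method used in the thesis:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic sherd: a patch of a surface of revolution whose true axis is the
# vertical line through (1, 2, 0); profile radius r(z) = 3 + 0.5*sin(z).
m_pts = 500
z = rng.uniform(0.0, 2.0, m_pts)
theta = rng.uniform(0.0, 1.2, m_pts)          # a partial sherd, not a full pot
r, dr = 3.0 + 0.5 * np.sin(z), 0.5 * np.cos(z)
pts = np.column_stack([r * np.cos(theta) + 1.0, r * np.sin(theta) + 2.0, z])
normals = np.column_stack([np.cos(theta), np.sin(theta), -dr])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

# Every normal line of a surface of revolution meets the axis.  In Pluecker
# coordinates (direction a, moment m = c x a) that is one linear constraint
# per point: (p x n) . a + n . m = 0.  The SVD null vector solves them all.
A = np.hstack([np.cross(pts, normals), normals])
_, _, Vt = np.linalg.svd(A)
a, m = Vt[-1, :3], Vt[-1, 3:]
axis_dir = a / np.linalg.norm(a)
axis_point = np.cross(a, m) / (a @ a)         # axis point closest to origin
```

With real scanned sherds the normals are noisy, so the smallest singular value is nonzero and its size can serve as a quality measure for the recovered axis before the profile line is extracted.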
149

Post-manoeuvre and online parameter estimation for manned and unmanned aircraft

Jameson, Pierre-Daniel January 2013 (has links)
Parameterised analytical models that describe the trimmed in-flight behaviour of classical aircraft have been studied and are widely accepted by the flight dynamics community. The primary role of aircraft parameter estimation is therefore to quantify the parameter values which make up the models and define the physical relationship of the air vehicle with respect to its local environment. Nevertheless, a priori empirical predictions dependent on aircraft design parameters also exist, and these provide a useful means of generating preliminary values predicting the aircraft behaviour at the design stage. However, at present the only feasible means of actually proving and validating these parameter values remains to extract them through physical experimentation, either in a wind tunnel or from a flight test. With the advancement of UAVs, and in particular smaller UAVs (less than 1 m span), the ability to fly the full-scale vehicle and generate flight test data presents an exciting opportunity. Furthermore, UAV testing lends itself well to rapid prototyping with the use of COTS equipment. Real-time system identification was first used to monitor highly unstable aircraft behaviour in non-linear flight regimes while expanding the operational flight envelope. Recent development has focused on creating self-healing control systems, such as adaptive re-configurable control laws, to provide robustness against airframe damage, control surface failures or in-flight icing. In the case of UAVs, real-time identification would facilitate rapid prototyping, especially in low-cost projects with their constrained development time. In a small UAV scenario, flight trials could potentially be focused towards dynamic model validation, with the prior verification step done using the simulation environment.
Furthermore, the ability to check the estimated derivatives while the aircraft is flying would enable detection of poor data readings due to deficient excitation manoeuvres or atmospheric turbulence. Appropriate action could then be taken while all the equipment and personnel are in place. This thesis describes the development of algorithms to perform online system identification for UAVs with minimal analyst intervention. Issues pertinent to UAV applications were the type of excitation manoeuvres needed and the instrumentation required to record air data. Throughout the research, algorithm development was undertaken using an in-house Simulink© model of the Aerosonde UAV, which provided a rapid and flexible means of generating simulated data for analysis. In addition, the algorithms were further tested with real flight test data acquired from the Cranfield University Jetstream-31 aircraft G-NFLA during its routine operation as a flying classroom. Two estimation methods were principally considered, the maximum likelihood and least squares estimators, with the latter found to be best suited to the proposed requirements. In time-domain analysis, reconstruction of the velocity state derivatives Ẇ and V̇, needed for the SPPO and DR modes respectively, provided more statistically reliable parameter estimates without the need for an α- or β-vane. By formulating the least squares method in the frequency domain, data issues regarding the removal of bias and trim offsets could be more easily addressed while obtaining timely and reliable parameter estimates. Finally, the importance of using an appropriate input to excite the UAV dynamics, allowing the vehicle to show its characteristics, must be stressed.
150

Ground Object Recognition using Laser Radar Data : Geometric Fitting, Performance Analysis, and Applications

Grönwall, Christina January 2006 (has links)
This thesis concerns detection and recognition of ground objects using data from laser radar systems. Typical ground objects are vehicles and land mines. For these objects, the orientation and articulation are unknown. The objects are placed in natural or urban areas where the background is unstructured and complex. The performance of laser radar systems is analyzed in order to obtain models of the uncertainties in laser radar data. A ground object recognition method is presented. It handles general, noisy 3D point cloud data. The approach is based on the fact that man-made objects on a large scale can be considered to be of rectangular shape or can be decomposed into a set of rectangles. Several approaches to rectangle fitting are presented and evaluated in Monte Carlo simulations. Errors-in-variables are present and thus geometric fitting is used. The objects can have parts that are subject to articulation. A modular least squares method with outlier rejection, which can handle articulated objects, is proposed. This method falls within the iterative closest point framework. Recognition when several similar models are available is discussed. The recognition method is applied in a query-based multi-sensor system. The system covers the process from sensor data to the user interface, i.e., from low-level image processing to high-level situation analysis. In object detection and recognition based on laser radar data, the accuracy of the range values is important. A general direct-detection laser radar system applicable to hard-target measurements is modeled. Three time-of-flight estimation algorithms are analyzed: peak detection, constant fraction detection, and matched filtering. The statistical distribution of uncertainties in time-of-flight range estimations is determined. The detection performance for various shape conditions and signal-to-noise ratios is analyzed. Those results are used to model the properties of the range estimation error.
The detectors’ performances are compared with the Cramér-Rao lower bound. The performance of a tool for synthetic generation of scanning laser radar data is evaluated. The measurement system model exposes several design parameters, which makes it possible to test an estimation scheme under different types of system design. A parametric method, based on measurement error regression, that estimates an object’s size and orientation is described. Validations of both the measurement system model and the measurement error model, with respect to the Cramér-Rao lower bound, are presented.
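Two of the analysed time-of-flight estimators, naive peak detection and the matched filter, can be contrasted on a simulated direct-detection return. The pulse shape, units, and noise level below are illustrative assumptions, not the thesis's system parameters:

```python
import numpy as np

rng = np.random.default_rng(8)

# Simulated return waveform: a Gaussian pulse in white noise, one sample per
# "nanosecond" (illustrative units).
t = np.arange(0.0, 400.0, 1.0)
true_tof = 187.0
pulse_width = 8.0
signal = np.exp(-0.5 * ((t - true_tof) / pulse_width) ** 2)
noisy = signal + 0.15 * rng.standard_normal(t.size)

# 1) Naive peak detection: argmax of the raw waveform.
tof_peak = float(t[np.argmax(noisy)])

# 2) Matched filter: correlate with the known pulse shape, then take the peak.
template = np.exp(-0.5 * (np.arange(-40, 41) / pulse_width) ** 2)
corr = np.correlate(noisy, template, mode="same")
tof_mf = float(t[np.argmax(corr)])
```

The matched filter averages over the whole pulse and therefore tolerates far lower signal-to-noise ratios than raw peak detection, at the cost of requiring the pulse shape to be known; a constant fraction discriminator sits between the two in both complexity and robustness.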
