301

TRAINING TEACHERS TO IMPLEMENT SYSTEMATIC STRATEGIES IN PRESCHOOL CLASSROOMS WITH FIDELITY

Crawford, Rebecca V. 01 January 2018 (has links)
This study examined the fidelity of implementation by four Head Start teachers using the teaching strategies of constant time delay, enhanced milieu teaching, and system of least prompts with children with and without disabilities in an inclusive early childhood setting. The teachers worked with the researcher to determine appropriate skills to target for each teaching strategy. A multiple probe across behaviors design, replicated across the four teachers, was used to determine the effects of the teachers' fidelity of implementation of these evidence-based teaching strategies. The results showed that Head Start teachers could implement systematic teaching strategies with fidelity. The study also examined whether children with and without disabilities made progress towards their target skills; the results showed that they did.
302

Least Squares Estimation of the Pareto Type I and II Distribution

Chien, Ching-hua 01 May 1982 (has links)
Estimation of the Pareto distribution can be computationally expensive, and the standard method is badly biased. In this work, an improved least squares derivation is used, yielding estimates that are less biased. Numerical examples and figures are provided so that the solution can be observed more clearly. Furthermore, by varying the method of estimation, a comparison of the estimators of the parameters is given. The improved least squares derivation can be employed with confidence because it is economical and efficient.
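As a loose sketch of the general idea (not the thesis's improved derivation), Pareto Type I parameters can be estimated by an ordinary least squares fit to the log of the empirical survival function; the plotting-position convention and test data below are illustrative assumptions.

```python
import numpy as np

def pareto1_ls_estimate(x):
    """Least squares estimate of Pareto Type I parameters (alpha, x_m).

    For Pareto I, S(x) = (x_m / x)**alpha, so
    log S(x) = alpha*log(x_m) - alpha*log(x): linear in log(x).
    We regress the log empirical survival function on log(x).
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # Plotting-position estimate of the survival function (an assumption;
    # other conventions such as (n - i)/(n + 1) are equally common).
    surv = 1.0 - (np.arange(1, n + 1) - 0.5) / n
    slope, intercept = np.polyfit(np.log(x), np.log(surv), 1)
    alpha = -slope
    x_m = np.exp(intercept / alpha)
    return alpha, x_m

# Illustrative check on synthetic data with known parameters.
rng = np.random.default_rng(0)
sample = 2.0 * (1.0 - rng.random(5000)) ** (-1.0 / 3.0)  # Pareto I: alpha=3, x_m=2
print(pareto1_ls_estimate(sample))  # roughly (3.0, 2.0)
```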
303

Georeferencing Digital Camera Images Using Internal Camera Model

Nagdev, Alok 02 April 2004 (has links)
The NASA Airborne Topographic Mapper (ATM) is a laser scanning instrument used mainly to collect dense topographic data over much of the conterminous US coastline. Two digital cameras operating in consonance with the ATM instrument now provide 3-band (RGB) imagery in addition to the very rich topographic data. In its raw form this imagery has limited applications, since it is not georeferenced and suffers from heavy camera lens distortion. In this thesis, a processing system, Park-View, is developed to bring the imagery into a format more suitable for scientists' analytical and interpretive purposes. Park-View utilizes the well-gridded elevation data from layer four of another processing system, LaserMap, for georeferencing the digital camera images. Camera lens behavior is modeled using a 2D grid image, and all of its intrinsic parameters are ascertained. These parameters are then used to correct the lens distortions of the georeferenced images. Errors in the time-stamping of images and in the mounting angles of the camera are calculated using well-known tie points. Georeferenced images can be stored in either GeoTIFF or JPEG format. Individual images can be georeferenced or assembled into a mosaic, with the mosaic colors equalized across adjoining images. Park-View also provides a main GUI displaying the entire surveyed area, a mapper GUI for batch processing of all the images, and a display window for georeferenced images or mosaics. Additional capabilities could be added to the processing system to perform specific image processing operations on the images, such as edge detection and image enhancement.
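The abstract does not state which distortion model Park-View uses; a common choice for this kind of intrinsic-parameter correction is the Brown radial model, sketched below with illustrative calibration values.

```python
import numpy as np

def undistort_points(pts, fx, fy, cx, cy, k1, k2):
    """Correct radial lens distortion for pixel coordinates.

    pts: (N, 2) array of distorted pixel coordinates.
    (fx, fy, cx, cy): focal lengths and principal point from calibration.
    (k1, k2): radial distortion coefficients.

    Uses the Brown radial model, a common (assumed) choice; the abstract
    does not specify the model Park-View actually uses.
    """
    # Normalize to camera coordinates.
    x = (pts[:, 0] - cx) / fx
    y = (pts[:, 1] - cy) / fy
    # Iteratively invert the forward model x_d = x_u * (1 + k1 r^2 + k2 r^4)
    # by fixed-point iteration on the undistorted coordinates.
    xu, yu = x.copy(), y.copy()
    for _ in range(10):
        r2 = xu**2 + yu**2
        factor = 1.0 + k1 * r2 + k2 * r2**2
        xu, yu = x / factor, y / factor
    return np.column_stack([xu * fx + cx, yu * fy + cy])

# Illustrative parameters only (not from the thesis).
pts = np.array([[1200.0, 800.0], [100.0, 50.0]])
print(undistort_points(pts, fx=1500.0, fy=1500.0, cx=1024.0, cy=768.0,
                       k1=-0.12, k2=0.03))
```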
304

Semiparametric Estimation of Unimodal Distributions

Looper, Jason K 20 August 2003 (has links)
One often wishes to understand the probability distribution of stochastic data from experiments or computer simulations. However, when no model is given, practitioners must resort to parametric or nonparametric methods in order to gain information about the underlying distribution. Others have first used a nonparametric estimator to understand the underlying shape of a data set and then returned with a parametric method to locate the peaks; however, they were interested in estimating spectra, which may have multiple peaks, whereas in this work we are interested in approximating the peak position of a single-peak probability distribution. One method of analyzing a distribution of data is to fit a curve to it, or smooth it. Polynomial regression and least-squares fitting are examples of smoothing methods. Initial understanding of the underlying distribution can be obscured depending on the degree of smoothing; problems such as undersmoothing and oversmoothing must be addressed in order to determine the shape of the underlying distribution. Furthermore, smoothing of skewed data can give a biased estimate of the peak position. We propose two new approaches for statistical mode estimation based on the assumption that the underlying distribution has only one peak. The first method imposes the global constraint of unimodality locally, by requiring negative curvature over some domain. The second method performs a search that assumes a position for the distribution's peak and requires positive slope to the left and negative slope to the right. Each approach entails a constrained least-squares fit to the raw cumulative probability distribution. We compare the relative efficiencies [12] of these two estimators in finding the peak location for artificially generated data from the known families of Weibull, beta, and gamma distributions. Within each family a parameter controls the skewness or kurtosis, quantifying the shapes of the distributions for comparison. We also compare our methods with other estimators such as the kernel-density estimator, the adaptive histogram, and polynomial regression. By comparing the effectiveness of the estimators, we can determine which estimator best locates the peak position. We find that our estimators do not perform better than other known estimators, and that they are biased. Overall, an adaptation of kernel estimation proved to be the most efficient. The results of the work done in this thesis will be submitted, in a different form, for publication by D.A. Rabson and J.K. Looper.
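As a hedged sketch of the best-performing family of estimators in the comparison above (kernel estimation, not the thesis's constrained least-squares methods), a simple kernel-density mode estimator can be written as follows; the grid size and scipy's default bandwidth rule are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_mode(data, grid_size=2048):
    """Estimate the peak (mode) of a single-peak distribution via a
    Gaussian kernel density estimate, evaluated on a fine grid.

    A sketch of one of the competing estimators discussed above; the
    grid size and scipy's default bandwidth rule are assumptions.
    """
    kde = gaussian_kde(data)
    grid = np.linspace(data.min(), data.max(), grid_size)
    return grid[np.argmax(kde(grid))]

# Skewed test data from one of the known families used in the comparison.
rng = np.random.default_rng(1)
sample = rng.gamma(shape=3.0, scale=1.0, size=10_000)
print(kde_mode(sample))  # true mode of gamma(3, 1) is (3 - 1) * 1 = 2
```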
305

Human Body Motions Optimization for Able-Bodied Individuals and Prosthesis Users During Activities of Daily Living Using a Personalized Robot-Human Model

Menychtas, Dimitrios 16 November 2018 (has links)
Current clinical practice for upper body prosthesis prescription and training lacks a standardized, quantitative method to evaluate the impact of the prosthetic device. The amputee care team typically relies on prior experience to provide prescription and training customized for each individual. As a result, it is quite challenging to determine the right type and fit of a prosthesis and to provide appropriate training to properly utilize it early in the process. It is also very difficult to anticipate expected and undesired compensatory motions due to the reduced degrees of freedom of a prosthesis user. In an effort to address this, a tool was developed to predict and visualize the expected upper limb movements from a prescribed prosthesis and its suitability to the needs of the amputee. It is expected to help clinicians make decisions such as choosing between a body-powered and a myoelectric prosthesis, and whether to include a wrist joint. To generate the motions, a robotics-based model of the upper limbs and torso was created and a weighted least-norm (WLN) inverse kinematics algorithm was used. The WLN assigns a penalty (i.e., a weight) to each joint to create a priority among redundant joints; as a result, certain joints contribute more to the total motion. Two main criteria were hypothesized to dictate the human motion. The first was a joint prioritization criterion using a static weighting matrix: since different joints can be used to move the hand in the same direction, joint priority selects between equivalent joints. The second criterion was to select a range of motion (ROM) for each joint specifically for a task, on the assumption that if the joints' ROM is limited, all the unnatural postures that still satisfy the task will be excluded from the available solutions. Three sets of static joint prioritization weights were investigated: a set of weights optimized specifically for each task, a general set of static weights optimized for all tasks, and a set of weights based on the joints' absolute average velocities. Additionally, task joint limits were applied both independently and in conjunction with the static weights to assess the simulated motions they can produce. Using a generalized weighted inverse control scheme to resolve the redundancy, a human-like posture for each specific individual was created. Motion capture (MoCap) data were used to generate the weighting matrices required to resolve the kinematic redundancy of the upper limbs. Fourteen able-bodied individuals and eight prosthesis users with a transradial amputation on the left side participated in MoCap sessions, performing ROM and activities of daily living (ADL) tasks. The methods proposed here incorporate a patient's anthropometrics, such as height, limb lengths, and degree of amputation, to create an upper body kinematic model. The model has 23 degrees of freedom (DoFs) to reflect a human upper body, and it can be adjusted to reflect levels of amputation. The weighting factors resulting from this process showed how joints are prioritized during each task; their physical meaning is to demonstrate which joints contribute more to the task. Since the motion is distributed differently between able-bodied individuals and prosthesis users, the weighting factors shift accordingly, and this shift highlights the compensatory motion that exists in prosthesis users.
The results show that using a set of joint prioritization weights optimized for each specific task gave the lowest RMS error compared to common optimized weights. The velocity-based weights had a slightly higher RMS error than the task-optimized weights, but the difference was not statistically significant. The biggest benefit of that weight set is its simplicity to implement compared to the optimized weights; another is that the velocity-based weights explicitly show how mobile each joint is during a task and can be used alongside the ROM to identify compensatory motion. The inclusion of task joint limits gave lower RMS error when the joint movements were similar across subjects, so that the ROM of each joint for the task could be established more accurately; when the joint movements differed too much among participants, the inclusion of task limits was detrimental to the simulation. Therefore, the static set of task-specific optimized weights was found to be the most accurate and robust method, while the velocity-based weights method was simpler with similar accuracy. The methods presented here were integrated into a previously developed graphical user interface (GUI) that allows the clinician to input the data of the prospective prosthesis user. The simulated motions can be presented as an animation performing the requested task. Ultimately, the final animation can serve as a proposed kinematic strategy that a prosthesis user and a clinician can refer to as a guideline during the rehabilitation process. This work has the potential to impact current prosthesis prescription and training by providing personalized proposed motions for a task.
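A minimal sketch of one damped weighted least-norm step follows, assuming an illustrative planar two-joint arm rather than the thesis's 23-DoF upper-body model; the link lengths and weights are made up for the example.

```python
import numpy as np

def wln_ik_step(J, dx, weights, damping=1e-6):
    """One weighted least-norm inverse kinematics step.

    Minimizes dq^T W dq subject to J dq = dx, giving
    dq = W^-1 J^T (J W^-1 J^T + damping*I)^-1 dx.
    A larger weight penalizes a joint, so it contributes less motion.
    """
    W_inv = np.diag(1.0 / np.asarray(weights, dtype=float))
    JWJt = J @ W_inv @ J.T + damping * np.eye(J.shape[0])
    return W_inv @ J.T @ np.linalg.solve(JWJt, dx)

# Illustrative planar 2-link arm (not the thesis's upper-body model).
q = np.array([0.3, 0.6])           # joint angles (rad)
l1, l2 = 0.3, 0.25                 # link lengths (assumed, in meters)
J = np.array([
    [-l1*np.sin(q[0]) - l2*np.sin(q[0]+q[1]), -l2*np.sin(q[0]+q[1])],
    [ l1*np.cos(q[0]) + l2*np.cos(q[0]+q[1]),  l2*np.cos(q[0]+q[1])],
])
dx = np.array([0.01, 0.0])         # desired small hand displacement
# Penalizing joint 1 five times more shifts the motion to joint 2.
print(wln_ik_step(J, dx, weights=[5.0, 1.0]))
```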
306

A macroeconometric analysis of foreign aid in economic growth and development in least developed countries : a case study of the Lao People's Democratic Republic (1978-2001) : a dissertation presented in fulfilment of the requirements for the degree of Doctor of Philosophy in Economics at Massey University, Palmerston North, New Zealand

Xayavong, Vilaphonh Unknown Date (has links)
Despite receiving large quantities of aid, many developing countries, especially the Least Developed Countries, have remained stagnant and have become more aid-dependent. This grim reality provokes vigorous debate on the effectiveness of aid. This study re-examines the effectiveness of aid, focusing on the ongoing debate about the interactive effect of aid and policy conditionality on sustainable economic growth. A theoretical model of the aid-growth nexus was developed to explain why policy conditionality attached to aid may not always promote sustainable economic growth. Noticeable methodological weaknesses in the aid fungibility and aid-growth models led to the construction of two macroeconometric models that tackle and reduce these weaknesses. The economy of the Lao People's Democratic Republic over the 1978-2001 period is used as a case study. It is argued that the quality of policy conditionality and the recipient country's ability to complete the specified policy conditions are the main factors determining the effectiveness of aid. Completing the policy prescriptions contributes to a stable aid inflow. The aid-growth nexus model developed in this study shows that a stable and moderate aid inflow boosts economic growth even when aid is fungible. However, failure to complete the policy conditionality, owing to inadequate policy design and problems of policy mismanagement caused by a lack of state and institutional capability in the recipient country, triggers an unstable aid inflow. The model shows that unstable aid flows reduce capital accumulation and economic growth in the recipient country. These empirical findings reveal that policy conditionality propagated through the "adjustment programmes" mitigated the side effects of aid fungibility and "Dutch disease" in the case of the Lao PDR. Preliminary success in implementing the policy conditions in the pre-1997 period led to a stable aid inflow and contributed to higher economic growth. This favourable circumstance, however, was impaired by an unstable aid flow in the post-1997 period. The lack of state and institutional capacity in the Lao PDR and inadequate policy design for dealing with external shocks triggered the instability of the aid inflow, which in turn exacerbated the negative effects of the Asian financial crisis on the Lao PDR's economy.
307

A Model of Global Marketing in Multinational Firms: An Empirical Investigation

Venaik, Sunil, AGSM, UNSW January 1999 (has links)
With the increasing globalisation of the world economy, there is growing interest in international business research among academics, business practitioners and public policy makers. As marketing is usually the first corporate function to internationalise, it occupies centre stage in the international strategy debate. The objective of this study is to understand the environmental and organisational factors that drive the desirable outcomes of learning, innovation and performance in multinational firms. By adapting the IO-based, resource-based and contingency theories, the study proposes an environment-conduct-outcome framework and a model of global marketing in MNCs. Using the structural equation modelling-based PLS methodology, the model is estimated with data from a global survey of marketing managers in MNC subsidiaries. The results show that the traditional international marketing strategy and organisational structure constructs of adaptation and autonomy do not have a significant direct effect on MNC performance. Instead, the effects are largely mediated by the networking, learning and innovation constructs included in the proposed model. The study also shows that, whereas collaborative decision making has a positive effect on interunit learning, subsidiary autonomy has a significant influence on innovativeness in MNC subsidiaries. Finally, it is found that marketing mix adaptation has an adverse impact on the performance of MNCs facing high global integration pressures but improves the performance of MNCs confronted with low global integration pressures. The findings have important implications for global marketing in MNCs. First, to enhance organisational learning and innovation and ultimately improve corporate performance, MNCs should simultaneously develop the potentially conflicting organisational attributes of collective decision-making among subsidiaries and greater subsidiary autonomy. Second, to tap local knowledge, MNCs should increasingly regard their country units as 'colleges' or 'seminaries' of learning rather than merely as 'subsidiaries' with secondary or subordinate roles. Finally, to improve MNC performance, the key requirement is to achieve a good fit between the global organisational structure, the marketing strategy and the business environment. Overall, the results provide partial support for the IO-based and resource-based views and strong support for the contingency perspective in international strategy.
308

Regression methods in multidimensional prediction and estimation

Björkström, Anders January 2007 (has links)
In regression with near collinear explanatory variables, the least squares predictor has large variance, and ordinary least squares regression (OLSR) often leads to unrealistic regression coefficients. Several regularized regression methods have been proposed as alternatives. Well known are principal components regression (PCR), ridge regression (RR) and continuum regression (CR); the latter two involve a continuous metaparameter, offering additional flexibility.

For a univariate response variable, CR incorporates OLSR, partial least squares regression (PLSR) and PCR as special cases, for special values of the metaparameter. CR is also closely related to RR. However, CR can in fact yield regressors that vary discontinuously with the metaparameter, so the relation between CR and RR is not always one-to-one. We develop a new class of regression methods, LSRR, essentially the same as CR but without discontinuities, and prove that any optimization principle will yield a regressor proportional to a RR, provided only that the principle implies maximizing some function of the regressor's sample correlation coefficient and its sample variance. For a multivariate response vector we demonstrate that a number of well-established regression methods are related, in that they are special cases of essentially one general procedure. We try a more general method based on this procedure, with two metaparameters. In a simulation study we compare this method to ridge regression, multivariate PLSR and repeated univariate PLSR. For most types of data studied, all methods do approximately equally well. There are cases where RR and LSRR yield larger errors than the other methods, and we conclude that one-factor methods are not adequate for situations where more than one latent variable is needed to describe the data. Among the methods based on latent variables, none of those tried is superior to the others in any obvious way.
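As a brief illustration of one of the regularized methods compared above, ridge regression has a closed-form solution; the near-collinear test data here are illustrative.

```python
import numpy as np

def ridge_regression(X, y, lam):
    """Ridge regression coefficients: argmin ||y - Xb||^2 + lam*||b||^2,
    i.e. b = (X^T X + lam*I)^-1 X^T y.

    Shrinking toward zero stabilizes the fit when columns of X are
    near collinear, at the cost of some bias.
    """
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Near-collinear explanatory variables (illustrative data).
rng = np.random.default_rng(2)
x1 = rng.normal(size=200)
X = np.column_stack([x1, x1 + 1e-3 * rng.normal(size=200)])
y = X @ np.array([1.0, 1.0]) + 0.1 * rng.normal(size=200)
print(ridge_regression(X, y, lam=0.0))  # OLS: individually unstable coefficients
print(ridge_regression(X, y, lam=1.0))  # ridge: coefficients near (1, 1)
```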
309

An Evaluation of Shortest Path Algorithms on Real Metropolitan Area Networks

Johansson, David January 2008 (has links)
This thesis examines some of the best known algorithms for solving the shortest point-to-point path problem, and evaluates their performance on real metropolitan area networks. The focus has mainly been on Dijkstra's algorithm and different variations of it, and the algorithms have been implemented in C# for the practical tests. The size of the networks used in this study varied between 358 and 2464 nodes, and both running time and representative operation counts were measured.

The results show that many different factors besides the network size affect the running time of an algorithm, such as the arc-to-node ratio, path length and network structure. The queue implementation of Dijkstra's algorithm showed the worst performance and suffered heavily when the problem size increased. Two techniques for increasing the performance were examined: optimizing the management of labelled nodes and reducing the search space. A bidirectional Dijkstra's algorithm using a binary heap to store temporarily labelled nodes combines both of these techniques, and it performed best of all the tested algorithms in the practical tests.

This project was initiated by Netadmin Systems i Sverige AB, who needed a new path finding module for their network management system NETadmin. While this study is primarily of interest for researchers dealing with path finding problems in computer networks, it may also be useful in evaluations of path finding algorithms for road networks, since the two kinds of networks share some common characteristics.
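A minimal sketch of the binary-heap variant evaluated above, in Python rather than the thesis's C#, and one-directional (the best performer in the tests also searched bidirectionally):

```python
import heapq

def dijkstra(adj, source, target):
    """Point-to-point shortest path using a binary heap for the
    temporarily labelled nodes.

    adj: {node: [(neighbor, arc_cost), ...]} with non-negative costs.
    Returns the shortest distance, or None if target is unreachable.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; node was already settled cheaper
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return None

# Tiny illustrative network (not one of the metropolitan networks studied).
adj = {"a": [("b", 2), ("c", 5)], "b": [("c", 1), ("d", 4)], "c": [("d", 1)]}
print(dijkstra(adj, "a", "d"))  # 4, via a-b-c-d
```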
310

Design and Implementation of a Test Rig for a Gyro Stabilized Camera System

Eklånge, Johannes January 2006 (has links)
PolyTech AB in Malmköping manufactures gyro stabilized camera systems for helicopter applications. In this Master's thesis a shaker test rig for vibration testing of these systems is designed, implemented and evaluated. The shaker is required to have an adjustable frequency and displacement, and different shakers that meet these requirements are treated in a literature study.

The shaker chosen for the test rig is based on a mechanical solution that is described in detail. Additionally, all components used in the test rig are described and modelled. The test rig is identified and evaluated from different experiments carried out at PolyTech, where the major part of the identification is based on data collected from accelerometers.

The test rig model is used to develop a controller that controls the frequency and the displacement of the shaker. A three-phase motor is used to control the frequency of the shaker, and a linear actuator with a servo is used to control the displacement. The servo controller is designed using observer and state feedback techniques.

Additionally, the mount in which the camera system hangs is modelled and identified, using an identification method based on a nonlinear least squares (NLS) curve fitting technique.
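The abstract does not give the mount model that was fitted; as a hedged sketch of an NLS identification step, fitting an assumed damped-oscillation response with scipy is a typical pattern. The model form, parameters and data below are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_oscillation(t, amp, zeta, omega, phase):
    """Assumed second-order mount response; the thesis does not specify
    the model form, so this is illustrative only."""
    return amp * np.exp(-zeta * omega * t) * np.cos(omega * t + phase)

# Synthetic accelerometer-like data with noise (illustrative).
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 200)
true = damped_oscillation(t, amp=1.0, zeta=0.05, omega=30.0, phase=0.2)
measured = true + 0.05 * rng.normal(size=t.size)

# Nonlinear least squares fit; p0 is the required initial guess.
params, _ = curve_fit(damped_oscillation, t, measured,
                      p0=[0.8, 0.1, 28.0, 0.0])
print(params)  # roughly [1.0, 0.05, 30.0, 0.2]
```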
