331

Metodik för detektering av vägåtgärder via tillståndsdata / Methodology for detection of road treatments

Andersson, Niklas, Hansson, Josef January 2010
The Swedish Transport Administration maintains a database containing information on the condition of all paved, state-operated Swedish roads. The purpose of the database is to support the Pavement Management System (PMS). The PMS is used to identify road sections in need of treatment, to allocate resources, and to get a general picture of the condition of the road network. All major treatments should be reported, but this has not always been done.

The road condition is measured using a number of indicators of, for example, road unevenness. Rut depth is an indicator of the road's transverse unevenness. When a treatment has been carried out, the condition changes drastically, which is reflected in these indicators.

The purpose of this master's thesis is to use existing indicators to find points in time when a road has been treated.

We have created a SAS program based on simple linear regression to analyze rut depth changes over time. The program searches for level changes in the rut depth trend; a drastic negative change means that a treatment has been made.

The proportion of roads whose reported date for the latest treatment was earlier than the program's latest detected date was 37 percent. It turned out that the proportion of possible treatments found by the software relative to actually reported treatments differs between regions, with the regions North and Central showing the largest differences. There are also differences between road groups with different amounts of traffic. The differences between the regions do not depend entirely on the proportion of heavily trafficked roads being greater in some regions.
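To make the detection idea concrete, here is a minimal sketch in Python of the kind of level-shift test the abstract describes: fit a simple linear regression to a trailing window of rut depth measurements and flag a measurement that falls drastically below the extrapolated trend. The thesis used a SAS program; the window length, threshold, and data below are illustrative assumptions.

```python
# A minimal sketch of trend-break detection in a rut depth series,
# assuming yearly measurements. Illustrative only; not the thesis's SAS code.
import numpy as np

def detect_treatments(years, rut_depth, window=4, drop_threshold=3.0):
    """Flag years where rut depth falls far below the trend fitted
    to the preceding `window` measurements (a downward level shift)."""
    years = np.asarray(years, dtype=float)
    rut_depth = np.asarray(rut_depth, dtype=float)
    treated = []
    for i in range(window, len(years)):
        x, y = years[i - window:i], rut_depth[i - window:i]
        slope, intercept = np.polyfit(x, y, 1)          # simple linear regression
        predicted = slope * years[i] + intercept
        residual_sd = np.std(y - (slope * x + intercept)) + 1e-6
        # A drastic negative deviation from the trend suggests a treatment.
        if rut_depth[i] < predicted - drop_threshold * residual_sd:
            treated.append(years[i])
    return treated

# Example: steady rutting, resurfaced in 2007 (rut depth resets).
years = [2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008]
ruts  = [4.0, 5.1, 6.2, 7.0, 8.1, 9.0, 2.0, 3.1]
print(detect_treatments(years, ruts))   # -> [2007.0]
```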
332

Mean preservation in censored regression using preliminary nonparametric smoothing

Heuchenne, Cédric 18 August 2005
In this thesis, we consider the problem of estimating the regression function in location-scale regression models. This model assumes that the random vector (X,Y) satisfies Y = m(X) + s(X)e, where m(.) is an unknown location function (e.g. conditional mean, median, truncated mean, ...), s(.) is an unknown scale function, and e is independent of X. The response Y is subject to random right censoring, and the covariate X is completely observed.

In the first part of the thesis, we assume that m(x) = E(Y|X=x) follows a polynomial model. A new estimation procedure for the unknown regression parameters is proposed, which extends the classical least squares procedure to censored data. The proposed method is inspired by the method of Buckley and James (1979) but is, unlike the latter, non-iterative thanks to a nonparametric preliminary estimation step. The asymptotic normality of the estimators is established. Simulations are carried out for both methods and show that the proposed estimators usually have smaller variance and smaller mean squared error than the Buckley-James estimators.

In the second part, we suppose that m(.) = E(Y|.) belongs to some parametric class of regression functions. A new estimation procedure for the true, unknown vector of parameters is proposed, which extends the classical least squares procedure for nonlinear regression to the case where the response is subject to censoring. The proposed technique uses new 'synthetic' data points constructed from a nonparametric relation between Y and X. The consistency and asymptotic normality of the proposed estimator are established, and the estimator is compared via simulations with an estimator proposed by Stute in 1999.

In the third part, we study the nonparametric estimation of the regression function m(.). It is well known that the completely nonparametric estimator of the conditional distribution F(.|x) of Y given X=x suffers from inconsistency in the right tail (Beran, 1981); hence the location function m(x) cannot be estimated consistently in a completely nonparametric way whenever m(x) involves the right tail of F(.|x) (as is the case, e.g., for the conditional mean). We propose two alternative estimators of m(x) that do not share this inconsistency problem. The idea is to exploit the assumed location-scale model to improve the estimation of F(.|x), especially in the right tail. We obtain the asymptotic properties of the two proposed estimators of m(x). Simulations show that the proposed estimators outperform the completely nonparametric estimator in many cases.
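For intuition, the following sketch simulates the location-scale model Y = m(X) + s(X)e under right censoring and shows why a naive least squares fit that ignores censoring is biased, which is the problem the thesis's corrected estimators address. The data-generating choices are assumptions for illustration; this is not the estimator proposed in the thesis.

```python
# A minimal sketch: naive least squares on right-censored responses is
# biased downward, while an (infeasible) fit on the latent Y recovers m.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(0, 1, n)
m = lambda x: 1.0 + 2.0 * x          # true location function (assumed)
s = lambda x: 0.5 + 0.5 * x          # true scale function (assumed)
Y = m(X) + s(X) * rng.standard_normal(n)

C = rng.exponential(3.0, n)          # censoring times (assumed distribution)
T = np.minimum(Y, C)                 # observed, possibly censored, response
delta = (Y <= C).astype(int)         # 1 = uncensored

def ls_fit(x, y):
    """Simple least squares fit y ~ a + b*x, returning (a, b)."""
    b, a = np.polyfit(x, y, 1)
    return a, b

print("true:        a=1.00  b=2.00")
print("oracle fit:  a=%.2f  b=%.2f" % ls_fit(X, Y))   # uses latent Y
print("naive fit:   a=%.2f  b=%.2f" % ls_fit(X, T))   # treats T as Y: biased
```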
333

Aerodynamic Parameter Estimation Of A Missile In Closed Loop Control And Validation With Flight Data

Aydin, Gunes 01 September 2012
Aerodynamic parameter estimation from closed loop data has developed into a research area of its own since control and stability augmentation systems became mandatory for aircraft. This thesis focuses on aerodynamic parameter estimation of an air-to-ground missile from closed loop data using separate surface excitations. A design procedure for the separate surface excitations is proposed, and the effect of the excitation signals on the system is analyzed by examining the autopilot's disturbance rejection performance. Aerodynamic parameters are estimated using two different estimation techniques, ordinary least squares and complex linear regression. The results are compared with each other and with the aerodynamic database. An application of the studied techniques to a real system is also given to validate that they are directly applicable in practice.
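As a concrete picture of the ordinary least squares step, the sketch below estimates the coefficients of a linear pitching-moment model from simulated maneuver data. The model structure and numbers are illustrative assumptions, not the thesis's missile database.

```python
# A minimal sketch of OLS estimation of aerodynamic derivatives from
# (simulated) flight data. All values below are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 500
alpha = rng.normal(0.0, 0.05, n)      # angle of attack [rad]
q     = rng.normal(0.0, 0.10, n)      # nondimensional pitch rate
delta = rng.normal(0.0, 0.08, n)      # control surface deflection [rad]

# "True" pitching-moment derivatives, assumed for the simulation.
Cm0, Cma, Cmq, Cmd = 0.02, -0.80, -4.0, -1.2
Cm = Cm0 + Cma*alpha + Cmq*q + Cmd*delta + rng.normal(0, 0.002, n)

# Ordinary least squares: stack the regressors and solve in one shot.
A = np.column_stack([np.ones(n), alpha, q, delta])
theta, *_ = np.linalg.lstsq(A, Cm, rcond=None)
print("estimated [Cm0, Cma, Cmq, Cmd]:", np.round(theta, 3))
```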
335

Observed score equating with covariates

Bränberg, Kenny January 2010
In test score equating the focus is on finding the relationship between the scales of different test forms. This can be done only if data are collected in such a way that the effect of differences in ability between groups taking different test forms can be separated from the effect of differences in test form difficulty. In standard equating procedures this problem has been solved by using common examinees or common items. With common examinees, as in the equivalent groups design, the single group design, and the counterbalanced design, the examinees taking the test forms are either exactly the same, i.e., each examinee takes both test forms, or random samples from the same population. Common items (anchor items) are usually used when the samples taking the different test forms are assumed to come from different populations. The thesis consists of four papers, and the main theme in three of them is the use of covariates, i.e., background variables correlated with the test scores, in observed score equating. We show how covariates can be used to adjust for systematic differences between samples in a non-equivalent groups design when there are no anchor items, and how covariates can be used to decrease the equating error in an equivalent or non-equivalent groups design.

Paper I is the only paper whose focus is not the incorporation of covariates in equating. It is an introduction to test score equating and presents the author's thoughts on its foundations. Several of the definitions of test score equating found in the literature are presented, their similarities and differences are discussed, and an attempt is made to clarify the connection between the definitions and the most commonly used equating functions.

In Paper II a model is proposed for observed score linear equating with background variables. The idea is to adjust for systematic differences in ability between groups in a non-equivalent groups design by using information from background variables correlated with the observed test scores. It is assumed that, conditional on the background variables, the two samples can be seen as random samples from the same population, so the background variables explain the systematic ability differences between the populations. The proposed model consists of a linear regression model connecting the observed scores with the background variables and a linear equating function connecting observed scores on one test form to observed scores on the other. Maximum likelihood estimators of the model parameters are derived under an assumption of normally distributed test scores, and data from two administrations of the Swedish Scholastic Assessment Test illustrate the use of the model.

In Paper III we use the model presented in Paper II with two different data collection designs: the non-equivalent groups design (with and without anchor items) and the equivalent groups design. Simulated data are used to examine the effect, in terms of bias, variance, and mean squared error, of including covariates in the estimators. With the equivalent groups design the results show that using covariates can increase the accuracy of the equating. With the non-equivalent groups design the results show that using an anchor test together with covariates is the most efficient way of reducing the mean squared error of the estimators. Furthermore, with no anchor test, the background variables can be used to adjust for the systematic differences between the populations and produce unbiased estimators of the equating relationship, provided that the “right” variables are used, i.e., the variables explaining those differences.

In Paper IV we explore the idea of using covariates as a substitute for an anchor test in a non-equivalent groups design within the framework of Kernel Equating. Kernel Equating can be seen as a method with five steps: presmoothing, estimation of score probabilities, continuization, equating, and calculation of the standard error of equating. For each of these steps we give the theoretical results when observations on covariates are used as a substitute for scores on an anchor test. It is shown that the method developed for Post-Stratification Equating in the non-equivalent groups with anchor test design can be used with observations on the covariates instead of anchor test scores. The method is illustrated using data from the Swedish Scholastic Assessment Test.
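A minimal sketch of the core idea of Papers II-III: equate linearly using moments of each form's scores predicted at a common covariate distribution, so that group ability differences explained by the covariates are adjusted away. The single covariate, data, and pooling rule below are simplifying assumptions, not the papers' maximum likelihood procedure.

```python
# Linear observed-score equating with a covariate adjustment (sketch).
import numpy as np

def linear_equate(mu_x, sd_x, mu_y, sd_y):
    """Return the linear function mapping form-X scores to the form-Y scale."""
    return lambda x: mu_y + (sd_y / sd_x) * (np.asarray(x) - mu_x)

def adjusted_moments(scores, Z, Z_common):
    """Fit scores ~ Z by least squares; return mean/SD implied at Z_common."""
    A = np.column_stack([np.ones(len(Z)), Z])
    beta, *_ = np.linalg.lstsq(A, scores, rcond=None)
    resid_sd = np.std(scores - A @ beta)
    pred = np.column_stack([np.ones(len(Z_common)), Z_common]) @ beta
    return pred.mean(), np.sqrt(pred.var() + resid_sd**2)

# Example with one covariate (e.g., a background ability measure); assumed data.
rng = np.random.default_rng(2)
z_x, z_y = rng.normal(0.0, 1, 800), rng.normal(0.3, 1, 800)   # groups differ
x = 20 + 4*z_x + rng.normal(0, 3, 800)                        # form X scores
y = 23 + 4*z_y + rng.normal(0, 3, 800)                        # form Y is easier
Z_common = np.concatenate([z_x, z_y])                         # pooled target

mx, sx = adjusted_moments(x, z_x, Z_common)
my, sy = adjusted_moments(y, z_y, Z_common)
eq = linear_equate(mx, sx, my, sy)
print("score 20 on form X ~ %.1f on the form Y scale" % eq(20))  # ~23
```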
336

Aktivering av trafiksäkerhetskameror : En studie av kameraaktiveringens effekter på fordonshastigheter i Sverige / Activation of speed cameras : A study of the effects of camera activation on vehicle speeds in Sweden

Lundström, Josefine, Ruotsalainen, Juoni January 2008
An estimated 150 people were killed in road accidents caused by speed limit violations during 2006. Through automatic traffic safety control (ATK), the Swedish Road Administration (Vägverket) works to lower the number of speed-related accidents. By placing speed cameras along road sections, the average speed at those locations has been lowered. The expansion of the camera network is based on knowledge of, for example, how high the risk of speed-related accidents is on each road. The speed cameras always measure the speed of every passing vehicle but are not constantly activated to register speeding violations; the aim is now to optimize camera activation so as to reduce the number of registered cases without exceeding the processing capacity. Our purpose with this thesis is therefore to explore possible relations between the activation of the speed cameras and the speeds on the roads.

We studied the average speed and the number of speeding violations during twelve weeks evenly distributed over 2007. To see whether the results would differ, we used two different response variables in the analysis: multiple linear regression was used to analyze the average speed, while Poisson regression was used for the number of speeding violations. An activated camera proved to lower the average speed and reduce the number of violations in three regions (Skåne, Mälardalen, Norr).

To study the effect of maximal camera activation, an experiment was performed in the Mälardalen region at the beginning of 2008. Maximal activation did not lower the average speed; instead, the region's own activation policy appears to matter more for lowered average speeds. When traffic flow rises, the average speed falls while the number of violations increases. On commuter roads the average speed is lower and there are fewer speeding violations than on roads with normal traffic.
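A minimal sketch of the two analyses named in the abstract, assuming statsmodels and simulated data: an ordinary linear model for mean speed and a Poisson regression for violation counts, each with camera activation and traffic flow as predictors. Variable names and effect sizes are invented for illustration.

```python
# Multiple linear regression for mean speed, Poisson regression for counts.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 400
active = rng.integers(0, 2, n)                 # 1 = camera activated
flow = rng.uniform(200, 2000, n)               # vehicles per hour (assumed)

# Simulated outcomes: activation lowers mean speed and violation counts.
mean_speed = 92 - 4.0*active - 0.003*flow + rng.normal(0, 2, n)
lam = np.exp(2.0 - 0.5*active + 0.0004*flow)   # expected violation count
violations = rng.poisson(lam)

X = sm.add_constant(np.column_stack([active, flow]))

ols = sm.OLS(mean_speed, X).fit()              # linear model for mean speed
pois = sm.GLM(violations, X, family=sm.families.Poisson()).fit()

print(ols.params)    # const, activation effect, flow effect
print(pois.params)   # same effects, on the log scale for the Poisson model
```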
337

Non-contact measurement of soil moisture content using thermal infrared sensor and weather variables

Alshikaili, Talal 19 March 2007
The use of remote sensing technology has made non-contact measurement of soil moisture content (SMC) possible. Many remote sensing techniques can be used, such as microwave sensors, electromagnetic wave sensors, capacitance sensors, and thermal infrared sensors. Some of these techniques are constrained by high fabrication cost, operating cost, size, or complexity. In this study, a thermal infrared technique was used to predict soil moisture content with the aid of meteorological variables.

The measured variables in the experiment were soil moisture content (%SMC), soil surface temperature (Ts) measured using thermocouples, air temperature (Ta), relative humidity (RH), solar radiation (SR), and wind speed (WS). The experiment was carried out on a total of 12 soil samples of two soil types (clay/sand) and two compaction levels (compacted/non-compacted). Calibration models relating soil moisture content (SMC) to the soil-minus-air differential temperature (Td), relative humidity (RH), solar radiation (SR), and wind speed (WS) were generated using stepwise multiple linear regression on the calibration data set, and the performance of the models was evaluated using validation data. Four mathematical models predicting soil moisture content were generated for each soil type and configuration using the calibration data set. Among the four models, the best model for each soil type and configuration was determined by comparing the root mean squared error of calibration (RMSEC) and the root mean squared error of validation (RMSEV). Furthermore, a calibration model was developed for the thermal infrared sensor so that the soil surface temperature measured by the sensor (Tir) could be used instead of the thermocouple readings. The sensor's ability to predict soil moisture content was then tested for compacted and non-compacted sandy soils and compared with the predictive performance of the thermocouples, by using Tir instead of the thermocouple-measured soil surface temperature to determine the soil-minus-air temperature (Td). The sensor showed prediction performance comparable to that of the thermocouples.

Overall, the models developed in this study showed high prediction performance when tested with the validation data set. The best models to predict SMC for compacted clay soil, non-compacted clay soil, and compacted sandy soil were three-variable models containing Td, RH, and SR. On the other hand, the best model to predict SMC for compacted sandy soil was a two-variable model containing Td and RH. The results showed that the prediction performance of the models for the sandy soils was superior to that for the clay soils.
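The sketch below illustrates the stepwise multiple linear regression step on simulated calibration data: forward selection that greedily adds the weather variable giving the largest drop in calibration RMSE. The selection rule, stopping threshold, and data are assumptions; the thesis's actual stepwise criteria may differ.

```python
# Forward stepwise multiple linear regression of SMC on weather variables.
import numpy as np

def rmse_of_fit(y, X_cols):
    """RMSE of a least squares fit of y on an intercept plus X_cols."""
    A = np.column_stack([np.ones(len(y))] + list(X_cols))
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sqrt(np.mean((y - A @ beta) ** 2))

def forward_stepwise(y, candidates, min_gain=0.05):
    """Greedily add the candidate variable that most reduces calibration
    RMSE, stopping when the improvement falls below min_gain."""
    selected, remaining = [], dict(candidates)
    best_rmse = float(np.std(y))                 # intercept-only baseline
    while remaining:
        trials = {k: rmse_of_fit(y, [candidates[s] for s in selected] + [v])
                  for k, v in remaining.items()}
        name = min(trials, key=trials.get)
        if best_rmse - trials[name] < min_gain:
            break
        selected.append(name)
        best_rmse = trials[name]
        del remaining[name]
    return selected, best_rmse

# Simulated calibration data: SMC actually depends on Td, RH, and SR only.
rng = np.random.default_rng(4)
n = 200
Td = rng.normal(5, 2, n)      # soil-minus-air temperature
RH = rng.uniform(20, 90, n)   # relative humidity
SR = rng.uniform(0, 900, n)   # solar radiation
WS = rng.uniform(0, 8, n)     # wind speed
SMC = 30 - 2.0*Td + 0.1*RH - 0.01*SR + rng.normal(0, 1, n)

weather = {"Td": Td, "RH": RH, "SR": SR, "WS": WS}
print(forward_stepwise(SMC, weather))   # typically selects Td, SR, RH
```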
340

Fuzzy Classification Models Based On Tanaka's Fuzzy Linear Regression Approach

Ozer, Gizem 01 July 2009
In some classification problems, where human judgments and qualitative, imprecise data exist, uncertainty comes from fuzziness rather than randomness. Only a limited number of fuzzy classification approaches are available for such problems to capture the effect of the fuzzy uncertainty embedded in the data. The scope of this study comprises two main parts: new fuzzy classification approaches based on Tanaka's Fuzzy Linear Regression (FLR) approach, and an improvement of an existing one, Improved Fuzzy Classifier Functions (IFCF). Tanaka's FLR approach is a well-known fuzzy regression technique used for prediction problems involving fuzzy uncertainty. In the first part of the study, three alternative approaches are presented that utilize the FLR approach for a particular customer satisfaction classification problem; a comparison of their performance and their applicability to other cases is discussed. In the second part of the study, an improved IFCF method, Nonparametric Improved Fuzzy Classifier Functions (NIFCF), is presented, which uses a nonparametric method, Multivariate Adaptive Regression Splines (MARS), in the clustering phase of the IFCF method. The NIFCF method is applied to three data sets and compared with the Fuzzy Classifier Functions (FCF) and Logistic Regression (LR) methods.
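For reference, Tanaka's FLR with symmetric triangular coefficients is commonly solved as a linear program: minimize the total spread of the fuzzy coefficients subject to every observation lying inside the h-level prediction interval. The sketch below implements that textbook formulation with scipy; the data and the choice of h are illustrative.

```python
# Tanaka-style fuzzy linear regression as a linear program (sketch).
import numpy as np
from scipy.optimize import linprog

def tanaka_flr(X, y, h=0.5):
    """Fit y_i in [a.x_i +/- (1-h) c.|x_i|], minimizing the total spread.
    Returns the coefficient centers a and nonnegative spreads c."""
    X = np.column_stack([np.ones(len(y)), X])   # add intercept column
    n, p = X.shape
    absX = np.abs(X)
    # Decision vector z = [a (p entries), c (p entries)]; cost = total spread.
    cost = np.concatenate([np.zeros(p), absX.sum(axis=0)])
    # Inclusion constraints rewritten as A_ub @ z <= b_ub:
    #   -a.x_i - (1-h) c.|x_i| <= -y_i   (interval's upper bound covers y_i)
    #    a.x_i - (1-h) c.|x_i| <=  y_i   (interval's lower bound covers y_i)
    A_ub = np.vstack([np.hstack([-X, -(1 - h) * absX]),
                      np.hstack([ X, -(1 - h) * absX])])
    b_ub = np.concatenate([-y, y])
    bounds = [(None, None)] * p + [(0, None)] * p   # spreads are nonnegative
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p], res.x[p:]

# Illustrative data: crisp inputs, a single predictor.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.3])
a, c = tanaka_flr(x.reshape(-1, 1), y)
print("centers:", np.round(a, 2), "spreads:", np.round(c, 2))
```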
