About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

Influence of Advanced Airbags on Injury Risk During Frontal Crashes

Chen, Rong 17 September 2013 (has links)
The combination of airbag and seatbelt is considered the most effective vehicle safety system. However, despite the widespread availability of airbags and a belt use rate of over 85%, U.S. drivers involved in crashes continue to be at risk of serious thoracic injury. One hypothesis is that this risk may be due to the lack of airbag deployment, or to the airbag 'bottoming out' in some cases, causing drivers to make contact with the steering wheel. The objective of this study is to determine the influence of various advanced airbags on occupant injury risk in frontal automobile crashes. The analysis is based upon cases extracted from the National Automotive Sampling System Crashworthiness Data System (NASS/CDS) database for case years 1993-2011. The approach was to compare the frontal crash performance of advanced airbags against depowered airbags, first-generation airbags, and vehicles with no airbag equipped. NASS/CDS steering wheel deformation measurements were used to identify cases in which thoracic injuries may have been caused by steering wheel impact and deformation. The distributions of injuries for all cases were determined by body region and injury severity. These distributions were used to compare and contrast injury outcomes for cases with frontal airbag deployment for both belted and unbelted drivers. Among frontal crash cases with belted drivers, observable steering wheel deformation occurred in less than 4% of all cases, but accounted for 29% of all serious-to-fatally injured belted drivers and 28% of belted drivers with serious thoracic injuries (AIS3+). Similarly, observable steering wheel deformation occurred in approximately 13% of all cases with unbelted drivers involved in frontal crashes, but accounted for 58% of serious-to-fatally injured unbelted drivers and 66% of unbelted drivers with serious thoracic injuries.
In a frontal crash, the factors that were statistically significant for the probability of steering wheel deformation were longitudinal delta-V, driver weight, and driver belt status. Seatbelt pre-tensioners and load limiters were not significant factors influencing steering wheel deformation. Furthermore, belted drivers in vehicles with no airbag equipped were found to have 3 times higher odds of deforming the steering wheel, compared with drivers in similar crash scenarios in vehicles with advanced airbags. Similarly, unbelted drivers were found to have 2 times greater odds of deforming the steering wheel in vehicles with no airbags equipped, compared with vehicles with advanced airbags. The results also showed no statistically significant difference in the odds of deforming the steering wheel between depowered and advanced airbags. After controlling for crash severity and driver weight, the study showed that crashes with steering wheel deformation result in greater odds of injury in almost all body regions for both belted and unbelted drivers. Moreover, steering wheel deformation is more likely to occur with unbelted drivers than belted drivers, as well as in higher-severity crashes and with heavier drivers. Another potential factor influencing driver crash injury is the knee airbag. After comparing the odds of injury between vehicles with and without knee airbags equipped, belted drivers in vehicles equipped with knee airbags were found to have statistically smaller odds of injury to the thorax, abdomen, and upper extremities. Similarly, the findings showed that unbelted drivers benefited from knee airbags through statistically significant lower odds of chest and lower extremity injuries. However, the results should be considered with caution, as the study is limited by its small sample of vehicles with knee airbags. / Master of Science
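[Editor's note] The odds ratios quoted above come from logistic regression models of the crash data. A minimal sketch of how such odds ratios are read off a fitted model, using entirely synthetic data (the variable names `delta_v`, `driver_weight`, and `belted` only mirror the factors the abstract names, and the simulated effect sizes are invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
delta_v = rng.uniform(10, 80, n)          # longitudinal delta-V (km/h)
driver_weight = rng.normal(80, 15, n)     # driver weight (kg)
belted = rng.integers(0, 2, n)            # 1 = belted, 0 = unbelted

# Simulate deformation: more likely at high delta-V, high weight, unbelted
logit = -8.0 + 0.08 * delta_v + 0.03 * driver_weight - 1.0 * belted
p = 1.0 / (1.0 + np.exp(-logit))
deformed = rng.binomial(1, p)

X = np.column_stack([delta_v, driver_weight, belted])
model = LogisticRegression(max_iter=1000).fit(X, deformed)

# exp(coef) gives the multiplicative change in the odds of deformation
# per unit increase of each factor
odds_ratios = np.exp(model.coef_[0])
print(dict(zip(["delta_v", "driver_weight", "belted"], odds_ratios.round(3))))
```

An odds ratio above 1 (here, delta-V and weight) raises the odds of deformation; below 1 (belt use) lowers it, which is the pattern the study reports.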
142

Optimal one- and two-stage designs for the logistic regression model

Letsinger, William C. II 13 February 2009 (has links)
Binary response data is often modeled using the logistic regression model, a well known nonlinear model. Designing an optimal experiment for this nonlinear situation poses some problems not encountered with a linear model. The application of several optimality design criteria to the logistic regression model is explored, and many resulting optimal designs are given. The implementation of these optimal designs requires the parameters of the model to be known. However, the model parameters are not known. If they were, there would be no need to design an experiment. Consequently the parameters must be estimated prior to implementing a design. Standard one-stage optimal designs are quite sensitive to parameter misspecification and are therefore unsatisfactory in practice. A two-stage Bayesian design procedure is developed which effectively deals with poor parameter knowledge while maintaining high efficiency. The first stage makes use of Bayesian design as well as Bayesian estimation in order to cope with parameter misspecification. Using the parameter estimates from the first stage, the second stage conditionally optimizes a chosen design optimality criterion. Asymptotically, the two-stage design procedure is considerably more efficient than the one-stage design when the parameters are misspecified and only slightly less efficient when the parameters are known. The superiority of the two-stage procedure over the one-stage is even more evident for small samples. / Ph. D.
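[Editor's note] The abstract's point that one-stage optimal designs depend on the unknown parameters can be made concrete. For a one-variable logistic model, the D-optimal two-point design puts equal weight at the logits where the determinant of the Fisher information is maximised, the textbook value being about ±1.543 (response probabilities near 0.18 and 0.82); misspecified parameters shift those points and cost efficiency. A small numerical sketch (not taken from the thesis):

```python
import numpy as np

def d_criterion(eta):
    """Determinant of the Fisher information for a symmetric two-point
    design at logits +/- eta (equal weights, unit parameters)."""
    w = np.exp(eta) / (1 + np.exp(eta)) ** 2   # logistic weight p(1-p)
    # Information for (intercept, slope): averaging w*[[1,x],[x,x^2]]
    # over x = +eta and x = -eta cancels the off-diagonal terms
    info = w * np.array([[1.0, 0.0], [0.0, eta ** 2]])
    return np.linalg.det(info)

etas = np.linspace(0.1, 4.0, 2000)
best = etas[np.argmax([d_criterion(e) for e in etas])]
print(round(best, 3))  # grid optimum, close to the textbook 1.543
```

Since the optimal logits are fixed but the corresponding x-values depend on the true intercept and slope, the design cannot be placed without parameter knowledge — the motivation for the two-stage procedure.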
143

Predicting UFC matches using regression models

Apelgren, Sebastian, Eklund, Christoffer January 2024 (has links)
This project applied statistical inference methods to historical data of mixed martial arts (MMA) matches from the Ultimate Fighting Championship (UFC). The goal of the project was to create a model to predict the outcome of Ultimate Fighting Championship matches with the best possible accuracy. The main methods used in the project were logistic regression and Bayesian regression. The data used for said model was taken from matches between early April 2000 and mid April 2024. The predictions made by these models were compared with the predictions of various betting sites as well as with the true outcomes of the matches. The logistic regression model and the Bayesian model predicted the true outcome of the matches 60% and 70% of the time respectively, with both having comparable predictions to those of the betting sites.
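[Editor's note] A minimal sketch of the evaluation pattern the abstract describes: fit a logistic regression on match features and measure the share of held-out matches predicted correctly. The features here are invented stand-ins (e.g. differences in fighter statistics), not the thesis's actual data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 3000
# e.g. differences in win rate, reach, and strikes landed per minute
X = rng.normal(0, 1, (n, 3))
logit = 1.2 * X[:, 0] + 0.6 * X[:, 1] + 0.3 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # 1 = fighter A wins

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
acc = accuracy_score(y_te, model.predict(X_te))
print(f"held-out accuracy: {acc:.2f}")
```

The thesis's 60% and 70% figures come from exactly this kind of comparison between predicted and true outcomes, extended with a Bayesian model and betting-site baselines.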
144

Regression då data utgörs av urval av ranger / Regression when data consist of samples of ranks

Widman, Linnea January 2012 (has links)
Alpine skiers measure their performance through the so-called FIS ranking. We investigate methods for analyzing data where the response consists of samples of ranks such as these. In situations where the response data consist of samples of ranks, there is no obvious method of analysis. We examine the differences between regression fits such as linear, logistic, and ordinal logistic regression for analyzing data of this type. The bootstrap is then used to form confidence intervals. For our data, the methods give similar results when it comes to finding important explanatory variables; based on this study, we therefore see no reason to use the more advanced models.
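[Editor's note] A minimal sketch of the bootstrap confidence intervals mentioned in the abstract: resample (x, y) pairs with replacement, refit a simple regression each time, and take percentiles of the resampled slopes. The data are synthetic, not the thesis's skiing data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.normal(0, 1, n)
y = 2.0 * x + rng.normal(0, 1, n)   # true slope = 2

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)     # resample cases with replacement
    boot.append(slope(x[idx], y[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap CI for slope: [{lo:.2f}, {hi:.2f}]")
```

The same case-resampling scheme applies unchanged when the refitted model is a logistic or ordinal logistic regression rather than a straight line.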
145

Inkrementell responsanalys : Vilka kunder bör väljas vid riktad marknadsföring? / Incremental response analysis : Which customers should be selected in direct marketing?

Karlsson, Jonas, Karlsson, Roger January 2013 (has links)
If customers respond differently to a campaign, it is worthwhile to find those customers who respond most positively and direct the campaign towards them. This can be done using so-called incremental response analysis, where respondents from a campaign are compared with respondents from a control group. Customers with the highest increased response from the campaign will be selected, which may thus increase the company's return. Incremental response analysis is applied to the mobile operator Tre's historical data. The thesis investigates which method best explains the incremental response, namely finding those of Tre's customers who give the highest incremental response, and which characteristics are important. The analysis is based on various classification methods such as logistic regression, Lasso regression and decision trees. RMSE, the root mean square error of the deviation between observed and predicted incremental response, is used to measure the incremental response prediction error. The classification methods are evaluated by the Hosmer-Lemeshow test and AUC (Area Under the Curve). Bayesian logistic regression is also used to examine the uncertainty in the parameter estimates. The Lasso regression performs best compared to the decision tree, the ordinary logistic regression and the Bayesian logistic regression in terms of predicted incremental response. Variables that significantly affect the incremental response according to the Lasso regression are age and how long the customer has had their subscription.
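[Editor's note] A sketch of the Lasso-style variable selection the abstract relies on: L1-penalised logistic regression can shrink uninformative coefficients to exactly zero, which is how a couple of variables (such as age and subscription length) get singled out. All data and variable indices here are synthetic:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n, p = 1000, 10
X = rng.normal(0, 1, (n, p))
# Only the first two variables (standing in for "age" and
# "subscription length") actually drive the response
logit = 1.5 * X[:, 0] + 1.0 * X[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

lasso = LogisticRegression(penalty="l1", solver="liblinear",
                           C=0.05).fit(X, y)
selected = np.flatnonzero(lasso.coef_[0])
print("selected variable indices:", selected)
```

Smaller `C` means a stronger penalty and a sparser model; in practice the penalty is tuned, e.g. by cross-validation, rather than fixed as here.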
146

Smart task logging : Prediction of tasks for timesheets with machine learning

Bengtsson, Emil, Mattsson, Emil January 2018 (has links)
Every day most people use applications and services that utilise machine learning in some way without even knowing it. Some of these applications and services are, for example, Google's search engine, Netflix's recommendations, and Spotify's music tips. For machine learning to work it needs data, and often a large amount of it. Roughly 2.5 quintillion bytes of data are created every day in the modern information society. This huge amount of data can be utilised to make applications and systems smarter and automated. Time logging systems today are usually not smart, since users of these systems still must enter data manually. This bachelor thesis explores the possibility of applying machine learning to task logging systems to make them smarter and automated. The machine learning algorithm used to predict the user's task is multiclass logistic regression, which predicts categorical outcomes. When a small amount of training data was used in the machine learning process, the predictions of a task had a success rate of about 91%.
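[Editor's note] A sketch of multiclass logistic regression in the task-prediction role the abstract describes. The features stand in for timesheet context (weekday, hours worked) and the three classes for task labels; the generating rule and noise level are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 900
weekday = rng.integers(0, 5, n)           # 0 = Mon ... 4 = Fri
hours = rng.uniform(1, 8, n)
# Invented rule: short entries are "admin" (2), early-week entries are
# "meetings" (0), the rest "coding" (1); 10% of labels are randomised
task = np.where(hours < 2, 2, np.where(weekday < 2, 0, 1))
flip = rng.random(n) < 0.1
task = np.where(flip, rng.integers(0, 3, n), task)

X = np.column_stack([weekday, hours])
clf = LogisticRegression(max_iter=1000).fit(X, task)
acc = (clf.predict(X) == task).mean()
print(f"training accuracy: {acc:.2f}")
```

Softmax (multinomial) logistic regression carves the feature space into convex regions with linear boundaries, one region per task label, which suffices for rule-like structure such as this.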
147

Logistic regression to determine significant factors associated with share price change

Muchabaiwa, Honest 19 February 2014 (has links)
This thesis investigates the factors that are associated with annual changes in the share price of Johannesburg Stock Exchange (JSE) listed companies. In this study, an increase in the value of a share is when the share price of a company goes up by the end of the financial year as compared to the previous year. Secondary data sourced from the McGregor BFA website was used, covering 2004 to 2011. Deciding which share to buy is the biggest challenge faced by both investment companies and individuals when investing on the stock exchange. This thesis uses binary logistic regression to identify the variables that are associated with share price increase. The dependent variable was annual change in share price (ACSP) and the independent variables were assets per capital employed ratio, debt per assets ratio, debt per equity ratio, dividend yield, earnings per share, earnings yield, operating profit margin, price earnings ratio, return on assets, return on equity and return on capital employed. Different variable selection methods were used and it was established that the backward elimination method produced the best model. It was established that the probability of success of a share is higher if the shareholders are anticipating a higher return on capital employed and high earnings per share. It was, however, noted that the share price is negatively impacted by dividend yield and earnings yield. Since the odds of an increase in share price are higher if there is a higher return on capital employed and high earnings per share, investors and investment companies are encouraged to choose companies with high earnings per share and the best returns on capital employed. The final model had a classification rate of 68.3% and the validation sample produced a classification rate of 65.2%. / Mathematical Sciences / M.Sc. (Statistics)
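[Editor's note] A sketch of backward-style variable selection around a logistic regression, here implemented with recursive feature elimination (which repeatedly drops the weakest coefficient). The variable names echo the abstract's financial ratios, but the data and effect sizes are synthetic:

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
names = ["return_on_capital_employed", "earnings_per_share",
         "dividend_yield", "earnings_yield", "debt_per_equity"]
n = 1500
X = rng.normal(0, 1, (n, len(names)))   # standardised ratios
# Invented signs: the first two push the share price up, the next two
# down, and the last has no effect
logit = 0.9 * X[:, 0] + 0.7 * X[:, 1] - 0.6 * X[:, 2] - 0.5 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # 1 = price increased

rfe = RFE(LogisticRegression(), n_features_to_select=4).fit(X, y)
kept = [nm for nm, keep in zip(names, rfe.support_) if keep]
print("variables kept:", kept)
```

Classical backward elimination drops variables by p-value rather than coefficient magnitude, but the overall pattern — start with everything, iteratively remove the weakest — is the same.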
149

Detection of erroneous payments utilizing supervised and unsupervised data mining techniques

Yanik, Todd E. 09 1900 (has links)
Approved for public release; distribution is unlimited. / In this thesis we develop a procedure for detecting erroneous payments in the Defense Finance Accounting Service, Internal Review's (DFAS IR) Knowledge Base of Erroneous Payments (KBOEP), with the use of supervised (logistic regression) and unsupervised (Classification and Regression Trees (C&RT)) modeling algorithms. S-Plus software was used to construct a supervised model of vendor payment data using logistic regression, along with the Hosmer-Lemeshow test for assessing the predictive ability of the model. The Clementine data mining software was used to construct both supervised and unsupervised models of vendor payment data using the logistic regression and C&RT algorithms. The logistic regression algorithm in Clementine generated a model with predictive probabilities, which were compared against the C&RT algorithm. In addition to comparing the predictive probabilities, Receiver Operating Characteristic (ROC) curves were generated for both models to determine which model provided the best results for a coincidence matrix's true positive, true negative, false positive, and false negative fractions. The best modeling technique was C&RT, and it was given to DFAS IR to assist in reducing the manual record selection process currently being used. A recommended ruleset was provided, along with a detailed explanation of the algorithm selection process. / Lieutenant Commander, United States Navy
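[Editor's note] A sketch of the model comparison described above: fit a logistic regression and a classification tree on the same synthetic "payment" data and compare ROC AUC on a held-out set. The features are invented stand-ins, not the DFAS vendor-payment variables:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
n = 4000
X = rng.normal(0, 1, (n, 4))
# Include an interaction so the tree has structure to exploit
logit = -2.0 + 1.0 * X[:, 0] + 0.8 * X[:, 1] * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # 1 = erroneous payment

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)
logreg = LogisticRegression().fit(X_tr, y_tr)
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)

auc_lr = roc_auc_score(y_te, logreg.predict_proba(X_te)[:, 1])
auc_tr = roc_auc_score(y_te, tree.predict_proba(X_te)[:, 1])
print(f"logistic AUC: {auc_lr:.3f}  tree AUC: {auc_tr:.3f}")
```

ROC AUC summarises the true-positive/false-positive trade-off across all thresholds, which is why the thesis uses it alongside the coincidence (confusion) matrix fractions.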
150

High-dimensional classification and attribute-based forecasting

Lo, Shin-Lian 27 August 2010 (has links)
This thesis consists of two parts. The first part focuses on high-dimensional classification problems in microarray experiments. The second part deals with forecasting problems with a large number of categories in predictors. Classification problems in microarray experiments refer to discriminating subjects with different biologic phenotypes or known tumor subtypes as well as to predicting the clinical outcomes or the prognostic stages of subjects. One important characteristic of microarray data is that the number of genes is much larger than the sample size. The penalized logistic regression method is known for simultaneous variable selection and classification. However, the performance of this method declines as the number of variables increases. With this concern, in the first study, we propose a new classification approach that employs the penalized logistic regression method iteratively with a controlled size of gene subsets to maintain variable selection consistency and classification accuracy. The second study is motivated by a modern microarray experiment that includes two layers of replicates. This new experimental setting makes most existing classification methods, including penalized logistic regression, inappropriate for direct application because the assumption of independent observations is violated. To solve this problem, we propose a new classification method that incorporates random effects into penalized logistic regression so that the heterogeneity among different experimental subjects and the correlations from repeated measurements can be taken into account. An efficient hybrid algorithm is introduced to tackle computational challenges in estimation and integration. Applications to a breast cancer study show that the proposed classification method obtains smaller models with higher prediction accuracy than the method based on the assumption of independent observations.
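[Editor's note] A sketch of the p >> n setting the first study addresses: with far more "genes" than samples, a penalized (L1) logistic regression still produces a small selected subset. This is synthetic illustration, not the thesis's iterative gene-subset procedure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n, p = 60, 500                          # 60 subjects, 500 genes
X = rng.normal(0, 1, (n, p))
logit = 2.0 * X[:, 0] - 2.0 * X[:, 1]   # only 2 genes are informative
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

clf = LogisticRegression(penalty="l1", solver="liblinear",
                         C=0.5).fit(X, y)
n_selected = int((clf.coef_[0] != 0).sum())
print(f"{n_selected} of {p} genes kept")
```

As the abstract notes, selection of this plain penalized fit degrades as p grows, which motivates running it iteratively over controlled gene subsets rather than once over all variables.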
The second part of this thesis develops a new forecasting approach for large-scale datasets associated with a large number of predictor categories and with predictor structures. The new approach, beyond conventional tree-based methods, incorporates a general linear model and hierarchical splits to make trees more comprehensive, efficient, and interpretable. Through an empirical study in the air cargo industry and a simulation study containing several different settings, the new approach produces higher forecasting accuracy and higher computational efficiency than existing tree-based methods.
