611

Entwicklung und Implementierung eines Regressionsalgorithmus zur Prognose der Einsatzzeit von Feuerwehrkräften im Atemschutzeinsatz [Development and implementation of a regression algorithm for predicting the operation time of firefighters in breathing-apparatus operations]

Meister, Justin 14 June 2023 (has links)
This thesis presents an algorithm that uses regression on the status reports of firefighters operating under breathing apparatus to predict their operation time. The first part of the thesis covers the fundamentals and preparations for developing the algorithm. It includes a brief introduction to the subject, since few readers will be familiar with fire-service procedures, and the data considered for the algorithm are explained and analyzed. The thesis then presents the method the algorithm uses to determine the operation times. The second part deals with the implementation of the algorithm. First, special cases that cannot be solved with the standard method are considered. The implementation of the algorithm itself is then presented with excerpts from the source code, followed by an application that uses the algorithm to display the computed values during an operation and to pass data to the algorithm. The predicted values from a test run are then compared with the real values. The thesis closes with a brief summary and a few ideas for improvements and adaptations.
Table of contents:
1 Introduction: 1.1 Objectives and methodology; 1.2 Thematic scope; 1.3 Structure of the thesis
2 Analysis of the operational data: 2.1 Generation of the data during operations; 2.2 Resulting requirements; 2.3 Evaluation of the available data; 2.4 Models considered
3 The Gauss-Newton method: 3.1 Introduction to the method; 3.2 Step-size control (3.2.1 The simple step-size rule; 3.2.2 The Armijo step-size rule; 3.2.3 The Powell-Wolfe step-size rule; 3.2.4 Comparison of the step-size rules)
4 Development of the prediction algorithm: 4.1 More weights than data points; 4.2 Difficulties with closely spaced consecutive inputs; 4.3 Implementation of the algorithm
5 Using the prediction algorithm: 5.1 The prototype; 5.2 Integration of the algorithm (5.2.1 Calculation of the total time; 5.2.2 Calculation of the remaining pressure); 5.3 Breathing-apparatus monitoring per FwDV 7 (5.3.1 Requirements of FwDV 7; 5.3.2 Implementation of FwDV 7); 5.4 Further helpful functions (5.4.1 Retreat reminder; 5.4.2 Improving the predictions); 5.5 Use of the application in a training exercise
6 Summary and outlook
Bibliography; List of figures; List of tables; List of algorithms; Appendix; Declaration of independent authorship
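To make the core method named in this record concrete, here is a minimal sketch of the Gauss-Newton method with Armijo backtracking applied to a toy exponential pressure-decay model p(t) = p0 * exp(-t/tau). The model form, the readings, and the 100 bar retreat threshold are illustrative assumptions, not taken from the thesis; only the Gauss-Newton direction and the Armijo rule follow their standard textbook definitions.

```python
import numpy as np

def gauss_newton_armijo(residual, jacobian, x0, tol=1e-8, max_iter=50,
                        beta=0.5, sigma=1e-4):
    """Minimize 0.5 * ||r(x)||^2 by Gauss-Newton with Armijo backtracking."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        J = jacobian(x)
        g = J.T @ r                         # gradient of 0.5 * ||r||^2
        if np.linalg.norm(g) < tol:
            break
        # Gauss-Newton direction: least-squares solution of J d = -r
        d, *_ = np.linalg.lstsq(J, -r, rcond=None)
        # Armijo rule: shrink the step until sufficient decrease holds
        f0, t = 0.5 * (r @ r), 1.0
        while (t > 1e-12 and
               0.5 * np.sum(residual(x + t * d) ** 2) > f0 + sigma * t * (g @ d)):
            t *= beta
        x = x + t * d
    return x

# Toy fit: remaining cylinder pressure p(t) = p0 * exp(-t / tau)
t_obs = np.array([0.0, 5.0, 10.0, 15.0, 20.0])         # minutes since start
p_obs = np.array([300.0, 255.0, 218.0, 186.0, 158.0])  # reported bar

def residual(x):
    p0, tau = x
    return p0 * np.exp(-t_obs / tau) - p_obs

def jacobian(x):
    p0, tau = x
    e = np.exp(-t_obs / tau)
    return np.column_stack([e, p0 * e * t_obs / tau**2])

p0, tau = gauss_newton_armijo(residual, jacobian, x0=[280.0, 20.0])
print(f"p0 = {p0:.1f} bar, tau = {tau:.1f} min")
# Predicted time until the (assumed) retreat pressure of 100 bar is reached
print(f"predicted time to 100 bar: {tau * np.log(p0 / 100.0):.1f} min")
```

Each new pressure report appends a point to t_obs/p_obs and the fit is rerun, which is the sense in which such a predictor refines its estimate over the course of an operation.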
612

Optimal weight settings in locally weighted regression: A guidance through cross-validation approach

Puri, Roshan January 2023 (has links)
Locally weighted regression (LWR) is a powerful tool that allows the estimation of a different set of coefficients for each location in the underlying data, challenging the assumption of stationary regression coefficients across a study region. The accuracy of LWR largely depends on how a researcher establishes the relationship across locations, which is typically encoded in a weight matrix or weight function. This paper explores the different kernel functions used to assign weights to observations, including Gaussian, bi-square, and tri-cubic, and how the choice of weight variables and window size affects the accuracy of the estimates. We guide this choice through cross-validation and show that the bi-square function outperforms the other kernel functions. Our findings demonstrate that the optimal window size for LWR models depends on the cross-validation (CV) approach employed: in our empirical application, full-sample CV favors a larger window size, while CV by proxy favors a smaller one. Since CV by proxy focuses on the model's predictive ability in the vicinity of one specific point (usually a policy point or site), guiding model choice through this approach makes more intuitive sense when the researcher's aim is to predict the outcome at that specific site. To identify the optimal weight variables, we suggest exploring various combinations, but argue that an efficient alternative is to merge all continuous variables in the dataset into a single weight variable. / M.A. / Locally weighted regression (LWR) is a statistical technique that establishes a relationship between dependent and explanatory variables, focusing primarily on data points in proximity to a specific point of interest (the target point). This technique assigns varying degrees of importance to the observations near the target point, thereby allowing for the modeling of relationships that may exhibit spatial variability within the dataset. The accuracy of LWR largely depends on how researchers define relationships across different locations, which is often done using a "weight setting". We define a weight setting as a combination of a weight function (which determines how the observations around a point of interest are weighted before they enter the model), weight variables (which determine proximity between the point of interest and all other observations), and a window size (which determines the number of observations allowed into the local regression). To find which weight setting is optimal, that is, which combination of weight function, weight variables, and window size generates the lowest predictive error, researchers often employ a cross-validation (CV) approach. Cross-validation is a statistical method used to assess and validate the performance of a predictive model. It entails removing a host observation (a point of interest), predicting that point, and evaluating the accuracy of the prediction by comparing it with the actual value. In our study, we employ two CV approaches. The first is full-sample CV, where we remove a host observation and predict it using the full set of observations used in the given local regression.
The second is the CV-by-proxy approach, which checks prediction accuracy in the same way but focuses only on nearby points that share similar characteristics with the target point. We find that the bi-square function consistently outperforms the Gaussian and tri-cubic weight functions, regardless of the CV approach. However, the choice of an optimal window size in LWR models depends on the CV approach employed: while the full-sample CV method guides us toward a larger window size, CV by proxy directs us toward a smaller one. For identifying the optimal weight variables, we recommend exploring various combinations, but we also propose an efficient alternative: merging all continuous variables in the dataset into a single weight variable instead of striving to identify the best among thousands of different weight-variable settings.
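As an illustration of the mechanics described above, the following sketch implements local weighted fits with a bi-square kernel and full-sample leave-one-out CV over candidate window sizes. Here the window size is treated as a fixed kernel bandwidth on a single weight variable, the data are synthetic, and the helper names (bisquare, lwr_predict, loo_cv_error) are illustrative; the thesis's actual variables and window definition may differ. A CV-by-proxy variant would restrict the CV loop to host observations near the policy point.

```python
import numpy as np

def bisquare(d, h):
    """Bi-square kernel: w = (1 - (d/h)^2)^2 for d < h, else 0."""
    u = np.clip(d / h, 0.0, 1.0)
    return (1.0 - u**2) ** 2

def lwr_predict(x, y, x0, h):
    """Weighted linear fit centered on x0; return the prediction at x0."""
    w = bisquare(np.abs(x - x0), h)
    A = np.column_stack([np.ones_like(x), x])   # intercept + slope
    AtW = A.T * w                               # broadcasts w across rows
    beta = np.linalg.solve(AtW @ A, AtW @ y)    # weighted least squares
    return beta[0] + beta[1] * x0

def loo_cv_error(x, y, h):
    """Full-sample leave-one-out CV: drop each host point, predict it back."""
    idx = np.arange(len(y))
    errs = [(lwr_predict(x[idx != i], y[idx != i], x[i], h) - y[i]) ** 2
            for i in idx]
    return float(np.mean(errs))

# Synthetic data with a slope that varies across the study region
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 120))
y = np.sin(x) * x + rng.normal(0, 0.5, 120)

# Choose the window size (bandwidth) with the lowest CV error
for h in (0.5, 1.0, 2.0, 4.0):
    print(f"h = {h:3.1f}: LOO-CV MSE = {loo_cv_error(x, y, h):.3f}")
```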
613

Functional Data Analysis and its application to cancer data

Martinenko, Evgeny 01 January 2014 (has links)
The objective of the current work is to develop novel procedures for the analysis of functional data and to apply them to the investigation of gender disparity in survival of lung cancer patients. In particular, we use the time-dependent Cox proportional hazards model, where the clinical information is incorporated via time-independent covariates and the current age is modeled using its expansion over wavelet basis functions. We developed computer algorithms and applied them to a data set derived from the Florida Cancer Data depository (all personal information that would allow patients to be identified was removed). We also study the problem of estimating a continuous matrix-variate function of low rank. We construct an estimator of such a function using its basis expansion and the subsequent solution of an optimization problem with a Schatten-norm penalty. We derive an oracle inequality for the constructed estimator, study its properties via simulations, and apply the procedure to the analysis of dynamic contrast medical imaging data.
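For reference, the Schatten-norm penalty mentioned above has a standard definition, and a basis-expansion estimator penalized by it commonly takes the generic form below. This is a sketch of one standard formulation; the precise loss, basis, and penalty derived in the dissertation may differ.

```latex
% Schatten-p norm of a matrix A with singular values \sigma_1 \ge \sigma_2 \ge \dots;
% p = 1 gives the nuclear norm, which promotes low rank.
\[
  \|A\|_{S_p} \;=\; \Bigl(\sum_{j} \sigma_j(A)^p\Bigr)^{1/p}, \qquad p \ge 1.
\]
% With a basis expansion f(t) = \sum_{k=1}^{K} A_k \,\phi_k(t) of the
% matrix-variate function f, a penalized least-squares estimator of the
% coefficient matrices can be written generically as
\[
  (\hat{A}_1,\dots,\hat{A}_K)
  \;=\; \arg\min_{A_1,\dots,A_K}\;
  \sum_{i=1}^{n} \Bigl\| Y_i - \sum_{k=1}^{K} A_k \,\phi_k(t_i) \Bigr\|_F^2
  \;+\; \lambda \sum_{k=1}^{K} \|A_k\|_{S_1}.
\]
```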
614

Estimating the load rating of reinforced concrete bridges without plans

Ruiz, Edgardo 01 May 2020 (has links)
There are over 250,000 reinforced concrete bridges in the U.S., many of which have neither a load rating on record nor the plans required to perform the calculations. The U.S. Army owns and maintains hundreds of these bridges throughout the U.S. This dissertation describes the development of multiple regression models to estimate the load rating of reinforced concrete bridges. An exploratory analysis of the 2017 NBI data was performed to select a representative data sample. The data were found to contain multiple errors and required significant processing to extract a reliable sample for modeling. After processing, a sample of 31,112 bridges remained, sufficient for model training and testing. A six-variable model (Model A) was determined to provide the best performance while maintaining a desirably low level of complexity. The model was tested by checking the percentage of cases falling within its 95% prediction interval: 94.9% of the real values did. Given the concerns about the quality of the 2017 NBI data that arose during its exploration, as-built drawings from 50 slab bridges throughout the U.S. were collected. With these drawings, a new data sample was generated by calculating the load rating of each bridge. The as-built drawings also made it possible to investigate variables not available in the 2017 NBI, most notably the slab thickness. Because this sample was significantly smaller than the previous one, a repeated 10-fold cross-validation approach was taken to evaluate model performance. A five-variable model (Model B) was determined to provide the best trade-off between complexity and performance. Model B performed significantly better than Model A due to the inclusion of the slab-thickness variable. The models presented in this dissertation provide a valuable tool for reinforced concrete bridge owners tasked with assigning a load rating when no structural plans are available.
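The prediction-interval test described above can be illustrated with the textbook OLS interval: fit a linear model, compute the 95% interval for each held-out case, and report the share of real values that fall inside. The sketch below does this on synthetic data with an intercept plus six predictors, echoing the six-variable model; the data and coefficients are stand-ins, not the NBI variables or Model A.

```python
import numpy as np
from scipy import stats

def ols_prediction_interval(X, y, X_new, alpha=0.05):
    """Pointwise OLS prediction intervals (textbook formula)."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ (X.T @ y)
    resid = y - X @ beta
    s2 = resid @ resid / (n - p)                    # residual variance
    tcrit = stats.t.ppf(1 - alpha / 2, df=n - p)
    yhat = X_new @ beta
    # se includes both coefficient uncertainty and the new observation's noise
    se = np.sqrt(s2 * (1 + np.einsum('ij,jk,ik->i', X_new, XtX_inv, X_new)))
    return yhat - tcrit * se, yhat + tcrit * se

# Synthetic stand-in: intercept + six predictors
rng = np.random.default_rng(1)
n = 2000
Z = rng.normal(size=(n, 6))
X = np.column_stack([np.ones(n), Z])
y = X @ np.array([10.0, 2.0, -1.0, 0.5, 3.0, 0.0, 1.5]) + rng.normal(0, 2.0, n)

# Hold out 20% and check empirical coverage of the 95% interval,
# mirroring the "94.9% within the prediction interval" style of test
train = np.arange(n) < 1600
lo, hi = ols_prediction_interval(X[train], y[train], X[~train])
coverage = np.mean((y[~train] >= lo) & (y[~train] <= hi))
print(f"empirical coverage of the 95% prediction interval: {coverage:.1%}")
```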
615

An exploration of success factors in the healthcare supply chain

Tidwell, Matthew 07 August 2020 (has links)
This research builds on a 2009 study that surveyed healthcare professionals to assess their organizations' levels of supply chain maturity (SCM) and data standard readiness (DSR) on a scale of 1 to 5 [Smith, 2011]. With the survey data, Smith developed a 0-1 quadratic program to conserve the maximum amount of survey data while removing non-responses. This research uses that quadratic program, along with other machine learning algorithms and analysis methods, to investigate which factors contribute most to an organization's SCM and DSR levels. No specific factors were found; however, different levels of prediction accuracy were achieved across the five subsets and algorithms. The best-performing SCM prediction model was linear discriminant analysis on the Reduced subset at 50.84% accuracy, while the highest prediction accuracy for DSR was stepwise regression on the PCA subset at 45.00%. Most misclassifications found in this study were minimal.
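The headline figures above are cross-validated classification accuracies. As a sketch of the analogous computation, the following uses scikit-learn's linear discriminant analysis on simulated survey-style features with 1-5 maturity labels, plus a check of how many misclassifications are off by a single level (the sense in which errors can be "minimal"). The features and labels are simulated stand-ins, not Smith's survey data or the study's subsets.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict, cross_val_score

# Simulated stand-in: 8 survey-derived features and 1-5 maturity labels
rng = np.random.default_rng(2)
n = 300
X = rng.normal(size=(n, 8))
latent = X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1.5, n)
y = np.digitize(latent, np.quantile(latent, [0.2, 0.4, 0.6, 0.8])) + 1

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=10, scoring="accuracy")
print(f"10-fold CV accuracy: {acc.mean():.1%} (chance for 5 levels is 20%)")

# "Minimal" misclassifications: what share of errors miss by one level?
pred = cross_val_predict(lda, X, y, cv=10)
wrong = pred != y
print(f"errors off by exactly one level: {np.mean(np.abs(pred - y)[wrong] == 1):.1%}")
```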
616

Inference on Logistic Regression Models

Rashid, Mamunur 25 July 2008 (has links)
No description available.
617

COMPARISON OF LOGISTIC REGRESSION TO LATEST CART TREE STRUCTURE GENERATING ALGORITHMS

MA, YUN 28 September 2005 (has links)
No description available.
618

MEASUREMENT CIRCUITS AND MODELING TECHNIQUES FOR TITANIUM CAPACITORS

DeLibero, Michael L. 27 January 2016 (has links)
No description available.
619

Continuity of Personality Pathology Constructs in an Inpatient Sample: A Comparison of Linear and Count Regression Analyses Using the PID-5 and MMPI-2-RF

Menton, William 02 May 2016 (has links)
No description available.
620

Comparison of ridge regression and neural networks in modeling multicollinear data

Bakshi, Girish January 1996 (has links)
No description available.
