91

Information Matrices in Estimating Function Approach: Tests for Model Misspecification and Model Selection

Zhou, Qian January 2009 (has links)
Estimating functions have been widely used for parameter estimation in various statistical problems. Regular estimating functions produce parameter estimators with desirable properties, such as consistency and asymptotic normality. In quasi-likelihood inference, an important example of estimating functions, correct specification of the first two moments of the underlying distribution leads to information unbiasedness, which states that the two forms of the information matrix, the negative sensitivity matrix (the negative expectation of the first-order derivative of an estimating function) and the variability matrix (the variance of an estimating function), are equal; in other words, the analogue of the Fisher information is equivalent to the Godambe information. Consequently, information unbiasedness implies that the model-based covariance matrix estimator and the sandwich covariance matrix estimator are equivalent. By comparing the model-based and sandwich variance estimators, we propose information ratio (IR) statistics for testing misspecification of the variance/covariance structure under a correctly specified mean structure, in the context of linear regression models, generalized linear regression models, and generalized estimating equations. Asymptotic properties of the IR statistics are discussed. In addition, through extensive simulation studies, we show that the IR statistics are powerful in various applications: testing for heteroscedasticity in linear regression models, for overdispersion in count data, and for a misspecified variance function and/or a misspecified working correlation structure. Moreover, the IR statistics appear more powerful than the classical information matrix test proposed by White (1982). In the literature, model selection criteria have been discussed intensively, but almost all of them target choosing the optimal mean structure. In this thesis, two model selection procedures are proposed for selecting the optimal variance/covariance structure among a collection of candidate structures. One is based on a sequence of IR tests over all the competing variance/covariance structures. The other is based on an "information discrepancy criterion" (IDC), which measures the discrepancy between the negative sensitivity matrix and the variability matrix. In fact, the IDC characterizes the relative efficiency loss incurred by using a given candidate variance/covariance structure instead of the true but unknown structure. Through simulation studies and analyses of two data sets, it is shown that the two proposed model selection methods both have a high rate of detecting the true/optimal variance/covariance structure. In particular, since the IDC magnifies the differences among the competing structures, it is highly sensitive in detecting the most appropriate variance/covariance structure.
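For orientation, the two information matrices compared above can be written in standard estimating-function notation (the symbols below are a conventional choice, not notation taken from the thesis). For an estimating function g(Y; \theta):

    S(\theta) = -\mathbb{E}\!\left[\frac{\partial g(Y;\theta)}{\partial \theta^\top}\right],
    \qquad
    V(\theta) = \operatorname{Var}\{ g(Y;\theta) \},
    \qquad
    J(\theta) = S(\theta)^\top V(\theta)^{-1} S(\theta)

Here J(\theta) is the Godambe information. Information unbiasedness is the identity S(\theta) = V(\theta), under which J(\theta) reduces to S(\theta) and the model-based covariance estimator \hat{S}^{-1}/n coincides with the sandwich estimator \hat{S}^{-1}\hat{V}\hat{S}^{-\top}/n; an IR-type statistic quantifies the discrepancy between the two.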
92

Analysis of Correlated Data with Measurement Error in Responses or Covariates

Chen, Zhijian January 2010 (has links)
Correlated data frequently arise from epidemiological studies, especially familial and longitudinal studies. Longitudinal designs have been used by researchers to investigate how certain characteristics change over time at the individual level and how potential factors influence those changes. Familial studies are often designed to investigate the dependence of health conditions among family members. Various models have been developed for this type of multivariate data, and a wide variety of estimation techniques have been proposed. However, data collected from observational studies are often far from perfect, as measurement error may arise from different sources such as defective measuring systems, diagnostic tests without gold-standard references, and self-reports. In such scenarios only rough surrogate variables are measured. Measurement error in covariates has been discussed extensively in the literature for various regression models. It is well known that naive approaches ignoring covariate error often lead to inconsistent estimators of model parameters. In this thesis, we develop inferential procedures for analyzing correlated data with response measurement error. We consider three scenarios: (i) likelihood-based inference for generalized linear mixed models when the continuous response is subject to nonlinear measurement error; (ii) estimating-equation methods for binary responses with misclassification; and (iii) estimating-equation methods for ordinal responses when the response variable and categorical/ordinal covariates are subject to misclassification. The first problem arises when the continuous response variable is difficult to measure. When the true response is defined as the long-term average of measurements, a single measurement is considered an error-contaminated surrogate. We focus on generalized linear mixed models with nonlinear response error and study the bias induced in naive estimates. We propose likelihood-based methods that can yield consistent and efficient estimators for both fixed effects and variance parameters. Results of simulation studies and an analysis of a data set from the Framingham Heart Study are presented. Marginal models have been widely used for correlated binary, categorical, and ordinal data. The regression parameters characterize the marginal mean of a single outcome, without conditioning on other outcomes or unobserved random effects. The generalized estimating equations (GEE) approach, introduced by Liang and Zeger (1986), models only the first two moments of the responses, with associations treated as nuisance characteristics. For some clustered studies, especially familial studies, however, the association structure may be of scientific interest. For binary data, Prentice (1988) proposed additional estimating equations that allow one to model pairwise correlations. We consider marginal models for correlated binary data with misclassified responses. We develop "corrected" estimating-equation approaches that yield consistent estimators for both mean and association parameters. The idea is related to that of Nakamura (1990), originally developed for correcting the bias induced by additive covariate measurement error in generalized linear models. Our approaches can also handle correlated misclassification, rather than the simple misclassification process considered by Neuhaus (2002) for clustered binary data under generalized linear mixed models.
We extend our methods and further develop marginal approaches for the analysis of longitudinal ordinal data with misclassification in both responses and categorical covariates. Simulation studies show that the proposed methods perform very well under a variety of scenarios. Results from applying the proposed methods to real data are presented. Measurement error can be coupled with many other features of the data, e.g., complex survey designs, which can complicate inferential procedures. We explore combining survey weights and misclassification in ordinal covariates in logistic regression analyses, and propose an approach that incorporates survey weights into the estimating equations to yield design-based unbiased estimators. In the final part of the thesis we outline some directions for future work, such as transition models and semiparametric models for longitudinal data with both incomplete observations and measurement error. Missing data are another common feature in applications, and developing novel statistical techniques for handling both missing data and measurement error would be beneficial.
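As a pointer to how such corrections work, one standard construction for a misclassified binary response (a generic sketch, not necessarily the thesis's exact equations) assumes known sensitivity \pi_1 = P(Y^* = 1 \mid Y = 1) and specificity \pi_0 = P(Y^* = 0 \mid Y = 0):

    \mathbb{E}[Y^* \mid x] = \pi_1\,\mu(x;\beta) + (1-\pi_0)\{1-\mu(x;\beta)\},
    \qquad
    \tilde{Y} = \frac{Y^* + \pi_0 - 1}{\pi_1 + \pi_0 - 1}

so that \mathbb{E}[\tilde{Y} \mid x] = \mu(x;\beta), and estimating equations built from \tilde{Y} remain unbiased for the mean parameters.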
93

Life-cycle cost analysis and probabilistic cost estimating in engineering design using an air duct design case study

Asiedu, Yaw 01 January 2000 (has links)
Although the issue of uncertainty in cost model parameters has been recognized as an important aspect of life-cycle cost analysis, it is often ignored or not well treated in cost estimating. A simulation approach employing kernel estimation techniques, and their asymptotic properties, in the development of the probability distribution functions (PDFs) of cost estimates is proposed. This eliminates the guesswork inherent in current simulation-based cost estimating procedures, reduces the amount of data sampled, and makes it easier to specify the desired accuracy of the estimated distribution. Building energy costs can be reduced considerably if air duct systems are designed for the least life-cycle cost. The IPS-Method, a simple approach to HVAC air duct design, is suggested. The Diameter and Enhanced Friction Charts are also developed; these charts implicitly incorporate the LCC and are better than the existing Friction Chart for the selection of duct sizes. Through illustrative examples, their ease of use and effectiveness are demonstrated. For more complex designs, a Segregated Genetic Algorithm (SGA) is recommended. A sample problem with variable time-of-day operating conditions and utility rates is used to illustrate its capabilities, and the results are compared to those obtained using weighted-average flow rates and utility rates to show the life-cycle cost savings possible with this approach. Although life-cycle cost savings may be only between 0.4% and 8.3% for some simple designs, much larger savings may occur with more complex designs and operating constraints. The SGA is combined with probabilistic cost estimating to optimize HVAC air duct systems with uncertainty in the model parameters. The designs based on the SGA method tended to be less sensitive to typical variations in the component physical parameters and are therefore expected to result in lower balancing and operating costs.
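A hedged sketch of the simulate-then-smooth idea, kernel-estimating the PDF of a life-cycle cost from Monte Carlo draws (the cost model, parameter distributions, and all numbers below are invented placeholders, not the thesis's):

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)

    # Hypothetical duct-system cost model: initial cost plus the present value
    # of uncertain annual energy costs over a 20-year horizon.
    def simulate_lcc(n):
        initial = rng.normal(50_000, 5_000, n)          # installed cost, $
        energy = rng.lognormal(np.log(3_000), 0.3, n)   # annual energy cost, $
        rate = rng.uniform(0.03, 0.07, n)               # discount rate
        annuity = (1 - (1 + rate) ** -20) / rate        # present-value factor
        return initial + energy * annuity

    draws = simulate_lcc(2_000)
    pdf = gaussian_kde(draws)                 # kernel estimate of the cost PDF
    grid = np.linspace(draws.min(), draws.max(), 200)
    print(grid[pdf(grid).argmax()])           # mode of the estimated cost PDF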
94

A comparison between individuals' self-assessed quality of life and societal health preferences: A panel data study of heart patients

Lyth, Johan January 2006 (has links)
Objective: In recent years there has been increasing interest within clinical (medical) science in measuring people's health. When estimating quality of life, present practice is to use the EQ-5D questionnaire and an index that weights the different questions. The question is what happens if individuals value their own health: would the result differ from the public preferences? The aim is to build a new prediction model based on the opinions of patients and compare it to the present model based on public preferences. Method: A sample of 362 patients with unstable coronary artery disease from the FRISC II trial valued their quality of life in the acute phase and after 3, 6, and 12 months. The EQ-5D questionnaire and the Time Trade-off (TTO) method, a direct method of valuing health, were used. A regression technique for panel data was needed to estimate TTO from the EQ-5D and other variables such as gender and age. Result: Different regression techniques vary in their parameter estimates and standard errors. A Generalized Estimating Equations approach with an empirical correlation structure is the most suitable regression technique for this data material. A model based on the EQ-5D questionnaire and a continuous age variable proves to be the best model for an index derived from individuals. The differences between the heart patients' own valuations of health and the public preferences are large for severe health states but rather small for mild ones. Of the 243 health states in total, only eight were valued higher by the public index. Conclusions: As the differences between the approaches are large, the choice of index could affect the decision making in a health economic study.
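A minimal sketch of fitting such a panel model with GEE (synthetic data; the column names, effect sizes, and the exchangeable working correlation are placeholders, not the study's actual specification):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n, visits = 100, 4   # patients, measurement occasions (0/3/6/12 months)

    # Synthetic panel: repeated TTO valuations regressed on an EQ-5D summary
    # and age; the data-generating values are invented for illustration.
    df = pd.DataFrame({
        "patient": np.repeat(np.arange(n), visits),
        "eq5d": rng.uniform(0.2, 1.0, n * visits),
        "age": np.repeat(rng.integers(40, 85, n), visits),
    })
    df["tto"] = 0.1 + 0.8 * df["eq5d"] - 0.001 * df["age"] \
        + rng.normal(0, 0.1, n * visits)

    exog = sm.add_constant(df[["eq5d", "age"]])
    model = sm.GEE(df["tto"], exog, groups=df["patient"],
                   family=sm.families.Gaussian(),
                   cov_struct=sm.cov_struct.Exchangeable())
    print(model.fit().summary())   # robust ("empirical") SEs by default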
95

A Methodology to Develop an Integrated Engineering System to Estimate Quantities for Bridge Repairs at the Pre-Design Stage

Thaesler-Garibaldi, Maria P. 21 April 2005 (has links)
A Damage Assessment Model, a Construction Process Model, and a Parametric Quantity Model were developed with the purpose of capturing the engineering knowledge involved in the estimating process of bridge repair construction projects. The Damage Assessment Model was used to create a sample database in which detailed inspection data were stored in a format compatible with the existing Pontis database. Detailed inspection data, which provided quantitative values for the different damage types observed in bridges, could be retrieved from the sample database so that the data could be used either as input parameters in the knowledge rules that triggered the selection of construction tasks in the Construction Process Model, or as variables in the equations used to estimate quantities in the Parametric Quantity Model. The Construction Process Model was used to incorporate the logic behind the construction process for different repair methods. It was composed of seven repair matrices that defined specific repair methods for each Pontis bridge element. Construction tasks were grouped in construction modules that were modeled as flowcharts. Each construction module flowchart was composed of construction tasks arranged in sequential order and decision points that triggered the selection of construction tasks based on input parameters and knowledge rules. Input parameters were provided by the user, retrieved from the model, or pre-defined in the model by expert knowledge. The construction modules developed involved construction tasks related to the repair of concrete bridge piles damaged by reinforcement corrosion and related concrete deterioration. Data describing the construction tasks considered in the construction module flowcharts were modeled using the entity-relationship model and stored in the sample database described previously. The Parametric Quantity Model combined data generated by the Damage Assessment Model and the Construction Process Model with additional expert knowledge and parameters into equations used to estimate quantities. The author investigated the use of neural networks as a tool to predict actual damage in bridge piles, conducted a preliminary survey to define labor productivity factors, and collected data to define the duration of construction activities related to bridge repair.
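A minimal sketch of the kind of knowledge rule described above, using invented damage fields, thresholds, and task names (the thesis's actual repair matrices and schema are not reproduced here):

    from dataclasses import dataclass

    # Hypothetical inspection record for a concrete pile; fields are illustrative.
    @dataclass
    class PileDamage:
        spall_area_sqft: float    # spalled concrete surface area
        section_loss_pct: float   # reinforcement section loss from corrosion

    def select_repair_tasks(d: PileDamage) -> list[str]:
        """Toy decision points mapping damage measures to construction tasks."""
        tasks = ["surface_preparation"]
        if d.section_loss_pct > 20:
            tasks.append("splice_reinforcement")  # severe corrosion: add steel
        if d.spall_area_sqft > 10:
            tasks.append("form_and_pour_jacket")  # large spalls: concrete jacket
        else:
            tasks.append("patch_repair")          # minor spalls: patching
        return tasks

    print(select_repair_tasks(PileDamage(spall_area_sqft=14.0, section_loss_pct=25.0)))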
96

Generalized score tests for missing covariate data

Jin, Lei 15 May 2009 (has links)
In this dissertation, generalized score tests based on weighted estimating equations are proposed for missing covariate data. Their properties, including the effects of nuisance functions on the forms of the test statistics and the efficiency of the tests, are investigated. Different versions of the test statistic are defined for various parametric and semiparametric settings, and their asymptotic distributions are derived. It is shown that when the models for the nuisance functions are correct, appropriate test statistics can be obtained by plugging estimates of the nuisance functions into the statistics derived for the case where the nuisance functions are known. Furthermore, the optimal test is obtained using a relative efficiency measure. As an application of the proposed tests, a formal model validation procedure is developed for generalized linear models in the presence of missing covariates. The asymptotic distribution of the data-driven methods is provided. A simulation study in both linear and logistic regression illustrates the applicability and the finite-sample performance of the methodology. The methods are also employed to analyze a coronary artery disease diagnostic dataset.
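For orientation, a common form of weighted estimating equation for missing covariates is the inverse-probability-weighted complete-case equation (a standard construction, shown as a sketch rather than the dissertation's exact formulation):

    \sum_{i=1}^{n} \frac{R_i}{\pi(V_i;\hat{\alpha})}\, U_i(\beta) = 0

where R_i indicates a complete case, \pi(\cdot) models the probability of observing the covariates, and U_i(\beta) is the complete-data score; the generalized score test is a quadratic form in these weighted scores evaluated under the null.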
97

Study of Possible Applications of Currently Available Building Information Modeling Tools for the Analysis of Initial Costs and Energy Costs for Performing Life Cycle Cost Analysis

Mukherji, Payal Tapandev December 2010 (has links)
The cost of design, construction, and maintenance of facilities is continually rising. The demand is to construct facilities that have been designed by applying life-cycle costing principles; these principles have already given strong decision-making power to the manufacturing industry. The need to satisfy environmental sustainability requirements, improve the operational effectiveness of buildings, and apply value engineering principles has increased the dependency on life-cycle cost analysis (LCCA). The objective is to obtain economically viable solutions by analyzing alternatives during the design of a building. Though the LCCA process is able to give the desired results, it has problems that have hindered more widespread use of the LCCA concept and method. The literature study highlighted that the problem areas are the lack of frameworks or mechanisms for collecting and storing data, and the complexity of the LCCA exercise, which involves the analysis of thousands of building elements and a number of construction-type options and maintenance activities for each building element at detailed design stages. Building Information Modeling (BIM) has repeatedly been able to answer the questions raised by the AEC industry. The aim of this study is to identify the areas where BIM can be effectively applied to the LCCA process and become part of the workflow. In this study, four LCCA case studies are first read and evaluated to understand how life-cycle costing principles have been applied: the purpose, the types of alternatives examined, the process of analysis, the type of software used, and the results. An attempt is made to understand the workflow of the LCCA process. There is confidence that BIM is capable of handling changes during the design, construction, and maintenance phases of a project. Since applying changes to any kind of building information forms the core of LCC analysis, it has become necessary to use computer building models for examining these changes. The building modeling software packages are enumerated. The case studies highlighted that the alternatives are evaluated primarily to achieve energy-efficient solutions for buildings. Applying these solutions involves high initial costs; the return on investment is the means by which they become viable to the owners of the facilities. This is where LCCA has been applied. Two of the important cost elements of LCC analysis are the initial costs and the operating costs of the building. The collaboration of these modeling tools with estimating software, where the initial costs of the building can be generated, is studied. The functions of the quantity take-off tools and estimating tools, along with the interoperability between them, are analyzed. The operating costs are generated from software that focuses on sustainability. The tools currently used for performing the calculations of life-cycle cost analysis are also observed. The objective is to identify whether the currently available BIM tools and software can help in obtaining LCCA results and can offset the hindrances of the process. Therefore, the software is studied from the point of view of ease of handling data and the type of data that can be generated. Possible BIM workflows are suggested depending on the functions of the software and the relationships between them.
The study aims to take a snapshot of the current tools available that can aid the LCCA process. The research is of significance to the construction industry as a precursor to the application of Building Information Modeling to the LCCA process, showing that BIM has the capacity to overcome the obstacles to life-cycle costing. This opens a window to the possibility of applying BIM to LCCA and furthering this study.
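For reference, the two cost elements named above combine in the standard present-value form of a life-cycle cost (a textbook formulation, not taken from this study):

    \mathrm{LCC} = C_0 + \sum_{t=1}^{T} \frac{C_{\mathrm{op},t}}{(1+r)^t}

where C_0 is the initial cost, C_{op,t} the operating (e.g., energy) cost in year t, r the discount rate, and T the analysis period; BIM-based quantity take-offs would feed C_0 while energy simulation tools would feed C_{op,t}.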
98

Study on Estimation of Intelligent Residual Capacity of Li-ion Batteries

Lai, Shih-Jung 19 October 2004 (has links)
This research proposes a method for estimating the residual capacity of Li-ion batteries. The charging and discharging characteristics of Li-ion batteries are investigated and analyzed with a battery test system. The measurement of the initial capacity is based on an improved open-circuit voltage measurement, which compensates for the effects of battery aging and self-discharging. The measurement of the used capacity is based on an improved coulomb-counting measurement, which compensates for the effects of output current and environmental temperature. The designed system provides various functions for battery charging and discharging, battery voltage measurement and recording, and battery capacity estimation and calculation; the log files can be used for further analysis of battery characteristics.
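A minimal sketch of the coulomb-counting idea with simple correction factors (the compensation model and all coefficients below are invented for illustration; the thesis's compensation scheme is not reproduced):

    # Coulomb counting: integrate (compensated) current over time to track used
    # capacity, then subtract it from the initial-capacity estimate.
    def residual_capacity(initial_mah, samples, dt_s=1.0):
        """samples: (current_ma, temp_c) readings taken every dt_s seconds."""
        used_mah = 0.0
        for current_ma, temp_c in samples:
            eff = 1.0               # compensation factor (illustrative values)
            if current_ma > 1000:   # heavy load: effective capacity drops
                eff = 1.05
            if temp_c < 10:         # low temperature: less deliverable charge
                eff *= 1.10
            used_mah += eff * current_ma * dt_s / 3600.0   # mA*s -> mAh
        return max(initial_mah - used_mah, 0.0)

    # One hour at 500 mA and 25 C consumes ~500 mAh of a 2000 mAh pack.
    print(residual_capacity(2000.0, [(500.0, 25.0)] * 3600))   # -> 1500.0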
99

Research and Development of a Smart Li-Battery Management System

Hung, Yu-Huan 29 June 2005 (has links)
This research proposes a smart battery management system for Li-ion batteries. The system not only monitors all kinds of battery parameters but also lets users modify them. In addition, to estimate the residual capacity of Li-ion batteries, an automatic measurement platform is set up to record Li-battery data under different charge-discharge conditions and to analyze the characteristics. For monitoring the used battery capacity, a modified coulomb-counting method is proposed that can accurately estimate the residual capacity while accounting for the effects of output current and environmental temperature. In addition to estimating the residual capacity accurately, the Smart Battery System can record Li-battery information over a long period of time, and the log files can be used for further analysis of battery characteristics.
100

Study of Ni-MH Battery Capacity Management

Chang, Chiung-jen 05 July 2005 (has links)
The topic of this study is the development of a battery capacity management system. The main purpose is to monitor the state of the battery during charging and discharging. From this, the user can know the battery status and avoid losing data to a sudden system power-down caused by a spent battery. Different battery states were collected under different conditions by a battery measurement system, after which the characteristics were analyzed. A fast-charge and residual-capacity estimation system was developed according to the battery characteristics. The fast-charge system is a technique that emphasizes not only charging speed but also safety. In this study, a fast-charge termination method was adopted to end the fast-charging state of the battery, and the initial state was estimated before charging. Furthermore, the battery was charged with the optimum method according to its initial state, which can recover the capacity of the battery within a short period without side effects from repeated usage. The residual-capacity estimation system works by first estimating the initial capacity of the batteries and then recording the battery current continuously using the coulomb-counting method, compensating for the effects of battery aging, environmental temperature, self-discharging, and output current.
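The abstract does not name its termination criterion; a common choice for Ni-MH is negative delta-V, ending fast charge when the cell voltage sags slightly below its peak. A hedged sketch of that idea, with invented thresholds:

    # Negative delta-V: a common Ni-MH fast-charge end criterion that stops
    # charging when the cell voltage falls a few millivolts below its peak.
    def fast_charge_done(voltages_mv, drop_mv=5.0):
        """voltages_mv: cell voltages sampled periodically during fast charge."""
        if len(voltages_mv) < 2:
            return False
        return max(voltages_mv) - voltages_mv[-1] >= drop_mv

    trace = [1400, 1420, 1440, 1450, 1452, 1449, 1446]   # peak, then sag
    print(fast_charge_done(trace))   # True: -dV detected, end fast charge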
