About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

Rarities of genotype profiles in a normal Swedish population

Hedell, Ronny, January 2010
Investigation of stains from crime scenes is commonly used in the search for criminals. At the National Laboratory of Forensic Science, where these stains are examined, a number of questions of theoretical and practical interest regarding the databases of DNA profiles and the strength of DNA evidence against a suspect in a trial have not been fully investigated. The first part of this thesis deals with how a sample of DNA profiles from a population is used to estimate the strength of DNA evidence in a trial, taking population genetic factors into account. We then consider how to combine hypotheses regarding the relationship between a suspect and other possible donors of the stain from the crime scene by two applications of Bayes’ theorem. After that we assess the DNA profiles that minimize the strength of DNA evidence against a suspect, and investigate how the strength is affected by sampling error using the bootstrap method and a Bayesian method. In the last part of the thesis we examine discrepancies between different databases of DNA profiles by both descriptive and inferential statistics, including likelihood ratio tests and Bayes factor tests. Little evidence of major differences is found.
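As a rough illustration of the evidential reasoning described above, the sketch below computes a single-locus likelihood ratio from a Balding-Nichols-style match probability (a standard population-genetic correction) and combines it with prior odds via Bayes' theorem. The allele frequencies, theta value, and prior odds are invented for illustration and are not taken from the thesis.

```python
# Illustrative sketch: DNA likelihood ratio combined with prior odds via
# Bayes' theorem. All numbers here are made up for illustration.

def genotype_probability(p, q, theta=0.01):
    """Balding-Nichols match probability for a heterozygous genotype
    with allele frequencies p and q, using coancestry coefficient theta."""
    return (2 * (theta + (1 - theta) * p) * (theta + (1 - theta) * q)
            / ((1 + theta) * (1 + 2 * theta)))

# Likelihood ratio for one locus: P(evidence | suspect is donor) = 1,
# P(evidence | unrelated person is donor) = genotype probability.
lr = 1.0 / genotype_probability(p=0.1, q=0.2, theta=0.01)

# Bayes' theorem in odds form: posterior odds = LR * prior odds.
prior_odds = 1 / 1000          # hypothetical prior odds against the suspect
posterior_odds = lr * prior_odds
print(f"LR = {lr:.1f}, posterior odds = {posterior_odds:.3f}")
```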
162

Single-Zone Cylinder Pressure Modeling and Estimation for Heat Release Analysis of SI Engines

Klein, Markus, January 2007
Cylinder pressure modeling and heat release analysis are today important and standard tools for engineers and researchers when developing and tuning new engines. Being able to accurately model and extract information from the cylinder pressure is important for the interpretation and validity of the result. The first part of the thesis treats single-zone cylinder pressure modeling, where the specific heat ratio model constitutes a key part. This model component is therefore investigated more thoroughly. For the purpose of reference, the specific heat ratio is calculated for burned and unburned gases, assuming that the unburned mixture is frozen and that the burned mixture is at chemical equilibrium. Use of the reference model in heat release analysis is too time-consuming, and therefore a set of simpler models, both existing and newly developed, is compared to the reference model. A two-zone mean temperature model and the Vibe function are used to parameterize the mass fraction burned. The mass fraction burned is used to interpolate the specific heats for the unburned and burned mixture and to form the specific heat ratio, which renders a cylinder pressure modeling error of the same order as the measurement noise, and fifteen times smaller than that of the model originally suggested in Gatowski et al. (1984). The computational time increases by 40 % compared to the original setting, but is reduced by a factor of 70 compared to using precomputed tables from the full equilibrium program. The specific heats for the unburned mixture are captured within 0.2 % by linear functions, and the specific heats for the burned mixture are captured within 1 % by higher-order polynomials, for the major operating range of a spark-ignited (SI) engine. In the second part, four methods for compression ratio estimation based on cylinder pressure traces are developed and evaluated for both simulated and experimental cycles. Three methods rely upon a model of polytropic compression for the cylinder pressure. It is shown that they give a good estimate of the compression ratio at low compression ratios, although the estimates are biased. A method based on a variable projection algorithm with a logarithmic norm of the cylinder pressure yields the smallest confidence intervals and shortest computational time among these three methods. This method is recommended when computational time is an important issue. The polytropic pressure model lacks information about heat transfer, and therefore the estimation bias increases with the compression ratio. The fourth method includes heat transfer, crevice effects, and a commonly used heat release model for firing cycles. This method estimates the compression ratio more accurately in terms of bias and variance. The method is more computationally demanding and is thus recommended when estimation accuracy is the most important property. In order to estimate the compression ratio as accurately as possible, motored cycles with as high an initial pressure as possible should be used. The objective in part 3 is to develop an estimation tool for heat release analysis that is accurate, systematic and efficient. Two methods that incorporate prior knowledge of the parameter nominal values and uncertainties in a systematic manner are presented and evaluated. Method 1 is based on using a singular value decomposition of the estimated Hessian to reduce the number of estimated parameters one by one; the suggested number of parameters to use is then found as the one minimizing the Akaike final prediction error. Method 2 uses a regularization technique to include the prior knowledge in the criterion function. Method 2 gives more accurate estimates than Method 1. For Method 2, prior knowledge with individually set parameter uncertainties yields more accurate and robust estimates. Once a choice of parameter uncertainty has been made, no user interaction is needed. Method 2 is then formulated in three different versions, which differ in how they determine how strong the regularization should be. The quickest version is based on ad hoc tuning and should be used when computational time is important. Another version is more accurate and flexible to changing operating conditions, but is more computationally demanding.
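A minimal sketch of the interpolation idea described in the abstract: the Vibe function gives the mass fraction burned, which is used to interpolate the specific heats of the unburned and burned zones and form the specific heat ratio. All constants (the Vibe parameters and the linear specific-heat fits) are placeholder values, not the thesis's calibrated models.

```python
import numpy as np

def vibe(theta, theta0=-20.0, delta_theta=50.0, a=6.9, m=2.0):
    """Mass fraction burned as a function of crank angle [deg CA]."""
    x = np.clip((theta - theta0) / delta_theta, 0.0, 1.0)
    return 1.0 - np.exp(-a * x ** (m + 1))

def gamma(theta, T_u, T_b):
    """Specific heat ratio via mass-fraction-burned interpolation."""
    xb = vibe(theta)
    # Hypothetical linear fits c_p(T), c_v(T) [J/(kg K)] for each zone.
    cp_u, cv_u = 1050 + 0.20 * T_u, 760 + 0.20 * T_u
    cp_b, cv_b = 1150 + 0.12 * T_b, 860 + 0.12 * T_b
    cp = (1 - xb) * cp_u + xb * cp_b   # interpolate the specific heats...
    cv = (1 - xb) * cv_u + xb * cv_b
    return cp / cv                     # ...then form the ratio

print(gamma(theta=0.0, T_u=700.0, T_b=2400.0))
```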
163

Reinforcing Efficacy of Amphetamine in Adolescent and Adult Male Rats

Payne, Lauren Chantel, 16 April 2008
Rationale: Amphetamine abuse by adolescents predicts long-term drug dependence. Heightened vulnerability to drug abuse could be due to higher sensitivity to a drug’s reinforcing effects. Rodents are used to study age-related sensitivities to drugs. Objective: We compared intravenous amphetamine self-administration between adolescent and adult male rats on an operant schedule of reinforcement that measures the reinforcing efficacy of a drug. Methods: After surgery, adolescent and adult rats acquired lever-pressing behavior reinforced by amphetamine infusions. Results: Both age groups exhibited more infusions per session as dose increased. However, neither the number of infusions per session nor total amphetamine intake differed across age groups. Conclusion: Although the rapid-transition procedure is a reliable test of the reinforcing properties of stimulants, these results suggest that amphetamine is an equally efficacious reinforcer in both age groups. With regard to humans, these results suggest that other factors, such as social influences, explain the higher rates of drug intake by adolescents compared with adults.
164

Estimation and Goodness of Fit for Multivariate Survival Models Based on Copulas

Yilmaz, Yildiz Elif, 11 August 2009
We provide ways to test the fit of a parametric copula family for bivariate censored data with or without covariates. The proposed copula family is tested by embedding it in an expanded parametric family of copulas. When parameters in the proposed and the expanded copula models are estimated by maximum likelihood, a likelihood ratio test can be used. However, when they are estimated by two-stage pseudolikelihood estimation, the corresponding test is a pseudolikelihood ratio test. The two-stage procedures offer less computation, which is especially attractive when the marginal lifetime distributions are specified nonparametrically or semiparametrically. It is shown that the likelihood ratio test is consistent even when the expanded model is misspecified. Power comparisons of the likelihood ratio and the pseudolikelihood ratio tests with some other goodness-of-fit tests are performed both when the expanded family is correct and when it is misspecified. They indicate that model expansion provides a convenient, powerful and robust approach. We introduce a semiparametric maximum likelihood estimation method in which the copula parameter is estimated without assumptions on the marginal distributions. This method and the two-stage semiparametric estimation method suggested by Shih and Louis (1995) are generalized to regression models with Cox proportional hazards margins. The two-stage semiparametric estimator of the copula parameter is found to be about as good as the semiparametric maximum likelihood estimator. Semiparametric likelihood ratio and pseudolikelihood ratio tests are considered to provide goodness of fit tests for a copula model without making parametric assumptions for the marginal distributions. Both when the expanded family is correct and when it is misspecified, the semiparametric pseudolikelihood ratio test is almost as powerful as the parametric likelihood ratio and pseudolikelihood ratio tests while achieving robustness to the form of the marginal distributions. The methods are illustrated on applications in medicine and insurance. Sequentially observed survival times are of interest in many studies but there are difficulties in modeling and analyzing such data. First, when the duration of followup is limited and the times for a given individual are not independent, the problem of induced dependent censoring arises for the second and subsequent survival times. Non-identifiability of the marginal survival distributions for second and later times is another issue, since they are observable only if preceding survival times for an individual are uncensored. In addition, in some studies, a significant proportion of individuals may never have the first event. Fully parametric models can deal with these features, but lack of robustness is a concern, and methods of assessing fit are lacking. We introduce an approach to address these issues. We model the joint distribution of the successive survival times by using copula functions, and provide semiparametric estimation procedures in which copula parameters are estimated without parametric assumptions on the marginal distributions. The performance of semiparametric estimation methods is compared with some other estimation methods in simulation studies and shown to be good. The methodology is applied to a motivating example involving relapse and survival following colon cancer treatment.
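A sketch of two-stage pseudolikelihood estimation in the spirit described above, simplified to uncensored data: the margins are handled nonparametrically via ranks, then the copula parameter is estimated by maximizing the copula log-likelihood. The Clayton family stands in here for the proposed copula; the thesis's treatment of censoring and Cox proportional hazards margins is omitted.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rankdata

def clayton_logpdf(u, v, theta):
    """Log-density of the Clayton copula at pseudo-observations (u, v)."""
    return (np.log(1 + theta) - (theta + 1) * (np.log(u) + np.log(v))
            - (2 + 1 / theta) * np.log(u ** -theta + v ** -theta - 1))

def two_stage_estimate(x, y):
    n = len(x)
    # Stage 1: pseudo-observations from the empirical margins (ranks).
    u = rankdata(x) / (n + 1)
    v = rankdata(y) / (n + 1)
    # Stage 2: maximize the copula log-likelihood in theta alone.
    res = minimize_scalar(lambda t: -clayton_logpdf(u, v, t).sum(),
                          bounds=(0.01, 20.0), method="bounded")
    return res.x

rng = np.random.default_rng(0)
x, y = rng.gamma(2, size=200), rng.gamma(2, size=200)  # toy lifetimes
print(two_stage_estimate(x, y))
```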
166

Generalized Sharpe Ratio under the Levy Processes

Feng, Liang-Hsueh, 22 June 2010
none
167

The study of earnings management via manipulation of discretionary loan loss provisions by banks in Taiwan.

Shen, Wen-hua, 26 June 2004
For the evaluation of banks' performance, the non-performing loans ratio and the capital adequacy ratio are the two major indicators other than earnings performance. Among the various tools for earnings manipulation, loan loss provisions may be the only one that can affect a bank's earnings figures, non-performing loans ratio and capital adequacy ratio simultaneously. To satisfy the need to increase earnings and the capital adequacy ratio and to decrease the non-performing loans ratio, banks may be motivated to manage earnings. The purpose of the study is thus to investigate whether there is a relationship between earnings management through discretionary loan loss provisions and earnings before loan loss provisions, the non-performing loans ratio, the capital adequacy ratio, asset size, the loan growth rate, and loans uncollected. In addition, the study divides the sample banks into the following categories: (1) commercial banks versus others, (2) new banks versus old banks (based on when the bank was founded), and (3) state-run versus non-state-run (based on whether the president of the bank is appointed by the government). The study also examines whether the earnings management conducted by a bank's management differs across these categories. Based on the empirical results from the Taiwan Economic Journal (TEJ) database, the study found: (1) the three variables of earnings before loan loss provisions, asset size, and loans uncollected are significantly related to earnings management through discretionary loan loss provisions, and the higher these variables, the higher the degree of earnings management; (2) the non-performing loans ratio, capital adequacy ratio, and loan growth rate are not found to be significantly related to earnings management through discretionary loan loss provisions; (3) state-run banks conducted more earnings management than non-state-run banks; and (4) no significant result was found in the analyses for the other categories.
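A sketch of the kind of regression the study describes, with random placeholder data standing in for the TEJ bank data; the regressor list mirrors the one named above, and none of the coefficients here mean anything.

```python
import numpy as np
import statsmodels.api as sm

# Placeholder data; the study uses actual bank data from the TEJ database.
rng = np.random.default_rng(1)
n = 120
X = np.column_stack([
    rng.normal(size=n),   # earnings before loan loss provisions
    rng.normal(size=n),   # non-performing loans ratio
    rng.normal(size=n),   # capital adequacy ratio
    rng.normal(size=n),   # asset size (log)
    rng.normal(size=n),   # loan growth rate
    rng.normal(size=n),   # loans uncollected
])
y = rng.normal(size=n)    # discretionary loan loss provisions
model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())
```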
168

The Application Of VaR In Taiwan Property And Casualty Insurance Industry And Influence Factor Of Underwriting Risk Research

Liu, Cheng-chung, 02 July 2008
In recent years, Value at Risk (VaR) has become an important risk management tool in the banking industry. The property and casualty insurance industry, by contrast, has seen little related research, especially because the data needed to study underwriting risk can be difficult to collect; domestic studies are few. In this paper we use the TEJ databank to obtain the statistical data needed for the research; the sample comprises nine property insurance companies. Using the public information in the TEJ databank, we obtain yearly and quarterly data, apply the Fuzzy Distance Weighting Method to convert the quarterly data into monthly data, calculate yearly, quarterly and monthly loss ratios, and then use the idea of VaR to compare the loss ratio-at-risk across the yearly, quarterly and monthly series. This study also discusses the factors influencing underwriting risk in the domestic property and casualty insurance industry. The research finds that yearly data underestimate the actual loss ratio-at-risk. In addition, regression analysis shows that the underwriting loss ratio-at-risk is influenced by free cash flow, leverage ratio, and firm size. The results could serve as a reference when the property and casualty insurance industry or the supervisory authority sets risk management rules. Keywords: Value at risk, Loss ratio, Loss ratio-at-risk, Underwriting risk
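A minimal numerical sketch of the loss ratio-at-risk comparison: an upper quantile of the empirical loss-ratio distribution is read off, VaR-style, at monthly and yearly aggregation. The numbers are synthetic and only illustrate why yearly aggregation understates the risk visible at monthly resolution.

```python
import numpy as np

rng = np.random.default_rng(2)
monthly = rng.lognormal(mean=-0.4, sigma=0.35, size=10 * 12)  # toy loss ratios
yearly = monthly.reshape(10, 12).mean(axis=1)                 # annual averages

for label, lr in [("monthly", monthly), ("yearly", yearly)]:
    print(label, "95% loss ratio-at-risk:", np.percentile(lr, 95).round(3))
# Averaging into yearly figures smooths out extremes, so the yearly
# quantile sits below the monthly one -- the underestimation reported above.
```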
169

The Mathematical Modelling for Simulating the Shift of Limiting Nutrient in the Estuary

Lui, Hon-kit, 05 August 2009
The linear relationship between a conservative element and salinity during the mixing of water masses is widely used to study biogeochemistry in estuaries and the oceans. Even though nutrient ratios are widely used to determine the limiting nutrient in aquatic environments, the rules governing how nutrient ratios change through the mixing of freshwater and seawater remain unstudied. This study provides general rules for how nutrient ratios change via mixing. A simple mixing model is developed, with the aim of illustrating that a nutrient ratio is a nonlinear function of salinity; thus, a shift in the limiting nutrient over the salinity gradient can simply be a result of river water and seawater mixing, albeit complicated by biological consumption or remineralization. This model explains the natural phenomenon in which rivers containing relatively high ratios of dissolved inorganic nitrogen (DIN) to soluble reactive phosphorus (SRP) show decreasing ratios as salinity increases when the seawater contains higher SRP:DIN ratios. Although additional sources of P have been implicated as the cause of such changes, this change can be a result of riverine water and seawater mixing. Four mixing rules are presented here to explain the factors governing the change in nutrient ratios versus salinity, thus answering why in some cases variations in nutrient loading, and in other cases mixing, trigger changes to the seasonal limitation status in some estuaries. A shift in nutrient ratios can be explained by the change in nutrient inventories via mixing. After P-limited riverine water shifts to N limitation by mixing with N-limited seawater, new production in the estuary generally becomes limited by the amount of N input from the riverine water and the seawater. The result may help to explain the current consensus that riverine N loadings, not P loadings, lead to eutrophication in estuaries influenced by P-limited riverine waters. Further, new production generated by N-limited riverine input and N-limited seawater input depends mainly on the amount of N input from the riverine water and the seawater.
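A minimal sketch of the mixing idea: each nutrient mixes conservatively (linearly in salinity) between river and seawater endmembers, so their ratio is a nonlinear function of salinity. The endmember concentrations below are invented so that the river is P-limited (DIN:SRP above the Redfield value of 16) and the seawater is N-limited.

```python
import numpy as np

def conservative_mix(c_river, c_sea, salinity, s_sea=35.0):
    """Conservative (linear-in-salinity) mixing of two endmembers."""
    f = salinity / s_sea                      # seawater fraction
    return (1 - f) * c_river + f * c_sea

salinity = np.linspace(0, 35, 8)
din = conservative_mix(80.0, 2.0, salinity)   # DIN, umol/L (illustrative)
srp = conservative_mix(1.0, 0.5, salinity)    # SRP, umol/L (illustrative)
for s, r in zip(salinity, din / srp):
    print(f"salinity {s:5.1f}  DIN:SRP {r:6.1f}")
# Each nutrient is linear in salinity, but the ratio is not: it crosses
# the Redfield value partway through the estuary -- a shift in limiting
# nutrient produced by mixing alone.
```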
170

Design of Structures and Clutching Sequences of Combinational Epicyclic-Type Automatic Transmissions for Automobiles

Lee, Su-I, 10 September 2009
An epicyclic-type automatic transmission is a device connected between the engine and the driving wheels; its main purpose is to keep the engine revolution speed within a desired working range while the automobile accelerates or decelerates. The epicyclic-type automatic transmissions in production are mostly Ravigneaux-type epicyclic mechanisms, and in recent years the development of automatic transmissions has turned toward adding a single epicyclic gear mechanism ahead of the input to provide various input revolution speeds and thereby increase the total number of speed ratios; transmissions of this type are called combinational epicyclic-type automatic transmissions. Because the relevant design methodology is incomplete, a systematic methodology for designing the mechanism and clutching sequence of combinational epicyclic-type automobile transmissions is described. First, the fundamental principles of the combination and operation of automatic transmissions are analyzed to establish the design requirements. Second, a procedure for the structure synthesis of combinational epicyclic-type automobile transmissions is presented, to which the planar graph method is applied. Third, based on the speed-ratio relationship of each clutching sequence, a gearing-sequence procedure is introduced. Finally, on the basis of the analytic method, the numbers of gear teeth are determined. This work obtains five types of combinational epicyclic-type automatic transmissions that can achieve six speeds.
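A sketch of the speed-ratio analysis that underlies such clutching-sequence design, using the Willis equation for a single epicyclic gear set; the tooth numbers, input assignment and clutching state are illustrative, not taken from the thesis.

```python
def carrier_speed(w_sun, w_ring, z_sun, z_ring):
    """Willis equation (w_sun - w_c) / (w_ring - w_c) = -z_ring / z_sun,
    solved for the carrier speed w_c."""
    return (z_sun * w_sun + z_ring * w_ring) / (z_sun + z_ring)

z_sun, z_ring = 30, 90
w_engine = 2000.0                          # rpm, input on the sun gear

# Clutching state: ring grounded (w_ring = 0), output on the carrier.
w_out = carrier_speed(w_engine, 0.0, z_sun, z_ring)
print("speed ratio (input/output):", w_engine / w_out)   # 4.0 here
```

Enumerating the feasible clutching states of each gear set this way yields the speed ratios available from a candidate structure, which is the basis for selecting clutching sequences that reach six distinct speeds.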
