481

Power Estimation of High Speed Bit-Parallel Adders / Effektestimering av snabba bitparallella adderare

Åslund, Anders January 2004 (has links)
Fast addition is essential in many DSP algorithms. Various structures have been introduced to speed up the time-critical carry propagation. For high-throughput applications, however, it may be necessary to introduce pipelining. In this report the power consumption of four different adder structures, with varying word lengths and different numbers of pipeline cuts, is compared. Out of the four adder structures compared, the Kogge-Stone parallel prefix adder proves to be the best choice most of the time. The Brent-Kung parallel prefix adder is also a good choice, but its maximal throughput does not reach that of the Kogge-Stone parallel prefix adder.
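
As an illustration of the carry network the abstract compares, here is a minimal Python sketch of Kogge-Stone parallel prefix addition: generate/propagate bits are combined in log2(width) prefix stages to produce all carries at once. The 8-bit width and variable names are illustrative assumptions, not details taken from the thesis.

```python
def kogge_stone_add(a: int, b: int, width: int = 8) -> int:
    """Add two width-bit integers using a Kogge-Stone parallel prefix carry network."""
    ai = [(a >> i) & 1 for i in range(width)]
    bi = [(b >> i) & 1 for i in range(width)]
    g = [x & y for x, y in zip(ai, bi)]   # generate bits
    p = [x ^ y for x, y in zip(ai, bi)]   # propagate bits
    half_sum = p[:]                       # sum bits before carries are applied

    # log2(width) prefix stages: combine (g, p) pairs that lie 2**d positions apart
    dist = 1
    while dist < width:
        g = [g[i] | (p[i] & g[i - dist]) if i >= dist else g[i] for i in range(width)]
        p = [p[i] & p[i - dist] if i >= dist else p[i] for i in range(width)]
        dist *= 2

    carry = [0] + g[:-1]   # carry into bit i is the group generate of bits 0..i-1
    return sum((half_sum[i] ^ carry[i]) << i for i in range(width))

print(kogge_stone_add(100, 55))   # 155
```
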
482

Decision Support System for Value-Based Evaluation and Conditional Approval of Construction Submittals

Sherbini, Khaled Ali 03 May 2010 (has links)
To ensure compliance with specifications during construction, a formal review process, called the submittals process, is typically implemented, whereby the contractor is required to submit proposals for materials, equipment, and processes for the owner’s approval within a short period of time. This procedure can be a difficult task because of a lack of time, a lack of information in the submittal package, difficulty in retrieving related data, and a lack of defined criteria for evaluation. This research introduces the development of a framework for submittal evaluation that considers the operational impact of any minor variation from the required specifications. The evaluation mechanism uses the Multi-Attribute Utility Theory (MAUT) approach, which is adaptable to the varying requirements of organizations. Through the process of analyzing the current submittal mechanism, a list of key submittals is defined and the top one (chiller) is selected as the focus of the research. The governing criteria (evaluation parameters) are defined for the selected submittal item and categorized into two categories: inflexible and flexible. The inflexible parameters are dealt with using checklists with predefined thresholds that must be met without tolerance. Flexible parameters are analyzed using utility functions that represent decision-maker preferences and tolerance levels. Accordingly, the evaluation process considers multiple parameters to determine an overall utility for the submittal and the value-based condition for accepting it, incorporating LEED requirements. The investigation is based on data provided by three main organizations, as well as intensive meetings and interviews with experts from each participating organization. The outcome of this investigation is the development of evaluation criteria and checklist parameters that are used as the basis of a value-based evaluation, which is the core of the developed decision support system. In summary, it has been demonstrated that a decision support system for the evaluation of construction submittals can be constructed and that it provides numerous benefits: an expedited decision process, an audit trail for decisions, more consistent and objective decisions, risk identification, internal alignment of organizational values, and improved lifecycle asset performance. The benefits were validated by demonstration and by experts' evaluations.
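
A rough sketch of the MAUT-style aggregation described above: inflexible parameters act as pass/fail gates, while flexible parameters are mapped through single-attribute utility functions and combined with weights into an overall utility that drives the approval decision. The attribute names, weights, utility shapes, and decision thresholds below are invented for illustration and are not the thesis's actual evaluation criteria.

```python
def evaluate_submittal(inflexible_ok, flexible_values, utility_fns, weights):
    """Value-based evaluation in the spirit of MAUT.

    inflexible_ok   -- dict of pass/fail checklist results (all must be True)
    flexible_values -- dict of measured values for flexible parameters
    utility_fns     -- dict mapping each flexible parameter to a utility function on [0, 1]
    weights         -- dict of importance weights (assumed to sum to 1)
    """
    if not all(inflexible_ok.values()):
        return 0.0, "reject: inflexible requirement not met"
    overall = sum(weights[k] * utility_fns[k](flexible_values[k]) for k in flexible_values)
    decision = "approve" if overall >= 0.8 else "conditional approval" if overall >= 0.6 else "reject"
    return overall, decision

# Hypothetical chiller submittal (illustrative numbers only)
score, decision = evaluate_submittal(
    inflexible_ok={"refrigerant_type": True, "voltage_rating": True},
    flexible_values={"cop": 6.1, "noise_dba": 78.0},
    utility_fns={
        "cop": lambda x: min(max((x - 5.0) / 1.5, 0.0), 1.0),         # higher efficiency is better
        "noise_dba": lambda x: min(max((85.0 - x) / 10.0, 0.0), 1.0),  # quieter is better
    },
    weights={"cop": 0.6, "noise_dba": 0.4},
)
print(round(score, 3), decision)
```
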
483

Assessing Binary Measurement Systems

Danila, Oana Mihaela January 2012 (has links)
Binary measurement systems (BMS) are widely used in both the manufacturing industry and medicine. In industry, a BMS is often used to measure various characteristics of parts and then classify them as pass or fail, according to some quality standards. Good measurement systems are essential both for problem solving (i.e., reducing the rate of defectives) and for protecting customers from receiving defective products. As a result, it is desirable to assess the performance of the BMS as well as to separate the effects of the measurement system and the production process on the observed classifications. In medicine, BMSs are known as diagnostic or screening tests, and are used to detect a target condition in subjects, thus classifying them as positive or negative. Assessing the performance of a medical test is essential in quantifying the costs due to misclassification of patients, and in the future prevention of these errors. In both industry and medicine, the most commonly used characteristics to quantify the performance of a BMS are the two misclassification rates, defined as the chance of passing a nonconforming (non-diseased) unit, called the consumer's risk (false positive), and the chance of failing a conforming (diseased) unit, called the producer's risk (false negative). In most assessment studies, it is also of interest to estimate the conforming (prevalence) rate, i.e., the probability that a randomly selected unit is conforming (diseased). There are two main approaches to assessing the performance of a BMS. Both approaches involve measuring a number of units one or more times with the BMS. The first, called the "gold standard" approach, requires the use of a gold-standard measurement system that can determine the state of units with no classification errors. When a gold standard does not exist, or is too expensive or time-consuming, another option is to repeatedly measure units with the BMS, and then use a latent class approach to estimate the parameters of interest. In industry, for both approaches, the standard sampling plan involves randomly selecting parts from the population of manufactured parts. In this thesis, we focus on a specific context commonly found in the manufacturing industry. First, the BMS under study is nondestructive. Second, the BMS is used for 100% inspection or any kind of systematic inspection of the production yield. In this context, we are likely to have available a large number of previously passed and failed parts. Furthermore, the inspection system typically tracks the number of parts passed and failed; that is, we often have baseline data about the current pass rate, separate from the assessment study. Finally, we assume that during the time of the evaluation, the process is under statistical control and the BMS is stable. Our main goal is to investigate the effect of using sampling plans that involve random selection of parts from the available populations of previously passed and failed parts, i.e., conditional selection, on the estimation procedure and the main characteristics of the estimators. We also demonstrate the value of combining the additional information provided by the baseline data with the data collected in the assessment study in improving the overall estimation procedure. Finally, we examine how the availability of baseline data and the use of a conditional selection sampling plan affect recommendations on the design of the assessment study.
In Chapter 2, we give a summary of the existing estimation methods and sampling plans for a BMS assessment study in both industrial and medical settings that are relevant to our context. In Chapters 3 and 4, we investigate the assessment of a BMS in the case where we assume that the misclassification rates are common for all conforming/nonconforming parts and that repeated measurements on the same part are independent, conditional on the true state of the part, i.e., conditional independence. We call models using these assumptions fixed-effects models. In Chapter 3, we look at the case where a gold standard is available, whereas in Chapter 4, we investigate the "no gold standard" case. In both cases, we show that using a conditional selection plan, along with the baseline information, substantially improves the accuracy and precision of the estimators, compared to the standard sampling plan. In Chapters 5 and 6, we investigate the case where we allow for possible variation in the misclassification rates within the conforming and nonconforming parts, by proposing new random-effects models. These models relax the fixed-effects model assumptions of constant misclassification rates and conditional independence. As in the previous chapters, we focus on investigating the effect of using conditional selection and baseline information on the properties of the estimators, and give study design recommendations based on our findings. In Chapter 7, we discuss other potential applications of the conditional selection plan, where the study data are augmented with baseline information on the pass rate, especially in the context where multiple BMSs are under investigation.
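
For the gold-standard setting, the basic point estimates can be sketched directly: with study counts cross-classified by BMS result and true state, the consumer's risk, producer's risk, and conforming rate are simple proportions. The counts below are hypothetical, and this naive estimator ignores the conditional-selection and baseline-data refinements that the thesis develops.

```python
def bms_gold_standard_estimates(n_pass_conforming, n_pass_nonconforming,
                                n_fail_conforming, n_fail_nonconforming):
    """Naive estimates from a gold-standard assessment study with simple random sampling."""
    n_conf = n_pass_conforming + n_fail_conforming
    n_nonconf = n_pass_nonconforming + n_fail_nonconforming
    consumers_risk = n_pass_nonconforming / n_nonconf   # chance of passing a nonconforming part
    producers_risk = n_fail_conforming / n_conf         # chance of failing a conforming part
    conforming_rate = n_conf / (n_conf + n_nonconf)
    return consumers_risk, producers_risk, conforming_rate

# Hypothetical study: 1000 parts measured once by the BMS and once by the gold standard
print(bms_gold_standard_estimates(930, 4, 16, 50))
```
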
484

Computation of context as a cognitive tool

Sanscartier, Manon Johanne 09 November 2006 (has links)
In the field of cognitive science, as well as the area of Artificial Intelligence (AI), the role of context has been investigated in many forms and for many purposes. It is clear in both areas that consideration of contextual information is important. However, the significance of context has not been emphasized in the Bayesian networks literature. We suggest that consideration of context is necessary for acquiring knowledge about a situation and for refining current representational models that are potentially erroneous due to hidden independencies in the data.

In this thesis, we make several contributions towards the automation of contextual consideration by discovering useful contexts from probability distributions. We show how context-specific independencies in Bayesian networks and discovery algorithms, traditionally used for efficient probabilistic inference, can contribute to the identification of contexts, and in turn can provide insight into otherwise puzzling situations. Also, consideration of context can help clarify otherwise counterintuitive puzzles, such as those that result in instances of Simpson's paradox. In the social sciences, the branch of attribution theory is context-sensitive. We suggest a method to distinguish between dispositional causes and situational factors by means of contextual models. Finally, we address the work of Cheng and Novick dealing with causal attribution by human adults. Their probabilistic contrast model makes use of contextual information, called focal sets, that must be determined by a human expert. We suggest a method for discovering complete focal sets from probability distributions, without the human expert.
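
To make the notion of a discovered context concrete, consider a conditional probability table P(Y | X, C): a value c of the context variable C yields a context-specific independence if the distribution of Y no longer depends on X once C = c. The toy table and tolerance below are illustrative assumptions, not data or algorithms from the thesis.

```python
def csi_contexts(cpt, tol=1e-9):
    """Return the values c of C for which Y is independent of X in the context C = c.

    cpt maps (x, c) -> distribution of Y, given as a tuple of probabilities.
    """
    xs = sorted({x for x, _ in cpt})
    cs = sorted({c for _, c in cpt})
    contexts = []
    for c in cs:
        dists = [cpt[(x, c)] for x in xs]
        # Y is context-specifically independent of X if all rows for this c coincide
        if all(max(abs(p - q) for p, q in zip(d, dists[0])) <= tol for d in dists):
            contexts.append(c)
    return contexts

# Toy example: in context c=0 the distribution of Y ignores X; in context c=1 it does not
cpt = {
    (0, 0): (0.3, 0.7), (1, 0): (0.3, 0.7),
    (0, 1): (0.1, 0.9), (1, 1): (0.6, 0.4),
}
print(csi_contexts(cpt))   # -> [0]
```
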
485

Three Essays on Bio-security

Gao, Qi 2009 December 1900 (has links)
In this dissertation, several essays in the field of bio-security are presented. The estimation of the probability of an FMD outbreak by type and location of premises is important for decision making. In Essay I, we estimate and predict the probability/risk of an FMD outbreak spreading to the various premises in the study area. We first used a Poisson regression model with a dispersion adjustment, fitted to random simulation results from the AusSpead model, to estimate the parameters of the model. Our estimation and prediction show that large cattle losses could be concentrated in three counties: Deaf Smith, Parmer, and Castro. These results reflect the fact that approximately 70% of the feedlots with over 10,000 cattle are located in the three counties previously mentioned. In Essay II, our objective is to determine the best mitigation strategies for minimizing animal loss based on the AusSpead simulation model. We tested 15 mitigation strategies by using multiple comparisons. The results show that the best mitigation strategies for all four scenarios are regular surveillance, slaughter of the infected animals, and early detection. We then used mixed integer programming to estimate the costs of disposing of animal carcasses and of transportation. Results show that the unit disposal cost varies with carcass scale, and the unit transportation cost varies with the distribution of the infected premises and disposal locations. FMD seems to have varying impacts on equity markets. In Essay III, we studied returns at three different levels of the stock market. We tested for a structural break, and then estimated the impact of the announcement of confirmed cases of FMD on the volatility of stock market returns by using a GARCH-in-mean model. Our results show that the structural break occurs on the day with the largest number of confirmed cases for meat product firms rather than the day of the first confirmed case. We found that the conditional volatilities over the FMD period are higher than those over the sample period. The announcement of confirmed cases had the largest marginal impact on meat products. Investors may wish to consider maintaining a portfolio consisting of index funds or hedge funds.
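
To illustrate the volatility mechanics behind Essay III, the sketch below simulates a plain GARCH(1,1) return series and forces a large negative return on an "announcement day", showing how the conditional volatility jumps and then decays. The parameter values, shock size, and normal innovations are illustrative assumptions, not estimates from the dissertation.

```python
import numpy as np

def garch11_path(n, omega=1e-6, alpha=0.08, beta=0.90, shock_day=None, shock=-0.05, seed=0):
    """Simulate GARCH(1,1) returns and conditional volatilities, optionally forcing one large shock."""
    rng = np.random.default_rng(seed)
    r = np.zeros(n)
    h = np.zeros(n)                       # conditional variances
    h[0] = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    for t in range(n):
        if t > 0:
            h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
        r[t] = shock if t == shock_day else np.sqrt(h[t]) * rng.standard_normal()
    return r, np.sqrt(h)

returns, cond_vol = garch11_path(250, shock_day=100)
print("conditional vol before announcement:", round(float(cond_vol[99]), 4))
print("conditional vol after announcement: ", round(float(cond_vol[101]), 4))
```
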
486

The Value of Public Transportation for Improving the Quality of Life for the Rural Elderly

Israel, Alicia Ann 2012 May 1900 (has links)
Mobility is an undeniable issue for current and future elderly populations. The increasing popularity of rural communities among retirees makes this a particularly important issue in rural towns. When an elderly individual living in a rural community is no longer able to drive, the issues that come with living in a rural area may be exacerbated, and the individual may experience a decrease in quality of life. Although individuals may be able to use public transportation, most existing options do not promote an independent lifestyle. Any updated rural transportation system benefiting the elderly would be funded by taxpayers. An understanding of taxpayers' preferences and willingness-to-pay (WTP) for transportation options, therefore, is essential. Few, if any, economic studies have addressed this issue. The objectives of this research are to: (1) estimate economic willingness-to-pay (WTP) for public transportation options by using choice modeling techniques; and (2) better understand opinions related to public transportation for the elderly held by the general population as a whole and within different demographic groups. To complete these objectives, a choice survey was distributed to samples of three populations: residents of Atascosa County (located in south Texas); residents of Polk County (located in east Texas); and students at Texas A&M University. Respondents were presented with transportation options described by five attributes: addition to the annual vehicle registration fee, days of operation, hours of operation, type of route, and senior citizen transportation fare discount. Results show that both students and the general public value public transportation options and are willing to pay for specific transportation attributes. Respondents tended to prefer the more flexible options over the less flexible ones presented to them; however, respondents did not necessarily prefer the most flexible options. Students, generally, are willing to pay more for transportation attributes than county residents. Overall, both Atascosa and Polk County residents have similar WTPs, indicating that both populations value rural public transportation similarly. The effects of socio-demographic variables on residents' decisions to choose a transportation option appear to differ between the counties. These findings imply that while the influence of transportation attribute levels is consistent across counties, local input is important in customizing transportation systems to meet local expectations.
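
In choice modeling, the marginal willingness-to-pay for a non-price attribute is commonly computed as the negative ratio of that attribute's coefficient to the cost (here, registration-fee) coefficient. The sketch below applies this ratio to made-up conditional-logit coefficients; the attribute names and numbers are illustrative, not the study's estimates.

```python
def wtp_from_logit(coefs, cost_attr="fee_increase"):
    """Marginal WTP for each attribute = -(attribute coefficient) / (cost coefficient)."""
    cost_coef = coefs[cost_attr]
    return {k: -v / cost_coef for k, v in coefs.items() if k != cost_attr}

# Hypothetical conditional-logit coefficients (not the study's estimates)
coefs = {
    "fee_increase": -0.045,      # annual vehicle registration fee, per dollar
    "weekend_service": 0.60,     # service offered seven days a week
    "door_to_door_route": 0.85,  # demand-responsive rather than fixed route
    "senior_fare_discount": 0.30,
}
for attr, value in wtp_from_logit(coefs).items():
    print(f"WTP for {attr}: ${value:.2f} per year")
```
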
487

A Structural Equation Modeling Study: The Metacognition-knowledge Model For Geometry

Aydin, Utkun 01 July 2007 (has links) (PDF)
The purpose of this study is twofold: (1) to examine the effects of knowledge of cognition and regulation of cognition on declarative knowledge, conditional knowledge, and procedural knowledge in geometry and (2) to examine the interrelationships among declarative knowledge, conditional knowledge, and procedural knowledge in geometry. The reciprocal relationships between metacognitive and knowledge factors were modeled by using data from tenth grade secondary school students. Structural equation modeling was used to test the hypothesized relationships of two metacognitive factors (knowledge of cognition, regulation of cognition) and three knowledge factors (declarative knowledge, conditional knowledge, procedural knowledge). The observed variables representing the latent variables were determined by carrying out exploratory factor analysis and confirmatory factor analysis for the metacognitive awareness inventory and the geometry knowledge test separately. Major findings revealed that: (1) declarative knowledge significantly and positively influences conditional and procedural knowledge; (2) procedural knowledge has a significant and positive direct effect on conditional knowledge; (3) declarative knowledge has a positive indirect effect on conditional knowledge; (4) knowledge of cognition significantly and positively influences procedural knowledge; (5) regulation of cognition has a significant but negative direct effect on procedural knowledge; (6) knowledge of cognition has positive indirect effects on conditional and procedural knowledge; (7) regulation of cognition has negative indirect effects on conditional and procedural knowledge; and (8) knowledge of cognition and regulation of cognition have non-significant direct effects on declarative and conditional knowledge. The results showed that knowledge of cognition has the strongest direct effect on procedural knowledge and that the direct effect of declarative knowledge on conditional knowledge is stronger than its effect on procedural knowledge. In view of the findings, suggestions are provided for teachers, instructional designers, and mathematics education researchers.
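
In a standard path-model decomposition, an indirect effect such as the one reported between declarative and conditional knowledge is the product of the direct path coefficients along the route, and the total effect adds the direct path. The tiny sketch below illustrates that arithmetic with hypothetical standardized coefficients; the numbers are invented, not the study's estimates.

```python
# Hypothetical standardized direct path coefficients (illustrative only):
#   knowledge of cognition (KC) -> procedural (PK), declarative (DK) -> PK,
#   DK -> conditional (CK), PK -> CK
paths = {("KC", "PK"): 0.45, ("DK", "PK"): 0.30, ("DK", "CK"): 0.40, ("PK", "CK"): 0.25}

# Indirect effect of DK on CK via PK = (DK -> PK) * (PK -> CK)
indirect_dk_ck = paths[("DK", "PK")] * paths[("PK", "CK")]
total_dk_ck = paths[("DK", "CK")] + indirect_dk_ck

print(f"indirect effect DK -> CK via PK: {indirect_dk_ck:.3f}")
print(f"total effect DK -> CK: {total_dk_ck:.3f}")
```
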
488

The Effects Of Physical Manipulative With Or Without Self-metacognitive Questioning On Sixth Grade Students' Knowledge Acquisition In Polygons

Erdogan, Beril 01 December 2007 (has links) (PDF)
This study compared the effect of the use of physical manipulatives with self-metacognitive questioning versus manipulatives without self-metacognitive questioning on knowledge acquisition in polygons. Participants were 220 sixth grade students. A pretest, treatment, and posttest two-group design was used. There were two treatment groups: manipulatives with self-metacognitive questioning (MAN+META) and manipulatives without self-metacognitive questioning (MAN). Three distinct knowledge tests were designed by the researcher: declarative, conditional, and procedural. The declarative knowledge test consisted of 18 multiple-choice questions. The conditional and procedural knowledge tests consisted of six and ten open-ended questions, respectively. Mixed-design analysis of variance results revealed that there is a significant effect of time but no group-by-time interaction effect, suggesting that both groups responded equally well to treatment in the amount of change in their scores on the two outcome measures: pretests and posttests. A follow-up analysis (paired t-test) was conducted to evaluate the impact of time on students' pretest and posttest scores. The large effect size indicated that there was a statistically significant increase in the scores of all three tests.
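
The follow-up analysis described above is a paired t-test on pretest versus posttest scores, typically reported with an effect size such as Cohen's d for paired samples. A minimal sketch with made-up scores (not the study's data):

```python
import numpy as np
from scipy import stats

pretest = np.array([8, 10, 7, 12, 9, 11, 6, 10, 8, 9], dtype=float)
posttest = np.array([12, 14, 10, 15, 13, 14, 9, 13, 12, 12], dtype=float)

t_stat, p_value = stats.ttest_rel(posttest, pretest)   # paired t-test
diff = posttest - pretest
cohens_d = diff.mean() / diff.std(ddof=1)              # effect size for paired samples

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```
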
489

Modeling Diseases With Multiple Disease Characteristics: Comparison Of Models And Estimation Methods

Erdem, Munire Tugba 01 July 2011 (has links) (PDF)
Epidemiological data with disease characteristic information can be modelled in several ways. One way is taking each disease characteristic as a response and constructing a binary or polytomous logistic regression model. A second way is using a new response that consists of disease subtypes created by cross-classification of the disease characteristic levels, and then constructing a polytomous logistic regression model. The former may be disadvantageous since any possible covariation between disease characteristics is neglected, whereas the latter can capture that covariation behaviour. However, cross-classifying the characteristic levels increases the number of response categories, so a dimensionality problem in the parameter space may occur in the classical polytomous logistic regression model. A two-stage polytomous logistic regression model overcomes that dimensionality problem. In this thesis, the study progresses in two main directions: a simulation study and a data analysis. In the simulation study, models that capture the covariation behaviour are compared in terms of the response model parameter estimators. That is, the performances of the maximum likelihood estimation (MLE) approach to classical polytomous logistic regression, the Bayesian estimation approach to classical polytomous logistic regression, and the pseudo-conditional likelihood (PCL) estimation approach to two-stage polytomous logistic regression are compared in terms of the bias and variation of the estimators. Results of the simulation study revealed that for small sample sizes and a small number of disease subtypes, PCL outperforms the alternatives in terms of bias and variance. For a medium number of disease subtypes, PCL performs better than MLE when the sample size is small; however, when the sample size gets larger, MLE has better performance in terms of the standard errors of the estimates. In addition, the sampling variance of the PCL estimators of the two-stage model converges to the asymptotic variance faster than that of the ML estimators of the classical polytomous logistic regression model. In the data analysis, etiologic heterogeneity in breast cancer subtypes of Turkish female cancer patients is investigated, and the superiority of the two-stage polytomous logistic regression model over the classical polytomous logistic model with disease subtypes is demonstrated in terms of the interpretation of parameters and convenience in hypothesis testing.
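
The cross-classification step can be sketched in a few lines: two binary disease characteristics are combined into a single four-category subtype response, which is then fit with an ordinary multinomial (polytomous) logistic regression. The simulated data and the use of scikit-learn's solver are illustrative stand-ins, not the thesis's models or estimation procedures.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=(n, 2))                      # two covariates

# Two binary disease characteristics, each loosely related to the covariates
char_a = (x[:, 0] + rng.normal(scale=1.0, size=n) > 0).astype(int)
char_b = (x[:, 1] + rng.normal(scale=1.0, size=n) > 0).astype(int)

# Cross-classified subtype response: 2 x 2 characteristic levels -> 4 subtypes
subtype = 2 * char_a + char_b

# With the default lbfgs solver, a multiclass target is fit as a multinomial logistic model
model = LogisticRegression(max_iter=1000)
model.fit(x, subtype)
print(model.classes_)          # the four cross-classified subtypes
print(model.coef_.round(2))    # one coefficient row per subtype
```
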
490

Financial Derivatives Pricing and Hedging - A Dynamic Semiparametric Approach

Huang, Shih-Feng 26 June 2008 (has links)
A dynamic semiparametric pricing method is proposed for financial derivatives, including European and American type options and convertible bonds. The proposed method is an iterative procedure which uses nonparametric regression to approximate derivative values and parametric asset models to derive the continuation values. An extension to higher-dimensional option pricing is also developed, in which the dependence structure of financial time series is modeled by copula functions. In the simulation study, we value one-dimensional American options and convertible bonds as well as multi-dimensional American geometric average options and max options. The one-dimensional underlying asset models considered include the Black-Scholes, jump-diffusion, and nonlinear asymmetric GARCH models, and for the multivariate case we study copula models such as the Gaussian, Clayton, and Gumbel copulae. Convergence of the method is proved under a continuity assumption on the transition densities of the underlying asset models, and the orders of the sup-norm errors are derived. Both the theoretical findings and the simulation results show that the proposed approach is tractable for numerical implementation and provides a unified and accurate technique for financial derivative pricing. The second part of this thesis studies the option pricing and hedging problems for conditionally leptokurtic returns, an important feature of financial data. The risk-neutral models for log and simple returns with heavy-tailed innovations are derived by an extended Girsanov change of measure. The result is applicable to the option pricing of the GARCH model with t innovations (GARCH-t) for simple return series. The dynamic semiparametric approach is extended to compute the option prices of conditionally leptokurtic returns. The hedging strategy consistent with the extended Girsanov change of measure is constructed and is shown to have smaller cost variation than the commonly used delta hedging under the risk-neutral measure. Simulation studies are also performed to show the effect of using GARCH-normal models to compute the option prices and delta hedges of a GARCH-t model for plain vanilla and exotic options. The results indicate that there are only small pricing and hedging differences between the normal and t innovations for plain vanilla and Asian options, yet significant disparities arise for barrier and lookback options due to an improper distributional setting of the GARCH innovations.
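
The iterative "regress the continuation value, compare with immediate exercise" idea can be illustrated with a least-squares Monte Carlo sketch for an American put under Black-Scholes dynamics. A polynomial basis stands in for the nonparametric regression used in the thesis, and all parameter values are illustrative assumptions.

```python
import numpy as np

def american_put_lsm(s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0,
                     n_steps=50, n_paths=20000, seed=0):
    """Least-squares Monte Carlo price of an American put on Black-Scholes paths."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    s = s0 * np.exp(np.hstack([np.zeros((n_paths, 1)), log_paths]))   # simulated price paths

    cashflow = np.maximum(k - s[:, -1], 0.0)          # exercise value at maturity
    for i in range(n_steps - 1, 0, -1):
        cashflow *= np.exp(-r * dt)                   # discount future cash flows one step back
        exercise = np.maximum(k - s[:, i], 0.0)
        itm = exercise > 0
        if itm.sum() > 0:
            # Regress discounted future cash flows on a polynomial basis of the current price
            coefs = np.polyfit(s[itm, i], cashflow[itm], deg=3)
            continuation = np.polyval(coefs, s[itm, i])
            exercise_now = exercise[itm] > continuation
            cashflow[itm] = np.where(exercise_now, exercise[itm], cashflow[itm])
    return np.exp(-r * dt) * cashflow.mean()

print(round(american_put_lsm(), 3))   # roughly 6 under these illustrative parameters
```
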
