121

Wireless technology application for quality control on mining planning incorporating geologic uncertainty.

Rondinelli de Sousa Silva 06 October 2006 (has links)
The stages involved in mining production can be modeled and managed more efficiently when there is an integrated and continuous flow of information, from geology all the way down the mine value chain to the final product specifications. It is possible to improve the efficiency of decision-making in mine planning and operations through proper data-flow management. With the incorporation of technological components such as mining software, virtual reality software and wireless components, it is possible to make decisions based on models that are more accurate and realistic. This research proposes a methodology for integrating these technologies into the control of waste-stripping (overburden removal) tasks in short-term mine planning, in order to improve quality control of the mined ore using conditional simulation techniques. The integration of these technologies allows real-time transmission of data between mine planning and mine operation, improving productivity and efficiency in process control. In addition, it provides a measure of the uncertainty associated with each operational mining plan. An integrated solution allows mine planners and equipment operators to obtain results more quickly and efficiently, significantly improving the productivity of the mine operation and the quality of the mined ore.
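As a rough illustration of the kind of uncertainty measure such plan-level quality control relies on, the sketch below computes per-block ore probabilities from an ensemble of conditional simulations. It is a minimal sketch under assumed inputs: the simulated grades, cutoff and array sizes are made up, and no wireless or mine-planning software is involved.

```python
import numpy as np

# Assumed inputs (illustrative): 50 conditional simulations of grades
# for 200 mining blocks in the short-term plan, e.g. from sequential
# Gaussian simulation.  The lognormal draw below is only a stand-in.
rng = np.random.default_rng(0)
simulated_grades = rng.lognormal(mean=0.0, sigma=0.4, size=(50, 200))

cutoff = 1.0  # hypothetical ore/waste cutoff grade

# Expected grade and spread per block across the simulation ensemble.
expected_grade = simulated_grades.mean(axis=0)
grade_std = simulated_grades.std(axis=0)

# Probability that each block is above cutoff: the fraction of
# simulations in which its grade exceeds the cutoff.
prob_ore = (simulated_grades >= cutoff).mean(axis=0)

# Flag blocks whose classification is uncertain (probability near 0.5);
# these would be candidates for extra grade control before mining.
uncertain_blocks = np.where((prob_ore > 0.3) & (prob_ore < 0.7))[0]
print(f"{len(uncertain_blocks)} of {simulated_grades.shape[1]} blocks "
      "have an uncertain ore/waste classification")
```

Blocks with probabilities near 0.5 carry the most classification risk, which is the kind of information the abstract describes feeding back into each short-term plan.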
122

Persian potential preterit : The use of the preterit in potential conditional clauses in modern literary texts in Persian

von Zeipel, Kenneth January 2010 (has links)
No description available.
123

Improving use of statistical information by jurors by reducing confusion of the inverse

Raacke, John David January 1900 (has links)
Doctor of Philosophy / Department of Psychology / James Shanteau / In many situations, people are called on to make judgments about the likelihood of an event. Research has shown that when people make these judgments, they frequently equate or confuse conditional probabilities with other conditional probabilities. This equating or confusing of conditional probabilities is known as the confusion of the inverse. Research investigating this problem typically focuses on clinical and medical decision-making and the use of statistical evidence to make diagnoses. However, one area in which the confusion of the inverse has not been studied is juror decision-making. Thus, the purpose of this dissertation was to (1) determine whether the confusion of the inverse influences juror decision-making, (2) interpret reasons why this confusion occurs, and (3) attempt to eliminate it from juror decision-making. Jurors were presented with four court cases gathered from local and federal courthouses in a small Midwestern city. In each of the four cases, a single piece of evidence was presented (statistical only) which was to be used when rendering verdicts. Finally, each case contained juror instructions for the specific case type: murder, kidnapping, arson, or sexual assault. Overall, jurors fell prey to the confusion of the inverse, equating the probability of the data given the hypothesis [P(D|H)] with the probability of the hypothesis given the data [P(H|D)]. However, the research was unable to reduce the effect, much less eliminate it from the task. Interestingly, jurors tended to ignore the statistical evidence (i.e., estimations about the probability of a match) in favor of their own personal belief in the strength of the evidence. Although the original intent of reducing or eliminating the confusion of the inverse was not accomplished, the dissertation did accomplish three things. First, researchers have hypothesized three reasons why people engage in incorrect probabilistic reasoning, and the dissertation affirmed that it is indeed a function of confusing conditional probabilities, that is, the confusion of the inverse. Second, it seems that the use of statistical evidence in a trial is ignored by most jurors in favor of their own personal belief in the evidence’s strength. Finally, the criteria needed for “beyond a reasonable doubt” may be too stringent.
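The core error the dissertation studies can be made concrete with a small Bayes'-theorem calculation; the numbers below are purely hypothetical and are not taken from the cases used in the study.

```python
# Hypothetical forensic-match scenario, chosen only to illustrate the
# confusion of the inverse; none of these numbers come from the study.
p_h = 1 / 10_000          # prior probability the suspect is the source, P(H)
p_d_given_h = 0.99        # probability of a reported match if H is true, P(D|H)
p_d_given_not_h = 0.01    # false-match probability, P(D|not H)

# Bayes' theorem: P(H|D) = P(D|H) * P(H) / P(D)
p_d = p_d_given_h * p_h + p_d_given_not_h * (1 - p_h)
p_h_given_d = p_d_given_h * p_h / p_d

print(f"P(D|H) = {p_d_given_h:.4f}")   # 0.9900
print(f"P(H|D) = {p_h_given_d:.4f}")   # about 0.0098 -- very different
```

A juror who confuses the inverse reads the 0.99 as the probability of guilt, whereas the posterior P(H|D) here is below one percent because the prior is so small.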
124

Computations of turbulent premixed flames using conditional moment closure

Amzin, Shokri January 2012 (has links)
Lean premixed combustion is at present one of the most promising methods to reduce emissions and to maintain high efficiency in combustion systems. As emission legislation becomes more stringent, modelling of turbulent premixed combustion has become an important tool for designing efficient and environmentally friendlier combustion systems. However, in order to predict these emissions, reliable predictive models are required. One of the methods used for predicting pollutants is the conditional moment closure (CMC), which is well suited to predicting pollutants with slow time scales. Despite the fact that CMC has been successfully applied to various non-premixed combustion systems, its application to premixed flames is not fully tested and validated. The main difficulty is associated with the modelling of the conditional scalar dissipation rate (CSDR) of the conditioning scalar, the progress variable. In premixed CMC, this term is an important quantity and represents the rate of mixing at the small scales of relevance for combustion. The numerical accuracy of the CMC method depends on the accuracy of the CSDR model. In this study, two different models for the CSDR, an algebraic model and an inverse-problem model, are validated using two different DNS data sets. The algebraic model, along with standard k-ε turbulence modelling, is used in the computations of stoichiometric and very lean pilot-stabilized Bunsen flames using the RANS-CMC method. A first-order closure is used for the conditional mean reaction rate. The computed non-reacting and reacting scalars are in reasonable agreement with the experiments and are consistent with earlier computations using flamelet and transported PDF methods for the stoichiometric flames, and transported PDF methods for the very lean flames. Sensitivity to the chemical kinetics mechanism is also assessed.
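For orientation, a schematic form of the conditional moment equation and the first-order closure mentioned in the abstract can be written as below. This is the generic textbook structure, with several premixed-specific and modelling terms collapsed into the ellipsis, not the exact equation set solved in the thesis:

$$
\frac{\partial Q_\alpha}{\partial t}
+ \langle u_j \mid \zeta \rangle \frac{\partial Q_\alpha}{\partial x_j}
\;=\;
\langle N_c \mid \zeta \rangle \frac{\partial^2 Q_\alpha}{\partial \zeta^2}
+ \langle \dot{\omega}_\alpha \mid \zeta \rangle
+ \cdots,
\qquad
Q_\alpha(\zeta) \equiv \langle Y_\alpha \mid c = \zeta \rangle,
$$

where $\langle N_c \mid \zeta \rangle = \langle \mathcal{D}\,|\nabla c|^2 \mid c = \zeta \rangle$ is the CSDR of the progress variable that the thesis models, and the first-order closure approximates the conditional mean reaction rate by evaluating the chemical source term at the conditional means, $\langle \dot{\omega}_\alpha \mid \zeta \rangle \approx \dot{\omega}_\alpha(Q_1,\dots,Q_N,\langle T \mid \zeta \rangle)$.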
125

Prediction-based failure management for supercomputers

Ge, Wuxiang January 2011 (has links)
The growing requirements of a diversity of applications necessitate the deployment of large and powerful computing systems, and failures in these systems may cause severe damage, ranging from loss of human life to harm to the world economy. However, current fault tolerance techniques cannot meet the increasing requirements for reliability. Thus new solutions are urgently needed, and research on proactive schemes is one of the directions that may offer better efficiency. This thesis proposes a novel proactive failure management framework. Its goal is to reduce the failure penalties and improve fault tolerance efficiency in supercomputers when running complex applications. The proposed proactive scheme builds on two core components: failure prediction and proactive failure recovery. More specifically, the failure prediction component is based on the assessment of system events and employs semi-Markov models to capture the dependencies between failures and other events for the forecasting of forthcoming failures. Furthermore, a two-level failure prediction strategy is described that not only estimates the future failure occurrence but also identifies the specific failure categories. Based on the accurate failure forecasting, a prediction-based coordinated checkpoint mechanism is designed to construct extra checkpoints just before each predicted failure occurrence so that the wasted computational time can be significantly reduced. Moreover, a theoretical model has been developed to assess the proactive scheme that enables calculation of the overall wasted computational time. The prediction component has been applied to industrial data from the IBM BlueGene/L system. Results of the failure prediction component show a substantial improvement in prediction accuracy in comparison with three other well-known prediction approaches, and also demonstrate that the semi-Markov based predictor, which achieves a precision of 87.41% and a recall of 77.95%, performs better than the other predictors.
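A toy sketch of the idea behind prediction-based checkpointing, assuming a perfect prediction and made-up times; this is not the thesis's theoretical model, which also accounts for checkpoint overhead and prediction accuracy.

```python
# Toy comparison of wasted computation with and without an extra
# checkpoint placed just before a predicted failure.

def wasted_work(failure_time, checkpoint_times):
    """Computation lost at a failure = time since the last completed checkpoint."""
    completed = [t for t in checkpoint_times if t <= failure_time]
    last = max(completed) if completed else 0.0
    return failure_time - last

periodic = [60.0 * k for k in range(1, 10)]   # periodic checkpoints every 60 min
failure_at = 415.0                            # the (true) failure time, in minutes

# Suppose the predictor forecasts a failure around t = 410 with some lead
# time, so one extra coordinated checkpoint is taken at t = 408.
proactive = sorted(periodic + [408.0])

print("wasted (periodic only):   ", wasted_work(failure_at, periodic), "min")   # 55.0
print("wasted (with prediction): ", wasted_work(failure_at, proactive), "min")  # 7.0
```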
126

The covariance structure of conditional maximum likelihood estimates

Strasser, Helmut 11 1900 (has links) (PDF)
In this paper we consider conditional maximum likelihood (cml) estimates for item parameters in the Rasch model under random subject parameters. We give a simple approximation for the asymptotic covariance matrix of the cml-estimates. The approximation is stated as a limit theorem when the number of item parameters goes to infinity. The results contain precise mathematical information on the order of approximation. The results enable the analysis of the covariance structure of cml-estimates when the number of items is large. Let us give a rough picture. The covariance matrix has a dominating main diagonal containing the asymptotic variances of the estimators. These variances are almost equal to the efficient variances under ml-estimation when the distribution of the subject parameter is known. Except for very small numbers n of item parameters, the variances are almost unaffected by n. The covariances are more or less negligible when the number of item parameters is large. Although this picture is intuitively not surprising, it has to be established in precise mathematical terms. This has been done in the present paper. The paper is based on previous results [5] of the author concerning conditional distributions of non-identical replications of Bernoulli trials. The mathematical background consists of Edgeworth expansions for the central limit theorem. These previous results are the basis of approximations for the Fisher information matrices of cml-estimates. The main results of the present paper are concerned with the approximation of the covariance matrices. Numerical illustrations of the results and numerical experiments based on the results are presented in Strasser [6].
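For readers unfamiliar with the setup, the conditional likelihood that the cml-estimates maximize has the standard Rasch-model form shown below (our notation, not reproduced from the paper):

$$
P(X_{vi}=1 \mid \theta_v, \beta_i) \;=\; \frac{\exp(\theta_v - \beta_i)}{1 + \exp(\theta_v - \beta_i)},
\qquad
P\!\left(x_{v1},\dots,x_{vn} \,\middle|\, r_v = \textstyle\sum_i x_{vi}\right)
\;=\;
\frac{\exp\!\big(-\sum_{i} x_{vi}\,\beta_i\big)}{\gamma_{r_v}(\varepsilon_1,\dots,\varepsilon_n)},
$$

where $\varepsilon_i = \exp(-\beta_i)$ and $\gamma_r$ is the elementary symmetric function of order $r$. Conditioning on the raw score $r_v$ removes the subject parameter $\theta_v$, which is why the cml-estimates and their covariance structure do not depend on the subject-parameter distribution.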
127

Efficient algorithms for semi-Markov conditional random fields and their application to the analysis of genomic sequences

Ígor Bonadio 06 August 2018 (has links)
Conditional Random Fields are discriminative probabilistic models that have been successfully used in several areas such as natural language processing, speech recognition and bioinformatics. However, implementing efficient algorithms for this kind of model is not an easy task. In this thesis we present a framework that helps the development and experimentation of semi-Markov Conditional Random Fields (semi-CRFs). We developed efficient algorithms implemented in C++ behind a flexible and intuitive programming interface that allows users to define, train and evaluate models. Our implementation was built as an extension of the ToPS framework and can use any model already defined in ToPS as a specialized feature function. Finally, we used our semi-CRF implementation to build a promoter predictor that outperformed existing predictors.
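A minimal sketch of the segment-level dynamic programming that makes semi-CRF decoding tractable, written as generic Python rather than the ToPS/C++ API described in the thesis; the `score` function is an assumed placeholder for the weighted feature functions.

```python
import math

def semicrf_decode(n, labels, max_len, score):
    """Viterbi-style decoding for a semi-CRF over a sequence of length n.

    `score(prev_label, label, start, end)` is an assumed user-supplied
    function returning the log-linear score of labelling the segment
    [start, end) with `label` when the previous segment had `prev_label`.
    Returns the best segmentation as (start, end, label) triples.
    """
    NEG = -math.inf
    best = [{y: NEG for y in labels} for _ in range(n + 1)]
    back = [{y: None for y in labels} for _ in range(n + 1)]
    best[0] = {y: 0.0 for y in labels}                   # empty prefix

    for end in range(1, n + 1):
        for y in labels:
            for d in range(1, min(max_len, end) + 1):    # candidate segment length
                start = end - d
                for y_prev in labels:
                    s = best[start][y_prev] + score(y_prev, y, start, end)
                    if s > best[end][y]:
                        best[end][y] = s
                        back[end][y] = (start, y_prev)

    # Trace back the best segmentation from position n.
    y = max(labels, key=lambda lab: best[n][lab])
    segments, end = [], n
    while end > 0:
        start, y_prev = back[end][y]
        segments.append((start, end, y))
        end, y = start, y_prev
    return list(reversed(segments))
```

The recursion runs in O(n · max_len · |labels|²), which is what keeps segment-level (rather than position-level) labelling practical for genomic sequences.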
128

Hes1 expression in mature neurons in the adult mouse brain is required for normal behaviors

Matsuzaki, Tadanobu 23 March 2020 (has links)
Kyoto University / 0048 / New-system doctoral programme / Doctor of Medical Science / Degree No. Kō 22318 / Medical Doctorate No. 4559 / Library call number 新制||医||1041 (University Library) / Kyoto University Graduate School of Medicine, Medical Science / (Chief examiner) Professor Dai Watanabe, Professor Yasunori Hayashi, Professor Tadashi Isa / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Medical Science / Kyoto University / DFAM
129

Causal Inference : controlling for bias in observational studies using propensity score methods

Msibi, Mxolisi January 2020 (has links)
Adjusting for baseline pre-intervention characteristics between treatment groups, through the use of propensity score matching methods, is an important step that enables researchers to do causal inference with confidence. This is critical largely because practical treatment-allocation scenarios are non-randomized in nature, with inevitable inherent biases, and therefore require such adjustments. These propensity score matching methods are the available tools to be used as control mechanisms for such intrinsic system biases in causal studies without the benefits of randomization (Lane, To, Kyna, & Robin, 2012). Certain assumptions need to be verifiable or met before one may estimate causal effects with propensity score matching using the Rubin causal model (Holland, 1986), of which the main ones are conditional independence (unconfoundedness) and common support (positivity). In particular, this dissertation is concerned with elaborating the applications of these matching methods for a ‘strong-ignorability’ case (Rosenbaum & Rubin, 1983), i.e. when both the overlap and unconfoundedness properties are valid. We move from explaining different experimental designs and how the treatment effect is estimated to a practical example based on two cohorts of enrolled introductory statistics students, before and after a clickers intervention, at a public South African university, and the relevant causal conclusions thereof. Keywords: treatment, conditional independence, propensity score, counterfactual, confounder, common support / Dissertation (MSc)--University of Pretoria, 2020. / Statistics / MSc / Unrestricted
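A minimal sketch of propensity-score estimation and nearest-neighbour matching under the strong-ignorability assumptions the abstract mentions. The file name, column names and outcome variable below are made-up placeholders; this is generic scikit-learn usage, not the analysis performed in the dissertation.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Assumed data layout: one row per student, a binary `treated` column
# (clickers cohort vs. earlier cohort), numeric baseline covariates, and
# an outcome column.  All names here are hypothetical.
df = pd.read_csv("students.csv")
covariates = ["age", "prior_math_mark", "attendance_rate"]

# 1. Estimate propensity scores P(treated = 1 | covariates).
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["treated"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2. 1:1 nearest-neighbour matching on the propensity score
#    (with replacement, since a control may be the nearest match more than once).
treated = df[df["treated"] == 1]
control = df[df["treated"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched_control = control.iloc[idx.ravel()]

# 3. Average treatment effect on the treated (ATT) as the mean outcome
#    difference over the matched pairs.
att = treated["final_mark"].mean() - matched_control["final_mark"].mean()
print(f"Estimated ATT: {att:.3f}")
```

Checking covariate balance and overlap of the propensity-score distributions before trusting the ATT estimate is part of the common-support assumption the abstract highlights.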
130

Determinants of Hospital Choice of Rural Hospital Patients: The Impact of Networks, Service Scopes, and Market Competition

Roh, Chul, Lee, Keon Hyung, Fottler, Myron D. 01 August 2008 (has links)
Among 10,384 rural Colorado female patients who received MDC 14 (obstetric services) from 2000 to 2003, 6,615 (63.7%) were admitted to their local rural hospitals; 1,654 (15.9%) were admitted to other rural hospitals; and 2,115 (20.4%) traveled to urban hospitals for inpatient services. This study examines how network participation, service scope, and market competition influence rural women's choice of hospital for obstetric care. A conditional logistic regression analysis was used. Network participation (p < 0.01), the number of services offered (p < 0.05), and hospital market competition had a positive and significant relationship with patients' choice of where to receive obstetric care. That is, rural patients prefer to receive care from a hospital that participates in a network, that provides a greater number of services, and that has a greater market share (i.e., a lower level of market competition) in their locality. Rural hospitals could actively increase their competitiveness and market share by increasing the number of health care services provided and by seeking to network with other hospitals.
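For reference, the conditional logit model underlying such a hospital-choice analysis gives the probability that patient i chooses hospital j from her choice set C_i as a function of hospital attributes (standard McFadden conditional logit form; the notation is ours, not the article's):

$$
P(y_i = j) \;=\; \frac{\exp(\mathbf{x}_{ij}^{\top}\boldsymbol{\beta})}{\sum_{k \in C_i}\exp(\mathbf{x}_{ik}^{\top}\boldsymbol{\beta})},
$$

where $\mathbf{x}_{ij}$ collects alternative-specific attributes such as network participation, number of services offered, and market share, and $\boldsymbol{\beta}$ is estimated by maximizing the corresponding conditional likelihood.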
