About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

A Probabilistic Approach to Image Feature Extraction, Segmentation and Interpretation

Pal, Chris January 2000 (has links)
This thesis describes a probabilistic approach to image segmentation and interpretation. The focus of the investigation is the development of a systematic way of combining color, brightness, texture and geometric features extracted from an image to arrive at a consistent interpretation for each pixel in the image. The contribution of this thesis is thus the presentation of a novel framework for the fusion of extracted image features, producing a segmentation of an image into relevant regions. Further, a solution to the sub-pixel mixing problem is presented, based on solving a probabilistic linear program. This work is specifically aimed at interpreting and digitizing multi-spectral aerial imagery of the Earth's surface. The features of interest for extraction are those of relevance to environmental management, monitoring and protection. The presented algorithms are suitable for use within a larger interpretive system. Some results are presented and contrasted with other techniques. The integration of these algorithms into a larger system is based firmly on a probabilistic methodology and the use of statistical decision theory to accomplish uncertain inference within the visual formalism of a graphical probability model.
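The abstract does not spell out the fusion rule itself; as a rough, illustrative sketch of the kind of per-pixel probabilistic fusion described, here is a minimal naive-Bayes-style combination of per-feature class likelihoods. The function name, array shapes and the feature-independence assumption are mine, not the thesis's.

```python
import numpy as np

# Hypothetical per-pixel fusion of independent feature likelihoods:
# p(class | features) is proportional to p(class) * prod_f p(feature_f | class).
# Shapes: each element of likelihoods is (H, W, K) for K classes; prior is (K,).
def fuse_features(likelihoods, prior):
    log_post = np.log(prior)[None, None, :]            # broadcast prior over pixels
    for lik in likelihoods:
        log_post = log_post + np.log(lik + 1e-12)      # independence assumption
    log_post -= log_post.max(axis=-1, keepdims=True)   # numerical stabilization
    post = np.exp(log_post)
    post /= post.sum(axis=-1, keepdims=True)           # normalize to probabilities
    return post                                        # (H, W, K) per-pixel posterior

# Example: two feature channels (e.g., color and texture), 3 classes, 4x4 image.
rng = np.random.default_rng(0)
color_lik = rng.dirichlet(np.ones(3), size=(4, 4))
texture_lik = rng.dirichlet(np.ones(3), size=(4, 4))
posterior = fuse_features([color_lik, texture_lik], prior=np.array([0.5, 0.3, 0.2]))
labels = posterior.argmax(axis=-1)                     # hard segmentation per pixel
```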
2

Draw control strategy for sublevel caving mines : A probabilistic approach

Shekhar, Gurmeet January 2018 (has links)
Sublevel caving is an underground mass mining method used for extracting different types of ores from the Earth's crust. Mines using sublevel caving (SLC) as the primary mining method are generally highly mechanized, with standardized and independent unit operations. The different unit operations (drilling, blasting, loading and transportation) are performed in isolation from each other, which leads to standardized procedures and safe operation. Loading of material from the production face in sublevel caving relies on the flow of material under gravity into the production face. A large amount of material is loaded from a limited opening, termed the draw point, which creates challenges for the mining method.

Material flow in SLC has been studied extensively over the past five decades, and different methods have been used to simulate material flow in caving operations. Physical models of different scales have been designed to simulate material flow, using sand, gravel or rocks and studying the movement of material inside the model. Early physical models showed an ellipsoidal zone above the draw point from which material flowed into the draw point; however, subsequent physical modelling results disagreed with this notion of material flow. Numerical modelling techniques have also been applied to simulate material flow. Currently, marker trials are used to understand material flow in SLC: markers (numbered steel rods, RFID-enabled markers) are installed in boreholes drilled inside the burden of a production ring, and material flow is inferred from the recovery sequence of the markers. Results from physical models, numerical models and marker trials, along with mine experience, have been used in the past to optimize mine design and draw control for SLC operations. The results from the latest marker trials highlight the chaotic nature of material flow and the unpredictability associated with simulating it.

In caving operations, draw control regulates the loading process by providing the information on when to stop loading. The decision to stop loading a blasted ring and proceed to blasting the subsequent ring is a critical one in an SLC operation: if a draw point is closed early, ore is lost in the draw point and cannot be conclusively recovered at lower levels; if closure is delayed, the mine faces greater dilution and increased mining costs. A study of the various draw control strategies used in sublevel caving operations globally has been carried out to describe the present state of the art. An analysis of the draw control and loading operations at the Malmberget and Kiirunavaara mines is summarized in the thesis, using information collected through interviews, internal documents, meetings and manuals. An optimized draw control strategy is vital for improving ore recovery and reducing dilution in SLC. Based on the literature review and the baseline mapping study, a set of guidelines for designing a new draw control strategy is presented.

In the present scenario of fluctuating metal prices and increasing operational costs, a new draw control strategy is needed that is probabilistic in nature and can handle the uncertainties associated with caving operations. A probabilistic draw control model provides scenario-based solutions and can be used to test different draw control strategies before performing mine tests.
A framework for a probabilistic draw control model, along with its application to draw control optimization, is discussed here. Effective draw control requires a constant monitoring system and constant calibration of the loading criteria through draw point monitoring, in order to reduce dilution and improve ore recovery. / Improved resource efficiency through dynamic loading control
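The abstract does not give the model's internals; as an illustrative sketch of what testing a draw control strategy in simulation, before any mine trial, might look like, here is a toy Monte Carlo comparison of stop-loading cutoffs. The dilution curve, noise model and all parameter values are assumptions, not the thesis's.

```python
import numpy as np

# Toy Monte Carlo sketch of a probabilistic stop-loading rule for one draw point.
# Assumed model (not from the thesis): the expected ore fraction of each bucket
# declines with the extraction ratio, with noise reflecting chaotic material flow.
rng = np.random.default_rng(1)

def expected_ore_fraction(extraction_ratio):
    # assumed dilution curve: ore fraction decays once extraction nears ~110%
    return 1.0 / (1.0 + np.exp(8.0 * (extraction_ratio - 1.1)))

def simulate_stop(cutoff, bucket_t=25.0, ring_t=2500.0, n_runs=1000):
    """Mean extraction ratio at which loading stops under the rule
    'stop when a bucket's sampled ore fraction falls below cutoff'."""
    stops = []
    for _ in range(n_runs):
        drawn = 0.0
        while True:
            p_ore = expected_ore_fraction(drawn / ring_t)
            bucket_grade = rng.beta(1 + 20 * p_ore, 1 + 20 * (1 - p_ore))
            if bucket_grade < cutoff:
                stops.append(drawn / ring_t)
                break
            drawn += bucket_t
    return float(np.mean(stops))

# Compare candidate cutoff grades before any mine trial:
for c in (0.3, 0.4, 0.5):
    print(f"cutoff {c:.1f} -> stops at {simulate_stop(c):.2f} extraction ratio on average")
```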
3

HIDING IN PLAIN SIGHT: A QUANTCRIT, INTERSECTIONAL ANALYSIS OF DUAL ENROLLMENT

Winfield, Jake, 0000-0001-6181-8664 January 2023 (has links)
The COVID-19 pandemic has threatened to widen racial educational achievement and attainment gaps in the United States, reinforcing the need to understand how education policy can work to advance racial equity. Dual enrollment (DE) programs offer a potential policy solution that could increase college-going, as participation has consistently been associated with increased rates of college-going and completion; these prior findings are, however, heterogeneous across student demographics. This study expands on prior DE research by examining how access to DE, and the benefits of participation, may differ based on one's intersectional identities. Using a critical quantitative framework, QuantCrit, and multiple national datasets, this study investigates three related questions. First, using IPEDS, the American Community Survey, and logistic regression, I investigate how the demographics of areas of the United States with limited access to widely accessible colleges and universities (called college access deserts) differ from those with greater access. I then consider access to DE with the U.S. Department of Education's Civil Rights Data Collection: 2017-18 and linear probability models, examining how access to DE varies across the United States, with attention to Black females and males. Finally, with the High School Longitudinal Study of 2009 and linear probability models, I investigate whether participation in DE is associated with an increased probability of attending postsecondary education and whether this association varies for Black people based on their gender. I find that areas of the United States with less access to widely accessible public colleges have lower proportions of Black residents but higher proportions of American Indian/Alaskan Native residents. Analysis of the Civil Rights Data Collection indicates that high schools with less access to widely accessible public colleges were more likely to offer DE; however, schools with higher percentages of Black students and male students are less likely to offer DE. The HSLS analysis shows that DE is positively associated with postsecondary attendance. Among postsecondary enrollees, DE participants were more likely to enroll in four-year institutions, but this increased probability may vary by gender, as female DE participants had greater increases in the probability of enrolling in four-year institutions than male DE participants. Taken together, this study's findings indicate that DE can be a policy to advance postsecondary attendance for all students; however, current access is racialized. These findings show that expanding DE for all students could increase postsecondary attendance. I conclude with recommendations for dual enrollment programs to broaden access in areas of the United States with widely accessible public colleges, and on the utility of open science and anti-racist quantitative methods in educational research. / Policy, Organizational and Leadership Studies
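As a rough sketch of the linear-probability-model setup described (OLS on a binary attendance outcome with intersectional interaction terms and robust standard errors), here is a minimal example. The variable names and the simulated data are placeholders, not the study's actual datasets.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical LPM with a DE x race x gender interaction on simulated data.
rng = np.random.default_rng(2)
n = 5000
df = pd.DataFrame({
    "black": rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),
    "dual_enroll": rng.integers(0, 2, n),
})
# Simulated outcome: postsecondary attendance as a 0/1 variable.
p = 0.55 + 0.10 * df.dual_enroll + 0.03 * df.female - 0.05 * df.black
df["attend"] = rng.binomial(1, p.clip(0, 1))

# LPM: OLS on the binary outcome with heteroskedasticity-robust (HC1) errors,
# interacting DE participation with race and gender.
model = smf.ols("attend ~ dual_enroll * black * female", data=df)
result = model.fit(cov_type="HC1")
print(result.summary().tables[1])
```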
4

A System for Determining the Statistical Significance of the Frequency of Short DNA Motif Matches in a Genome - An Analytical Approach

Pfeiffer, Philip Edward 16 May 2011 (has links)
No description available.
5

Contributions to the analysis of dispersed count data / Contribuições à análise de dados de contagem

Ribeiro Junior, Eduardo Elias 18 February 2019 (has links)
In many agricultural and biological contexts, the response variable is a non-negative integer value that we wish to explain or analyze in terms of a set of covariates. Unlike in the Gaussian linear model, the response variable is discrete, with a distribution that places probability mass at the natural numbers only. Poisson regression is the standard model for count data. However, the assumptions of this model force the mean and variance to be equal, which may be implausible in many applications. Motivated by experimental data sets, this work develops more realistic methods for the analysis of count data. We propose a novel parametrization of the COM-Poisson distribution and explore regression models based on it. We extend the model to allow the dispersion, as well as the mean, to depend on covariates. A set of count statistical models, namely COM-Poisson, Gamma-count, discrete Weibull, generalized Poisson, double Poisson and Poisson-Tweedie, is reviewed and compared, considering dispersion, zero-inflation and heavy-tail indexes, together with the results of data analyses. The computational routines developed in this dissertation are organized in two R packages available on GitHub.
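For reference, the standard COM-Poisson distribution underlying the thesis has pmf P(Y = y) = λ^y / ((y!)^ν Z(λ, ν)), where Z(λ, ν) = Σ_j λ^j / (j!)^ν; ν = 1 recovers the Poisson, ν < 1 gives over-dispersion and ν > 1 under-dispersion. Below is a minimal numerical sketch of this pmf and of the dispersion index; the thesis's specific mean-type reparametrization is not reproduced here.

```python
import numpy as np
from math import lgamma

# COM-Poisson pmf: P(Y = y) = lambda^y / ((y!)^nu * Z(lambda, nu)),
# with the normalizing constant Z truncated at a large upper bound.
def com_poisson_pmf(y, lam, nu, ymax=200):
    js = np.arange(ymax + 1)
    log_terms = js * np.log(lam) - nu * np.array([lgamma(j + 1) for j in js])
    log_z = np.logaddexp.reduce(log_terms)            # log Z(lambda, nu), computed stably
    return np.exp(y * np.log(lam) - nu * lgamma(y + 1) - log_z)

# Dispersion index DI = Var(Y)/E(Y): DI > 1 signals over-dispersion.
def dispersion_index(lam, nu, ymax=200):
    ys = np.arange(ymax + 1)
    p = np.array([com_poisson_pmf(y, lam, nu, ymax) for y in ys])
    mean = np.sum(ys * p)
    var = np.sum((ys - mean) ** 2 * p)
    return var / mean

print(dispersion_index(5.0, 1.0))  # nu = 1 recovers Poisson: DI = 1
print(dispersion_index(5.0, 0.5))  # nu < 1: over-dispersed, DI > 1
```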
6

Ranking from Pairwise Comparisons : The Role of the Pairwise Preference Matrix

Rajkumar, Arun January 2016 (has links) (PDF)
Ranking a set of candidates or items from pair-wise comparisons is a fundamental problem that arises in many settings such as elections, recommendation systems, sports team rankings, document rankings and so on. Indeed, it is well known in the psychology literature that when a large number of items are to be ranked, it is easier for humans to give pair-wise comparisons than complete rankings. The problem of ranking from pair-wise comparisons has been studied in multiple communities, such as machine learning, operations research, linear algebra and statistics, and several algorithms (both classic and recent) have been proposed. However, it is not well understood under what conditions these different algorithms perform well. In this thesis, we aim to fill this fundamental gap by elucidating precise conditions under which different algorithms perform well, as well as giving new algorithms that provably perform well under broader conditions. In particular, we consider a natural statistical model wherein for every pair of items (i, j), there is a probability Pij such that each time items i and j are compared, item j beats item i with probability Pij. Such models, which we summarize through a matrix containing all these pair-wise probabilities, have been used explicitly or implicitly in much previous work in the area; we refer to the resulting matrix as the pair-wise preference matrix, and elucidate the crucial role it plays in determining the performance of various algorithms. In the first part of the thesis, we consider a natural generative model where all pairs of items can be sampled and where the underlying preferences are assumed to be acyclic. Under this setting, we elucidate the conditions on the pair-wise preference matrix under which popular algorithms such as matrix Borda, spectral ranking, least squares and maximum likelihood under a Bradley-Terry-Luce (BTL) model produce optimal rankings that minimize the pair-wise disagreement error. Specifically, we derive explicit sample complexity bounds for each of these algorithms to output an optimal ranking under interesting subclasses of the class of all acyclic pair-wise preference matrices. We show that none of these popular algorithms is guaranteed to produce optimal rankings for all acyclic preference matrices. We then propose a novel support vector machine based rank aggregation algorithm that provably does so. In the second part of the thesis, we consider the setting where preferences may contain cycles. Here, finding a ranking that minimizes the pair-wise disagreement error is in general NP-hard. However, even in the presence of cycles, one may wish to rank 'good' items ahead of the rest. We develop a framework for this setting using notions of winners based on tournament solution concepts from social choice theory. We first show that none of the existing algorithms is guaranteed to rank winners ahead of the rest for popular tournament-solution-based winners such as the top cycle, Copeland set, Markov set, etc. We propose three algorithms - matrix Copeland, unweighted Markov and parametric Markov - which provably rank winners at the top for these popular tournament solutions. In addition to ranking winners at the top, we show that the rankings output by the matrix Copeland and the parametric Markov algorithms also minimize the pair-wise disagreement error for certain classes of acyclic preference matrices.
Finally, in the third part of the thesis, we consider the setting where the number of items to be ranked is large and it is impractical to obtain comparisons among all pairs. Here, one samples a small set of pairs uniformly at random and compares each pair a fixed number of times; in particular, the goal is to come up with good algorithms that sample comparisons among only O(n log n) item pairs (where n is the number of items). Unlike existing results for such settings, where one either assumes a noisy permutation model (under which there is a true underlying ranking and the outcome of every comparison differs from the true ranking with some fixed probability) or assumes a BTL or Thurstone model, we develop a general algorithmic framework based on ideas from matrix completion, termed low-rank pair-wise ranking, which provably produces a good ranking by comparing only O(n log n) pairs, O(log n) times each, not only for popular classes of models such as BTL and Thurstone, but also for much more general classes of models wherein a suitable transform of the pair-wise probabilities leads to a low-rank matrix; this subsumes the guarantees of many previous algorithms in this setting. Overall, our results help to understand at a fundamental level the statistical properties of various algorithms for the problem of ranking from pair-wise comparisons and, under various natural settings, lead to novel algorithms with improved statistical guarantees compared to existing algorithms for this problem.
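As a concrete illustration of ranking from a pairwise preference matrix, here is a minimal Copeland-style scoring sketch under the abstract's convention that Pij is the probability that item j beats item i. This is a generic textbook construction; the thesis's matrix Copeland algorithm may differ in its details.

```python
import numpy as np

# Copeland-style ranking: score item i by how many items it beats, where
# "i beats j" means P[i, j] < 1/2 (since P[i, j] = prob. that j beats i).
def copeland_ranking(P):
    beats = (P < 0.5).astype(float)        # beats[i, j] = 1 if i beats j
    np.fill_diagonal(beats, 0.0)           # an item does not play itself
    scores = beats.sum(axis=1)             # Copeland score of each item
    return np.argsort(-scores), scores     # rank by descending score

# A BTL-consistent example: P[i, j] = w_j / (w_i + w_j) with skill weights w.
w = np.array([5.0, 3.0, 2.0, 1.0])
P = w[None, :] / (w[:, None] + w[None, :])
order, scores = copeland_ranking(P)
print(order)   # expected: [0 1 2 3], strongest item first
```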
7

Aportaciones al modelado de la indisponibilidad de componentes y la fiabilidad humana para la mejora de la seguridad de las centrales nucleares informada en el riesgo

Martorell Aigües, Pablo 22 March 2019 (has links)
[EN] Safety always has been and always will be a priority in nuclear power plant operation for electricity generation. One of the major challenges facing the nuclear industry is the ageing of safety-related structures, systems and components (SSC). This issue is gaining relevance as many reactors reach the end of their design life and must soon undergo a Periodic Safety Review (PSR) to ensure safe operation for an additional period known as Long Term Operation (LTO). Probabilistic Safety Assessment (PSA) is an essential tool in integral NPP safety management, both within the PSR framework and in Risk-Informed Decision Making (RIDM), as it can evaluate the risk increase or decrease produced by any specific change to the plant. However, current PSA models and data do not consider factors that have a decisive impact on the overall risk of the plant. On the one hand, they do not take into account the effect of ageing, or of the testing and maintenance program, in the modelling of component reliability and unavailability. On the other hand, the modelling of human actions and the quantification of their error probabilities are characterized by imprecision and uncertainty. This Ph.D. dissertation aims to update and improve PSA models for application within the framework of PSR and RIDM. To this end, two main objectives are established. The first is the development of a time-dependent RAM (reliability, availability and maintainability) model for safety-related components, fitted with historical plant data and with enough level of detail to explicitly include the effects of testing and maintenance activities in the age of the component. The second is to demonstrate PSA applications of the new models in RIDM. First, an age-dependent risk model is defined, based on the previously formulated RAM model, which helps select the best testing and maintenance strategy over a given time horizon. Second, to improve PSA modelling and the quantification of human error probabilities (HEP), a methodology is proposed to evaluate the risk impact of changes to human actions and Technical Specifications, taking advantage of combined DSA and PSA insights. The work of this thesis is part of the research supported by the Ministry of Economy and Business in the project ENE2016-80401-R, "Risk Informed Harmonization of Surveillance Requirements and Maintenance in Nuclear Power Plants", and the Doctoral Fellowship BES-2014-067602. / Martorell Aigües, P. (2019). Aportaciones al modelado de la indisponibilidad de componentes y la fiabilidad humana para la mejora de la seguridad de las centrales nucleares informada en el riesgo [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/118794
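As an illustrative sketch of how ageing and the surveillance test interval can enter a component unavailability model, here is a toy age-dependent version of the classic approximation U ≈ ρ + λT/2 for a periodically tested standby component. The linear-ageing rate and all parameter values are assumptions for illustration, not the thesis's fitted RAM model.

```python
import numpy as np

# Toy age-dependent mean unavailability of a periodically tested standby
# component: U(t) ~ rho + lambda(t) * T / 2, with a linear-ageing failure
# rate lambda(t) = lambda0 * (1 + alpha * t). Illustrative parameters only.
def mean_unavailability(t_years, lambda0=1e-5, alpha=0.05, rho=1e-3, T_hours=720.0):
    lam = lambda0 * (1.0 + alpha * t_years)   # per-hour failure rate at age t
    return rho + lam * T_hours / 2.0          # per-demand + standby contributions

ages = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
for t, u in zip(ages, mean_unavailability(ages)):
    print(f"age {t:4.0f} y: U = {u:.2e}")
# Sweeping T_hours at a fixed age sketches the trade-off an age-dependent
# risk model can optimize when selecting a testing and maintenance strategy.
```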
8

Inquiry into Additionality in the Solar Policy Framework

Michael Liam Smith (18410295) 19 April 2024 (has links)
<p dir="ltr">An inquiry into the additionality of the income tax credit program for solar purchasing in Ohio, where aggregation electric purchasing programs exist.</p><p dir="ltr">In the State of Ohio, a unique feature of the electric market regulatory landscape permits local governments to become energy suppliers to their residents and small businesses through programs known as community choice aggregation (CCA). Some of these programs guarantee 100% renewable electricity to all enrollees. Concurrently, the federal government offers an income tax credit (ITC) for the purchase of a solar array. When policy incentives are offered, it is important to ensure they impact their target audience to act in ways that would not be observed in the scenario without the tax incentive. This is known as “additionality.” In the context of carbon emissions reduction goals, individuals who claim the ITC while already having 100% renewable electricity would violate additionality. In other words, these renewable aggregation programs may crowd out the benefits of the ITC. This paper seeks to assess the additionality of the ITC in the context of Ohio’s CCA program. The actual additionality can depend on whether renewable energy is already being supplied to the site that constructs a solar array. Hence, we study the relationship between CCA and solar adoption probability to determine whether tax incentives are additional. Using non-parametric survival analysis, panel data methods, and post-estimation simulations, this paper seeks to discern if additionality is violated using the ITC in areas where a supply of renewable energy is already guaranteed. We find that aggregation programs increase the probability of solar adoption and that on average, in Ohio, roughly $0.44 of every dollar spent on the income tax credit is non-additional. This will help policymakers determine the efficacy of funds allocated to their respective programs.</p>
