331

System Availability Maximization and Residual Life Prediction under Partial Observations

Jiang, Rui 10 January 2012 (has links)
Many real-world systems experience deterioration with usage and age, which often leads to low product quality, high production cost, and low system availability. Most previous maintenance and reliability models in the literature do not incorporate condition monitoring information for decision making, which often results in poor failure prediction for partially observable deteriorating systems. For that reason, the development of fault prediction and control schemes using condition-based maintenance techniques has received considerable attention in recent years. This research presents a new framework for predicting failures of a partially observable deteriorating system using Bayesian control techniques. A time series model is fitted to a vector observation process representing partial information about the system state. Residuals, which are indicative of system deterioration, are then calculated using the fitted model. The deterioration process is modeled as a 3-state continuous-time homogeneous Markov process. States 0 and 1 are not observable, representing healthy (good) and unhealthy (warning) system operational conditions, respectively. Only the failure state 2 is assumed to be observable. Preventive maintenance can be carried out at any sampling epoch, and corrective maintenance is carried out upon system failure. The form of the optimal control policy that maximizes the long-run expected average availability per unit time has been investigated. It has been proved that a control limit policy is optimal for decision making. The model parameters have been estimated using the Expectation-Maximization (EM) algorithm. The optimal Bayesian fault prediction and control scheme, considering long-run average availability maximization along with a practical statistical constraint, has been proposed and compared with the age-based replacement policy. The optimal control limit and sampling interval are calculated in the semi-Markov decision process (SMDP) framework. Another Bayesian fault prediction and control scheme has been developed based on the average run length (ARL) criterion. Comparisons with traditional control charts are provided. Formulae for the mean residual life and the distribution function of system residual life have been derived in explicit form as functions of a posterior probability statistic. The advantage of the Bayesian model over the well-known 2-parameter Weibull model in system residual life prediction is shown. The methodologies are illustrated using simulated data, real data obtained from the spectrometric analysis of oil samples collected from transmission units of heavy hauler trucks in the mining industry, and vibration data from a planetary gearbox machinery application.
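A minimal sketch of the belief-update and control-limit logic described above, in Python. The transition rate matrix, the Gaussian residual densities attached to the hidden states, the sampling interval, and the control limit are all illustrative assumptions rather than the thesis's fitted values; what the sketch reproduces is the structure of the scheme: at every sampling epoch the posterior probability of the warning state (conditioned on survival so far) is updated from the new residual, and preventive maintenance is triggered once it crosses a control limit.

```python
import numpy as np
from scipy.linalg import expm
from scipy.stats import norm

# Illustrative 3-state continuous-time Markov chain: hidden states 0 (healthy)
# and 1 (warning), observable absorbing failure state 2.  Q is an assumed
# transition rate matrix, not an estimated one.
Q = np.array([[-0.11, 0.10, 0.01],
              [ 0.00, -0.20, 0.20],
              [ 0.00,  0.00, 0.00]])
DELTA = 1.0                 # sampling interval between inspection epochs
P = expm(Q * DELTA)         # transition probabilities over one interval

# Assumed residual densities given the hidden state: residuals inflate
# once the system enters the warning state.
obs_pdf = [norm(loc=0.0, scale=1.0).pdf,
           norm(loc=2.0, scale=1.5).pdf]

def update_belief(pi, residual):
    """One Bayesian update of P(state = 1 | no failure so far, residuals)."""
    prior = np.array([1.0 - pi, pi])
    # Probability of reaching each unobservable state without failing,
    # weighted by the likelihood of the newly observed residual.
    joint = np.array([(prior @ P[:2, j]) * obs_pdf[j](residual) for j in (0, 1)])
    return joint[1] / joint.sum()

def maintenance_decision(residuals, control_limit=0.6):
    """Trigger preventive maintenance when the posterior warning probability
    first crosses the (illustrative) control limit."""
    pi = 0.0    # the system is assumed healthy when put into operation
    for epoch, r in enumerate(residuals, start=1):
        pi = update_belief(pi, r)
        if pi >= control_limit:
            return epoch, pi
    return None, pi

print(maintenance_decision([0.1, -0.3, 1.2, 2.5, 2.8]))
```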
332

Path Extraction of Low SNR Dim Targets from Grayscale 2-D Image Sequences

Erguven, Sait 01 September 2006 (has links) (PDF)
In this thesis, an algorithm for visually detecting and tracking very low SNR targets, i.e. dim targets, is developed. Image processing of a single frame in time cannot be used for this aim due to the closeness of the intensity spectra of the background and the target. Therefore, change detection of super pixels, a group of pixels that has sufficient statistics for likelihood ratio testing, is proposed. Super pixels that are determined as transition points are marked on a binary difference matrix and grouped by the 4-Connected Labeling method. Each label is processed to find its vector movement in the next frame by Label Destruction and Centroids Mapping techniques. Candidate centroids are put into Distribution Density Function Maximization and Maximum Histogram Size Filtering methods to find the target-related motion vectors. Noise-related mappings are eliminated by Range and Maneuver Filtering. The geometrical centroids obtained on each frame are used as the observed target path, which is put into the Optimum Decoding Based Smoothing Algorithm to smooth and estimate the real target path. The Optimum Decoding Based Smoothing Algorithm is based on quantization of the possible states, i.e. the observed target path centroids, and the Viterbi Algorithm. According to the system and observation models, metric values of all possible target paths are computed using observation and transition probabilities. The path which results in the maximum metric value at the last frame is decided as the estimated target path.
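The final smoothing stage lends itself to a short sketch. The Viterbi-style smoother below works on quantized candidate states (possible true centroids per frame); the Gaussian observation and transition metrics, the parameter values, and the way candidates are generated are assumptions made for illustration, not the thesis's exact models.

```python
import numpy as np

def viterbi_smooth(observed, candidates, sigma_obs=2.0, sigma_dyn=4.0):
    """Estimate a target path from noisy per-frame centroids.

    observed   : list of (x, y) centroids measured on each frame
    candidates : per-frame arrays of shape (n_k, 2) with the quantized states
    """
    def log_gauss(d2, sigma):          # log of an unnormalized Gaussian kernel
        return -d2 / (2.0 * sigma ** 2)

    obs = np.asarray(observed, dtype=float)
    scores = log_gauss(np.sum((candidates[0] - obs[0]) ** 2, axis=1), sigma_obs)
    backptr = []

    for k in range(1, len(obs)):
        cur, prev = candidates[k], candidates[k - 1]
        # Transition metric penalizes implausible frame-to-frame motion.
        d2_trans = np.sum((cur[:, None, :] - prev[None, :, :]) ** 2, axis=2)
        total = log_gauss(d2_trans, sigma_dyn) + scores[None, :]
        backptr.append(np.argmax(total, axis=1))
        # Observation metric rewards staying close to the measured centroid.
        d2_obs = np.sum((cur - obs[k]) ** 2, axis=1)
        scores = np.max(total, axis=1) + log_gauss(d2_obs, sigma_obs)

    # Backtrack from the best state in the last frame to recover the path.
    path = [int(np.argmax(scores))]
    for bp in reversed(backptr):
        path.append(int(bp[path[-1]]))
    path.reverse()
    return [candidates[k][i] for k, i in enumerate(path)]

# Toy usage: a noisy diagonal track with five candidate states per frame.
rng = np.random.default_rng(0)
observed = [(k + rng.normal(0, 1.5), k + rng.normal(0, 1.5)) for k in range(10)]
candidates = [np.array(o) + rng.normal(0, 2.0, size=(5, 2)) for o in observed]
print(viterbi_smooth(observed, candidates)[:3])
```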
333

David Gauthier’s Moral Contractarianism and the Problem of Secession

Etieyibo, Edwin Unknown Date
No description available.
334

Delay-sensitive Communications: Code-Rates, Strategies, and Distributed Control

Parag, Parimal 2011 December 1900 (has links)
An ever-increasing demand for instant and reliable information on modern communication networks forces codewords to operate in a non-asymptotic regime. To achieve reliability over imperfect channels in this regime, codewords that are not received correctly need to be retransmitted from the transmit buffer, aided by a fast feedback mechanism from the receiver. Large occupancy of this buffer results in longer communication delays. Therefore, codewords need to be designed carefully to reduce the transmit queue-length and thus the delay experienced in this buffer. We first study the consequences of physical layer decisions on the transmit buffer occupancy. We develop an analytical framework to relate the physical layer channel to the transmit buffer occupancy. We compute the optimal code-rate for finite-length codewords operating over a correlated channel, under certain communication service guarantees. We show that channel memory has a significant impact on this optimal code-rate. Next, we study the delay in small ad-hoc networks. In particular, we determine which rates can be supported on a small network when each flow has a certain end-to-end service guarantee. To this end, the service guarantee at each intermediate link is characterized. These results are applied to study the potential benefits of setting up a network suitable for network coding in multicast. In particular, we quantify the gains of network coding over classic routing for service-provisioned multicast communication over butterfly networks. In the wireless setting, we study the trade-off between the communication gains achieved by network coding and the cost of setting up a network enabling network coding. In particular, we show the existence of scenarios where one should not attempt to create a network suitable for coding. Insights obtained from these studies are applied to design a distributed rate control algorithm in a large network. This algorithm maximizes the sum-utility of all flows while satisfying per-flow end-to-end service guarantees. We introduce a notion of effective capacity per communication link that captures the service requirements of the flows sharing this link. Each link maintains a price and an effective capacity, and each flow maintains a rate and a dissatisfaction. Flows and links update their respective variables locally, and we show that their decisions drive the system to an optimal point. We implemented our algorithm on a network simulator and studied its convergence behavior on a few networks of practical interest.
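As a rough illustration of how purely local price and rate updates can steer a network toward an optimum, here is a classical dual-decomposition sketch for network utility maximization with logarithmic utilities. It is not the algorithm developed in the thesis: the per-flow dissatisfaction variable and the end-to-end service-guarantee constraints are not modeled, and the step size, utilities, and topology are assumptions.

```python
import numpy as np

def distributed_rate_control(routes, capacity, steps=5000, gamma=0.01):
    """Dual-decomposition iteration: each link keeps a price, each flow a rate.

    routes[f] lists the link indices used by flow f; capacity[l] plays the
    role of the (effective) capacity of link l.
    """
    price = np.ones(len(capacity))          # one price per link
    rates = np.ones(len(routes))
    for _ in range(steps):
        # Each flow reacts only to the total price along its own route:
        # maximizing log(x) - x * path_price gives x = 1 / path_price.
        for f, path in enumerate(routes):
            rates[f] = 1.0 / max(price[path].sum(), 1e-9)
        # Each link adjusts its price from local information only: the
        # aggregate rate crossing it versus its capacity.
        load = np.zeros(len(capacity))
        for f, path in enumerate(routes):
            load[path] += rates[f]
        price = np.maximum(price + gamma * (load - capacity), 0.0)
    return rates, price

# Toy topology: both flows share link 0; flow 0 additionally uses link 1.
rates, prices = distributed_rate_control(routes=[[0, 1], [0]],
                                         capacity=np.array([1.0, 1.0]))
print(np.round(rates, 3))   # both flows settle near an equal share of link 0
```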
335

Essays in financial mathematics

Lindensjö, Kristoffer January 2013 (has links)
Diss. Stockholm: Handelshögskolan, 2013. Summary together with 3 essays.
336

Détection et classification de cibles multispectrales dans l'infrarouge / Detection and classification of multispectral targets in the infrared

Maire, F. 14 February 2014 (has links) (PDF)
Protection systems for sensitive sites must make it possible to detect potential threats far enough in advance to put a defense strategy in place. With this in mind, aircraft detection and recognition methods based on multispectral infrared images must be suited to poorly resolved images and be robust to the spectral and spatial variability of the targets. In this thesis we develop statistical aircraft detection and recognition methods that satisfy these constraints. First, we specify an anomaly detection method for multispectral images that combines a spectral likelihood computation with a study of the level sets of the Mahalanobis transform of the image. This method requires no a priori information about the aircraft and allows us to identify the images containing targets. These images are then treated as realizations of a statistical model of observations that fluctuate spectrally and spatially around unknown characteristic shapes. The parameters of this model are estimated with a new unsupervised sequential learning methodology for missing-data models that we have developed. This model ultimately allows us to propose a target recognition method based on the maximum a posteriori estimator. The encouraging results, in detection as well as in classification, justify the interest of developing systems capable of acquiring multispectral images. These methods also allowed us to identify the groupings of spectral bands that are optimal for detecting and recognizing poorly resolved aircraft in the infrared.
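A hedged sketch of the Mahalanobis-based screening idea in Python: every pixel is scored by its Mahalanobis distance to the global background statistics (an RX-style detector), and high-scoring pixels are flagged as potential target pixels. The thesis additionally studies the level sets of this transform together with a spectral likelihood term, which are not reproduced here; the threshold and the toy data are illustrative.

```python
import numpy as np

def mahalanobis_anomaly_map(image, threshold=3.0):
    """image: (H, W, B) multispectral cube; returns distances and a mask."""
    h, w, b = image.shape
    pixels = image.reshape(-1, b).astype(float)
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(b)   # regularized
    inv_cov = np.linalg.inv(cov)
    centered = pixels - mean
    # Squared Mahalanobis distance of every pixel to the background model.
    d2 = np.einsum('ij,jk,ik->i', centered, inv_cov, centered)
    distance = np.sqrt(d2).reshape(h, w)
    return distance, distance > threshold

# Toy usage: random background with a small synthetic bright patch.
rng = np.random.default_rng(0)
cube = rng.normal(size=(64, 64, 4))
cube[30:33, 40:43, :] += 6.0
_, mask = mahalanobis_anomaly_map(cube)
print(int(mask.sum()), "pixels flagged")
```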
338

David Gauthier's Moral Contractarianism and the Problem of Secession

Etieyibo, Edwin 11 1900 (has links)
This thesis proposes a reading of David Gauthier's moral contractarianism (hereinafter Mb(CM)A) that demonstrates how cooperation can be rational in situations where expected utilities (EU) are stacked too high against cooperation. The dissertation critically examines Mb(CM)A and contends that it breaks down in the test of application, i.e. the problem of secession, because of the conception of rationality it appeals to. Mb(CM)A identifies rationality with utility-maximization, where utility is the measure of considered coherent preferences about outcomes. Mb(CM)A links morality to reason, reason to practical rationality, and practical rationality to interest, which it identifies with individual utility. On this view, an action (or a disposition) is rational if that action (or disposition) maximizes an agent's EU. This conception of rationality, the essay claims, is both naïve and misleading because it does not take into account an agent's considered preference for the acts that are available, in addition to the EU of those acts. Therefore, the thesis argues that Mb(CM)A's account of rationality be abandoned in favor of a decision-value/symbolic utility, or morals by decision-value agreement, conception of practical rationality. Morals by decision-value agreement (henceforth Mb(DV)A), the dissertation claims, handles serious problems like the problem of secession in ways that Mb(CM)A cannot. Mb(CM)A breaks down in the test of application because, when applied to the problem of secession, it suggests a single-tracked silver-bullet solution. Specifically, it tracks only EU-reasons and claims that insofar as cooperation does not maximize the EU of better-off agents, it is not rational for them to cooperate with or support those that are less well-off. By contrast, Mb(DV)A offers a multi-tracked framework for solutions to the problem, namely: it factors in an agent's considered preference for the acts that are available, in addition to the EU of those acts. It is the argument of the thesis that when EU is stacked too high against cooperation, it may or may not be rational for an agent to cooperate, depending on which way symbolic utility (SU) points for that agent. If SU points in the direction of secession, then it is DV-rational for an agent not to cooperate, but if SU points toward non-secession, then it is DV-rational for that agent to cooperate.
339

Preferências assimétricas em decisões de investimento no Brasil / Asymmetric preferences in investment decisions in Brazil

Martits, Luiz Augusto 20 February 2008 (has links)
The main objective of this thesis is to test the hypothesis that utility preferences which incorporate asymmetric reactions to gains and losses generate better results, when applied to the Brazilian market, than the classic Von Neumann-Morgenstern expected utility function. The asymmetric behavior can be captured through the introduction of a disappointment (or loss) aversion coefficient into the classical expected utility function, which increases the impact of losses relative to gains. This kind of adjustment is supported by recent developments in financial theory, especially those studies that try to resolve the violations of the expected utility axioms. The implications of such an adjustment are analyzed by comparing the participation of the risky asset (the stock market) in the composition of the optimum portfolio (the one that maximizes utility) generated by each type of preference: the expected utility function and the loss aversion utility function. The results are then compared with real data from two types of Brazilian investors (pension funds and households), with the aim of verifying the capacity of each utility function to replicate real investment data from these investors. The results of the tests show that it is not possible to reject the expected utility function as an adequate representative model for the aggregate behavior of Brazilian pension funds. However, the simulations indicate that this type of function should be rejected as an adequate model to replicate the real investment decisions of Brazilian individual investors (households). The behavior of this type of investor can be better replicated by applying a loss aversion utility function.
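A small sketch of the comparison performed in the thesis: choosing the share of the risky asset that maximizes expected utility, once with a standard CRRA utility and once with a loss-aversion coefficient that penalizes outcomes below a reference return. The functional form of the adjustment, the simulated return distribution, and the coefficient value are illustrative assumptions, not the thesis's calibration.

```python
import numpy as np

def optimal_risky_share(risky_returns, rf=0.005, gamma=2.0,
                        loss_aversion=None, reference=0.0):
    """Grid search over the allocation to the risky asset (0% to 100%)."""
    def utility(portfolio_return):
        wealth = 1.0 + portfolio_return
        u = (wealth ** (1.0 - gamma) - 1.0) / (1.0 - gamma)      # CRRA utility
        if loss_aversion is not None:
            # Outcomes below the reference return hurt lambda times more.
            u_ref = ((1.0 + reference) ** (1.0 - gamma) - 1.0) / (1.0 - gamma)
            gap = u - u_ref
            u = u_ref + np.where(gap < 0.0, loss_aversion * gap, gap)
        return u

    shares = np.linspace(0.0, 1.0, 101)
    expected = [utility(a * risky_returns + (1 - a) * rf).mean() for a in shares]
    return shares[int(np.argmax(expected))]

# Simulated monthly equity returns (parameters chosen for illustration only).
rng = np.random.default_rng(1)
stock = rng.normal(loc=0.012, scale=0.07, size=100_000)
print("Expected-utility share:", optimal_risky_share(stock))
print("Loss-averse share:     ", optimal_risky_share(stock, loss_aversion=2.25))
```

Under these assumptions the loss-averse investor holds a visibly smaller share of the risky asset, which is the qualitative pattern the thesis exploits when matching the two utility functions to pension-fund and household portfolios.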
340

Memory-aware algorithms : from multicores to large scale platforms / Algorithmes orientés mémoire : des processeurs multi-cœurs aux plates-formes à grande échelle

Jacquelin, Mathias 20 July 2011 (has links)
This thesis focuses on memory-aware algorithms tailored for hierarchical memory architectures, found for instance within multicore processors. We first study the matrix product on multicore architectures. We model such a processor, and derive lower bounds on the communication volume. We introduce three ad hoc algorithms, and experimentally assess their performance. We then target a more complex operation: the QR factorization of tall matrices. We revisit existing algorithms to better exploit the parallelism of multicore processors. We study the critical paths of many algorithms, prove some of them to be asymptotically optimal, and assess their performance. In the next study, we focus on scheduling streaming applications onto a heterogeneous multicore platform, the QS 22. We introduce a model of the platform and use steady-state scheduling techniques so as to maximize the throughput. We present a mixed integer programming approach that computes an optimal solution, and propose simpler heuristics. We then focus on minimizing the amount of memory required for tree-shaped workflows, targeting a classical two-level memory system where I/O operations represent transfers from one memory to the other. We propose a new exact algorithm, and show that there exist trees where postorder traversals are arbitrarily bad. We then study the problem of minimizing the I/O volume for a given memory, show that it is NP-hard, and provide a set of heuristics. Finally, we compare archival policies for BLUE WATERS. We introduce two archival policies and adapt the well-known RAIT strategy. We provide a model of the tape storage platform, and use it to assess the performance of the three policies through simulation.
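The remark about postorder traversals can be illustrated with a small sketch. Under a simplified model (an assumption, not the thesis's exact one), each task keeps its children's outputs plus its own output in memory while it executes, and the peak memory of a postorder traversal then depends on the order in which children are visited; the classical rule of visiting them by decreasing (subtree peak minus output size) minimizes that peak among postorder traversals. This only illustrates why traversal order matters and is not the thesis's optimal algorithm, which is not restricted to postorder traversals.

```python
def peak_memory(node, order="given"):
    """Peak memory of a postorder traversal of a task tree (simplified model).

    node = {'out': output_size, 'children': [subtrees...]}.  Executing a node
    requires all of its children's outputs plus its own output in memory.
    """
    children = node.get('children', [])
    if not children:
        return node['out']
    peaks = [(peak_memory(c, order), c['out']) for c in children]
    if order == "optimal":
        # Classical ordering rule: decreasing (subtree peak - output size).
        peaks.sort(key=lambda pc: pc[0] - pc[1], reverse=True)
    held = 0.0    # outputs of already-processed siblings still held in memory
    peak = 0.0
    for child_peak, child_out in peaks:
        peak = max(peak, held + child_peak)
        held += child_out
    # Finally execute the node itself on top of all children outputs.
    return max(peak, held + node['out'])

# Example where the visiting order changes the peak (15 versus 9).
tree = {'out': 1, 'children': [
    {'out': 6, 'children': [{'out': 2}]},   # large output, small subtree peak
    {'out': 1, 'children': [{'out': 8}]},   # small output, large subtree peak
]}
print(peak_memory(tree), peak_memory(tree, order="optimal"))
```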
