661

Machine learning via dynamical processes on complex networks / Aprendizado de máquina via processos dinâmicos em redes complexas

Cupertino, Thiago Henrique, 20 December 2013
Extracting useful knowledge from data sets is a key concept in modern information systems. Consequently, the need for efficient techniques to extract the desired knowledge has been growing over time. Machine learning is a research field dedicated to the development of techniques capable of enabling a machine to "learn" from data. Many techniques have been proposed so far, but there are still issues to be unveiled, especially in interdisciplinary research. In this thesis, we explore the advantages of network data representation to develop machine learning techniques based on dynamical processes on networks. The network representation unifies the structure, dynamics and functions of the system it represents, and thus is capable of capturing the spatial, topological and functional relations of the data sets under analysis. We develop network-based techniques for the three machine learning paradigms: supervised, semi-supervised and unsupervised. The random walk dynamical process is used to characterize the access of unlabeled data to data classes, configuring a new heuristic we call ease of access in the supervised paradigm. We also propose a classification technique which combines the high-level view of the data, via network topological characterization, and the low-level relations, via similarity measures, in a general framework. Still in the supervised setting, the modularity and Katz centrality network measures are applied to classify multiple observation sets, and an evolving network construction method is applied to the dimensionality reduction problem. The semi-supervised paradigm is covered by extending the ease of access heuristic to the cases in which just a few labeled data samples and many unlabeled samples are available. A semi-supervised technique based on interacting forces is also proposed, for which we provide parameter heuristics and stability analysis via a Lyapunov function. Finally, an unsupervised network-based technique uses the concepts of pinning control and consensus time from dynamical processes to derive a similarity measure used to cluster data. The data is represented by a connected and sparse network in which nodes are dynamical elements. Simulations on benchmark data sets and comparisons to well-known machine learning techniques are provided for all proposed techniques. Advantages of network data representation and dynamical processes for machine learning are highlighted in all cases.
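To make the "ease of access" idea concrete, the following is a minimal sketch of a random-walk classifier on a data network, assuming a k-nearest-neighbour graph and an absorption-probability decision rule; these are illustrative choices, not the thesis's exact formulation.

```python
# Sketch of a random-walk classifier in the spirit of the "ease of access"
# heuristic: an unlabeled point is assigned to the class whose labeled
# vertices its random walk reaches most easily. The kNN graph and the
# absorption-probability rule are illustrative assumptions.
import numpy as np

def knn_graph(X, k=5):
    """Symmetric k-nearest-neighbour adjacency matrix with binary weights."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    A = np.zeros_like(d)
    for i in range(len(X)):
        A[i, np.argsort(d[i])[:k]] = 1.0
    return np.maximum(A, A.T)  # symmetrise (union of kNN relations)

def random_walk_classify(X, y, k=5):
    """y holds class labels >= 0 for labeled points and -1 for unlabeled.
    Returns predicted labels for the unlabeled points, via the absorption
    probabilities of a walk absorbed at the labeled vertices."""
    A = knn_graph(X, k)
    P = A / A.sum(axis=1, keepdims=True)        # transition matrix
    u = np.where(y < 0)[0]                      # unlabeled (transient) states
    l = np.where(y >= 0)[0]                     # labeled (absorbing) states
    Q, R = P[np.ix_(u, u)], P[np.ix_(u, l)]
    B = np.linalg.solve(np.eye(len(u)) - Q, R)  # absorption probabilities
    classes = np.unique(y[l])
    scores = np.stack([B[:, y[l] == c].sum(axis=1) for c in classes], axis=1)
    return classes[np.argmax(scores, axis=1)]

# Toy demo: two Gaussian blobs, one labeled point in each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(3, 0.5, (20, 2))])
y = -np.ones(40, dtype=int)
y[0], y[20] = 0, 1
print(random_walk_classify(X, y, k=4))
```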
662

Random allocations: new and extended models and techniques with applications and numerics.

Kennington, Raymond William, January 2007
This thesis provides a general methodology for classifying and describing many combinatorial problems, systematising and finding theoretical expressions for quantities of interest, and investigating their feasible numerical evaluation. Unifying notation and definitions are provided. Our knowledge of random allocations is also extended. This is achieved by investigating new processes, generalising known processes, and by providing a formal structure and innovative techniques for analysing them. The random allocation models described in this thesis can be classified as occupancy urn models, in which we have a sequence of urns and throw balls into them, and investigate static, waiting-time and dynamic processes. Various structures are placed on the relationship(s) between cells, balls, and the selection of items being distributed, including varieties, batch arrivals, taboo sets and blocking sets. Both without-replacement and with-replacement sampling types are considered. Emphasis is placed on the distributions of waiting-times for one or more events to occur measured from the time a particular event occurs; this begins as an abstraction and generalisation of a model of departures of cars parked in lanes. One of several additional determinations is the platoon size distribution. Models are analysed using combinatorial analysis and Markov chains. Global attributes are measured, including maximum waits, maximum room required, moments and the clustering of completions. Various conversion formulae have been devised to reduce calculation times by several orders of magnitude. New and extended applications include Queueing in Lanes, Cake Displays, Coupon Collector's Problem, Sock-Sorting, Matching Dependent Sets (including Genetic Code Attribute Matching and the game SET), the Zig-Zag Problem, Testing for Randomness (including the Cake Display Test, which is a without-replacement test similar to the standard Empty Cell test), Waiting for Luggage at an Airport, Breakdowns in a Network, Learning Theory and Estimating the Number of Skeletons at an Archaeological Dig. Fundamental, reduction and covering theorems provide ways to reduce the number of calculations required. New combinatorial identities are discovered and a well-known one is proved in a combinatorial way for the first time. Some known results are derived from simple cases of the general models. / http://proxy.library.adelaide.edu.au/login?url= http://library.adelaide.edu.au/cgi-bin/Pwebrecon.cgi?BBID=1309598 / Thesis (Ph.D.) -- University of Adelaide, School of Mathematical Sciences, 2007
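Two of the quantities named in this abstract, the coupon collector's waiting time and the empty-cell count, are easy to check by simulation. The Monte Carlo below is only a stand-in for the exact combinatorial expressions the thesis derives.

```python
# Monte Carlo stand-in for two occupancy-urn quantities treated exactly in
# the thesis: the coupon collector's waiting time and the number of empty
# cells after n throws (with-replacement sampling, one ball at a time).
import random

def coupon_collector_time(m, rng):
    """Throws needed until each of the m urns holds at least one ball."""
    seen, throws = set(), 0
    while len(seen) < m:
        seen.add(rng.randrange(m))
        throws += 1
    return throws

def empty_cells(m, n, rng):
    """Number of the m urns still empty after n random throws."""
    return m - len({rng.randrange(m) for _ in range(n)})

rng = random.Random(0)
waits = [coupon_collector_time(10, rng) for _ in range(10000)]
print(sum(waits) / len(waits))        # expectation m*H_m, about 29.29 for m = 10
empties = [empty_cells(10, 20, rng) for _ in range(10000)]
print(sum(empties) / len(empties))    # expectation m*(1-1/m)**n, about 1.22 here
```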
663

On unequal probability sampling designs

Grafström, Anton, January 2010
The main objective in sampling is to select a sample from a population in order to estimate some unknown population parameter, usually a total or a mean of some interesting variable. When the units in the population do not have the same probability of being included in a sample, it is called unequal probability sampling. The inclusion probabilities are usually chosen to be proportional to some auxiliary variable that is known for all units in the population. When unequal probability sampling is applicable, it generally gives much better estimates than sampling with equal probabilities. This thesis consists of six papers that treat unequal probability sampling from a finite population of units. A random sample is selected according to some specified random mechanism called the sampling design. For unequal probability sampling there exist many different sampling designs. The choice of sampling design is important since it determines the properties of the estimator that is used. The main focus of this thesis is on evaluating and comparing different designs. Often it is preferable to select samples of a fixed size and hence the focus is on such designs. It is also important that a design has a simple and efficient implementation in order to be used in practice by statisticians. Some effort has been made to improve the implementation of some designs. In Paper II, two new implementations are presented for the Sampford design. In general a sampling design should also have a high level of randomization. A measure of the level of randomization is entropy. In Paper IV, eight designs are compared with respect to their entropy. A design called adjusted conditional Poisson has maximum entropy, but it is shown that several other designs are very close in terms of entropy. A specific situation called real time sampling is treated in Paper III, where a new design called correlated Poisson sampling is evaluated. In real time sampling the units pass the sampler one by one. Since each unit only passes once, the sampler must directly decide for each unit whether or not it should be sampled. The correlated Poisson design is shown to have much better properties than traditional methods such as Poisson sampling and systematic sampling.
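For orientation, the sketch below implements Poisson sampling, the simplest of the unequal probability designs named above, together with the Horvitz-Thompson estimator of a population total; the fixed-size designs the thesis favours, such as Sampford or adjusted conditional Poisson, require more machinery than is shown here.

```python
# Sketch of Poisson sampling with inclusion probabilities proportional to
# an auxiliary variable, plus a Horvitz-Thompson total estimate. The sample
# size here is random; fixed-size designs need additional machinery.
import random

def poisson_sample(aux, n_expected, rng):
    """Each unit i enters independently with pi_i proportional to aux[i];
    the realised sample size is random with mean n_expected."""
    total = sum(aux)
    pi = [min(1.0, n_expected * x / total) for x in aux]
    return [i for i, p in enumerate(pi) if rng.random() < p], pi

def horvitz_thompson(y, sample, pi):
    """Design-unbiased estimator of the population total of y."""
    return sum(y[i] / pi[i] for i in sample)

aux = [5, 1, 3, 8, 2, 6, 4, 7]          # auxiliary variable, known for all units
y = [50, 12, 28, 81, 19, 62, 41, 70]    # study variable, observed on the sample
sample, pi = poisson_sample(aux, 4, random.Random(0))
print(sample, round(horvitz_thompson(y, sample, pi), 1))  # fluctuates around 363
```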
664

Analyse und praktische Umsetzung unterschiedlicher Methoden des Randomized Branch Sampling / Analysis and practical application of different methods of Randomized Branch Sampling

Cancino Cancino, Jorge Orlando, 26 June 2003
No description available.
666

Estimação conjunta de atraso de tempo subamostral e eco de referência para sinais de ultrassom / Joint subsample time delay and echo template estimation for ultrasound signals

Antelo Junior, Ernesto Willams Molina, 20 September 2017
CAPES / In non-destructive testing (NDT) with ultrasound, the signal obtained from a real data acquisition system may be contaminated by noise, and the echoes may have sub-sample time delays. In some cases, these aspects can compromise the information an acquisition system extracts from a signal. To deal with these situations, time delay estimation (TDE) techniques and signal reconstruction techniques can be used to perform approximations and to obtain more information about the data set. TDE techniques serve a number of purposes in defectoscopy, for example accurately locating defects in parts, monitoring corrosion rates, and measuring the thickness of a given material. Data reconstruction methods have a wide range of applications, including NDT, medical imaging and telecommunications. In general, most time delay estimation techniques require a high-precision signal model; otherwise the quality of the estimate may be reduced. In this work, an alternating scheme is proposed that jointly estimates an echo template and the time delays of several echoes from noisy measurements. In addition, by reinterpreting the techniques from a probabilistic perspective, their functionality is extended through the joint application of a maximum likelihood estimator (MLE) and a maximum a posteriori (MAP) estimator. Finally, simulation results are presented to demonstrate the superiority of the proposed method over conventional methods.
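For context, one conventional baseline that joint schemes of this kind are typically compared against is cross-correlation with parabolic peak interpolation, which already resolves sub-sample delays. The sketch below implements that baseline, not the proposed joint estimator.

```python
# Hedged sketch of sub-sample time delay estimation by cross-correlation
# with parabolic peak interpolation: a conventional baseline, not the
# joint MLE/MAP estimator described in the abstract.
import numpy as np

def subsample_delay(x, y):
    """Delay of y relative to x, in (fractional) samples."""
    n = len(x) + len(y) - 1
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    r = np.fft.irfft(np.conj(X) * Y, n)   # linear cross-correlation via FFT
    r = np.roll(r, len(x) - 1)            # index i now means lag i-(len(x)-1)
    k = int(np.argmax(r))
    if 0 < k < n - 1:                     # parabolic fit around the peak
        a, b, c = r[k - 1], r[k], r[k + 1]
        k += 0.5 * (a - c) / (a - 2 * b + c)
    return k - (len(x) - 1)

t = np.arange(256)
pulse = np.exp(-((t - 60) / 6.0) ** 2) * np.sin(0.8 * t)   # synthetic echo
rng = np.random.default_rng(0)
delayed = np.roll(pulse, 25) + 0.05 * rng.standard_normal(t.size)
print(subsample_delay(pulse, delayed))    # close to 25
```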
667

Extremal combinatorics, graph limits and computational complexity

Noel, Jonathan A., January 2016
This thesis is primarily focused on problems in extremal combinatorics, although we will also consider some questions of an analytic and algorithmic nature. The d-dimensional hypercube is the graph with vertex set {0,1}^d where two vertices are adjacent if they differ in exactly one coordinate. In Chapter 2 we obtain an upper bound on the 'saturation number' of Q_m in Q_d. Specifically, we show that for m ≥ 2 fixed and d large there exists a subgraph G of Q_d of bounded average degree such that G does not contain a copy of Q_m but, for every G' such that G ⊊ G' ⊆ Q_d, the graph G' contains a copy of Q_m. This result answers a question of Johnson and Pinto and is best possible up to a factor of O(m). In Chapter 3, we show that there exists ε > 0 such that for all k and for n sufficiently large there is a collection of at most 2^((1-ε)k) subsets of [n] which does not contain a chain of length k+1 under inclusion and is maximal subject to this property. This disproves a conjecture of Gerbner, Keszegh, Lemons, Palmer, Pálvölgyi and Patkós. We also prove that there exists a constant c ∈ (0,1) such that the smallest such collection has cardinality 2^((1+o(1))ck) for all k. In Chapter 4, we obtain an exact expression for the 'weak saturation number' of Q_m in Q_d. That is, we determine the minimum number of edges in a spanning subgraph G of Q_d such that the edges of E(Q_d)\E(G) can be added to G, one edge at a time, such that each new edge completes a copy of Q_m. This answers another question of Johnson and Pinto. We also obtain a more general result for the weak saturation of 'axis-aligned' copies of a multidimensional grid in a larger grid. In the r-neighbour bootstrap process, one begins with a set A_0 of 'infected' vertices in a graph G and, at each step, a 'healthy' vertex becomes infected if it has at least r infected neighbours. If every vertex of G is eventually infected, then we say that A_0 percolates. In Chapter 5, we apply ideas from weak saturation to prove that, for fixed r ≥ 2, every percolating set in Q_d has cardinality at least (1+o(1))(d choose r-1)/r. This confirms a conjecture of Balogh and Bollobás and is asymptotically best possible. In addition, we determine the minimum cardinality exactly in the case r=3 (the minimum cardinality in the case r=2 was already known). In Chapter 6, we provide a framework for proving lower bounds on the number of comparable pairs in a subset S of a partially ordered set (poset) of prescribed size. We apply this framework to obtain an explicit bound of this type for the poset V(q,n) consisting of all subspaces of F_q^n ordered by inclusion, which is best possible when S is not too large. In Chapter 7, we apply the result from Chapter 6, along with the recently developed 'container method', to obtain an upper bound on the number of antichains in V(q,n) and a bound on the size of the largest antichain in a p-random subset of V(q,n) which holds with high probability for p in a certain range.
In Chapter 8, we construct a 'finitely forcible graphon' W for which there exists a sequence (ε_i)_{i=1}^∞ tending to zero such that, for all i ≥ 1, every weak ε_i-regular partition of W has at least exp(ε_i^{-2} / 2^{5 log* ε_i^{-2}}) parts. This result shows that the structure of a finitely forcible graphon can be much more complex than was anticipated in a paper of Lovász and Szegedy. For positive integers p, q with p/q ≥ 2, a circular (p,q)-colouring of a graph G is a mapping V(G) → Z_p such that any two adjacent vertices are mapped to elements of Z_p at distance at least q from one another. The reconfiguration problem for circular colourings asks, given two (p,q)-colourings f and g of G, is it possible to transform f into g by recolouring one vertex at a time so that every intermediate mapping is a (p,q)-colouring? In Chapter 9, we show that this question can be answered in polynomial time for 2 ≤ p/q < 4 and is PSPACE-complete for p/q ≥ 4.
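The r-neighbour bootstrap process of Chapter 5 is simple to state operationally, and a direct simulation on small hypercubes is a useful companion to the extremal results; the sketch below uses the weight-one vertices as an illustrative, not minimal, percolating set.

```python
# Operational sketch of the r-neighbour bootstrap process on the hypercube
# Q_d: vertices are d-bit integers, neighbours differ in one bit, and a
# healthy vertex becomes infected once it has >= r infected neighbours.

def percolates(d, r, initially_infected):
    """Return True if the initial set eventually infects all of Q_d."""
    infected = set(initially_infected)
    changed = True
    while changed:
        changed = False
        for v in range(2 ** d):
            if v in infected:
                continue
            if sum((v ^ (1 << i)) in infected for i in range(d)) >= r:
                infected.add(v)
                changed = True
    return len(infected) == 2 ** d

d = 5
weight_one = [1 << i for i in range(d)]   # the d vertices with a single 1-bit
print(percolates(d, 2, weight_one))       # True: weight-1 vertices percolate for r=2
```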
668

Climate variability and climate change in water resources management of the Zambezi River basin

Tirivarombo, Sithabile, January 2013
Water is recognised as a key driver for social and economic development in the Zambezi basin. The basin is riparian to eight southern African countries and the transboundary nature of the basin’s water resources can be viewed as an agent of cooperation between the basin countries. It is possible, however, that the same water resource can lead to conflicts between water users. The southern African Water Vision for ‘equitable and sustainable utilisation of water for social, environmental justice and economic benefits for the present and future generations’ calls for an integrated and efficient management of water resources within the basin. Ensuring water and food security in the Zambezi basin is, however, faced with challenges due to high variability in climate and the available water resources. Water resources are under continuous threat from pollution, increased population growth, development and urbanisation as well as global climate change. These factors increase the demand for freshwater resources and have resulted in water being one of the major driving forces for development. The basin is also vulnerable due to lack of adequate financial resources and appropriate water resources infrastructure to enable viable, equitable and sustainable distribution of the water resources. This is in addition to the fact that the basin’s economic mainstay and social well-being are largely dependent on rainfed agriculture. There is also competition among the different water users and this has the potential to generate conflicts, which further hinder the development of water resources in the basin. This thesis has focused on the Zambezi River basin, emphasising climate variability and climate change. It is now considered common knowledge that the global climate is changing and that many of the impacts will be felt through water resources. If these predictions are correct then the Zambezi basin is most likely to suffer under such impacts since its economic mainstay is largely determined by the availability of rainfall. It is the belief of this study that in order to ascertain the impacts of climate change, there should be a basis against which this change is evaluated. If we do not know the historical patterns of variability, it may be difficult to predict changes in the future climate and in the hydrological resources, and it will certainly be difficult to develop appropriate management strategies. Reliable quantitative estimates of water availability are a prerequisite for successful water resource plans. However, such initiatives have been hindered by a paucity of data, especially in a basin where gauging networks are inadequate and some of them have deteriorated. This is further compounded by shortages in resources, both human and financial, to ensure adequate monitoring. To address the data problems, this study largely relied on global data sets, and the CRU TS2.1 rainfall grids were used for a large part of this study. The study starts by assessing the historical variability of rainfall and streamflow in the Zambezi basin and the results are used to inform the prediction of change in the future. Various methods of assessing historical trends were employed and regional drought indices were generated and evaluated against the historical rainfall trends. The study clearly demonstrates that the basin has a high degree of temporal and spatial variability in rainfall and streamflow at inter-annual and multi-decadal scales.
The Standardised Precipitation Index, a rainfall-based drought index, is used to assess historical drought events in the basin, and it is shown that most of the droughts that have occurred were influenced by climatic and hydrological variability. It is concluded, through the evaluation of agricultural maize yields, that the basin’s food security is mostly constrained by the availability of rainfall. Comparing the viability of using a rainfall-based index against a soil-moisture-based index as an agricultural drought indicator, this study concluded that the soil-moisture-based index is the better indicator, since all of the water balance components are considered in its generation. Such an index represents the actual amount of water available to the plant, unlike purely rainfall-based indices, which do not account for the other components of the water budget that cause water losses. A number of challenges were, however, faced in assessing variability and historical drought conditions, mainly because most parts of the Zambezi basin are ungauged and the available data are sparse, short and discontinuous. Hydrological modelling is frequently used to bridge the data gap and to facilitate the quantification of a basin’s hydrology for both gauged and ungauged catchments. The trend has been to use various methods of regionalisation to transfer information from gauged basins, or from basins with adequate physical basin data, to ungauged basins. All this is done to ensure that water resources are accounted for and that the future can be well planned. A number of approaches leading to the evaluation of the basin’s hydrological response to future climate change scenarios are taken. The Pitman rainfall-runoff model has enjoyed wide use as a water resources estimation tool in southern Africa. The model has been calibrated for the Zambezi basin, but it should be acknowledged that any hydrological modelling process is characterised by many uncertainties arising from limitations in input data and inherent model structural uncertainty. The calibration process is thus carried out in a manner that embraces some of these uncertainties. Initial ranges of parameter values (maximum and minimum) that incorporate the possible parameter uncertainties are assigned in relation to physical basin properties. These parameter sets are used as input to the uncertainty version of the model to generate a behavioural parameter space, which is then further modified through manual calibration. The use of parameter ranges initially guided by the basin’s physical properties generates streamflows that adequately represent the historically observed amounts. This study concludes that the uncertainty framework and the Pitman model perform quite well in the Zambezi basin. Based on assumptions of an intensifying hydrological cycle, climate change is frequently expected to have negative impacts on water resources. However, it is important that basin-scale assessments are undertaken so that appropriate future management strategies can be developed. To assess the likely changes in the Zambezi basin, the calibrated Pitman model was forced with downscaled and bias-corrected GCM data. Three GCMs were used for this study, namely ECHAM, GFDL and IPSL. The general observation made in this study is that the near-future (2046-2065) conditions of the Zambezi basin are expected to remain within the ranges of historically observed variability.
The differences between the predictions of the three GCMs are an indication of the uncertainties in the future, and it has not been possible to draw any firm conclusions about the direction of change. It is therefore recommended that future water resources management strategies account for historical patterns of variability as well as for increased uncertainty. Any management strategy that can satisfactorily deal with the large variability evident in the historical data should be robust enough to accommodate the near-future patterns of water availability predicted by this study. However, the uncertainties in these predictions suggest that improved monitoring systems are required to provide additional data against which future model outputs can be assessed.
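The Standardised Precipitation Index used in the drought analysis follows a standard recipe, sketched below under simplifying assumptions (no zero-rainfall correction, no multi-month aggregation, synthetic data).

```python
# Sketch of the Standardised Precipitation Index recipe: fit a gamma
# distribution to aggregated rainfall totals, then map each total through
# the fitted CDF to a standard normal quantile.
import numpy as np
from scipy import stats

def spi(rain):
    """SPI values for a 1-D array of positive rainfall totals."""
    shape, loc, scale = stats.gamma.fit(rain, floc=0)   # location fixed at 0
    cdf = stats.gamma.cdf(rain, shape, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)   # values near -1 suggest drought, near -2 severe

rng = np.random.default_rng(1)
rain = rng.gamma(shape=2.0, scale=40.0, size=360)   # synthetic monthly totals
print(spi(rain)[:6].round(2))
```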
669

La chance en droit administratif / Chance in administrative law

Giraud, Camille, 21 November 2017
Chance is a heterogeneous notion in administrative law. The manifestations of its different meanings are abundant, in the sense that the randomness, the probabilities and the risk that chance refers to are rich in concrete applications. Their effects are equally, and unsurprisingly, very varied, which means that the permeability of administrative law towards them is quite singular, depending on whether they are considered beneficial or adverse. Probabilities illustrate how chance can be a useful tool for the administrative judge when pronouncing a judgment, whereas randomness and risk both refer to the occurrence of unpredictable events or phenomena that one seeks to avoid. In spite of all the subtleties chance displays in administrative law, it nevertheless proves to be a notion whose unity becomes apparent once its function is studied. Chance is thus a functional notion, destined to be used more and more often by the administrative judge to improve the compensation awarded to citizens, from both a qualitative and a quantitative point of view.
670

Modelagem e avaliação da extensão da vida útil de plantas industriais / Modelling and evaluation of the useful life extension of industrial plants

José Alberto Avelino da Silva, 30 May 2008
During the useful life of an industrial plant, failure occurrences follow an exponential distribution; as the plant ages, however, the number of failures increases. The failure probability is an indicator of when a maintenance stop should be made. A statistical method, based on non-Markovian theory, is developed for assessing the failure probability as a function of operating time, which results in a system of hyperbolic partial differential equations. Two maintenance conditions are addressed: in the first, the old parts are reused after repair, a condition called "as good as old"; in the second, the old parts are replaced by brand-new ones, a condition called "as good as new". The non-Markovian system with a variable source term is solved using the Lax-Wendroff and fractional-step numerical schemes. Owing to the smooth behaviour of the solution, the two methods reach the same result, with a difference of less than 10^-3. In the case studied, the main conclusion is that the collapse of the system depends essentially on the initial state of the Markov chain, with the remaining states having little influence on the overall failure probability of the system.
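As a worked illustration of the numerics, the sketch below applies Lax-Wendroff with a source term to a single model advection-decay equation; it shows the scheme only, not the thesis's coupled non-Markovian system.

```python
# Lax-Wendroff step with a source term for the model equation
# u_t + a*u_x = -lam*u, the simplest equation of the hyperbolic,
# age-structured kind that underlies such non-Markovian reliability models.
import numpy as np

def lax_wendroff(u, a, lam, dx, dt, steps):
    """March u forward on a periodic grid; the decay source is applied
    exactly in a split step after each Lax-Wendroff advection step."""
    c = a * dt / dx                      # Courant number, must satisfy |c| <= 1
    for _ in range(steps):
        up, um = np.roll(u, -1), np.roll(u, 1)          # u_{i+1}, u_{i-1}
        u = u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)
        u = u * np.exp(-lam * dt)
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.3) ** 2)     # initial pulse
u = lax_wendroff(u0, a=1.0, lam=0.5, dx=x[1] - x[0], dt=0.004, steps=100)
# By t = 0.4 the pulse has advected to x ~ 0.7 and decayed by roughly e^-0.2.
print(round(float(u.max()), 3))
```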
