  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
671

Modelling and evaluation of industrial plants useful life extension

José Alberto Avelino da Silva 30 May 2008 (has links)
During the useful life of an industrial plant, failure occurrences follow an exponential distribution. As the plant ages, however, the number of failures increases, and the failure probability is an indicator of when the plant should be stopped for maintenance. A statistical method, based on non-Markovian theory, is developed for determining the failure probability as a function of operating time; it leads to a system of hyperbolic partial differential equations with a variable source term. Two maintenance conditions are addressed: in the first, the old parts are kept after repair (the "as good as old" condition); in the second, the old parts are replaced by brand-new ones (the "as good as new" condition). The system of equations is solved with the fractional-step and Lax-Wendroff-with-source-term numerical schemes. Owing to the smooth behaviour of the solution, the two methods reach approximately the same result, with a discrepancy below 10^-3. In the case studied, the main conclusion is that system collapse depends essentially on the initial state of the Markov chain, the other states having little influence on the overall failure probability of the system.
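A minimal sketch of a Lax-Wendroff update with an explicit source term, of the kind the abstract mentions, for a single scalar transport equation u_t + a u_x = s(u) on a periodic grid. This is an illustrative reconstruction, not the thesis's code; the function name, the periodic boundary and the explicit treatment of the source are assumptions.

```python
def lax_wendroff_step(u, a, dx, dt, source=lambda v: 0.0):
    """One Lax-Wendroff step for u_t + a u_x = s(u), periodic grid.

    Illustrative sketch only: the thesis applies such schemes to a
    *system* of transport equations for state probabilities."""
    c = a * dt / dx          # Courant number, must satisfy |c| <= 1
    n = len(u)
    new = [0.0] * n
    for i in range(n):
        up, um = u[(i + 1) % n], u[(i - 1) % n]
        new[i] = (u[i]
                  - 0.5 * c * (up - um)                    # advection
                  + 0.5 * c * c * (up - 2.0 * u[i] + um)   # LW correction
                  + dt * source(u[i]))                     # explicit source
    return new
```

With a zero source the scheme is conservative on a periodic grid (the total "probability mass" is preserved), which is a quick sanity check for an implementation like this.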
672

Machine learning via dynamical processes on complex networks

Thiago Henrique Cupertino 20 December 2013 (has links)
Extracting useful knowledge from data sets is a key concern in modern information systems, and the need for efficient techniques to extract it has been growing over time. Machine learning is a research field dedicated to developing techniques that enable a machine to "learn" from data. Many techniques have been proposed, but open issues remain, especially in interdisciplinary research. In this thesis, we explore the advantages of network data representation to develop machine learning techniques based on dynamical processes on networks. The network representation unifies the structure, dynamics and function of the system it represents, and is thus capable of capturing the spatial, topological and functional relations of the data sets under analysis. We develop network-based techniques for the three machine learning paradigms: supervised, semi-supervised and unsupervised. In the supervised paradigm, the random-walk dynamical process is used to characterize the access of unlabeled data to data classes, yielding a new heuristic we call ease of access. We also propose a classification technique that combines, in a general framework, a high-level view of the data, via network topological characterization, with low-level relations, via similarity measures. Still in the supervised setting, the modularity and Katz centrality network measures are applied to classify multiple-observation sets, and an evolving network-construction method is applied to the dimensionality-reduction problem. The semi-supervised paradigm is covered by extending the ease-of-access heuristic to cases in which only a few labeled samples and many unlabeled samples are available. A semi-supervised technique based on interacting forces is also proposed, for which we provide parameter heuristics and a stability analysis via a Lyapunov function. Finally, an unsupervised network-based technique uses the concepts of pinning control and consensus time from dynamical processes to derive a similarity measure for clustering; the data are represented by a connected, sparse network whose nodes are dynamical elements. Simulations on benchmark data sets and comparisons with well-known machine learning techniques are provided for all proposed techniques, and the advantages of network data representation and dynamical processes for machine learning are highlighted in all cases.
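The "ease of access" heuristic itself is not reproduced here, but the underlying idea, classifying an unlabeled point by how readily a random walker starting from it reaches each class on a similarity graph, can be sketched as a Monte Carlo absorption vote. Everything below (function name, Gaussian similarity, walk-length cap) is a hypothetical illustration, not the thesis's algorithm.

```python
import math
import random

def rw_classify(query, data, labels, sigma=1.0, walks=300, max_steps=100, rng=None):
    """Monte Carlo random-walk vote on a similarity graph (illustrative).

    data:   list of points (tuples of floats)
    labels: parallel list; a class label, or None for unlabeled points
    The walker starts at `query`; the first labeled point it reaches
    casts one vote, and the majority class wins."""
    rng = rng or random.Random(0)

    def sim(p, q):
        d2 = sum((a - b) ** 2 for a, b in zip(p, q))
        return math.exp(-d2 / (2.0 * sigma ** 2))

    votes = {}
    for _ in range(walks):
        pos = query
        for _ in range(max_steps):
            w = [sim(pos, p) for p in data]       # transition weights
            r, acc = rng.random() * sum(w), 0.0
            j = len(w) - 1
            for k, wk in enumerate(w):
                acc += wk
                if acc >= r:
                    j = k
                    break
            pos = data[j]
            if labels[j] is not None:             # absorbed at labeled point
                votes[labels[j]] = votes.get(labels[j], 0) + 1
                break
    return max(votes, key=votes.get)
```

A walker started between two tight clusters almost always gets absorbed in the nearer one, so the vote recovers the intuitive class assignment.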
673

On some damage processes in risk and epidemic theories

Gathy, Maude 14 September 2010 (has links)
This thesis deals with damage processes in risk theory and biomathematics.

In risk theory, the damage process studied is the claims borne by an insurance company.

The first chapter examines the Markov-Polya distribution as a candidate law for the number of claims and establishes links with the Katz/Panjer family of distributions. We construct the Markov-Polya law from a claim-occurrence model and show that it satisfies an elegant recurrence, from which an efficient algorithm for the corresponding compound distribution is derived. We obtain the Katz/Panjer family as a limiting family of the Markov-Polya law.

The second chapter deals with the so-called "Lagrangian Katz" family, which extends the Katz/Panjer family. We motivate its use as a claim-number distribution through a first-passage problem, characterize all the distributions it contains, and derive an efficient algorithm for the compound distribution. We also examine its index of dispersion and its asymptotic behaviour.

In the third chapter, we study the finite-horizon ruin probability in a discrete model with positive interest rates. We derive an algorithm as well as several bounds for this probability; one particular bound allows us to construct two risk measures. We also examine proportional reinsurance with retention levels that are equal or different over successive periods.

In the epidemic setting, the damage studied is the spread of a disease of SIR type (susceptible, infected, removed). The way an infected individual contaminates the susceptibles is described by particular survival distributions, from which we derive the distribution of the total number of people infected at the end of the epidemic. We examine in detail the so-called Markov-Polya and hypergeometric epidemics, and then approximate this law by a branching process. We also study a similar damage process in reliability theory, where the deterioration consists of cascading failures propagating through a system of interconnected components. / Doctorat en Sciences
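The Katz/Panjer family mentioned above satisfies the recurrence p_k = (a + b/k) p_{k-1}, which yields the classical Panjer recursion for the compound claim distribution. Below is a minimal sketch of that recursion; the function name and argument conventions are assumptions, and the thesis's own Markov-Polya algorithm is a different, related recursion not reproduced here.

```python
import math

def panjer(a, b, p0, sev, smax):
    """Panjer recursion for the compound distribution when the claim
    count lies in the Katz/Panjer (a, b, 0) family: p_k = (a + b/k) p_{k-1}.

    sev[j] = P(single claim = j) for j = 1..len(sev)-1, with sev[0] = 0.
    Returns g[0..smax] with g[s] = P(total claim amount = s)."""
    g = [p0] + [0.0] * smax
    for s in range(1, smax + 1):
        acc = 0.0
        for j in range(1, min(s, len(sev) - 1) + 1):
            acc += (a + b * j / s) * sev[j] * g[s - j]
        g[s] = acc / (1.0 - a * sev[0])   # sev[0] = 0 here; kept for generality
    return g
```

A quick check: with a = 0, b = lambda the claim count is Poisson(lambda), and a severity degenerate at 1 makes the compound law Poisson itself.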
674

Heavy-ion fusion reactions through quantum tunneling: collisions between calcium and nickel isotopes

Bourgin, Dominique 26 September 2016 (has links)
Heavy-ion fusion-evaporation and nucleon-transfer reactions at energies close to the Coulomb barrier play an essential role in the study of nuclear structure and reaction dynamics. In the framework of this PhD thesis, two fusion-evaporation and nucleon-transfer experiments were performed at the Laboratori Nazionali di Legnaro in Italy: 40Ca+58Ni and 40Ca+64Ni. In the first experiment, fusion cross sections for 40Ca+58,64Ni were measured from above to below the Coulomb barrier and interpreted by means of coupled-channels and time-dependent Hartree-Fock (TDHF) calculations. The results show the importance of the one-phonon octupole excitation in the 40Ca nucleus and of the one-phonon quadrupole excitations in the 58Ni and 64Ni nuclei, as well as the importance of the nucleon-transfer channels in the neutron-rich system 40Ca+64Ni. In a complementary experiment, nucleon-transfer probabilities for 40Ca+58,64Ni were measured in the same energy region and interpreted with TDHF+BCS calculations; the results confirm the importance of the transfer channels in 40Ca+64Ni. A simultaneous description of the nucleon-transfer probabilities and the fusion cross sections was achieved for both reactions using a coupled-channels approach.
675

Setting norms of consumption in the procurement operations department

Millerová, Denisa January 2015 (has links)
The diploma thesis focuses on the problem of setting the number of employees in a selected department (team) of a specific company. Given the severity of the impact of this decision, the thesis chooses a sophisticated tool: norms of work consumption. To simplify their use, a model is created to specify the number of employees. The basis for the construction of the model is summarized in the theoretical-methodological part. The practical part presents the company, the department and the team, and defines the processes performed. It then calculates the individual elements needed to build the required model, namely the time norms of the activities, the probabilities of occurrence of the individual activities, and the usable time fund of an employee. Finally, the model is tested on real data from the last period and recommendations are presented.
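The model described combines three ingredients: time norms per activity, occurrence frequencies of the activities, and the usable time fund of one employee. Under the simplest reading, the required headcount is the total normed workload divided by the time fund per employee; the following one-function sketch is a hedged illustration of that reading (function name, data layout and units are assumptions, not the thesis's model).

```python
def required_headcount(activities, usable_minutes_per_period):
    """activities: list of (time_norm_minutes, expected_occurrences_per_period).
    Staffing need = total normed workload / usable time fund per employee."""
    workload = sum(t * n for t, n in activities)
    return workload / usable_minutes_per_period
```

In practice the expected occurrences would themselves come from the estimated probabilities of each activity, and the result would be rounded up to whole employees.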
676

Adaptive surrogate models for reliability analysis and reliability-based design optimization

Dubourg, Vincent 05 December 2011 (has links)
This thesis is a contribution to the solution of the reliability-based design optimization problem. This probabilistic design approach accounts for the uncertainty attached to the system of interest in order to provide optimal and safe solutions. The safety level is quantified as a probability of failure, and the optimization problem then consists in ensuring that this failure probability remains below a threshold specified by the stakeholders. Solving this problem requires a large number of calls to the limit-state function underlying the reliability analysis, so it becomes cumbersome when the limit-state function involves an expensive-to-evaluate numerical model (e.g. a finite element model). In this context, this manuscript proposes a surrogate-based strategy in which the limit-state function is progressively replaced by a Kriging meta-model. Special attention is given to quantifying, reducing and eventually eliminating the error introduced by using this meta-model in place of the original model. The proposed methodology is applied to the design of geometrically imperfect shells prone to buckling.
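The Kriging surrogate at the heart of this strategy can be illustrated with a minimal zero-mean (simple) Kriging interpolator in one dimension. This is a hedged sketch, not the thesis's implementation: the adaptive refinement loop, the trend function and the prediction variance are omitted, and the Gaussian kernel and naive linear solver are assumptions made for the example.

```python
import math

def solve(A, b):
    """Naive Gaussian elimination with partial pivoting (for the example)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def kriging_predict(xs, ys, xq, ell=1.0):
    """Zero-mean Kriging prediction at xq from samples (xs, ys)."""
    k = lambda u, v: math.exp(-((u - v) ** 2) / (2.0 * ell ** 2))
    K = [[k(a, b) for b in xs] for a in xs]
    alpha = solve(K, ys)                     # weights alpha = K^{-1} y
    return sum(ai * k(xq, xi) for ai, xi in zip(alpha, xs))
```

The interpolation property (the surrogate reproduces the model exactly at the sampled points) is what makes Kriging attractive for progressively replacing an expensive limit-state function.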
677

Satisficing solutions for multiobjective stochastic linear programming problems

Adeyefa, Segun Adeyemi 06 1900 (has links)
Multiobjective stochastic linear programming is a relevant framework: many real-life problems, from portfolio selection to water-resource management, can be cast into it. Objectivity is severely limited in this field by the simultaneous presence of randomness and conflicting goals; in such a turbulent environment the mainstay of rational choice does not hold, and it is virtually impossible to provide a truly scientific foundation for an optimal decision. In this thesis, we resort to the bounded-rationality and chance-constrained principles to define satisficing solutions for multiobjective stochastic linear programming problems. These solutions are then characterized for the normal, exponential, chi-squared and gamma distributions. Ways of singling out such solutions are discussed, and numerical examples are provided for illustration. An extension to the case of fuzzy random coefficients is also carried out. / Decision Sciences
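The chance-constrained principle invoked above has a classical deterministic equivalent in the normal case: P(xi . x <= b) >= alpha becomes mu . x + z_alpha * sqrt(x' Sigma x) <= b, where z_alpha is the standard normal quantile. A minimal sketch of that check, with all names assumed for illustration:

```python
import math
from statistics import NormalDist

def chance_constraint_ok(x, mu, cov, b, alpha=0.95):
    """Deterministic equivalent of P(xi . x <= b) >= alpha
    for a normal coefficient vector xi ~ N(mu, cov)."""
    n = len(x)
    mean = sum(mu[i] * x[i] for i in range(n))
    var = sum(x[i] * cov[i][j] * x[j] for i in range(n) for j in range(n))
    z = NormalDist().inv_cdf(alpha)          # standard normal quantile
    return mean + z * math.sqrt(var) <= b
```

Raising alpha tightens the constraint: the same decision vector x can be feasible at 95% confidence yet infeasible at 99%.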
678

Long term seasonal and annual changes in rainfall duration and magnitude in Luvuvhu River Catchment, South Africa

Mashinye, Mosedi Deseree 18 May 2018 (has links)
MESHWR / Department of Hydrology and Water Resources / This study investigated long-term seasonal and annual changes in rainfall duration and magnitude in the Luvuvhu River Catchment (LRC). Rainfall in this catchment is highly variable and characterised by extreme events which shift the runoff process, affect the timing and magnitude of floods and droughts, and alter groundwater recharge. The study was motivated by year-to-year changes in rainfall, which affect the availability of water resources. Long-term total seasonal rainfall, total annual rainfall and the total number of seasonal rainy days were computed and used to identify trends over a 51-year period (1965-2015), using the Mann-Kendall (MK), linear regression (LR) and quantile regression methods. All three methods indicated a dominance of decreasing trends in annual rainfall, seasonal rainfall and seasonal rainfall duration, although these were not statistically significant. However, statistically significant decreasing trends in seasonal rainfall duration were identified by MK and LR at the Matiwa, Palmaryville, Levubu and Entabeni Bos stations only, and quantile regression identified statistically significant decreasing trends at the 0.2, 0.5 and 0.7 quantiles only, at Palmaryville, Levubu and Entabeni Bos, respectively. Stations with non-significant decreasing trends in annual and seasonal rainfall had magnitudes of change ranging from 0.12 to 12.31 mm and 0.54 to 6.72 mm, respectively; stations with non-significant increasing trends had magnitudes of change ranging from 1.51 to 6.78 mm and 2.05 to 6.51 mm, respectively. The study recommends further work using other approaches to determining rainfall duration, in order to improve, update and compare the present results. Continuous monitoring and the installation of rain gauges in the lower reaches of the catchment are recommended, so that the findings give a complete picture of the whole catchment and rainfall gaps at the stations are minimized. Water resources should be used sustainably to avoid the risk of a water crisis for future generations. / NRF
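The Mann-Kendall test used in the study can be sketched in its basic no-ties form. The tie correction to Var(S) is deliberately omitted here, which is an assumption; real rainfall series usually need it, and the function name is ours.

```python
import math

def mann_kendall(x):
    """Basic Mann-Kendall trend test (no tie correction).

    Returns (S, Z): S is the sum of pairwise signs, Z the
    continuity-corrected standard normal test statistic."""
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0   # variance under H0, no ties
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z
```

A |Z| above 1.96 rejects the no-trend hypothesis at the 5% level, which is the kind of significance criterion the abstract refers to.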
679

Medium-range probabilistic river streamflow predictions

Roulin, Emmannuel 30 June 2014 (has links)
River streamflow forecasting is traditionally based on real-time measurements of rainfall over catchments and of discharge at the outlet and upstream. These data are processed in mathematical models of varying complexity and yield accurate predictions for short lead times. To extend the forecast horizon to a few days, so that early warnings can be issued, weather forecasts must be taken into account. The latter, however, are sensitive to the initial conditions, so for appropriate risk management the forecasts should be considered in probabilistic terms. Currently, ensemble predictions made with a numerical weather prediction model run from perturbed initial conditions make it possible to assess this uncertainty.

The research began by analysing medium-range meteorological predictions (up to 10-15 days) and their use in hydrological forecasting. Precipitation forecasts from the ensemble prediction system of the European Centre for Medium-Range Weather Forecasts (ECMWF) were used, and a semi-distributed hydrological model transformed them into ensemble streamflow predictions. The performance of these forecasts was analysed in probabilistic terms, and a simple decision model was used to compare the relative economic value of hydrological ensemble predictions with that of some deterministic alternatives.

Numerical weather prediction models are imperfect, so ensemble forecasts are affected by errors that introduce biases and make the probabilities derived from the ensembles unreliable. By comparing these predictions with the corresponding observations, a statistical model for correcting the forecasts, known as post-processing, was adapted and shown to improve the performance of probabilistic precipitation forecasts. This approach is based on retrospective forecasts made by the ECMWF for the past twenty years, which provide a sufficient statistical sample.

Besides the errors related to the meteorological forcing, hydrological forecasts also display errors related to the initial conditions and to modelling (errors in the structure of the hydrological model and in the parameter values). The last stage of the research was therefore to investigate, using simple models, the impact of these different sources of error on the quality of hydrological predictions, and to explore the possibility of using hydrological reforecasts, themselves based on retrospective precipitation forecasts, for post-processing. / Doctorat en Sciences
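One standard way to "analyse the performance of these forecasts in probabilistic terms" is the Brier score for a threshold-exceedance event, with the event probability estimated as the fraction of ensemble members above the threshold. The sketch below is illustrative only; the function name and data layout are assumptions, and a study like this one would use several complementary scores.

```python
def brier_score(ens_forecasts, observations, threshold):
    """Mean squared difference between forecast exceedance probability
    and the observed binary outcome (0 = perfect, 1 = worst).

    ens_forecasts: list of ensembles, one list of member values per date
    observations:  list of observed values, parallel to ens_forecasts"""
    total = 0.0
    for members, obs in zip(ens_forecasts, observations):
        p = sum(m > threshold for m in members) / len(members)  # forecast prob.
        o = 1.0 if obs > threshold else 0.0                     # observed event
        total += (p - o) ** 2
    return total / len(observations)
```

Comparing the score of the raw ensemble with that of a post-processed ensemble quantifies the improvement the abstract describes.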
680

Poisson hyperplane tessellation: Asymptotic probabilities of the zero and typical cells

Bonnet, Gilles 17 February 2017 (has links)
We consider the distributions of the zero cell and the typical cell of a (homogeneous) Poisson hyperplane tessellation. We give a direct proof, adapted to our setting, of the well-known Complementary Theorem. We provide sharp bounds for the tail distribution of the number of facets, and improve existing bounds for the tail distribution of size measurements of the cells, such as the volume or the mean width. We improve known results on the generalised D.G. Kendall problem, which asks about the shape of large cells, and show that cells with many facets cannot be close to a lower-dimensional convex body. We tackle the much less studied problem of the number of facets and the shape of small cells. To obtain the results above we also develop some purely geometric tools; in particular, we give new results concerning the polytopal approximation of an elongated convex body.
