261

Analyse et modélisation du trafic internet

Chabchoub, Yousra 12 October 2009 (has links) (PDF)
This thesis falls within the field of Internet traffic analysis and modeling at the flow level. Flow-level information (especially about large flows) is very useful in several areas such as traffic engineering, network supervision, and security. Extracting flow statistics online is a difficult task because of the very high bit rate of today's traffic. In this thesis we study two classes of algorithms that process Internet traffic online. In the first part, we design a new algorithm, based on Bloom filters, for the online identification of large flows. The strength of this algorithm is its automatic adaptation to traffic variations. One interesting application is the online detection of denial-of-service attacks, so we developed a version of the algorithm that takes the specific features of such attacks into account. Online experiments show that this new method can identify nearly all sources of abnormal traffic with a very short delay. We also studied the performance of the online large-flow identification algorithm: using a simplified model, we approximated the error this algorithm makes when estimating the number of large flows, which in particular allowed us to evaluate the impact of the algorithm's parameters on its performance. The algorithms presented in the first part operate on the full traffic, which is not always possible since in some cases only sampled traffic is available. The second part of the thesis is therefore devoted to sampling and to algorithms that infer the characteristics of the original traffic. First, using a Poisson approximation result, we showed that deterministic and probabilistic sampling yield equivalent results as far as the flow composition of the sampled traffic is concerned. We then designed an algorithm that estimates, through an asymptotic computation on the sampled traffic, the number of flows in the real traffic and the distribution of their sizes over a short time interval, under the a priori assumption that this distribution follows a Pareto law. This assumption was validated on traffic traces of various kinds.
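A minimal sketch (in Python) of the kind of counting-Bloom-filter approach the abstract describes for flagging large flows online. The class name, parameters, and the fixed periodic refresh are illustrative assumptions, since the thesis's algorithm adapts its refresh mechanism to traffic variations.

```python
import hashlib

class ElephantFlowDetector:
    """Counting Bloom filter that flags flows whose packet count crosses a threshold.

    Illustrative sketch only: the refresh policy here is a plain reset, whereas the
    thesis describes an adaptive mechanism tied to traffic variations.
    """

    def __init__(self, size=2**20, num_hashes=3, threshold=20):
        self.size = size
        self.num_hashes = num_hashes
        self.threshold = threshold            # packets before a flow is declared "large"
        self.counters = [0] * size

    def _indexes(self, flow_id):
        # Derive k counter positions from the flow identifier (e.g. a 5-tuple string).
        digest = hashlib.sha256(flow_id.encode()).digest()
        return [int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.size
                for i in range(self.num_hashes)]

    def observe(self, flow_id):
        """Count one packet of `flow_id`; return True if the flow looks large."""
        idx = self._indexes(flow_id)
        for i in idx:
            self.counters[i] += 1
        # A flow is reported large only if all of its counters exceed the threshold.
        return all(self.counters[i] >= self.threshold for i in idx)

    def refresh(self):
        # Hypothetical reset step; the adaptive version would trigger this
        # based on how quickly the filter fills up.
        self.counters = [0] * self.size

# Usage: detector.observe("10.0.0.1:443->10.0.0.2:5555") for each packet.
```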
262

Estimation of Pareto distribution functions from samples contaminated by measurement errors

Lwando Orbet Kondlo January 2010 (has links)
The intention is to draw more specific connections between certain deconvolution methods and also to demonstrate the application of the statistical theory of estimation in the presence of measurement error. A parametric methodology for deconvolution when the underlying distribution is of the Pareto form is developed. Maximum likelihood estimation (MLE) of the parameters of the convolved distributions is considered. Standard errors of the estimated parameters are calculated from the inverse Fisher's information matrix and a jackknife method. Probability-probability (P-P) plots and Kolmogorov-Smirnov (K-S) goodness-of-fit tests are used to evaluate the fit of the posited distribution. A bootstrapping method is used to calculate the critical values of the K-S test statistic, which are not available.
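As a reference point for the error-free case, the Pareto shape parameter has a closed-form MLE when the scale is known. The sketch below (synthetic data, scale assumed known at 1.0) shows only that baseline; it does not implement the deconvolution methodology of the thesis, which handles samples contaminated by measurement error.

```python
import numpy as np

def pareto_shape_mle(x, scale):
    """Closed-form MLE of the Pareto shape parameter when the scale (x_m) is known
    and the data are observed WITHOUT measurement error."""
    x = np.asarray(x, dtype=float)
    return len(x) / np.sum(np.log(x / scale))

# Synthetic example: classical Pareto with shape 2.5 and scale 1.0.
rng = np.random.default_rng(0)
sample = (rng.pareto(a=2.5, size=10_000) + 1.0) * 1.0
print(pareto_shape_mle(sample, scale=1.0))   # should be close to 2.5
```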
263

Conception de lignes de fabrication sous incertitudes : analyse de sensibilité et approche robuste.

Gurevsky, Evgeny 13 December 2011 (has links) (PDF)
The work presented in this thesis deals with the design of manufacturing systems under uncertainty. Designing such a system can be viewed as an optimization problem: finding a configuration that optimizes certain objectives while satisfying known technological and economic constraints. The manufacturing systems studied in this dissertation are assembly lines and machining lines. The former is a chain of workstations in which, at each station, the assembly operations are executed sequentially. The latter is a particular kind of line composed of transfer machines equipped with several multi-spindle heads on which the operations are executed simultaneously. We first describe different approaches for modeling data uncertainty in optimization, with particular attention to two of them: the robust approach and sensitivity analysis. We then present three applications: the design of an assembly line and of a machining line subject to variations in operation times, and the design of an assembly line whose operation times are only known as intervals of possible values. For each application we identify the expected performance as well as the complexity introduced by taking uncertainty into account. We then propose new optimization criteria suited to the problem at hand. Finally, solution methods are developed to handle the various problems raised by these criteria.
264

Parameter Estimation of the Pareto-Beta Jump-Diffusion Model in Times of Catastrophe Crisis

Reducha, Wojciech January 2011 (has links)
Jump diffusion models are being used more and more often in financial applications. Consisting of a Brownian motion (with drift) and a jump component, such models have a number of parameters that have to be set at some level. Maximum Likelihood Estimation (MLE) turns out to be suitable for this task; however, it is computationally demanding, and for a complicated likelihood function it is seldom possible to find derivatives analytically. The global maximum of a likelihood function defined for a jump diffusion model can, however, be obtained by numerical methods. I chose the Bound Optimization BY Quadratic Approximation (BOBYQA) method, which proved effective in this case. The results of the maximum likelihood estimation nevertheless proved hard to interpret.
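As a rough illustration of derivative-free likelihood maximization for a jump-diffusion model, the sketch below fits a simplified Merton-style model (normal jump sizes) rather than the Pareto-Beta model of the thesis, and uses SciPy's Powell method as a stand-in because SciPy ships no BOBYQA implementation (BOBYQA itself is available in packages such as Py-BOBYQA or NLopt). The data and starting values are placeholders.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, poisson

def merton_neg_loglik(params, returns, dt=1 / 252, kmax=10):
    """Negative log-likelihood of a simplified Merton jump diffusion with normal
    jump sizes; an illustrative stand-in for the Pareto-Beta jump model above."""
    mu, sigma, lam, mu_j, sigma_j = params
    if sigma <= 0 or lam < 0 or sigma_j <= 0:
        return np.inf
    dens = np.zeros_like(returns)
    # Poisson mixture over the number of jumps k within the interval dt.
    for k in range(kmax + 1):
        w = poisson.pmf(k, lam * dt)
        dens += w * norm.pdf(returns,
                             loc=mu * dt + k * mu_j,
                             scale=np.sqrt(sigma**2 * dt + k * sigma_j**2))
    return -np.sum(np.log(dens + 1e-300))

# Placeholder return series; replace with observed log-returns.
returns = np.random.default_rng(1).normal(0.0, 0.01, size=1_000)

# Derivative-free maximization of the likelihood (Powell as a BOBYQA stand-in).
res = minimize(merton_neg_loglik, x0=[0.05, 0.2, 1.0, 0.0, 0.02],
               args=(returns,), method="Powell")
print(res.x)   # estimated (mu, sigma, lambda, mu_j, sigma_j)
```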
265

Multi-objective optimization using Genetic Algorithms

Amouzgar, Kaveh January 2012 (has links)
In this thesis, the basic principles and concepts of single- and multi-objective Genetic Algorithms (GA) are reviewed. Two algorithms, one for single-objective and the other for multi-objective problems, which are believed to be more efficient, are described in detail. The algorithms are coded in MATLAB and applied to several test functions. The results are compared with existing solutions in the literature and are promising. The obtained Pareto fronts closely match the true Pareto fronts, with a good spread of solutions throughout the optimal region. Constraint-handling techniques are studied and applied in the two algorithms. Constrained benchmarks are optimized, and the outcomes show the ability of the algorithms to maintain solutions across the entire Pareto-optimal region. Finally, a hybrid method based on the combination of the two algorithms is introduced and its performance is discussed. It is concluded that no significant strength is observed with this approach and that more research is required on the topic. For further investigation of the performance of the proposed techniques, implementation on real-world engineering applications is recommended.
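For concreteness, a brute-force Pareto-front filter (all objectives minimized) is sketched below. It is a generic illustration of non-dominance, not the thesis's GA code, and the test points are made up.

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated subset of `points` (rows = solutions,
    columns = objectives, all minimized). Brute-force O(n^2) filter."""
    points = np.asarray(points, dtype=float)
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        if not keep[i]:
            continue
        # p is dominated if some other point is no worse in every objective
        # and strictly better in at least one.
        dominated = np.all(points <= p, axis=1) & np.any(points < p, axis=1)
        if dominated.any():
            keep[i] = False
    return points[keep]

# Example on a made-up two-objective set: [3, 4] is dominated by [2, 3].
pts = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]])
print(pareto_front(pts))
```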
266

Technology Characterization Models and Their Use in Designing Complex Systems

Parker, Robert Reed 2011 May 1900 (has links)
When systems designers are making decisions about which components or technologies to select for a design, they often use experience or intuition to select one technology over another. Additionally, developers of new technologies rarely provide more information about their inventions than discrete data points attained in testing, usually in a laboratory. This makes it difficult for system designers to select newer technologies in favor of proven ones. They lack the knowledge about these new technologies to consider them equally with existing technologies. Prior research suggests that set-based design representations can be useful for facilitating collaboration among engineers in a design project, both within and across organizational boundaries. However, existing set-based methods are limited in terms of how the sets are constructed and in terms of the representational capability of the sets. The goal of this research is to introduce and demonstrate new, more general set-based design methods that are effective for characterizing and comparing competing technologies in a utility-based decision framework. To demonstrate the new methods and compare their relative strengths and weaknesses, different technologies for a power plant condenser are compared. The capabilities of different condenser technologies are characterized in terms of sets defined over the space of common condenser attributes (cross sectional area, heat exchange effectiveness, pressure drop, etc.). It is shown that systems designers can use the resulting sets to explore the space of possible condenser designs quickly and effectively. It is expected that this technique will be a useful tool for system designers to evaluate new technologies and compare them to existing ones, while also encouraging the use of new technologies by providing a more accurate representation of their capabilities. I compare four representational methods by measuring the solution accuracy (compared to a more comprehensive optimization procedure's solution), computation time, and scalability (how a model changes with different data sizes). My results demonstrate that a support vector domain description-based method provides the best combination of these traits for this example. When combined with recent research on reducing its computation time, this method becomes even more favorable.
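A small sketch of how a support vector domain description could characterize a technology's attainable attribute set. It uses scikit-learn's OneClassSVM with an RBF kernel (equivalent to SVDD for that kernel) and entirely hypothetical condenser data, so it only gestures at the representation the abstract names rather than reproducing it.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Hypothetical condenser test points: [cross-sectional area, effectiveness, pressure drop].
tech_samples = np.array([
    [1.2, 0.78, 4.1],
    [1.5, 0.82, 3.6],
    [1.1, 0.75, 4.4],
    [1.8, 0.85, 3.2],
    [1.4, 0.80, 3.9],
])

# One-class SVM with an RBF kernel; for this kernel it coincides with SVDD,
# so it can stand in for a support-vector domain description of the technology.
model = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1).fit(tech_samples)

# A candidate design is "covered" by the technology if it falls inside the
# learned domain (non-negative decision function value).
candidate = np.array([[1.3, 0.79, 4.0]])
print(model.decision_function(candidate) >= 0)
```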
267

Multiobjective Shape Optimization of Linear Elastic Structures Considering Multiple Loading Conditions (Dealing with Mean Compliance Minimization problems)

SHIMODA, Masatoshi, AZEGAMI, Hideyuki, SAKURAI, Toshiaki 15 July 1996 (has links)
No description available.
268

A framework of statistical process control for software development

Shih, Tsung-Yo 03 August 2009 (has links)
In the era of globalization, software companies around the world face not only competition in their domestic industry but also the challenge posed by large international companies. Domestic software companies therefore have to upgrade the quality of their software. Domestic government agencies and non-governmental organizations jointly promote Capability Maturity Model Integration (CMMI), hoping to improve the quality of software development processes through internationally recognized professional appraisal. To reach a high-maturity software development process, CMMI Level 4 requires the process to be managed quantitatively. Frequently used statistical process control (SPC) methods include control charts, fishbone diagrams, Pareto charts, and other related practices. The goal of SPC is to keep the overall software development process stable so that its output performance becomes predictable. SPC was originally applied in manufacturing, where it successfully improved product quality; however, certain characteristics of software, such as the fact that software development is a human-intensive and innovative activity, increase both the variability to be controlled and the difficulty of implementation. This study collates and analyzes an operational framework for SPC and CMMI Level 4 through a literature review and a case study of the practices of the case company (company A). The framework covers two perspectives: an organizational perspective and a methodological perspective. The organizational perspective addresses the stages of implementing CMMI Level 4 and SPC in the software industry, as well as how to design the organizational structure. The methodological perspective covers the steps for running SPC and the useful methods and tools, and uses control theory to collate the relevant control mechanisms. Finally, we illustrate how to integrate SPC into the company's system development life cycle. The framework can serve as a reference for domestic software companies aspiring to implement CMMI Level 4 and SPC.
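As one concrete instance of the control-chart practice mentioned above, the sketch below computes individuals/moving-range (XmR) limits for a hypothetical software metric. It is a generic SPC illustration, not the case company's actual procedure, and the defect-density figures are invented.

```python
import numpy as np

def xmr_limits(values):
    """Individuals/moving-range (XmR) control limits, a common SPC chart."""
    values = np.asarray(values, dtype=float)
    moving_ranges = np.abs(np.diff(values))
    mr_bar = moving_ranges.mean()
    center = values.mean()
    # 2.66 = 3 / d2, with d2 = 1.128 for subgroups of size 2 (standard SPC constant).
    ucl = center + 2.66 * mr_bar
    lcl = center - 2.66 * mr_bar
    return center, lcl, ucl

# Hypothetical defect densities (defects/KLOC) from successive releases.
densities = [4.2, 3.8, 5.1, 4.4, 4.9, 3.6, 4.0, 5.3]
center, lcl, ucl = xmr_limits(densities)
out_of_control = [x for x in densities if x < lcl or x > ucl]
print(center, lcl, ucl, out_of_control)
```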
269

A pareto frontier intersection-based approach for efficient multiobjective optimization of competing concept alternatives

Rousis, Damon 01 July 2011 (has links)
The expected growth of civil aviation over the next twenty years places significant emphasis on revolutionary technology development aimed at mitigating the environmental impact of commercial aircraft. As the number of technology alternatives grows along with model complexity, current methods for Pareto finding and multiobjective optimization quickly become computationally infeasible. Coupled with the large uncertainty in the early stages of design, optimal designs are sought while avoiding the computational burden of excessive function calls when a single design change or technology assumption could alter the results. This motivates the need for a robust and efficient evaluation methodology for quantitative assessment of competing concepts. This research presents a novel approach that combines Bayesian adaptive sampling with surrogate-based optimization to efficiently place designs near Pareto frontier intersections of competing concepts. Efficiency is increased over sequential multiobjective optimization by focusing computational resources specifically on the location in the design space where optimality shifts between concepts. At the intersection of Pareto frontiers, the selection decisions are most sensitive to preferences placed on the objectives, and small perturbations can lead to vastly different final designs. These concepts are incorporated into an evaluation methodology that ultimately reduces the number of failed cases, infeasible designs, and Pareto-dominated solutions across all concepts. A set of algebraic samples along with a truss design problem are presented as canonical examples for the proposed approach. The methodology is applied to the design of ultra-high bypass ratio turbofans to guide NASA's technology development efforts for future aircraft. Geared-drive and variable geometry bypass nozzle concepts are explored as enablers for increased bypass ratio and potential alternatives over traditional configurations. The method is shown to improve sampling efficiency and provide clusters of feasible designs that motivate a shift towards revolutionary technologies that reduce fuel burn, emissions, and noise on future aircraft.
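A much simpler stand-in for the idea of locating where optimality shifts between concepts: the sketch below interpolates two sampled Pareto fronts on a common grid and reports where the better concept flips. It only gestures at, and does not implement, the Bayesian adaptive sampling approach the abstract describes; the fronts are hypothetical.

```python
import numpy as np

def front_crossover(front_a, front_b):
    """Return objective-1 values where the better of two concepts flips, given
    their sampled Pareto fronts as (objective1, objective2) pairs sorted by
    objective 1, with both objectives minimized."""
    a = np.asarray(front_a, dtype=float)
    b = np.asarray(front_b, dtype=float)
    lo = max(a[:, 0].min(), b[:, 0].min())
    hi = min(a[:, 0].max(), b[:, 0].max())
    grid = np.linspace(lo, hi, 200)
    # Gap in objective 2 between the two fronts on the shared objective-1 range.
    gap = (np.interp(grid, a[:, 0], a[:, 1])
           - np.interp(grid, b[:, 0], b[:, 1]))
    sign_change = np.where(np.diff(np.sign(gap)) != 0)[0]
    return grid[sign_change]

# Two hypothetical concept fronts; the better concept switches between them.
concept_a = [(1.0, 9.0), (2.0, 6.0), (3.0, 4.5), (4.0, 4.0)]
concept_b = [(1.0, 7.0), (2.0, 6.5), (3.0, 6.0), (4.0, 5.5)]
print(front_crossover(concept_a, concept_b))
```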
