251

Robustness of reinforced concrete framed building at elevated temperatures

Lee, Seungjea January 2016 (has links)
This thesis presents the results of a research programme to investigate the behaviour and robustness of reinforced concrete (RC) frames in fire. The research was carried out through numerical simulations using the commercial finite element analysis package TNO DIANA. The main focus of the project is the large deflection behaviour of restrained reinforced concrete beams, in particular the development of catenary action, because this behaviour is the most important factor that influences the frame response under accidental loading. This research includes four main parts as follows: (1) validation of the simulation model; (2) behaviour of axially and rotationally restrained RC beams at elevated temperatures; (3) derivation of an analytical method to estimate the key quantities of restrained RC beam behaviour at elevated temperatures; (4) response and robustness of RC frame structures with different extents of damage at elevated temperatures. The analytical method has been developed to estimate the following three quantities: when the axial compression force in the restrained beam reaches the maximum; when the RC beams reach bending limits (axial force = 0) and when the beams finally fail. To estimate the time to failure, which is initiated by the fracture of reinforcement steel at the catenary action stage, a regression equation is proposed to calculate the maximum deflections of RC beams, based on an analysis of the reinforcement steel strain distributions at failure for a large number of parametric study results. A comparison between the analytical and simulation results indicates that the analytical method gives reasonably good approximations to the numerical simulation results. Based on the frame simulation results, it has been found that if a member is completely removed from the structure, the structure is unlikely to be able to develop an alternative load carrying mechanism to ensure robustness of the structure. This problem is particularly severe when a corner column is removed. However, it is possible for frames with partially damaged columns to achieve the required robustness in fire, provided the columns still have sufficient resistance to allow the beams to develop some catenary action. This may be possible if the columns are designed as simply supported columns, but have some reserves of strength in the frame due to continuity. Merely increasing the reinforcement steel area or ductility (which is difficult to do) would not be sufficient. However, increasing the cover thickness of the reinforcement steel to slow down the temperature increase is necessary.
252

Contribution to the robust design of vehicles in frontal impact: crash failure detection

Rosenblatt, Nicolas 27 June 2012 (has links)
This thesis deals with the robust design of complex systems within the framework of systems engineering and the First Design method, applied to the frontal crashworthiness of vehicles in the Renault range. The main goal is to develop a robust design method based on numerical simulation of the vehicle's crash performance. The strategy aims to ensure the robustness of the product from the design stage onwards, in order to avoid late and costly design changes triggered by problems discovered during the validation cycle or the series life of the vehicle. The distinctive features of crash analysis are the high cost of each simulation, the strong non-linearity of the phenomenon and the presence of behavioural bifurcations, which make classical robust design methods inefficient or very expensive to use. To address this, we develop an original method, called failure detection, which identifies crash robustness problems so that they can be corrected during the design cycle. The method is based on optimization techniques using design of experiments, and it also integrates the expertise of crash engineers in order to locate failures quickly and thus limit the number of simulations required. The counterpart of a robust design method that relies on numerical simulation is the need for a high level of confidence in the model results; this thesis therefore also proposes improvements to the finite element crash models of Renault vehicles in order to improve simulation quality. This work supports the replacement of physical prototypes by numerical ones in industry, a major lever for reducing development costs and lead times, which is particularly important in a highly competitive automotive sector where a manufacturer's survival depends on its costs and its responsiveness to the market.
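As a rough illustration of the failure-detection idea described above, the sketch below screens a full-factorial design of experiments around a nominal design and flags the combinations that violate an acceptance threshold. The surrogate response function, the parameter levels and the intrusion limit are invented placeholders standing in for the actual crash simulations and criteria used in the thesis.

```python
import itertools
import math

def crash_response(thickness, yield_stress, impact_speed):
    """Toy surrogate standing in for a crash simulation (invented for illustration):
    intrusion worsens with impact speed and improves with panel thickness and strength."""
    return 250.0 * impact_speed / (thickness * math.sqrt(yield_stress))

# Full-factorial design over perturbations of the nominal design (three levels per factor).
levels = {
    "thickness":    [1.8, 2.0, 2.2],        # mm
    "yield_stress": [320.0, 350.0, 380.0],  # MPa
    "impact_speed": [15.0, 15.6, 16.2],     # m/s
}
INTRUSION_LIMIT = 110.0  # acceptance threshold, also invented

failures = []
for combo in itertools.product(*levels.values()):
    point = dict(zip(levels.keys(), combo))
    intrusion = crash_response(**point)
    if intrusion > INTRUSION_LIMIT:          # the design "fails" this virtual test
        failures.append((point, intrusion))

print(f"{len(failures)} failing configurations out of {3 ** 3}")
for point, value in failures:
    print(point, f"intrusion = {value:.1f}")
```

In practice each evaluation would be an expensive crash simulation, which is why the thesis combines the design of experiments with engineering expertise to keep the number of runs small.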
253

Robust estimation for uncertain systems

Bayon, Benoît 06 December 2012 (has links)
A system is said to be robust if its correct dynamic behaviour can be guaranteed despite the dispersion of its characteristics during manufacturing, variations in its environment and ageing. Beyond the fact that some dispersion is unavoidable, tolerating a larger dispersion can greatly reduce production costs, so taking robustness explicitly into account is a crucial issue when designing a system. Robust properties can be guaranteed when synthesizing a closed-loop controller, but they are much harder to guarantee in open loop, which is the situation encountered in estimator synthesis. Accounting for robustness at the synthesis stage is a central concern of the robust control community. A number of tools have been developed to analyse the robustness of a system with respect to a set of uncertainties (μ-analysis, for example). Although the problem is intrinsically hard in the algorithmic sense, relaxations have provided sufficient conditions for testing the stability of a system against a set of uncertainties. The emergence of optimization under Linear Matrix Inequality (LMI) constraints has made it possible to test these sufficient conditions with an efficient algorithm, that is, one converging to a solution in reasonable time thanks to the development of interior-point methods. Building on these analysis results, the closed-loop controller synthesis problem cannot be formulated as an optimization problem for which an efficient algorithm exists. For some cases, however, such as robust filter synthesis, the synthesis problem can be cast as an LMI-constrained optimization problem for which an efficient algorithm does exist, which suggests that the robust approach has real potential for estimator synthesis. Exploiting this fact, this thesis proposes a complete approach to robust estimator synthesis based on the analysis tools of robust control, while retaining the computational efficiency that made the classical tools successful. The approach proceeds by reinterpreting nominal (uncertainty-free) estimation as optimization under LMI constraints, and then systematically extending the synthesis and analysis tools developed for nominal estimation to robust estimation. The thesis presents synthesis tools for estimators as well as analysis tools for assessing the robust performance that the estimators achieve. All results are stated as theorems involving LMI constraints, which can systematically be put in the form of optimization problems for which an efficient algorithm exists. Finally, robust estimator synthesis belongs to a broader class of problems, robust open-loop synthesis, which has considerable potential. Basic results are formulated for open-loop synthesis, providing robust synthesis methods for cases in which a feedback loop cannot be implemented. An extension to LPV systems, with an application to position control without a position sensor, is also proposed.
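To make the LMI machinery referred to above concrete, the following sketch checks quadratic stability of a small polytopic uncertain system by searching for a common Lyapunov matrix with CVXPY. This is only one of the simplest sufficient-condition tests of the kind described, and the system matrices are invented for illustration, not taken from the thesis.

```python
import numpy as np
import cvxpy as cp

# Vertices of a polytopic uncertain system x' = A(delta) x (matrices invented for illustration).
A1 = np.array([[0.0, 1.0], [-2.0, -1.0]])
A2 = np.array([[0.0, 1.0], [-3.0, -0.5]])

n = 2
P = cp.Variable((n, n), symmetric=True)   # candidate common Lyapunov matrix
eps = 1e-3
constraints = [P >> eps * np.eye(n)]
for A in (A1, A2):
    # Lyapunov inequality at each vertex: A'P + PA < 0 (sufficient for robust stability)
    constraints.append(A.T @ P + P @ A << -eps * np.eye(n))

problem = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
problem.solve()
print("quadratically stable" if problem.status == cp.OPTIMAL
      else "LMI infeasible (test inconclusive)")
```

Feasibility is only sufficient: if the LMI fails, the uncertain system may still be stable, which is exactly the conservatism the relaxations mentioned in the abstract trade against computational efficiency.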
254

Robustness and stability of nonlinear systems: a homogeneous point of view

Bernuau, Emmanuel 03 October 2013 (has links)
The purpose of this work is the study of stability and robustness properties of nonlinear systems using homogeneity-based methods. First, we recall the usual framework of homogeneous systems and their main features. The work then extends the homogenization of nonlinear systems, already defined in the setting of weighted homogeneity, to the more general setting of geometric homogeneity, and the main approximation results are extended accordingly. We then develop a theoretical framework for defining the homogeneity of discontinuous systems and of systems given by differential inclusions, and show that the well-known properties of homogeneous systems persist in this context. The work continues with a study of the robustness properties of homogeneous and homogenizable systems; we show that, under mild assumptions, these systems are input-to-state stable. Finally, the last part of this work studies the particular case of the double integrator. We synthesize a finite-time stabilizing output feedback for this system and, using the results developed earlier, prove robustness properties with respect to perturbations and to discretization. Simulations complete the theoretical study of this system and illustrate its behaviour.
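The double-integrator case mentioned above can be illustrated with a standard homogeneous finite-time feedback of the form u = -k1·sig(x1)^a1 - k2·sig(x2)^a2. The sketch below simulates it with explicit Euler integration; the gains, the exponents and the use of full state feedback (rather than the output feedback actually designed in the thesis) are illustrative assumptions.

```python
import numpy as np

def sig(x, alpha):
    """Signed power |x|^alpha * sign(x), the basic ingredient of homogeneous feedbacks."""
    return np.sign(x) * np.abs(x) ** alpha

# Double integrator x1' = x2, x2' = u with a homogeneous feedback
# u = -k1*sig(x1, a1) - k2*sig(x2, a2); the exponents a1 = a/(2-a), a2 = a (0 < a < 1)
# make the closed loop homogeneous of negative degree, hence finite-time stable.
a = 0.6
a1, a2 = a / (2.0 - a), a
k1, k2 = 2.0, 3.0

dt, T = 1e-3, 10.0
x = np.array([1.0, -0.5])       # initial condition (arbitrary)
for _ in range(int(T / dt)):
    u = -k1 * sig(x[0], a1) - k2 * sig(x[1], a2)
    x = x + dt * np.array([x[1], u])     # explicit Euler step

print("state after 10 s:", x)   # numerically very close to the origin
```

Repeating the run with an added bounded disturbance on u, or with a coarser step dt, is a quick way to see the kind of robustness to perturbations and discretization that the thesis establishes formally.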
255

Scalability and robustness of artificial neural networks

Stromatias, Evangelos January 2016 (has links)
Artificial Neural Networks (ANNs) continue to gain popularity, as they are used in several diverse research fields and many different contexts, ranging from biological simulations and experiments on artificial neuronal models to machine learning models intended for industrial and engineering applications. One example is the recent success of Deep Learning architectures (e.g., Deep Belief Networks [DBNs]), which are in the spotlight of machine learning research, as they are capable of delivering state-of-the-art results in many domains. While the performance of such ANN architectures is greatly affected by their scale, their capacity for scalability, both for training and during execution, is limited by increased power consumption and communication overheads, implicitly posing a limiting factor on their real-time performance. The on-going work on the design and construction of spike-based neuromorphic platforms offers an alternative for running large-scale neural networks, such as DBNs, with significantly lower power consumption and lower latencies, but has to overcome the hardware limitations and model specialisations imposed by this type of circuit. SpiNNaker is a novel massively parallel, fully programmable and scalable architecture designed to enable real-time spiking neural network (SNN) simulations. These properties make SpiNNaker an attractive neuromorphic exploration platform for running large-scale ANNs; however, it is necessary to investigate thoroughly both its power requirements and its communication latencies. This research focuses on two main aspects. First, it characterises the power requirements and communication latencies of the SpiNNaker platform while running large-scale SNN simulations. The results of this investigation lead to the derivation of a power estimation model for the SpiNNaker system, a reduction of the overall power requirements and the characterisation of the intra- and inter-chip spike latencies. Second, it provides a full characterisation of spiking DBNs, through a set of case studies that determine the impact of (a) the hardware bit precision; (b) the input noise; (c) weight variation; and (d) combinations of these on the classification performance of spiking DBNs for the problem of handwritten digit recognition. The results demonstrate that spiking DBNs can be realised on limited-precision hardware platforms without drastic performance loss, and thus offer an excellent compromise between accuracy and low-power, low-latency execution. These studies are intended to provide guidelines for current and future efforts to develop custom large-scale digital and mixed-signal spiking neural network platforms.
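The bit-precision case study described above can be mimicked on a toy scale: quantize a set of trained weights to progressively coarser fixed-point grids and measure the drop in classification accuracy. The sketch below does this for a synthetic linear classifier; it is not the spiking DBN, the MNIST data or the SpiNNaker fixed-point format from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, frac_bits):
    """Round weights to a signed fixed-point grid with the given number of fractional bits."""
    scale = 2 ** frac_bits
    return np.round(w * scale) / scale

# Synthetic two-class problem and a fixed, pre-"trained" linear readout (illustration only).
X = rng.normal(size=(1000, 20))
true_w = rng.normal(size=20)
y = (X @ true_w > 0).astype(int)
w_full = true_w + rng.normal(scale=0.05, size=20)   # stand-in for learned full-precision weights

for frac_bits in (8, 4, 2, 1):
    w_q = quantize(w_full, frac_bits)
    acc = np.mean((X @ w_q > 0).astype(int) == y)
    print(f"{frac_bits} fractional bits: accuracy = {acc:.3f}")
```

The pattern typically seen, and reported at much larger scale in the thesis, is that accuracy degrades gracefully until the grid becomes too coarse to separate the weight values.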
256

DEVELOPMENT OF A SUPPLIER SEGMENTATION METHOD FOR INCREASED RESILIENCE AND ROBUSTNESS: A STUDY USING AGENT BASED MODELING AND SIMULATION

Brown, Adam J. 01 January 2017 (has links)
Supply chain management is a complex process requiring the coordination of numerous decisions in the attempt to balance often-conflicting objectives such as quality, cost, and on-time delivery. To meet these and other objectives, a focal company must develop organized systems for establishing and managing its supplier relationships. A reliable decision-support tool is needed for selecting the best procurement strategy for each supplier, given knowledge of the existing sourcing environment. Supplier segmentation is a well-established and resource-efficient tool used to identify procurement strategies for groups of suppliers with similar characteristics. However, the existing methods of segmentation generally select strategies that optimize performance during normal operating conditions, and do not explicitly consider the effects of the chosen strategy on the supply chain’s ability to respond to disruption. As a supply chain expands in complexity and scale, its exposure to sources of major disruption like natural disasters, labor strikes, and changing government regulations also increases. With increased exposure to disruption, it becomes necessary for supply chains to build in resilience and robustness in the attempt to guard against these types of events. This work argues that the potential impacts of disruption should be considered during the establishment of day-to-day procurement strategy, and not solely in the development of posterior action plans. In this work, a case study of a laser printer supply chain is used as a context for studying the effects of different supplier segmentation methods. The system is examined using agent-based modeling and simulation with the objective of measuring disruption impact, given a set of initial conditions. Through insights gained from the results, this work seeks to derive a set of improved rules for the segmentation procedure whereby the best strategy for resilience and robustness can be identified for any supplier, given a set of observable supplier characteristics.
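As a minimal illustration of using agent-based simulation to measure disruption impact under different sourcing strategies, the sketch below lets supplier agents fail and recover at random and accumulates the focal company's unmet demand. The failure probabilities, recovery times and demand figures are invented and unrelated to the laser printer case study.

```python
import random

random.seed(1)

def simulate(n_suppliers, disruption_prob, recovery_time, weeks=52, demand=100):
    """Toy agent-based run: each supplier is an agent that can fail and later recover;
    total unmet demand over the horizon is the disruption-impact measure."""
    capacity = demand / n_suppliers          # demand split evenly across suppliers
    down_until = [0] * n_suppliers
    unmet = 0.0
    for week in range(weeks):
        supplied = 0.0
        for i in range(n_suppliers):
            if week >= down_until[i] and random.random() < disruption_prob:
                down_until[i] = week + recovery_time   # supplier agent is disrupted
            if week >= down_until[i]:
                supplied += capacity                   # healthy supplier delivers
        unmet += max(0.0, demand - supplied)
    return unmet

for n in (1, 2, 4):
    impact = sum(simulate(n, 0.05, 8) for _ in range(200)) / 200
    print(f"{n} supplier(s): mean unmet demand over a year = {impact:.0f} units")
```

Even this toy run shows the trade-off the segmentation rules must weigh: spreading volume across more suppliers reduces disruption impact but multiplies the relationships the focal company has to manage.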
257

Algorithm for inserting a single train in an existing timetable

Ljunggren, Fredrik, Persson, Kristian January 2017 (has links)
The purpose of this report is to develop a network-based insertion algorithm and evaluate it on a real-world timetable. The aim of the algorithm is to minimize the effect that inserting the train has on the other, already scheduled traffic. We meet this purpose by choosing an objective function that maximizes the minimum distance to a conflicting train path, which ensures that the inserted train receives the best possible bottleneck robustness. We construct a graph problem, which we solve with a modified version of Dijkstra’s algorithm; the complexity of the algorithm is O(s²t log(s²t)). We applied the algorithm to a Swedish timetable containing 76 stations. The algorithm performs well and manages to obtain the optimal solution for a range of scenarios, which we have evaluated in various experiments. Increased congestion seemed to reduce the problem size. The case study also shows that a solution’s robustness decreases as the total number of departures increases. One disadvantage of the algorithm is that it cannot detect the best solution among those using the same bottleneck. We propose a solution to this that we hope can be implemented in further studies.
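A minimal sketch of the bottleneck (maximin) variant of Dijkstra's algorithm that the abstract describes is given below: edge weights represent the headway to the nearest conflicting scheduled train, and the search returns the insertion path whose smallest headway is as large as possible. The graph, node naming and weights are invented for illustration and are not the thesis's timetable model.

```python
import heapq

def maximin_path(graph, source, target):
    """Modified Dijkstra: find the path whose smallest edge weight (here, the smallest
    headway to any already-scheduled train) is as large as possible.
    graph: {node: [(neighbour, headway), ...]}."""
    best = {source: float("inf")}
    prev = {}
    heap = [(-float("inf"), source)]            # max-heap via negated keys
    while heap:
        neg_bottleneck, node = heapq.heappop(heap)
        bottleneck = -neg_bottleneck
        if node == target:
            break
        if bottleneck < best.get(node, -float("inf")):
            continue                            # stale heap entry
        for nxt, headway in graph.get(node, []):
            cand = min(bottleneck, headway)     # bottleneck along the extended path
            if cand > best.get(nxt, -float("inf")):
                best[nxt] = cand
                prev[nxt] = node
                heapq.heappush(heap, (-cand, nxt))
    path, node = [], target
    while node in prev:
        path.append(node)
        node = prev[node]
    return [source] + path[::-1], best.get(target)

# Tiny illustrative network: nodes stand for (station, departure slot) candidates and
# weights for minutes of headway to the closest conflicting train (all values invented).
g = {
    "A0": [("B1", 6), ("B2", 3)],
    "B1": [("C3", 2)],
    "B2": [("C3", 5)],
    "C3": [],
}
print(maximin_path(g, "A0", "C3"))   # expect the route via B2 with bottleneck 3
```

This also makes the reported drawback visible: any two paths sharing the same bottleneck edge score identically, so a secondary criterion is needed to rank them.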
258

Evaluation of nCk estimators

Bsharat, Rebhi S January 1900 (has links)
Doctor of Philosophy / Department of Statistics / James J. Higgins / Outliers in the data impair traditional estimators of location, variance, and regression parameters so researchers tend to look for robust estimators, i.e., estimators that aren’t sensitive to outliers. These robust estimators can tolerate a certain proportion of outliers. Besides robustness, efficiency is another desirable property. Researchers try to find estimators that are efficient under standard conditions and use them when outliers exist in the data. In this study the robustness and efficiency of a class of estimators that we call nCk estimators are investigated. Special cases of this method exist in the literature including U and generalized L-statistics. This estimation technique is based on taking all subsamples of size k from a sample of size n, finding the estimator of interest for each subsample, and specifying one of them, typically the median, or a linear combination of them as the estimator of the parameter of interest. A simulation study is conducted to evaluate these estimators under different distributions with small sample sizes. Estimators of location, scale, linear regression and multiple regression parameters are studied and compared to other estimators existing in the literature. The concept of data depth is used to propose a new type of estimator for the regression parameters in multiple regression.
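A small sketch of the nCk idea described above, for a location estimator: compute an estimate (here, the sample mean) on every subsample of size k and report the median of those subsample estimates. The data values and the choice of the subsample mean are illustrative assumptions; the thesis studies several estimators and subsample statistics.

```python
from itertools import combinations
from statistics import mean, median

def nck_location(sample, k):
    """nCk estimator of location: compute the mean of every size-k subsample,
    then report the median of those subsample estimates (one common variant)."""
    sub_estimates = [mean(sub) for sub in combinations(sample, k)]
    return median(sub_estimates)

# Small sample with one gross outlier (values invented for illustration).
data = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 9.7, 45.0]
print("sample mean:", round(mean(data), 2))            # pulled upward by the outlier
print("nCk, k = 3 :", round(nck_location(data, 3), 2)) # close to 10, largely unaffected
```

The robustness comes from the median over subsample estimates: as long as fewer than half of the size-k subsamples contain an outlier, the reported value stays near the clean-data estimate, at some cost in efficiency and in the combinatorial number of subsamples.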
259

Controller optimization with metaheuristics: application to inertial line-of-sight stabilization

Feyel, Philippe 16 June 2015 (has links)
In industry, the control engineer must design a single control law, validated on a single prototype, that has a sufficient degree of robustness to satisfy a complex specification across a large number of systems. The development methodology typically used is an experimental, iterative process (a trial-and-error phase) that relies heavily on the engineer’s experience. This thesis attempts to make the controller synthesis methodology more efficient, more direct and therefore less costly in development time, by computing a final structured controller through a direct attack on the high-level system specification. The complexity of high-level system specifications motivates the use of metaheuristics: these optimization techniques do not require the formulation of a gradient, the only constraint being the ability to evaluate the specification. We therefore propose to reformulate robust control problems for stochastic optimization: we show how structured controllers can be synthesized from H∞- or µ-synthesis-type problems, and that the interest of the approach lies in its flexibility and in its ability to handle complex, "exotic" constraints. Since evolutionary algorithms prove to be very effective and competitive, we finally develop on this basis an original method for synthesizing structured controllers that are robust with respect to optimization criteria of arbitrary form. This work is validated on industrial examples of sighting systems (line-of-sight stabilization).
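To illustrate the kind of gradient-free tuning described above, the sketch below uses a simple (μ, λ) evolution strategy to tune the two gains of a structured (PD) controller against a simulated step-response cost. The toy plant, the cost function and the algorithm settings are invented stand-ins for the high-level specification, the line-of-sight model and the evolutionary algorithms used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

def cost(gains, dt=0.01, T=5.0):
    """Illustrative closed-loop cost: integrated tracking error plus control effort
    for a toy second-order plant under a PD controller (stand-in for the real spec)."""
    kp, kd = gains
    x, v, J = 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        e = 1.0 - x                       # unit step reference
        u = kp * e - kd * v
        a = -0.5 * v + u                  # toy plant: x'' = -0.5 x' + u
        v += dt * a
        x += dt * v
        J += dt * (e * e + 1e-3 * u * u)
    return J

# A (mu, lambda) evolution strategy: one of the simplest metaheuristics usable
# when the specification can only be evaluated, not differentiated.
pop = rng.uniform(0.0, 20.0, size=(30, 2))
for generation in range(40):
    scores = np.array([cost(p) for p in pop])
    parents = pop[np.argsort(scores)[:10]]                      # keep the 10 best
    children = np.repeat(parents, 3, axis=0) + rng.normal(0.0, 0.5, size=(30, 2))
    pop = np.clip(children, 0.0, 50.0)

best = min(pop, key=cost)
print("tuned gains (kp, kd):", np.round(best, 2), " cost:", round(cost(best), 3))
```

The same loop accepts any scalar evaluation of the specification, which is exactly what makes the approach flexible enough to absorb the "exotic" constraints mentioned above, at the price of many simulation runs per synthesis.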
260

Industrial and office wideband MIMO channel performance

Nair, Lakshmi Ravindran 26 November 2009 (has links)
The aim of this dissertation is to characterize the MIMO channel in two very distinct indoor scenarios: an office building and an industrial environment. The study investigates the use of single- and dual-polarized antenna MIMO systems, and attempts to model the channel using well-known analytical models. The suitability of MIMO architectures employing either single or dual-polarization antennas is presented, with the purpose of identifying not only which architecture provides better average capacity performance, but also which is more robust for avoiding low channel rank. A measurement campaign employing dual-polarized 8×8 patch arrays at 2.4 GHz and 5.2 GHz is analyzed. For both environments the performance of three 4×4 subsystems (dual-polarized, vertical-polarized and horizontal-polarized) are compared in terms of the average capacities attained by these systems and their eigenvalue distributions. Average capacities are found to be only marginally different, indicating little advantage of dual-polarized elements for average performance. However, an eigenvalue analysis indicates that the dual-polarized system is most robust for full-rank MIMO communications, by providing orthogonal channels with more equal gain. The analysis of the analytical models shows that the Kronecker and Weichselberger models underestimate the measured data. Kronecker models are known to perform poorly for large antenna sizes and the performance of the Weichselberger model can be attributed to certain parts of the channel not fading enough. / Dissertation (MEng)--University of Pretoria, 2009. / Electrical, Electronic and Computer Engineering / unrestricted
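For reference, the capacity and eigenvalue quantities compared in the abstract can be computed as sketched below for synthetic i.i.d. Rayleigh 4×4 channels; the measured, possibly correlated, single- or dual-polarized channels of the thesis would simply replace the randomly generated H matrices.

```python
import numpy as np

rng = np.random.default_rng(3)

def capacity_bits(H, snr_linear):
    """MIMO capacity with equal power allocation: log2 det(I + (SNR/Nt) * H H^H)."""
    nr, nt = H.shape
    return np.real(np.log2(np.linalg.det(np.eye(nr) + (snr_linear / nt) * H @ H.conj().T)))

nt = nr = 4
snr = 10 ** (10 / 10)                     # 10 dB in linear scale
caps, eig_spreads = [], []
for _ in range(2000):
    # Synthetic i.i.d. Rayleigh channel; measured channels would be substituted here.
    H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
    caps.append(capacity_bits(H, snr))
    ev = np.linalg.eigvalsh(H @ H.conj().T)
    eig_spreads.append(10 * np.log10(ev[-1] / ev[0]))   # max/min eigenvalue ratio in dB

print(f"mean 4x4 capacity at 10 dB SNR: {np.mean(caps):.1f} bit/s/Hz")
print(f"median eigenvalue spread: {np.median(eig_spreads):.1f} dB")
```

A smaller eigenvalue spread indicates more equal-gain orthogonal channels, which is the sense in which the dual-polarized configuration is reported above to be the most robust for full-rank MIMO communication.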
