21

Reliability modelling of complex systems

Mwanga, Alifas Yeko. January 2006
Thesis (Ph.D.) (Industrial and Systems Engineering)--University of Pretoria, 2006. / Includes summary. Includes bibliographical references. Available on the Internet via the World Wide Web.
22

Network reliability as a result of redundant connectivity

Binneman, Francois J. A. 03 1900
Thesis (MSc (Logistics))--University of Stellenbosch, 2007. / There exists, for any connected graph G, a minimum set of vertices that, when removed, disconnects G. Such a set of vertices is known as a minimum cut-set, the cardinality of which is known as the connectivity number k(G) of G. A connectivity preserving [connectivity reducing, respectively] spanning subgraph G′ ⊆ G may be constructed by removing certain edges of G in such a way that k(G′) = k(G) [k(G′) < k(G), respectively]. The problem of constructing such a connectivity preserving or reducing spanning subgraph of minimum weight is known to be NP-complete. This thesis contains a summary of the most recent results (as of 2006) from a comprehensive survey of the literature on topics related to the connectivity of graphs. Secondly, the computational problems of constructing a minimum weight connectivity preserving or connectivity reducing spanning subgraph for a given graph G are considered in this thesis. In particular, three algorithms are developed for constructing such spanning subgraphs. The theoretical basis for each algorithm is established and discussed in detail. The practicality of the algorithms is compared in terms of their worst-case running times as well as their solution qualities. The fastest of these three algorithms has a worst-case running time that compares favourably with the fastest algorithm in the literature. Finally, a computerised decision support system, called Connectivity Algorithms, is developed which is capable of implementing the three algorithms described above for a user-specified input graph.
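As a quick illustration of the notions in this abstract (editor's sketch, not code from the thesis), the snippet below uses the networkx library and a hypothetical example graph to compute the connectivity number k(G) and to build a connectivity-reducing spanning subgraph by deleting an edge.

```python
# A minimal sketch, assuming networkx is available; the graph is a
# hypothetical example, not one from the thesis.
import networkx as nx

G = nx.complete_graph(4)            # K4 has connectivity number k(G) = 3
print(nx.node_connectivity(G))      # -> 3

H = G.copy()                        # spanning subgraph G' (same vertex set)
H.remove_edge(0, 1)                 # deleting this edge reduces connectivity
print(nx.node_connectivity(H))      # -> 2, so k(G') < k(G)
```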
23

Detecção de linhas redundantes em problemas de programação linear de grande porte / Finding all linearly dependent rows in large-scale linear programming

Silva, Daniele Costa, 1984- 16 August 2018
Orientador: Aurelio Ribeiro Leite de Oliveira / Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Matematica, Estatistica e Computação Cientifica / Previous issue date: 2010 / Resumo: A presença de linhas redundantes na matriz de restrições não é incomum em problemas reais de grande porte. A existência de tais linhas deve ser levada em consideração na solução destes problemas. Se o método de solução adotado for o método simplex, existem procedimentos eficientes e de fácil implementação que contornam este problema. O mesmo se aplica quando métodos de pontos interiores são adotados e os sistemas lineares resultantes são resolvidos por métodos diretos. No entanto, existem problemas de grande porte cuja única forma possível de solução é resolver os sistemas lineares por métodos iterativos. Nesta situação as linhas redundantes representam uma dificuldade considerável, pois geram uma matriz singular e os métodos iterativos não convergem. A única alternativa viável consiste em detectar tais linhas e eliminá-las antes da aplicação dos métodos de pontos interiores. Este trabalho propõe uma implementação eficiente de um procedimento de detecção de linhas redundantes, que incluímos em uma adaptação própria do PCx que resolve os sistemas lineares por métodos iterativos / Abstract: The presence of dependent rows in the constraint matrix is frequent in real large-scale problems. If the adopted solution method is the simplex method, there are efficient procedures, easy to implement, that circumvent this problem. The same applies when interior point methods are adopted and the resulting linear systems are solved by direct methods. However, there are large-scale problems whose only viable solution approach is to solve the linear systems by iterative methods. In this situation, the dependent rows create a singular matrix and the iterative methods do not converge. The only viable alternative is to find and remove these rows before applying the method. This dissertation proposes an efficient implementation of a procedure for detecting dependent rows, included in a modification of PCx that solves the linear systems by iterative methods / Mestrado / Programação Linear / Mestre em Matemática Aplicada
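The idea can be illustrated with a small dense sketch (an editor's assumption for illustration; the dissertation targets large sparse systems with its own procedure): a rank-revealing QR factorization of the transpose of the constraint matrix A flags which rows are linearly dependent.

```python
# A small dense sketch, not the dissertation's sparse procedure: detect
# linearly dependent rows of A via rank-revealing QR of A^T.
import numpy as np
from scipy.linalg import qr

A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.]])          # third row = first row + second row

Q, R, piv = qr(A.T, pivoting=True)    # pivoted QR of the transpose
tol = max(A.shape) * np.finfo(float).eps * abs(R[0, 0])
rank = int((abs(np.diag(R)) > tol).sum())

keep = sorted(piv[:rank])             # a maximal independent set of rows
drop = sorted(piv[rank:])             # rows flagged as redundant
print("keep:", keep, "drop:", drop)   # rank is 2, so one row is dropped
```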
24

Reliability allocation and apportionment: addressing redundancy and life-cycle cost

Nowicki, David R. 04 August 2009
Two reliability analysis techniques, allocation and apportionment, have the potential to influence a system's design (a distinction is made here between allocation and apportionment). Algorithms that account for ever-increasing design complexities are constructed here for both. As designs of aircraft, railway systems, automobiles and space systems continue to push the envelope in terms of their capabilities, performance criteria such as reliability, and their associated life-cycle cost (LCC) consequences, become even more important. These interrelated criteria are the foundation for the reliability allocation and apportionment algorithms derived in this thesis. Reliability allocation is the process of assigning reliability targets to lower-level assemblies to ensure the top-level assembly's goal is achieved. Reliability apportionment involves the analysis of an existing design configuration to determine the most cost-effective means of adding redundancy. In the apportionment problem, acquisition cost is the traditional cost-effectiveness measure. The apportionment algorithm defined herein expands the definition of cost-effectiveness to include downstream costs, thereby addressing LCC. A well-behaved allocation routine is derived to account for any combination of serial, parallel and partially redundant configurations. In addition, a closed-form analytic solution provides the framework for economically adding redundancy to a system's structure in order to achieve a system-level reliability goal. An Apportionment Criterion Ratio (ACR), which contrasts the incremental reliability benefits of adding redundant components with the corresponding incremental LCC, is used. The Rate of Occurrence of Failure (ROCOF) is the reliability metric used in both the allocation and the apportionment routines. The formulation of the LCC model carefully distinguishes between failures and an allied measurement, demands. / Master of Science
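To make the apportionment trade-off concrete, the toy sketch below (an editor's illustration, not the thesis's ACR formulation; the component reliability and unit cost are hypothetical) contrasts the incremental reliability of one more redundant component with its incremental cost, showing the diminishing returns such a ratio is designed to weigh.

```python
# A toy sketch, not the thesis's ACR formulation; the component
# reliability r and the unit life-cycle cost are hypothetical values.
def parallel_reliability(r, n):
    """Reliability of n identical components in active parallel."""
    return 1.0 - (1.0 - r) ** n

r, unit_lcc = 0.90, 1000.0             # hypothetical component data
for n in (1, 2, 3):
    gain = parallel_reliability(r, n + 1) - parallel_reliability(r, n)
    ratio = gain / unit_lcc            # incremental reliability per LCC unit
    print(f"adding unit {n + 1}: +{gain:.4f} reliability, ratio {ratio:.2e}")
```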
25

Recurrent neural networks for inverse kinematics and inverse dynamics computation of redundant manipulators.

January 1999
Tang Wai Sum. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. / Includes bibliographical references (leaves 68-70). / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Redundant Manipulators --- p.1 / Chapter 1.2 --- Inverse Kinematics of Robotic Manipulators --- p.2 / Chapter 1.3 --- Inverse Dynamics of Robotic Manipulators --- p.4 / Chapter 1.4 --- Redundancy Resolutions of Manipulators --- p.5 / Chapter 1.5 --- Motivation of Using Neural Networks for these Applications --- p.9 / Chapter 1.6 --- Previous Work for Redundant Manipulator Inverse Kinematics and Inverse Dynamics Computation by Neural Networks --- p.9 / Chapter 1.7 --- Advantages of the Proposed Recurrent Neural Networks --- p.11 / Chapter 1.8 --- Contribution of this work --- p.11 / Chapter 1.9 --- Organization of this thesis --- p.12 / Chapter 2 --- Problem Formulations --- p.14 / Chapter 2.1 --- Constrained Optimization Problems for Inverse Kinematics Computation of Redundant Manipulators --- p.14 / Chapter 2.1.1 --- Primal and Dual Quadratic Programs for Bounded Joint Velocity Minimization --- p.14 / Chapter 2.1.2 --- Primal and Dual Linear Programs for Infinity-norm Joint Velocity Minimization --- p.15 / Chapter 2.2 --- Constrained Optimization Problems for Inverse Dynamics Computation of Redundant Manipulators --- p.17 / Chapter 2.2.1 --- Quadratic Program for Unbounded Joint Torque Minimization --- p.17 / Chapter 2.2.2 --- Primal and Dual Quadratic Programs for Bounded Joint Torque Minimization --- p.18 / Chapter 2.2.3 --- Primal and Dual Linear Programs for Infinity-norm Joint Torque Minimization --- p.19 / Chapter 3 --- Proposed Recurrent Neural Networks --- p.20 / Chapter 3.1 --- The Lagrangian Network --- p.21 / Chapter 3.1.1 --- Optimality Conditions for Unbounded Joint Torque Minimization --- p.21 / Chapter 3.1.2 --- Dynamical Equations and Architecture --- p.22 / Chapter 3.2 --- The Primal-Dual Network 1 --- p.24 / Chapter 3.2.1 --- Optimality Conditions for Bounded Joint Velocity Minimization --- p.24 / Chapter 3.2.2 --- Dynamical Equations and Architecture for Bounded Joint Velocity Minimization --- p.26 / Chapter 3.2.3 --- Optimality Conditions for Bounded Joint Torque Minimization --- p.27 / Chapter 3.2.4 --- Dynamical Equations and Architecture for Bounded Joint Torque Minimization --- p.28 / Chapter 3.3 --- The Primal-Dual Network 2 --- p.29 / Chapter 3.3.1 --- Energy Function for Infinity-norm Joint Velocity Minimization Problem --- p.29 / Chapter 3.3.2 --- Dynamical Equations for Infinity-norm Joint Velocity Minimization --- p.30 / Chapter 3.3.3 --- Energy Functions for Infinity-norm Joint Torque Minimization Problem --- p.32 / Chapter 3.3.4 --- Dynamical Equations for Infinity-norm Joint Torque Minimization --- p.32 / Chapter 3.4 --- Selection of the Positive Scaling Constant --- p.33 / Chapter 4 --- Stability Analysis of Neural Networks --- p.36 / Chapter 4.1 --- The Lagrangian Network --- p.36 / Chapter 4.2 --- The Primal-Dual Network 1 --- p.38 / Chapter 4.3 --- The Primal-Dual Network 2 --- p.41 / Chapter 5 --- Simulation Results and Network Complexity --- p.45 / Chapter 5.1 --- Simulation Results of Inverse Kinematics Computation in Redundant Manipulators --- p.45 / Chapter 5.1.1 --- Bounded Least Squares Joint Velocities Computation Using the Primal-Dual Network 1 --- p.46 / Chapter 5.1.2 --- Minimum Infinity-norm Joint Velocities Computation Using the Primal-Dual Network 2 --- p.49 / Chapter 5.2 --- Simulation Results of Inverse Dynamics Computation in Redundant Manipulators --- p.51 / Chapter 5.2.1 --- Minimum Unbounded Joint Torques Computation Using the Lagrangian Network --- p.54 / Chapter 5.2.2 --- Minimum Bounded Joint Torques Computation Using the Primal-Dual Network 1 --- p.57 / Chapter 5.2.3 --- Minimum Infinity-norm Joint Torques Computation Using the Primal-Dual Network 2 --- p.59 / Chapter 5.3 --- Network Complexity Analysis --- p.60 / Chapter 6 --- Concluding Remarks and Future Work --- p.64 / Publications Resulted from the Study --- p.66 / Bibliography --- p.68
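For context, the redundancy resolutions the chapter titles refer to generalize a classical baseline (editor's sketch below, not the recurrent networks the thesis proposes): the minimum 2-norm joint velocity has the closed form q̇ = J⁺ẋ, computed here for a hypothetical Jacobian.

```python
# A classical baseline sketch, not the thesis's recurrent networks:
# the minimum 2-norm joint-velocity solution q_dot = pinv(J) @ x_dot.
import numpy as np

J = np.array([[1.0, 0.5, 0.2],         # hypothetical 2x3 Jacobian:
              [0.0, 1.0, 0.7]])        # 3 joints, 2 task coordinates
x_dot = np.array([0.3, -0.1])          # desired end-effector velocity

q_dot = np.linalg.pinv(J) @ x_dot      # least-norm redundancy resolution
print(q_dot)
print(J @ q_dot)                       # reproduces x_dot exactly
```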
26

Network compression via network memory: realization principles and coding algorithms

Sardari, Mohsen 13 January 2014
The objective of this dissertation is to investigate both the theoretical and practical aspects of redundancy elimination methods in data networks. Redundancy elimination provides a powerful technique to improve the efficiency of network links in the face of redundant data. In this work, the concept of network compression is introduced to address the redundancy elimination problem. Network compression aspires to exploit the statistical correlation in data to better suppress redundancy. In a nutshell, network compression enables memorization of data packets in some nodes in the network. These nodes can learn the statistics of the information source generating the packets, which can then be used toward reducing the length of codewords describing the packets emitted by the source. Memory elements facilitate the compression of individual packets using the side information obtained from memorized data, which is called "memory-assisted compression". Network compression improves upon de-duplication methods that only remove duplicate strings from flows. The first part of the work includes the design and analysis of practical algorithms for memory-assisted compression. These algorithms are designed based on the theoretical foundation proposed in our group by Beirami et al. The performance of these algorithms is compared to that of existing compression techniques on real Internet traffic traces. Then, novel clustering techniques are proposed which can identify various information sources and apply the compression accordingly. This approach results in superior performance for memory-assisted compression when the input data comprises sequences generated by various and unrelated information sources. In the second part of the work, the application of memory-assisted compression in wired networks is investigated. In particular, networks with random and power-law graphs are studied. Memory-assisted compression is applied in these graphs and the routing problem for compressed flows is addressed. Furthermore, the network-wide gain of memorization is defined and its scaling behavior versus the number of memory nodes is characterized. In particular, through our analysis on these graphs, we show that a non-vanishing network-wide gain of memorization is obtained even when the number of memory units is a tiny fraction of the total number of nodes in the network. In the third part of the work, the application of memory-assisted compression in wireless networks is studied. For wireless networks, a novel network compression approach via memory-enabled helpers is proposed. Helpers provide side information that is obtained via overhearing. The performance of network compression in wireless networks is characterized and the following benefits are demonstrated: offloading the wireless gateway, increasing the maximum number of mobile nodes served by the gateway, reducing the average packet delay, and improving the overall throughput in the network. Furthermore, the effect of wireless channel loss on the performance of the network compression scheme is studied. Finally, the performance of memory-assisted compression working in tandem with de-duplication is investigated and simulation results on real data traces from wireless users are provided.
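As a rough analogy to memory-assisted compression (an editor's illustration; the dissertation's algorithms build on Beirami et al., not on DEFLATE), the sketch below compresses a packet with and without side information from previously memorized traffic, using zlib's preset-dictionary support. The traffic strings are hypothetical.

```python
# A rough analogy, not the dissertation's algorithms: compress a packet
# with side information (a preset dictionary) drawn from memorized data.
import zlib

memory = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 10
packet = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\nCookie: x=1\r\n"

plain = zlib.compress(packet)               # no memory: packet on its own
c = zlib.compressobj(zdict=memory)          # encoder shares the memory
assisted = c.compress(packet) + c.flush()

print(len(plain), len(assisted))            # assisted output is much shorter
```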
27

Experimental Study Of Fault Cones And Fault Aliasing

Bilagi, Vedanth 01 January 2012
The test of digital integrated circuits compares the test pattern results for the device under test (DUT) to the expected test pattern results of a standard reference. The standard response is typically obtained from simulations. The test pattern and response are created and evaluated assuming ideal test conditions. The standard response is normally stored within automated test equipment (ATE); however, the use of ATE is a major contributor to test cost. This thesis explores an alternative strategy: instead of a stored standard response, the response is estimated by a fault-tolerant technique. The purpose of the fault-tolerant technique is to eliminate the need for a stored standard response and enable online/real-time testing. Fault-tolerant techniques use redundancy and majority voting to estimate the standard response. Redundancy in the circuit, however, leads to fault aliasing, which misleads the majority voter in estimating the standard response. The statistics and phenomenon of aliasing are analyzed for benchmark circuits, and the impact of fault aliasing on test is analyzed with respect to coverage, test escape, and over-kill. The results show that aliasing can be detected with additional test vectors, restoring 100% fault coverage.
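A generic illustration of the voting mechanism follows (an editor's sketch, not the thesis's experimental setup): with triple redundancy a single faulty copy is masked, but if two copies alias to the same faulty response they out-vote the good one.

```python
# A bitwise majority-voter sketch; the responses are hypothetical bit
# vectors, not outputs of the benchmark circuits studied in the thesis.
def majority(responses):
    """Bitwise majority vote over an odd number of equal-length responses."""
    n = len(responses)
    return tuple(int(sum(bits) > n // 2) for bits in zip(*responses))

good   = (1, 0, 1, 1)
faulty = (1, 0, 0, 1)                      # an aliased faulty response
print(majority([good, good, faulty]))      # -> (1, 0, 1, 1): fault masked
print(majority([good, faulty, faulty]))    # -> (1, 0, 0, 1): aliasing wins
```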
28

Uma infra-estrutura confiavel para arquiteturas baseadas em serviços Web aplicada a pesquisa de biodiversidade / A dependable infrastructure for service-oriented architectures applied to biodiversity research

Gonçalves, Eduardo Machado 15 August 2018
Orientador: Cecilia Mary Fischer Rubira / Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Computação / Previous issue date: 2009 / Resumo: A Arquitetura Orientada a Serviços (SOA) é responsável por mapear os processos de negócios relevantes aos seus serviços correspondentes que, juntos, agregam o valor final ao usuário. Esta arquitetura deve atender aos principais requisitos de dependabilidade, entre eles, alta disponibilidade e alta confiabilidade da solução baseada em serviços. O objetivo deste trabalho é desenvolver uma infra-estrutura de software, chamada de Arquitetura Mediador, que atua na comunicação entre os clientes dos serviços e os próprios serviços Web, a fim de implementar técnicas de tolerância a falhas que façam uso efetivo das redundâncias de serviços disponíveis. A Arquitetura Mediador foi projetada para ser acessível remotamente via serviços Web, de forma que o impacto na sua adoção seja minimizado. A validação da solução proposta foi feita usando aplicações baseadas em serviços Web implementadas no projeto BioCORE. Tal projeto visa apoiar biólogos nas suas atividades de pesquisa de manutenção do acervo de informações sobre biodiversidade de espécies / Abstract: The Service-Oriented Architecture (SOA) is responsible for mapping the relevant business processes to the services that, together, add value for the end user. This architecture must meet the main dependability requirements of the service-based solution, among them high availability and high reliability. The objective of this work is to develop a software infrastructure, called Arquitetura Mediador, that mediates the communication between web-service clients and the web services themselves, in order to implement fault tolerance techniques that make effective use of the available service redundancies. The Arquitetura Mediador infrastructure was designed to be remotely accessible via web services, so that the impact of adopting it is minimized. The proposed solution was validated using web-service-based applications implemented in the BioCORE project, which aims to support biologists in their research activities and in maintaining the collection of information about species biodiversity / Mestrado / Engenharia de Software / Mestre em Ciência da Computação
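The mediator idea can be sketched as a simple failover pattern (an editor's illustration, not the Arquitetura Mediador implementation): invoke redundant service replicas in turn until one succeeds, masking individual failures from the client.

```python
# A minimal failover sketch; a generic pattern, not the Arquitetura
# Mediador code. Replica endpoints are modeled as plain callables.
def mediate(replicas, request):
    """Invoke redundant service replicas in turn; return the first reply."""
    last_error = None
    for call in replicas:
        try:
            return call(request)
        except Exception as err:       # replica failed: mask it, try the next
            last_error = err
    raise RuntimeError("all replicas failed") from last_error

def broken_replica(query):             # hypothetical failing endpoint
    raise IOError("replica down")

def healthy_replica(query):            # hypothetical working endpoint
    return f"record for {query}"

print(mediate([broken_replica, healthy_replica], "Panthera onca"))
```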
29

Desenvolvimento de um software para detecção de erros grosseiros e reconciliação de dados estática e dinâmica de processos químicos e petroquímicos / Development of software for static and dynamic gross error detection and data reconciliation of chemical and petrochemical processes

Barbosa, Agremis Guinho 22 August 2018
Orientador: Rubens Maciel Filho / Tese (doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Química / Previous issue date: 2008 / Resumo: O principal objetivo deste trabalho foi o desenvolvimento de um software para reconciliação de dados, detecção e identificação de erros grosseiros, estimativa de parâmetros e monitoramento da qualidade da informação em unidades industriais em estado estacionário e dinâmico. O desenvolvimento desse software focalizou atender aos critérios de modularidade, extensibilidade e facilidade de uso. A reconciliação de dados é um procedimento de tratamento de medidas em plantas de processos necessário devido ao fato da inexorável presença de erros aleatórios de pequena magnitude associados aos valores obtidos dos equipamentos de medição. Além dos erros aleatórios, por vezes os dados estão associados a erros de maior magnitude e que constituem uma tendência, ou viés. Erros desta natureza podem ser qualificados e quantificados por técnicas de detecção de erros grosseiros. É importante para aplicação de subrotinas de otimização que os dados sejam confiáveis e livres de erros tanto quanto possível. A tarefa da remoção destes erros através de modelos previamente conhecidos (reconciliação de dados) não é trivial, já sendo estudada no campo da engenharia química nos últimos 40 anos e apresenta uma crescente quantidade de trabalhos publicados. Contudo, uma parte destes trabalhos é voltada para aplicação da reconciliação sobre equipamentos isolados, como tanques, reatores e colunas de destilação, ou pequenos conjuntos destes equipamentos e não são muitos os trabalhos que utilizam dados reais de operação. Isto pode ser atribuído à dimensão do trabalho computacional associado ao grande número de variáveis. O que se propõe neste trabalho é tomar partido da crescente capacidade computacional e das modernas ferramentas de desenvolvimento, provendo uma aplicação na qual seja facilitada a tarefa de descrever sistemas de maior dimensão, para estimar dados de qualidade superior, em tempo hábil, para sistemas de controle e otimização. É importante frisar que a reconciliação de dados e a detecção de erros grosseiros são fundamentais para a confiabilidade de resultados de subrotinas de otimização e controle supervisório e também pode ser utilizada para a reconstrução de estados do processo / Abstract: The main goal of this work was the development of software for data reconciliation, gross error detection and identification, parameter estimation, and information quality monitoring in industrial units under steady-state and dynamic operation. The development of this software focused on meeting the criteria of modularity, extensibility, and user friendliness. Data reconciliation is a procedure for the treatment of measurements in process plants, necessary due to the inexorable presence of small-magnitude random errors in the values obtained from measurement devices. In addition to random errors, data are sometimes associated with errors of larger magnitude that constitute a trend, or bias. Errors of this nature can be qualified and quantified through gross error detection techniques. It is important for optimization routines that data are reliable and as error-free as possible. The task of removing these errors using previously known models (data reconciliation) is not trivial; it has been studied in the field of chemical engineering for the last 40 years, with an increasing number of published works. However, part of these works is devoted to applying reconciliation to isolated equipment, such as tanks, reactors, and distillation columns, or to small sets of such equipment, and not many studies rely on real operation data. This can be attributed to the dimension of the computational work associated with the large number of variables. This work proposes to take advantage of increasing computational capacity and modern development tools to provide an application that eases the task of describing higher-dimension systems, in order to produce data estimates of superior quality, within a suitable time frame, for control and optimization systems. It is worth mentioning that data reconciliation and gross error detection are fundamental to the reliability of results from supervisory control and optimization routines, and can also be used for process state reconstruction / Doutorado / Desenvolvimento de Processos Químicos / Doutor em Engenharia Química
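The core computation can be sketched in its classical linear form (an editor's illustration with hypothetical flow measurements, not the software itself): reconcile measurements x against linear balance constraints Ax = 0 by the closed-form weighted least-squares projection x̂ = x - SAᵀ(ASAᵀ)⁻¹Ax, where S is the measurement covariance.

```python
# A classical least-squares sketch; the software's actual formulation,
# including gross error detection and the dynamic case, is richer.
import numpy as np

A = np.array([[1.0, -1.0, -1.0]])           # balance: flow1 = flow2 + flow3
x = np.array([10.2, 5.1, 4.5])              # raw measurements (imbalance 0.6)
S = np.diag([0.1, 0.05, 0.05])              # hypothetical variances

x_hat = x - S @ A.T @ np.linalg.solve(A @ S @ A.T, A @ x)
print(x_hat)                                # -> [9.9, 5.25, 4.65]
print(A @ x_hat)                            # balance residual is now zero
```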
30

A non-conventional multilevel flying-capacitor converter topology

Gulpinar, Feyzullah January 2014
Indiana University-Purdue University Indianapolis (IUPUI) / This research presents state-of-the-art multilevel converter topologies and their modulation strategies, the implementation of a conventional flying-capacitor converter topology up to four levels, and a new four-level flying-capacitor H-Bridge converter configuration. The three-phase version of this proposed four-level flying-capacitor H-Bridge converter is given as well in this study. The highlighted advantages of the proposed converter are as follows: (1) the same blocking voltage for all switches employed in the configuration, (2) no capacitor midpoint connection is needed, (3) a reduced number of passive elements compared to the conventional solution, (4) a reduced total dc source value in comparison with the conventional topology. The proposed four-level capacitor-clamped H-Bridge converter can be utilized as a multilevel inverter in electrified railway systems or in hybrid electric vehicles. In addition to the implementation of the proposed topology, an experimental setup has been designed to validate the simulation results of the given converter topologies.
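To show where the redundancy in a flying-capacitor leg comes from (a toy enumeration under idealized, balanced capacitor voltages; an editor's assumption, and not the proposed H-Bridge itself), the sketch below lists every switch state of a three-cell leg and the output level it produces. The interior levels are reachable by several redundant states, which is what capacitor-voltage balancing exploits.

```python
# A toy enumeration assuming ideal, balanced capacitor voltages; not a
# model of the thesis's proposed four-level H-Bridge configuration.
from itertools import product

Vdc, cells = 300.0, 3                  # three cells -> four output levels
levels = {}
for state in product((0, 1), repeat=cells):
    v = sum(state) * Vdc / cells       # balanced caps: each cell adds Vdc/3
    levels.setdefault(v, []).append(state)

for v, states in sorted(levels.items()):
    print(f"{v:6.1f} V  <-  {states}") # interior levels have redundant states
```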
