  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world.
231

Integrated Reliability and Availability Analysis of Networks With Software Failures and Hardware Failures

Hou, Wei 17 May 2003 (has links)
This dissertation explores efficient algorithms and engineering methodologies for analyzing the overall reliability and availability of networks subject to both software failures and hardware failures. Node failures, link failures, and software failures are considered concurrently and dynamically in networks with complex topologies. The MORIN (MOdeling Reliability for Integrated Networks) method is proposed and discussed as an approach for analyzing the reliability of integrated networks. A Simplified Availability Modeling Tool (SAMOT) is developed and introduced to evaluate and analyze the availability of networks consisting of software and hardware component systems with architectural redundancy. The dissertation reviews relevant research efforts in network reliability and availability analysis, presents experimental results for the proposed MORIN methodology and the SAMOT application, and summarizes recommendations for future research in network reliability.
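Neither MORIN nor SAMOT is specified in enough detail in this abstract to reproduce; as a hedged illustration of the standard series/parallel availability algebra that availability tools of this kind build on (all function names below are ours, not the dissertation's), consider:

```python
def availability(mtbf, mttr):
    # Steady-state availability of a single repairable component:
    # uptime fraction = MTBF / (MTBF + MTTR).
    return mtbf / (mtbf + mttr)

def series_availability(parts):
    # Series structure: every component must be up for the system to be up.
    result = 1.0
    for a in parts:
        result *= a
    return result

def parallel_availability(parts):
    # Architectural redundancy: the system is down only if all copies are down.
    down = 1.0
    for a in parts:
        down *= (1.0 - a)
    return 1.0 - down
```

For example, a component with an MTBF of 1000 hours and an MTTR of 10 hours has availability of about 0.990; deploying a redundant pair raises this to roughly 0.9999, which is the kind of effect a tool like SAMOT quantifies for mixed software/hardware systems.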
232

Vers des robots et machines parallèles rapides et précis / Towards Rapid and Precise Parallel Kinematic Machines

Shayya, Samah Aref 19 February 2015 (has links)
Parallel manipulators (PMs) have existed for more than half a century and have been the subject of intensive research. In contrast to their serial counterparts, PMs consist of several kinematic chains that connect the fixed base to the moving platform. The interest in such architectures is due to the several advantages they offer, among which: high rigidity and payload-to-weight ratio, elevated dynamic capabilities due to reduced moving masses (especially when the actuators are at or near the base), better precision, higher natural frequencies, etc. Nevertheless, despite these merits, their exploitation as machine tools remains timid and limited, and they most often do not exceed the research and prototyping stages at university laboratories and machine-tool manufacturers.
The main drawbacks that hinder the widespread adoption of parallel kinematic machines (PKMs) are the following: limited operational workspace and tilting capacity, the presence of singular configurations, design complexity, calibration difficulties, collision-related problems, sophistication of control (especially in the case of actuation redundancy), etc. Besides, though PMs have met with great success in pick-and-place applications thanks to their rapidity (acceleration capacity), their precision remains below what was initially anticipated. On the other hand, extremely precise PMs exist, but unfortunately with poor dynamic performance. Starting from these problems, the current thesis focuses on obtaining PKMs with a good compromise between rapidity and precision. We begin by providing a survey of the available literature regarding PKMs and the major advancements in this field, while emphasizing the shortcomings at the level of design as well as performance. Moreover, an overview of the state of the art in performance evaluation is presented, and the inadequacies of classical measures when dealing with redundancy and heterogeneous degrees of freedom are highlighted. In fact, if finding proper architectures is one of the prominent issues hindering PKMs' adoption, the performance evaluation criteria upon which these PKMs are dimensionally synthesized are of equal importance. Therefore, novel performance indices are proposed to assess the precision, kinetostatic, and dynamic capabilities of general manipulators, while overcoming the aforementioned difficulties. Subsequently, several novel architectures with 3T-2R and 3T-1R degrees of freedom (T and R signify translational and rotational degrees of freedom), namely MachLin5, ARROW V1, and ARROW V2 with its derived versions ARROW V2 M1/M2, are presented.
Furthermore, the dimensional synthesis of the executed PKM, namely ARROW V2 M2, is discussed with its preliminary performances and possible future enhancements, particularly regarding precision amelioration.
233

Efficient Homology Search for Genomic Sequence Databases

Cameron, Michael, mcam@mc-mc.net January 2006 (has links)
Genomic search tools can provide valuable insights into the chemical structure, evolutionary origin and biochemical function of genetic material. A homology search algorithm compares a protein or nucleotide query sequence to each entry in a large sequence database and reports alignments with highly similar sequences. The exponential growth of public data banks such as GenBank has necessitated the development of fast, heuristic approaches to homology search. The versatile and popular blast algorithm, developed by researchers at the US National Center for Biotechnology Information (NCBI), uses a four-stage heuristic approach to efficiently search large collections for analogous sequences while retaining a high degree of accuracy. Despite an abundance of alternative approaches to homology search, blast remains the only method to offer fast, sensitive search of large genomic collections on modern desktop hardware. As a result, the tool has found widespread use with millions of queries posed each day. A significant investment of computing resources is required to process this large volume of genomic searches and a cluster of over 200 workstations is employed by the NCBI to handle queries posed through the organisation's website. As the growth of sequence databases continues to outpace improvements in modern hardware, blast searches are becoming slower each year and novel, faster methods for sequence comparison are required. In this thesis we propose new techniques for fast yet accurate homology search that result in significantly faster blast searches. First, we describe improvements to the final, gapped alignment stages where the query and sequences from the collection are aligned to provide a fine-grain measure of similarity. We describe three new methods for aligning sequences that roughly halve the time required to perform this computationally expensive stage. 
Next, we investigate improvements to the first stage of search, where short regions of similarity between a pair of sequences are identified. We propose a novel deterministic finite automaton data structure that is significantly smaller than the codeword lookup table employed by ncbi-blast, resulting in improved cache performance and faster search times. We also discuss fast methods for nucleotide sequence comparison. We describe novel approaches for processing sequences that are compressed using the byte-packed format already utilised by blast, where four nucleotide bases from a strand of DNA are stored in a single byte. Rather than decompress sequences to perform pairwise comparisons, our innovations permit sequences to be processed in their compressed form, four bases at a time. Our techniques roughly halve average query evaluation times for nucleotide searches with no effect on the sensitivity of blast. Finally, we present a new scheme for managing the high degree of redundancy that is prevalent in genomic collections. Near-duplicate entries in sequence data banks are highly detrimental to retrieval performance; however, existing methods for managing redundancy are both slow, requiring almost ten hours to process the GenBank database, and crude, because they simply purge highly similar sequences to reduce the level of internal redundancy. We describe a new approach for identifying near-duplicate entries that is roughly six times faster than the most successful existing approaches, and a novel approach to managing redundancy that reduces collection size and search times but still provides accurate and comprehensive search results. Our improvements to blast have been integrated into our own version of the tool. We find that our innovations more than halve average search times for nucleotide and protein searches, and have no significant effect on search accuracy.
Given the enormous popularity of blast, this represents a very significant advance in computational methods to aid life science research.
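The thesis's actual data structures are not reproduced in this abstract; the toy sketch below (names and layout are our assumptions, not the thesis's code) illustrates the general byte-packed idea it describes: four 2-bit bases per byte, compared four at a time via a precomputed table rather than after decompression.

```python
BASE_BITS = {"A": 0, "C": 1, "G": 2, "T": 3}

def pack(seq):
    # Pack four 2-bit bases per byte, as in blast's byte-packed format.
    out = bytearray()
    for i in range(0, len(seq), 4):
        chunk = seq[i:i + 4]
        b = 0
        for base in chunk:
            b = (b << 2) | BASE_BITS[base]
        b <<= 2 * (4 - len(chunk))  # pad a final partial byte with zero bits
        out.append(b)
    return bytes(out)

# Precompute, for every possible XOR of two packed bytes, how many of the
# four base positions agree (a zero 2-bit group means the bases matched).
MATCHES = []
for x in range(256):
    m = 0
    for shift in (6, 4, 2, 0):
        if (x >> shift) & 0b11 == 0:
            m += 1
    MATCHES.append(m)

def matching_bases(packed_a, packed_b):
    # Compare compressed sequences four bases at a time, without decompressing.
    return sum(MATCHES[a ^ b] for a, b in zip(packed_a, packed_b))
```

The table-driven comparison touches one byte per four bases, which is the source of the cache-friendliness and speed the abstract alludes to.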
234

Measuring Closeness to Singularities of Parallel Manipulators with Application to the Design of Redundant Actuation

Voglewede, Philip Anthony 16 April 2004 (has links)
At a platform singularity, a parallel manipulator loses constraint. Adding redundant actuation in an existing leg or new leg can eliminate these types of singularities. However, redundant manipulators have been designed with little attention to frame invariant techniques. In this dissertation, physically meaningful measures for closeness to singularities in non-redundant manipulators are developed. Two such frameworks are constructed. The first framework is a constrained optimization problem that unifies seemingly unrelated existing measures and facilitates development of new measures. The second is a clearance propagation technique based on workspace generation. These closeness measures are expanded to include redundancy and thus can be used as objective functions for designing redundant actuation. The constrained optimization framework is applied to a planar three degree of freedom redundant parallel manipulator to show feasibility of the technique.
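The dissertation's frame-invariant measures are not detailed in this abstract. For orientation, here is a minimal sketch of the kind of classical, frame-dependent closeness measure such work improves upon: for a square 2x2 Jacobian, Yoshikawa-style manipulability reduces to |det J|, which vanishes exactly at a singularity but changes with the choice of frame and units (the function name is ours):

```python
def manipulability(j):
    # |det J| for a 2x2 manipulator Jacobian j = [[a, b], [c, d]].
    # Zero exactly at a singularity, but NOT frame-invariant: rescaling
    # units or moving the reference frame changes its value, which is
    # the shortcoming physically meaningful measures aim to fix.
    (a, b), (c, d) = j
    return abs(a * d - b * c)
```

A well-conditioned configuration gives a comfortably nonzero value, while rows becoming linearly dependent (the platform losing constraint) drives it to zero.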
235

Reliability Analysis Process and Reliability Improvement of an Inertial Measurement Unit (IMU)

Unlusoy, Ozlem 01 September 2010 (has links) (PDF)
Reliability is one of the most critical performance measures of guided missile systems and is directly related to missile mission success. To achieve a high reliability value, reliability analysis should be carried out at all phases of system design; doing so helps the designer make reliability-related design decisions in time and update the system design accordingly. In this study, the reliability analysis process performed during the conceptual design phase of the Inertial Measurement Unit (IMU) of a Medium Range Anti-Tank Missile System is introduced. From the reliability requirement for the overall system, an expected IMU reliability value was derived using reliability allocation methods. A reliability prediction for the IMU was then calculated using the Relex software, and the allocated and predicted reliability values were compared. Since the predicted reliability of the IMU did not meet the required value, a reliability improvement analysis was carried out.
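The abstract does not say which allocation method was used; as a hedged illustration, here is one common scheme, equal apportionment for a series system, in which each of the n subsystems is allocated the n-th root of the system target (function names are ours):

```python
def equal_apportionment(system_target, n_subsystems):
    # Series system: R_sys = product of subsystem reliabilities.
    # Equal apportionment gives each subsystem R_i = R_sys ** (1/n),
    # so that the product of the n allocations recovers the target.
    return system_target ** (1.0 / n_subsystems)

def series_reliability(rs):
    # Reliability of a series system from its subsystem reliabilities.
    r = 1.0
    for x in rs:
        r *= x
    return r
```

Comparing such an allocated value against a parts-count or parts-stress prediction (as done here with Relex) is what reveals whether a subsystem like the IMU needs improvement.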
236

Προσομοίωση φυσικού επιπέδου και επιπέδου σύνδεσης δεδομένων ασύρματου δικτύου ιατρικών αισθητήρων / Physical layer and data link layer simulation of a wireless medical sensor network

Καρκάνης, Xαράλαμπος 29 June 2007 (has links)
The purpose of this master's thesis was to analyze, in terms of error probability, a telecommunication system that transmits medical information wirelessly between two nodes of a medical sensor network. The network includes a supervising node that forwards the collected data to a base station located in a hospital. Transmission of the medical information is accomplished by a transceiver embedded in every node of the wireless network: the XE1209 transceiver from Xemics S.A., which uses 2-CPFSK modulation with a carrier frequency of 36.86 kHz. Before the medical information is transmitted, it undergoes processing to protect it from the ubiquitous noise so that it reaches the receiver intact. This processing includes a Cyclic Redundancy Check (CRC) and the application of a Forward Error Correction (FEC) scheme.
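As an illustration of the CRC step mentioned above (the thesis does not state which polynomial it uses; CRC-16/CCITT-FALSE is assumed here purely as an example), a bitwise implementation looks like this:

```python
def crc16_ccitt(data, poly=0x1021, init=0xFFFF):
    # Bitwise CRC-16/CCITT-FALSE over a byte string. The sender appends
    # the checksum; the receiver recomputes it to DETECT corruption.
    # FEC (not shown) goes further and corrects a bounded number of errors.
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc
```

A single flipped bit in the payload changes the checksum, which is how the receiver in such a link layer rejects corrupted frames and requests or reconstructs the data.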
237

Resilient Cloud Computing and Services

Fargo, Farah Emad January 2015 (has links)
Cloud computing is emerging as a new paradigm that aims at delivering computing as a utility. For the cloud computing paradigm to be fully adopted and effectively used, it is critical that its security mechanisms be robust and resilient to malicious faults and attacks. Securing the cloud is a challenging research problem because it suffers both from the current cybersecurity problems of computer networks and data centers and from the additional complexity introduced by virtualization, multi-tenant occupancy, remote storage, and cloud management. It is widely accepted that we cannot build software and computing systems that are free from vulnerabilities and that cannot be penetrated or attacked. Furthermore, it is widely accepted that cyber-resilient techniques are the most promising solutions to mitigate cyberattacks and change the game to the advantage of the defender over the attacker. Moving Target Defense (MTD) has been proposed as a mechanism to make it extremely challenging for an attacker to exploit existing vulnerabilities by varying different aspects of the execution environment. By continuously changing the environment (e.g., programming language, operating system, etc.), we can reduce the attack surface; consequently, attackers have very limited time to figure out the current execution environment and the vulnerabilities to be exploited. In this dissertation, we present a methodology to develop an Autonomic Resilient Cloud Management (ARCM) framework based on MTD and autonomic computing. The proposed research utilizes the following capabilities: Software Behavior Obfuscation (SBO), replication, diversity, and Autonomic Management (AM). SBO employs spatiotemporal behavior hiding or encryption and MTD to make software components change their implementation versions and resources randomly, in order to avoid exploitation and penetration.
Diversity and random execution are achieved by using AM to randomly "hot"-shuffle multiple functionally equivalent, behaviorally different software versions at runtime (e.g., a software task can have multiple versions implemented in different languages and/or running on different platforms). This execution environment encryption makes it extremely difficult for an attack to disrupt normal cloud operations. In this work, we evaluated the performance overhead and effectiveness of the proposed ARCM approach in securing and protecting a wide range of cloud applications such as MapReduce and scientific and engineering applications.
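ARCM's actual machinery is far richer than can be shown here; the toy sketch below keeps only the core "hot shuffle" idea, picking one of several functionally equivalent, behaviorally different variants at random on each invocation (all names are illustrative stand-ins, not the dissertation's components):

```python
import random

def make_variants():
    # Functionally equivalent, behaviorally different implementations of
    # the same task -- stand-ins for versions in different languages or
    # on different platforms, as the dissertation describes.
    def sum_loop(xs):
        total = 0
        for x in xs:
            total += x
        return total

    def sum_builtin(xs):
        return sum(xs)

    def sum_recursive(xs):
        return xs[0] + sum_recursive(xs[1:]) if xs else 0

    return [sum_loop, sum_builtin, sum_recursive]

def run_with_mtd(task_input, variants, rng=random):
    # Each invocation randomly selects a variant, so an attacker cannot
    # rely on observing a fixed execution environment across requests.
    return rng.choice(variants)(task_input)
```

Because every variant computes the same result, the shuffle is invisible to legitimate users while denying an attacker a stable target.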
238

Mémorisation de séquences dans des réseaux de neurones binaires avec efficacité élevée / Storing sequences in binary neural networks with high efficiency

JIANG, Xiaoran 08 January 2014 (has links) (PDF)
Sequential structure imposed by the forward linear progression of time is omnipresent in all cognitive behaviors. This thesis proposes a novel model to store sequences of any length, scalar or vectorial, in binary neural networks. In particular, the model we introduce resolves some well known problems in sequential learning, such as error intolerance, catastrophic forgetting, and the interference issue that arises when storing complex sequences. The total amount of sequential information the network is able to store grows quadratically with the number of nodes, and the efficiency - the ratio between the capacity and the total amount of information consumed by the storage device - can reach around 30%. This work can be considered an extension of the non-oriented clique-based neural networks previously proposed and studied within our team. Those networks, composed of binary neurons and binary connections, utilize graph redundancy and sparsity in order to acquire a quadratic learning diversity. To obtain the ability to store sequences, connections are given an orientation, forming a tournament-based neural network. This is more natural biologically speaking, since communication between neurons is unidirectional, from axons to synapses. Any component of the network, a cluster or a node, can be revisited several times within a sequence or by multiple sequences. This allows the network to store sequences of any length, independent of the number of clusters and limited only by the total available resources of the network. Moreover, in order to allow error correction and provide robustness, both spatial assembly redundancy and sequential redundancy, with or without anticipation, may be combined to offer a large amount of redundancy in the activation of a node. Subsequently, a double-layered structure is introduced for the purpose of accurate retrieval.
The lower layer, a tournament-based hetero-associative network, stores oriented sequential associations between patterns. An upper auto-associative layer of mirror nodes is superposed to emphasize the co-occurrence of the elements belonging to the same pattern, in the form of a clique. This model is then extended to a hierarchical structure, which helps resolve the interference issue when storing complex sequences. This thesis also contributes by proposing and assessing new decoding rules for sparse messages, in order to fully benefit from the theoretical quadratic law of learning diversity. Beyond performance, biological plausibility is a constant concern throughout this work.
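The clustered, redundant networks described above cannot be reconstructed from the abstract alone; the toy sketch below keeps only the directed-edge (tournament) idea of storing a sequence as oriented associations between successive elements, and it even exhibits the interference problem that the hierarchical extension addresses (all names are ours, not the thesis's):

```python
class TournamentSequenceMemory:
    # Minimal sketch: a sequence over an alphabet is stored as directed
    # binary connections between nodes for successive symbols. The real
    # model adds clusters, sparsity, and assembly/sequential redundancy;
    # only the oriented-connection idea is kept here.
    def __init__(self):
        self.edges = set()

    def store(self, seq):
        # One directed edge per consecutive pair of elements.
        for a, b in zip(seq, seq[1:]):
            self.edges.add((a, b))

    def retrieve(self, start, length):
        # Follow outgoing edges; stop when the successor is missing or
        # ambiguous (interference between stored sequences).
        out = [start]
        cur = start
        for _ in range(length - 1):
            successors = [b for (a, b) in self.edges if a == cur]
            if len(successors) != 1:
                break
            cur = successors[0]
            out.append(cur)
        return out
```

Storing a second sequence that reuses a node makes that node's successor ambiguous, which is exactly the interference the double-layered and hierarchical structures are introduced to disambiguate.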
239

Elimination of redundant polymorphism queries in object-oriented design patterns

Brown, Rhodes Hart Fraser 07 May 2010 (has links)
This thesis presents an investigation of two new techniques for eliminating redundancy inherent in uses of dynamic polymorphism operations such as virtual dispatches and type tests. The novelty of both approaches derives from taking a subject-oriented perspective which considers multiple applications to the same run-time values, as opposed to previous site-oriented reductions which treat each operation independently. The first optimization (redundant polymorphism elimination -- RPE) targets reuse over intraprocedural contexts, while the second (instance-specializing polymorphism elimination -- ISPE) considers repeated uses of the same fields over the lifetime of individual object and class instances. In both cases, the specific formulations of the techniques are guided by a study of intentionally polymorphic constructions as seen in applications of common object-oriented design patterns. The techniques are implemented in Jikes RVM for the dynamic polymorphism operations supported by the Java programming language, namely virtual and interface dispatching, type tests, and type casts. In studying the complexities of Jikes RVM's adaptive optimization system and run-time environment, an improved evaluation methodology is derived for characterizing the performance of adaptive just-in-time compilation strategies. This methodology is applied to demonstrate that the proposed optimization techniques yield several significant improvements when applied to the DaCapo benchmarks. Moreover, dramatic improvements are observed for two programs designed to highlight the costs of redundant polymorphism. In the case of the intraprocedural RPE technique, a speedup of 14% is obtained for a program designed to focus on the costs of polymorphism in applications of the Iterator pattern. For the instance-specific technique, an improvement of 29% is obtained for a program designed to focus on the costs inherent in constructions similar to the Decorator pattern.
Further analyses also point to several ways in which the results of this work may be used to complement and extend existing optimization techniques, and provide clarification regarding the role of polymorphism in object-oriented design.
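The thesis's optimizations operate automatically inside a JIT compiler; as a very loose, hedged analogue of the subject-oriented idea (reusing the result of a polymorphic lookup when the same receiver is used repeatedly), here is an illustrative Python sketch, not the thesis's Jikes RVM implementation:

```python
class Shape:
    def area(self):
        raise NotImplementedError

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side * self.side

def total_area_naive(shapes):
    # Site-oriented view: each iteration performs a fresh dynamic
    # method lookup on the receiver.
    return sum(sh.area() for sh in shapes)

def total_area_hoisted(shape, n):
    # Subject-oriented view: the receiver is the SAME object every time,
    # so the lookup result (the bound method) can be computed once and
    # reused -- a manual stand-in for what RPE does automatically.
    area = shape.area  # one lookup
    return sum(area() for _ in range(n))
```

The two functions compute the same result; the second simply avoids re-resolving the polymorphic call for a receiver already known, which is the flavor of redundancy the dissertation eliminates.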
240

Development of a Database Management System for Small and Medium Sized Enterprises

Safak, Cigdem 01 May 2005 (has links) (PDF)
Databases and database technology have become an essential component of everyday life in modern society. As databases are widely used in every organization with a computer system, control of data resources and management of data are very important. A Database Management System (DBMS) is the most significant tool developed to serve multiple users in a database environment, consisting of programs that enable users to create and maintain a database. The Windows Distributed Internet Applications (DNA) architecture describes a framework for building software technologies together in an integrated web and client-server model of computing. This thesis focuses on the development of a general database management system for small and medium sized manufacturing enterprises using Windows DNA technology. Defining, constructing, and manipulating the institutional, commercial, and operational data of the company is the main frame of the work. In addition, by integrating the "Optimization" and "Agent" system components, which were previously developed in the Computer Integrated Manufacturing Laboratory (METUCIM) of the Mechanical Engineering Department at Middle East Technical University, into the SME DBMS, a unified information system is developed. The "Optimization" system was developed to calculate optimum cutting conditions for turning and milling operations. The "Agent" system was implemented to control and send work orders to the available manufacturing cell in METUCIM. The components of these systems are redesigned to share a single database together with the newly developed "SME Information System" application program, in order to control data redundancy and to provide data sharing and data integrity.
