  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
331

Shuffle-X graphs and their Cayley variants

陳貴海, Chen, Guihai. January 1997 (has links)
published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy
332

Improved algorithms for some classical graph problems

Chong, Ka-wong., 莊家旺 January 1996 (has links)
published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy
333

Towards a new extension relation for compositional test case generation for CSP concurrent processes

Chan, Wing-kwong., 陳榮光. January 2003 (has links)
published_or_final_version / Computer Science and Information Systems / Doctoral / Doctor of Philosophy
334

Kinematics, dynamics and control of high precision parallel manipulators

Cheung, Wing-fung, Jacob., 張穎鋒. January 2007 (has links)
published_or_final_version / abstract / Electrical and Electronic Engineering / Doctoral / Doctor of Philosophy
335

Formalized parallel dense linear algebra and its application to the generalized eigenvalue problem

Poulson, Jack Lesly 03 September 2009 (has links)
This thesis demonstrates an efficient parallel method of solving the generalized eigenvalue problem, KΦ = MΦΛ, where K is symmetric and M is symmetric positive-definite, by first converting it to a standard eigenvalue problem, solving the standard eigenvalue problem, and back-transforming the results. An abstraction for parallel dense linear algebra is introduced along with a new algorithm for forming A := U⁻ᵀKU⁻¹, where U is the Cholesky factor of M, that is up to twice as fast as the ScaLAPACK implementation. Additionally, large improvements over the PBLAS implementations of general matrix-matrix multiplication and triangular solves with many right-hand sides are shown. Significant performance gains are also demonstrated for Cholesky factorizations, and a case is made for using 2D-cyclic distributions with a distribution blocksize of one. / text
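The convert/solve/back-transform reduction this abstract describes can be sketched in a few lines of NumPy. This is an illustrative serial sketch of the underlying mathematics, not the thesis's parallel implementation; the matrix sizes and random test data are hypothetical:

```python
import numpy as np

# Generalized eigenproblem K Phi = M Phi Lambda, K symmetric, M s.p.d.
rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, n))
K = B + B.T                          # symmetric K
C = rng.standard_normal((n, n))
M = C @ C.T + n * np.eye(n)          # symmetric positive-definite M

U = np.linalg.cholesky(M).T          # M = U^T U with U upper-triangular
Y = np.linalg.solve(U.T, K)          # Y = U^{-T} K (triangular solve)
A = np.linalg.solve(U.T, Y.T).T      # A = U^{-T} K U^{-1}: standard problem
lam, X = np.linalg.eigh(A)           # solve the standard symmetric problem
Phi = np.linalg.solve(U, X)          # back-transform: Phi = U^{-1} X

# The generalized eigen-relation holds for the recovered pairs.
assert np.allclose(K @ Phi, M @ Phi @ np.diag(lam))
```

The two triangular solves that form A are the step the thesis accelerates relative to ScaLAPACK.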
336

Exploiting data parallelism in artificial neural networks with Haskell

Heartsfield, Gregory Lynn 2009 August 1900 (has links)
Functional parallel programming techniques for feed-forward artificial neural networks trained using backpropagation learning are analyzed. In particular, the Data Parallel Haskell extension to the Glasgow Haskell Compiler is considered as a tool for achieving data parallelism. We find much potential and elegance in this method, and determine that a sufficiently large workload is critical in achieving real gains. Several additional features are recommended to increase usability and improve results on small datasets. / text
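The data-parallel idea the abstract describes — evaluating and training the network over a whole batch of inputs with single whole-array operations — can be mimicked outside Haskell. A minimal NumPy analogue with one hidden layer, hypothetical sizes, and random toy data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((64, 3))      # batch of 64 inputs: the data-parallel axis
y = rng.standard_normal((64, 1))      # toy targets
W1 = rng.standard_normal((3, 8))      # input -> hidden weights
W2 = rng.standard_normal((8, 1))      # hidden -> output weights

def forward(X):
    H = np.tanh(X @ W1)               # one array op covers the whole batch
    return H, H @ W2

def loss():
    return float(np.mean((forward(X)[1] - y) ** 2))

loss_before = loss()
for _ in range(200):                  # batched backpropagation
    H, out = forward(X)
    err = (out - y) / len(X)          # output-layer error, averaged over batch
    gW2 = H.T @ err
    gW1 = X.T @ ((err @ W2.T) * (1 - H ** 2))
    W1 -= 0.1 * gW1
    W2 -= 0.1 * gW2
loss_after = loss()
assert loss_after < loss_before       # training reduces batch MSE
```

As in the thesis's finding, the per-step work here is dominated by whole-batch matrix products, so the benefit of parallelism grows with the workload size.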
337

Parallel and distributed cyber-physical system simulation

Pfeifer, Dylan Conrad 06 November 2014 (has links)
The traditions of real-time and embedded system engineering have evolved into a new field of cyber-physical systems (CPSs). The increasing complexity of CPS components and the multi-domain engineering composition of CPSs challenge current best practices in design and simulation. To address the challenges of CPS simulation, this work introduces a simulator coordination method that draws on strengths of the field of parallel and distributed simulation (PADS) while targeting the challenges of coordinating CPS engineering design simulators. The method offers the novel concept of Interpolated Event data types applied to Kahn Process Networks to provide simulator coordination. This enables conservative and optimistic coordination of multiple heterogeneous and homogeneous simulators while providing important benefits for CPS simulation, such as the opportunity to reduce functional requirements for simulator interfacing compared to existing solutions. The method's theoretical properties are analyzed, and it is instantiated in the software tools SimConnect and SimTalk. Finally, an experimental study applies the method and tools to accelerate Spice circuit simulation with tradeoffs in speed versus accuracy, and demonstrates the coordination of three heterogeneous simulators for a CPS simulation with increasing component model refinement and realism. / text
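Kahn Process Networks, which the abstract builds on, coordinate deterministic processes through FIFO channels with blocking reads. A toy Python sketch of that base model — without the thesis's Interpolated Event extension, and with hypothetical message shapes — is:

```python
from queue import Queue
from threading import Thread

def producer(out_ch):
    # Emit timestamped samples, then an end-of-stream token.
    for t in range(5):
        out_ch.put(("sample", t, 0.5 * t))
    out_ch.put(None)

def doubler(in_ch, out_ch):
    # Deterministic process: blocking read, transform, forward.
    while (msg := in_ch.get()) is not None:
        tag, t, v = msg
        out_ch.put((tag, t, 2 * v))
    out_ch.put(None)

a, b = Queue(), Queue()               # unbounded FIFO channels
Thread(target=producer, args=(a,)).start()
Thread(target=doubler, args=(a, b)).start()

results = []
while (msg := b.get()) is not None:
    results.append(msg)
print(results)                        # same output regardless of thread scheduling
```

The determinism property — the output stream depends only on the input streams, not on scheduling — is what makes KPNs an attractive substrate for coordinating independent simulators.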
338

START: a parallel signal track analytical research tool for flexible and efficient analysis of genomic data

Zhu, Xinjie, 朱信杰 January 2015 (has links)
Signal Track Analytical Research Tool (START) is a parallel system for analyzing large-scale genomic data. Currently, genomic data analyses are usually performed using custom scripts developed by individual research groups, and/or by the integrated use of multiple existing tools (such as BEDTools and Galaxy). The goals of START are 1) to provide a single tool that supports a wide spectrum of genomic data analyses that are commonly done by analysts; and 2) to greatly simplify these analysis tasks by means of a simple declarative language (STQL) with which users only need to specify what they want to do, rather than the detailed computational steps as to how the analysis task should be performed. START consists of four major components: 1) A declarative language called Signal Track Query Language (STQL), which is a SQL-like language we specifically designed to suit the needs for analyzing genomic signal tracks. 2) A STQL processing system built on top of a large-scale distributed architecture. The system is based on the Hadoop distributed storage and the MapReduce Big Data processing framework. It processes each user query using multiple machines in parallel. 3) A simple and user-friendly web site that helps users construct and execute queries, upload/download compressed data files in various formats, manage stored data, queries and analysis results, and share queries with other users. It also provides a complete help system, detailed specification of STQL, and a large number of sample queries for users to learn STQL and try START easily. Private files and queries are not accessible by other users. 4) A repository of public data commonly used for large-scale genomic data analysis, including data from ENCODE and Roadmap Epigenomics, that users can use in their analyses. / published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy
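For flavor, the core of many such analyses is an interval operation over signal tracks. A tiny plain-Python stand-in (toy data, not STQL syntax) for a BEDTools-style intersection of two tracks:

```python
# Intersect two tracks of (chrom, start, end) intervals, half-open
# coordinates, reporting each overlapping region. Toy data only.
def intersect(track_a, track_b):
    out = []
    for chrom, a_start, a_end in track_a:
        for b_chrom, b_start, b_end in track_b:
            if chrom == b_chrom and a_start < b_end and b_start < a_end:
                out.append((chrom, max(a_start, b_start), min(a_end, b_end)))
    return out

peaks = [("chr1", 100, 200), ("chr1", 500, 600)]
genes = [("chr1", 150, 550)]
print(intersect(peaks, genes))  # [('chr1', 150, 200), ('chr1', 500, 550)]
```

A declarative language like STQL lets the analyst state this kind of overlap relation directly, leaving the (parallel) evaluation strategy to the system.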
339

Fast sequential implementation of a lightweight, data stream driven, parallel language with application to intrusion detection

Martin, Xavier 18 December 2007 (has links)
The general problem we consider in this thesis is the following: we have to analyze a stream of data (records, packets, events ...) by successively applying to each piece of data a set of "rules". Rules are best viewed as lightweight parallel processes synchronizing on each arrival of a new piece of data. In many applications, such as signature-based intrusion detection, only a few rules are concerned with each new piece of data. But all other rules have to be executed anyway just to conclude that they can ignore it. Our goal is to make it possible to avoid this useless work completely. To do so, we perform a static analysis of the code of each rule and we build a decision tree that we apply to each piece of data before executing the rule. The decision tree tells us whether executing the rule will change anything in the global analysis results. The decision trees are built at compile time, but their evaluation at each cycle (i.e., for each piece of data) entails an overhead. Thus we organize the set of all computed decision trees in a way that makes their evaluation as fast as possible. The two main original contributions of this thesis are the following. Firstly, we propose a method to organize the set of decision trees and the set of active rules in such a way that deciding which rules to execute can be done optimally in O(r_u), where r_u is the number of useful rules. This time complexity is thus independent of the actual (total) number of active rules. The method is based on the use of a global decision tree that integrates all individual decision trees built from the code of the rules. Secondly, as such a global tree may quickly become much too large if usual data structures are used, we introduce a novel kind of data structure called a sequential tree that allows us to keep global decision trees much smaller in many situations where the individual trees share few common conditions. (When many conditions are shared by individual trees, the global tree remains small.) To assess our contribution, we first modify the implementation of ASAX, a generic system for data stream analysis based on the rule paradigm presented above. Then we compare the efficiency of the optimized system with respect to its original implementation, using the MIT Lincoln Laboratory Evaluation Dataset and a classical set of intrusion detection rules. Impressive speed-ups are obtained. Finally, our optimized implementation has been used by Nicolas Vanderavero, in his PhD thesis, for the design of stateful honeytanks (i.e., low-interaction honeypots). It makes it possible to simulate tens of thousands of hosts on a single computer, with a high level of realism.
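The O(r_u) dispatch idea — touching only the rules that can react to a given piece of data — can be illustrated with a much simpler stand-in for the thesis's global decision tree: a hash index over a guard key extracted from each rule. The rule and event shapes below are hypothetical:

```python
from collections import defaultdict

class Rule:
    def __init__(self, name, guard_key, action):
        # guard_key plays the role of the condition that the thesis
        # extracts from the rule's code by static analysis.
        self.name, self.guard_key, self.action = name, guard_key, action

index = defaultdict(list)                 # guard key -> active rules

def activate(rule):
    index[rule.guard_key].append(rule)

def dispatch(event):
    # Cost proportional to the number of useful rules for this event,
    # not to the total number of active rules.
    return [r.action(event) for r in index.get(event["proto"], [])]

activate(Rule("ssh_scan", "tcp", lambda e: ("ssh_scan", e["port"])))
activate(Rule("dns_flood", "udp", lambda e: ("dns_flood", e["port"])))

print(dispatch({"proto": "tcp", "port": 22}))   # [('ssh_scan', 22)]
```

A real rule language needs arbitrary boolean conditions rather than a single key, which is why the thesis integrates the per-rule decision trees into one global tree instead of a flat index.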
340

Parallel Paths of Equal Reliability Assessed using Multi-Criteria Selection for Identifying Priority Expenditure

Hook, Tristan William January 2013 (has links)
This research project identifies some factors that justify having parallel network links of similar reliability. There are two key questions requiring consideration: 1) When is it optimal to have or create two parallel paths of equal or similar reliability? 2) How could a multi-criteria selection method be implemented for assigning expenditure? Asset and project management always face financial constraints, and this requires a constant balancing of funds against priorities. Many methods are available to address these needs, but two of the most common tools are risk assessment and economic evaluation. In principle both are well utilised and generally respected in the engineering community; however, when comparing parallel systems, both tend to favour a single priority link, a single option. Practical intuition also tends to support this concept, as the expenditure strengthens one link well above the alternative. The example used to demonstrate the potential for parallel paths of equal or similar reliability is the Wellington link from near the airport (Troy Street) up the coast to Paekakariki. Both the local-road and highway options have various benefits, such as ease of travel to shopping facilities. Investigating this section provides several combinations, from parallel highways to highway and local roads, which will have differing management criteria and associated land use. Generalised techniques are applied to the network. Risk is addressed as a reliability index figure that is preset to provide a consistent parameter (equal reliability) for each link investigated. Consequences are assessed with multi-criteria selection focusing on local benefits and shortcomings. Several models are used to build an understanding of how each consequence factor impacts the overall model and to identify consequences of such a process. Economics are discussed only briefly, since funding decisions in the engineering community are largely driven by financial constraints.
No specific analytical assessment has been completed. General results indicate there are supporting arguments for undertaking a multi-criteria selection assessment when comparing parallel networks. Situations do occur where there is benefit in parallel networks of equal or similar reliability, and therefore equal funding to both can be supported.
