201

An adaptive framework for Internet-based distributed genetic algorithms

Berntsson, Lars Johan January 2006 (has links)
Genetic Algorithms (GAs) are search algorithms inspired by genetics and natural selection, and have been used to solve difficult problems in many disciplines, including modelling, control systems and automation. GAs are generally able to find good solutions in reasonable time; however, as they are applied to larger and harder problems they become very demanding in terms of computation time and memory. The Internet is the most powerful parallel and distributed computation environment in the world, and the idle cycles and memories of computers on the Internet have been increasingly recognized as a huge untapped source of computation power. By combining Internet computing and GAs, this dissertation provides a framework for Internet-based parallel and distributed GAs that gives scientists and engineers an easy and affordable way to solve hard real-world problems.

Developing parallel computation applications on the Internet is quite unlike developing applications in traditional parallel computation environments, such as multiprocessor systems and clusters. This is because the Internet differs in many respects, such as communication overhead, heterogeneity and volatility. To develop an Internet-based GA, we need to understand the implications of these differences. For this purpose, a convergence model for heterogeneous and volatile networks is presented and used in experiments that study GA performance and robustness in Internet-like scenarios.

The main outcome of this research is an Internet-based distributed GA framework called G2DGA. G2DGA is an island-model distributed GA, which can support the large populations needed to solve many real-world problems. G2DGA uses a novel hybrid peer-to-peer (P2P) design, with island node activity coordinated by supervisor nodes that offer a global overview of the GA search state. Compared to client/server approaches, the P2P architecture improves scalability and fault tolerance by allowing direct communication between the islands and avoiding single-point-of-failure situations.

One of the defining characteristics of Internet computing is the dynamics and volatility of the environment, and a parallel and distributed GA that does not adapt to its environment cannot use the available resources efficiently. Two novel adaptive methods are investigated. The first is migration topology adaptation, which uses clustering on elite individuals from each island to rebuild the migration topology. Experiments with the migration topology adapter show that it gives G2DGA better performance than a GA with a static migration topology of a similar or higher connectivity level. The second is population size adaptation, which automatically finds the number of islands and the island population sizes needed to solve a given problem efficiently. Experiments with the population size adapter show that it is robust and compares favourably with the traditional trial-and-error approach in terms of computational effort and solution quality.

The scalability and robustness of G2DGA have been extensively tested in network scenarios of varying volatility and heterogeneity. Experiments with up to 60 computers were conducted in computer laboratories, while more complex network scenarios were studied in an Internet simulator. In the experiments, G2DGA consistently performs as well as, and usually significantly better than, static distributed GAs, and the difference grows larger with increased network instability. The results show that G2DGA, by continuously adjusting the migration policy and the population size, can detect and make efficient use of idle cycles donated over volatile Internet connections.

To demonstrate that G2DGA can be used to implement and solve real-world problems, a challenging application in VLSI design was developed and used in testing the framework. The application is a multi-layer floorplanner, which uses a novel GA representation and operators based on a slicing-structure approach. Its packing quality compares favourably with other multi-layer floorplanners found in the literature.

Internet-based distributed GA research is exciting and important because it enables GAs to be applied to problem areas where resource limitations make traditional approaches unworkable. G2DGA provides a scalable and robust Internet-based distributed GA framework that can serve as a foundation for future work in the field.
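The adaptive migration topology described in this abstract can be illustrated with a small, self-contained sketch. The Python toy below is a minimal sketch, not the G2DGA implementation: it runs a few islands on a OneMax problem and periodically rebuilds the migration topology from the islands' elite individuals. All names and parameters (GENES, ELITE, K_NEIGHBOURS, rebuild_topology, and the use of elite-centroid distance in place of a full clustering step) are illustrative assumptions.

```python
# A minimal, illustrative sketch of an island-model GA with an adaptive
# migration topology, loosely in the spirit of G2DGA as summarised above.
# The problem, parameters, and the centroid-distance heuristic used in
# place of a full clustering step are assumptions for illustration only.
import random

GENES, POP, ELITE, K_NEIGHBOURS = 32, 40, 4, 2

def fitness(ind):
    # Toy OneMax problem: fitness is the number of 1-bits in the genome.
    return sum(ind)

def evolve(pop):
    # One generation: keep the elites, then fill up with uniform crossover
    # of random parents plus a small bit-flip mutation.
    nxt = sorted(pop, key=fitness, reverse=True)[:ELITE]
    while len(nxt) < POP:
        a, b = random.sample(pop, 2)
        child = [a[i] if random.random() < 0.5 else b[i] for i in range(GENES)]
        nxt.append([g ^ (random.random() < 0.01) for g in child])
    return nxt

def centroid(island):
    # Mean genome of the island's elites, used as a cheap cluster key.
    elites = sorted(island, key=fitness, reverse=True)[:ELITE]
    return [sum(col) / ELITE for col in zip(*elites)]

def rebuild_topology(islands):
    # Link each island to the K islands whose elite centroids are furthest
    # away, so migrants carry genuinely different genetic material.
    cents = [centroid(isl) for isl in islands]
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return {i: sorted((j for j in range(len(islands)) if j != i),
                      key=lambda j: dist(cents[i], cents[j]),
                      reverse=True)[:K_NEIGHBOURS]
            for i in range(len(islands))}

islands = [[[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
           for _ in range(4)]
for gen in range(50):
    islands = [evolve(isl) for isl in islands]
    if gen % 10 == 0:                       # periodically adapt the topology
        topology = rebuild_topology(islands)
    for i, neighbours in topology.items():  # migrate a copy of each island's best
        for j in neighbours:
            islands[j][-1] = max(islands[i], key=fitness)[:]

print([fitness(max(isl, key=fitness)) for isl in islands])
```

In G2DGA itself the corresponding decisions are made by supervisor nodes coordinating islands over a P2P network; here everything runs in one process purely to show the adaptation loop.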
202

Software-centric and interaction-oriented system-on-chip verification.

Xu, Xiao Xi January 2009 (has links)
As the complexity of very-large-scale integrated circuits (VLSI) soars, the complexity of verifying them increases even faster. Design verification has become the biggest bottleneck in VLSI design, consuming around 70% of the effort and time in a typical design cycle. The problem is even more severe as the system-on-chip (SoC) design paradigm gains popularity. Unfortunately, the development of verification techniques has not kept up with the growth of design capability, and is being left further behind in the SoC era.

In recent years, a new generation of hardware modelling languages, along with best practices for using them, has emerged and evolved in an attempt to productively build an intelligent stimulation-observation environment referred to as the test-bench. Ironically, as test-benches become more powerful and sophisticated under these best practices, known as verification methodologies, the overall verification approaches of today are still officially described as ad hoc and experimental and are in great need of a methodological breakthrough.

Our research was carried out to seek the desirable methodological breakthrough, and this thesis presents the research outcome: a novel and holistic methodology that brings an opportunity to address the SoC verification problems. Furthermore, our methodology is completely independent of the underlying simulation technologies; therefore, it can extend its applicability to future VLSI designs.

Our methodology presents two ideas. (a) We propose that system-level verification should resort to the SoC-native languages rather than the test-bench construction languages; the software native to the SoC should take on more critical responsibilities than the test-benches. (b) We challenge the fundamental assumption that "objects under test" and "tests" are distinct entities; instead, they should be understood as one type of entity, the interaction; interactions, together with the interference between interactions, i.e., parallelism and resource competition, should be treated as the focus of system-level verification.

These two ideas, namely software-centric verification and interaction-oriented verification, have yielded practical techniques. This thesis elaborates on these techniques, including a transfer-resource-graph based test-generation method targeting parallelism, coverage measures of concurrency completeness using Petri nets, the automation of test programs that execute smartly in an event-driven manner, and a software observation mechanism that gives insight into system-level behaviours. / http://proxy.library.adelaide.edu.au/login?url= http://library.adelaide.edu.au/cgi-bin/Pwebrecon.cgi?BBID=1363926 / Thesis (Ph.D.) -- University of Adelaide, School of Electrical and Electronic Engineering, 2009
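The interaction-oriented idea, treating tests as concurrent, resource-competing transfers, can be sketched in a few lines. The Python toy below is an assumption-laden illustration rather than the thesis' transfer-resource-graph machinery: it models each transfer by the resources it occupies, generates one test per interfering pair, and reports a simple pair-coverage figure. The transfer names, resource sets, and coverage definition are all invented for this sketch.

```python
# A toy illustration of interaction-oriented test generation. The
# "transfer-resource graph" here is just a dict from transfer names to the
# resources each transfer occupies; the names, resources and the pairwise
# coverage measure are invented for this sketch.
from itertools import combinations

transfers = {
    "dma_to_sram": {"bus", "sram"},
    "cpu_to_sram": {"cpu", "sram"},
    "uart_rx":     {"dma", "uart"},
    "cpu_to_uart": {"cpu", "bus", "uart"},
}

def interfering_pairs(xfers):
    # Two transfers interfere when they can run in parallel and compete
    # for at least one shared resource.
    return [(a, b) for a, b in combinations(sorted(xfers), 2)
            if xfers[a] & xfers[b]]

# Generate one "test" per interfering pair: schedule the two transfers
# concurrently and record which resources they contend on.
tests = [{"parallel": (a, b), "shared": transfers[a] & transfers[b]}
         for a, b in interfering_pairs(transfers)]

# A crude concurrency-coverage figure: interfering pairs exercised / total.
covered = {t["parallel"] for t in tests}
total = set(interfering_pairs(transfers))
print(f"interference coverage: {len(covered)}/{len(total)}")
for t in tests:
    print("run concurrently:", t["parallel"], "contending on", sorted(t["shared"]))
```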
203

A coprocessor for fast searching in large databases: Associative Computing Engine

Layer, Christophe, January 2007 (has links)
Ulm, University, doctoral dissertation, 2007.
204

Untersuchungen zur Implementierung von Bildverarbeitungsalgorithmen mittels pulsgekoppelter neuronaler Netze [Investigations into the implementation of image processing algorithms using pulse-coupled neural networks]

Mayr, Christian Georg January 2008 (has links)
Also published as: Dresden, Technische Universität, doctoral dissertation, 2008
205

Analoge Auslese- und Triggerelektronik für Mikrostreifen-Gaszähler [Analogue readout and trigger electronics for microstrip gas counters]

Glass, Boris. January 1999 (has links)
Heidelberg, University, diploma thesis, 1997.
206

Exploiting instruction-level parallelism: a constructive approach

Santos, Luiz Cláudio Villar dos, January 1998 (has links)
Thesis (doctoral)--Technische Universiteit Eindhoven, 1998. / Vita. Includes bibliographical references (p. 137-142).
207

Iterative MIMO decoding algorithms and VLSI implementation aspects

Studer, Christoph January 2009 (has links)
Also published as: Zürich, Technische Hochschule, doctoral dissertation, 2009
208

Verlustleistungs-Modellierung exemplarischer Schlüsselkomponenten der hochratigen digitalen Signalverarbeitung [Power-dissipation modelling of exemplary key components of high-rate digital signal processing]

Henning, Christiane. Unknown Date (has links) (PDF)
Aachen, Technische Hochschule, doctoral dissertation, 2002.
209

Monolithisch integrierte Empfängerschaltung in 0,35um CMOS für optische Übertragungssysteme mit Datenraten bis 1,25GBit/s [Monolithically integrated receiver circuit in 0.35 µm CMOS for optical transmission systems with data rates up to 1.25 Gbit/s]

Schrödinger, Karl. Unknown Date (has links) (PDF)
Berlin, Technische Universität, doctoral dissertation, 2004.
210

Layout and structure aware synthesis of integrated circuits

Kutzschebauch, Thomas. Unknown Date (has links) (PDF)
Kaiserslautern, University, doctoral dissertation, 2003.
