61

Magnetic resonance imaging modeling and applications to fast imaging and guidance of ultrasound surgery.

Darkazanli, Ammar. January 1993 (has links)
Magnetic Resonance Imaging (MRI) is the only known radiological modality that provides diagnostic cross-sectional images non-invasively in virtually any orientation without patient repositioning. The principles of MRI are based on the Bloch equations, which describe the behavior of protons in the presence of a magnetic field. There are many interesting areas to which MRI has contributed, such as perfusion and diffusion studies, MR angiography, and cardiac studies, as well as therapeutic applications in cancer treatment. In this dissertation, two MRI-related topics were investigated. First, a computer program was developed to simulate virtually any MRI pulse sequence. The phase-encoding gradient pulses are also included, which has proved very useful in predicting image artifacts and contrast behavior. The second topic is the application of MRI to guiding ultrasound surgery. A detailed study was performed on the sensitivity of MRI parameters to temperature changes. In-vivo studies were also performed on seven Greyhound dogs and twenty-five rabbits. Temperature elevations were successfully depicted using MRI. Computer simulations were also used to study the effects of changing temperature during image acquisition.
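A minimal sketch of the kind of Bloch-equation time stepping such a pulse-sequence simulator rests on, assuming idealized free precession (instantaneous pulses, uniform field); the function name and parameter values are illustrative, not taken from the dissertation:

```python
import math

def bloch_step(M, dt, df_hz, T1, T2, M0=1.0):
    """Advance magnetization M = (Mx, My, Mz) by dt seconds: precession
    about z at off-resonance frequency df_hz, then T2 decay of the
    transverse part and T1 recovery of Mz toward the equilibrium M0."""
    Mx, My, Mz = M
    phi = 2 * math.pi * df_hz * dt
    c, s = math.cos(phi), math.sin(phi)
    Mx, My = c * Mx + s * My, -s * Mx + c * My   # rotate about z
    E1, E2 = math.exp(-dt / T1), math.exp(-dt / T2)
    return (Mx * E2, My * E2, Mz * E1 + M0 * (1 - E1))

# After an ideal 90-degree pulse the magnetization lies along x;
# let it relax freely for one second in 1 ms steps.
M = (1.0, 0.0, 0.0)
for _ in range(1000):
    M = bloch_step(M, dt=1e-3, df_hz=50.0, T1=0.8, T2=0.1)
```

Chaining such steps with pulse and gradient events is what lets a simulator predict contrast and artifacts before a sequence is run on a scanner.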
62

Discrete pattern matching over sequences and interval sets.

Knight, James Robert. January 1993 (has links)
Finding matches, both exact and approximate, between a sequence of symbols A and a pattern P has long been an active area of research in algorithm design. Some of the more well-known byproducts of that research are the diff program and the grep family of programs. These problems form a sub-domain of a larger area of problems called discrete pattern matching, which has been developed recently to characterize the wide range of pattern matching problems. This dissertation presents new algorithms for discrete pattern matching over sequences and develops a new sub-domain of problems called discrete pattern matching over interval sets. The problems and algorithms presented here are characterized by three common features: (1) a "computable scoring function" which defines the quality of matches; (2) a graph-based, dynamic programming framework which captures the structure of the algorithmic solutions; and (3) an interdisciplinary aspect to the research, particularly between computer science and molecular biology, not found in other topics in computer science. The first half of the dissertation considers discrete pattern matching over sequences. It develops the alignment-graph/dynamic-programming framework for the algorithms in the sub-domain and then presents several new algorithms for regular expression and extended regular expression pattern matching. The second half of the dissertation develops the sub-domain of discrete pattern matching over interval sets, also called super-pattern matching. In this sub-domain, the input consists of sets of typed intervals, defined over a finite range, and a pattern expression of the interval types. A match between the interval sets and the pattern consists of a sequence of consecutive intervals, taken from the interval sets, such that their corresponding sequence of types matches the pattern.
The name super-pattern matching comes from those problems where the interval sets correspond to the sets of substrings reported by various pattern matching problems over a common input sequence. The pattern for the super-pattern matching problem, then, represents a "pattern of patterns," or super-pattern, and the sequences of intervals matching the super-pattern correspond to the substrings of the original sequence which match that larger "pattern."
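The alignment-graph/dynamic-programming framework can be illustrated with the classic approximate-matching recurrence (Sellers' variant of edit distance, in which row 0 is all zeros so a match may begin anywhere in the text); this is a textbook sketch, not one of the dissertation's new algorithms:

```python
def approx_match(pattern, text):
    """Smallest edit distance between `pattern` and any substring of
    `text`. Each cell is a node of the alignment graph; the three terms
    of the min are its incoming edges."""
    m, n = len(pattern), len(text)
    prev = [0] * (n + 1)                      # D[0][j] = 0: start anywhere
    for i in range(1, m + 1):
        cur = [i] + [0] * n                   # D[i][0] = i
        for j in range(1, n + 1):
            cost = 0 if pattern[i - 1] == text[j - 1] else 1
            cur[j] = min(prev[j - 1] + cost,  # match / substitute
                         prev[j] + 1,         # delete a pattern symbol
                         cur[j - 1] + 1)      # insert a text symbol
        prev = cur
    return min(prev)                          # best match ending anywhere
```

For example, `approx_match("match", "dispatch")` is 1, via the substring "patch" with one substitution.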
63

Synthesis of design evaluation modules in an object-oriented simulation environment: Methodology, techniques, and prototype.

Chien, Lien-Pharn. January 1993 (has links)
A wide range of modeling and simulation packages have been applied to evaluate computer systems, telecommunication networks, and diverse environments in industry. The objective in utilizing simulation is to assess system designs prior to actual implementation. The approaches used to perform modeling range from programming with a specific simulation description language (the most common) to automation, using an icon-driven user interface. Flexibility, maintainability and acceptability are the main criteria used to make a choice. The objectives of the modeling and simulation environment are to automatically construct simulation models for the systems being designed, and to efficiently define the system performance measures. To meet these objectives, an environment called Performance Object-oriented modeling and Simulation Environment (POSE) has been created. In order to profile the knowledge involved in POSE, a knowledge representation scheme, System Entity Structure (SES) is adopted for efficient representation. POSE is organized into several layers such that the procedures of modeling can be set up in a hierarchical manner with the support of hierarchical model-based management. At the stage of defining system requirements, the structure of the Experimental Frame is applied to handle the system's traffic generation, performance data collection and computation. A methodology called Integrated performance Specification (IPS) is designed to facilitate model generation of the frame structure. At the system modeling level, elementary models are organized via combining the properties of a queuing model and the structure of Discrete EVent System Specification (DEVS) formalism. An overall system model is then constructed based upon the elementary models by applying the SM-Algo algorithm. Finally, the system models are integrated with the proper experimental frames in distributed and centralized execution modes to create integrated simulation models. 
An algorithm called MI-Algo assists the integration procedure. A window-based graphical front-end offers a simple and straightforward user interface to enhance the efficiency of POSE.
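The combination of a queuing model with DEVS-style structure can be caricatured as a toy atomic model with external and internal transition functions and a time-advance function; all names are invented and this is a generic illustration of the DEVS style, not POSE code. The time advance here is recomputed after every event, a simplification of full DEVS semantics:

```python
class AtomicModel:
    """Minimal DEVS-flavored atomic model: a single server with a queue.
    delta_ext handles arrivals, delta_int handles service completions,
    ta gives the time advance until the next internal event."""
    def __init__(self, service_time):
        self.queue = 0
        self.service_time = service_time

    def ta(self):
        return self.service_time if self.queue > 0 else float("inf")

    def delta_ext(self):          # external transition: a job arrives
        self.queue += 1

    def delta_int(self):          # internal transition: a job departs
        self.queue -= 1
        return "job_done"         # output event

def run(arrivals, service_time, horizon):
    """Tiny coordinator loop: fixed arrival times, run until `horizon`,
    return the number of completed jobs."""
    m, t, done = AtomicModel(service_time), 0.0, 0
    arrivals = sorted(arrivals)
    while t < horizon:
        t_int = t + m.ta()
        t_arr = arrivals[0] if arrivals else float("inf")
        if min(t_int, t_arr) > horizon:
            break
        if t_arr <= t_int:
            t = t_arr
            arrivals.pop(0)
            m.delta_ext()
        else:
            t = t_int
            m.delta_int()
            done += 1
    return done
```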
64

Constructing scientific applications from heterogeneous resources.

Homer, Patrick Thomas. January 1994 (has links)
The computer simulation of scientific processes is playing an increasingly important role in scientific research. For example, the development of adequate flight simulation environments, numeric wind tunnels, and numeric propulsion systems is reducing the danger and expense involved in prototyping new aircraft and engine designs. One serious problem that hinders the full realization of the potential of scientific simulation is the lack of tools and techniques for dealing with the heterogeneity inherent in today's computational resources and applications. Typically, either ad hoc connection techniques, such as manual file transfer between machines, or approximation techniques, such as boundary value equations, are employed. This dissertation develops a programming model in which scientific applications are designed as heterogeneous distributed programs, or meta-computations. The central feature of the model is an interconnection system that handles the transfer of control and data among the heterogeneous components of the meta-computation, and provides configuration tools to assist the user in starting and controlling the distributed computation. Key benefits of this programming model include the ability to simulate the interactions among the physical processes being modeled through the free exchange of data between computational components. Another benefit is the possibility of improved user interaction with the meta-computation, allowing the monitoring of intermediate results during long simulations and the ability to steer the simulation, either directly by the user or through the incorporation of an expert system into the meta-computation. This dissertation describes a specific realization of this model in the Schooner interconnection system, and its use in the construction of a number of scientific meta-computations. 
Schooner uses a type specification language and an application-level remote procedure call mechanism to ease the task of the scientific programmer in building meta-computations. It also provides static and dynamic configuration management features that support the creation of meta-computations from components at runtime, and their modification during execution. Meta-computations constructed using Schooner include examples involving molecular dynamics and neural nets. Schooner is also in use in several major projects as part of a NASA effort to develop improved jet engine simulations.
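A hypothetical sketch of the application-level, type-checked RPC idea attributed to Schooner: components export typed entry points, and the interconnection layer validates each call against the declared signature before forwarding it. The registry, decorator, and component names are invented for illustration:

```python
registry = {}

def export(name, arg_types, result_type):
    """Register a function under `name` with a declared type signature,
    standing in for Schooner's type specification language."""
    def wrap(func):
        registry[name] = (func, arg_types, result_type)
        return func
    return wrap

def remote_call(name, *args):
    """Check the call against the declared signature, then forward it
    (in a real system this would cross machine boundaries)."""
    func, arg_types, result_type = registry[name]
    if len(args) != len(arg_types) or not all(
            isinstance(a, t) for a, t in zip(args, arg_types)):
        raise TypeError(f"{name}: signature mismatch")
    result = func(*args)
    assert isinstance(result, result_type)
    return result

@export("engine.thrust", (float, float), float)
def thrust(fuel_rate, airflow):
    # stand-in for a component such as a Fortran engine model
    return 0.9 * fuel_rate * airflow
```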
65

Simultaneous Embedding and Level Planarity

Estrella Balderrama, Alejandro January 2009 (has links)
Graphs are a common model for representing information consisting of a set of objects or entities and a set of connections or relations between them. Graph Drawing is concerned with the automatic visualization of graphs in order to make the information useful. That is, a good drawing should be helpful in the application domain where it is used by capturing the relationships in the underlying data. We consider two important problems in automated graph drawing: simultaneous embedding and level planarity. Simultaneous embedding is the problem of drawing multiple graphs while maintaining the readability of each graph independently and preserving the mental map when going from one graph to another. In this case, each graph has the same vertex set (same entities) but different edge sets (different relationships). Level planarity arises in the layout of graphs that contain hierarchical relationships. When drawing graphs in the plane, this translates to a restricted form of planarity where the vertical order of the entities is pre-determined. We consider the computational complexity of the simultaneous embedding problem. In particular, we show that in its generality the simultaneous embedding problem is NP-hard if the edges are drawn as straight lines. We present algorithms for drawing graphs on predetermined levels, which allow the simultaneous embedding of restricted types of graphs, such as outerplanar graphs, trees and paths. Finally, our practical contribution is a tool that implements known and novel algorithms related to simultaneous embedding and level planarity and can be used both as a visualization software and as an aid to study theoretical problems.
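The criterion underlying simultaneous straight-line embedding can be checked directly: one point set serves two graphs if each graph's straight-line drawing on those points has no crossing between independent edges. The CCW-based crossing test below is standard computational geometry; the example point set and graphs are invented:

```python
def ccw(a, b, c):
    """Twice the signed area of triangle abc (positive = left turn)."""
    return (b[0]-a[0]) * (c[1]-a[1]) - (b[1]-a[1]) * (c[0]-a[0])

def segments_cross(p1, p2, p3, p4):
    """Proper (interior) crossing test for segments p1p2 and p3p4."""
    d1, d2 = ccw(p3, p4, p1), ccw(p3, p4, p2)
    d3, d4 = ccw(p1, p2, p3), ccw(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def is_plane_drawing(pos, edges):
    """A straight-line drawing is plane if no two independent edges cross."""
    E = list(edges)
    for i in range(len(E)):
        for j in range(i + 1, len(E)):
            (a, b), (c, d) = E[i], E[j]
            if {a, b} & {c, d}:
                continue  # edges sharing an endpoint cannot properly cross
            if segments_cross(pos[a], pos[b], pos[c], pos[d]):
                return False
    return True

# Two graphs on the same vertex set and the same point positions: both
# drawings are plane, so this point set is a simultaneous embedding.
pos = {0: (0, 0), 1: (2, 0), 2: (1, 2), 3: (1, 1)}
G1 = [(0, 1), (1, 2), (2, 0), (0, 3)]   # triangle plus one inner edge
G2 = [(0, 3), (3, 1), (1, 2)]           # a path 0-3-1-2
```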
66

Efficient Geometric Algorithms for Wireless Networks

Sankararaman, Swaminathan January 2011 (has links)
Wireless communications has a wide range of applications including cellular telephones, wireless networking, and security systems. Typical systems work through radio frequency communication, and this is heavily dependent on the geographic characteristics of the environment. This dissertation discusses the application of geometric optimization in wireless networks where the communication "links" are not static but may be dynamically changing. We show how to exploit the geometric properties of these networks to model their behavior. In the first part of the dissertation, we consider the problem of interference-aware routing in multi-channel mesh networks employing directional antennas to improve spatial throughput. In such networks, optimal routes are paths with a channel assignment for the links such that the path and link bandwidths are the same. We develop a method to perform topology control while taking into account interference by constructing a spanner: a sub-network containing O(n/θ) links, where n is the network size and θ is a tunable parameter, such that path costs increase by at most a constant factor. In the second part, we study the problem of base-station positioning in sensor networks such that we achieve energy-efficient data transmission from the sensors. Given the battery limitations of the sensors, our objective is to maximize the network lifetime. First, we present efficient algorithms for computing a transmission scheme given a fixed base-station and also provide a distributed implementation. Next, we present efficient algorithms for the problem of locating the base-station and simultaneously finding a transmission scheme. We compare our algorithms with linear-programming based algorithms through simulations. In the third part, we study strategies for managing friendly jammers to create virtual barriers preventing eavesdroppers from tapping sensitive RFID communication. Our scheme precludes the use of encryption.
Application domains include (i) privacy of inventory management systems, (ii) credit card communications, and (iii) secure communication in wireless networks without encryption. By carefully managing jammers producing noise, we show how to degrade the signal at eavesdroppers sufficiently, without jeopardizing network performance. We present algorithms targeted towards optimizing the number and power of jammers. Experimental simulations back up our results.
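The O(n/θ) bound suggests a cone-based (theta-graph-style) spanner: each node keeps one edge per angular cone, to its nearest neighbor within that cone. The sketch below uses plain Euclidean distance inside each cone (the formal spanner proof uses a projected distance) and is illustrative, not the dissertation's algorithm:

```python
import math

def theta_graph(points, k=8):
    """For every point, keep one edge per angular cone of width 2*pi/k,
    to the nearest other point whose direction falls in that cone.
    This yields at most n*k edges, i.e. O(n/theta) for cone width theta."""
    edges = set()
    for i, p in enumerate(points):
        best = {}                      # cone index -> (distance, neighbor)
        for j, q in enumerate(points):
            if i == j:
                continue
            ang = math.atan2(q[1] - p[1], q[0] - p[0]) % (2 * math.pi)
            cone = int(ang / (2 * math.pi / k))
            d = math.dist(p, q)
            if cone not in best or d < best[cone][0]:
                best[cone] = (d, j)
        for d, j in best.values():
            edges.add((min(i, j), max(i, j)))   # undirected edge
    return edges

points = [(0, 0), (1, 0), (0, 1), (5, 5)]
E = theta_graph(points, k=8)
```

A node's overall nearest neighbor is always the nearest point in its cone, so nearest-neighbor edges such as (0, 1) and (0, 2) survive the sparsification.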
67

A Framework for Recognizing and Executing Verb Phrases

Hewlett, Daniel Krishnan January 2011 (has links)
Today, the physical capabilities of robots enable them to perform a wide variety of useful tasks for humans, making the need for simple and intuitive interaction between humans and robots readily apparent. Taking natural language as a key element of this interaction, we present a novel framework that enables robots to learn qualitative models of the semantics of an important class of verb phrases, such as "follow me to the kitchen," and leverage these verb models to perform two tasks: executing verb phrase commands, and recognizing when another agent has performed a given verb. This framework is based on a qualitative, relational model of verb semantics called the Verb Finite State Machine, or VFSM. We describe the VFSM in detail, motivating its design and providing a characterization of the class of verbs it can represent. The VFSM supports the recognition task natively, and we show how to combine it with modern planning techniques to support verb execution in complex environments. Grounded natural language semantics must be learned through interaction with humans, so we describe methods for learning VFSM verb models through natural interaction with a human teacher in the apprenticeship learning paradigm. To demonstrate the efficacy of our framework, we present empirical results showing rapid learning and high performance on both the recognition and execution tasks. In these experiments, the VFSM is able to consistently outperform a baseline method based on recent work in the verb learning literature. We close with a discussion of some of the current limitations of the framework, and a roadmap for future work in this area.
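The idea of a finite state machine over relational world descriptions can be caricatured in a few lines: states advance when the set of currently true relations matches a transition, and a verb is recognized if the trace ends in an accepting state. The relation names and the toy "follow" model below are invented, not the dissertation's learned VFSMs:

```python
class VerbFSM:
    """Tiny relational FSM: transitions are keyed by (state, set of
    relations true at a time step); unmatched observations self-loop."""
    def __init__(self, transitions, start, accepting):
        self.transitions = transitions
        self.start, self.accepting = start, accepting

    def recognizes(self, trace):
        """trace: a sequence of sets of relations, one set per time step."""
        state = self.start
        for relations in trace:
            state = self.transitions.get((state, frozenset(relations)), state)
        return state in self.accepting

# Toy model for "follow X to Y": the robot gets near the human, stays
# near, and the episode ends with the human at the goal.
follow = VerbFSM(
    transitions={
        ("start", frozenset({"near(robot,human)"})): "following",
        ("following", frozenset({"at(human,goal)", "near(robot,human)"})): "done",
    },
    start="start",
    accepting={"done"},
)

trace = [{"near(robot,human)"},
         {"near(robot,human)"},
         {"at(human,goal)", "near(robot,human)"}]
```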
68

Micro-Specialization: Dynamic Code Specialization in DBMSes

Zhang, Rui January 2012 (has links)
Database management systems (DBMSes) form a cornerstone of modern IT infrastructure, and it is essential that they have excellent performance. In this research, we exploit the opportunities of applying dynamic code specialization to DBMSes, particularly by focusing on runtime invariants present in DBMSes during query evaluation. Query evaluation involves extensive references to the relational schema, predicate values, and join types, which are all invariant during query evaluation, and thus are subject to dynamic value-based code specialization. We observe that DBMSes are general in the sense that they must contend with arbitrary schemas, queries, and modifications; this generality is implemented using runtime metadata lookups and tests that ensure that control is channelled to the appropriate code in all cases. Unfortunately, these lookups and tests are carried out even when information is available that renders some of these operations superfluous, leading to unnecessary runtime overheads. We introduce micro-specialization, an approach that uses relation- and query-specific information to specialize the DBMS code at runtime and thereby eliminate some of these overheads. We develop a taxonomy of approaches and specialization times and propose a general architecture that isolates most of the creation and execution of the specialized code sequences in a separate DBMS-independent module. We show that this approach requires minimal changes to a DBMS and can improve the performance simultaneously across a wide range of queries, modifications, and bulk-loading, in terms of storage, CPU usage, and I/O time of the TPC-H and TPC-C benchmarks. We also discuss an integrated development environment that helps DBMS developers apply micro-specializations to identified target code sequences.
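The core of value-based specialization can be illustrated by folding a query's column offset, operator, and constant into a closure once, removing the per-row metadata lookups of the generic path; the schema and data here are invented, loosely TPC-H-flavored, and this is a sketch of the idea rather than the dissertation's C-level implementation:

```python
def make_generic_filter(schema):
    """Generic evaluator: looks up the column offset and dispatches on
    the operator for every row, the kind of repeated metadata work
    micro-specialization removes."""
    def evaluate(row, column, op, value):
        idx = schema.index(column)        # metadata lookup on every call
        if op == "<":
            return row[idx] < value
        if op == "=":
            return row[idx] == value
        raise ValueError(op)
    return evaluate

def specialize_filter(schema, column, op, value):
    """Value-based specialization: offset, operator, and constant are
    resolved once, so the per-row code is straight-line."""
    idx = schema.index(column)
    if op == "<":
        return lambda row: row[idx] < value
    if op == "=":
        return lambda row: row[idx] == value
    raise ValueError(op)

schema = ["l_orderkey", "l_quantity", "l_price"]
rows = [(1, 17, 100.0), (2, 36, 80.0), (3, 8, 120.0)]
fast = specialize_filter(schema, "l_quantity", "<", 20)
matches = [r for r in rows if fast(r)]
```

Both paths return the same rows; the specialized closure simply skips the lookups the generic evaluator repeats per row.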
69

Fault tolerance and reconfiguration strategies for tree architectures

Ko, Chen-Ken, 1961- January 1990 (has links)
Reconfigurable binary tree architectures have been widely studied and used in various VLSI implementations. Existing fault tolerance approaches can be classified into two categories. In this thesis, we propose a fault diagnosis scheme for the first category. Then a new block-oriented fault tolerance scheme for tree architectures is presented for the second category. The fundamental idea is to extend each single PE node in the tree to a block. Each block could consist of several PEs and the associated interconnection links. It is shown that several previous fault tolerant designs in the literature are special cases of the proposed design. The VLSI layout of the binary tree is very efficient, and the problem of long interconnections in other designs has been alleviated. Efficient reconfiguration algorithms and analysis are also presented.
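The block-oriented idea, each logical tree node backed by a block of PEs plus spares so the tree survives any fault pattern that leaves every block with enough healthy PEs, can be sketched as follows; block sizes and spare counts are illustrative, not the thesis design:

```python
class Block:
    """One logical tree node realized by `size` working PEs plus
    `spares` spare PEs inside the same block."""
    def __init__(self, size=1, spares=1):
        self.size = size
        self.healthy = size + spares

    def fail_pe(self):
        """A PE in this block becomes faulty."""
        self.healthy -= 1

    def operational(self):
        """The block can still play its node's role if enough PEs remain."""
        return self.healthy >= self.size

def tree_operational(blocks):
    """The logical binary tree survives iff every block is operational."""
    return all(b.operational() for b in blocks)

blocks = [Block(size=1, spares=1) for _ in range(7)]  # 3-level tree
blocks[2].fail_pe()          # one fault in block 2: covered by its spare
ok_after_one = tree_operational(blocks)
blocks[2].fail_pe()          # second fault in the same block: uncovered
ok_after_two = tree_operational(blocks)
```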
70

Performance evaluation of interconnection networks for ISDN switching applications

Lin, Cheng-Leo George, 1958- January 1990 (has links)
Interconnection networks of various designs have been proposed for use as fast packet switches for broadband ISDN applications. We use stochastic activity networks to model and simulate these designs. In particular, we use stochastic activity networks to compare three switch designs (basic banyan, modified delta, and a design with multiplexer and demultiplexer) under both uniform and non-uniform workload assumptions. Regarding contention resolution, we consider two policies, one with blocking, and one where the packet is rejected and must be retransmitted. For each scenario, we determine blocking probability and mean transmission delay. We find that while traditional designs work well with uniform workloads, they do not work so well with non-uniform workloads, and in fact, the simpler design with multiplexer and demultiplexer works better in some reject-retransmission cases. The modified delta network, due to its multiple paths, performs the best among the three designs with uniform workloads. (Abstract shortened with permission of author.)
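Destination-tag routing in a banyan gives each (source, destination) pair a unique path, so internal blocking can be modeled by tracking link occupancy; below is a minimal deterministic sketch of the blocking policy, illustrative only and unrelated to the stochastic activity network models actually used in the thesis:

```python
def links(src, dst, n):
    """Unique path of a packet through an n-stage banyan (2^n ports):
    after stage s the packet sits at the node whose high s bits come
    from the destination and low n-s bits from the source."""
    out = []
    for s in range(1, n + 1):
        hi = dst >> (n - s) << (n - s)
        lo = src & ((1 << (n - s)) - 1)
        out.append((s, hi | lo))
    return out

def simulate(n, requests):
    """Blocking policy: grant each (src, dst) request in order iff its
    whole path is free; blocked packets would be retried later."""
    busy, accepted = set(), 0
    for src, dst in requests:
        path = links(src, dst, n)
        if all(l not in busy for l in path):
            busy.update(path)
            accepted += 1
    return accepted
```

In a 4x4 banyan (n = 2) the identity permutation uses disjoint paths and all four packets pass, while two packets contending for the same first-stage link, such as (0, 0) and (2, 1), illustrate internal blocking even with distinct outputs.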
