81

Concurrency control in object-oriented databases: A mechanism and a performance study.

Olsen, David Harold. January 1993 (has links)
Object-oriented databases are becoming increasingly popular in business. Areas of interest include formal object-oriented database models, query optimization, analysis and design techniques, and concurrency control. These issues have been addressed as they pertain to the relational model and to centralized databases, but not as they apply to the object-oriented model. This dissertation develops and tests a lock-based concurrency control (CC) mechanism designed specifically for an object-oriented database. The mechanism is named O²C², pronounced "oh-two see-two." After O²C² is introduced, a proof of correctness demonstrates that the schedules it produces are equivalent to serializable ones. O²C² is then simulated under varying degrees of contention, from very little to high, to measure its performance in completed transactions per second. The research was conducted in three phases: model development and implementation, simulation, and analysis of the results. First, CC issues specific to object-oriented databases were identified and applicable models created. Next, simulations were run using a specific transaction model. Finally, an analysis of the simulation results demonstrates how the mechanism performs under specific conditions.
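The abstract does not spell out O²C²'s internals, but lock-based CC mechanisms of this kind build on strict two-phase locking (2PL) over objects. The sketch below is a generic illustration of that base technique, not O²C² itself; all class and method names are hypothetical.

```python
# Minimal sketch of strict two-phase locking on objects: shared ("S") and
# exclusive ("X") locks with conflict detection, all locks released only
# at commit/abort. Illustrative only -- not the O2C2 mechanism.

class LockManager:
    def __init__(self):
        self.locks = {}  # object id -> (mode, set of holding txn ids)

    def acquire(self, txn, obj, mode):
        held = self.locks.get(obj)
        if held is None:
            self.locks[obj] = (mode, {txn})
            return True
        held_mode, holders = held
        if mode == "S" and held_mode == "S":
            holders.add(txn)          # shared locks are compatible
            return True
        if holders == {txn}:          # sole holder may upgrade S -> X
            self.locks[obj] = ("X" if mode == "X" else held_mode, holders)
            return True
        return False                  # conflict: caller must wait or abort

    def release_all(self, txn):
        """Strict 2PL: every lock is released together at commit/abort."""
        for obj in list(self.locks):
            _, holders = self.locks[obj]
            holders.discard(txn)
            if not holders:
                del self.locks[obj]
```

For example, once transaction T1 holds an exclusive lock on an object, T2's shared request is refused until T1 commits and releases its locks.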
82

A framework for monitoring program execution.

Jeffery, Clinton Lewis. January 1993 (has links)
Program execution monitors are used to improve human beings' understanding of program run-time behavior in a variety of important applications such as debugging, performance tuning, and the study of algorithms. Unfortunately, many program execution monitors fail to provide adequate understanding of program behavior, and progress in this area of systems software has been slow due to the difficulty of the task of writing execution monitors. In high-level programming languages the task of writing execution monitors is made more complex by features such as non-traditional control flow and complex semantics. Additionally, in many languages, such as the Icon programming language, a significant part of the execution behavior that various monitors need to observe occurs in the language run-time system code rather than the source code of the monitored program. This dissertation presents a framework for monitoring Icon programs that allows rapid development of execution monitors in the Icon language itself. Monitors have full source-level access to the target program with which to gather and process execution information, without intrusive modification to the target executable. In addition, the framework supports the monitoring of implicit run-time system behavior crucial to program understanding. In order to demonstrate its practicality, the framework has been used to implement a collection of program visualization tools. Program visualization provides graphical feedback about program execution that allows human beings to deal with volumes of data more effectively than textual techniques. Ideally, the user specifies program execution controls in such tools directly in the graphics used to visualize execution, employing the same visual language that is used to render the output. Some monitors that exhibit this characteristic are presented.
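The framework itself is specific to Icon, but its core idea — a monitor that receives execution events from a target program without modifying the target's source — can be sketched with Python's tracing hook. Python stands in for Icon here; the function names are illustrative.

```python
import sys

def run_monitored(target, *args):
    """Run `target`, counting call and line events per function --
    a toy execution monitor built on the interpreter's trace hook."""
    counts = {}  # function name -> {"call": n, "line": n}

    def tracer(frame, event, arg):
        if event in ("call", "line"):
            name = frame.f_code.co_name
            stats = counts.setdefault(name, {"call": 0, "line": 0})
            stats[event] += 1
        return tracer  # keep tracing inside nested frames

    sys.settrace(tracer)
    try:
        result = target(*args)
    finally:
        sys.settrace(None)  # always detach the monitor
    return result, counts

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

Monitoring `fib(5)` reports 15 calls to `fib` without any change to `fib` itself, which is the non-intrusive property the dissertation emphasizes.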
83

FT-SR: A programming language for constructing fault-tolerant distributed systems.

Thomas, Vicraj Timothy January 1993 (has links)
This dissertation focuses on the area of improving programming language support for constructing fault-tolerant systems. Specifically, the design and implementation of FT-SR, a programming language developed for building a wide variety of fault-tolerant systems, is described. FT-SR is based on the concurrent programming language SR and is designed as a set of extensions to SR. A distinguishing feature of FT-SR is the flexibility it provides the programmer in structuring fault-tolerant software. It is flexible enough to be used for structuring systems according to any of the standard fault-tolerance structuring paradigms that have been developed for such systems, including the object/action model, the restartable action paradigm, and the state machine approach. This is especially important in systems building because different structuring paradigms are often appropriate for different parts of the system. This flexibility sets FT-SR apart from other fault-tolerant programming languages which provide language support for the one paradigm that is best suited for the class of applications they choose to support. FT-SR, on the other hand, is suitable for programming a variety of systems and applications. FT-SR derives its flexibility from a programming model based on fail-stop atomic objects. These objects execute operations as atomic actions except when a failure or series of failures cause underlying implementation assumptions to be violated; in this case, notification is provided. This dissertation argues that fail-stop atomic objects are the fundamental building blocks for all fault-tolerant programs. FT-SR provides the programmer with simple fail-stop atomic objects, and mechanisms that allow these fail-stop atomic objects to be composed to form higher-level fail-stop atomic objects that can tolerate a greater number of faults. The mechanisms for composing fail-stop atomic objects are based on standard redundancy techniques. 
This ability to combine the basic building blocks in a variety of ways allows programmers to structure their programs in a manner best suited to the application at hand. FT-SR has been implemented using version 3.1 of the x-kernel and runs standalone on Sun 3s. The implementation is interesting because of the novel algorithms and optimizations used within the language runtime system.
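FT-SR is its own language, but the fail-stop atomic object idea — operations execute atomically, and a detected failure produces notification rather than silent corruption — can be approximated in Python. This is a loose analogy under stated assumptions, not FT-SR's semantics; the class and callback names are hypothetical.

```python
import threading

class FailStopAtomicObject:
    """Illustrative stand-in for a fail-stop atomic object: operations
    run one at a time and all-or-nothing; a failure during an operation
    fail-stops the object and notifies a registered observer."""

    def __init__(self, state, on_failure):
        self._state = dict(state)
        self._lock = threading.Lock()
        self._failed = False
        self._on_failure = on_failure

    def operation(self, update):
        with self._lock:                    # atomicity: serialized operations
            if self._failed:
                raise RuntimeError("object has fail-stopped")
            shadow = dict(self._state)      # all-or-nothing via shadow copy
            try:
                update(shadow)
            except Exception as exc:
                self._failed = True
                self._on_failure(exc)       # failure notification
                raise
            self._state = shadow            # commit only on success

    def read(self, key):
        with self._lock:
            return self._state[key]
```

A failed update leaves the committed state untouched and delivers exactly one notification, mirroring the "notification is provided" behavior described above.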
84

Computer-aided systems engineering methodology support and its effect on the output of structured analysis.

Jankowski, David John. January 1994 (has links)
This dissertation investigates the effects of computer-aided systems engineering (CASE) tool methodology support on the system specification output from structured analysis. A replicated project study was employed to allow for control of the requirements specification. Sixteen groups of four upper-division, undergraduate MIS students developed a system specification from the requirements specification of a hotel information system. The groups developed the specifications by following the Yourdon structured analysis methodology, with the aid of two popular, personal computer-based CASE tools. Both CASE tools claim to support the methodology but the methodology support offered by the two tools is different. Specifically, the size of the rule base varies between the tools, and the implementation mechanism for enforcing a specific methodology rule varies both between tools and between rules. It is hypothesized that the number of violations of a particular methodology rule is a function of the implementation mechanism, i.e., rules that are rigidly enforced will be violated less frequently than rules that are not rigidly enforced or are not a part of the CASE tool's rule base. The results indicate that, regardless of the type of methodology support supplied by the CASE tool, there are very few violations of the methodology rules that apply to the internal consistency of a data flow diagram. However, when the system is examined by verifying the hierarchical consistency of the data flow diagrams, the number of specification errors increases. Further, for some of the methodology rules, the number of violations is proportional to the amount of support provided by the CASE tool. 
One consequence of these results is that rules applying to the hierarchical consistency of data flow diagrams should be enforced by the CASE tool as strictly as possible, to help prevent errors from propagating down to the primitive process specifications and corrupting the construction of structure charts.
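The hierarchical-consistency ("balancing") rule that the study found most error-prone requires the flows into and out of a parent process to match those of its child diagram. A checker for that rule is simple to sketch; the data representation below is hypothetical, not taken from either CASE tool in the study.

```python
def balancing_violations(parent_flows, child_diagram_flows):
    """Report violations of the balancing rule: a child data flow
    diagram must consume/produce exactly its parent process's flows."""
    parent = set(parent_flows)
    child = set(child_diagram_flows)
    return {
        "missing_in_child": sorted(parent - child),
        "extra_in_child": sorted(child - parent),
    }
```

A tool that runs a check like this on every save is "rigid" enforcement in the dissertation's terms; one that only reports violations on demand leaves room for the error counts observed in the study.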
85

Optimization of the packaging of the Mars oxygen manufacturing plant

Santhanam, Venkatesan, 1968- January 1992 (has links)
A complete analysis of the energy losses in the Mars oxygen manufacturing plant is performed. The losses considered are pressure and radiation losses in connections between components, and radiative heat losses from and between the plant's components. These, together with the cost and mass of a library of components, are used to construct an objective function for optimizing the packaging of the plant. A software package has been created to determine component locations in the package such that this objective function is minimized. The package reads input data from a component database system and iterates to the configuration with minimum energy losses. The steepest descent method is used to minimize the nonlinear objective functions describing the energy losses; the cost and mass factors are minimized by brute-force search. The results of the optimization are presented graphically.
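The steepest descent method mentioned above iterates by stepping opposite the gradient of the objective. A minimal sketch on a stand-in quadratic objective follows; the plant's actual loss functions are far more involved, and the step size and stopping test here are illustrative choices.

```python
def steepest_descent(grad, x0, step=0.1, tol=1e-8, max_iter=10_000):
    """Minimize a differentiable function by repeatedly stepping a fixed
    fraction of the way along the negative gradient."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        if sum(gi * gi for gi in g) < tol:          # gradient-norm test
            break
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Stand-in objective: f(x, y) = (x - 3)^2 + 2*(y + 1)^2, minimum at (3, -1)
def grad_f(p):
    x, y = p
    return [2 * (x - 3), 4 * (y + 1)]
```

Brute-force search over discrete cost/mass alternatives would complement this, exactly as the abstract describes: gradient descent for the continuous placement, exhaustive search for the combinatorial component choices.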
86

An object oriented approach to finite element analysis and multi-body dynamic analysis program designs

Sagal, Ellen Jean, 1954- January 1993 (has links)
Procedurally oriented computer programs used to perform finite element and multibody dynamics analyses are difficult to understand, use, and modify. A new approach, object-oriented programming, was used to develop a finite element code that is easier to apply, understand, and modify. Object-oriented code is easier to understand because the characteristics and operations associated with a physical phenomenon are grouped in a class whose structure closely parallels the modeled entity. Elements, bodies, joints, and mechanisms are modeled as classes. Program application is facilitated by the class hierarchy: manipulation of the higher-level body and mechanism classes directs the complicated, lower-level code of element calculations. Lower-level code is hidden in an object library, resulting in a shorter, simpler driver program for an analysis. Modification and expansion of programs is easily accomplished through object-oriented language features such as the modularization of code into classes and overloaded functions. Abstract base classes for bodies and elements provide "templates" for creating new classes used to develop additional analyses.
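The layering described — an abstract element base class, concrete element types, and a higher-level body class that hides assembly — can be sketched briefly. A 1-D spring element keeps the stiffness arithmetic trivial; the class names follow the abstract's terminology but the code is a generic illustration, not the dissertation's program.

```python
from abc import ABC, abstractmethod

class Element(ABC):
    """Abstract base class: a 'template' for new element types."""
    @abstractmethod
    def stiffness(self):   # local stiffness matrix
        ...
    @abstractmethod
    def nodes(self):       # global node numbers this element connects
        ...

class Spring(Element):
    def __init__(self, n1, n2, k):
        self._nodes, self.k = (n1, n2), k
    def stiffness(self):
        return [[self.k, -self.k], [-self.k, self.k]]
    def nodes(self):
        return self._nodes

class Body:
    """Higher-level class: the driver manipulates bodies, and the
    element-level assembly loop stays hidden inside."""
    def __init__(self, n_nodes):
        self.K = [[0.0] * n_nodes for _ in range(n_nodes)]
    def add(self, elem):
        ke, nds = elem.stiffness(), elem.nodes()
        for i, ni in enumerate(nds):
            for j, nj in enumerate(nds):
                self.K[ni][nj] += ke[i][j]   # scatter into global matrix
        return self
```

Adding a new element type means subclassing `Element` and implementing two methods, which is the extension mechanism the abstract credits to abstract base classes.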
87

From calipers to computers: Three-dimensional imaging in forensic anthropology

Ackermann, Rebecca Rogers, 1969- January 1994 (has links)
Forensic anthropology is an applied science sorely lacking in theoretical underpinnings, despite the fact that forensic anthropologists have unique, albeit usually fleeting, access to modern skeletal remains. By constructing a database of three-dimensional images, such remains can be accessed indefinitely. I have proposed a method for just such imaging, using Macintosh hardware and NIH Image software to digitally preserve remains using red-blue three-dimensional imaging techniques. Additionally, I address the qualitative and quantitative accuracy of these images. By creating this type of forensic database, anthropologists can then reformulate outdated methodologies that address issues like populational variance, thereby using modern forensic skeletal remains to better understand some of the fundamental theoretical issues within anthropology.
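The red-blue technique mentioned encodes a stereo pair in a single image: the left view drives the red channel and the right view the blue, so tinted glasses deliver each eye its own view. The thesis used Macintosh hardware and NIH Image; the pixel-level sketch below is a Python stand-in for the channel assignment only.

```python
def anaglyph(left, right):
    """Combine two grayscale views (2-D lists of 0-255 values) into a
    red-blue RGB image: left view -> red channel, right view -> blue."""
    return [
        [(lpx, 0, rpx) for lpx, rpx in zip(lrow, rrow)]
        for lrow, rrow in zip(left, right)
    ]
```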
88

Simulating the Long House Valley: An evaluation of the role of agent-based computer simulation in archaeology

Littler, Matthew Laws, 1973- January 1998 (has links)
This study presents the results of a detailed analysis of an agent-based computer simulation called Artificial Anasazi. The simulation attempts to replicate the population growth and settlement patterns of the prehistoric Kayenta Anasazi of Long House Valley in northeastern Arizona between A.D. 400 and 1300. Agent-based simulations model social evolution from the bottom up, using heterogeneous agents that follow simple rules, in contrast to the top-down computer simulations usually used by archaeologists. Artificial Anasazi is tested against the archaeological record of the real Long House Valley through both qualitative and quantitative methods, and an analysis of the relevant ethnographic information is presented. The ultimate goal of this study is to elucidate the potentials and pitfalls of using agent-based computer simulation as a serious research tool in archaeology.
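The "bottom-up" style the abstract contrasts with top-down simulation can be illustrated with a toy model: individual household agents follow simple subsistence rules, and population history emerges from their aggregate behavior. The rules below are invented for illustration and are far simpler than Artificial Anasazi's environment-driven model.

```python
import random

def simulate(years, n_households, seed=0):
    """Toy bottom-up settlement model: each household farms yearly,
    persists only while its food store stays positive, and may fission
    into a new household when its store runs a surplus."""
    rng = random.Random(seed)
    households = [{"stored": 2.0} for _ in range(n_households)]
    history = []
    for _ in range(years):
        survivors = []
        for h in households:
            harvest = rng.uniform(0.0, 2.0)                   # variable yield
            h["stored"] = min(h["stored"] + harvest - 1.0, 3.0)  # eat 1/yr
            if h["stored"] > 0:
                survivors.append(h)
                if h["stored"] > 2.0 and rng.random() < 0.1:
                    survivors.append({"stored": 1.0})         # fission
        households = survivors
        history.append(len(households))                       # emergent curve
    return history
```

Comparing an emergent population curve like this against an empirical record, year by year, is the kind of qualitative and quantitative test the study applies to Artificial Anasazi.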
89

Side channel attack resistance| Migrating towards high level methods

Borowczak, Mike 21 December 2013 (has links)
<p> Our world is moving towards ubiquitous networked computing with unstoppable momentum. With technology available at our fingertips, we expect to connect quickly, cheaply, and securely on the sleekest devices. While the past four decades of design automation research focused on making integrated circuits smaller, cheaper, and quicker, the past decade has drawn more attention to security. Though security within the scope of computing is a large domain, the focus of this work is the elimination of computation-dependent power byproducts, from high-level device models down to physical designs and implementations. The scope of this dissertation is the analysis, attack, and protection of power-based side channels. Research in the field concentrates on determining, masking, and/or eliminating the sources of data-dependent information leakage within designs. While a significant amount of research has addressed this leakage at low levels of abstraction, significantly less effort has gone into higher levels of abstraction. This dissertation focuses on both ends of the design spectrum while motivating the future need for hierarchical side channel resistance metrics for hardware designs. Current low-level solutions focus on creating perfectly balanced standard cells through various straightforward logic styles. Each of these existing logic styles, while enhancing side channel resistance by reducing the channel's variance, comes at significant design expense in terms of area footprint, power consumption, delay, and even logic structure. The first portion of this dissertation introduces a universal cell based on a dual multiplexer, implemented in pass-transistor logic, which approaches and in some cases exceeds standard cell cost benchmarks.
The proposed cell- and circuit-level methods show significant improvements in security metrics over existing cells, and approach standard CMOS cell and circuit performance by reducing area, power consumption, and delay. While most low-level work stops at the cell level, this work also investigates the impact of environmental factors on security. At the other end of the design spectrum, existing secure architecture and algorithm research attempts to mask side channels through random noise, variable timing, instruction reordering, and other similar methods, obfuscating the primary source of leaked information. Unfortunately, in most cases these techniques remain susceptible to attack, and of those with promise, most are algorithm-specific. This dissertation approaches high-level security by eliminating the relationship between high-level side channel models and the side channels themselves. Two solutions targeting architecture-level protection are discussed: the first deals with the protection of finite state machines (FSMs), while the second deals with the protection of a class of cryptographic algorithms that use feedback shift registers. The dissertation also includes methods for reducing the power overhead of any FSM circuit (secured or not). The solutions proposed herein render potential side channel models moot by eliminating or reducing the models' data-dependent variability. Designers unwilling to accept a doubling of area can still add some sub-optimal security to their devices. </p>
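Power side channels are commonly modeled as leaking the Hamming weight of values on a bus, and balanced cells like the one described suppress this by driving each signal alongside its complement (dual-rail encoding), so the total weight is data-independent. A model-level sketch of that effect, not of the dissertation's cell itself:

```python
def hamming_weight(x):
    """Number of set bits: a standard model of data-dependent power draw."""
    return bin(x).count("1")

def dual_rail(x, bits=8):
    """Encode each bit b as the pair (b, not b); the combined word's
    Hamming weight is then constant for every input value."""
    mask = (1 << bits) - 1
    complement = ~x & mask
    return (x << bits) | complement

# Single-rail weight varies with the data -- the side channel...
leak = [hamming_weight(v) for v in (0x00, 0x0F, 0xFF)]
# ...while the dual-rail weight is identical for every value.
balanced = [hamming_weight(dual_rail(v)) for v in (0x00, 0x0F, 0xFF)]
```

Zero variance in the model is exactly the "reducing the channel's variance" goal the abstract attributes to balanced standard cells; the roughly doubled wire count is the area cost it mentions.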
90

Understanding and Rejecting Errant Touches on Multi-touch Tablets

Shu, Ke 28 December 2013 (has links)
<p> Given the pervasiveness of multi-touch tablets, pen-based applications have rapidly moved onto this new platform. Users draw both with bare fingers and with capacitive pens, as they once did on paper. Unlike paper, however, these tablets cannot distinguish legitimate finger/pen input from accidental touches by other parts of the user's hand. In this thesis, we refer to this as the errant touch rejection problem, since users may unintentionally touch the screen with other parts of their hand. </p><p> In this thesis, I design, implement, and evaluate a new approach, bezel-focus rejection, for preventing errant touches on multi-touch tablets. I began the research by conducting a formal study to collect and characterize errant touches. The data collected from the study guided the design of the rejection techniques. I conclude the research by developing bezel-focus rejection and evaluating its performance. The results show that bezel-focus rejection yields a high rejection rate for errant touches and makes users more inclined to rest their hands on the tablet than the comparison techniques do. </p><p> This research makes two major contributions to the Human-Computer Interaction (HCI) community. First, my proposed errant touch rejection approach can be applied to other pen-based note-taking applications. Second, my experimental results can serve as a guide to others developing similar techniques.</p>
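A rejection policy of this general family can be sketched as a classifier over touch geometry: accept contacts only inside a user-designated focus region and reject palm-sized blobs. The thresholds and the focus-region rule below are illustrative assumptions, not the thesis's actual parameters.

```python
def accept_touch(x, y, major_axis_mm, focus_region, palm_axis_mm=12.0):
    """Heuristic errant-touch filter for a pen/finger note-taking app.

    focus_region: (x0, y0, x1, y1) box the user designated (e.g. via a
    bezel gesture); contacts outside it, or contacts whose major axis
    is palm-sized, are treated as errant and rejected.
    """
    if major_axis_mm >= palm_axis_mm:          # palm-sized contact blob
        return False
    x0, y0, x1, y1 = focus_region
    return x0 <= x <= x1 and y0 <= y <= y1     # must land in the focus box
```

Evaluating such a filter against labeled touch logs, as the thesis's formal study enables, gives the rejection-rate figures used to compare techniques.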
