291

The Design and Implementation of a Library Data Base

Papanastasopoulos, Constantine 09 1900 (has links)
<p>This project implements an interactive system that allows users to interrogate the hierarchical program Library Index.</p> <p>Its major components are the transformation of the existing index into the required database and the design and implementation of the interactive query-answer system.</p> <p>The system includes the schema, the sub-schemas, and the required application programs.</p> / Master of Science (MS)
292

GRAPHICS APPLICATIONS OF EMERGENT BEHAVIOR OF NATURE-INSPIRED MODELS

Bushra Ferdousi (18359268) 12 April 2024 (has links)
<p dir="ltr">Nature-inspired models are an exciting and innovative area of research that explores the patterns and behaviors found in organic systems. These models demonstrate emergent behaviors that result in naturalistic patterns similar to those found in nature, making them widely applicable in various graphics applications and visualization techniques.</p><p dir="ltr">In the literature review, the behavior and structure of each nature-inspired model applied in computational art, graphics techniques, and visualization are described in detail. The taxonomy developed through the analysis of the similarities and differences among these models guides the research approach toward two specific nature-inspired models: Physarum and Differential Growth.</p><p dir="ltr">The Physarum model is implemented as a particle system in graphics applications, which allows unique behaviors to emerge. These behaviors are similar to the social conflict behavior observed in artificial life systems. An extension of the Physarum model with Reaction-Diffusion texture generation produces patterns similar to those found in structures such as seashells and angelfish.</p><p dir="ltr">Differential growth is simulated in a particle system coupled with a vector field, creating interactive software for pattern formation. This software enables users to adjust the parameters of the vector field and the differential growth to create patterns observed in organic systems, such as kale leaves. The research aims to determine whether this software is understandable and usable enough for users to create patterns effectively.</p><p dir="ltr">The taxonomy developed in this study is a valuable resource for researchers, computational artists, and programmers to experiment with nature-inspired models governed by complex rules that drive pattern formation. These models can be applied in graphics techniques such as animation, texture mapping, and artistic designs for exploration purposes.</p><p dir="ltr">In conclusion, nature-inspired models have proven to be an innovative and effective way to create naturalistic patterns in various graphics applications and visualization techniques. The research conducted in this study provides valuable insights into the behavior and structure of these models and how they can be developed further to create new and exciting designs.</p>
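The Physarum model described above is commonly implemented as a sense-rotate-move-deposit loop over particle agents on a shared trail map. The sketch below follows that general agent scheme; all parameter values and names are illustrative assumptions, not the thesis's actual settings.

```python
import math
import random

class PhysarumAgent:
    """One Physarum particle: senses the trail ahead, turns toward it,
    moves forward, and deposits trail at its new position."""
    def __init__(self, x, y, heading):
        self.x, self.y, self.heading = x, y, heading

def step(agent, trail, sensor_angle=math.pi / 4, sensor_dist=3.0,
         turn_angle=math.pi / 8, speed=1.0, deposit=1.0):
    h, w = len(trail), len(trail[0])

    def sense(offset):
        # Sample the trail map at a sensor offset from the heading,
        # wrapping around the grid edges (toroidal domain).
        a = agent.heading + offset
        sx = int(agent.x + sensor_dist * math.cos(a)) % w
        sy = int(agent.y + sensor_dist * math.sin(a)) % h
        return trail[sy][sx]

    left, front, right = sense(sensor_angle), sense(0.0), sense(-sensor_angle)

    # Steer toward the strongest trail concentration.
    if front >= left and front >= right:
        pass                                   # keep current heading
    elif left > right:
        agent.heading += turn_angle
    elif right > left:
        agent.heading -= turn_angle
    else:
        agent.heading += random.choice((-turn_angle, turn_angle))

    # Move forward on the toroidal grid and deposit trail there.
    agent.x = (agent.x + speed * math.cos(agent.heading)) % w
    agent.y = (agent.y + speed * math.sin(agent.heading)) % h
    trail[int(agent.y)][int(agent.x)] += deposit
```

Emergent network-like patterns arise when many such agents run concurrently and the trail map is periodically diffused and evaporated between steps.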
293

Multi-resolution computer architecture simulation

Huey, Steven Joseph 01 July 2001 (has links)
No description available.
294

Packet loss in the cognitive packet network

Gellman, Michael 01 January 2002 (has links)
Packet loss affects the ability of the network to satisfy the needs of its users. Packet loss can occur either because of congestion in the network or because of transmission errors. It is important to reduce packet loss because both real-time and TCP-based applications are sensitive to it. The network should therefore support some mechanism for reducing the loss experienced by its applications. This thesis presents a method for incorporating loss measurements into the Cognitive Packet Network. The idea of a Cumulative Loss is considered, which is defined as the loss from the source to the destination through the next hop. In addition, a method is presented that allows loss and delay to be combined as QoS constraints. These modifications are then tested to validate their implementation.
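The cumulative-loss idea can be sketched numerically: per-link loss probabilities compose multiplicatively along a path (a packet arrives only if it survives every hop), and loss and delay can then be folded into one scalar goal. Both functions below are illustrative assumptions, not the thesis's exact CPN formulation.

```python
def path_loss(link_losses):
    """Cumulative loss probability from source to destination:
    a packet is delivered only if it survives every hop."""
    survive = 1.0
    for loss in link_losses:
        survive *= (1.0 - loss)
    return 1.0 - survive

def qos_cost(loss, delay, alpha=0.5):
    """Combine loss and (normalized) delay into a single QoS goal.
    The weighting alpha is a hypothetical parameter for illustration."""
    return alpha * loss + (1.0 - alpha) * delay
```

For example, two hops each losing 10% of packets yield a cumulative loss of 19%, not 20%, because only packets surviving the first hop can be lost at the second.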
295

A postprocessor for static and dynamic finite element analysis

Chen, Jianming, 1959- January 1989 (has links)
A user-controlled interactive computer graphics postprocessor for two-dimensional static and dynamic finite element analysis is developed. This postprocessor is a menu-driven interactive program that supports more than 50 graphics devices. It can manipulate the original finite element mesh data, displacement, stress, strain, and up to four other values such as temperature. The user can choose any of the following methods to display the values: deformed mesh, vector flow, color contours, or curved contours. With this postprocessor, an improved contouring algorithm is proposed specifically for the finite element method. This algorithm uses the same isoparametric element representation as the analysis stage, which means the contour curves are accurate provided the nodal values are accurate, and the values inside the element can be interpolated by the element shape functions. The algorithm thus provides continuity of the same order as that of the shape functions used in the finite element analysis.
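The contouring idea rests on evaluating the same shape functions used in the analysis. For a 4-node isoparametric quadrilateral, the standard bilinear shape functions interpolate a nodal field anywhere inside the element; this is textbook FEM interpolation, not code from the thesis.

```python
def quad4_shape(xi, eta):
    """Bilinear shape functions of a 4-node isoparametric quad,
    evaluated at natural coordinates (xi, eta) in [-1, 1] x [-1, 1]."""
    return [
        0.25 * (1 - xi) * (1 - eta),   # node 1 at (-1, -1)
        0.25 * (1 + xi) * (1 - eta),   # node 2 at (+1, -1)
        0.25 * (1 + xi) * (1 + eta),   # node 3 at (+1, +1)
        0.25 * (1 - xi) * (1 + eta),   # node 4 at (-1, +1)
    ]

def interpolate(nodal_values, xi, eta):
    """Field value inside the element: sum of N_i(xi, eta) * value_i.
    Contours traced from this field inherit the shape functions'
    order of continuity, matching the analysis representation."""
    return sum(n * v for n, v in zip(quad4_shape(xi, eta), nodal_values))
```

A contour curve for level c is then the locus where `interpolate(values, xi, eta) == c`, traced per element in natural coordinates and mapped back to physical space.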
296

Functional description and formal specification of a generic gateway.

Son, Chang Won. January 1988 (has links)
This dissertation is concerned with the design of a generic gateway that provides interoperability between dissimilar computer networks. The generic gateway is decomposed into subnetwork-dependent blocks and subnetwork-independent blocks. A subnetwork-dependent block is responsible for communicating with subnetwork nodes; a subnetwork-independent block is responsible for interconnecting the subnetwork-dependent blocks. Communication between the subnetwork-dependent and subnetwork-independent blocks takes place through service access points, which are defined independently of any specific subnetwork. A formal specification of the generic gateway is given in LOTOS. The specification is tested by a verifiable test method proposed in this dissertation, and its correctness is verified while the specified model is simulated. The major difference between conventional simulation and the verifiable test lies in the objective of the simulation: in the verifiable test method, semantic properties are examined during the simulation process. The tester can be either a human observer or another process.
297

GRAPHICS TERMINAL EMULATION ON THE PC

Noll, Noland LeRoy, 1958- January 1987 (has links)
The HP2623 graphics terminal emulator is implemented on the PC for use with the Starbase graphics package provided on the departmental HP9000 series 500 computer system. This paper discusses the development and implementation of this emulator. A demonstration of its compatibility with Starbase is also provided along with a users' manual and a programmers' reference.
298

MLM graphics : the creation of a software framework for graphical applications / Maranda L. Miller graphics / Creation of a software framework for graphical applications

Miller, Maranda L. January 2000 (has links)
This thesis describes the process of writing a software application geared toward developing computer graphics in the Windows environment. The code is written using Visual C++ and the Microsoft Foundation Classes (MFC). As an illustration of this process we will walk through the development of a software application. This application will allow a user to create and edit an image composed of simple line graphics and geometric shapes. The user can select drawing colors, select drawing styles, and do area filling. This application also illustrates the use of menus and dialog boxes. / Department of Computer Science
299

A design for sensing the boot type of a trusted platform module enabled computer

Vernon, Richard C. 09 1900 (has links)
Modern network technologies were not designed for high assurance applications. As the DOD moves toward implementing the Global Information Grid (GIG), hardened network architectures will be required; the Monterey Security Architecture (MYSEA) is one such project. This work addresses the issue of object reuse as it pertains to volatile memory in untrusted MYSEA clients. When a MYSEA client changes confidentiality levels, it is possible that classified material remains in volatile system memory. If the system is not power-cycled before the next login, an attacker could retrieve sensitive information from the previous session. This thesis presents a conceptual design to protect against such an attack. A processor may undergo a hard or a soft reboot. The proposed design uses a secure coprocessor to sense the reboot type of the host platform and to keep a count of the number of hard reboots the host platform has undergone. Using services provided by the secure coprocessor, the host platform can attest in a trustworthy manner to a remote entity that it has undergone a hard reboot. This addresses the MYSEA object reuse problem. The design was tested using the CPU simulator software SimpleScalar.
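The coprocessor's role can be sketched in software: it records each boot's type, maintains a monotonic hard-reboot counter, and signs attestations of that state for a remote verifier. The interface and HMAC-based signing below are hypothetical illustrations, not the thesis's actual TPM design.

```python
import hashlib
import hmac

class SecureCoprocessor:
    """Sketch of a coprocessor that tracks boot types and attests to them.
    The key would live in tamper-resistant hardware; here it is a plain
    bytes value for illustration only."""
    def __init__(self, key):
        self._key = key
        self.hard_reboots = 0     # monotonic hard-reboot counter
        self.last_boot = None

    def record_boot(self, kind):
        """Sense the reboot type; only hard reboots clear volatile memory,
        so only they advance the counter."""
        assert kind in ("hard", "soft")
        self.last_boot = kind
        if kind == "hard":
            self.hard_reboots += 1

    def attest(self, nonce):
        """Return a statement of the last boot type and counter, bound to
        the verifier's nonce and signed with the coprocessor key."""
        msg = f"{self.last_boot}:{self.hard_reboots}:{nonce}".encode()
        return msg, hmac.new(self._key, msg, hashlib.sha256).digest()

def verify(key, msg, tag):
    """Remote entity's check that the attestation is authentic."""
    expected = hmac.new(key, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

A remote MYSEA server would accept a session only if the attested last boot was "hard", i.e. volatile memory from the previous confidentiality level was actually cleared.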
300

Repurposing Software Defenses with Specialized Hardware

Sinha, Kanad January 2019 (has links)
Computer security has largely been the domain of software for the last few decades. Although this approach has been moderately successful during this period, its problems have started becoming more apparent recently for one primary reason: performance. Software solutions typically exact a significant toll in terms of program slowdown, especially when applied to large, complex software. In the past, when chips became exponentially faster, this growing burden could be accommodated almost for free. But as Moore's law winds down, security-related slowdowns become more apparent and increasingly intolerable, and the defenses responsible for them are subsequently abandoned. As a result, the community has started looking elsewhere for continued protection, as attacks continue to become progressively more sophisticated. One way to mitigate this problem is to complement these defenses in hardware. Despite lacking the semantic perspective of high-level software, specialized hardware is typically not only faster but also more energy-efficient. However, hardware vendors also have to factor in the cost of integrating security solutions from the perspective of effectiveness, longevity, and cost of development, while allaying customers' concerns about performance. As a result, although numerous hardware solutions have been proposed in the past, the fact that so few of them have actually transitioned into practice implies that they were unable to strike an optimal balance among these qualities. This dissertation proposes the thesis that it is possible to add hardware features that complement and improve program security, traditionally provided by software, without requiring extensive modifications to existing hardware microarchitecture. As such, it marries the concerns not only of users and software developers, who demand performant but secure products, but also of hardware vendors, since implementation simplicity directly reduces the time and cost of development and deployment.
To support this thesis, this dissertation discusses two hardware security features that secure program code and data, respectively, details their full-system implementations, and presents a study of a negative result in which the design was deemed practically infeasible given its high implementation complexity. First, the dissertation discusses code protection by reviving instruction set randomization (ISR), an idea originally proposed for countering code injection and considered impractical in the face of modern attack vectors that reuse existing program code (code reuse attacks). With Polyglot, we introduce ISR with strong AES encryption along with basic code randomization that disallows code decryption at runtime, thus countering most state-of-the-art dynamic code reuse attacks, which read the code at runtime before building the code reuse payload. Through various optimizations and corner-case workarounds, we show how Polyglot enables code execution with minimal hardware changes while maintaining a small attack surface and incurring nominal overheads, even when the code is strongly encrypted in the binary and in memory. Next, the dissertation presents REST, a hardware primitive that allows programs to mark memory regions invalid for regular memory accesses. This is achieved simply by storing a large, predetermined random value at those locations with a special store instruction and then detecting incoming values at the data cache that match the predetermined value. We then show how this primitive can be used to protect data from common forms of spatial and temporal memory safety attacks. Notably, because of the simplicity of the primitive, REST requires only trivial microarchitectural modifications and is hence easy to implement, and it exhibits negligible performance overheads. Additionally, we demonstrate how it provides practical heap safety even for legacy binaries.
For the above proposals, we also detail hardware implementations on FPGAs and discuss how each fits within a complete multiprocess system. This gives the reader an idea of usage and deployment challenges on a broader scale, beyond the technique's effectiveness within the context of a single program. Lastly, the dissertation discusses an alternative to the conventional virtual address space that randomizes the sequence of addresses in a manner invisible even to the program, thus achieving transparent randomization of the entire address space at a very fine granularity. The biggest challenge is to achieve this with minimal microarchitectural changes while still accommodating the program's linear data structures (e.g., arrays and structs), which are fundamentally based on a linear address space. This modified address space subsumes the benefits of most other spatial randomization schemes, with the additional benefit of ideally making traversal from one data structure to another impossible. Our study of this idea concludes that, although valuable, it is outweighed by current memory safety techniques, which are cheaper to implement and secure enough that there is no perceivable use case for this model of address space safety.
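The REST primitive described in this abstract can be illustrated in software: a memory region is "tokenized" by filling it with a fixed, pre-determined random value, and any regular load or store that hits the token trips a fault. This is a behavioral sketch only; in the real design the comparison happens in the data cache and the token is written by a dedicated store instruction.

```python
import secrets

# Large pre-determined random value; a regular program value colliding
# with a fresh 64-bit token is astronomically unlikely.
TOKEN = secrets.randbits(64)

class RestMemory:
    """Word-addressed memory in which tokenized words are invalid for
    regular accesses, mimicking REST red zones around heap objects."""
    def __init__(self, size):
        self.words = [0] * size

    def arm(self, lo, hi):
        """Special store: mark words [lo, hi) invalid with the token."""
        for i in range(lo, hi):
            self.words[i] = TOKEN

    def load(self, i):
        if self.words[i] == TOKEN:      # the cache-side match in hardware
            raise MemoryError(f"REST violation: load from red zone at {i}")
        return self.words[i]

    def store(self, i, value):
        if self.words[i] == TOKEN:
            raise MemoryError(f"REST violation: store to red zone at {i}")
        self.words[i] = value
```

Arming a red zone on each side of a heap allocation turns a linear buffer overflow into an immediate fault at the zone boundary, which is how such a primitive yields heap safety even for unmodified legacy binaries.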
