About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
531

Reasoning about imperative and higher-order programs : a dissertation /

Koutavas, Vasileios. 2008 (has links)
Thesis (Ph. D.)--Northeastern University, 2008. / Title from title page (viewed March 24, 2009). College of Computer and Information Science. Includes bibliographical references (p. 163-171).
532

A comparative study of the Linux and Windows device driver architectures with a focus on IEEE 1394 (high speed serial bus) drivers /

Tsegaye, Melekam Asrat. January 2002 (has links)
Thesis (M. Sc. (Computer Science))--Rhodes University, 2004.
533

The design and implementation of a robust, cost-conscious peer-to-peer lookup service

Harvesf, Cyrus Mehrabaun. January 2008 (has links)
Thesis (Ph.D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009. / Committee Chair: Blough, Douglas; Committee Member: Liu, Ling; Committee Member: Owen, Henry; Committee Member: Riley, George; Committee Member: Yalamanchili, Sudhakar. Part of the SMARTech Electronic Thesis and Dissertation Collection.
534

The design, construction, and implementation of an engineering software command processor and macro compiler /

Coleman, Jesse J. January 1995 (has links)
Thesis (M.S.)--Rochester Institute of Technology, 1995. / Typescript. Includes bibliographical references (leaves 186-187).
535

Integrated species distribution modelling system : a user-friendly front end to the GARP modelling toolkit

Sutton, T. P. 2004 (has links)
Thesis (MA)--Stellenbosch University, 2004. / ENGLISH ABSTRACT: At a social, ecological and biological level it is important that we gain a better understanding of species distribution and the constraints to species distribution. Various modelling tools and approaches are available to provide this type of functionality. The GARP (Genetic Algorithm for Rule set Production) Modelling System (GMS) was selected because of its strong predictive modelling abilities and its ability to represent the results of model iterations in both a tabular and cartographic manner. A shortcoming in this system was identified in that it requires strong information technology skills in order to carry out the modelling process. This can be attributed to the lack of a user-friendly interface to the system. In order to address this, a loosely coupled system was developed that provides an easy-to-use web-based front end to the GMS. This Integrated Modelling System extends the core functionality of the GMS by providing detailed history for each analysis, allowing fine tuning of the modelling process, integrating directly with a biodiversity database containing specimen observations, and providing a simple 'wizard' interface to the modelling process. / AFRIKAANSE OPSOMMING (translated): From a social, ecological and biological standpoint it is important that we understand species distribution and its constraints. A variety of software packages and methodologies are available for modelling species distribution. The GARP (Genetic Algorithm for Rule set Production) software was used for its strong predictive ability and its capacity for cartographic and tabular presentation of model results. A shortcoming of this system was identified: it is not user-friendly, and users need strong information technology skills. To address this shortcoming, a software program was designed that makes use of GARP through a web browser. This integrated system builds on the basic functionality of GARP to create a working environment that stores a detailed history of each model, allows fine control over the model, links directly with a biodiversity database, and makes use of a simple 'wizard' system to determine user options.
536

An investigation into XSets of primitive behaviours for emergent behaviour in stigmergic and message passing ant-like agents

Chibaya, Colin January 2014 (has links)
Ants are fascinating creatures - not so much because they are intelligent on their own, but because as a group they display compelling emergent behaviour (the extent to which one observes features in a swarm which cannot be traced back to the actions of individual swarm members). What does each swarm member do that allows deliberate engineering of emergent behaviour? We investigate the development of a language for programming swarms of ant agents towards desired emergent behaviour. Five aspects of stigmergic ant agents (pheromone-sensitive computational devices in which a non-symbolic form of communication arises, mediated indirectly via the environment) and message passing ant agents (computational devices which rely on implicit communication spaces in which direction vectors are shared one-on-one) are studied. First, we investigate the primitive behaviours which characterize ant agents' discrete actions at the individual level. Ten such primitive behaviours are identified as candidate building blocks of the ant agent language sought. We then study mechanisms by which primitive behaviours are put together into XSets (collections of primitive behaviours, parameter values, and meta information which spells out how and when primitive behaviours are used). Various permutations of XSets are possible, and these define the search space for best-performer XSets for particular tasks. Genetic programming principles are proposed as a search strategy for best-performer XSets that allow particular emergent behaviour to occur. XSets in the search space are evolved over successive genetic generations and tested for their ability to allow path finding (as proof of concept). XSets are ranked according to the indices of merit (fitness measures which indicate how well XSets allow particular emergent behaviour to occur) they achieve. Best-performer XSets for the path finding task are identified and reported. We validate the results yielded when best-performer XSets are used with regard to normality, correlation, similarities in variation, and similarities between mean performances over time. Commonly, the simulation results pass most statistical tests. The last aspect we study is the application of best-performer XSets to different problem tasks. Five experiments are administered in this regard. The first experiment assesses XSets' ability to allow multiple-target location (ant agents' ability to locate continuous regions of targets), and finds that best-performer XSets are problem independent. However, both categories of XSets are sensitive to changes in agent density. We test the influence of individual primitive behaviours and the effect of the sequence of primitive behaviours on the indices of merit of XSets, and find that most primitive behaviours are indispensable, especially when specific sequences are prescribed. The effect of pheromone dissipation on the indices of merit of stigmergic XSets is also scrutinized. Precisely, dissipation is not causal; rather, it enhances convergence. Overall, this work successfully identifies the discrete primitive behaviours of stigmergic and message passing ant-like devices, and puts these primitive behaviours together into XSets which characterize a language for programming ant-like devices towards desired emergent behaviour. The XSets approach is a new ant language representation with which a wider domain of emergent tasks can be resolved.
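Below is a minimal illustrative sketch of the kind of genetic search over XSets the abstract describes: candidate collections of primitive behaviours are evolved and ranked by an index of merit. The primitive names, the fitness function, and the genetic-operator settings are all invented stand-ins for this example, not the actual primitives or parameters from the thesis.

```python
# Toy genetic search over "XSets" (ordered collections of primitive
# behaviours), ranked by an index of merit. All names and settings here
# are hypothetical illustrations, not the thesis's actual primitives.
import random

PRIMITIVES = ["move_forward", "turn_left", "turn_right",
              "drop_pheromone", "follow_gradient", "random_walk"]

def random_xset(size=4):
    return [random.choice(PRIMITIVES) for _ in range(size)]

def index_of_merit(xset):
    # Placeholder fitness: in the thesis this would measure how well a
    # swarm running the XSet exhibits the target emergent behaviour
    # (e.g. path finding); here we simply reward pheromone-oriented mixes.
    return sum(1.0 for p in xset if p in ("follow_gradient", "drop_pheromone"))

def evolve(pop_size=30, generations=50):
    population = [random_xset() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=index_of_merit, reverse=True)
        parents = population[: pop_size // 2]          # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))          # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:                  # point mutation
                child[random.randrange(len(child))] = random.choice(PRIMITIVES)
            children.append(child)
        population = parents + children
    return max(population, key=index_of_merit)

print(evolve())
```

In the thesis, the index of merit is obtained by running a swarm simulation for each candidate XSet; the placeholder fitness above merely lets the sketch run standalone.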
537

Training support vector machines with particle swarms

Paquet, Ulrich 06 August 2007 (has links)
Particle swarms can easily be used to optimize a function with a set of linear equality constraints, by restricting the swarm’s movement to the constrained search space. A “Linear Particle Swarm Optimiser” and a “Converging Linear Particle Swarm Optimiser” are developed to optimize linear equality-constrained functions. It is shown that if the entire swarm of particles is initialized to consist of only feasible solutions, then the swarm can optimize the constrained objective function without ever again considering the set of constraints. The Converging Linear Particle Swarm Optimiser overcomes the Linear Particle Swarm Optimiser’s possibility of premature convergence. Training a Support Vector Machine requires solving a constrained quadratic programming problem, and the Converging Linear Particle Swarm Optimiser ideally fits the needs of an optimization method for Support Vector Machine training. Particle swarms are intuitive and easy to implement, and are presented as an alternative to current numeric Support Vector Machine training methods. / Dissertation (MSc)--University of Pretoria, 2007. / Computer Science / Unrestricted
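The abstract's central observation, that a feasibly initialized swarm need never re-check its linear equality constraints, can be illustrated with a short sketch. The objective function, coefficients, and constraint below are invented for demonstration and are not the LPSO/CLPSO implementation from the dissertation.

```python
# Sketch of the feasibility-preservation idea: start every particle on the
# hyperplane sum(x) = c (the kind of equality constraint that appears in
# the SVM dual) and build velocities only from differences of feasible
# points, so position updates never leave the constraint surface.
import numpy as np

rng = np.random.default_rng(0)
n, dim, c = 20, 5, 1.0

x = rng.random((n, dim))
x *= c / x.sum(axis=1, keepdims=True)   # initialize on sum(x) = c
v = np.zeros((n, dim))                  # zero initial velocity is feasible

def f(p):                               # toy objective in place of the SVM dual
    return np.sum((p - 0.2) ** 2)

pbest = x.copy()
gbest = x[np.argmin([f(p) for p in x])].copy()

for _ in range(200):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    # Each term is a scaled difference of feasible points, so every row of
    # v sums to zero and x + v stays on the hyperplane (up to float error).
    v = 0.7 * v + 1.4 * r1 * (pbest - x) + 1.4 * r2 * (gbest - x)
    x = x + v
    for i in range(n):
        if f(x[i]) < f(pbest[i]):
            pbest[i] = x[i]
            if f(x[i]) < f(gbest):
                gbest = x[i].copy()

print(gbest, gbest.sum())               # gbest still satisfies sum(x) = c
```

Because each velocity term is a difference of two feasible points, the swarm stays on the constraint hyperplane without any explicit projection step, which is the property the abstract highlights.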
538

Particle swarms in sizing and global optimization

Schutte, Jaco Francois 13 August 2007 (has links)
Please read the abstract in the section 00front of this document / Dissertation (MEng (Mechanical Engineering))--University of Pretoria, 2007. / Mechanical and Aeronautical Engineering / MEng / unrestricted
539

Hardware-Software Co-design for Practical Memory Safety

Hassan, Mohamed January 2022 (has links)
A vast amount of software, from low-level systems code to high-performance applications, is written in memory-unsafe languages such as C and C++. The lack of memory safety in C/C++ can lead to severe consequences; a simple buffer overflow can result in code or data corruption anywhere in the program memory. The problem is even worse in systems that constantly operate on inputs of unknown trustworthiness. For example, in 2021 a memory safety vulnerability was discovered in sudo, a near-ubiquitous utility available on major Unix-like operating systems. The vulnerability, which remained silent for over 10 years, allows any unprivileged user to gain root privileges on a victim machine using a default sudo configuration. As memory-safe languages are unlikely to displace C/C++ in the near future, efficient memory safety mechanisms for both existing and future C/C++ code are needed. Both industry and academia have proposed various techniques to address the C/C++ memory safety problem over the last three decades, either by software-only or hardware-assisted solutions. Software-only techniques such as Google’s AddressSanitizer are used to detect memory errors during the testing phase before products are shipped. While sanitizers have been shown to be effective at detecting memory errors with little effort, they typically suffer from high runtime overheads and increased memory footprint. Hardware-assisted solutions such as Oracle’s Application Data Integrity (ADI) and ARM’s Memory Tagging Extension (MTE) have much lower performance overheads, but they do not offer complete protection. Academic proposals manage to minimize the performance costs of memory safety defenses while maintaining fine-grained security protection. Unfortunately, state-of-the-art solutions require complex metadata that increases the program memory footprint, complicates the hardware design, and breaks compatibility with the rest of the system (e.g., unprotected libraries). To address these problems, the research within this thesis innovates in the realm of compiler transformations and hardware extensions to improve the state of the art in memory safety solutions. Specifically, this thesis shows that leveraging common software trends and rethinking computer microarchitectures can efficiently circumvent the problems of traditional memory safety solutions for C and C++. First, I present a novel cache line formatting technique, dubbed Califorms. Califorms builds on a concept called memory blocklisting, which prohibits a program from accessing certain memory regions based on program semantics. State-of-the-art hardware-assisted memory blocklisting, while much faster than software blocklisting, creates memory fragmentation for each use of the blocklisted location. To prevent this issue, Califorms encodes the metadata, which is used to identify the blocklisted locations, in the blocklisted (i.e., dead) locations themselves. This inlined metadata can then be integrated into the microarchitecture by changing the cache line format. As a result, both the metadata and data are fetched together, eliminating the need for extra memory accesses. Hence, Califorms reduces the performance overheads of memory safety while providing byte-granular protection and maintaining very low hardware overheads. Second, I explore how leveraging common software trends can reduce the performance and memory costs of memory permitlisting (also known as base & bounds). Thus, I present No-FAT, a novel technique for enforcing spatial and temporal memory safety.
The key observation that enables No-FAT is the increasing adoption of binning allocators. No-FAT, when used with a binning allocator, is able to implicitly derive an allocation’s bounds information (i.e., the base address and size) from the pointer itself without relying on expensive metadata. Moreover, as No-FAT’s memory instructions are aware of allocation bounds information, No-FAT effectively mitigates certain speculative attacks (e.g., Spectre-V1, which is also known as bounds checking bypass) with no additional cost. While No-FAT successfully detects memory safety violations, it falls short against physical attacks. Hence, I propose C-5, an architecture that complements No-FAT with strong data encryption. C-5 strictly uses access control in the L1 cache and encrypts program data at the L1-L2 cache interface. As a result, C-5 mitigates both in-process and physical attacks without burdening system performance. In addition to memory blocklisting and permitlisting, a cost-effective way to alleviate memory safety threats is to deploy exploit mitigation techniques (e.g., Intel’s CET and ARM’s PAC). Unfortunately, current exploit mitigations offer incomplete security protection in order to save on performance. This thesis investigates potential opportunities to boost the security guarantees of exploit mitigations while maintaining their low overheads. Thus, I present ZeRØ, a hardware primitive that preserves pointer integrity at no performance cost, effectively mitigating pointer manipulation attacks such as ROP, COP, JOP, COOP, and DOP. ZeRØ proposes unique memory instructions and a novel metadata encoding scheme to protect code and data pointers from memory safety violations. The combination of instructions and metadata allows ZeRØ to avoid explicitly tagging every word in memory. On 64-bit systems, ZeRØ encodes the pointer type and location in the currently unused upper pointer bits. In this way, ZeRØ reduces the performance overheads of enforcing pointer integrity to zero while requiring simple hardware modifications. Finally, although current mitigation techniques excel at providing efficient protection for high-end devices, they typically suffer from significant performance and energy overheads when ported to the embedded domain. As a result, there is a need for new defenses that (1) have low overheads, (2) provide high security coverage, and (3) are designed specifically for embedded devices. To achieve these goals, I present EPI, an efficient pointer integrity mechanism that is tailored to microcontrollers and embedded devices. Similar to ZeRØ, EPI assigns unique tags to different program assets and uses unique memory instructions for accessing them. However, EPI uses a 32-bit-friendly encoding scheme to inline the tags within the program data. EPI introduces runtime overheads of less than 1%, making it viable for embedded and low-resource systems.
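As a rough sketch of the upper-bit pointer tagging idea attributed to ZeRØ above, the fragment below encodes a small type tag in the otherwise unused top bits of a 64-bit address. The field layout (a 4-bit tag at bits 60-63) and the tag values are assumptions made for illustration; ZeRØ's actual encoding and instruction semantics are defined in the thesis, not here.

```python
# Illustration of storing a pointer-type tag in the unused upper bits of a
# 64-bit pointer. The layout below is an assumed example, not ZeRØ's
# published encoding.
TAG_SHIFT = 60
TAG_MASK = 0xF << TAG_SHIFT
ADDR_MASK = (1 << TAG_SHIFT) - 1

TAG_DATA, TAG_CODE_PTR, TAG_DATA_PTR = 0x0, 0x1, 0x2

def tag_pointer(addr, tag):
    assert addr & TAG_MASK == 0, "address must fit below the tag bits"
    return addr | (tag << TAG_SHIFT)

def check_and_strip(ptr, expected_tag):
    # In hardware this check would happen inside a tag-aware memory
    # instruction; a mismatch (e.g. a plain data store overwriting a code
    # pointer) would trap instead of raising.
    if (ptr & TAG_MASK) >> TAG_SHIFT != expected_tag:
        raise MemoryError("pointer integrity violation")
    return ptr & ADDR_MASK

p = tag_pointer(0x7FFF_1234_5678, TAG_CODE_PTR)
print(hex(check_and_strip(p, TAG_CODE_PTR)))   # ok: 0x7fff12345678
```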
540

Probabilistic Determination of Failure Load Capacity Variations for Lattice Type Structures Based on Yield Strength Variations including Nonlinear Post-Buckling Member Performance

Bathon, Leander Anton 01 January 1992 (has links)
In the attempt to achieve the optimum in analysis and design, the global technological knowledge base grows more and more. Engineers all over the world continuously modify and improve existing analysis methods and design procedures to perform the same task more efficiently and with better results. In the field of complex structural analysis, many researchers pursue this challenging task. The complexity of a lattice type structure is caused by numerous parameters: the nonlinear member performance of the material, the statistical variation of member load capacities, the highly indeterminate structural composition, etc. In order to achieve a simulation approach which represents the real world problem more accurately, it is necessary to develop technologies which include these parameters in the analysis. One of the new technologies is the first-order nonlinear analysis of lattice type structures including the after-failure response of individual members. Such an analysis is able to predict the failure behavior of a structural system under ultimate loads more accurately than the traditionally used linear elastic analysis or a classical first-order nonlinear analysis. It is an analysis procedure which can more accurately evaluate the limit-state of a structural system. Probability Based Analysis (PBA) is another new technology. It provides the user with a tool to analyze structural systems based on statistical variations in member capacities. Current analysis techniques have shown that structural failure is sensitive to member capacity. The combination of probability based analysis and limit-state analysis gives the engineer the capability to establish a failure load distribution based on the limit-state capacity of the structure. This failure load distribution, which gives statistical properties such as mean and variance, improves engineering judgment. The mean shows the expected value, or the mathematical expectation, of the failure load. The variance is a tool to measure the variability of the failure load distribution. For a given load case, a small variance indicates that a few members cause the tower failure over and over again; the design is unbalanced. A large variance indicates that many different members caused the tower failure. The failure load distribution helps in comparing and evaluating actual test results versus analytical results by locating an actual test among the possible failure loads of a tower series. Additionally, the failure load distribution allows the engineer to calculate exclusion limits, which are a measure of the probability of success, or conversely the probability of failure, for a given load condition. The exclusion limit allows engineers to redefine their judgment on the safety and usability of transmission towers. Existing transmission towers can be reanalyzed using this PBA and upgraded based on a given exclusion limit for a chosen tower capacity increase relative to the elastic analysis from which the tower was designed. New transmission towers can be analyzed based on actual yield strength data and their nonlinear member performance. Based on this innovative analysis, the engineer is able to improve tower design by using a tool which represents the real world behavior of steel transmission towers more accurately. Consequently, it will improve structural safety and reduce cost.
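The PBA workflow described above, deriving a failure load distribution with a mean, a variance, and exclusion limits from statistical variations in member capacity, can be sketched as a simple Monte Carlo loop. The sketch below is a deliberate simplification with invented numbers and a weakest-member limit state, whereas the thesis evaluates the full nonlinear post-buckling limit-state behavior of the lattice.

```python
# Toy Monte Carlo sketch of Probability Based Analysis: sample member yield
# strengths from their statistical variation, compute a system failure load
# per sample, and summarize the resulting failure load distribution.
import numpy as np

rng = np.random.default_rng(42)
n_members, n_trials = 40, 10_000

mean_yield = rng.uniform(200.0, 400.0, n_members)   # invented member capacities
cov = 0.08                                          # assumed 8% coefficient of variation

samples = rng.normal(mean_yield, cov * mean_yield, (n_trials, n_members))
failure_load = samples.min(axis=1)   # toy limit state: weakest member governs

mean = failure_load.mean()
var = failure_load.var(ddof=1)
# Exclusion limit: load below which only, say, 5% of simulated towers fail.
p5 = np.percentile(failure_load, 5)

print(f"mean failure load = {mean:.1f}, variance = {var:.1f}, "
      f"5% exclusion limit = {p5:.1f}")
```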
