  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A hybrid method for load, stress and fatigue analysis of drill string screw connectors

Bahai, Hamid R. S. January 1993 (has links)
No description available.
2

Tribological considerations of threaded fastener friction and the importance of lubrication

Dyson, C.J., Hopkins, W.A., Aljeran, D.A., Fox, M.F., Priest, Martin 10 January 2024 (has links)
The torque-tension relationship of threaded fasteners affects almost all engineering disciplines. Tribological processes at fastener interfaces manifest as the system's friction coefficient. Lubrication-related influences are usually described empirically using K or μ. The drive towards lightweight fastener materials in engineering systems and lubricants with reduced environmental impact is challenging existing knowledge and industrial practice in a range of applications, many safety critical. More comprehensive understanding is needed to achieve repeatable friction during assembly and re-assembly, resistance to loosening and fretting during operation, and effective anti-seize for disassembly with a growing range of materials and lubricants. The lubricants considered showed three predominant lubrication mechanisms: plastic deformation of metal powders; burnishing/alignment of molybdenum disulphide, MoS2; and adhering/embedding of non-metal particles. Multivariate analysis identified key sensitivities for these mechanisms. Assembly generated changes at fastener surfaces and in the lubricating materials. Re-assembly exhibited significant reductions in friction. / The full-text of this article will be released for public view at the end of the publisher embargo on 07 Dec 2024.
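The empirical description mentioned in this abstract is commonly expressed through the short-form torque equation T = K·F·d, where T is tightening torque, K is the nut factor, F is bolt preload, and d is the nominal diameter. The sketch below is a minimal illustration of that standard relationship (the fastener size, torque, and K values are assumptions for the example, not data from the article); it shows how a better-lubricated joint (lower K) delivers more preload at the same applied torque.

```python
def preload_from_torque(torque_nm: float, nut_factor: float, diameter_m: float) -> float:
    """Short-form torque-tension relationship T = K * F * d, solved for preload F."""
    return torque_nm / (nut_factor * diameter_m)

# Hypothetical example: an M10 fastener (d = 10 mm) tightened to 45 N·m.
# K ~ 0.20 is a common assumption for lightly oiled steel threads;
# a well-lubricated (e.g., MoS2-coated) joint might be closer to K ~ 0.12.
for k in (0.20, 0.12):
    preload = preload_from_torque(45.0, k, 0.010)
    print(f"K = {k:.2f} -> preload ~ {preload / 1000:.1f} kN")
```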
3

Implementation of forth with floating point capabilities on an 8085 system

Graham, Douglas R. January 1985 (has links)
No description available.
4

Static Execution Time Analysis of Parallel Systems

Gustavsson, Andreas January 2016 (has links)
The past trend of increasing processor throughput by increasing the clock frequency and the instruction level parallelism is no longer feasible due to extensive power consumption and heat dissipation. Therefore, the current trend in computer hardware design is to expose explicit parallelism to the software level. This is most often done using multiple, relatively slow and simple, processing cores situated on a single processor chip. The cores usually share some resources on the chip, such as some level of cache memory (which means that they also share the interconnect, e.g., a bus, to that memory and also all higher levels of memory). To fully exploit this type of parallel processor chip, programs running on it will have to be concurrent. Since multi-core processors are the new standard, even embedded real-time systems will (and some already do) incorporate this kind of processor and concurrent code. A real-time system is any system whose correctness is dependent both on its functional and temporal behavior. For some real-time systems, a failure to meet the temporal requirements can have catastrophic consequences. Therefore, it is crucial that methods to derive safe estimations on the timing properties of parallel computer systems are developed, if at all possible. This thesis presents a method to derive safe (lower and upper) bounds on the execution time of a given parallel system, thus showing that such methods must exist. The interface to the method is a small concurrent programming language, based on communicating and synchronizing threads, that is formally (syntactically and semantically) defined in the thesis. The method is based on abstract execution, which is itself based on abstract interpretation techniques that have been commonly used within the field of timing analysis of single-core computer systems, to derive safe timing bounds in an efficient (although, over-approximative) way. The thesis also proves the soundness of the presented method (i.e., that the estimated timing bounds are indeed safe) and evaluates a prototype implementation of it. / Worst-Case Execution Time Analysis of Parallel Systems / RALF3 - Software for Embedded High Performance Architectures
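As a rough illustration of the bounding idea described in this abstract (a sketch only, with assumed per-statement cycle costs; it is not the thesis's formal abstract-execution analysis), safe lower and upper execution-time bounds can be propagated as intervals: sequential composition adds bounds, and a conditional widens the result to cover both branches.

```python
# Interval-based timing-bound sketch: hypothetical cycle costs,
# not the analysis defined in the thesis.

def seq(a, b):
    """Bounds of two program parts executed in sequence."""
    return (a[0] + b[0], a[1] + b[1])

def join(a, b):
    """Bounds after a conditional: the union of the two branch bounds."""
    return (min(a[0], b[0]), max(a[1], b[1]))

# Assumed (lo, hi) cycle costs for individual statements.
LOAD, ADD, STORE, BRANCH = (2, 10), (1, 1), (2, 12), (1, 3)

then_branch = seq(LOAD, ADD)                                # (3, 11)
else_branch = seq(seq(LOAD, ADD), ADD)                      # (4, 12)
conditional = seq(BRANCH, join(then_branch, else_branch))   # (4, 15)
program = seq(conditional, STORE)                           # (6, 27)

print("safe execution-time bounds (cycles):", program)
```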
5

Features of a Multi-Threaded Memory Allocator

Wasik, Ayelet January 2008 (has links)
Multi-processor computers are becoming increasingly popular and are important for improving application performance. Providing high-performance memory-management is important for multi-threaded programs. This thesis looks at dynamic memory allocation in concurrent C and C++ programs. The challenges facing the design of any memory allocator include minimizing fragmentation, and promoting good locality. A multi-threaded memory-allocator is also concerned with minimizing contention, providing mutual exclusion, avoiding false-sharing, and preventing heap-blowup (a form of fragmentation). Several potential features are identified in existing multi-threaded memory-allocators. These features include per-thread heaps with a global heap, object ownership, object containers, thread-local free-list buffers, remote free-lists, allocation buffers, and lock-free operations. When used in different combinations, these features can solve most of the challenges facing a multi-threaded memory-allocator. Through the use of a test suite composed of both single and multi-threaded benchmark programs, several existing memory allocators and a set of new allocators are compared. It is determined that different features address different multi-threaded issues in the memory allocator with respect to performance, scaling, and fragmentation. Finally, recommendations are made for the design of a general-purpose memory-allocator.
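One of the features surveyed here, per-thread heaps backed by a global heap, can be sketched conceptually as below (a toy Python model, purely illustrative and not drawn from the thesis; a real allocator would be written in C/C++): each thread allocates from its own free list without locking and only takes the global lock on refill or flush, which reduces contention, and bounding the local list guards against heap blow-up.

```python
import threading

class PerThreadHeapModel:
    """Toy model of per-thread heaps with a shared global fallback heap.

    Blocks are integers standing in for addresses; the point is the
    locking structure, not real memory management.
    """

    def __init__(self, total_blocks: int, refill_batch: int = 8):
        self._global_free = list(range(total_blocks))   # shared global heap
        self._global_lock = threading.Lock()            # taken only on refill/flush
        self._local = threading.local()                 # per-thread free list
        self._batch = refill_batch

    def _local_free(self):
        if not hasattr(self._local, "free"):
            self._local.free = []
        return self._local.free

    def allocate(self) -> int:
        free = self._local_free()
        if not free:                                    # fast path missed: refill from the global heap
            with self._global_lock:
                grabbed = self._global_free[:self._batch]
                del self._global_free[:self._batch]
            free.extend(grabbed)
        if not free:
            raise MemoryError("global heap exhausted")
        return free.pop()                               # lock-free fast path

    def deallocate(self, block: int) -> None:
        free = self._local_free()
        free.append(block)
        if len(free) > 2 * self._batch:                 # bound the local list to avoid heap blow-up
            with self._global_lock:
                self._global_free.extend(free[self._batch:])
            del free[self._batch:]

heap = PerThreadHeapModel(total_blocks=64)
blocks = [heap.allocate() for _ in range(4)]
for b in blocks:
    heap.deallocate(b)
```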
7

Machine Assisted Reasoning for Multi-Threaded Java Bytecode / Datorstödda resonemang om multi-trådad Java-bytekod

Lagerkvist, Mikael Zayenz January 2005 (has links)
In this thesis an operational semantics for a subset of the Java Virtual Machine (JVM) is developed and presented. The subset contains standard operations such as control flow, computation, and memory management. In addition, the subset contains a treatment of parallel threads of execution.

The operational semantics are embedded into a µ-calculus based proof assistant, called the VeriCode Proof Tool (VCPT). VCPT has been developed at the Swedish Institute of Computer Science (SICS), and has powerful features for proving inductive assertions.

Some examples of proving properties of programs using the embedding are presented.
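To give a feel for what a small-step operational semantics over JVM-like configurations looks like, the sketch below interprets two stack-machine instructions as transitions on a thread state (the instruction set, state shape, and costs are assumptions for illustration; the thesis's formalized subset is far richer and also covers memory management and parallel threads).

```python
# Illustrative small-step sketch of a stack-machine semantics; hypothetical,
# not the formal JVM subset embedded in VCPT.

from dataclasses import dataclass, field

@dataclass
class ThreadState:
    pc: int = 0
    stack: list = field(default_factory=list)

def step(code: list, s: ThreadState) -> ThreadState:
    """One transition of a single thread: configuration -> configuration."""
    op, *args = code[s.pc]
    if op == "iconst":                 # push a constant onto the operand stack
        return ThreadState(s.pc + 1, s.stack + [args[0]])
    if op == "iadd":                   # pop two operands, push their sum
        *rest, a, b = s.stack
        return ThreadState(s.pc + 1, rest + [a + b])
    raise ValueError(f"unhandled opcode {op!r}")

code = [("iconst", 2), ("iconst", 3), ("iadd",)]
state = ThreadState()
for _ in code:
    state = step(code, state)
print(state.stack)   # [5]
```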
8

TENA Software Decommutation System

Wigent, Mark A., Mazzario, Andrea M. 10 1900 (has links)
The Test and Training Enabling Architecture (TENA) is implemented within the TENA Software Decommutation System (TSDS) in order to bring TENA as close as possible to the sensor interface. Key attributes of TSDS include:
• TSDS is a software-based approach to telemetry stream decommutation implemented within Java. This offers technical advantages such as platform independence and portability.
• TSDS uses auto code generation technologies to further reduce the effort associated with updating decommutation systems to support new telemetry stream definitions. Users of TSDS within the range are not required to have detailed knowledge of proprietary protocols, nor are they required to have an understanding of how to implement decommutation within software. The use of code generation in software decommutation offers potential cost savings throughout the entire T&E community.
• TSDS offers a native TENA interface so that telemetry data can be published directly into TENA object models.
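The table-driven idea behind generated decommutators can be illustrated roughly as follows (a Python sketch with a made-up frame layout; it is not TSDS code and does not use the TENA API): the stream definition is data, and the decommutation logic is derived from it rather than hand-written per telemetry format.

```python
import struct

# Hypothetical telemetry frame definition: (field name, struct format code).
# In a generated system this table would be produced from the stream definition.
FRAME_LAYOUT = [
    ("frame_counter", "H"),   # 16-bit unsigned counter
    ("airspeed_mps",  "f"),   # 32-bit float
    ("altitude_m",    "f"),   # 32-bit float
    ("status_flags",  "B"),   # 8-bit flag byte
]

_FMT = ">" + "".join(code for _, code in FRAME_LAYOUT)  # big-endian frame format

def decommutate(frame: bytes) -> dict:
    """Unpack one telemetry frame into named fields according to FRAME_LAYOUT."""
    values = struct.unpack(_FMT, frame)
    return {name: value for (name, _), value in zip(FRAME_LAYOUT, values)}

# Example: pack a frame and decommutate it back into named measurements.
raw = struct.pack(_FMT, 42, 61.5, 1250.0, 0b0001)
print(decommutate(raw))
```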
9

Static Timing Analysis of Parallel Systems Using Abstract Execution

Gustavsson, Andreas January 2014 (has links)
The Power Wall has stopped the past trend of increasing processor throughput by increasing the clock frequency and the instruction level parallelism. Therefore, the current trend in computer hardware design is to expose explicit parallelism to the software level. This is most often done using multiple processing cores situated on a single processor chip. The cores usually share some resources on the chip, such as some level of cache memory (which means that they also share the interconnect, e.g. a bus, to that memory and also all higher levels of memory), and to fully exploit this type of parallel processor chip, programs running on it will have to be concurrent. Since multi-core processors are the new standard, even embedded real-time systems will (and some already do) incorporate this kind of processor and concurrent code. A real-time system is any system whose correctness is dependent both on its functional and temporal output. For some real-time systems, a failure to meet the temporal requirements can have catastrophic consequences. Therefore, it is of utmost importance that methods to analyze and derive safe estimations on the timing properties of parallel computer systems are developed. This thesis presents an analysis that derives safe (lower and upper) bounds on the execution time of a given parallel system. The interface to the analysis is a small concurrent programming language, based on communicating and synchronizing threads, that is formally (syntactically and semantically) defined in the thesis. The analysis is based on abstract execution, which is itself based on abstract interpretation techniques that have been commonly used within the field of timing analysis of single-core computer systems, to derive safe timing bounds in an efficient (although, over-approximative) way. Basically, abstract execution simulates the execution of several real executions of the analyzed program in one go. The thesis also proves the soundness of the presented analysis (i.e. that the estimated timing bounds are indeed safe) and includes some examples, each showing different features or characteristics of the analysis. / Worst-Case Execution Time Analysis of Parallel Systems / RALF3 - Software for Embedded High Performance Architectures
10

PERFORMANCE-AWARE RESOURCE MANAGEMENT OF MULTI-THREADED APPLICATIONS FOR MANY-CORE SYSTEMS

Olsen, Daniel 01 August 2016 (has links)
Future integrated systems will contain billions of transistors, composing tens to hundreds of IP cores. Modern computing platforms take advantage of this manufacturing technology advancement and are moving from Multi-Processor Systems-on-Chip (MPSoC) towards Many-Core architectures employing high numbers of processing cores. These hardware changes are also driven by application changes. The main characteristic of modern applications is the increased parallelism and the need for data storage and transfer. Resource management is a key technology for the successful use of such many-core platforms. The thread to core mapping can deal with the run-time dynamics of applications and platforms. Thus, efficient resource management enables the efficient usage of the platform resources, maximizing platform utilization while minimizing interconnection network communication load and energy budget. In this thesis, we present a performance-aware resource management scheme for many-core architectures. In particular, the developed framework takes parallel applications as input and performs an application profiling. Based on that profile information, a thread to core mapping algorithm finds (i) the appropriate number of threads that this application should have in order to maximize the utilization of the system and (ii) the best mapping for maximizing the performance of the application under the selected number of threads. In order to validate the proposed algorithm, we used and extended Sniper, a state-of-the-art many-core simulator. Lastly, we developed a discrete event simulator on top of the Sniper simulator in order to test and validate multiple scenarios faster. The results show that the proposed methodology achieves on average a gain of 23% compared to a previously presented performance-oriented mapping, and each application completes its workload 18% faster on average.
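The two decisions described above can be caricatured as below (a simplified sketch; the profile numbers, efficiency threshold, mesh size, and greedy heuristic are assumptions for illustration, not the thesis's algorithm): first pick a thread count from profiled speedups, then greedily place the most heavily communicating thread pairs on nearby cores of a mesh to reduce interconnect load.

```python
# Illustrative sketch only: hypothetical profile data and a greedy heuristic,
# not the mapping framework developed in the thesis.

# (i) Choose the thread count from profiled speedups per candidate count.
profiled_speedup = {1: 1.0, 2: 1.9, 4: 3.4, 8: 4.1, 16: 4.3}

def pick_thread_count(profile, min_efficiency=0.8):
    """Pick the largest thread count whose parallel efficiency stays acceptable."""
    ok = [n for n, s in profile.items() if s / n >= min_efficiency]
    return max(ok) if ok else 1

# (ii) Map threads to cores: place chatty thread pairs on adjacent mesh cores.
def greedy_map(n_threads, comm, mesh_size=4):
    cores = [(x, y) for x in range(mesh_size) for y in range(mesh_size)]
    placement = {}
    for (a, b), _volume in sorted(comm.items(), key=lambda kv: -kv[1]):
        for t in (a, b):
            if t not in placement and len(placement) < n_threads:
                # Put the thread on the free core closest to its partner (if placed).
                anchor = placement.get(b if t == a else a, (0, 0))
                free = [c for c in cores if c not in placement.values()]
                placement[t] = min(
                    free,
                    key=lambda c: abs(c[0] - anchor[0]) + abs(c[1] - anchor[1]),
                )
    return placement

threads = pick_thread_count(profiled_speedup)            # -> 4 with this toy profile
comm_volume = {(0, 1): 120, (2, 3): 80, (1, 2): 10}      # assumed pairwise traffic
print(threads, greedy_map(threads, comm_volume))
```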
