481

Implementation Strategies for Time Constraint Monitoring

Gustavsson, Sanny January 1999 (has links)
An event monitor is a part of a real-time system that can be used to check whether the system follows the specifications posed on its behavior. This dissertation covers an approach to event monitoring in which such specifications (expressed as time constraints) are represented by graphs. Little previous work has been done on designing and implementing constraint-graph-based event monitors. In this work, we focus on presenting an extensible design for such an event monitor. We also evaluate different data structure types (linked lists, dynamic arrays, and static arrays) that can be used to represent the constraint graphs internally. This is done by creating an event monitor implementation and conducting a number of benchmarks in which the time used by the monitor is measured. The result is presented in the form of a design specification and a summary of the benchmark results. Dynamic arrays are found to be the most efficient in general, but the advantages and disadvantages of all three data structure types are discussed.
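The data-structure trade-off the benchmarks explore can be illustrated with a minimal sketch. The node/edge layout below is a hypothetical constraint-graph representation using Python lists as the dynamic arrays; it is not the design from the dissertation:

```python
# Hypothetical sketch: a time-constraint graph stored in dynamic arrays
# (Python lists). An edge (dst, lo, hi) means event dst must occur between
# lo and hi time units after event src.

class ConstraintGraph:
    def __init__(self):
        self.nodes = []   # dynamic array of event names
        self.edges = []   # parallel dynamic array: edges[i] holds outgoing constraints

    def add_event(self, name):
        self.nodes.append(name)
        self.edges.append([])
        return len(self.nodes) - 1

    def add_constraint(self, src, dst, min_delay, max_delay):
        self.edges[src].append((dst, min_delay, max_delay))

    def check(self, src, dst, elapsed):
        # True if the observed delay satisfies every src -> dst constraint.
        return all(lo <= elapsed <= hi
                   for d, lo, hi in self.edges[src] if d == dst)

g = ConstraintGraph()
a = g.add_event("start")
b = g.add_event("done")
g.add_constraint(a, b, 5, 20)   # "done" must follow "start" within 5..20 units
```

Appending to a Python list amortizes reallocation, which is the usual reason dynamic arrays beat linked lists for dense traversal while staying more flexible than static arrays.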
482

An evaluation of algorithms for real-time strategic placement of sensors

Tiberg, Jesper January 2004 (has links)
In this work, an investigation is performed into whether the algorithms Simultaneous Perturbation Stochastic Approximation (SPSA) and Virtual Force Algorithm (VFA) are suitable for real-time strategic placement of sensors in a dynamic environment. These algorithms are evaluated and compared to Simulated Annealing (SA), which has previously been used in similar applications. For the tests, a computer-based model of the sensors and the environment in which they are used is implemented. The model handles sensors, moving objects, specifications for the area the sensors are supposed to monitor, and all interaction between components within the model. It was the belief of the authors that SPSA and VFA are suited to this kind of problem, and that they have advantages over SA in complex scenarios. The results show this to be true, although SA seems to perform better when a smaller number of sensors is to be placed.
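SPSA's appeal for this problem is that it estimates a gradient from only two cost evaluations per iteration, regardless of how many sensor coordinates are optimized. The sketch below shows the generic SPSA update on a toy placement cost (squared distance of one sensor to a coverage target); the gains, decay exponents, and cost function are illustrative assumptions, not the thesis setup:

```python
import random

# Generic SPSA sketch (not the thesis implementation): standard gain decay,
# one random +-1 perturbation direction per coordinate.
def spsa_minimize(cost, x, iters=200, a=0.2, c=0.1, seed=1):
    rng = random.Random(seed)
    x = list(x)
    for k in range(iters):
        ak = a / (k + 1) ** 0.602          # step-size gain decay
        ck = c / (k + 1) ** 0.101          # perturbation-size decay
        delta = [rng.choice((-1.0, 1.0)) for _ in x]
        plus  = [xi + ck * di for xi, di in zip(x, delta)]
        minus = [xi - ck * di for xi, di in zip(x, delta)]
        diff = cost(plus) - cost(minus)    # two evaluations, any dimension
        x = [xi - ak * diff / (2.0 * ck * di) for xi, di in zip(x, delta)]
    return x

target = (3.0, 4.0)                         # hypothetical coverage target
cost = lambda p: (p[0] - target[0]) ** 2 + (p[1] - target[1]) ** 2
best = spsa_minimize(cost, [0.0, 0.0])
```

In a real placement scenario the cost would score coverage over the monitored area for all sensors at once, which is where the two-evaluations-per-step property pays off.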
483

Fiber-Optic Interconnections in High-Performance Real-Time Computer Systems

Jonsson, Magnus January 1997 (has links)
Future parallel computer systems for embedded real-time applications, where each node in itself can be a parallel computer, are predicted to have very high bandwidth demands on the interconnection network. Other important properties are time-deterministic latency and guarantees to meet deadlines. In this thesis, a fiber-optic passive optical star network with a medium access protocol for packet-switched communication in distributed real-time systems is proposed. By using WDM (Wavelength Division Multiplexing), multiple channels, each with a capacity of several Gb/s, are obtained. A number of protocols for WDM star networks have recently been proposed. However, the area of real-time protocols for these networks is quite unexplored. The protocol proposed in this thesis is based on TDMA (Time Division Multiple Access) and uses a new distributed slot-allocation algorithm with real-time properties. Services for both guarantee-seeking messages and best-effort messages are supported for single-destination, multicast, and broadcast transmission. Slot reserving can be used to increase the time-deterministic bandwidth, while still having an efficient bandwidth utilization due to a simple slot release method. By connecting several clusters of the proposed WDM star network by a backbone star, thus forming a star-of-stars network, we get a modular and scalable high-bandwidth network. The deterministic properties of the network are theoretically analyzed for both intra-cluster and inter-cluster communication, and computer simulations of intra-cluster communication are reported. Also, an overview of high-performance fiber-optic communication systems is presented.
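The interplay between reserved (guarantee-seeking) slots and released slots carrying best-effort traffic can be sketched for a single TDMA cycle on one WDM channel. The cycle length, the slot-release rule, and the node names below are simplifying assumptions, not the protocol defined in the thesis:

```python
# Hedged sketch of one TDMA cycle on a single WDM channel: guaranteed slots
# first, then unreserved/released slots filled with best-effort messages.

CYCLE_SLOTS = 8

def build_cycle(reservations, best_effort):
    """reservations: {node: slots guaranteed per cycle};
    best_effort: list of (node, msg) waiting for free slots."""
    cycle = []
    for node, count in sorted(reservations.items()):
        cycle.extend([(node, "guaranteed")] * count)
    free = CYCLE_SLOTS - len(cycle)
    # Simple slot release: any unreserved slot carries best-effort traffic.
    for node, _msg in best_effort[:free]:
        cycle.append((node, "best-effort"))
    cycle.extend([(None, "idle")] * (CYCLE_SLOTS - len(cycle)))
    return cycle

cycle = build_cycle({"A": 2, "B": 3},
                    [("C", "m1"), ("C", "m2"), ("A", "m3"), ("B", "m4")])
```

The guaranteed slots give deterministic bandwidth per cycle, while the fill step models why a simple release method keeps utilization high.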
484

Probabilistic Analysis of Low-Criticality Execution

Küttler, Martin, Roitzsch, Michael, Hamann, Claude-Joachim, Völp, Marcus 16 March 2018 (has links) (PDF)
The mixed-criticality toolbox promises system architects a powerful framework for consolidating real-time tasks with different safety properties on a single computing platform. Thanks to the research efforts in the mixed-criticality field, the guarantees provided to the highest criticality level are well understood. However, lower-criticality job execution depends on the condition that all high-criticality jobs complete within their more optimistic low-criticality execution time bounds. Otherwise, no guarantees are made. In this paper, we add to the mixed-criticality toolbox by providing a probabilistic analysis method for low-criticality tasks. While deterministic models reduce task behavior to constant numbers, probabilistic analysis captures varying runtime behavior. We introduce a novel algorithmic approach for probabilistic timing analysis, which we call symbolic scheduling. For restricted task sets, we also present an analytical solution. We use this method to calculate per-job success probabilities for low-criticality tasks, in order to quantify how low-criticality tasks behave when high-criticality jobs overrun their optimistic low-criticality reservations.
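The core probabilistic idea can be made concrete with discrete execution-time distributions: convolving independent demands yields a response-time distribution, and the mass below the deadline is the per-job success probability. The numbers below are invented, and this is a textbook-style illustration rather than the paper's symbolic-scheduling algorithm:

```python
# Illustrative probabilistic timing sketch: discrete execution-time
# distributions as {time: probability} maps.

def convolve(d1, d2):
    """Distribution of the sum of two independent demands."""
    out = {}
    for t1, p1 in d1.items():
        for t2, p2 in d2.items():
            out[t1 + t2] = out.get(t1 + t2, 0.0) + p1 * p2
    return out

def success_probability(dist, deadline):
    """P(total demand <= deadline)."""
    return sum(p for t, p in dist.items() if t <= deadline)

hi_job = {2: 0.9, 5: 0.1}   # high-criticality job: usually 2 units, rarely 5
lo_job = {3: 1.0}           # low-criticality job always needs 3 units
total  = convolve(hi_job, lo_job)
p_ok   = success_probability(total, 6)   # 6 units of budget before lo's deadline
```

Here the low-criticality job succeeds exactly when the high-criticality job stays within its optimistic bound, so the success probability equals 0.9.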
485

Time Predictability of GPU Kernel on an HSA Compliant Platform

Tsog, Nandinbaatar, Larsson, Marcus January 2016 (has links)
During recent years, the importance of utilizing more computational power in smaller computer systems has increased. To utilize more computational power in smaller packages, combining more than one type of processor unit has become more popular in the industry. By combining processor types, one achieves more power efficiency as well as more computational power in a smaller area. However, heterogeneous programming has proved to be difficult, and this makes software developers reluctant to learn heterogeneous programming languages. This has motivated the HSA Foundation to develop a new hardware architecture, called Heterogeneous System Architecture (HSA). This architecture brings features that make heterogeneous programming development more accessible, efficient, and easier for software developers. The purpose of this thesis is to investigate this new architecture, and to learn and observe the timing characteristics of a task running a parallel region (a kernel) on a GPU in an HSA-compliant system. To gain more knowledge, four test cases have been developed to collect time data and to analyze the time of the code executed on the GPU: a comparison between CPU and GPU, timing predictability of parallel periodic tasks, schedulability in HSA, and memory copy. Based on the results of the analysis, it is concluded that HSA has the potential to be very attractive for developing heterogeneous programs due to its more streamlined infrastructure. It is easier to adapt, requires less knowledge of the underlying hardware, and software developers can use their preferred programming languages instead of learning a new programming framework such as OpenCL. However, since the architecture is new, there are bugs, and some HSA features are yet to be incorporated into the drivers. Performance-wise, HSA is faster than legacy methods, but it lacks consistent time predictability, which is important for real-time systems.
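The kind of timing measurement the test cases rely on, repeatedly running a kernel and recording best-case, worst-case, and jitter, can be sketched without an HSA runtime. The sketch below uses a CPU workload as a stand-in for a GPU kernel; the workload and run count are arbitrary:

```python
import time

# Minimal timing-predictability measurement in the spirit of the test cases;
# kernel_standin is a CPU placeholder for a real GPU kernel dispatch.

def kernel_standin(n=20000):
    return sum(i * i for i in range(n))

def measure(task, runs=50):
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        task()
        samples.append(time.perf_counter() - t0)
    best, worst = min(samples), max(samples)
    return {"best": best, "worst": worst, "jitter": worst - best}

stats = measure(kernel_standin)
```

For real-time analysis the interesting quantity is the jitter: a platform can be fast on average yet unusable if worst-case times drift far from best-case times, which is the consistency issue the thesis observes on HSA.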
486

Development of a Botrytis specific immunosensor: towards using PCR species identification

Binder, Michael 01 1900 (has links)
Botrytis species affect over 300 host plants in all climate areas of the world, at both pre- and post-harvest stages, leading to significant losses in agricultural produce. Therefore, the development of a rapid, sensitive and reliable method to assess the pathogen load of infected crops can help to prescribe an effective curing regime. Growers would then have the ability to predict and manage the full storage potential of their crops and thus provide effective disease control and reduce post-harvest losses. A highly sensitive electrochemical immunosensor based on a screen-printed gold electrode (SPGE) with an onboard carbon counter and silver/silver chloride (Ag/AgCl) pseudo-reference electrode was developed in this work for the detection and quantification of Botrytis species. The sensor utilised a direct sandwich enzyme-linked immunosorbent assay (ELISA) format with a monoclonal antibody against Botrytis immobilised on the gold working electrode. Two immobilisation strategies were investigated for the capture antibody: adsorption, and covalent immobilisation after self-assembled monolayer formation with 3,3′-dithiodipropionic acid (DTDPA). A polyclonal antibody conjugated to the electroactive enzyme horseradish peroxidase (HRP) was then applied for signal generation. Electrochemical measurements were conducted using 3,3′,5,5′-tetramethylbenzidine dihydrochloride / hydrogen peroxide (TMB/H2O2) as the enzyme substrate system at a potential of -200 mV. The developed biosensor was capable of detecting latent Botrytis infections 24 h post inoculation, with a linear range from 150 to 0.05 μg fungal mycelium ml-1 and a limit of detection (LOD) as low as 16 ng ml-1 for covalent immobilisation and 58 ng ml-1 for adsorption, respectively. Benchmarked against commercially available Botrytis ELISA kits, the optimised immuno-electrochemical biosensor showed a strong correlation for the quantified samples (R2=0.998) ... [cont.].
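One common way a limit of detection is derived from a calibration line is LOD = 3.3·σ(blank)/slope. The worked sketch below uses that generic formula with invented, noise-free calibration data; it is not the thesis's data or its exact LOD procedure:

```python
# Hedged numeric sketch: least-squares calibration slope, then an ICH-style
# LOD estimate. Concentrations, signals, and sigma_blank are made up.

def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

conc   = [0.05, 0.5, 5.0, 50.0, 150.0]        # ug mycelium per ml
signal = [2.0 * c + 0.1 for c in conc]        # idealised sensor responses

slope, intercept = linear_fit(conc, signal)
sigma_blank = 0.05                            # assumed s.d. of blank readings
lod_ug_ml = 3.3 * sigma_blank / slope
lod_ng_ml = lod_ug_ml * 1000.0
```

With these invented numbers the estimate lands at 82.5 ng ml-1, the same order of magnitude as the reported 16–58 ng ml-1, which is only meant to show how the figure of merit is computed.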
487

Implementing non-photorealistic rendering enhancements with real-time performance

Winnemöller, Holger 09 May 2013 (has links)
We describe quality and performance enhancements, which work in real-time, to all well-known Non-photorealistic (NPR) rendering styles for use in an interactive context. These include comic rendering, sketch rendering, hatching and painterly rendering, but we also attempt and justify a widening of the established definition of what is considered NPR. In the individual chapters, we identify typical stylistic elements of the different NPR styles. We list problems that need to be solved in order to implement the various renderers. Standard solutions available in the literature are introduced and in all cases extended and optimised. In particular, we extend the lighting model of the comic renderer to include a specular component and introduce multiple inter-related but independent geometric approximations which greatly improve rendering performance. We implement two completely different solutions to random perturbation sketching, solve temporal coherence issues for coal sketching and find an unexpected use for 3D textures to implement hatch-shading. Textured brushes of painterly rendering are extended by properties such as stroke-direction and texture, motion, paint capacity, opacity and emission, making them more flexible and versatile. Brushes are also provided with a minimal amount of intelligence, so that they can help maximise screen coverage. We furthermore devise a completely new NPR style, which we call super-realistic, and show how sample images can be tweened in real-time to produce an image-based six-degree-of-freedom renderer performing at roughly 450 frames per second. Performance values for our other renderers all lie between 10 and over 400 frames per second on home PC hardware, justifying our real-time claim. A large number of sample screen-shots, illustrations and animations demonstrate the visual fidelity of our rendered images.
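The comic-renderer extension mentioned above, quantised diffuse shading plus a specular component, follows the generic toon-shading pattern sketched here. The band count, shininess, and highlight cutoff are illustrative parameters, and this is a textbook formulation rather than the thesis's shader:

```python
import math

# Generic toon-shading sketch: quantised Lambert diffuse plus a hard-edged
# Blinn-Phong specular highlight, per shading point.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def toon_shade(normal, light, view, bands=4, shininess=32, cutoff=0.9):
    n, l, v = norm(normal), norm(light), norm(view)
    diffuse = max(0.0, dot(n, l))
    level = math.floor(diffuse * bands) / bands          # discrete comic bands
    h = norm([li + vi for li, vi in zip(l, v)])          # half vector
    spec = max(0.0, dot(n, h)) ** shininess
    highlight = 1.0 if spec > cutoff else 0.0            # hard white spot
    return min(1.0, level + highlight)
```

Thresholding the specular term instead of blending it is what produces the flat, hard-edged highlight characteristic of the comic look.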
In essence, we successfully achieve our attempted goals of increasing the creative, expressive and communicative potential of individual NPR styles, increasing performance of most of them, adding original and interesting visual qualities, and exploring new techniques or existing ones in novel ways.
488

A device-free locator using computer vision techniques

Van den Bergh, Frans 20 November 2006 (has links)
Device-free locators allow the user to interact with a system without the burden of being physically in contact with some input device or without being connected to the system with cables. This thesis presents a device-free locator that uses computer vision techniques to recognize and track the user's hand. The system described herein uses a video camera to capture live video images of the user, which are segmented and processed to extract features that can be used to locate the user's hand within the image. Two types of features, namely moment based invariants and Fourier descriptors, are compared experimentally. An important property of both these techniques is that they allow the recognition of hand-shapes regardless of affine transformation, e.g. rotation within the plane or scale changes. A neural network is used to classify the extracted features as belonging to one of several hand signals, which can be used in the locator system as 'button clicks' or mode indicators. The Siltrack system described herein illustrates that the above techniques can be implemented in real-time on standard hardware. / Dissertation (MSc (Computer Science))--University of Pretoria, 2007. / Computer Science / unrestricted
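Fourier descriptors, one of the two feature types compared in the thesis, treat the hand contour as a sequence of complex numbers and take magnitudes of its Fourier coefficients. The sketch below shows the basic construction; the normalisation shown gives translation and scale invariance only and is a simplification of a full affine-invariant scheme:

```python
import cmath

# Contour Fourier-descriptor sketch: DFT of complex boundary points,
# dropping c0 (translation) and dividing by |c1| (scale).

def fourier_descriptors(contour, k=4):
    """contour: list of (x, y) boundary points; returns k-1 magnitude features."""
    pts = [complex(x, y) for x, y in contour]
    n = len(pts)
    coeffs = []
    for u in range(k + 1):
        c = sum(p * cmath.exp(-2j * cmath.pi * u * t / n)
                for t, p in enumerate(pts)) / n
        coeffs.append(c)
    base = abs(coeffs[1])
    # Magnitudes also discard rotation and start-point phase.
    return [abs(c) / base for c in coeffs[2:]]

square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
big    = [(2 * x, 2 * y) for x, y in square]   # same shape, doubled in size
```

Because the descriptors of `square` and `big` come out identical, a classifier trained on them recognises the same hand-shape regardless of its distance from the camera.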
489

A Multi-core Testbed on Desktop Computer for Research on Power/Thermal Aware Resource Management

Dierivot, Ashley 06 June 2014 (has links)
Our goal is to develop a flexible, customizable, and practical multi-core testbed based on an Intel desktop computer that can be used to assist theoretical research on power/thermal-aware resource management in the design of computer systems. By integrating different modules, i.e. thread mapping/scheduling, processor/core frequency and voltage variation, temperature/power measurement, and run-time performance collection, into a systematic and unified framework, our testbed can bridge the gap between theoretical study and practical implementation. The effectiveness of our system was validated using appropriately selected benchmarks. The importance of this research is that it complements current theoretical research by validating theoretical results in practical scenarios, which are closer to those in the real world. In addition, by studying the discrepancies between the results of theoretical study and their applications in the real world, the research also aids in identifying new research problems and directions.
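The "unified framework of pluggable modules" idea can be sketched as a common sampling interface that frequency-scaling and power-measurement modules both implement. Everything below, module names, frequency levels, and the cubic power model, is invented for illustration, not the testbed's actual design:

```python
# Toy sketch of a modular testbed: each module contributes its readings to a
# shared state snapshot through one interface.

class Module:
    def sample(self, state):
        raise NotImplementedError

class FrequencyScaler(Module):
    def __init__(self, freqs_mhz):
        self.freqs = freqs_mhz
        self.current = freqs_mhz[-1]          # start at the highest level

    def set_level(self, i):
        self.current = self.freqs[i]

    def sample(self, state):
        state["freq_mhz"] = self.current

class PowerMeter(Module):
    def sample(self, state):
        # Crude static + cubic dynamic power model, purely illustrative.
        state["power_w"] = 10.0 + 1e-9 * state["freq_mhz"] ** 3

class Testbed:
    def __init__(self, modules):
        self.modules = modules

    def snapshot(self):
        state = {}
        for m in self.modules:
            m.sample(state)
        return state

scaler = FrequencyScaler([800, 1600, 3200])
bed = Testbed([scaler, PowerMeter()])
high = bed.snapshot()       # snapshot at 3200 MHz
scaler.set_level(0)
low = bed.snapshot()        # snapshot at 800 MHz
```

On a real Linux testbed the scaler would write cpufreq sysfs entries and the meter would read hardware counters, but the module boundaries stay the same.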
490

Power and Thermal Aware Scheduling for Real-time Computing Systems

Huang, Huang 09 March 2012 (has links)
Over the past few decades, we have been enjoying tremendous benefits thanks to the revolutionary advancement of computing systems, driven mainly by remarkable semiconductor technology scaling and increasingly complicated processor architectures. However, the exponentially increased transistor density has directly led to exponentially increased power consumption and dramatically elevated system temperatures, which not only adversely impact the system's cost, performance, and reliability, but also increase leakage and thus overall power consumption. Today, power and thermal issues pose enormous challenges and threaten to slow down the continuous evolution of computer technology. Effective power/thermal-aware design techniques are urgently demanded at all design abstraction levels, from the circuit level and the logic level to the architectural level and the system level. In this dissertation, we present our research efforts to employ real-time scheduling techniques to solve resource-constrained power/thermal-aware design-optimization problems. In our research, we developed a set of simple yet accurate system-level models to capture the processor's thermal dynamics as well as the interdependency of leakage power consumption, temperature, and supply voltage. Based on these models, we investigated the fundamental principles of power/thermal-aware scheduling, and developed real-time scheduling techniques targeting a variety of design objectives, including peak temperature minimization, overall energy reduction, and performance maximization. The novelty of this work is that we integrate cutting-edge research on power and thermal behavior at the circuit and architectural levels into a set of accurate yet simplified system-level models, and are able to conduct system-level analysis and design based on these models.
The theoretical study in this work serves as a solid foundation for the guidance of the power/thermal-aware scheduling algorithms development in practical computing systems.
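System-level thermal models of the kind described are often lumped RC models, where temperature follows dT/dt = (P·R − (T − T_amb)) / (R·C). The sketch below integrates that equation with forward Euler; the thermal resistance, capacitance, and power values are invented, and this is a generic one-node model rather than the dissertation's:

```python
# Minimal lumped RC thermal-model sketch: one thermal node, forward Euler.
# Steady state sits at T_amb + P*R; the time constant is R*C seconds.

def simulate_temperature(power_w, t_amb=35.0, r=0.8, c=50.0,
                         dt=0.1, steps=2000):
    """Integrate dT/dt = (P*R - (T - T_amb)) / (R*C); returns final temp in C."""
    t = t_amb
    for _ in range(steps):
        t += (power_w * r - (t - t_amb)) / (r * c) * dt
    return t

steady = simulate_temperature(40.0)   # settles near 35 + 40*0.8 = 67 C
```

A scheduler built on such a model can predict the peak temperature of a candidate schedule cheaply, which is what makes system-level thermal-aware analysis tractable.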
