171

An Experimental Evaluation of the Scalability of Real-Time Scheduling Algorithms on Large-Scale Multicore Platforms

Dellinger, Matthew Aalseth 21 June 2011 (has links)
This thesis studies the problem of experimentally evaluating the scaling behaviors of existing multicore real-time task scheduling algorithms on large-scale multicore platforms. As chip manufacturers rapidly increase the core count of processors, it becomes imperative that multicore real-time scheduling algorithms keep pace. Thus, it must be determined whether existing algorithms can scale to these new high core-count platforms. Significant research exists on the theoretical performance of multicore real-time scheduling algorithms, but the vast majority of this research ignores the effects of scalability. It has been demonstrated that multicore real-time scheduling algorithms are feasible for small core-count systems (e.g. 8 cores or fewer), but thus far the majority of the algorithmic research has never been tested on high core-count systems (e.g. 48 cores or more). We present an experimental analysis of the scalability of 16 multicore real-time scheduling algorithms. These algorithms include global, clustered, and partitioned algorithms. We cover a broad range of algorithms, including deadline-based and utility accrual scheduling algorithms. These algorithms are compared under metrics including schedulability, tardiness, deadline satisfaction ratio, and utility accrual ratio. We consider multicore platforms ranging from 8 to 48 cores. The algorithms are implemented in a real-time Linux kernel we created, called ChronOS. ChronOS is based on the Linux kernel's PREEMPT_RT patch, which provides the underlying operating system kernel with real-time capabilities such as full kernel preemptibility and priority inheritance for kernel locking primitives. ChronOS extends these capabilities with a flexible, scalable real-time scheduling framework. Our study shows that it is possible to implement global fixed-priority, global dynamic-priority, and simple global utility accrual real-time scheduling algorithms that scale to large-scale multicore platforms. 
Interestingly, and in contrast to the conclusions of prior research, our results reveal that some global scheduling algorithms (e.g. G-NP-EDF) are actually scalable to large core counts (e.g. 48). In our implementation, scalability is restricted by lock contention over the global schedule and the cost of inter-processor communication, rather than by the global task queue implementation. We also demonstrate that certain classes of utility accrual algorithms, such as the GUA class, are inherently not scalable. We show that algorithms implemented with scalability as a first-order implementation goal are able to provide real-time guarantees on our 48-core platform. / Master of Science
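The lock-contention finding can be illustrated with a toy global EDF ready queue. The sketch below is illustrative Python (all names invented), not ChronOS code: every core must take the same lock to pick the earliest-deadline task, which is exactly the serialisation point the thesis identifies as the scalability bottleneck.

```python
import heapq
import threading

class GlobalEDFQueue:
    """Toy global EDF ready queue guarded by a single shared lock.

    As core count grows, contention on this one lock (not the heap
    itself) is what limits scalability, mirroring the thesis's finding.
    """
    def __init__(self):
        self._lock = threading.Lock()
        self._heap = []  # entries are (absolute_deadline, task_id)

    def add_task(self, task_id, deadline):
        with self._lock:
            heapq.heappush(self._heap, (deadline, task_id))

    def pick_next(self):
        """Every core serialises here to claim the earliest deadline."""
        with self._lock:
            if not self._heap:
                return None
            return heapq.heappop(self._heap)[1]

q = GlobalEDFQueue()
q.add_task("t1", 30)
q.add_task("t2", 10)
q.add_task("t3", 20)
order = [q.pick_next(), q.pick_next(), q.pick_next()]
```

A partitioned scheduler avoids this by giving each core its own queue and lock, trading global optimality for scalability.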
172

An Experimental Evaluation of Real-Time DVFS Scheduling Algorithms

Saha, Sonal 12 September 2011 (has links)
Dynamic voltage and frequency scaling (DVFS) is an extensively studied energy management technique, which aims to reduce the energy consumption of computing platforms by dynamically scaling the CPU frequency. Real-Time DVFS (RT-DVFS) is a branch of DVFS, which reduces CPU energy consumption through DVFS while at the same time ensuring that task time constraints are satisfied by constructing appropriate real-time task schedules. The literature presents numerous RT-DVFS scheduling algorithms, which employ different techniques to utilize the CPU idle time to scale the frequency. Many of these algorithms have been experimentally studied through simulations, but have not been implemented on real hardware platforms. Though simulation-based experimental studies can provide a first-order understanding, implementation-based studies can reveal actual timeliness and energy consumption behaviours. This is particularly important when it is difficult to devise accurate simulation models of hardware, which is increasingly the case with modern systems. In this thesis, we study the timeliness and energy consumption behaviours of fourteen state-of-the-art RT-DVFS schedulers by implementing and evaluating them on two hardware platforms. The schedulers include CC-EDF, LA-EDF, REUA, DRA and AGR1 among others, and the hardware platforms include an ASUS laptop with an Intel i5 processor and a motherboard with an AMD Zacate processor. We implemented these schedulers in the ChronOS real-time Linux kernel and measured their actual timeliness and energy behaviours under a range of workloads including CPU-intensive, memory-intensive, mutual exclusion lock-intensive, and processor-underloaded and overloaded workloads. Our studies reveal that modelling the CPU power consumption as the cube of CPU frequency can lead to incorrect conclusions. In particular, it ignores the idle-state CPU power consumption, which is orders of magnitude smaller than the active power consumption. 
Consequently, power savings obtained by exclusively optimizing active power consumption (i.e., RT-DVFS) may be offset by completing tasks sooner by running them at the highest frequency and transitioning to the idle state earlier (i.e., no DVFS). Thus, the active power consumption savings of the RT-DVFS techniques that we report are orders of magnitude smaller than their simulation-based savings reported in the literature. / Master of Science
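The offset described above can be seen in a back-of-the-envelope energy model. The power curves and numbers below are invented for illustration, not measurements from the thesis: under the idealised cubic model, slowing down with DVFS looks like a big win, but under a power curve with a large static component, racing to idle at full frequency uses less energy.

```python
def energy(p_active, t_active, p_idle, period):
    """Energy over one period: run for t_active, idle for the rest."""
    return p_active * t_active + p_idle * (period - t_active)

F_MAX, F_HALF = 2e9, 1e9
WORK = 1e9     # CPU cycles of work per 1-second period
P_IDLE = 2.0   # watts; idle-state floor (illustrative figure)

# Idealised cubic model common in simulation studies: P ∝ f^3.
cubic = lambda f: 20.0 * (f / F_MAX) ** 3
e_cubic_dvfs = energy(cubic(F_HALF), WORK / F_HALF, P_IDLE, 1.0)
e_cubic_race = energy(cubic(F_MAX), WORK / F_MAX, P_IDLE, 1.0)

# Rough "measured-like" model: large static part, mild frequency term.
measured = lambda f: 10.0 + 5e-9 * f
e_meas_dvfs = energy(measured(F_HALF), WORK / F_HALF, P_IDLE, 1.0)
e_meas_race = energy(measured(F_MAX), WORK / F_MAX, P_IDLE, 1.0)
```

With the cubic model DVFS at half frequency wins easily; with the static-heavy model, finishing early and idling (race-to-idle) wins, which is the inversion the thesis observed on real hardware.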
173

The Effects of Packet Buffer Size and Packet Priority on Bursty Real-Time Traffic

Winblad von Walter, Ragnar, Sandred, Johan January 2024 (has links)
Networks which use real-time communication have high requirements on latency and packet loss. Improving one aspect may result in worse performance for another, and it can be difficult to prioritize one over the other, as all the requirements need to be met in order for the network to operate as expected. Many studies have investigated reducing the size of packet buffers to improve the latency. However, they have mainly focused on studying TCP traffic, which may not be optimal for real-time traffic, where it could instead be more suitable to use UDP. We have performed an experiment where we compared the performance of real-time traffic over multiple different buffer sizes. We generated traffic using synchronized bursts of packets which were either sample value (SV) or IP packets, as defined by IEC 61850. We measured the packet loss and latency for situations where the traffic was either entirely composed of SV packets, or where it had mixed SV and IP traffic. For the mixed traffic, we also experimented with using different VLAN priorities for the two types of packets. We have determined deadline thresholds that show what size of packet buffer will start causing packets to miss their deadline, and what size will lead every packet in bursts of traffic to miss their deadlines. We also found that increasing the priority of SV packets in mixed traffic can have either a positive or a negative impact on their performance, depending on how highly they are prioritized.
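The deadline-threshold idea can be sketched with a simple queueing model. The frame size, link rate, and deadline below are assumptions for illustration, not the paper's testbed parameters: a synchronized burst drains through the buffer one transmission time at a time, packets beyond the buffer are dropped, and packets queued too deep miss the deadline.

```python
def burst_outcome(burst_size, buffer_size, t_tx, deadline):
    """Model a synchronized burst arriving at an otherwise idle egress port.

    Packets queue behind each other; anything beyond the buffer is
    dropped. Returns (on_time, late, lost). A back-of-the-envelope
    model, not the paper's experimental setup.
    """
    queued = min(burst_size, buffer_size)
    lost = burst_size - queued
    # Packet i waits for i earlier packets plus its own transmission.
    latencies = [(i + 1) * t_tx for i in range(queued)]
    on_time = sum(1 for lat in latencies if lat <= deadline)
    return on_time, queued - on_time, lost

# Assumed: ~10 µs per frame at 100 Mbit/s, 250 µs deadline.
ok, late, lost = burst_outcome(burst_size=40, buffer_size=32,
                               t_tx=10e-6, deadline=250e-6)
```

The model reproduces both thresholds from the abstract: a buffer deep enough to hold packets past the deadline produces late packets instead of drops, while a too-small buffer converts lateness into loss.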
174

Helping job seekers prepare for technical interviews by enabling context-rich interview feedback

Lu, Yi 11 June 2024 (has links)
Technical interviews have become a popular method for recruiters in the tech industry to assess job candidates' proficiency in both soft skills and technical skills as programmers. However, these interviews can be stressful and frustrating for interviewees. One significant cause of the negative experience of technical interviews is the lack of feedback, which makes it difficult for job seekers to progressively improve their performance by participating in technical interviews. Although there are open platforms like Leetcode that allow job seekers to practice their technical proficiency, resources for conducting mock interviews to practice soft skills like communication are limited and costly for interviewees. To address this, we investigated how professional interviewers provide feedback when conducting a mock interview, and the difficulties they face when interviewing job seekers, by running mock interviews between software engineers and job seekers. With the insights from these formative studies, we developed a new system for technical interviews that aims to help interviewers conduct interviews with less cognitive load and provide context-rich feedback. An evaluation study on the usability of our system for conducting technical interviews further revealed unresolved sources of cognitive load for interviewers, underscoring the need for further improvement to facilitate easier interview processes and enable peer-to-peer interview practice. / Master of Science / The technical interview is a common method used by tech companies to evaluate job candidates. During these interviews, candidates are asked to solve algorithm problems and explain their thought processes while coding. By running these interviews, recruiters can assess a job candidate's ability to write code and solve problems in a limited time. At the same time, the requirement for interviewees to talk aloud helps interviewers evaluate their communication and collaboration skills. 
Although technical interviews enable employers to assess job applicants from multiple perspectives, they also expose interviewees to stress and anxiety. Among the many complaints about technical interviews, one significant difficulty of the interview process is the lack of feedback from interviewers. As a result, it is difficult for interviewees to improve progressively by participating in technical interviews repeatedly. Although there are platforms for interviewees to practice code writing, resources like mock interviews with actual interviewers for job seekers to practice communication skills are costly and rare. Our study investigated how professional programmers run mock technical interviews and provide feedback when asked to. The mock interview observations helped us understand the standard procedure and common practices of how practitioners run these interviews. At the same time, we identified potential causes of cognitive load and the difficulties interviewers face in running such interviews. To address these difficulties, we developed a new system that enables interviewers to conduct technical interviews with less cognitive load and provide enriched feedback. After rerunning mock interviews with our system, we noted that while some features in our system helped make the interview process easier, additional sources of cognitive load remained unresolved. Looking into these difficulties, we suggested several directions for future studies to improve our design, enable an easier interview process for interviewers, and support interview rehearsals between job seekers.
175

WINGS CONCEPT: PRESENT AND FUTURE

Harris, Jim, Downing, Bob 10 1900 (has links)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The Western Aeronautical Test Range (WATR) of NASA’s Dryden Flight Research Center (DFRC) is facing a challenge in meeting the technology demands of future flight mission projects. Rapid growth in technology for aircraft has resulted in complexity often surpassing the capabilities of the current WATR real-time processing and display systems. These current legacy systems are based on an architecture that is over a decade old. In response, the WATR has initiated the development of the WATR Integrated Next Generation System (WINGS). The purpose of WINGS is to provide the capability to acquire data from a variety of sources and process that data for subsequent analysis and display to Project Users in the WATR Mission Control Centers (MCCs) in real-time, near real-time and subsequent post-mission analysis. WINGS system architecture will bridge the continuing gap between new research flight test requirements and capability by distributing current system architectures to provide incremental and iterative system upgrades.
176

A generic predictive information system for resource planning and optimisation

Tavakoli, Siamak January 2010 (has links)
The purpose of this research work is to demonstrate the feasibility of creating a quick-response decision platform for middle management in industry. It utilises the strengths of current Supervisory Control and Data Acquisition (SCADA) systems and Discrete Event Simulation and Modelling (DESM), but more importantly creates a leap forward in their theory and practice. The proposed research platform uses real-time data and creates an automatic platform for real-time and predictive system analysis, giving current and ahead-of-time information on the performance of the system in an efficient manner. Data acquisition, as the back-end connection of a data integration system to the shop floor, faces both hardware and software challenges in coping with large-scale real-time data collection. The limited scope of SCADA systems makes them unsuitable candidates for this. The cost, complexity, and efficiency-orientation of proprietary solutions leave space for further challenges. A Flexible Data Input Layer Architecture (FDILA) is proposed to provide a generic data integration platform so that a multitude of data sources can be connected to the data processing unit. The efficiency of the proposed integration architecture lies in decentralising and distributing services between different layers. A novel Sensitivity Analysis (SA) method called EvenTracker is proposed as an effective tool to measure the importance and priority of inputs to the system. The EvenTracker method is introduced to deal with complex systems in real time. The approach takes advantage of an event-based definition of the data involved in the process flow. The underpinning logic behind the EvenTracker SA method is capturing the cause-effect relationships between triggers (input variables) and events (output variables) over a period of time specified by an expert. The approach does not require estimating data distributions of any kind, nor does the performance model require execution beyond real time. 
The proposed EvenTracker sensitivity analysis method has the lowest computational complexity compared with other popular sensitivity analysis methods. For proof of concept, a three-tier data integration system was designed and developed using National Instruments' LabVIEW programming language, Rockwell Automation's Arena simulation and modelling software, and OPC data communication software. A laboratory-based conveyor system with 29 sensors was installed to simulate a typical shop-floor production line. In addition, the EvenTracker SA method was applied to data extracted from 28 sensors of one manufacturing line in a real factory. The experiment found 14% of the input variables to be unimportant for evaluating the model outputs. The method achieved a time-efficiency gain of 52% in the analysis of the filtered system once the unimportant input variables were no longer sampled. Compared with the entropy-based SA technique, the only other method that can be used for real-time purposes, the EvenTracker SA method is quicker, more accurate and less computationally burdensome. Additionally, a theoretical estimation of the computational complexity of SA methods, based on both structural complexity and energy-time analysis, favoured the efficiency of the proposed EvenTracker SA method. Both the laboratory and factory-based experiments demonstrated the flexibility and efficiency of the proposed solution.
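One loose reading of the EvenTracker idea, counting cause-effect co-occurrences of triggers and events within a time window, can be sketched as follows. The scoring rule, variable names, and data are assumptions for illustration, not the thesis's exact formulation.

```python
def eventracker_scores(triggers, events, horizon):
    """Score each input by how often its trigger precedes an output event.

    triggers: {input_name: [timestamps]}; events: [output timestamps];
    horizon: maximum delay for a trigger to count as a likely cause.
    Inputs with low scores are candidates to stop sampling, which is
    where the filtering gain comes from.
    """
    scores = {}
    for name, times in triggers.items():
        hits = sum(1 for t in times
                   if any(0 <= e - t <= horizon for e in events))
        scores[name] = hits / len(times) if times else 0.0
    return scores

events = [1.0, 2.0, 3.0, 4.0]                 # output-variable events
triggers = {
    "conveyor_speed": [0.9, 1.9, 2.9],        # always followed by an event
    "ambient_temp":  [0.1, 5.0],              # never followed by an event
}
scores = eventracker_scores(triggers, events, horizon=0.2)
```

No data distribution is estimated and only logged timestamps are needed, consistent with the real-time constraint the abstract emphasises.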
177

A cross-layer middleware architecture for time and safety critical applications in MANETs

Pease, Sarogini G. January 2013 (has links)
Mobile Ad hoc Networks (MANETs) can be deployed instantaneously and adaptively, making them highly suitable to military, medical and disaster-response scenarios. Using real-time applications to provide instantaneous and dependable communications, media streaming, and device control in these scenarios is a growing research field. Realising timing requirements in packet delivery is essential to safety-critical real-time applications that are both delay- and loss-sensitive. The safety of these applications is compromised by packet loss, both in the network and by the applications themselves, which will drop packets exceeding delay bounds. However, the provision of this required Quality of Service (QoS) must overcome issues relating to the lack of reliable existing infrastructure and the conservation of safety-certified functionality. It must also overcome issues relating to layer-2 dynamics, with causal factors including hidden transmitters and fading channels. This thesis proposes that bounded maximum delay and safety-critical application support can be achieved by using cross-layer middleware. Such an approach benefits from the use of established protocols without requiring modifications to safety-certified ones. This research proposes ROAM: a novel, adaptive and scalable cross-layer Real-time Optimising Ad hoc Middleware framework for the provision and maintenance of performance guarantees in self-configuring MANETs. The ROAM framework is designed to be scalable to new optimisers and MANET protocols and requires no modifications to protocol functionality. Four original contributions are proposed: (1) ROAM, a middleware entity that abstracts information from the protocol stack using application programming interfaces (APIs) and implements optimisers to monitor and autonomously tune conditions at protocol layers in response to dynamic network conditions. 
The cross-layer approach is MANET-protocol generic, imposing minimally on the protocol stack and requiring no protocol modifications. (2) A horizontal handoff optimiser that responds to time-varying link quality to ensure optimal and robust channel usage. (3) A distributed contention-reduction optimiser that reduces channel contention and the related delay in response to detecting the presence of a hidden transmitter. (4) A feasibility evaluation of the ROAM architecture to bound maximum delay and jitter in a comprehensive range of ns2-MIRACLE simulation scenarios that demonstrate independence from the key causes of network dynamics: application setting and MANET configuration, including mobility and topology. Experimental results show that ROAM can constrain end-to-end delay, jitter and packet loss to support real-time applications with critical timing requirements.
178

GPU-Based Visualisation of Viewshed from Roads or Areas in a 3D Environment

Christoph, Heilmair January 2016 (has links)
Viewshed refers to the calculation and visualisation of what part of a terrain is visible from a given observer point. It is used within many fields, such as military planning or telecommunication tower placement. So far, no general fast methods exist for calculating the viewshed for multiple observers that may, for instance, represent a road within the terrain. Additionally, if the terrain contains overlapping structures such as man-made constructions like bridges, most current viewshed algorithms fail. This report describes two novel methods for viewshed calculation using multiple observers for terrain that may contain overlapping structures. The methods have been developed at Vricon in Linköping as a Master's Thesis project. Both methods are implemented using the graphics processing unit and the OpenGL graphics library, using a computer graphics approach. Results are presented in the form of figures and images, as well as running-time tables using two different test setups. Lastly, possible future improvements are also discussed. The results show that the first method is a viable real-time solution and that the second method requires some additional work.
179

A real-time expert system shell for process control.

Kang, Alan Montzy January 1990 (has links)
A dissertation submitted to the Faculty of Engineering, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science in Engineering / A multi-layered expert system shell that specifically addresses real-time issues is designed and implemented. The architecture of this expert system shell supports the concepts of parallelism, concurrent computation and competitive reasoning, in that it allows several alternatives to be explored simultaneously. An inference engine driven by a hybrid of forward and backward chaining methods is used to achieve real-time response, and certainty factors are used for uncertainty management. Real-time responsiveness is improved by allowing the coexistence of procedural and declarative knowledge within the same system. A test bed that was set up to investigate the performance of the implemented shell is described. The performance analysis found that the proposed system meets the real-time requirements specified in this research. / Andrew Chakane 2018
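As background to the uncertainty management mentioned above, the standard MYCIN-style certainty-factor combination rule can be sketched as follows. This is the textbook rule, not necessarily the shell's exact implementation.

```python
def combine_cf(cf1, cf2):
    """Combine two certainty factors (in [-1, 1]) for the same hypothesis.

    Classic MYCIN rule: two supporting rules reinforce each other,
    two opposing rules reinforce disbelief, and mixed evidence is
    damped by the weaker factor.
    """
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

cf_support = combine_cf(0.6, 0.5)    # two supporting rules
cf_mixed = combine_cf(0.8, -0.25)    # supporting vs. opposing evidence
```

The rule is order-independent for same-sign evidence, which lets a forward-chaining engine fold in rule conclusions incrementally as they fire.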
180

Emprego da reação em cadeia pela polimerase em tempo real para o controle de eficiência de bacterinas anti-leptospirose / Employment of real time polymerase chain reaction to control the efficiency of leptospirosis bacterins

Dib, Cristina Corsi 31 August 2011 (has links)
A estirpe Fromm de Leptospira interrogans sorovar Kennewicki foi utilizada para produção de uma bacterina experimental anti-leptospirose. A extração do RNA total utilizado para transcrição reversa e quantificação dos antígenos LigA e LipL32 por PCR em Tempo Real, foi efetuada a partir de alíquotas colhidas das diluições da bacterina antes da sua inativação, as quais foram armazenadas à temperatura de -80ºC. O volume restante da bacterina foi inativado em banho-maria à 56ºC e mantido à temperatura de -20ºC para avaliação da sua potência em hamsters bem como da detecção e quantificação dos antígenos LigA e LipL32 em ensaios de ELISA Indireto e ELISA Sanduíche Indireto. Os resultados do ensaio de potência em hamsters demonstraram que a bacterina foi aprovada de acordo com as exigências dos padrões internacionais de qualidade até a diluição 1/6400, protegendo os hamsters contra a infecção letal frente ao desafio com a diluição 10-6 (100 doses infectantes 50%/ 0,2mL). Os resultados das reações de Real Time PCR detectaram 3,2 x 103 e 2,3 x 101 cópias do mRNA que codifica a proteína LigA, na bacterina pura e diluída a 1:200, respectivamente. Apenas oito cópias do mRNA que codifica a proteína LipL32 foram detectadas na amostra de bacterina pura. Os ensaios com ELISA Indireto não detectaram a proteína LigA na amostra de bacterina inativada, mas demonstraram a detecção da proteína LipL32 até a diluição 1/1600 da bacterina. Os ensaios de ELISA Sanduíche Indireto apresentaram reações cruzadas nas placas controle, e, portanto seus resultados não puderam ser considerados nas análises. Os resultados da real time PCR não puderam ser correlacionados com o teste de potência em hamsters, mas os ensaios de ELISA Indireto para a proteína LipL32 demonstraram resultados condizentes com os apresentados pelo teste de potência em hamsters oferecendo uma possível alternativa in vitro para avaliação de potência de bacterinas anti-leptospirose. 
/ Leptospira interrogans serovar Kennewicki strain Fromm was used for the production of an experimental leptospirosis bacterin. The extraction of total RNA used for reverse transcription and quantification of the LigA and LipL32 antigens by Real Time PCR was performed on aliquots harvested from the bacterin dilutions before inactivation, which were stored at -80ºC. The remaining volume of bacterin was inactivated at 56ºC and maintained at -20ºC for the evaluation of bacterin potency in hamsters and the detection and quantification of the LigA and LipL32 antigens by Indirect ELISA and Indirect Sandwich ELISA assays. The results of the potency assay in hamsters demonstrated that the bacterin met international quality standards up to the 1/6400 dilution, protecting the hamsters against lethal infection when challenged with the 10-6 dilution (100 infectious doses 50%/0.2 mL). The Real Time PCR reactions detected 3.2 x 10^3 and 2.3 x 10^1 copies of the mRNA encoding the LigA protein in the pure and 1:200-diluted bacterin samples, respectively. Only eight copies of the mRNA encoding the LipL32 protein were detected in the pure bacterin sample. The Indirect ELISA assays did not detect the LigA protein in the inactivated bacterin sample, but did detect the LipL32 protein up to the 1/1600 dilution of the bacterin. The Indirect Sandwich ELISA assays presented cross-reactions in the control plates, so their results could not be considered in the analysis. The real time PCR results could not be correlated with the potency test in hamsters, but the Indirect ELISA assays for the LipL32 protein gave results consistent with those of the hamster potency test, offering a possible in vitro alternative for evaluating the potency of leptospirosis bacterins.
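Quantification by real-time PCR typically maps a cycle-threshold (Ct) value to a copy number through a standard curve. The sketch below shows that standard relationship with invented curve parameters; it is illustrative background, not the calibration used in the thesis.

```python
def copies_from_ct(ct, slope, intercept):
    """Estimate template copy number from a qPCR Ct via a standard curve.

    The curve is log-linear: ct = slope * log10(copies) + intercept,
    so copies = 10 ** ((ct - intercept) / slope). A slope near -3.32
    corresponds to 100% amplification efficiency.
    """
    return 10 ** ((ct - intercept) / slope)

# Hypothetical curve: intercept 40 (Ct of a single copy), slope -10/3.
copies = copies_from_ct(ct=30.0, slope=-10 / 3, intercept=40.0)
```

Earlier crossing of the threshold (lower Ct) means exponentially more starting template, which is how differences such as 3.2 x 10^3 versus 8 copies are resolved from only a few cycles of separation.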
