171
Does the Halting Necessary for Hardware Trace Collection Inordinately Perturb the Results? Watson, Myles G. 16 November 2004 (has links) (PDF)
Processor address traces are invaluable for characterizing workloads and testing proposed memory hierarchies. Long traces are needed to exercise modern cache designs and produce meaningful results, but are difficult to collect with hardware monitors because microprocessors access memory too frequently for disks or other large storage to keep up. The small, fast buffers of the monitors fill quickly; in order to obtain long contiguous traces, the processor must be stopped while the buffer is emptied. This halting may perturb the traces collected, but this cannot be measured directly, since long uninterrupted traces cannot be collected. We make the case that hardware performance counters, which collect runtime statistics without influencing execution, can be used to measure halting effects. We use the performance counters of the Pentium 4 processor to collect statistics while halting the processor as if traces were being collected. We then compare these results to the statistics obtained from unhalted runs. We present our results in terms of which counters are affected, why, and what this means for trace-collection systems.
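The comparison the abstract describes can be sketched with modern tooling. The snippet below is a minimal illustration, not the thesis's setup: it assumes a Linux host with `perf stat` available (the thesis read Pentium 4 counters directly), and `./workload` plus its `--halt` flag are hypothetical stand-ins for a benchmark run with and without trace-collection-style halting.

```python
# Minimal sketch: compare counter statistics between an unhalted run and a
# periodically halted run. Assumes Linux `perf stat`; the events, the
# ./workload binary, and its --halt flag are illustrative stand-ins.
import subprocess

EVENTS = "instructions,cache-misses,branch-misses"

def counter_stats(cmd):
    """Run cmd under `perf stat` and return {event: count}."""
    result = subprocess.run(["perf", "stat", "-x", ",", "-e", EVENTS] + cmd,
                            capture_output=True, text=True)
    stats = {}
    for line in result.stderr.splitlines():  # perf writes CSV stats to stderr
        fields = line.split(",")
        if len(fields) > 3 and fields[0].strip().isdigit():
            stats[fields[2]] = int(fields[0])  # value, unit, event, ...
    return stats

unhalted = counter_stats(["./workload"])
halted = counter_stats(["./workload", "--halt"])  # stop/restart as if tracing
for event, base in unhalted.items():
    drift = (halted.get(event, base) - base) / base
    print(f"{event}: {drift:+.2%} perturbation from halting")
```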
172
Design of a Surrogate Hypersonic Inlet for the HIFIRE-6 Configuration. Mileski, Joseph W. 26 August 2022 (has links)
No description available.
173
Ray Collection Bounding Volume Hierarchy. Rivera, Kris Krishna 01 January 2011 (has links)
This thesis presents Ray Collection BVH, an improvement over a current Ray Tracing acceleration structure in both building the structure and performing the steps necessary to efficiently render dynamic scenes. The Bounding Volume Hierarchy (BVH) is a commonly used acceleration structure that aids in rendering complex 3D scenes with Ray Tracing by breaking a scene of triangles into a simple hierarchical structure. The algorithm this thesis explores was developed in an attempt to accelerate both constructing this structure and using it to render complex scenes more efficiently. The author stumbled upon the idea of using a "ray collection" as a data structure while testing a theory for a class project. The overall scheme of the algorithm collects a set of localized rays together and intersects them with subsequent levels of the BVH at each build step. In addition, only part of the acceleration structure is built, on a per-ray-need basis. During this partial build, the rays responsible for rendering the scene are partially processed, also saving time in the overall procedure. Ray tracing is a widely used rendering technique, from producing realistic images to making movies; in the movie industry in particular, the level of realism brought to animated films through ray tracing is incredible, so any improvement that increases rendering speed is useful and welcome. This thesis contributes toward improving the overall speed of scene rendering, and hence may be considered an important and useful contribution.
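The core idea lends itself to a compact sketch. The following is a toy reconstruction under stated assumptions, not the thesis's implementation: primitives are points rather than triangles and the split rule is a simple median split, but it shows the two key behaviors, rays traversing as one collection and children built only when some ray actually reaches their parent.

```python
# Toy "ray collection" BVH: a batch of rays descends the tree together,
# and a node's children are only constructed when the batch reaches it.
import numpy as np

class Node:
    def __init__(self, points):
        self.points = points             # primitives owned by this subtree
        self.lo = points.min(axis=0)     # AABB corners
        self.hi = points.max(axis=0)
        self.children = None             # not built until a ray needs them

def ray_hits_aabb(origin, direction, lo, hi):
    """Standard slab test (directions assumed nonzero in every axis)."""
    inv = 1.0 / direction
    t0, t1 = (lo - origin) * inv, (hi - origin) * inv
    return np.maximum(t0, t1).min() >= max(np.minimum(t0, t1).max(), 0.0)

def trace_collection(node, rays, leaf_size=4):
    """Intersect a whole collection of rays with one lazily built subtree."""
    batch = [(o, d) for o, d in rays if ray_hits_aabb(o, d, node.lo, node.hi)]
    if not batch:
        return []                        # no ray reaches it: subtree never built
    if len(node.points) <= leaf_size:
        return [(o, d, node.points) for o, d in batch]   # leaf: report work
    if node.children is None:            # build children on per-ray need
        axis = int(np.argmax(node.hi - node.lo))
        order = node.points[:, axis].argsort()
        half = len(order) // 2
        node.children = [Node(node.points[order[:half]]),
                         Node(node.points[order[half:]])]
    hits = []
    for child in node.children:
        hits += trace_collection(child, batch, leaf_size)
    return hits

# A coherent bundle of rays only forces construction along its own path.
pts = np.random.default_rng(0).random((1000, 3))
rays = [(np.zeros(3), np.array([1.0, 1.0, 1.0]) + 0.01 * i) for i in range(8)]
leaf_work = trace_collection(Node(pts), rays)
```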
174
FlexRender: A Distributed Rendering Architecture for Ray Tracing Huge Scenes on Commodity Hardware. Somers, Robert Edward 01 June 2012 (has links) (PDF)
As the quest for more realistic computer graphics marches steadily on, the demand for rich and detailed imagery is greater than ever. However, the current "sweet spot" in terms of price, power consumption, and performance is in commodity hardware. If we desire to render scenes with tens or hundreds of millions of polygons as cheaply as possible, we need a way of doing so that maximizes the use of the commodity hardware we already have at our disposal.
Techniques such as normal mapping and level of detail have attempted to address the problem by reducing the amount of geometry in a scene. This is problematic for applications that desire or demand access to the scene's full geometric complexity at render time. More recently, out-of-core techniques have provided methods for rendering large scenes when the working set is larger than the available system memory.
We propose a distributed rendering architecture based on message-passing that is designed to partition scene geometry across a cluster of commodity machines in a spatially coherent way, allowing the entire scene to remain in-core and enabling the construction of hierarchical spatial acceleration structures in parallel. The results of our implementation show roughly an order of magnitude speedup in rendering time compared to the traditional approach, while keeping memory overhead for message queuing around 1%.
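As a rough sketch of the message-passing idea (with invented names and a toy uniform grid; the actual FlexRender partitioning and protocol are richer), each worker owns one spatial region and rays travel between workers as queued messages, rather than geometry being fetched across the cluster:

```python
# Toy sketch of spatially partitioned, message-passing ray traversal.
# Each worker owns one cell of a uniform grid over [0,1)^3; a ray that
# exits a cell is forwarded to the owner of the next cell, so geometry
# never moves between machines. Queues stand in for the network.
import queue

GRID = 2  # 2 x 2 x 2 grid: eight workers

def owner(p):
    """Worker id (grid cell) that owns point p in [0,1)^3."""
    return tuple(min(int(c * GRID), GRID - 1) for c in p)

inboxes = {(x, y, z): queue.Queue()
           for x in range(GRID) for y in range(GRID) for z in range(GRID)}

def send(ray):
    inboxes[owner(ray[0])].put(ray)

def worker_step(wid, step=0.1):
    """Pop one ray message, test it locally, forward it if it leaves."""
    try:
        origin, direction = inboxes[wid].get_nowait()
    except queue.Empty:
        return
    # ...intersect against this worker's in-core slice of the scene here...
    nxt = tuple(o + step * d for o, d in zip(origin, direction))
    if all(0.0 <= c < 1.0 for c in nxt):
        send((nxt, direction))           # may land in another worker's inbox

send(((0.1, 0.1, 0.1), (1.0, 0.0, 0.0)))
for _ in range(20):                      # round-robin "scheduler"
    for wid in inboxes:
        worker_step(wid)
```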
175
Can Clustering Improve Requirements Traceability? A TraceLab-Enabled Study. Armstrong, Brett Taylor 01 December 2013 (has links) (PDF)
Software permeates every aspect of our modern lives. In many applications, such as the software for airplane flight controls or nuclear power control systems, software failures can have catastrophic consequences. As we place so much trust in software, how can we know if it is trustworthy? Through software assurance, we can attempt to quantify just that.
Building complex, high-assurance software is no simple task. The difficult information landscape of a software engineering project can make verification and validation, the process by which the assurance of software is assessed, very difficult. In order to manage the inevitable information overload of complex software projects, we need software traceability, "the ability to describe and follow the life of a requirement, in both a forwards and backwards direction."
The Center of Excellence for Software Traceability (CoEST) has created a compelling research agenda with the goal of ubiquitous traceability by 2035. As part of this goal, they have developed TraceLab, a visual experimental workbench built to support design, implementation, and execution of traceability experiments. Through our collaboration with CoEST, we have made several contributions to TraceLab and its community.
This work contributes to the goals of the traceability research community. The three key contributions are (a) a machine learning component package for TraceLab featuring six classifier algorithms, five clustering algorithms, and over 40 components in total for creating TraceLab experiments, built upon the WEKA machine learning package as well as methods implemented outside of WEKA; (b) the design of an automated tracing system that uses clustering to decompose the task of tracing into many smaller tracing subproblems; and (c) an implementation of several key components of this tracing system using TraceLab, together with its experimental evaluation.
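Contribution (b) can be illustrated with a small stand-in. The sketch below is an assumption-laden toy, not the thesis's WEKA-based components: scikit-learn substitutes for WEKA and the artifacts are invented, but it shows the decomposition step, routing each requirement to one cluster of documents so tracing happens inside small subproblems.

```python
# Toy clustering-based tracing: cluster low-level artifacts, then trace each
# requirement only against its nearest cluster instead of the whole corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

requirements = ["the system shall log every failed login",
                "the system shall encrypt stored passwords"]
code_docs = ["def log_failed_login(user): record login failure",
             "def hash_password(pw): encrypt and store password",
             "def render_dashboard(): draw charts"]

vec = TfidfVectorizer()
X = vec.fit_transform(requirements + code_docs)
req_X, doc_X = X[:len(requirements)], X[len(requirements):]

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(doc_X)

for i, req in enumerate(requirements):
    # Route the requirement to its nearest cluster, then rank only that
    # cluster's documents: one small tracing subproblem instead of a big one.
    cluster = cosine_similarity(req_X[i], km.cluster_centers_).argmax()
    members = [j for j, c in enumerate(km.labels_) if c == cluster]
    ranked = sorted(members,
                    key=lambda j: -cosine_similarity(req_X[i], doc_X[j])[0, 0])
    print(req, "->", [code_docs[j][:30] for j in ranked])
```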
176
A Study of Semi-automated Tracing. Holden, Jeffrey 01 June 2011 (has links) (PDF)
Requirements tracing is crucial for software engineering practices including change analysis, regression testing, and reverse engineering. The requirements tracing process produces a requirements traceability matrix (TM), which links high- and low-level document elements. Manually generating a TM is laborious, time-consuming, and error-prone; due to these challenges, TMs are often neglected. Automated information retrieval (IR) techniques are used with some success. However, in mission- or safety-critical systems a human analyst is required to vet the candidate TM. This introduces semi-automated requirements tracing, where IR methods present a candidate TM and a human analyst validates it, producing a final TM. In semi-automated tracing the focus becomes the quality of the final TM. This thesis expands upon the research of Cuddeback et al. by examining how human analysts interact with candidate TMs. We conduct two experiments, one using an automated tracing tool and the other using manual validation, and perform formal statistical analysis to determine the key factors impacting the analyst's tracing performance. Additionally, we conduct a pilot study investigating how analysts interact with TMs generated by automated IR methods. Our research statistically confirms the finding of Cuddeback et al. that the strongest influence on analyst performance is the initial TM quality. Finally, we show evidence that applying local filters to IR results produces the best candidate TMs.
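The "local filters" in the last sentence can be made concrete with a small sketch (a common formulation in the tracing literature; the exact filter and threshold used in the thesis are not reproduced here): each high-level element keeps only candidate links that score within a fixed fraction of its own best link, rather than applying one global cutoff.

```python
# Local filtering of an IR-generated candidate traceability matrix.
# The 0.7 keep ratio is an illustrative assumption, not a tuned value.
def local_filter(candidate_tm, keep_ratio=0.7):
    """candidate_tm: {req_id: [(doc_id, score), ...]} from an IR method."""
    filtered = {}
    for req, links in candidate_tm.items():
        if not links:
            filtered[req] = []
            continue
        best = max(score for _, score in links)   # this element's top score
        filtered[req] = [(doc, s) for doc, s in links if s >= keep_ratio * best]
    return filtered

candidate = {"REQ-1": [("api.c", 0.82), ("ui.c", 0.31), ("db.c", 0.60)]}
print(local_filter(candidate))   # keeps api.c and db.c, drops ui.c
```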
177
Indoor localization using received signal strength. Obeidat, Huthaifa A.N., Abd-Alhameed, Raed, Noras, James M., Zhu, Shaozhen (Sharon), Ghazaany, Tahereh S., Ali, N.T., Elkhazmi, Elmahdi A. January 2013 (has links)
A comparison between two indoor localization algorithms using received signal strength is carried out. The first algorithm is the vector algorithm; the second is the matrix algorithm. The comparison considers the effects of the reference points, the access points, and the frequency on the accuracy of the localization process. The experiments were carried out using ray-tracing software and MATLAB. This paper justifies adopting the matrix algorithm.
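For readers unfamiliar with RSS fingerprinting, a minimal nearest-neighbor sketch in the spirit of the vector algorithm follows; the reference points and RSS values are invented, and the paper's matrix algorithm (the one the authors favor) is not reproduced here.

```python
# Vector-style RSS fingerprinting: the measured RSS vector is matched to the
# nearest stored reference-point fingerprint. All data below are invented.
import numpy as np

# Offline phase: RSS fingerprints (dBm) from 3 access points at known points.
reference_points = {(0.0, 0.0): [-40, -70, -65],
                    (5.0, 0.0): [-70, -42, -68],
                    (0.0, 5.0): [-68, -71, -45]}

def locate(measured):
    """Return the reference point whose fingerprint is nearest in RSS space."""
    measured = np.asarray(measured, dtype=float)
    return min(reference_points,
               key=lambda p: np.linalg.norm(measured - reference_points[p]))

print(locate([-41, -69, -66]))   # -> (0.0, 0.0)
```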
178
Fuzzing Deeper Logic with Impeding Function Transformation. Rowan Brock Hart (14205404) 02 December 2022 (has links)
Fuzzing, a technique for negative testing of programs using randomly mutated or generated input data, is responsible for the discovery of thousands of bugs in software from web browsers to video players. Advances in fuzzing focus on various methods for increasing the number of bugs found and reducing the time spent finding them by applying static, dynamic, and symbolic binary analysis techniques. As a stochastic process, fuzzing is an inherently inefficient method for discovering bugs residing in the deep logic of programs, because the complexity of preconditions compounds as paths through a program grow longer. We propose a novel system to overcome this limitation by abstracting path-constraining preconditions from the statement level to the function level, identifying impeding functions: functions that inhibit control flow from proceeding. REFACE is an end-to-end system that enhances the capabilities of an existing fuzzer by generating variant binaries that present an easier-to-fuzz interface, expanding an ongoing fuzzing campaign with minimal offline overhead. REFACE operates entirely on binary programs, requiring no source code or symbols to run, and is fuzzer-agnostic. This enhancement represents a step in a new direction toward abstracting code that has historically presented a significant barrier to fuzzing, and it makes incremental progress by way of several ancillary dataflow analysis techniques with potential wide applicability. In evaluation against an unmodified state-of-the-art fuzzer with no augmentation, we attain a significant improvement in the speed of reaching maximum coverage, rediscover one known bug, and discover one possible new bug in a binary program.
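The impeding-function idea can be illustrated at toy scale. REFACE performs the transformation on binaries with no source; the source-level sketch below is only conceptual, with a checksum gate standing in for an impeding function and the permissive stub standing in for a REFACE-generated variant binary.

```python
# Conceptual sketch: a checksum acts as an impeding function, gating deep
# logic behind a precondition random inputs almost never satisfy. A variant
# with the gate neutralized lets a fuzzer reach the logic behind it.
import os
import zlib

def checksum_ok(data: bytes) -> bool:            # impeding function
    return zlib.crc32(data[:-4]) == int.from_bytes(data[-4:], "big")

def checksum_ok_variant(data: bytes) -> bool:    # easier-to-fuzz variant
    return True                                  # control flow may proceed

def parse(data: bytes, check=checksum_ok):
    if len(data) < 5 or not check(data):
        return "rejected"
    # ...deep parsing logic a fuzzer actually wants to exercise...
    return "parsed"

inputs = [os.urandom(16) for _ in range(1000)]   # stand-in for fuzzer mutations
for check in (checksum_ok, checksum_ok_variant):
    hits = sum(parse(d, check) == "parsed" for d in inputs)
    print(f"{check.__name__}: {hits}/1000 inputs reached the deep logic")
```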
179
Differentiable Simulation for Photonic Design: From Semi-Analytical Methods to Ray Tracing. Zhu, Ziwei January 2024 (has links)
The numerical solutions of Maxwell’s equations have been the cornerstone of photonic design for over a century. In recent years, the field of photonics has witnessed a surge in interest in inverse design, driven by the potential to engineer nonintuitive photonic structures with remarkable properties. However, the conventional approach to inverse design, which relies on fully discretized numerical simulations, faces significant challenges in terms of computational efficiency and scalability.
This thesis delves into an alternative paradigm for inverse design, leveraging the power of semi-analytical methods. Unlike their fully discretized counterparts, semi-analytical methods hold the promise of enabling simulations that are independent of the computational grid size, potentially revolutionizing the design and optimization of photonic structures. To achieve this goal, we put forth a more generalized formalism for semi-analytical methods and have developed a comprehensive differential theory to underpin their operation. This theoretical foundation not only enhances our understanding of these methods but also paves the way for their broader application in the field of photonics.
In the final stages of our investigation, we illustrate how the semi-analytical simulation framework can be effectively employed in practical photonic design scenarios. We demonstrate the synergy of semi-analytical methods with ray tracing techniques, showcasing their combined potential in the creation of large-scale optical lens systems and other complex optical devices.
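As a minimal flavor of differentiable optical design (a toy, hand-differentiated example; the thesis's semi-analytical Maxwell solvers and ray-traced lens systems are far richer), gradient descent through the lensmaker's equation can tune a surface curvature toward a target focal length:

```python
# Toy differentiable design: fit a front surface radius R1 so that a thin
# lens hits a target focal length, using the analytic derivative of the
# lensmaker's equation 1/f = (n - 1) * (1/R1 - 1/R2). All values invented.
n, R2, target_f = 1.5, -60.0, 50.0   # index, rear radius (mm), goal (mm)

def focal_length(R1):
    return 1.0 / ((n - 1.0) * (1.0 / R1 - 1.0 / R2))

def dfdR1(R1):
    # Chain rule on f = 1/g, g = (n - 1)(1/R1 - 1/R2): df/dR1 = (n - 1) f^2 / R1^2
    f = focal_length(R1)
    return (n - 1.0) * f * f / (R1 * R1)

R1 = 40.0                                 # initial guess (mm)
for _ in range(200):                      # gradient descent on (f - target)^2
    grad = 2.0 * (focal_length(R1) - target_f) * dfdR1(R1)
    R1 -= 0.5 * grad
print(f"R1 = {R1:.3f} mm -> f = {focal_length(R1):.3f} mm")   # f ~= 50 mm
```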
180
Implementation of Digital Contact Tracing for COVID-19 in a Hospital Context: Experiences and Perspectives of Leaders and Healthcare Workers. O'Dwyer, Brynn 27 November 2023 (has links)
Background. In parallel with public health responses, health systems have had to rapidly implement infection control strategies during the SARS-CoV-2 (COVID-19) pandemic. Various technologies, such as digital contact tracing (DCT), have been implemented to enhance case investigations among healthcare workers (HCWs). Currently, little attention has focused on the perspectives of those who have implemented DCT innovations and those who have adopted such technologies within a healthcare environment.
Objective. This study aimed to describe the implementation, acceptance, and outcomes of a web-based DCT tool used extensively at a specialized pediatric acute-care hospital in Ontario during the COVID-19 pandemic, from the perspective of key stakeholders.
Methods. Using an exploratory qualitative design, this research involved 21 semi-structured interviews with healthcare administrators (n=6; 29%), occupational health specialists (n=8; 38%), and healthcare workers (n=7; 33%) at the Children's Hospital of Eastern Ontario. Interview protocols and analysis were guided by the RE-AIM (Reach, Effectiveness, Adoption, Implementation, and Maintenance) framework. The interviews lasted 33.6 minutes on average and were audio-recorded. Verbatim transcripts were subjected to thematic analysis using NVivo software.
Results. The implementation of DCT during the COVID-19 pandemic was viable and well received by stakeholders. End-users reported that their engagement with the DCT tool was facilitated by its perceived ease of use and the ability to gain awareness of probable COVID-19 exposures; however, risk-assessment consequences and access concerns presented barriers (reach). Stakeholders commonly agreed that the DCT tool had a positive effect on the hospital's capacity to meet the demands of COVID-19, notably by facilitating timely case investigations and informing decision-making processes (effectiveness). Implementors and occupational specialists conveyed staffing impacts and the loss of nuanced information as unintended consequences (effectiveness). Safety-focused communication strategies and a human-centered technology design were crucial factors driving staff adoption. Conversely, adoption was challenged by the misaligned delivery of the DCT tool with HCWs' standard practices, alongside evolving perspectives on COVID-19. Some end-users described an initial disconnect from the DCT tool, raising questions about the fidelity of the implementation. However, stakeholders collectively agreed on the viability of the DCT approach and its applicability to infectious disease practices (maintenance).
Conclusion. Stakeholders reported DCT in the hospital context to be acceptable and efficient in meeting the demands of the COVID-19 pandemic. Recommendations for optimized DCT use include education and training for relevant personnel, improved access and usability, and integration into clinical systems. The findings contribute to evidence-based practices and guide future scale-up initiatives focused on digital surveillance in the hospital context.