  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
451

Trajectory reconstruction by analysis of trace evidence on spent bullets fired through building materials: analysis by microscopy and direct analysis in real time

Edison, William James 12 March 2016 (has links)
Trajectory reconstruction of shooting incidents can help investigators determine critical case information regarding the number of shooters involved, their location(s), and intent. The examination of trace amounts of intermediate target materials collected on the surface of spent bullets provides crucial information needed for trajectory reconstruction. Determining the origin of an unknown material adhered to a spent bullet allows for the identification of intermediate targets the bullet contacted or penetrated during flight. Although significant information can be obtained from the examination of these trace materials, this aspect of trajectory reconstruction is often ignored. The ability of different bullet types to collect trace materials from intermediate targets, and the ability to associate these trace materials with their origin, was examined using microscopy and Direct Analysis in Real Time mass spectrometry (DART-MS). Full metal jacket (FMJ), jacketed hollow point (JHP), and lead round nose (LRN) bullets were fired into sheets of five commonly used building materials (oriented strand board, sanded plywood, white melamine board, synthetic PVC board, and medium density fiberboard) to produce a total of 45 spent bullets. All spent bullets were examined and photographed using a DSLR camera paired with a stereomicroscope, then examined with DART-MS to determine whether ion profiles generated from the trace materials could be associated with those of the intermediate target building materials through which the bullets were fired. The collection of trace material from all five building materials was highly dependent on bullet type. Very minimal amounts of trace material were observed on the majority of LRN bullets, and these failed to produce an identifiable ion signature.
The FMJ bullets that were fired through PVC material collected trace material that produced an ion profile, while all other building materials failed to transfer to the FMJ bullets. All JHP bullets collected significant amounts of the five building materials tested inside the hollow point cavity and along the nose of the bullet. In every spent JHP bullet sample, the trace material collected produced a unique ion profile. Additionally, MS data from four of the five building materials tested matched the MS data generated from trace material collected on JHP bullets from the respective target materials.
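The association step above rests on comparing the ion profile of trace material against that of a candidate building material. A minimal sketch of one plausible comparison, cosine similarity between profiles treated as m/z-intensity vectors, is shown below; the function, the similarity metric, and all m/z values are illustrative assumptions, not the matching criteria used in the thesis.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two ion profiles given as {m/z: intensity} dicts."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical ion profiles (m/z: relative intensity), invented for illustration.
pvc_reference = {149.0: 100.0, 167.0: 42.0, 279.1: 18.0}
bullet_trace  = {149.0: 95.0, 167.0: 40.0, 279.1: 20.0}
unrelated     = {57.1: 80.0, 71.1: 60.0}

print(round(cosine_similarity(pvc_reference, bullet_trace), 3))  # close to 1: profiles match
print(cosine_similarity(pvc_reference, unrelated))               # 0.0: no shared ions
```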
452

Design of an adaptive RF fingerprint indoor positioning system

Mohd Sabri, Roslee January 2018 (has links)
RF fingerprinting can solve the indoor positioning problem with satisfactory accuracy, but the methodology depends on a so-called radio map calibrated in an offline phase via manual site survey, which is costly, time-consuming, and somewhat error-prone. It also assumes the RF fingerprints' signal-spatial correlations remain static throughout the online positioning phase, which generally does not hold in practice: indoor environments constantly experience dynamic changes, causing radio signal strengths to fluctuate over time and weakening the signal-spatial correlations of the RF fingerprints. State-of-the-art work has proposed adaptive RF fingerprinting methodologies capable of calibrating the radio map in real time and on demand to address these drawbacks. However, existing implementations are highly server-centric, which is less robust, scales poorly, and is not privacy-friendly. This thesis addresses these drawbacks by exploring the feasibility of implementing an adaptive RF fingerprint indoor positioning system in a distributed, client-centric architecture using only commodity Wi-Fi hardware, so that it can integrate seamlessly with an existing Wi-Fi network and allow it to offer both networking and positioning services. Such an approach has not been explored in previous works and forms the basis of this thesis' main contribution. The proposed methodology utilizes a network of distributed location beacons as its reference infrastructure; the system is therefore more robust because it has no single point of failure. Each location beacon periodically broadcasts its coordinates to announce its presence in the area, plus coefficients that model its real-time RSS distribution around the transmitting antenna.
These coefficients are continually self-calibrated by the location beacon using empirical RSS measurements obtained collaboratively from neighbouring location beacons, fitting the values to a log-distance path loss model with log-normal shadowing as a function of inter-beacon distance while minimizing the error in a least-squares sense. By self-modelling its RSS distribution in real time, the location beacon becomes aware of its dynamically fluctuating signal levels caused by the physical, environmental, and temporal characteristics of the indoor environment. The implementation of this self-modelling feature on commodity Wi-Fi hardware is another original contribution of this thesis. Location discovery is managed locally by the clients, which means the proposed system can support an unlimited number of client devices simultaneously while also protecting users' privacy, because no information is shared with external parties. A client starts by listening for beacon frames broadcast by nearby location beacons and measuring their RSS values to establish the RF fingerprint of the unknown point. Next, it simulates the reference RF fingerprints of predetermined points inside the target area, effectively calibrating the site's radio map, by computing the RSS values of all detected location beacons from their respective coordinates and path loss coefficients embedded in the received beacon frames. Because the coefficients model the real-time RSS distribution of each location beacon around its transmitting antenna, the radio map adapts itself to the dynamic fluctuations of the radio signal and maintains its signal-spatial correlations. The final step is to search the radio map for the reference RF fingerprint that most closely resembles the unknown sample; its coordinate is returned as the location result.
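The self-calibration step can be sketched as a least-squares fit of the log-distance path loss model RSS(d) = p0 - 10·n·log10(d), with log-normal shadowing appearing as the zero-mean residual of the fit. This is a minimal illustration, not the on-beacon implementation; the function name and synthetic data are assumptions.

```python
import math

def fit_path_loss(samples):
    """Least-squares fit of RSS(d) = p0 - 10*n*log10(d) to (distance_m, rss_dbm) pairs."""
    xs = [math.log10(d) for d, _ in samples]
    ys = [rss for _, rss in samples]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    p0 = my - slope * mx          # intercept at d = 1 m
    return p0, -slope / 10.0      # fitted slope equals -10*n

# Synthetic neighbour measurements generated from p0 = -40 dBm, n = 2.5 (noise-free).
samples = [(d, -40.0 - 25.0 * math.log10(d)) for d in (1.0, 2.0, 4.0, 8.0, 16.0)]
p0, n = fit_path_loss(samples)
print(round(p0, 1), round(n, 2))  # recovers -40.0 and 2.5
```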
One positioning approach would be to first construct a full radio map by computing the RSS of all detected location beacons at all predetermined calibration points, followed by an exhaustive search over all reference RF fingerprints to find the best match. Generally, RF fingerprinting performs better with a higher number of calibration points per unit area, since more locations can be classified, while extra RSS components help to better distinguish between nearby calibration points. However, calibrating and searching many RF fingerprints incurs substantial computing cost, which is unsuitable for power- and resource-limited client devices. To address this challenge, this thesis introduces a novel algorithm suitable for client-centric positioning as another contribution. Given an unknown RF fingerprint to solve for location, the proposed algorithm first sorts the RSS values in descending order. It then iterates over this list, first selecting the location beacon with the strongest RSS, because this implies the unknown location is closest to that beacon. Next, it computes the beacon's RSS from its path loss coefficients and coordinate information one calibration point at a time, comparing each result with the measured value. If they are similar, the algorithm keeps that point for subsequent processing; otherwise it is removed, because points distant from the unknown location exhibit vastly different RSS values due to the different site-specific obstructions encountered during radio signal propagation. The algorithm repeats the process with the next-strongest location beacon, but this time computes its RSS only at the points kept in the previous iteration. After the last iteration completes, the average coordinate of the remaining calibration points is returned as the location result.
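The client-side algorithm described above might be sketched as follows, assuming each beacon frame carries the beacon's coordinates, a 1 m reference power `p0`, and a path-loss exponent `n`. The names, the similarity threshold, and the toy layout are illustrative assumptions, not values from the thesis.

```python
import math

def predicted_rss(beacon, point):
    """RSS predicted from a beacon's broadcast coordinates and path-loss coefficients."""
    d = max(math.dist(beacon["xy"], point), 1.0)
    return beacon["p0"] - 10.0 * beacon["n"] * math.log10(d)

def locate(beacons, measured, grid, threshold_db=5.0):
    """Prune calibration points iteratively, strongest beacon first; average the survivors."""
    candidates = list(grid)
    for name in sorted(measured, key=measured.get, reverse=True):
        kept = [p for p in candidates
                if abs(predicted_rss(beacons[name], p) - measured[name]) <= threshold_db]
        if kept:                       # never discard every candidate
            candidates = kept
    n = len(candidates)
    return (sum(p[0] for p in candidates) / n, sum(p[1] for p in candidates) / n)

# Toy layout: three beacons in a 10 m x 10 m area, a 2 m calibration grid,
# and noise-free measurements taken at the (unknown) true point (2, 2).
beacons = {
    "b1": {"xy": (0.0, 0.0),  "p0": -40.0, "n": 2.0},
    "b2": {"xy": (10.0, 0.0), "p0": -40.0, "n": 2.0},
    "b3": {"xy": (0.0, 10.0), "p0": -40.0, "n": 2.0},
}
grid = [(x, y) for x in range(0, 11, 2) for y in range(0, 11, 2)]
measured = {name: predicted_rss(b, (2.0, 2.0)) for name, b in beacons.items()}
print(locate(beacons, measured, grid))  # lands on the true point for this noise-free layout
```

Only the surviving points are ever evaluated for the weaker beacons, which is the source of the runtime saving over a full calibrate-then-search pass.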
Matlab simulation shows the proposed algorithm takes only about half the time to produce a location estimate, with similar positioning accuracy, compared to a conventional algorithm that performs a full radio map calibration and an exhaustive RF fingerprint search. As part of the thesis' contribution, a prototype of the proposed indoor positioning system was developed using only commodity Wi-Fi hardware and open-source software to evaluate its usability in real-world settings and to demonstrate a possible implementation on existing Wi-Fi installations. Experimental results verify that the proposed system yields consistent positioning accuracy, even in highly dynamic indoor environments and changing location beacon topologies.
453

Real-Time Composer-Performer Collaboration As Explored In Wilderness, A Dance And Audio Installation

January 2012 (has links)
abstract: From fall 2010 to spring 2011, the author was the pianist in twenty public performances of Wilderness, a site-adaptable dance and audio installation by choreographer Yanira Castro and composer Stephan Moore. Wilderness's music was generated as the result of an algorithmic treatment of data collected from the movements of both dancers and audience members within the performance space. The immediacy of using movement to instantaneously generate sounds resulted in the need for a real-time notational environment inhabited by a sight-reading musician. Wilderness provided the author the opportunity to extensively explore an extreme sight-reading environment, as well as the experience of playing guided improvisations over existing materials while incorporating lateral thinking strategies, resulting from a real-time collaboration between composer and performer during the course of a live performance. This paper describes Wilderness in detail with particular attention focused on aspects of the work that most directly affect the pianist: the work's real-time notational system, live interaction between composer and performer, and the freedoms and limitations of guided improvisation. There is a significant amount of multi-media documentation of Wilderness available online, and the reader is directed toward this online content in the paper's appendix. / Dissertation/Thesis / D.M.A. Music 2012
454

Development of a Botrytis-specific immunosensor: towards using PCR species identification

Binder, Michael January 2014 (has links)
Botrytis species affect over 300 host plants in all climate areas of the world, at both pre- and post-harvest stages, leading to significant losses in agricultural produce. The development of a rapid, sensitive, and reliable method to assess the pathogen load of infected crops can therefore help prescribe an effective curing regime. Growers would then be able to predict and manage the full storage potential of their crops, providing effective disease control and reducing post-harvest losses. In this work, a highly sensitive electrochemical immunosensor was developed for the detection and quantification of Botrytis species, based on a screen-printed gold electrode (SPGE) with an onboard carbon counter electrode and a silver/silver chloride (Ag/AgCl) pseudo-reference electrode. The sensor utilised a direct sandwich enzyme-linked immunosorbent assay (ELISA) format with a monoclonal antibody against Botrytis immobilised on the gold working electrode. Two immobilisation strategies were investigated for the capture antibody: adsorption, and covalent immobilisation after self-assembled monolayer formation with 3-dithiodipropionic acid (DTDPA). A polyclonal antibody conjugated to the electroactive enzyme horseradish peroxidase (HRP) was then applied for signal generation. Electrochemical measurements were conducted using 3,3′,5,5′-tetramethylbenzidine dihydrochloride/hydrogen peroxide (TMB/H2O2) as the enzyme substrate system at a potential of -200 mV. The developed biosensor was capable of detecting latent Botrytis infections 24 h post-inoculation, with a linear range from 150 to 0.05 μg fungal mycelium ml⁻¹ and a limit of detection (LOD) as low as 16 ng ml⁻¹ for covalent immobilisation and 58 ng ml⁻¹ for adsorption. Benchmarked against commercially available Botrytis ELISA kits, the optimised immuno-electrochemical biosensor showed strong correlation for the quantified samples (R² = 0.998).
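Reported LOD figures are conventionally derived from the calibration curve. Below is a minimal sketch of the common 3.3·σ/slope estimate on a hypothetical linear calibration; the data and blank standard deviation are invented for illustration and chosen only so the result lands near the reported covalent-immobilisation LOD, not taken from the thesis.

```python
def lod_3_3_sigma(concs, signals, blank_sd):
    """LOD = 3.3 * sigma / S, with the slope S from a linear least-squares fit."""
    n = len(concs)
    mx, my = sum(concs) / n, sum(signals) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(concs, signals))
             / sum((x - mx) ** 2 for x in concs))
    return 3.3 * blank_sd / slope

# Hypothetical calibration: peak current (uA) vs. mycelium concentration (ug/ml).
concs = [0.05, 0.5, 5.0, 50.0, 150.0]
signals = [0.1 + 2.0 * c for c in concs]             # ideal linear response, slope 2
print(lod_3_3_sigma(concs, signals, blank_sd=0.01))  # ~0.0165 ug/ml, i.e. ~16.5 ng/ml
```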
455

The effect of n-3 PUFAs on the expression of genes encoding proteins controlling cholesterol homeostasis

Hyblerová, Dagmar January 2014 (has links)
The aim of this study was to confirm that n-3 polyunsaturated fatty acids (n-3 PUFA) have a positive effect on plasma lipids. These acids can reduce cholesterol by increasing expression of the gene Insig-1 while decreasing the expression of the genes encoding Hmgcr and Ldlr. We tested this in experimental rats whose feed was supplemented with 6 % safflower oil, 6 % fish oil, or 6 % oil from the alga Schizochytrium. Relative expression of Insig-1 was 120 % of the control in the group receiving fish oil (P<0.05) and 170 % of the control in the group receiving Schizochytrium oil (P<0.05). These results confirm our hypothesis only in part, as the relative expression of Hmgcr and Ldlr was 103 % and 101 % of the control in the fish oil group (P>0.05) and 117 % and 156 % of the control in the Schizochytrium oil group (P>0.05); the relative expression of these genes was therefore not reduced. However, we did show that n-3 PUFA contribute to a reduction in plasma cholesterol, in this case by up to 20 % of the control value. The cholesterol concentration in the group receiving safflower oil was 1.35 mmol·l⁻¹, and in the group receiving fish oil 0.98 mmol·l⁻¹.
456

Adaptive Scheduling in a Distributed Cyber-Physical System: A case study on Future Power Grids

Choudhari, Ashish 01 December 2015 (has links)
Cyber-physical systems (CPS) are systems composed of physical and computational components. CPS components are typically interconnected through a communication network that allows them to interact and take automated actions beneficial to the overall CPS. The future power grid is a major example of a cyber-physical system. Traditionally, power grids use a centralized approach to manage the energy produced at power sources or large power plants. Due to the advancement and availability of renewable energy sources such as wind farms and solar systems, an increasing number of energy sources are connecting to the power grid. Managing this large number of energy sources with a centralized technique is impractical and computationally very expensive; a decentralized way of monitoring and scheduling energy across the power grid is therefore preferred. In a decentralized approach, the computational load is distributed among the grid entities, which are interconnected through a readily available communication network such as the Internet. The communication network allows the grid entities to coordinate and exchange their power state information with each other and take automated actions that lead to efficient use of both energy and network bandwidth. Thus, the future power grid is appropriately called a "Smart-Grid". While Smart-Grids provide efficient energy operations, they also impose several challenges in the design, verification, and monitoring phases. The computer network serves as the backbone for scheduling messages between the Smart-Grid entities; therefore, the network delays experienced by messages play a vital role in grid stability and overall system performance. In this work, we study the effects of network delays on Smart-Grid performance and propose adaptive algorithms to efficiently schedule messages between the grid entities. The proposed algorithms also ensure grid stability and perform network congestion control.
Through this work, we derive useful conclusions regarding the Smart-Grid performance and find new challenges that can serve as future research directions in this domain.
457

Policies for Migration of Real-Time tasks in Embedded Multicore Systems

Katre, Kedar Maheshwar 01 December 2010 (has links)
A substantial amount of work has been done on the timing predictability of real-time tasks on embedded systems. The main assumption in these studies has been that tasks execute on single-processor systems. This scenario has changed entirely now that single-core systems are being replaced by multicore systems. Timing predictability is influenced by the migrating tasks, the network topology connecting the cores, and the number of cores in the system. In this thesis we develop a feasibility analysis that depends on task characteristics such as the number of cache lines, migration time, available bandwidth, and number of tasks. We also test this analysis on recently proposed migration mechanisms and present the results.
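As a rough illustration of how the task characteristics listed above can enter a feasibility check, the sketch below charges a migrating task the time to move its cache footprint over the interconnect and tests the result against its deadline. This is an assumed simplification, not the thesis' actual analysis; all names and numbers are hypothetical.

```python
def migration_delay(cache_lines, line_size_bytes, bandwidth_bytes_per_s):
    """Time to re-fetch or transfer a task's cache footprint after migrating cores."""
    return cache_lines * line_size_bytes / bandwidth_bytes_per_s

def migration_feasible(wcet, deadline, delay):
    """The migrated job must still finish by its deadline once the delay is charged."""
    return wcet + delay <= deadline

# Hypothetical task: 4 ms WCET, 10 ms deadline, 2048 cache lines of 64 B over a 100 MB/s link.
delay = migration_delay(2048, 64, 100e6)
print(delay, migration_feasible(0.004, 0.010, delay))  # ~1.3 ms of delay still fits
```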
458

Bounding the Worst-Case Response Times of Hard-Real-Time Tasks under the Priority Ceiling Protocol in Cache-Based Architectures

Poluri, Kaushik 01 August 2013 (has links)
Abstract of the thesis of Kaushik Poluri, for the Master of Science degree in Electrical and Computer Engineering, presented on 07/03/2013 at Southern Illinois University Carbondale. Major professor: Dr. Harini Ramaprasad.
Schedulability analysis of hard-real-time systems requires a priori knowledge of the worst-case execution times (WCET) of all tasks. Static timing analysis is a safe technique for calculating WCET that attempts to model program complexity, architectural complexity, and the complexity introduced by interference from other tasks. Modern architectural features such as caches make static timing analysis of a single task challenging, due to the unpredictability introduced by their reliance on the history of memory accesses, and make the analysis of a set of tasks even more challenging, due to cache-related interference among tasks. Researchers have proposed several static timing analysis techniques that explicitly consider cache-eviction delays for independent hard-real-time tasks executing on cache-based architectures, but there is little research in this area for resource-sharing tasks. Recently, an analysis technique was proposed for systems using the Priority Inheritance Protocol (PIP) to manage resource arbitration among tasks. The Priority Ceiling Protocol (PCP) is a resource-arbitration protocol that offers distinct advantages over the PIP, including deadlock avoidance; however, to the best of our knowledge, there is currently no technique to bound the WCET of resource-sharing tasks under PCP with explicit consideration of cache-eviction delays. This thesis presents a technique to bound the WCETs, and hence the Worst-Case Response Times (WCRTs), of resource-sharing hard-real-time tasks executing on cache-based uniprocessor systems, focusing specifically on data cache analysis.
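Analyses of this kind build on the classical fixed-priority response-time recurrence R_i = C_i + B_i + Σ_{j∈hp(i)} ⌈R_i/T_j⌉·C_j, where B_i is the blocking term (under PCP, bounded by the longest critical section of any lower-priority task whose ceiling is at or above task i's priority). A minimal sketch with a flat per-preemption cache-delay term folded in is shown below; the uniform `crpd` value is a simplifying assumption, whereas the thesis bounds data-cache delays explicitly.

```python
import math

def wcrt(i, tasks, blocking, crpd, limit=1000.0):
    """Worst-case response time of task i (tasks sorted by decreasing priority, as (C, T)).
    Each preemption by a higher-priority job is charged its WCET plus a cache delay."""
    c, _ = tasks[i]
    r = c + blocking
    while True:
        r_next = c + blocking + sum(math.ceil(r / t_j) * (c_j + crpd)
                                    for c_j, t_j in tasks[:i])
        if r_next == r:
            return r
        if r_next > limit:            # recurrence diverged: treat as unschedulable
            return math.inf
        r = r_next

# Hypothetical task set (C, T), highest priority first; blocking and crpd are invented.
tasks = [(1.0, 4.0), (2.0, 8.0), (3.0, 20.0)]
print(wcrt(2, tasks, blocking=1.0, crpd=0.5))  # converges to 15.0, within the 20.0 period
```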
459

Aperiodic Job Handling in Cache-Based Real-Time Systems

Motakpalli, Sankalpanand 01 December 2017 (has links)
Real-time systems require a priori temporal guarantees. While most of the normal operation in such a system is modeled using time-driven, hard-deadline sporadic tasks, event-driven behavior is modeled using aperiodic jobs with soft or no deadlines. To provide good Quality-of-Service for aperiodic jobs in the presence of sporadic tasks, aperiodic servers were introduced. An aperiodic server acts as a sporadic task and periodically reserves a quota for serving aperiodic jobs. The use of aperiodic servers in systems with caches is unsafe, because aperiodic servers do not take into account the indirect cache-related preemption delays that the execution of aperiodic jobs might impose on lower-priority sporadic tasks, thus jeopardizing their safety. To solve this problem, we propose an enhancement to the aperiodic server that we call a Cache Delay Server. Here, each lower-priority sporadic task is assigned a delay quota to accommodate the cache-related preemption delay imposed by the execution of aperiodic jobs. Aperiodic jobs are allowed to execute at their assigned server priority only when all active lower-priority sporadic tasks have a sufficient delay quota to accommodate them. Simulation results demonstrate that a Cache Delay Server ensures the safety of sporadic tasks while providing acceptable Quality-of-Service for aperiodic jobs. We propose an Integer Linear Program based approach to calculate delay quotas for sporadic tasks within a task set where Cache Delay Servers have been pre-assigned, and then propose algorithms to determine Cache Delay Server characteristics for a given sporadic task set. Finally, we extend the Cache Delay Server concept to multi-core architectures and propose approaches to schedule aperiodic jobs on appropriate Cache Delay Servers. Simulation results demonstrate the effectiveness of all our proposed algorithms in improving aperiodic job response times while maintaining the safety of sporadic task execution.
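The admission rule described above can be sketched as follows: an aperiodic job runs at server priority only while every active lower-priority sporadic task retains enough delay quota to absorb the extra cache-related preemption delay. The function names, quota bookkeeping, and numbers are illustrative assumptions, not the thesis' implementation.

```python
def can_admit(imposed_delay, lower_priority_tasks):
    """Admit an aperiodic job at server priority only if every active lower-priority
    sporadic task can still absorb the cache-related preemption delay it imposes."""
    return all(t["quota"] >= imposed_delay for t in lower_priority_tasks)

def charge(imposed_delay, lower_priority_tasks):
    """Deduct the imposed delay from each affected task's remaining quota."""
    for t in lower_priority_tasks:
        t["quota"] -= imposed_delay

# Hypothetical active tasks with remaining delay quotas (ms).
active = [{"name": "t3", "quota": 0.8}, {"name": "t4", "quota": 0.3}]
print(can_admit(0.2, active))   # True: both quotas cover the delay
charge(0.2, active)
print(can_admit(0.2, active))   # False: t4 can no longer absorb another 0.2 ms
```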
460

Comparison of Anti-Aliasing in Motion

Andersson, Lukas January 2018 (has links)
Background. Aliasing is a problem in every 3D game because current monitor resolutions are not high enough: objects in a 3D world show jagged edges where they should look smooth. This can be reduced by a technique called anti-aliasing. Objectives. The objective of this study is to compare three techniques, fast approximate anti-aliasing (FXAA), subpixel morphological anti-aliasing (SMAA), and temporal anti-aliasing (TAA), in motion, to determine which is a good default for games. Methods. An experiment was run in which 20 people tested a real-time prototype that moved a camera through a scene multiple times with different anti-aliasing techniques. Results. TAA consistently performed best in the tests of picture blur, aliasing, and flickering. SMAA and FXAA were comparable to TAA only in the blur portion of the test and fell behind in all other parts. Conclusions. TAA is a good anti-aliasing technique for avoiding aliasing and flickering while in motion. Blur was expected to be a problem, but the test shows that most participants did not find blur problematic for any of the techniques used.
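As a rough illustration of the post-process family FXAA belongs to, the sketch below runs a single contrast test per pixel and blends detected edge pixels toward their neighbourhood average. Real FXAA additionally searches along the edge direction and operates on perceptual luma; this simplification and its threshold are assumptions for illustration only.

```python
def luma_aa(img):
    """Single simplified anti-aliasing pass over a grayscale image (rows of floats, 0..1):
    pixels whose 4-neighbourhood contrast exceeds a threshold are blended toward the
    neighbourhood average; low-contrast pixels pass through untouched."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            n, s = img[y - 1][x], img[y + 1][x]
            e, wv = img[y][x + 1], img[y][x - 1]
            if max(img[y][x], n, s, e, wv) - min(img[y][x], n, s, e, wv) > 0.25:
                out[y][x] = 0.5 * img[y][x] + 0.5 * (n + s + e + wv) / 4.0
    return out

# A hard vertical edge: left half black, right half white.
img = [[0.0] * 4 + [1.0] * 4 for _ in range(4)]
print(luma_aa(img)[1])  # the two pixels straddling the edge move toward mid-grey
```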
