1

Development and use of a passive technique for measuring nitrogen dioxide in the background atmosphere

Gair, Amanda J. January 1989
No description available.
2

A Performance Monitoring Tool Suite for Software and SoC On-Chip Bus: Using a 3D Graphics SoC as an Example

Chang, Yi-Hao 19 March 2012
Modern SoC designs involve both software and hardware, so performance bottlenecks may occur in either, or in both. Present performance monitoring tools usually evaluate only software or only hardware performance, which is not sufficient for today's SoC designs. Furthermore, due to the increasing complexity of user requirements, an embedded OS such as Linux is introduced to manage the limited hardware resources for complicated applications. However, this also makes performance monitoring harder: the memory address space is divided into user space and kernel space with different capabilities to access system resources, so a user-space application cannot retrieve system performance information without kernel or hardware support. In this thesis, we propose a performance monitoring tool suite capable of analyzing the performance of user-space applications, kernel-space device drivers, and the AMBA AHB bus for an SoC running Linux. The Performance Monitoring Tool Suite (PMTS) includes a Program Monitor (PM) to monitor the execution time of software, and a Bus Utilization Monitor (BUM), Bus Contention Monitor (BCM), and Bus Global Monitor (BGM) to monitor bus utilization, contention, etc. PMTS can help users find the performance bottlenecks of both software and hardware more easily. We have applied PMTS to an FPGA development board and found the hardware/software performance bottlenecks of the designs. The experimental results show that adding PMTS does not impact the critical path of the SoC.
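As an illustration of the statistics the bus monitors report in hardware, utilization and contention can also be computed offline from a transaction trace. The trace format below (start cycle, end cycle, master id) is an assumption for this sketch, not the actual PMTS record layout.

```python
from collections import Counter

def bus_stats(transactions, total_cycles):
    """transactions: list of (start_cycle, end_cycle, master_id) tuples."""
    busy = set()          # cycles with at least one active transfer
    active = Counter()    # cycle -> number of masters active in that cycle
    for start, end, _master in transactions:
        for cycle in range(start, end):
            busy.add(cycle)
            active[cycle] += 1
    utilization = len(busy) / total_cycles
    # contention: share of cycles in which more than one master was active
    contention = sum(1 for n in active.values() if n > 1) / total_cycles
    return utilization, contention

trace = [(0, 4, "cpu"), (2, 6, "dma"), (8, 10, "cpu")]   # assumed trace
util, cont = bus_stats(trace, total_cycles=12)
print(f"utilization={util:.2f} contention={cont:.2f}")
```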
3

The Tell-Tale Cardiac Thin Filament Model: An Investigation into the Dynamics of Contraction and Relaxation

Williams, Michael Ryan January 2017
The correct function of cardiac sarcomeric proteins allows people to maintain quality of life. However, mutations of the cardiac sarcomeric proteins can result in remodeling of the heart, which typically results in death. I present a full atomistic cardiac thin filament model that I have developed, together with three studies that I conducted at the University of Arizona while pursuing my doctoral degree in chemistry. The goal was to develop a model able to capture the effects of mutations on the thin filament proteins. First, I present the long process of developing the model, which continues to evolve as new information becomes available. Second, I present the study of two mutants, the troponin T R92L mutant and the tropomyosin D230N mutant. Molecular dynamics was used to simulate the wild-type and mutant versions of the model, which visualized the change of interaction between the tropomyosin and troponin, specifically at the overlap region. Third, I present the study of calcium release, the "gatekeeper" of cardiac contraction. Steered molecular dynamics was utilized to find a previously unseen molecular mechanism that alters the rate of calcium release depending on the mutant. Fourth, I present the study of the mechanism of the tropomyosin transition across the actin filament, in which a longitudinal transition is favored. These studies provide an atomistic-level understanding of the cardiac thin filament, as well as of the mechanisms by which the mutations disrupt the natural functions of the sarcomeric proteins. The results can provide new insight into how the effects of disease-causing mutations can be mitigated, potentially extending the lives of people with these conditions.
4

Accuracy and precision of bedrock surface prediction using geophysics and geostatistics.

Örn, Henrik January 2015
In underground construction and foundation engineering, uncertainties associated with subsurface properties are inevitable. Site investigations are expensive to perform, but a limited understanding of the subsurface may cause major problems, which often lead to an unexpected increase in the overall cost of the construction project. This study aims to optimize the pre-investigation program so that as much correct information as possible is obtained from a limited input of resources, making it as cost-effective as possible. To optimize site investigation using soil-rock sounding, three different sampling techniques, a varying number of sample points, and two different interpolation methods (inverse distance weighting and point kriging) were tested on four modeled reference surfaces. The accuracy of rock surface predictions was evaluated using a 3D gridding and modeling computer software (Surfer 8.02®). Samples with continuously distributed data, resembling profile lines from geophysical surveys, were used to evaluate how such data could improve the accuracy of the prediction compared to adding additional sampling points. The study explains the correlation between the number of sampling points and the accuracy of the prediction obtained using different interpolators. Most importantly, it shows how continuous data significantly improve the accuracy of rock surface predictions, and it therefore concludes that geophysical measurements should be combined with traditional soil-rock sounding to optimize the pre-investigation program.
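Of the two interpolators compared, inverse distance weighting is the simpler. A minimal sketch follows, with the power exponent and sample values chosen for illustration rather than taken from the study.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted prediction of bedrock level.
    power=2 is a common default, not necessarily the study's exponent."""
    # pairwise distances between every query point and every sample point
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power           # nearer samples weigh more
    return (w * z_known).sum(axis=1) / w.sum(axis=1)

# illustrative soundings: coordinates (m) and depth to bedrock (m)
pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
depth = np.array([2.0, 5.0, 3.0])
print(idw(pts, depth, np.array([[2.0, 2.0]])))   # ~[2.4]
```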
5

A Comparison of Observation Systems for Monitoring Engagement in an Intervention Program

Linden, April D. 05 1900
The measurement of engagement, or the interaction of a person with their environment, is an integral part of assessing the quality of an intervention program for young children diagnosed with autism spectrum disorder. Researchers and practitioners can and do measure engagement in many ways at the individual and group level. The purpose of this methodological study was to compare three commonly used recording systems: individual partial interval, group momentary time sampling, and group partial interval. These recording methods were compared across three classes of engagement (social, instructional, and non-instructional) in a clinical setting with children with autism. Results indicate that the group measurement systems were not sensitive to individual changes in engagement when child behaviors were variable. The results are discussed in the context of behavior-analytic conceptual systems and the relative utility and future research directions for behavior-analytic practice and research with young children in group settings.
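For a single hypothetical engagement record, the difference between partial-interval and momentary-time-sampling scores can be made concrete. The interval length and episode times below are invented for illustration.

```python
def partial_interval(events, interval, session_len):
    """Fraction of intervals in which engagement occurred at any point.
    events: hypothetical (start, end) engagement episodes, in seconds."""
    n = session_len // interval
    hits = [any(s < (i + 1) * interval and e > i * interval for s, e in events)
            for i in range(n)]
    return sum(hits) / n

def momentary_time_sampling(events, interval, session_len):
    """Fraction of interval endpoints at which engagement was occurring."""
    n = session_len // interval
    hits = [any(s <= (i + 1) * interval <= e for s, e in events)
            for i in range(n)]
    return sum(hits) / n

episodes = [(0, 12), (25, 28), (40, 55)]   # invented engagement episodes
for score in (partial_interval, momentary_time_sampling):
    print(score.__name__, score(episodes, interval=10, session_len=60))
```

On this invented record the two systems already disagree noticeably (0.83 versus 0.50), which is the kind of sensitivity difference the study examines.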
6

Experiential Sampling for Object Detection in Video

Paresh, A 05 1900
The problem of object detection deals with determining whether an instance of a given class of object is present or not. There are robust, supervised-learning-based algorithms available for object detection in an image. These image-based object detectors use characteristics learnt from the training samples to find object and non-object regions. The characteristics used are such that the detectors work under a variety of conditions and hence are very robust. Object detection in video can be performed by using such a detector on each frame of the video sequence. This approach checks for the presence of an object around each pixel, at different scales. Such a frame-based approach completely ignores the temporal continuity inherent in the video: the detector declares the presence of the object independent of what has happened in past frames, and various visual cues such as motion and color, which give hints about the location of the object, are not used. The current work is aimed at building a generic framework for using a supervised-learning-based image object detector for video that exploits temporal continuity and the presence of various visual cues. We use temporal continuity and visual cues to speed up detection and improve detection accuracy by considering past detection results. We propose a generic framework, based on Experiential Sampling [1], which considers temporal continuity and visual cues to focus on a relevant subset of each frame. We determine some key positions in each frame, called attention samples, and object detection is performed only at scales with these positions as centers. These key positions are statistical samples from a density function that is estimated based on various visual cues, past experience and temporal continuity. This density estimation is modeled as a Bayesian filtering problem and is carried out using Sequential Monte Carlo methods (also known as particle filtering), where a density is represented by a weighted sample set. The experiential sampling framework is inspired by Neisser's perceptual cycle [2] and Itti-Koch's static visual attention model [3]. In this work, we first use Basic Experiential Sampling as presented in [1] for object detection in video and show its limitations. To overcome these limitations, we extend the framework to effectively combine top-down and bottom-up visual attention phenomena. We use the learning-based detector's response, which is a top-down cue, along with visual cues to improve the attention estimate. To effectively handle multiple objects, we maintain a minimum number of attention samples per object. We propose to use motion as an alert cue to reduce the delay in detecting new objects entering the field of view. We use an inhibition map to avoid revisiting already attended regions. Finally, we improve detection accuracy by using a particle-filter-based detection scheme [4], also known as Track Before Detect (TBD). In this scheme, we compute the likelihood of the presence of the object based on current and past frame data. This likelihood is shown to be approximately equal to the product of average sample weights over past frames. Our framework results in a significant reduction in the overall computation required by the object detector, with an improvement in accuracy while retaining its robustness. This enables the use of learning-based image object detectors in real-time video applications which otherwise are computationally expensive.
We demonstrate the usefulness of this framework for frontal face detection in video, using the Viola-Jones frontal face detector [5] and color and motion visual cues. We show results for various cases such as sequences with a single object, multiple objects, distracting background, moving camera, changing illumination, objects entering/exiting the frame, crossing objects, objects with pose variation, and sequences with scene change. The main contributions of the thesis are: i) we give an experiential sampling formulation for object detection in video, precisely defining concepts such as attention point and attention density that are vague in [1]; ii) we combine the detector's response with visual cues to estimate attention, inspired by the combination of top-down and bottom-up attention maps in visual attention models; to the best of our knowledge, this is used for the first time for object detection in video; iii) in the case of multiple objects, we highlight the problem with sample-based density representation and solve it by maintaining a minimum number of attention samples per object; iv) for objects first detected by the learning-based detector, we propose a TBD scheme for their subsequent detections alongside the learning-based detector, which improves accuracy compared to using the learning-based detector alone. The thesis is organized as follows. Chapter 1 presents a brief survey of related work and defines the problem. Chapter 2 gives an overview of the biological models that motivated this work. Chapter 3 gives the experiential sampling formulation of previous work [1], shows results, and discusses its limitations. Chapter 4, on Enhanced Experiential Sampling, suggests enhancements to overcome the limitations of basic experiential sampling and proposes the track-before-detect scheme to improve detection accuracy. Chapter 5 concludes the thesis and gives possible directions for future work. Appendix A describes the video database used in this thesis, and Appendix B lists commonly used abbreviations and notations.
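A stripped-down sketch of the predict-weight-resample loop behind such sequential Monte Carlo attention estimation follows; the Gaussian cue model and all parameters below are invented stand-ins for the thesis's combination of visual cues and detector response.

```python
import numpy as np

rng = np.random.default_rng(0)

def experiential_step(samples, cue_likelihood, motion_std=5.0):
    """One predict-weight-resample iteration over 2D attention samples."""
    # predict: diffuse samples to model temporal continuity between frames
    samples = samples + rng.normal(0.0, motion_std, samples.shape)
    # weight: evaluate the cue-based likelihood at each sample position
    w = np.array([cue_likelihood(s) for s in samples])
    w /= w.sum()
    # resample: draw a new set concentrated where attention density is high
    return samples[rng.choice(len(samples), size=len(samples), p=w)]

# toy cue: one salient region centred at (60, 40); a stand-in for the
# fused motion/color/detector-response cues of the thesis
cue = lambda p: np.exp(-np.sum((p - np.array([60.0, 40.0])) ** 2) / 200.0)

samples = rng.uniform(0.0, 100.0, size=(50, 2))   # attention samples in a frame
for _ in range(10):
    samples = experiential_step(samples, cue)
print(samples.mean(axis=0))   # mean drifts toward the salient region
```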
7

Utveckling av beslutsstöd för kreditvärdighet [Development of Decision Support for Creditworthiness]

Arvidsson, Martin, Paulsson, Eric January 2013
The aim is to develop a new decision-making model for credit loans. The model will be specific to credit applicants of the OKQ8 bank, because it is based on data from the client's (the bank's) earlier credit applicants. The final model is functional enough to use information about a new applicant as input and predict whether the applicant belongs to the good-risk or the bad-risk group based on the applicant's properties. The prediction may then lay the foundation for the decision to grant or deny a credit loan. Because of the skewed distribution of the response variable, different sampling techniques are evaluated. These include oversampling with SMOTE, random undersampling, and pure oversampling in the form of scalar weighting of the minority class. It is shown that the predictive quality of a classifier is affected by the distribution of the response, and that the oversampled information is not too redundant. Three classification techniques are evaluated. Our results suggest that a multi-layer neural network with 18 neurons in a hidden layer, equipped with an ensemble technique called boosting, gives the best predictive power. The most successful model is based on a feed-forward structure and trained with a variant of back-propagation using conjugate-gradient optimization. Two other models with good prediction quality are developed using logistic regression and a decision tree classifier, but they do not reach the level of the network. However, the results of these models are used to answer the question of which customer properties are important when determining credit risk. Two examples of important customer properties are income and the number of earlier credit reports of the applicant. Finally, we use the best classification model to predict the outcome for a set of applicants declined by the existing filter. The results show that the network model accepts over 60 % of the applicants who had previously been denied credit. This may indicate that the client's suspicion that the existing model is too restrictive is in fact justified.
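A minimal sketch of the resampling comparison described above, assuming scikit-learn and imbalanced-learn are available; the synthetic data stands in for the OKQ8 applicant records, and logistic regression stands in for the boosted 18-neuron network to keep the example short.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler

# synthetic stand-in for the bank's records: ~10 % minority (bad-risk) class
X, y = make_classification(n_samples=5000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

strategies = {"none": None,
              "SMOTE": SMOTE(random_state=0),
              "undersample": RandomUnderSampler(random_state=0)}
for name, sampler in strategies.items():
    Xr, yr = (X_tr, y_tr) if sampler is None else sampler.fit_resample(X_tr, y_tr)
    clf = LogisticRegression(max_iter=1000).fit(Xr, yr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```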
8

An Evaluation of a 3D Sampling Technique and LiDAR for the Determination of Understory Vegetation Density Levels in Pine Plantations

Clarkson, Matthew Thomas 05 May 2007
A three-dimensional sampling technique was used to compare field understory conditions in southeastern Louisiana, measured with a laser range finder at three height levels (0.5 m, 1.0 m, and 1.5 m), to LiDAR-generated understory conditions to determine if a relationship existed. A similar comparison was made between densitometer crown closure measurements and understory LiDAR vegetation counts. A comparison between overstory LiDAR counts and understory LiDAR counts was also performed. LiDAR and understory counts exhibited a significant linear relationship but were poorly correlated at each sample level (Level-1 R² = 0.34–0.38, Level-2 R² = 0.36–0.43). The Level-3 LiDAR slope coefficient was non-significant. The crown closure versus understory linear model did not produce any significant results. The overstory LiDAR versus understory LiDAR model produced a moderate correlation (R² = 0.5226) and was significant. The process of relating LiDAR points to understory conditions was not repeatable, even in the same geographic region.
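The reported figures are ordinary least-squares fits; a sketch of recovering the slope, its significance, and R² from paired counts follows (the numbers below are hypothetical, not the study's data).

```python
import numpy as np
from scipy import stats

# hypothetical paired observations: field understory counts vs LiDAR returns
field = np.array([12, 30, 45, 22, 9, 51, 38, 17])
lidar = np.array([20, 55, 70, 41, 15, 88, 66, 30])

fit = stats.linregress(lidar, field)   # ordinary least squares
print(f"slope={fit.slope:.3f}  p={fit.pvalue:.4f}  R2={fit.rvalue ** 2:.2f}")
```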
9

Suivi par capteurs passifs des polluants émergents dans les eaux de surface en contexte urbain / Monitoring emerging pollutants in surface waters using in-situ sampling devices in an urban context

Villanueva, Jessica Denila 08 July 2013
The study aimed to assess the quality of surface water under differing climate conditions and management practices. Three sites were chosen: (1) the Jalle River and (2) Bordeaux Lac, both in France, and (3) the Pasig River in the Philippines. The French sites have rainfall and run-off collectors that discharge directly into the water bodies. The Pasig River, on the other hand, serves as a waste collector, as waste management and treatment plants are lacking. During the measurement campaigns it was possible to follow the impact of climate variability (rainfall) and of random events on the chemical quality of the water: trace metals and organics (pesticides, herbicides, pharmaceuticals and drugs) were measured, combining conventional and passive sampling approaches. Mass fluxes were obtained in order to calculate the pollution transport. The physico-chemical properties of the water and the particle characteristics, together with statistical analyses, helped explain the behavior of the measured molecules and describe the hydrological system of urban and estuarine surface waters in relation to contrasting climate variability.
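The mass flux mentioned in the summary is, at its simplest, concentration times discharge; a one-function sketch under assumed units (µg/L and m³/s), with illustrative values.

```python
def mass_flux_g_per_s(conc_ug_per_L, discharge_m3_per_s):
    """Instantaneous pollutant flux: C [ug/L] x Q [m3/s] -> g/s.
    1 m3 = 1000 L and 1e6 ug = 1 g, so the unit factors reduce to 1e-3."""
    return conc_ug_per_L * discharge_m3_per_s * 1e-3

# illustrative values: 0.25 ug/L of a pesticide at 3.2 m3/s discharge
print(mass_flux_g_per_s(0.25, 3.2))   # 0.0008 g/s
```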
10

A critical examination of translation and evaluation norms in Russian Bible translation

Wehrmeyer, Jennifer Ella 01 January 2003
This research aimed to determine whether the rejection by Russian Orthodox Church leaders of recent translations of the Bible into Russian could be ascribed to a conflict of Russian and Western translation norms. Using Lefevere's (1992) notion of systems, the study compared the norms of Russian Bible translations, Western Bible translation and Russian literary translation, as well as those of a segment of the target audience, to determine the extent of their compatibility with each other and with the translations in question. The results showed that the recent translations did reflect the norms of Western Bible translation, but that these were not atypical of norms for previous Russian and Slavonic translations, nor for the norms of Russian literary translation. However, the results also showed that in practice target audience norms mirrored those of the Russian Orthodox Church, resulting in a similar rejection of the newer translations. / Linguistics / M.A. (Linguistics)
