1

Pulse shape discrimination studies in liquid argon for the DEAP-1 detector

Lidgard, Jeffrey Jack 25 April 2008
A detector with a target mass of 7 kg of liquid argon was designed, constructed, and operated at Queen's University. This detector is a scaled model for the DEAP project, a step toward a tonne-scale argon detector to search for WIMP candidates for the as yet undetected dark matter of the universe. The primary purpose of the scaled detector was to measure the achievable level of background-event rejection using pulse shape discrimination, which is based on the scintillation timing properties of liquid argon. After refinement of the apparatus and components, the detector was in operation from 20 August until 16 October 2007, before being moved to its current location in SNOLAB. During this time, a population of 31 million well-tagged gamma events was collected, of which 15.8 million were in the energy range of interest for calibration. This population was sufficient to demonstrate the rejection of background events by pulse shape discrimination at the level of 6.3 × 10⁻⁸. An analytical model was constructed, based on the scintillation processes and detector response, and was investigated sufficiently to make predictions of further achievable discrimination. / Thesis (Master, Physics, Engineering Physics and Astronomy) -- Queen's University, 2008-04-25 01:39:39.121
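For concreteness, below is a minimal Python sketch of the prompt-fraction (Fprompt) discriminator on which this kind of pulse-shape analysis is built: nuclear recoils deposit most of their scintillation light in the fast component, gammas in the slow one. The time constants, window lengths, and sampling step are illustrative assumptions, not the values used in the DEAP-1 analysis.

```python
import numpy as np

def fprompt(waveform, dt_ns, prompt_ns=150.0, total_ns=10000.0):
    """Fraction of scintillation light arriving in the prompt window.

    Liquid argon's fast (~ns) singlet and slow (~1.6 us) triplet
    components give nuclear recoils a high prompt fraction and gamma
    (electron-recoil) events a low one. Window lengths here are
    illustrative assumptions only.
    """
    n_prompt = int(prompt_ns / dt_ns)
    n_total = int(total_ns / dt_ns)
    trace = waveform[:n_total]
    return trace[:n_prompt].sum() / trace.sum()

# Toy electron-recoil-like waveform: ~30% of the light in the fast component.
dt = 4.0                                            # ns per sample (assumed)
t = np.arange(0.0, 10000.0, dt)
pulse = (0.3 / 6.0) * np.exp(-t / 6.0) + (0.7 / 1600.0) * np.exp(-t / 1600.0)
print(f"Fprompt = {fprompt(pulse, dt):.3f}")
```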
2

Alpha backgrounds in the DEAP dark matter search experiment

Pollmann, Tina 10 August 2012
One of the pressing concerns in Dark Matter detection experiments is ensuring that the potential signal from exceedingly rare Dark Matter interactions is not obscured by background from interactions with more common particles. This work focuses on the ways in which alpha particles from primordial isotopes in the DEAP detector components can cause background events in the region of interest for the Dark Matter search, based on both Monte Carlo simulations and data from the DEAP-1 prototype detector. The DEAP experiment uses liquid argon as a target for Dark Matter interactions and relies on the organic wavelength-shifting dye tetraphenyl butadiene (TPB) to shift the UV argon scintillation light to the visible range. The light yield and pulse shape of alpha-particle-induced scintillation of TPB, which are essential input parameters for the simulations, were experimentally determined. An initial mismatch between simulated and measured background spectra could be explained by a model of geometric background events, which was experimentally confirmed and informed the design of certain parts of the DEAP-3600 detector, currently under construction. Modification of the DEAP-1 detector geometry based on this model led to reduced background rates. The remaining background was well described by the simulated spectra, and competitive limits on the contamination of acrylic with primordial isotopes were obtained. Purity requirements for the DEAP-3600 detector components were based on this work. The design and testing of a novel large-area TPB deposition source, which will be used to make TPB coatings for the DEAP-3600 detector, is described. / Thesis (Ph.D, Physics, Engineering Physics and Astronomy) -- Queen's University, 2012-08-09 13:12:52.26
3

Performance Evaluation of Signal Conditioning Boards and Simulation of the Impact of Electronics Noise on the DEAP-3600 Dark Matter Detector

Chouinard, Rhys Timon Unknown Date
No description available.
4

Improving speed and image quality of image-based rendering

Ortiz Cayón, Rodrigo 03 February 2017
Traditional photo-realistic rendering requires intensive manual and computational effort to create scenes and render realistic images. Thus, creation of content for high-quality digital imagery has been limited to experts, and highly realistic rendering still requires significant computational time. Image-Based Rendering (IBR) is an alternative with the potential of making high-quality content creation and rendering applications accessible to casual users, since they can generate high-quality photo-realistic imagery without the limitations mentioned above. We identified three important shortcomings of current IBR methods. First, each algorithm has different strengths and weaknesses, depending on 3D reconstruction quality and scene content, and often no single algorithm offers the best image quality everywhere in the image. Second, such algorithms present strong artifacts when rendering partially reconstructed or missing objects. Third, most methods still produce significant visual artifacts in image regions where reconstruction is poor. Overall, this thesis addresses these shortcomings of IBR for both speed and image quality, offering novel and effective solutions based on selective rendering, learning-based model substitution, and depth error prediction and correction.
5

Optimization by means of metaheuristics in Python using the DEAP library

Kesler, René January 2019
This thesis deals with optimization by means of metaheuristics, which are used for complicated engineering problems that cannot be solved by classical methods of mathematical programming. First, the selected metaheuristics are described: simulated annealing, particle swarm optimization, and a genetic algorithm; they are then compared using test functions. These algorithms are implemented in the Python programming language using the DEAP package, which is also described in this thesis. The algorithms are then applied to optimize the design parameters of a heat storage unit.
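As an illustration of the workflow the abstract describes, here is a minimal genetic-algorithm sketch using the DEAP package's standard toolbox API. The two-variable objective function and all GA parameters are hypothetical stand-ins, not the thesis's heat storage model.

```python
import random
from deap import base, creator, tools, algorithms

# Stand-in objective to minimize; the thesis's real objective is the
# heat storage unit design model, which is not reproduced here.
def objective(ind):
    x, y = ind
    return ((x - 1.0) ** 2 + (y + 2.0) ** 2,)  # DEAP fitnesses are tuples

creator.create("FitnessMin", base.Fitness, weights=(-1.0,))
creator.create("Individual", list, fitness=creator.FitnessMin)

toolbox = base.Toolbox()
toolbox.register("attr", random.uniform, -5.0, 5.0)
toolbox.register("individual", tools.initRepeat, creator.Individual,
                 toolbox.attr, n=2)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("evaluate", objective)
toolbox.register("mate", tools.cxBlend, alpha=0.5)
toolbox.register("mutate", tools.mutGaussian, mu=0.0, sigma=0.5, indpb=0.2)
toolbox.register("select", tools.selTournament, tournsize=3)

pop = toolbox.population(n=50)
pop, _ = algorithms.eaSimple(pop, toolbox, cxpb=0.6, mutpb=0.3,
                             ngen=40, verbose=False)
best = tools.selBest(pop, k=1)[0]
print(best, best.fitness.values)
```

The same toolbox structure accommodates the other metaheuristics the thesis compares by swapping the registered operators and the driving loop.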
6

Emotion Recognition Using Deep Convolutional Neural Network with Large Scale Physiological Data

Sharma, Astha 25 October 2018
Classification of emotions plays a very important role in affective computing and has real-world applications in fields as diverse as entertainment, medicine, defense, retail, and education. These applications include video games, virtual reality, pain recognition, lie detection, classification of Autism Spectrum Disorder (ASD), analysis of stress levels, and determining attention levels. This vast range of applications motivated us to study automatic emotion recognition, which can be done using facial expression, speech, and physiological data. A person's physiological signals, such as heart rate and blood pressure, are deeply linked with their emotional states and can be used to identify a variety of emotions; however, they are less frequently explored for emotion recognition than audiovisual signals such as facial expression and voice. In this thesis, we investigate a multimodal approach to emotion recognition using physiological signals, showing how these signals can be combined and used to accurately identify a wide range of emotions such as happiness, sadness, and pain. Our investigation makes use of deep convolutional neural networks, the latest state of the art in supervised learning, on two publicly available databases, namely DEAP and BP4D+, and we also detail comparisons between gender-specific models of emotion. We achieved an average emotion recognition accuracy of 98.89% on BP4D+; on DEAP, accuracy is 86.09% for valence, 90.61% for arousal, 90.48% for liking, and 90.95% for dominance. We also compare our results to the current state of the art, showing the superior performance of our method.
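For concreteness, a minimal sketch of a 1D convolutional classifier over windowed multichannel physiological data. The input shape (here 10 s windows of 40 channels at 128 Hz, roughly matching common DEAP preprocessing), the layer sizes, and the binary valence target are illustrative assumptions, not the architecture reported in the thesis.

```python
import tensorflow as tf

# Hypothetical window shape: 1280 time steps x 40 physiological channels.
n_samples, n_channels, n_classes = 1280, 40, 2   # e.g. low/high valence

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_samples, n_channels)),
    tf.keras.layers.Conv1D(32, kernel_size=7, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, validation_split=0.2, epochs=30)
```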
7

Low Complexity Hybrid Precoding and Combining for Millimeter Wave Systems

Alouzi, Mohamed 27 April 2023
The evolution to 5G and its use cases is driven by data-intensive applications requiring higher data rates over wireless channels. This has led to research into massive multiple-input multiple-output (MIMO) techniques and the use of the millimeter wave (mm wave) band. Because of the higher path loss at mm wave frequencies and the poor scattering nature of the mm wave channel (fewer paths exist), this thesis first proposes the use of the sphere decoding (SD) algorithm and the semidefinite relaxation (SDR) detector to improve the performance of a uniform planar array (UPA) hybrid beamforming technique with large antenna arrays. The second contribution of this thesis is a low-complexity algorithm using gradient descent for hybrid precoding and combining designs in mm wave systems. We also present a low-complexity algorithm for hybrid precoding and combining that uses momentum gradient descent and Newton's method, which makes the objective function converge faster than other iterative methods in the literature. The two proposed low-complexity algorithms do not depend on the antenna array geometry, unlike the orthogonal matching pursuit (OMP) hybrid precoding/combining approach. Moreover, these algorithms allow hybrid precoders/combiners to achieve performance very close to that of the optimal unconstrained digital precoders and combiners within a small number of iterations. Simulation results verify that the proposed scheme using momentum gradient descent and Newton's method outperforms previous methods in the literature in terms of bit error rate (BER) and achievable spectral efficiency, with lower complexity. Finally, an iterative algorithm that directly converts the hybrid precoding/combining in the full array (FA) architecture to the subarray (SA) architecture, called direct conversion of iterative hybrid precoding/combining from FA to SA (DCIFS), is proposed and examined. The proposed DCIFS design takes the matrix structure of the analog and baseband precoding and combining into account in the design derivation; it does not depend on the antenna array geometry, unlike other techniques such as the OMP approach, nor does it assume any other constraints. Simulation results show that the proposed DCIFS hybrid design, compared to its FA counterpart, can provide spectral efficiency close to optimal while maintaining very low complexity, and better spectral efficiency than the conventional SA hybrid design with the same hardware complexity.
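To make the gradient-descent idea concrete, here is a minimal sketch of alternating least-squares/gradient updates for the hybrid factorization F_opt ≈ F_RF F_BB with unit-modulus (phase-shifter) entries in F_RF. This is a generic illustration, not the thesis's exact algorithm (which adds momentum and Newton steps); the dimensions, step size, and the random stand-in for F_opt are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, Nrf, Ns = 64, 4, 2   # transmit antennas, RF chains, streams (assumed)

# Stand-in for the optimal unconstrained digital precoder; in a real
# system it would come from the SVD of the estimated mm wave channel.
A = rng.standard_normal((Nt, Ns)) + 1j * rng.standard_normal((Nt, Ns))
Fopt, _ = np.linalg.qr(A)

# Analog precoder: random phases, unit-modulus entries (phase shifters).
Frf = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, (Nt, Nrf)))

mu = 0.01  # gradient step size (assumed)
for _ in range(200):
    Fbb = np.linalg.pinv(Frf) @ Fopt        # least-squares baseband stage
    resid = Fopt - Frf @ Fbb                # factorization residual
    Frf = Frf + mu * resid @ Fbb.conj().T   # descent step on ||resid||_F^2
    Frf = np.exp(1j * np.angle(Frf))        # project to unit modulus

Fbb = np.linalg.pinv(Frf) @ Fopt
Fbb *= np.sqrt(Ns) / np.linalg.norm(Frf @ Fbb)   # transmit power constraint
print("factorization error:", np.linalg.norm(Fopt - Frf @ Fbb))
```

Because the updates operate only on the matrix factors, nothing in this loop depends on the antenna array geometry, which is the property the abstract contrasts with OMP-based designs.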
8

A Performance Evaluation of Low Pressure Carbon Dioxide Discharge Test

Lee, Sung-Mo 30 April 2004
For gaseous fire extinguishing systems, the maximum percent of agent in pipe, i.e., the ratio of agent liquid volume to pipe volume, should be determined for proper system design and performance by confirming the maximum length of pipe run for which the flow calculation methods can predict the discharge pressures and agent concentration. The purpose of this paper is to determine the ability and limitations of the NFPA 12 flow calculation methodology to identify the maximum percent of agent in pipe by conducting full-scale low-pressure CO2 system discharge tests. A total of twenty low-pressure CO2 system discharge tests were conducted under different conditions. If none of the measured pressures at the three node points of the pipe runs and none of the measured CO2 concentrations in the test enclosures deviated from the predicted values of the computerized flow calculations by more than ±10 percent, the tests were judged acceptable. The test results showed that a low-pressure CO2 system with a pipe run exceeding 492 ft (150 m) was not likely to achieve the concentration required for fire extinguishment within the specified discharge time, even though the pipe network was installed in compliance with calculations based on the pressure drop equation in NFPA 12.
9

Data Driven Energy Efficiency of Ships

Taspinar, Tarik January 2022
Decreasing the fuel consumption, and thus the greenhouse gas emissions, of vessels has emerged as a critical topic for both ship operators and policy makers in recent years. The speed of a vessel has long been recognized as having the highest impact on fuel consumption, and proposed measures such as "speed optimization" and "speed reduction" are ongoing discussion topics at the International Maritime Organization. The aim of this study is to develop a speed optimization model using time-constrained genetic algorithms (GA). This thesis also presents the application of machine learning (ML) regression methods to build a model for predicting the fuel consumption of vessels. The local outlier factor algorithm is used to eliminate outliers in the prediction features. In the boosting and tree-based regression methods, an overfitting problem was observed after hyperparameter tuning, so an early stopping technique was applied to the overfitted models. In this study, speed was also found to be the most important feature for the fuel consumption prediction models. GA evaluation results showed that random modifications to the default speed profile can increase GA performance, and thus fuel savings, more than constant speed limits during voyages. The GA results also indicate that using high crossover rates and low mutation rates can increase fuel savings. Further research that includes fuel and bunker prices is recommended to determine fuel efficiency more accurately.
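As a sketch of the outlier-filtering and early-stopping steps the abstract mentions, using scikit-learn; the synthetic features standing in for voyage data (speed, draft, wind) and all hyperparameters are assumptions, not the thesis's dataset or tuned values.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.default_rng(0)
# Hypothetical stand-in features: speed, draft, wind speed -> fuel use.
X = rng.normal(size=(1000, 3))
y = 2.0 * X[:, 0] ** 3 + X[:, 1] + rng.normal(scale=0.1, size=1000)

# Drop rows flagged by the local outlier factor, as in the abstract
# (fit_predict returns +1 for inliers, -1 for outliers).
mask = LocalOutlierFactor(n_neighbors=20).fit_predict(X) == 1
X, y = X[mask], y[mask]

# Boosted trees with early stopping on an internal validation split,
# mitigating the overfitting the abstract reports after tuning.
model = HistGradientBoostingRegressor(
    max_iter=500, early_stopping=True, validation_fraction=0.2,
    n_iter_no_change=10, random_state=0,
)
model.fit(X, y)
print("boosting iterations used:", model.n_iter_)
```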
