121

EFFECT OF ANCILLA LOSSES ON FAULT-TOLERANT QUANTUM ERROR CORRECTION IN THE [[7,1,3]] STEANE CODE

Nawaf, Sameer Obaid 01 December 2013 (has links)
Fault-tolerant quantum error correction is a procedure with the property that if one gate in the procedure fails, the failure causes at most one error in the output qubits of the encoded block. Quantum computing is based on two-level quantum systems (qubits); however, most physical systems are built from subspaces with more than two levels. Imperfect control and environmental interactions in these systems lead to leakage faults: errors that couple states inside the code subspace to states outside it. Loss errors are one example of a leakage fault. Because the fault-tolerant procedure is designed to handle Pauli errors, it may fail to recognize a leakage fault, and a single leakage fault can then disrupt the fault-tolerant technique. In this thesis we investigate the effect of ancilla losses on fault-tolerant quantum error correction in the [[7,1,3]] Steane code. We prove that both the Shor and Steane methods remain fault tolerant when loss errors occur.
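As background for the code discussed above (a generic illustration, not the thesis's construction), the [[7,1,3]] Steane code's stabilizer generators can be read off the parity-check matrix of the classical [7,4] Hamming code; the short Python sketch below builds the six generators and checks that they commute pairwise.

```python
import numpy as np

# Parity-check matrix of the classical [7,4] Hamming code; its rows give the
# supports of both the X-type and the Z-type stabilizer generators of the
# [[7,1,3]] Steane code.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# Symplectic (x|z) representation: X-type generators act with X on each row's
# support, Z-type generators act with Z on the same supports.
x_gens = [(row, np.zeros(7, dtype=int)) for row in H]
z_gens = [(np.zeros(7, dtype=int), row) for row in H]
gens = x_gens + z_gens

def commute(g1, g2):
    """Two Pauli operators commute iff their symplectic product is 0 mod 2."""
    (x1, z1), (x2, z2) = g1, g2
    return (x1 @ z2 + z1 @ x2) % 2 == 0

# All six generators must commute pairwise for the stabilizer group to be valid.
assert all(commute(a, b) for a in gens for b in gens)
print("6 stabilizer generators, all pairwise commuting -> [[7,1,3]] code")
```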
122

Determination of the accuracy of a gravimetric geoid

Ismail, Zahra 09 May 2016 (has links)
Determining geoid models with centimetric precision is one of the main goals of several research groups. One of the most widely used methods for computing a geoid model is the Remove-Compute-Restore procedure using the residual terrain model. This three-step method combines information at short, medium, and long wavelengths through Stokes' integral. At each step we identify the error sources and their influence on the precision of the computed geoid. We focus mainly on the terrain correction in the first step (the remove) and on estimating the precision of Stokes' integration in the second step (the compute). The terrain correction removes the high frequencies of the gravimetric signal through a computation procedure defined within the Remove-Compute-Restore framework. We test the different parameters to choose values consistent with a precision of one centimetre, in particular the small and large integration radii and the influence of the DTM resolution. We also study the Stokes integration step, restricted to the standard, unmodified Stokes function. The parameters of this step are studied by generating synthetic data from EGM2008 (Earth Gravity Model). We estimate the precision of Stokes' integration over different types of terrain.
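For context (standard geodesy background rather than material from the thesis), the compute step evaluates Stokes' formula with the unmodified Stokes kernel; with R the mean Earth radius, γ normal gravity, Δg the gravity anomaly, and ψ the spherical distance between computation and integration points, they can be written as:

```latex
% Geoid undulation from gravity anomalies (Stokes' formula) and the standard
% (unmodified) Stokes kernel.
N = \frac{R}{4\pi\gamma} \iint_{\sigma} \Delta g \, S(\psi)\, d\sigma,
\qquad
S(\psi) = \frac{1}{\sin(\psi/2)} + 1 - 6\sin\frac{\psi}{2} - 5\cos\psi
          - 3\cos\psi \,\ln\!\Big(\sin\frac{\psi}{2} + \sin^2\frac{\psi}{2}\Big).
```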
123

What is driving house prices in Stockholm?

Ångman, Josefin January 2016 (has links)
An increased mortgage cap was introduced in 2010, and as of May 1st 2016 an amortization requirement was introduced in an attempt to slow down house price growth in Sweden. Fluctuations in house prices can significantly influence macroeconomic stability, and because house prices in Stockholm are rising even more rapidly than in Sweden as a whole, understanding Stockholm's dynamics is very important, especially for policy. Stockholm house prices between the first quarter of 1996 and the fourth quarter of 2015 are therefore investigated using a Vector Error Correction framework. This approach separates the long-run equilibrium price from the short-run dynamics. Decreases in the real mortgage rate and increases in real financial wealth appear most important in explaining rising house prices; increased real construction costs and increased real disposable income also seem to have an effect. The estimated models suggest that, on average, around 40-50 percent of a short-term deviation from the long-run equilibrium price is closed within a year. As of the last quarter of 2015, real house prices are significantly higher than the modeled long-run equilibrium price; the deviation is around 6-7 percent.
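A minimal sketch of the estimation framework mentioned above, assuming hypothetical quarterly series and the statsmodels VECM implementation; the variable names and synthetic data are illustrative stand-ins, not the thesis's dataset.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

# Hypothetical quarterly data: real house prices plus candidate drivers.
rng = np.random.default_rng(0)
n = 80  # 1996Q1-2015Q4 = 80 quarters
data = pd.DataFrame({
    "house_price":   np.cumsum(rng.normal(0.5, 1.0, n)),
    "mortgage_rate": np.cumsum(rng.normal(-0.1, 0.5, n)),
    "fin_wealth":    np.cumsum(rng.normal(0.4, 1.0, n)),
    "disp_income":   np.cumsum(rng.normal(0.3, 0.8, n)),
})

# Choose the cointegration rank with Johansen's trace test.
rank = select_coint_rank(data, det_order=0, k_ar_diff=2).rank

# Fit the VECM: beta gives the long-run equilibrium relation,
# alpha the speed of adjustment back toward it (error correction).
res = VECM(data, k_ar_diff=2, coint_rank=max(rank, 1), deterministic="ci").fit()
print("long-run relation (beta):\n", res.beta)
print("adjustment speeds (alpha):\n", res.alpha)
```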
124

Objectivation and standardisation of ergonomic assessments of workstations based on Kinect data

Plantard, Pierre 08 July 2016 (has links)
Ergonomic analysis of workstations remains the starting point of any policy for preventing occupational disease risks. Many scientific studies seek to quantify the risk factors, most often producing a strain score. The current difficulty of ergonomic rating methods lies in capturing these factors: most systems rely on data collection that is often subjective and strongly influenced by the person performing the rating. Through this internship, the company aims to objectify the ergonomic analysis of workstations by capturing the operator's movement. The main challenge is moving scientific tools and methods to field use, with all the constraints that entails; technological and scientific advances encourage this transition by providing tools usable in an industrial context. The two main objectives of the internship were, first, to limit capture biases so as to bring precision and standardisation to field measurement and, second, to access new data, notably the temporal aspect of the task performed. The equipment used is the Kinect depth sensor developed by Microsoft, which has been studied scientifically in several domains, in particular for motion capture. During this internship we processed the signal delivered by the Kinect to obtain data allowing rating grids to be filled in automatically. Measurement noise was handled with a recursive low-pass filter commonly used in motion-analysis laboratories. Converting the raw spatial joint data of the operator into angles was a large part of the work, because many parameters come into play, such as the sensor position. The internship reduced the subjectivity of the measurement and also gave access to new indices, such as the percentage of cycle time spent at joint angles hazardous for the operator. The transfer of laboratory tools to the field still needs further work, particularly on the robustness of the developed systems, and must rely on laboratory experiments. / Evaluating potential risks of musculoskeletal disorders at real workstations is challenging because the environment is cluttered, which makes it difficult to assess the pose of a worker correctly and accurately. Most traditional motion capture systems cannot deal with these workplace constraints. Being marker-free and calibration-free, the Microsoft Kinect is a promising device for assessing these poses, but the validity of the delivered kinematic data under working conditions is still unknown. In this thesis we first propose an extensive validation of the Kinect system in an ergonomic assessment context with sub-optimal capture conditions. As most of the large inaccuracies come from occlusions, we propose a new example-based method to correct unreliable poses delivered by the Kinect in such situations. We introduce the Filtered Pose Graph structure so that the method selects the most relevant candidates before combining them. In an ergonomics context, we compute RULA scores and compare them with those computed from an optoelectronic motion capture system. We also challenge our method in a real workplace environment and compare its performance with experts' evaluations at the Faurecia company. Finally, we evaluate the relevance of the proposed method for estimating internal joint torques through inverse dynamics, even when occlusions occur. Our method opens new perspectives for defining fatigue or solicitation indexes based on continuous measurement, in contrast to the static images classically used in ergonomics. The computation time enables real-time feedback and interaction with the operator.
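A minimal sketch of the joints-to-angles step described above, assuming hypothetical 3-D joint coordinates of the kind a depth sensor returns; it illustrates the principle, not the thesis's processing chain.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by the segments b->a and b->c."""
    u = np.asarray(a) - np.asarray(b)
    v = np.asarray(c) - np.asarray(b)
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# Illustrative 3-D positions (metres) as a Kinect-like sensor might report them.
shoulder, elbow, wrist = [0.0, 1.4, 2.0], [0.25, 1.15, 2.0], [0.30, 0.90, 1.8]
angle = joint_angle(shoulder, elbow, wrist)
print(f"elbow angle: {angle:.1f} deg")

# A rating grid such as RULA then bins angles like this into partial scores,
# and the fraction of cycle time spent in hazardous ranges can be accumulated
# frame by frame from the continuous measurement.
```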
125

Improving attenuation corrections obtained using singles-mode transmission data in small-animal PET

Vandervoort, Eric 05 1900 (has links)
The images in positron emission tomography (PET) represent three-dimensional dynamic distributions of biologically interesting molecules labelled with positron-emitting radionuclides (radiotracers). Spatial localisation of the radiotracers is achieved by detecting in coincidence two collinear photons which are emitted when the positron annihilates with an ordinary electron. In order to obtain quantitatively accurate images in PET, it is necessary to correct for the effects of photon attenuation within the subject being imaged. These corrections can be obtained using singles-mode photon transmission scanning. Although suitable for small-animal PET, these scans are subject to high amounts of contamination from scattered photons. Currently, no accurate correction exists to account for scatter in these data. The primary purpose of this work was to implement and validate an analytical scatter correction for PET transmission scanning. In order to isolate the effects of scatter, we developed a simulation tool which was validated using experimental transmission data. We then presented an analytical scatter correction for singles-mode transmission data in PET. We compared our scatter correction data with the previously validated simulation data for uniform and non-uniform phantoms and for two different transmission source radionuclides. Our scatter calculation correctly predicted the contribution from scattered photons to the simulated data for all phantoms and both transmission sources. We then applied our scatter correction as part of an iterative reconstruction algorithm for simulated and experimental PET transmission data for uniform and non-uniform phantoms. We also tested our reconstruction and scatter correction procedure using transmission data for several animal studies (mice, rats and primates). For all studies considered, we found that the average reconstructed linear attenuation coefficients for water or soft-tissue regions of interest agreed with expected values to within 4%. Using a 2.2 GHz processor, the scatter correction required between 6 and 27 minutes of CPU time (without any code optimisation) depending on the phantom size and source used. This extra calculation time does not seem unreasonable considering that, without scatter corrections, errors in the reconstructed attenuation coefficients were between 18 and 45% depending on the phantom size and transmission source used. / Science, Faculty of / Physics and Astronomy, Department of / Graduate
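To illustrate why accurate attenuation coefficients matter (a generic sketch, not the thesis's reconstruction code), the attenuation correction factor for one line of response is the exponential of the line integral of the linear attenuation coefficient μ, which a transmission scan estimates from the ratio of blank to transmission counts.

```python
import numpy as np

# Assumed 1-D geometry: a 30 cm water path sampled in 0.5 cm voxels.
mu_water = 0.096                      # cm^-1, approx. value for 511 keV photons in water
mu_map = np.full(60, mu_water)        # linear attenuation coefficients along the ray
voxel_cm = 0.5

line_integral = mu_map.sum() * voxel_cm        # integral of mu dl (dimensionless)
transmission = np.exp(-line_integral)          # I/I0 measured by blank/transmission scans
acf = np.exp(line_integral)                    # factor applied to the emission data
print(f"survival fraction I/I0 = {transmission:.3f}, ACF = {acf:.1f}")
```

A scatter-contaminated transmission measurement inflates I/I0, so the reconstructed μ values are biased low unless the scatter contribution is modelled and removed, which is the problem addressed above.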
126

Correction Items to Receivables

Benešová, Eva January 2007 (has links)
The Master's thesis deals with the concept of tax-deductible and non-deductible correction items (allowances) to receivables as used in the Czech accounting system. Furthermore, it applies correction items to receivables in an existing Czech company.
127

Color Correction and Contrast Enhancement for Natural Images and Videos

Tian, Qi-Chong 04 October 2018 (has links)
Image enhancement refers to techniques for improving the visual quality of images and plays an important role in image processing and computer vision. Specifically, we consider color correction and contrast enhancement to improve image quality. In the first part of this thesis, we focus on color correction for natural images. First, we give a brief review of color correction. Second, we propose an efficient color correction method for image stitching via histogram specification and global mapping. Third, we present a color consistency approach for image collections based on range-preserving histogram specification. In the second part, we turn to contrast enhancement for natural images and videos. First, we give a brief review of contrast enhancement. Second, we propose a naturalness-preserving global contrast enhancement method that avoids over-enhancement. Third, we present a variational fusion-based method for enhancing non-uniformly illuminated images that avoids over- and under-enhancement. Finally, we extend the fusion-based framework to videos with a temporally consistent strategy that does not introduce flickering artifacts.
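A minimal sketch of histogram specification, the building block named in the abstract; this is a generic implementation on synthetic arrays, not the author's stitching pipeline, and the image values are made up.

```python
import numpy as np

def match_histogram(source, reference):
    """Return source remapped so its value distribution matches reference."""
    src_vals, src_idx, src_counts = np.unique(source.ravel(),
                                              return_inverse=True,
                                              return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    # Empirical CDFs of both images.
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # Map each source value to the reference value with the closest CDF.
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    return mapped[src_idx].reshape(source.shape)

rng = np.random.default_rng(1)
img_a = rng.integers(0, 180, (64, 64))     # darker image (e.g. one side of a stitch)
img_b = rng.integers(60, 255, (64, 64))    # brighter reference image
img_a_matched = match_histogram(img_a, img_b)
print(img_a.mean(), img_b.mean(), img_a_matched.mean())
```

In a stitching context, the matching would typically be estimated on the overlap region and the resulting mapping applied globally to the whole image, which is the "global mapping" idea mentioned above.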
128

UAV hyperspectral images corrected for illumination differences considering microtopography in correction models

Thomaz, Mariana Bardella. January 2020 (has links)
Advisor: Nilton Nobuhiro Imai / Abstract: The use of UAVs in remote sensing is a growing area of knowledge and has driven the development of lightweight multi- and hyperspectral cameras that can be carried on board a UAV. Since the spectral information depends on the lighting conditions of the scene, UAV-acquired images require research into image processing to adapt the concepts already established for orbital images. Radiometric correction is therefore of major importance for extracting data from imagery with high confidence, since the reflectance factor is a function of geometric structure, solar angle, and optical properties. In this regard, methodologies were developed to correct images for illumination differences, using Bidirectional Reflectance Distribution Functions and topographic correction models, also known as illumination correction. This work uses the Rikola hyperspectral camera on board a UAV at tropical latitudes. It assesses how anisotropy can influence the variability in target reflectance and how topographic correction models can be applied, using microtopography, to attenuate these effects. Three tests were carried out to study i) the viewing geometries of the Rikola hyperspectral camera and the availability of off-nadir data, ii) the variation of the anisotropy factor between targets across the viewing geometries, and iii) microtopography correction models to correct illumination differences using a highly detailed DSM (10 cm and 3 cm) to assess the micro-relief. We applied the correcti... (Complete abstract: click electronic access below) / Master
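For illustration, a simple cosine-style illumination (topographic) correction of the kind referred to above can be sketched as follows; the slope, aspect, and radiance values are made-up stand-ins for per-pixel quantities that would come from the detailed DSM and a hyperspectral band.

```python
import numpy as np

def cos_incidence(sun_zenith, sun_azimuth, slope, aspect):
    """Cosine of the local solar incidence angle (all angles in radians)."""
    return (np.cos(sun_zenith) * np.cos(slope)
            + np.sin(sun_zenith) * np.sin(slope) * np.cos(sun_azimuth - aspect))

sun_zenith, sun_azimuth = np.radians(35.0), np.radians(120.0)
slope = np.radians(np.array([0.0, 10.0, 25.0]))    # per-pixel slope from the DSM
aspect = np.radians(np.array([0.0, 90.0, 300.0]))  # per-pixel aspect from the DSM
radiance = np.array([0.18, 0.15, 0.22])            # hypothetical band values

cos_i = cos_incidence(sun_zenith, sun_azimuth, slope, aspect)
corrected = radiance * np.cos(sun_zenith) / cos_i   # cosine correction to flat terrain
print(np.round(corrected, 3))
```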
129

BrandGAN: Unsupervised Structural Image Correction

El Katerji, Mostafa 12 May 2021 (has links)
Recently, machine learning models such as Generative Adversarial Networks and Autoencoders have received significant attention from the research community. In fact, researchers have produced novel ways for using this technology in the space of image manipulation for cross-domain image-to-image transformations, upsampling, style imprinting, human facial editing, and computed tomography correction. Previous work primarily focuses on transformations where the output inherits the same skeletal outline as the input image. This work proposes a novel framework, called BrandGAN, that tackles image correction for hand-drawn images. One of this problem’s novelties is that it requires the skeletal outline of the input image to be manipulated and adjusted to look more like a target reference while retaining key visual features that were included intentionally by its creator. GANs, when trained on a dataset, are capable of producing a large variety of novel images derived from a combination of visual features from the original dataset. StyleGAN is a model that iterated on the concept of GANs and was able to produce high-fidelity images such as human faces and cars. StyleGAN includes a process called projection that finds an encoding of an input image capable of producing a visually similar image. Projection in StyleGAN demonstrated the model’s ability to represent real images that were not a part of its training dataset. StyleGAN encodings are vectors that represent features of an image. Encodings can be combined to merge or manipulate features of distinct images. In BrandGAN, we tackle image correction by leveraging StyleGAN’s projection and encoding vector feature manipulation. We present a modified version of projection to find an encoding representation of hand-drawn images. We propose a novel GAN indexing technique, called GANdex, capable of finding encodings of novel images derived from the original dataset that share visual similarities with the input image. Finally, with vector feature manipulation, we combine the GANdex vector’s features with the input image’s projection to produce the final image-corrected output. Combining the vectors results in adjusting the input imperfections to resemble the original dataset’s structure while retaining novel features from the raw input image. We evaluate seventy-five hand-drawn images collected through a study with fifteen participants using objective and subjective measures. BrandGAN reduced the Fréchet inception distance from 193 to 161 and the Kernel-Inception distance from 0.048 to 0.026 when comparing the hand-drawn and BrandGAN output images to the reference design dataset. A blinded experiment showed that the average participant could identify 4.33 out of 5 images as their own when presented with a visually similar control image. We included a survey that collected opinion scores ranging from one or “strongly disagree” to five or “strongly agree.” The average participant answered 4.32 for the retention of detail, 4.25 for the output’s professionalism, and 4.57 for their preference of using the BrandGAN output over their own.
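A minimal sketch of the encoding-blending idea described above, with made-up array shapes and a single global mixing weight; the real StyleGAN W+ encodings and generator API are not reproduced here.

```python
import numpy as np

# Hypothetical latent encodings: one from projecting the hand-drawn input,
# one for the structurally clean design found in the training distribution
# (what the thesis calls the GANdex match). Shape (18, 512) mimics a
# StyleGAN-style per-layer encoding but is illustrative only.
w_projection = np.random.randn(18, 512)   # encoding of the hand-drawn input
w_gandex = np.random.randn(18, 512)       # encoding of the closest clean design

alpha = 0.6                               # pull toward the clean structure
w_corrected = alpha * w_gandex + (1.0 - alpha) * w_projection

# The blended encoding would then be decoded by the generator, e.g.:
# corrected_image = generator.synthesis(w_corrected)   # pseudo-call, API varies
print(w_corrected.shape)
```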
130

Investigating the Density-Corrected SCAN using Water Clusters and Chemical Reaction Barrier Heights

Bhetwal, Pradeep January 2023 (has links)
Kohn-Sham density functional theory (KS-DFT) is one of the most widely used electronic structure methods. It is used to find various properties of atoms, molecules, clusters, and solids. In principle, results for these properties can be found by solving self-consistent one-electron Schrödinger-like equations based on density functionals for the energy. In practice, the density functional for the exchange-correlation contribution to the energy must be approximated. The accuracy of practical DFT depends on the choice of density functional approximation (DFA) and also on the electron density produced by the DFA. The SCAN (strongly constrained and appropriately normed) functional developed by Sun, Ruzsinszky, and Perdew is the first meta-GGA (meta-generalized gradient approximation) functional constrained to obey all 17 known exact constraints that a meta-GGA can. SCAN has been found to outperform most other functionals when it is applied to aqueous systems. However, density-driven errors (energy errors arising from an inexact density produced by a DFA) hinder SCAN from achieving chemical accuracy in some systems, including water. Density-corrected DFT (DC-DFT) can alleviate this shortcoming by adopting a more accurate electron density, which in most applications is the electron density obtained at the Hartree-Fock level of theory, due to its relatively low computational cost. In the second chapter, calculations to determine the accuracy of the HF-SCAN functional for water clusters are performed. The interaction and binding energies of water clusters in the BEGDB and WATER27 data sets are computed, and the spurious charge transfer in deprotonated, protonated, and neutral water dimers is interpreted. The density-corrected SCAN (DC-SCAN) functional elevates the accuracy of SCAN toward the CCSD(T) limit, not only for the neutral water clusters but also (to a lesser extent) for all considered hydrated ion systems. In the third chapter, the barrier heights of the BH76 test set are analyzed. Three fully non-local proxy functionals (LC-ωPBE, SCAN50%, and SCAN-FLOSIC) and their self-consistent proxy densities are used. These functionals share two important points of similarity with the exact functional: they produce reasonably accurate self-consistent barrier heights, and their self-consistent total energies are nearly piecewise linear in fractional electron number. A somewhat reliable cancellation of density-driven and functional-driven errors in the energy is established. / Physics
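For context, the error decomposition underlying density-corrected DFT is commonly written as follows (a standard statement of the idea as introduced in the DC-DFT literature, not text from the thesis): the total energy error of an approximate functional splits into a functional-driven and a density-driven part, and DC-DFT (here HF-SCAN) reduces the latter by evaluating the functional on a more accurate density such as the Hartree-Fock one.

```latex
% n_DFA is the self-consistent density of the approximate functional E_DFA;
% DC-DFT replaces it with a more accurate density (e.g. n_HF in HF-SCAN),
% which suppresses the density-driven contribution Delta E_D.
\Delta E = E_{\mathrm{DFA}}[n_{\mathrm{DFA}}] - E_{\mathrm{exact}}[n_{\mathrm{exact}}]
  = \underbrace{E_{\mathrm{DFA}}[n_{\mathrm{exact}}] - E_{\mathrm{exact}}[n_{\mathrm{exact}}]}_{\Delta E_{\mathrm{F}}\ (\text{functional-driven})}
  + \underbrace{E_{\mathrm{DFA}}[n_{\mathrm{DFA}}] - E_{\mathrm{DFA}}[n_{\mathrm{exact}}]}_{\Delta E_{\mathrm{D}}\ (\text{density-driven})}
```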
