
Occlusion Management in Conventional and Head-Mounted Display Visualization through the Relaxation of the Single Viewpoint/Timepoint Constraint

Meng-Lin Wu (6916283) 16 August 2019 (has links)
<div>In conventional computer graphics and visualization, images are synthesized following the planar pinhole camera (PPC) model. The PPC approximates physical imaging devices such as cameras and the human eye, which sample the scene with linear rays that originate from a single viewpoint, i.e. the pinhole. In addition, the PPC takes a snapshot of the scene, sampling it at a single instant in time, or timepoint, for each image. Images synthesized under these single-viewpoint and single-timepoint constraints are familiar to the user, as they emulate images captured with cameras or perceived by the human visual system. However, visualization using the PPC model suffers from the limitation of occlusion, when a region of interest (ROI) is not visible due to obstruction by other data. The conventional solution to the occlusion problem is to rely on the user to change the view interactively to gain line of sight to the scene ROIs. This approach of sequential navigation has the shortcomings of (1) inefficiency, as navigation is wasted when circumventing an occluder does not reveal an ROI, (2) inefficacy, as a moving or transient ROI can hide or disappear before the user reaches it, or as scene understanding requires visualizing multiple distant ROIs in parallel, and (3) user confusion, as back-and-forth navigation for systematic scene exploration can hinder spatio-temporal awareness.</div><div><br></div><div>In this thesis we propose a novel paradigm for handling occlusions in visualization, based on generalizing an image to incorporate samples from multiple viewpoints and multiple timepoints. The image generalization is implemented at the camera model level, by removing the single-viewpoint restriction, by removing the single-timepoint restriction, and by removing the linear-ray restriction, allowing for curved rays that are routed around occluders to reach distant ROIs. The paradigm offers the opportunity to greatly increase the information bandwidth of images, which we have explored in the context of both desktop and head-mounted display visualization, as needed in virtual and augmented reality applications. The challenges of multi-viewpoint, multi-timepoint visualization are (1) routing the non-linear rays to find all ROIs or to reach all known ROIs, (2) making the generalized image easy to parse by enforcing spatial and temporal continuity and non-redundancy, (3) rendering the generalized images quickly, as required by interactive applications, and (4) developing algorithms and user interfaces for the intuitive navigation of the compound cameras with tens of degrees of freedom. We have addressed these challenges (1) by developing a multiperspective visualization framework based on a hierarchical camera model with PPC and non-PPC leaves, (2) by routing multiple inflection-point rays with direction coherence, which enforces visualization continuity, and without intersection, which enforces non-redundancy, (3) by designing our hierarchical camera model to provide closed-form projection, which enables porting generalized image rendering to the traditional and highly efficient projection-followed-by-rasterization pipeline implemented by graphics hardware, and (4) by devising naturalistic user interfaces based on tracked head-mounted displays that allow deploying and retracting the additional perspectives intuitively and without simulator sickness.</div>
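The single-viewpoint constraint the thesis relaxes can be illustrated with a minimal sketch of PPC projection (illustrative code, not the framework from the thesis): two scene points on the same linear ray map to the same pixel, which is exactly how occlusion arises.

```python
import numpy as np

def ppc_project(points, f=1.0):
    """Project 3D points through a planar pinhole camera at the origin.

    Rays are linear and share a single viewpoint (the pinhole), so any
    two points on the same ray map to the same pixel -- the root cause
    of the occlusion problem discussed above.
    """
    pts = np.asarray(points, dtype=float)
    z = pts[:, 2]
    return np.stack([f * pts[:, 0] / z, f * pts[:, 1] / z], axis=1)

# Two scene points on the same ray: the nearer one occludes the farther one.
near = [1.0, 2.0, 4.0]
far = [2.0, 4.0, 8.0]   # same direction from the pinhole, twice as far
proj = ppc_project([near, far])
print(np.allclose(proj[0], proj[1]))  # True: identical pixel, hence occlusion
```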

Echantillonage d'importance des sources de lumières réalistes / Importance Sampling of Realistic Light Sources

Lu, Heqi 27 February 2014 (has links)
Realistic images can be rendered by simulating light transport with Monte Carlo techniques. The possibility to use realistic light sources for synthesizing images greatly contributes to their physical realism. Among existing models, those based on environment maps and light fields are attractive due to their ability to capture faithfully the far-field and near-field effects, as well as the possibility of acquiring them directly. Since acquired light sources have arbitrary frequencies and possibly high dimension (4D), using such light sources for realistic rendering leads to performance problems. In this thesis, we focus on how to balance the accuracy of the representation and the efficiency of the simulation. Our work relies on generating high-quality samples from the input light sources for unbiased Monte Carlo estimation. We introduce three novel methods. The first generates high-quality samples efficiently from dynamic environment maps, i.e. maps that change over time. We achieve this with a GPU approach that generates light samples according to an approximation of the form factor and combines them with samples from BRDF sampling for each pixel of a frame. Our method is accurate and efficient: with only 256 samples per pixel, we achieve high-quality results in real time at 1024 × 768 resolution. The second is an adaptive sampling strategy for light field light sources (4D): we generate high-quality samples efficiently by conservatively restricting the sampling area without reducing accuracy. With a GPU implementation and without any visibility computation, we achieve high-quality results with 200 samples per pixel in real time at 1024 × 768 resolution. Rendering remains interactive as long as visibility is computed using our new shadow map technique, and we also provide a fully unbiased approach by replacing the visibility test with an offline CPU computation. Since light-based importance sampling is not very effective when the underlying material is specular, we introduce a new balancing technique for Multiple Importance Sampling, which allows us to combine other sampling techniques with our light-based one. By minimizing the variance based on a second-order approximation, we are able to find a good balance between the different sampling techniques without any prior knowledge. Our method is effective, since it reduces the variance on average for all of our test scenes with different light sources, visibility complexities, and materials. It is also efficient: the overhead of our "black-box" approach is constant and represents 1% of the whole rendering process.
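The Multiple Importance Sampling combination discussed above can be illustrated with the classic balance heuristic on a toy 1D integrand. This is a sketch of standard MIS, not the thesis's second-order variance-minimizing balancing; the integrand and both sampling densities are made up for illustration.

```python
import math
import random

def balance_heuristic(pdf_a, pdf_b):
    """Balance-heuristic weight for technique A when combined with B."""
    return pdf_a / (pdf_a + pdf_b)

def f(x):
    return x * x  # toy integrand; exact integral over [0, 1] is 1/3

def mis_estimate(n, rng=random.Random(0)):
    """Combine uniform sampling (pdf = 1) with linear-density sampling
    (pdf = 2x, drawn by inverse-CDF as sqrt(u)), one sample from each."""
    total = 0.0
    for _ in range(n):
        xa = rng.random()               # technique A: uniform, pdf_a(xa) = 1
        xb = math.sqrt(rng.random())    # technique B: pdf_b(xb) = 2*xb
        total += balance_heuristic(1.0, 2.0 * xa) * f(xa) / 1.0
        total += balance_heuristic(2.0 * xb, 1.0) * f(xb) / (2.0 * xb)
    return total / n

print(abs(mis_estimate(20000) - 1.0 / 3.0) < 0.02)  # True: unbiased combination
```

The weighted estimator stays unbiased for any weights that sum to one per sample location, which is why the thesis can search for a better balance than this default without introducing bias.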

Modélisation des propriétés thermomécaniques effectives de dépôts élaborés par projection thermique / Modelling of the effective thermomechanical properties of thermal spray coatings

QiAO, Jianghao 20 September 2012 (has links)
In the present study, the thermal conductivity and elastic modulus of thermally sprayed YPSZ coatings were predicted by 2D and 3D finite-difference and finite-element numerical modeling based on cross-sectional images. The influence of the image resolution, size, and threshold value on the predicted coating properties was studied. Moreover, the effects of the numerical method and of the type of boundary condition were investigated. In particular, the Knudsen effect (rarefaction effect) on heat transfer through a porous structure was quantified by numerical modeling combined with image analysis. The effective thermal conductivities obtained by 3D modeling were found to be higher than those obtained in 2D, and also in better agreement with the measured results. A 2D/3D correlation was successfully found for the modeling of thermal conductivity: this correlation allows predicting 3D values from values computed in 2D.
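Any image-based effective-conductivity prediction of the kind described must fall within the Wiener (series/parallel) bounds for a two-phase medium; the sketch below computes them with illustrative, assumed property values, not data from the thesis.

```python
def wiener_bounds(k_solid, k_pore, porosity):
    """Wiener bounds on the effective thermal conductivity of a
    two-phase medium: parallel slabs (arithmetic mean) give the upper
    bound, series slabs (harmonic mean) the lower bound."""
    upper = (1.0 - porosity) * k_solid + porosity * k_pore
    lower = 1.0 / ((1.0 - porosity) / k_solid + porosity / k_pore)
    return lower, upper

# Illustrative values: dense zirconia ~2.5 W/m.K, air in pores ~0.026 W/m.K
lo, hi = wiener_bounds(k_solid=2.5, k_pore=0.026, porosity=0.15)
print(lo < hi)  # True; any effective-medium estimate must lie in [lo, hi]
```

The wide gap between the bounds is one reason image-based modeling is needed: the actual pore morphology, not just the porosity, determines where in this range the effective value falls.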

Offset Surface Light Fields

Ang, Jason January 2003 (has links)
For producing realistic images, reflection is an important visual effect. Reflections of the environment are important not only for highly reflective objects, such as mirrors, but also for more common objects such as brushed metals and glossy plastics. Generating these reflections accurately at real-time rates for interactive applications, however, is a difficult problem. Previous work in this area has made assumptions that sacrifice accuracy in order to preserve interactivity. I will present an algorithm that tries to handle reflection accurately in the general case for real-time rendering. The algorithm uses a database of prerendered environment maps to render both the original object itself and an additional bidirectional reflection distribution function (BRDF). The algorithm performs image-based rendering in reflection space in order to achieve accurate results. It also uses graphics processing unit (GPU) features to accelerate rendering.
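Rendering in reflection space starts from the mirror-reflection direction used to index an environment map. A minimal sketch of that computation (illustrative only, not the thesis's algorithm):

```python
import numpy as np

def reflect(view, normal):
    """Mirror-reflect an incident view direction about a surface normal:
    r = v - 2 (v . n) n, the direction used to look up an environment map."""
    v = np.asarray(view, dtype=float)
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)  # the formula assumes a unit normal
    return v - 2.0 * np.dot(v, n) * n

# A head-on view of a surface facing the camera reflects straight back.
r = reflect([0.0, 0.0, -1.0], [0.0, 0.0, 1.0])
print(np.allclose(r, [0.0, 0.0, 1.0]))  # True
```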

Image-based Capture and Modeling of Dynamic Human Motion and Appearance

Birkbeck, Neil Aylon Charles Unknown Date
No description available.

Matériaux architecturés pour refroidissement par transpiration : application aux chambres de combustion / Architectured materials for transpiration cooling : application to combustion chambers

Pinson, Sébastien 09 December 2016 (has links)
In order to cool aero-engine combustion chambers as efficiently as possible, special interest is given today to transpiration cooling technology. The cooling air flows through a porous liner in which a large amount of heat is exchanged by convection. The injected air then takes advantage of the pore distribution to form a relatively homogeneous protective boundary layer. Partially sintered metallic materials are good candidates for these porous liners. The present work focuses on internal heat transfer and aims to develop a methodology capable of identifying the partially sintered architectures best suited to this kind of application. During transpiration cooling, the flow and heat transfer are governed by a few effective material properties that depend on the porous architecture: the effective solid-phase thermal conductivity, the volumetric heat transfer coefficient, and the permeability properties. Using experimental work and numerical studies on samples digitized by X-ray tomography, simple relationships are first developed between the effective properties of partially sintered materials and their architectural parameters; the porosity, the specific surface area, and the powder type are selected to predict the effective properties. These relationships are finally integrated into a heat transfer model predicting the thermal performance of a design at engine operating conditions. A multi-objective optimization and an analysis of the optimal designs then highlight some architectures with strong potential for transpiration cooling applications. Materials with low porosity made of large, irregular powders seem to ensure the best trade-off among the criteria taken into consideration.
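The permeability properties mentioned above enter the liner design through the pressure drop of the coolant, commonly modeled by the Darcy-Forchheimer law. The sketch below uses illustrative, assumed values for all constants, not measurements from the thesis.

```python
def darcy_forchheimer_dp(mu, rho, K, C, v, thickness):
    """Pressure drop across a porous liner from the Darcy-Forchheimer law:

        dp / L = (mu / K) * v + rho * C * v**2

    K is the (Darcian) permeability and C the inertial coefficient --
    the two permeability properties of the porous architecture."""
    return thickness * ((mu / K) * v + rho * C * v * v)

# Illustrative: hot air (mu ~ 3e-5 Pa.s, rho ~ 0.6 kg/m^3) blown at
# 0.5 m/s through a 1.5 mm partially sintered liner.
dp = darcy_forchheimer_dp(mu=3.0e-5, rho=0.6, K=1.0e-12, C=1.0e4,
                          v=0.5, thickness=1.5e-3)
print(dp > 0.0)  # True; here the viscous (Darcy) term dominates
```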

Security and usability of authentication by challenge questions in online examination

Ullah, Abrar January 2017 (has links)
Online examinations are an integral component of many online learning environments and a high-stakes process for students, teachers and educational institutions. They are the target of many security threats, including intrusion by hackers and collusion. Collusion happens when a student invites a third party to impersonate him/her in an online test, or to abet with the exam questions. This research proposed a profile-based challenge question approach to create and consolidate a student's profile during the learning process, to be used for authentication in the examination process. The proposed method was investigated in six research studies using a usability test method and a risk-based security assessment method, in order to investigate usability attributes and security threats. The findings of the studies revealed that text-based questions are prone to usability issues such as ambiguity, syntactic variation, and spelling mistakes. The results of a usability analysis suggested that image-based questions are more usable than text-based questions (p < 0.01). The findings identified that dynamic profile questions are more efficient and effective than text-based and image-based questions (p < 0.01). Since text-based questions are associated with an individual's personal information, they are prone to being shared with impersonators. An increase in the number of challenge questions being shared showed a significant linear trend (p < 0.01) and increased the success of an impersonation attack. An increase in the database size decreased the success of an impersonation attack, with a significant linear trend (p < 0.01). The security analysis of dynamic profile questions revealed that an impersonation attack was not successful when a student shared credentials by email asynchronously; however, a similar attack was successful when the student and impersonator shared information in real time using mobile phones. The response time in this attack was significantly different from that of a genuine student answering his own challenge questions (p < 0.01). The security analysis revealed that the use of dynamic profile questions in a proctored exam can influence impersonation and abetting. This view was supported by online programme tutors in a focus group study.

Controle feedback de nível baseado em sensor de imagem aplicado ao equipamento misturador-decantador à inversão de fases (MDIF®) / Level feedback control using an image-based detector applied to a mixer-settler based on phase inversion equipment (MDIF®)

Fernandes, Lenita da Silva Lucio 03 September 2009 (has links)
The treatment of wastewater contaminated with oil is of great practical interest and is fundamental in environmental issues. A relevant process that has been studied for the continuous treatment of oil-contaminated water is the equipment denominated MDIF® (a mixer-settler based on phase inversion). An important variable during the operation of the MDIF® is the water-solvent interface level in the separation section. Controlling this level is essential both to avoid dragging the solvent during water removal and to improve the extraction efficiency of the oil by the solvent. The in-line measurement of the oil-water interface level is still a hard task: there are few sensors able to measure it reliably, and in the case of lab-scale systems there are no interface sensors with compatible dimensions. The objective of this work was to implement a control system for the organic solvent/water interface level in the MDIF® equipment. The detection of the interface level is based on the acquisition and treatment of images obtained dynamically through a standard camera (webcam). The control strategy was developed to operate in feedback mode, where the level measured by image detection is compared to the desired level and an action is taken on a control valve according to an implemented PID law. A control and data acquisition program was developed in Fortran to accomplish the following tasks: image acquisition; water-solvent interface identification; decision making and sending of control signals; and recording of data in files. Experimental open-loop runs were carried out on the MDIF® in which random pulse disturbances were applied to the input variable (water outlet flow). The interface level responses permitted process identification by transfer function models. From these models, the parameters of a PID controller were tuned by direct synthesis, and closed-loop tests were performed. Preliminary results for the feedback loop demonstrated that the sensor and the control strategy developed in this work are suitable for controlling the organic solvent-water interface level.
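The PID feedback law described above can be sketched against a toy first-order level model. The gains, set-point, and plant constants below are illustrative assumptions, not the thesis's identified transfer function or its direct-synthesis tuning.

```python
class PID:
    """Discrete positional PID law: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured_level):
        error = self.setpoint - measured_level
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy plant: interface level responds first-order to valve opening u.
pid = PID(kp=2.0, ki=0.5, kd=0.1, setpoint=0.5, dt=0.1)
level = 0.0
for _ in range(500):
    u = pid.update(level)                 # in the MDIF loop, 'level' would
    level += 0.1 * (0.2 * u - 0.1 * level)  # come from the webcam-based sensor
print(abs(level - 0.5) < 0.05)  # True: level settles near the set-point
```

The integral term is what drives the steady-state error to zero here, which is why a plain proportional valve action would leave the interface offset from the desired level.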

Um método espectrométrico de emissão em chama baseado em imagens digitais para determinação indireta de fármacos e determinação simultânea de sódio e cálcio / A Digital Image-Based Flame Emission Spectrometric Method for Indirect Determination of Drugs and Simultaneous Determination of Sodium and Calcium

Lyra, Wellington da Silva 11 September 2012 (has links)
In this work, the potential of Digital Image-Based Flame Emission Spectrometry (DIB-FES) is demonstrated through two completely separate applications. The first consists in the indirect determination of three drugs in injectable form: sodium diclofenac, sodium dipyrone, and calcium gluconate; the second combines DIB-FES with Multiple Linear Regression (MLR) for the simultaneous determination of sodium and calcium in powdered milk. To date, the literature has not reported the use of traditional FES for the indirect determination of organic substances, for the simultaneous determination of analytes with a single detector, or for overcoming the problem of spectral interference. In DIB-FES, digital images of the flame are captured by a webcam in its oxidant region (2.5 cm above the burner of the flame photometer) and are associated with the radiation emitted by the metals present in the air-butane flame. Based on the Red-Green-Blue (RGB) colour system, univariate and multivariate calibration models were developed, validated, and then applied to real samples. In each application, the results were compared with those obtained by the respective reference methods, and there were no statistically significant differences between them when applying the paired t-test at the 95% confidence level. The estimated precision was better than that of the respective reference methods, and accuracy was assessed by bias values and recovery factors between 97 and 104% in the two applications.
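The DIB-FES/MLR combination can be sketched as an ordinary least-squares fit of concentration to RGB responses. All numbers below are hypothetical; only the regression mechanics mirror the approach described above.

```python
import numpy as np

# Hypothetical calibration set: mean RGB values of flame images for
# four sodium standards of known concentration (mg/L).
rgb = np.array([[120.0, 40.0, 10.0],
                [150.0, 55.0, 12.0],
                [180.0, 70.0, 14.0],
                [210.0, 85.0, 16.0]])
sodium = np.array([2.0, 3.0, 4.0, 5.0])

# MLR: concentration ~ b0 + bR*R + bG*G + bB*B, solved by least squares.
X = np.hstack([np.ones((4, 1)), rgb])
beta, *_ = np.linalg.lstsq(X, sodium, rcond=None)

# Predict an unknown sample from its RGB response (intercept term first).
unknown = np.array([1.0, 165.0, 62.5, 13.0])
pred = unknown @ beta
print(abs(pred - 3.5) < 1e-6)  # True: midway between the 3 and 4 mg/L standards
```

In the multivariate case the same fit is done per analyte (one regression each for sodium and calcium), which is how a single RGB detector can resolve overlapping emission contributions.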
