About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
281

Individual Differences in the Use of Remote Vision Stereoscopic Displays

Winterbottom, Marc 05 June 2015 (has links)
No description available.
282

Controllable 3D Effects Synthesis in Image Editing

Yichen Sheng (18184378) 15 April 2024 (has links)
<p dir="ltr">3D effect synthesis is crucial in image editing to enhance realism or visual appeal. Unlike classical graphics rendering, which relies on complete 3D geometries, 3D effect synthesis in im- age editing operates solely with 2D images as inputs. This shift presents significant challenges, primarily addressed by data-driven methods that learn to synthesize 3D effects in an end-to-end manner. However, these methods face limitations in the diversity of 3D effects they can produce and lack user control. For instance, existing shadow generation networks are restricted to produc- ing hard shadows without offering any user input for customization.</p><p dir="ltr">In this dissertation, we tackle the research question: <i>how can we synthesize controllable and realistic 3D effects in image editing when only 2D information is available? </i>Our investigation leads to four contributions. First, we introduce a neural network designed to create realistic soft shadows from an image cutout and a user-specified environmental light map. This approach is the first attempt in utilizing neural network for realistic soft shadow rendering in real-time. Second, we develop a novel 2.5D representation Pixel Height, tailored for the nuances of image editing. This representation not only forms the foundation of a new soft shadow rendering pipeline that provides intuitive user control, but also generalizes the soft shadow receivers to be general shadow receivers. Third, we present the mathematical relationship between the Pixel Height representation and 3D space. This connection facilitates the reconstruction of normals or depth from 2D scenes, broadening the scope for synthesizing comprehensive 3D lighting effects such as reflections and refractions. A 3D-aware buffer channels are also proposed to improve the synthesized soft shadow quality. Lastly, we introduce Dr.Bokeh, a differentiable bokeh renderer that extends traditional bokeh effect algorithms with better occlusion modeling to correct flaws existed in existing methods. With the more precise lens modeling, we show that Dr.Bokeh not only achieves the state-of-the-art bokeh rendering quality, but also pushes the boundary of depth-from-defocus problem.</p><p dir="ltr">Our work in controllable 3D effect synthesis represents a pioneering effort in image editing, laying the groundwork for future lighting effect synthesis in various image editing applications. Moreover, the improvements to filtering-based bokeh rendering could significantly enhance com- mercial products, such as the portrait mode feature on smartphones.</p>
283

Antibody screening using a biophotonic array sensor for immune system response profile

Read, Thomas January 2013 (has links)
With a population increasing in both number and age comes a need for new diagnostic tools in the healthcare system, capable of diagnosing and monitoring multiple disorders cheaply and effectively to provide personalised healthcare. Multiplex label-free biosensors have the potential to rejuvenate the current system. This thesis details the assessment of an 'in-house' built label-free array screening technology that has the potential to be a point-of-care diagnostic for personalised medicine: the Array Reader. The performance of the Array Reader platform is considered in detail and optimised for both antibody and protein screening arrays. A Global Fit protocol is developed to extract kinetic constants for all protein-protein interactions, assuming a Langmuir adsorption binding model. Standard operating procedures are developed to optimise the dynamic range, sensitivity, reproducibility and limit of detection of the immuno-kinetic assay. A new antibody bio-stack signal amplification strategy is developed, improving the detection limit 60-fold. As a consequence, the bio-stack yields a novel method for determining the plasmon field penetration depth, defining the assay sensing volume at the nanoparticle surface. Antibody screening arrays were investigated with an IgG quantification assay to determine total IgG content from serum samples. It relied on the ability of protein A/G to bind antibodies via the Fc region. Specific antigens were used to measure the binding properties of the antibody Fab region. By characterising both regions, we gained insight into the overall ability of an antibody to trigger an immune response. Protein screening assays were investigated targeting C-reactive protein (CRP), a marker of inflammation. The assay's performance characteristics compared favourably with clinically used CRP assays. Finally, an antibody screening array was developed to assess the efficacy of a vaccine against Yersinia pestis in a non-human primate model. The vaccine screening array is an excellent example of the versatility of the platform and just one of many possible applications for the future.
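The Global Fit protocol itself is not reproduced in the abstract, but the underlying 1:1 Langmuir kinetic model can be sketched as follows. This Python example fits a standard pseudo-first-order association curve to a simulated sensorgram at a single analyte concentration; all numbers, names and the noise level are hypothetical and serve only to illustrate how ka, kd and KD are recovered from binding data.

```python
import numpy as np
from scipy.optimize import curve_fit

def association(t, ka, kd, rmax, conc):
    """1:1 Langmuir association phase: R(t) = Req * (1 - exp(-(ka*C + kd)*t))."""
    kobs = ka * conc + kd
    req = rmax * ka * conc / kobs
    return req * (1.0 - np.exp(-kobs * t))

# Hypothetical sensorgram for one analyte concentration (mol/L), with noise.
conc = 50e-9
t = np.linspace(0, 600, 200)                       # seconds
true_signal = association(t, ka=1e5, kd=1e-3, rmax=1.0, conc=conc)
signal = true_signal + np.random.normal(0, 0.01, t.size)

# Fit ka, kd and Rmax; KD follows as kd / ka.
popt, _ = curve_fit(lambda tt, ka, kd, rmax: association(tt, ka, kd, rmax, conc),
                    t, signal, p0=[1e4, 1e-2, 1.0])
ka_fit, kd_fit, rmax_fit = popt
print(f"ka = {ka_fit:.3g} 1/(M*s), kd = {kd_fit:.3g} 1/s, KD = {kd_fit/ka_fit:.3g} M")
```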
284

An analysis of the vocabulary and reading comprehension challenges faced by first year B.Ed. students / Catharina Elisabeth Martens

Martens, Catharina Elisabeth January 2014 (has links)
First-year students at university level encounter various challenges that might impact their success or failure. At this level, learning is heavily dependent on extensive and intensive reading, so the reader needs an adequate vocabulary size to support the reading comprehension process. Knowledge of vocabulary (or words) is deemed an essential factor in reading proficiency, mainly because meaning is derived from words and because of the connection between words and comprehension of text. This study investigated the relationship among vocabulary size, vocabulary depth and reading comprehension in 105 first-year B.Ed. students majoring in English at a university in the North West Province. In addition, the vocabulary test results of two different groups, first and fourth years, were compared to determine whether vocabulary levels advance over the four-year study period. A quantitative research approach was used in which the study population completed standardised vocabulary size and vocabulary depth tests, reading comprehension tests and a survey questionnaire. The results were statistically analysed to determine the relationship between vocabulary size and depth and reading comprehension, and showed a positive and significant effect-size correlation between vocabulary size and depth, and reading comprehension. The participants in the study were mainly Afrikaans-speaking students who received their school education in Afrikaans. The instruments used in the research were the Vocabulary Levels Test (Nation, 1990), Read's Word Associates Test (1992) and TOEFL reading comprehension tests. The questionnaire was added to determine the participants' previous exposure to English and current reading habits. A two-tailed Pearson product-moment correlation and multiple regression analyses were run to determine which of the variables, vocabulary size or depth, makes the more significant contribution to reading comprehension, and to establish which variable was the most significant predictor of academic success in the June examination. Vocabulary size was identified as a predictor of success in the June examination; furthermore, when gender is used as an independent variable, different vocabulary size tests are identified for males and females. / MEd (Curriculum Development), North-West University, Potchefstroom Campus, 2015
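A minimal sketch of the statistical analysis described above, using synthetic scores in place of the study's actual data: two-tailed Pearson product-moment correlations followed by a multiple regression with vocabulary size and depth as predictors of reading comprehension. Variable names and the simulated effect sizes are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

# Hypothetical scores for 105 students (the study's real data are not reproduced).
rng = np.random.default_rng(0)
n = 105
vocab_size = rng.normal(5000, 1200, n)
vocab_depth = 0.6 * vocab_size / 1000 + rng.normal(0, 1.0, n)
reading = 0.004 * vocab_size + 2.0 * vocab_depth + rng.normal(0, 3.0, n)

# Two-tailed Pearson product-moment correlations with reading comprehension.
r_size, p_size = stats.pearsonr(vocab_size, reading)
r_depth, p_depth = stats.pearsonr(vocab_depth, reading)
print(f"size:  r = {r_size:.2f}, p = {p_size:.3g}")
print(f"depth: r = {r_depth:.2f}, p = {p_depth:.3g}")

# Multiple regression: relative contribution of each predictor.
X = np.column_stack([np.ones(n), vocab_size, vocab_depth])
coef, *_ = np.linalg.lstsq(X, reading, rcond=None)
print(f"intercept = {coef[0]:.2f}, b_size = {coef[1]:.4f}, b_depth = {coef[2]:.2f}")
```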
286

Vers un système d'information géographique du couvert nival en Estrie

Fortier, Robin January 2010 (has links)
The objective of this research is to develop a system capable of simulating snow depth and snow water equivalent in the Sherbrooke to Mount-Megantic area of Quebec's Eastern Townships, using meteorological and digital terrain data as input. The working hypothesis is that meteorological data can drive a point energy and mass balance snow cover model. The model used was developed by the Hydrologic Research Lab (National Weather Service) and was calibrated for local conditions using field data collected during two winters at several sites on Mount-Megantic. Snow water equivalent and depth are used for calibration and validation of the model. Automated snow sensors were also used to obtain temperature calibration data. The snow surveys and the correction of air temperature for elevation improve the estimates of snow depth and water equivalent. The results suggest that data from the Sherbrooke meteorological stations can be used to estimate the snow cover over the Eastern Townships. Air temperature extrapolation across the field area is a challenge; however, the simulated snow cover agrees generally well with data observed at several stations throughout the region.
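The elevation correction of air temperature mentioned above can be illustrated with a simple lapse-rate adjustment. The sketch below assumes a standard environmental lapse rate of about 6.5 °C per kilometre and illustrative station and site elevations; the thesis calibrates its correction against field data rather than using these fixed values.

```python
def correct_air_temperature(t_station_c, z_station_m, z_site_m, lapse_rate_c_per_km=6.5):
    """Extrapolate a station air temperature to a site at a different elevation.

    A constant environmental lapse rate (~6.5 C/km) is assumed here purely for
    illustration; a calibrated, locally derived rate would be used in practice.
    """
    dz_km = (z_site_m - z_station_m) / 1000.0
    return t_station_c - lapse_rate_c_per_km * dz_km

# Example with illustrative elevations for a Sherbrooke station and a mountain site.
print(correct_air_temperature(t_station_c=-5.0, z_station_m=240.0, z_site_m=1100.0))
# -> about -10.6 C at the higher site
```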
287

Exploring snow information content of interferometric SAR Data / Exploration du contenu en information de l'interférométrie RSO lié à la neige

Gazkohani, Ali Esmaeily January 2008 (has links)
The objective of this research is to explore the information content of repeat-pass cross-track interferometric SAR (InSAR) with regard to snow, in particular snow water equivalent (SWE) and snow depth. The study is an outgrowth of earlier snow cover modeling and radar interferometry experiments at Schefferville, Quebec, Canada and elsewhere, which have shown that, because of loss of coherence, repeat-pass InSAR is not useful for snow cover mapping, even when used in differential InSAR mode. Repeat-pass cross-track InSAR would overcome this problem. Because dry snow is transparent at radar wavelengths, the main reflection is at the snow/ground interface. The high refractive index of ice creates a phase delay which is linearly related to the water equivalent of the snowpack. When the snow is wet, the snow surface is the main reflector, which enables measurement of snow depth. Algorithms are elaborated accordingly. Field experiments were conducted at two sites and employ two different types of digital elevation model (DEM) produced by means of cross-track InSAR. One was the Shuttle Radar Topography Mission digital elevation model (SRTM DEM), flown in February 2000. It was compared to the photogrammetrically produced Canadian Digital Elevation Model (CDEM) to examine snow-related effects at a site near Schefferville, where snow conditions are well known from half a century of snow and permafrost research. The second type of DEM was produced by means of airborne cross-track InSAR (TOPSAR). Several missions were flown for this purpose in both summer and winter conditions during NASA's Cold Land Processes Experiment (CLPX) in Colorado, USA. Differences between these DEMs were compared to snow conditions that were well documented during the CLPX field campaigns. The results are not straightforward. As a result of the automated correction routines employed in both SRTM and AIRSAR DEM extraction, the snow cover signal is contaminated: fitting InSAR DEMs to known topography distorts the snow information, just as the snow cover distorts the topographic information. The analysis is therefore mostly qualitative, focusing on particular terrain situations. At Schefferville, where the SRTM was adjusted to known lake levels, the expected dry-snow signal is seen near such lakes. Mine pits and waste dumps not included in the CDEM are depicted, and there is also a strong signal related to the spatial variations in SWE produced by wind redistribution of snow near lakes and on the alpine tundra. In Colorado, cross-sections across ploughed roads support the hypothesis that in dry snow the SWE is measurable by differential InSAR. They also support the hypothesis that snow depth may be measured when the snow cover is wet. Difference maps were also extracted for a 1 km² Intensive Study Area (ISA) for which intensive ground truth was available. Initial comparison between estimated and observed snow properties yielded low correlations, which improved after stratification of the data set. In conclusion, the study shows that snow-related signals are measurable. For operational applications, satellite-borne cross-track InSAR would be necessary. The processing needs to be snow-specific, with appropriate filtering routines to account for influences by terrain factors other than snow.
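The stated near-linear relationship between dry-snow phase delay and SWE can be sketched with a commonly cited geometric-refraction formula for two-way propagation through a dry snowpack. The permittivity fit, the C-band wavelength and the example numbers below are assumptions for illustration and are not taken from the thesis.

```python
import numpy as np

def dry_snow_phase_delay(swe_m, density_kg_m3, incidence_deg, wavelength_m=0.056):
    """Two-way InSAR phase shift caused by propagation through a dry snowpack.

    Uses the commonly cited relation
        dphi = -(4*pi/lambda) * d * (cos(theta) - sqrt(eps - sin(theta)**2)),
    with an empirical density-based estimate of the dry-snow permittivity eps.
    All constants here are illustrative assumptions, not the thesis's values.
    """
    rho = density_kg_m3 / 1000.0                 # snow density in g/cm^3
    eps = 1.0 + 1.6 * rho + 1.86 * rho ** 3      # dry-snow permittivity (empirical fit)
    depth = swe_m * 1000.0 / density_kg_m3       # snow depth in metres
    theta = np.radians(incidence_deg)
    return -(4 * np.pi / wavelength_m) * depth * (np.cos(theta) - np.sqrt(eps - np.sin(theta) ** 2))

# Example: 100 mm of SWE at 300 kg/m^3, C-band, 23 degree incidence angle.
print(dry_snow_phase_delay(swe_m=0.10, density_kg_m3=300.0, incidence_deg=23.0))
```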
288

Holoscopic 3D image depth estimation and segmentation techniques

Alazawi, Eman January 2015 (has links)
Today's 3D imaging techniques offer significant benefits over conventional 2D imaging techniques. The presence of natural depth information in the scene affords the observer an overall improved sense of reality and naturalness. A variety of systems attempting to reach this goal have been designed by many independent research groups, such as stereoscopic and auto-stereoscopic systems. However, the images displayed by such systems tend to cause eye strain, fatigue and headaches after prolonged viewing, as users are required to focus on the screen plane (accommodation) while converging their eyes to a point in space on a different plane (convergence). Holoscopy is a 3D technology that aims to overcome the above limitations of current 3D technology and was recently developed at Brunel University. This work is part W4.1 of the 3D VIVANT project, which is funded by the EU under the ICT programme and coordinated by Dr. Aman Aggoun at Brunel University, West London, UK. The objective of the work described in this thesis is to develop estimation and segmentation techniques that are capable of estimating precise 3D depth and are applicable to the holoscopic 3D imaging system. Particular emphasis is given to automatic techniques, i.e. the work favours algorithms with broad generalisation abilities, as no constraints are placed on the setting, and algorithms that are invariant to most appearance-based variation of objects in the scene (e.g. viewpoint changes, deformable objects, presence of noise and changes in lighting). Moreover, the techniques should be able to estimate depth information from both types of holoscopic 3D image, i.e. unidirectional and omnidirectional, which give horizontal parallax and full parallax (vertical and horizontal), respectively. The main aim of this research is to develop 3D depth estimation and 3D image segmentation techniques with great precision, with particular emphasis on automating thresholding techniques and identifying cues for the development of robust algorithms. A method for depth-through-disparity feature analysis has been built on the correlation that exists between pixels one micro-lens pitch apart, which is exploited to extract the viewpoint images (VPIs). The corresponding displacement among the VPIs is then exploited to estimate the depth map by setting and extracting reliable sets of local features. Feature-based-point and feature-based-edge are two novel automatic thresholding techniques for detecting and extracting features that are used in this approach. These techniques offer a solution to the problem of setting and extracting reliable features automatically, improving the performance of the depth estimation in terms of generalisation, speed and quality. Owing to the resolution limitation of the extracted VPIs, obtaining an accurate 3D depth map is challenging. Therefore, sub-pixel shift and integration is introduced as a novel interpolation technique to generate super-resolution VPIs. By shifting and integrating a set of up-sampled low-resolution VPIs, the new information contained in each viewpoint is exploited to obtain a super-resolution VPI. This produces a high-resolution perspective VPI with a wide field of view (FOV), which means that the holoscopic 3D image can be converted into a multi-view 3D image pixel format. Both improved depth accuracy and fast execution times have been achieved, improving the 3D depth map.
For a 3D object to be recognised, the related foreground regions and depth map need to be identified. Two novel unsupervised segmentation methods that generate interactive depth maps from single-viewpoint segmentation were developed. Both techniques improve on existing methods through their simplicity and full automation, producing the interactive 3D depth map without human interaction. The final contribution is a performance evaluation that provides an equitable measure of the success of the proposed techniques for foreground object segmentation, interactive 3D depth map creation and the generation of 2D super-resolution viewpoints. No-reference image quality assessment metrics, and their correlation with human perception of quality, are assessed subjectively with the help of human participants.
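The viewpoint-image extraction step described above relies on the fact that pixels at the same relative position under each micro-lens share a viewing direction. A minimal sketch for the unidirectional case, assuming a known integer micro-lens pitch in pixels; the function name and the synthetic example are hypothetical.

```python
import numpy as np

def extract_viewpoint_images(holoscopic, pitch):
    """Extract viewpoint images (VPIs) from a unidirectional holoscopic 3D image.

    holoscopic : (H, W) or (H, W, C) array whose width is a multiple of `pitch`
                 (the number of pixels under each cylindrical micro-lens).
    Returns a list of `pitch` VPIs; VPI k gathers the k-th pixel behind every lens,
    so all of its pixels share the same (orthographic) viewing direction.
    """
    W = holoscopic.shape[1]
    assert W % pitch == 0, "image width must be a multiple of the micro-lens pitch"
    return [holoscopic[:, k::pitch, ...] for k in range(pitch)]

# Example with a synthetic image and a hypothetical pitch of 8 pixels per lens.
img = np.random.rand(240, 320, 3)
vpis = extract_viewpoint_images(img, pitch=8)
print(len(vpis), vpis[0].shape)   # 8 viewpoint images of shape (240, 40, 3)
```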
289

Post-production of holoscopic 3D image

Abdul Fatah, Obaidullah January 2015 (has links)
Holoscopic 3D imaging, also known as integral imaging, was first proposed by Lippmann in 1908. It is a promising technique for creating full-colour spatial images that exist in space. It uses a single lens aperture for recording spatial images of a real scene, and thus offers omnidirectional motion parallax and true 3D depth, the fundamental feature that enables digital refocusing. While stereoscopic and multiview 3D imaging systems simulate the human-eye technique, the holoscopic 3D imaging system mimics the fly's-eye technique, in which viewpoints are orthographic projections. This system enables true 3D representation of a real scene in space, and thus offers richer spatial cues than stereoscopic and multiview 3D systems. Focus has been the greatest challenge since the beginning of photography, and it is becoming even more critical in film production, where focus pullers find it difficult to get the right focus as camera resolutions become increasingly high. Holoscopic 3D imaging enables the user to carry out refocusing in post-production. There have been three main types of digital refocusing methods, namely shift and integration, full resolution, and full resolution with blind. However, these methods suffer from artifacts and unsatisfactory resolution in the final resulting image; for instance, the artifacts take the form of blocky and blurry pictures due to unmatched boundaries. An upsampling method is proposed that improves the resolution of the image produced by the shift and integration approach. Sub-pixel adjustment of elemental images, including an upsampling technique with smart filters, is proposed to reduce the artifacts introduced by the full resolution with blind method, and to improve both the image quality and the resolution of the final rendered image. A novel 3D object extraction method is proposed that takes advantage of disparity, and is also applied to generate stereoscopic 3D images from a holoscopic 3D image. A cross-correlation matching algorithm is used to obtain the disparity map, and the desired object is then extracted. In addition, a 3D image conversion algorithm is proposed for the generation of stereoscopic and multiview 3D images from both unidirectional and omnidirectional holoscopic 3D images, which facilitates 3D content reformation.
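The basic shift and integration refocusing referred to above can be sketched as follows: each viewpoint image is shifted in proportion to its index and the views are averaged, so points on the chosen depth plane align and stay sharp while points off that plane blur. This is the textbook baseline that the thesis improves on with upsampling and smart filters; the function and parameter names are assumptions.

```python
import numpy as np

def refocus_shift_and_integrate(vpis, shift_per_view):
    """Digital refocusing by basic shift and integration (illustrative baseline).

    vpis           : list of viewpoint images of identical shape (H, W) or (H, W, C)
    shift_per_view : horizontal shift in pixels between neighbouring viewpoints;
                     choosing this value selects the plane brought into focus.
    """
    centre = (len(vpis) - 1) / 2.0
    # Shift each viewpoint relative to the central view, then average the stack.
    shifted = [np.roll(v, int(round((i - centre) * shift_per_view)), axis=1)
               for i, v in enumerate(vpis)]
    return np.mean(shifted, axis=0)
```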
290

Identification of corneal mechanical properties using optical tomography and digital volume correlation

Fu, Jiawei January 2014 (has links)
This work presents an effective methodology for measuring the depth-resolved 3D full-field deformation of semitransparent, light-scattering soft tissues such as vertebrate eye cornea. This was achieved by performing digital volume correlation on optical coherence tomography volume reconstructions of silicone rubber phantoms and porcine cornea samples. Both strip tensile tests and posterior inflation tests have been studied. Prior to these tests, noise effects and strain-induced speckle decorrelation were first studied using experimental and simulation methods. The interpolation bias in the strain results has also been analyzed, and two effective approaches have been introduced to reduce it. To extract material constitutive parameters from the 3D full-field deformation measurements, the virtual fields method has been extended to 3D. Both manually defined virtual fields and optimized piecewise virtual fields have been developed and compared with each other. Efforts have also been made to develop a method to correct the refraction-induced distortions in the optical coherence tomography reconstructions. Tilt tests of different silicone rubber phantoms have been carried out to evaluate the performance of the refraction correction method in correcting the distorted reconstructions.
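A minimal sketch of the digital volume correlation step used above: a subvolume from the reference OCT volume is matched against candidate positions in the deformed volume by zero-normalised cross-correlation. Real DVC implementations add sub-voxel refinement, shape functions and far faster search strategies; the function and parameter names here are assumptions for illustration.

```python
import numpy as np

def dvc_match(ref_vol, def_vol, centre, half_size=8, search=4):
    """Track one subvolume between two OCT reconstructions (basic DVC step).

    ref_vol, def_vol : 3D intensity volumes (reference and deformed states)
    centre           : (z, y, x) centre of the subvolume in the reference volume
    half_size        : subvolume half-width in voxels
    search           : integer search range in voxels along each axis
    Returns the integer displacement (dz, dy, dx) that maximises the
    zero-normalised cross-correlation, plus the correlation score.
    """
    z, y, x = centre
    s = half_size
    ref = ref_vol[z-s:z+s, y-s:y+s, x-s:x+s].astype(float)
    ref = (ref - ref.mean()) / (ref.std() + 1e-12)

    best_score, best_disp = -np.inf, (0, 0, 0)
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = def_vol[z+dz-s:z+dz+s, y+dy-s:y+dy+s, x+dx-s:x+dx+s].astype(float)
                cand = (cand - cand.mean()) / (cand.std() + 1e-12)
                score = np.mean(ref * cand)
                if score > best_score:
                    best_score, best_disp = score, (dz, dy, dx)
    return best_disp, best_score
```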
