1.
3D characterization of acidized fracture surfaces
Malagon Nieto, Camilo, 17 September 2007
The complex interrelations among the different physical processes involved in acid fracturing make it difficult to design stimulation jobs and, later, to predict their outcome. Current trends require the use of computational models to deal with the dynamic interaction of variables. This thesis presents a new study of acidized surface textures, by means of a laser profilometer, to improve our understanding of the remaining etched surface topography and its hydraulic response.
Visualization plots generated from the profilometer data revealed hydrodynamic channels in the acidized surfaces that could not be detected by the naked eye. The plots confirmed the existence of rock heterogeneities and showed how dissolution proceeds in chalk.
Experimental data showed clearly that the effect of dissolution depends on the type of rock and on the fluid system; dolomite, for example, dissolves more rapidly but more roughly than limestone. Fluid leakoff rate and temperature also affect the dissolution. Further research is necessary to clarify the resulting effects on fracture conductivity.
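As an illustration of the kind of quantity such profilometer maps support, the following sketch computes the areal average roughness Sa of a surface patch from a regular grid of heights; it is not taken from the thesis, and the grid, spacing and values are hypothetical.

```python
import numpy as np

def areal_roughness_sa(z: np.ndarray) -> float:
    """Areal average roughness Sa (ISO 25178): mean absolute deviation
    of the heights from their mean level (a simplification of the full
    least-squares mean plane)."""
    return float(np.mean(np.abs(z - z.mean())))

# Hypothetical profilometer patch sampled on a 200 x 200 grid (heights in mm).
rng = np.random.default_rng(0)
z = rng.normal(scale=0.05, size=(200, 200))
print(f"Sa = {areal_roughness_sa(z):.4f} mm")
```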

2.
Validation of 3D Surface Measurements Using Computed Tomography
Morton, Amy, 10 January 2012
Objective and accurate surface measurements are important in many clinical disciplines. Non-irradiating and low-cost alternatives are available, but validation of these measurement tools for clinical application is variable and sparse. This thesis presents a three dimensional (3D) surface measurement method validated by gold-standard computed tomography (CT). Forty-one 3D surface data sets were acquired by two modalities, a laser scanner and a binocular camera. The binocular camera was tested with three different texture modifiers that increased the colour variability of the imaged surface. A surface area calculation algorithm was created to process the data sets, and relative differences were calculated for each area measurement with respect to its corresponding CT measurement. The laser scanner data sets were affected by movement and specular reflection artefacts; their measurements were statistically equivalent to CT only if an error of up to 20% was considered acceptable. The binocular camera with the slide-projected texture modifier was shown to be statistically equivalent to the CT gold standard with less than 5% error (p < 0.0005). The surface area measurement method can easily be expanded and customized. By following the protocol outlined by the example in this work, researchers and clinicians would also be able to objectively assess other vision systems' performance and suitability. / Thesis (Master, Computing) -- Queen's University, 2012.
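The thesis does not reproduce its surface area algorithm here; the sketch below shows the standard computation such an algorithm could rest on, summing the areas of a triangulated surface's faces and reporting the relative difference against a reference (CT) value. All data and names are illustrative.

```python
import numpy as np

def mesh_surface_area(vertices: np.ndarray, faces: np.ndarray) -> float:
    """Sum of triangle areas: 0.5 * |(B - A) x (C - A)| per face."""
    a, b, c = (vertices[faces[:, i]] for i in range(3))
    cross = np.cross(b - a, c - a)
    return float(0.5 * np.linalg.norm(cross, axis=1).sum())

def relative_difference(measured: float, reference: float) -> float:
    """Signed relative difference with respect to the reference (CT) value."""
    return (measured - reference) / reference

# Illustrative data: two triangles forming a unit square (area 1.0).
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
tris = np.array([[0, 1, 2], [0, 2, 3]])
area = mesh_surface_area(verts, tris)
print(relative_difference(area, reference=1.02))  # e.g., scanner vs CT
```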

3.
3D Surface Reconstruction from Multi-Camera Stereo with Distributed Processing
Arora, Gorav, 03 1900
In this thesis a system which extracts 3D surfaces of arbitrary scenes under natural illumination is constructed using low-cost, off-the-shelf components. The system is implemented over a network of workstations using standardized distributed software technology. The architecture of the system is highly influenced by the performance requirements of multimedia applications which require 3D computer vision. Visible scene surfaces are extracted using a passive multi-baseline stereo technique. The implementation efficiently supports any number of cameras in arbitrary positions through an effective rectification strategy. The distributed software components interact through CORBA and work cooperatively in parallel. Experiments are performed to assess the effects of various parameters on the performance of the system and to demonstrate the feasibility of this approach. / Thesis / Master of Engineering (ME)
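The thesis's pipeline is distributed over CORBA components; purely as a single-machine illustration of the passive multi-baseline stereo idea it builds on, here is a hedged numpy sketch that sums SSD matching costs across baselines, in the spirit of Okutomi and Kanade. The images, baselines and window size are hypothetical.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def multi_baseline_disparity(ref, views, baselines, max_d=32, win=7):
    """Sum-of-SSD over several rectified views. views[i] was taken at
    relative baseline baselines[i]; its pixel shift scales with the
    candidate disparity d, expressed w.r.t. the widest baseline (b = 1)."""
    h, w = ref.shape
    cost = np.empty((max_d + 1, h, w))
    for d in range(max_d + 1):
        acc = np.zeros((h, w))
        for img, b in zip(views, baselines):
            # Undo the disparity shift expected at this baseline, then
            # accumulate a windowed sum of squared differences.
            shifted = np.roll(img, -int(round(d * b)), axis=1)
            acc += uniform_filter((ref - shifted) ** 2, size=win)
        cost[d] = acc
    return cost.argmin(axis=0)  # per-pixel disparity (wrap-around ignored)

# Illustrative use: two synthetic views of a random texture, displaced as
# if seen from baselines 0.5 and 1.0; true disparity 10 at the widest one.
rng = np.random.default_rng(1)
ref = rng.random((120, 160))
views = [np.roll(ref, 5, axis=1), np.roll(ref, 10, axis=1)]
disp = multi_baseline_disparity(ref, views, [0.5, 1.0], max_d=20)
```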

4.
Surfaces functionality of precision machined components: modelling, simulation, optimization and control
Aris, Najmil Faiz Mohamed, January 2008
This research develops an analytical, scientific approach to investigating high precision surface generation and to quantitatively analysing the effects of direct factors in precision machining. The research focuses on 3D surface characterization, with particular reference to the turning process and the associated surface generation. The most important issue for this research is surface functionality, which is becoming increasingly important in the engineering industry. The surface functionality should match the characterization parameters of the machined surface, which can be expressed in formula form as proposed in chapter 4.

Modelling and simulation are used extensively in the research. The modelling approach integrates the cutting force model, thermal model, vibration model, tool wear model, machining system response model and surface topography model into a single whole, forming a physical model driven by direct inputs. The major inputs to the model are the tooling geometry and the process variables; the outputs are cutting force, surface texture parameters, dimensional errors, residual stress and material removal rate. MATLAB and Simulink are used as tools to implement the modelling and simulation.

According to the simulation results, the feed rate has the most profound effect on surface generation. The influence of vibrations between the cutting tool and the workpiece on surface roughness may be minimised by a small feed rate and a large tool nose radius. A surface functionality simulation has been developed to model surface generation in precision turning; it also covers the material and tool wear, and shows how chip formation results from the cutting forces.

Cutting trials were conducted to validate the modelling and simulation, and the results show agreement between simulation and experiment. The results of the turning trials and simulations were analysed to determine the effects of process variables and tooling characteristics on surface texture and topography and on machining instability. From the research, it can be concluded that the investigation of modelling and simulation of precision surface generation in precision turning meets the proposed research objectives. Recommended future work includes improving model parameter identification, incorporating comprehensive tool wear and chip formation, and using neural network modelling in the engineering surface construction system.
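The simulated dominance of feed rate is consistent with the classical geometric model of turned surface generation. The sketch below uses the standard textbook approximations Rt = f²/(8·r) and Ra ≈ f²/(31.2·r), for feed f and tool nose radius r, not the thesis's integrated model, to show why feed rate dominates.

```python
def ideal_turning_roughness(feed_mm_rev: float, nose_radius_mm: float):
    """Classical geometric roughness for single-point turning:
    Rt = f^2 / (8 r), Ra ~= f^2 / (31.2 r), converted to micrometres."""
    rt = feed_mm_rev ** 2 / (8.0 * nose_radius_mm) * 1000.0
    ra = feed_mm_rev ** 2 / (31.2 * nose_radius_mm) * 1000.0
    return rt, ra

# Feed rate enters squared, so halving it quarters the ideal roughness,
# while doubling the nose radius only halves it.
for f in (0.05, 0.10, 0.20):            # feed, mm/rev
    for r in (0.4, 0.8):                # nose radius, mm
        rt, ra = ideal_turning_roughness(f, r)
        print(f"f={f:.2f} mm/rev, r={r:.1f} mm -> Rt={rt:5.2f} um, Ra={ra:5.2f} um")
```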

5.
Compressed Sensing in the Presence of Side Information
Rostami, Mohammad, January 2012
Reconstruction of continuous signals from a number of their discrete samples is central to digital signal processing. Digital devices can only process discrete data, so processing continuous signals requires discretization.
After discretization, the possibility of uniquely reconstructing the source signal from its samples is crucial. Classical sampling theory provides bounds on the sampling rate for unique source reconstruction, known as the Nyquist sampling rate. Recently a new sampling scheme, Compressive Sensing (CS), has been formulated for sparse signals.
CS is an active area of research in signal processing. It has revolutionized the classical sampling theorems and has provided a new scheme to sample and reconstruct sparse signals uniquely, below the Nyquist sampling rate. A signal is called (approximately) sparse when a relatively large number of its elements are (approximately) equal to zero. For the class of sparse signals, sparsity can be viewed as prior information about the source signal. CS has found numerous applications and has improved some image acquisition devices.
Interesting instances of CS arise when, apart from sparsity, side information is available about the source signal. The side information can concern the source structure, distribution, etc. Such cases can be viewed as extensions of classical CS, in which we are interested in incorporating the side information either to improve the quality of the source reconstruction or to decrease the number of samples required for accurate reconstruction.
A general CS problem can be transformed into an equivalent optimization problem. In this thesis, a special case of CS with side information about the feasible region of the equivalent optimization problem is studied. It is shown that in such cases the uniqueness and stability of the equivalent optimization problem still hold, and an efficient reconstruction method is then proposed. To demonstrate the practical value of the proposed scheme, the algorithm is applied to two real-world applications: image deblurring in optical imaging and surface reconstruction in the gradient field. Experimental results further investigate and confirm the effectiveness and usefulness of the proposed scheme.
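As background to the equivalent optimization problem mentioned above, here is a minimal sketch of plain CS reconstruction without side information: basis pursuit, i.e. minimizing the l1 norm subject to the measurement constraints, recast as a linear program. The problem sizes and data are illustrative, and this is not the thesis's side-information method.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Solve min ||x||_1 s.t. Ax = b via the standard LP reformulation
    x = u - v with u, v >= 0, minimizing sum(u) + sum(v)."""
    m, n = A.shape
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])            # A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

# A 3-sparse signal of length 80, recovered from 30 random measurements.
rng = np.random.default_rng(2)
n, m = 80, 30
x_true = np.zeros(n)
x_true[rng.choice(n, 3, replace=False)] = rng.normal(size=3)
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_hat = basis_pursuit(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))    # near zero with high probability
```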

6.
Inverse geometry: from the raw point cloud to the 3D surface: theory and algorithms
Digne, Julie, 23 November 2010
Many laser devices directly acquire 3D objects and reconstruct their surface. Nevertheless, the final reconstructed surface is usually smoothed out as a result of the scanner's internal de-noising process and the offsets between different scans. This thesis, working on results from high precision scans, adopts a deliberately conservative position: not to lose or alter any raw sample throughout the whole processing pipeline, and to attempt to visualize them all. Indeed, this is the only way to discover all surface imperfections (holes, offsets). Furthermore, since high precision data can capture the slightest surface variation, any smoothing and any sub-sampling can incur a loss of textural detail.

The thesis attempts to prove that one can triangulate the raw point cloud with almost no sample loss. It solves the exact visualization problem on large data sets of up to 35 million points composed of 300 or more different scan sweeps. Two major problems are addressed. The first is the orientation of the complete raw point set and the building of a high precision mesh. The second is the correction of the tiny scan misalignments which can cause strong high frequency aliasing and completely hamper a direct visualization.

The second development of the thesis is a general low/high frequency decomposition algorithm for any point cloud. Classic image analysis tools, the level set tree and the MSER representations, are thereby extended to meshes, yielding an intrinsic mesh segmentation method.

The underlying mathematical development focuses on an analysis of the half dozen discrete differential operators acting on raw point clouds which have been proposed in the literature. By considering the asymptotic behavior of these operators on a smooth surface, a classification by their underlying curvature operators is obtained. This analysis leads to the development of a discrete operator consistent with the mean curvature motion (the intrinsic heat equation), defining a remarkably simple and robust numerical scale space. Within this scale space, all of the above-mentioned problems (point set orientation, raw point set triangulation, scan merging, segmentation), usually addressed by separate techniques, are solved in a unified framework.
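The thesis derives its operator precisely; the following is only an illustrative sketch of one step of a projection-based smoothing of this kind, moving each raw point onto the local PCA regression plane of its radius neighbourhood. The radius and data are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def scale_space_step(points: np.ndarray, radius: float) -> np.ndarray:
    """One smoothing step: project every point onto the least-squares
    (PCA) regression plane of its radius-neighbourhood. Iterating a
    projection of this kind mimics a mean-curvature-motion scale space."""
    tree = cKDTree(points)
    out = points.copy()
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, radius)
        if len(idx) < 3:
            continue                      # not enough support for a plane
        nbrs = points[idx]
        centroid = nbrs.mean(axis=0)
        # Plane normal = direction of the smallest covariance eigenvalue.
        _, _, vt = np.linalg.svd(nbrs - centroid)
        normal = vt[-1]
        out[i] = p - np.dot(p - centroid, normal) * normal
    return out

# Illustrative use: three smoothing steps on a noisy sphere sample.
rng = np.random.default_rng(3)
pts = rng.normal(size=(2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
pts += rng.normal(scale=0.01, size=pts.shape)
for _ in range(3):
    pts = scale_space_step(pts, radius=0.2)
```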

7.
Vergleichende Untersuchungen zur Wiedergabegenauigkeit optoelektronischer berührungsloser und plastischer Abformungen weicher Gesichtsstrukturen [Comparative studies of the reproduction accuracy of optoelectronic non-contact and plastic impressions of soft facial structures]
Birkner, Luisa, 5 May 2014
Universität Leipzig, Dissertation
Problem: deformation of the facial soft tissues in supine patients, caused by the gravity- and material-weight-dependent influence of the hydrocolloid and elastomer materials used in conventional impression methods.
Aim: to compare the reproduction accuracy of soft facial structures between conventional plastic impression methods and an optical, mechanically contact-free, three-dimensional photorealistic model.
Material and methods: conventional impressions with hydrocolloid and elastomer in 20 volunteers, plus an optical scan of the midface. Study design: digitization of the plaster casts and evaluation of all STL data sets to compare the plastic and optoelectronic impressions, including evaluation of the deviations at 34 constructed points. Statistics: testing for normal distribution and equality of variances, with significance assessed by the two-sample t-test and the Wilcoxon rank-sum test.
Results: the overall impression error between the optical scan and the conventional impressions is 1.19 mm ± 0.32 mm, with variation between the materials (alginate 1.02 mm ± 0.24 mm, silicone 1.36 mm ± 0.31 mm). Significant differences between the impression materials appear at 6 of the 34 measurement points (p < 0.05). Alginate tends to give the better results and causes less soft-tissue deformation. The observed differences arise from deformation of the soft tissues during facial impression taking. At the points without statistical significance, the impression technique can be regarded as very precise. In well skeletally supported regions, silicone shows smaller deviations at some measurement points.
Conclusion: the impression material should be chosen according to the initial clinical situation. Despite the modified impression technique, the results show a clear difference between the digitally acquired three-dimensional face scan and the conventional impression methods. Optical systems are currently to be preferred; their advantages with regard to non-invasive, repeatable acquisition, more accurate and more efficient analysis, and CAD/CAM fabrication of epitheses (facial prostheses) from biocompatible materials are considerable.
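The statistical pipeline described (normality and variance checks, then a two-sample t-test or the Wilcoxon rank-sum test) could be reproduced along these lines with scipy; the per-point deviation data below are synthetic stand-ins for the two materials at one measurement point.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Synthetic per-subject deviations (mm) at one landmark, n = 20 each.
alginate = rng.normal(loc=1.02, scale=0.24, size=20)
silicone = rng.normal(loc=1.36, scale=0.31, size=20)

# Normality (Shapiro-Wilk) and equality of variances (Levene).
normal = all(stats.shapiro(x).pvalue > 0.05 for x in (alginate, silicone))
equal_var = stats.levene(alginate, silicone).pvalue > 0.05

if normal:
    # Two-sample t-test (Welch's version if variances differ).
    test = stats.ttest_ind(alginate, silicone, equal_var=equal_var)
else:
    # Wilcoxon rank-sum test as the non-parametric fallback.
    test = stats.ranksums(alginate, silicone)
print(f"p = {test.pvalue:.4f}")  # p < 0.05 -> materials differ at this point
```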

8.
Reconstruction of 3D scenes from pairs of uncalibrated images: creation of an interactive system for extracting 3D data points and investigation of automatic techniques for generating dense 3D data maps from pairs of uncalibrated images for remote sensing applications
Alkhadour, Wissam M., January 2010
Much research effort has been devoted to producing algorithms that contribute directly or indirectly to the extraction of 3D information from a wide variety of types of scenes and conditions of image capture. The research work presented in this thesis is aimed at three distinct applications in this area: interactively extracting 3D points from a pair of uncalibrated images in a flexible way; finding corresponding points automatically in high resolution images, particularly those of archaeological scenes captured from a freely moving light aircraft; and improving a correlation approach to dense disparity mapping leading to 3D surface reconstructions.
The fundamental concepts required to describe the principles of stereo vision, the camera models, and the epipolar geometry described by the fundamental matrix are introduced, followed by a detailed literature review of existing methods.
An interactive system for viewing a scene via a monochrome or colour anaglyph is presented; it allows the user to choose the level of compromise between the amount of colour and the ghosting perceived, by controlling colour saturation, and to choose the depth plane of interest. An improved method of extracting 3D coordinates from disparity values when there is significant error is presented.
Interactive methods, while very flexible, require significant effort from the user in finding and fusing corresponding points, so the thesis continues by presenting several variants of existing scale invariant feature transform methods to automatically find correspondences in uncalibrated high resolution aerial images with improved speed and memory requirements. In addition, a contribution to estimating lens distortion correction by a Levenberg-Marquardt based method is presented, including the generation of data strings for straight lines, which are essential input for estimating the correction.
The remainder of the thesis presents correlation based methods for generating dense disparity maps based on single and multiple image rectifications using sets of automatically found correspondences and demonstrates improvements obtained using the latter method. Some example views of point clouds for 3D surfaces produced from pairs of uncalibrated images using the methods presented in the thesis are included. / Al-Baath University / The appendices files and images are not available online.
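As an illustration of the colour-versus-ghosting compromise that the interactive anaglyph system exposes, here is a hedged sketch, not the thesis code, of a red-cyan anaglyph whose colour saturation is a user-controlled parameter: lowering it trades colour for reduced ghosting.

```python
import numpy as np

def anaglyph(left: np.ndarray, right: np.ndarray, saturation: float = 0.5):
    """Red-cyan anaglyph from a rectified RGB pair (float arrays in [0, 1]).
    saturation=1 keeps full colour (more ghosting); 0 gives a grey anaglyph."""
    def desaturate(img, s):
        grey = img @ np.array([0.299, 0.587, 0.114])   # luma weights
        return s * img + (1.0 - s) * grey[..., None]
    l, r = desaturate(left, saturation), desaturate(right, saturation)
    out = np.empty_like(left)
    out[..., 0] = l[..., 0]          # red channel from the left image
    out[..., 1:] = r[..., 1:]        # green and blue from the right image
    return out

# Illustrative call on random stand-ins for a rectified stereo pair.
rng = np.random.default_rng(5)
L, R = rng.random((240, 320, 3)), rng.random((240, 320, 3))
img = anaglyph(L, R, saturation=0.3)
```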