1

3D visibility analysis for visual quality assessment: approaches based on modelling tools, VGE and WebGIS / CUHK electronic theses & dissertations collection

January 2016 (has links)
In this thesis, the author explores the feasibility of 3D visibility analysis for visual quality assessment with the aid of modelling tools, virtual geographic environments (VGE) and WebGIS. Such analysis may help to establish a measurable standard for evaluating the visual environment of urban open spaces, and serve as a reference for creating settings with improved visual comfort during planning and design. In the long term, 3D visibility analysis for visual quality assessment has the potential to enable quantitative analysis of the relationship between urban open spaces and human visual perception, to provide appropriate standards for evaluating the visual environment, and to make future urban planning and design more rational. / With the development of 3D modelling software and VGE, a few previous scholars have applied them to 3D visibility analysis, exploring the representation of the urban environment in 3D and the feasibility of analysing spatial relationships together with visual factors. Considerable progress has been made with modelling software such as AutoCAD and 3ds Max and analytical platforms such as ArcGIS and VGE. Extending this earlier work, the author discusses improvements and innovations in the tools applied to visibility analysis and visual quality computation, such as Open Simulator as a VGE platform. Moreover, because they are easily accessible to entry-level programmers and the public, Google SketchUp (a modelling package) and WebGIS were also tested for their suitability for the analysis, considerably reducing the difficulty of programming development. Both SketchUp and WebGIS are well accepted by the public: SketchUp has been popularised for 3D modelling, and WebGIS has long been familiar in the form of websites, which may enable the dissemination of visibility analysis to the public. / Building on a pilot study of the progress made by past scholars, the author developed an improved method for 3D visibility analysis that mathematically derives visual factors from the spatial relationships of buildings, terrain and other geographical features. Quantitative factors such as distance, solid angle and visual field (the distribution of occupied solid angle in all directions), evaluated in a spherical coordinate system, were adopted as the basic units of visibility. Starting from a prototype space, the research also focused on several aspects possibly associated with visual effects in open spaces, including openness, enclosure and ground coverage for edges; distribution and dispersion for skylines; and the visibility of individual buildings for landmarks. For further comparison, the variation of these figures as the scale of the prototype space changed was also examined, in order to identify possible connections or trends. Moreover, experiments in 3D visibility analysis were designed and carried out on real scenes to discover the similarities and differences between prototypical spaces and reality; Piazza del Campo (Siena, Italy), Piazza San Marco (Venice, Italy) and the centre of Olomouc (Czech Republic) were selected as the first group of sites. As a complementary study, the central campus of the Chinese University of Hong Kong (CUHK) was also used as an experimental site, to compare its disparities with those classical scenes.
These serve as references for conclusions on the similarity or discrepancy among various spaces, in order to find general spatial patterns and to reveal how the visual factor values actually behave in reality. / In this thesis, the author explores the feasibility of 3D visibility analysis and visual assessment based on modelling tools, virtual geographic environments and WebGIS. Broadly speaking, this exploration should help urban planning and design to be carried out with the support of measurable, assessable attributes of urban open spaces, with a view to creating urban environments with better visual comfort. In the longer term, 3D visibility analysis and visual assessment are also a prerequisite for quantitative analysis of the relationship between urban open spaces and visual perception, providing appropriate standards for evaluating the urban environment and making future planning and design more rational. / Owing to the development of 3D modelling software and virtual geographic environments, some scholars have already made attempts at 3D visibility analysis, exploring the feasibility of analysing three-dimensional spatial relationships in the urban environment together with visual factors. Prior to this research, experiments with individual modelling packages such as AutoCAD and 3ds Max, or analytical platforms such as ArcGIS and virtual geographic environments, had made considerable progress. Building on these achievements, the author hopes on the one hand to innovate moderately in the analysis tools, and on the other to pursue possible improvements in the algorithms for visibility analysis and visual assessment. Taking ease of entry and public acceptance into account, the author combined several tools with low development barriers, such as Google SketchUp (a lightweight modelling package), Open Simulator (a virtual geographic environment platform) and WebGIS, to support visibility analysis of urban open spaces. Google SketchUp is commonly used for accessible 3D modelling, Open Simulator provides vivid simulation of reality and an interactive environment, and WebGIS offers interaction in the familiar form of websites; these tools enjoy high public penetration and low barriers to adoption, and may also make it possible to popularise visibility analysis among the public. / Regarding methodological improvements to 3D visibility analysis, the author builds on pilot studies by earlier scholars and provides reasonable improvements to the mathematical derivation of the spatial relationships produced by geographical features such as terrain and buildings. Based on a spherical coordinate system, the author uses quantitative values such as distance, solid angle and the distribution of visual angle in all directions as the basic units for measuring the visibility of urban open spaces. Starting from a prototype space, the study also examines the role of several visual effects, such as openness, enclosure and ground coverage, the distribution and dispersion of the skyline, and the visibility distribution of individual buildings, in measuring the visual performance of urban open spaces. In addition, by further comparing differences in the visibility values between spaces, the visual differences between spaces can be understood quantitatively, and comparison of parameter changes can reveal possible trends and correlations, leading to a deeper understanding of spatial visibility. / In the experiments of this thesis, the author also applies 3D visibility and visual assessment analysis to real scenes, in order to discover their similarities to, or differences from, the prototype spaces. Piazza del Campo (Siena, Italy), Piazza San Marco (Venice, Italy) and the centre of Olomouc (Olomouc, Czech Republic) served as the first group of experimental scenes, representing classic urban spaces, and the central campus of the Chinese University of Hong Kong was subsequently included as an ordinary scene for comparison, so that similarities and differences could be explored. Experiments on real scenes help relate the visual characteristics of real urban open spaces to the computed results, reveal the practical meaning of the visibility values, and allow a reasonable discussion of similarity with the prototype spaces. / Lin, Tianpeng. / Thesis Ph.D. Chinese University of Hong Kong 2016. / Includes bibliographical references (leaves 110-116). / Abstracts also in Chinese. / Title from PDF title page (viewed on 5 October 2016). / Detailed summary in vernacular field only.
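To make the solid-angle bookkeeping described in this abstract concrete, the short Python sketch below approximates the solid angle subtended by one rectangular building facade from a viewpoint and derives a crude openness value from it. The geometry, discretisation and function names are illustrative assumptions for this listing, not the method actually implemented in the thesis.

```python
import numpy as np

def facade_solid_angle(viewpoint, corner, edge_u, edge_v, n=200):
    """Approximate the solid angle (steradians) subtended by a planar rectangular facade,
    seen from `viewpoint`, by summing dOmega = cos(theta) * dA / r^2 over small patches.
    The facade is corner + s*edge_u + t*edge_v for s, t in [0, 1]."""
    viewpoint = np.asarray(viewpoint, float)
    corner = np.asarray(corner, float)
    edge_u = np.asarray(edge_u, float)
    edge_v = np.asarray(edge_v, float)
    normal = np.cross(edge_u, edge_v)
    area = np.linalg.norm(normal)
    normal /= area
    dA = area / n**2                      # area of one small patch
    s = (np.arange(n) + 0.5) / n          # patch-centre parameters along each edge
    omega = 0.0
    for si in s:                          # one strip of patches at a time
        centres = corner + si * edge_u + np.outer(s, edge_v)
        r_vec = centres - viewpoint
        r = np.linalg.norm(r_vec, axis=1)
        cos_theta = np.abs(r_vec @ normal) / r
        omega += np.sum(cos_theta * dA / r**2)
    return omega

# Example: a 20 m wide, 16 m tall facade, 30 m away, eye height 1.6 m.
omega = facade_solid_angle(viewpoint=[0.0, 0.0, 1.6],
                           corner=[-10.0, 30.0, 0.0],
                           edge_u=[20.0, 0.0, 0.0],
                           edge_v=[0.0, 0.0, 16.0])
openness = 1.0 - omega / (4.0 * np.pi)    # fraction of the view sphere left open by this facade
print(f"solid angle = {omega:.3f} sr, openness = {openness:.3f}")
```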
2

Generating Implicit Functions Model from Triangles Mesh Model by Using Genetic Algorithm

Chen, Ya-yun 09 October 2005 (has links)
Implicit function models are now widely used in fields that need 3D content, such as computer games, animation and special-effects film. However, most hardware still supports the polygon-mesh model rather than the implicit function model, so the polygon mesh remains the mainstream representation in computer graphics, and conversion between the two representations has become a new research topic. This paper presents a new method for converting a triangle mesh model into an implicit function model. The main idea is to use a binary space-partitioning tree to divide the points and patches of the triangle mesh, creating a hierarchical structure. For each leaf node of this hierarchy we generate a corresponding implicit function, with the coefficients found by a genetic algorithm. The internal nodes of the hierarchy are combined with blending operators, which make the surface smooth and continuous. The proposed method greatly reduces the amount of data, because only the coefficients of the implicit surfaces are stored, and the genetic algorithm avoids high computational complexity.
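As a rough illustration of the fitting step described here (one implicit function per leaf, with coefficients found by a genetic algorithm), the following Python sketch evolves the ten coefficients of a quadric surface to fit a point cloud. The GA operators, population sizes and the quadric form are simplifying assumptions for the sketch, not the representation or parameters used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def quadric(coef, P):
    """Evaluate a 10-coefficient quadric f(x,y,z) = c0 x^2 + c1 y^2 + c2 z^2 + c3 xy
    + c4 yz + c5 xz + c6 x + c7 y + c8 z + c9 at the points P (shape (N, 3))."""
    x, y, z = P.T
    terms = np.stack([x*x, y*y, z*z, x*y, y*z, x*z, x, y, z, np.ones_like(x)])
    return coef @ terms

def fitness(coef, P):
    # Smaller is better: mean squared residual of the implicit function at the sample points.
    return np.mean(quadric(coef, P) ** 2)

def ga_fit(P, pop=60, gens=400, sigma=0.2):
    """Toy genetic algorithm: tournament selection, blend crossover, Gaussian mutation.
    Coefficient vectors are kept at unit norm to remove the trivial all-zero solution."""
    population = rng.normal(size=(pop, 10))
    population /= np.linalg.norm(population, axis=1, keepdims=True)
    for _ in range(gens):
        scores = np.array([fitness(c, P) for c in population])
        pairs = rng.integers(0, pop, size=(pop, 2))                      # tournament selection
        winners = np.where(scores[pairs[:, 0]] < scores[pairs[:, 1]], pairs[:, 0], pairs[:, 1])
        parents = population[winners]
        mates = parents[rng.permutation(pop)]
        a = rng.random((pop, 1))                                         # blend crossover + mutation
        children = a * parents + (1 - a) * mates + rng.normal(scale=sigma, size=(pop, 10))
        children /= np.linalg.norm(children, axis=1, keepdims=True)
        children[0] = population[np.argmin(scores)]                      # elitism: keep current best
        population = children
    scores = np.array([fitness(c, P) for c in population])
    return population[np.argmin(scores)]

# Demo: points on the unit sphere; an ideal fit approaches x^2 + y^2 + z^2 - 1 = 0.
P = rng.normal(size=(400, 3))
P /= np.linalg.norm(P, axis=1, keepdims=True)
best = ga_fit(P)
print("residual:", fitness(best, P))
print("coefficients (unit norm):", np.round(best, 2))  # for a sphere, roughly proportional to [1,1,1,0,...,0,-1]
```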
3

Multidimensional Data Processing for Optical Coherence Tomography Imaging

McLean, James Patrick January 2021 (has links)
Optical Coherence Tomography (OCT) is a medical imaging technique that distinguishes itself by acquiring microscopic-resolution images in vivo at millimeter-scale fields of view. The resulting images are not only high-resolution but often multi-dimensional, capturing 3-D biological structures or temporal processes. The nature of multi-dimensional data presents a unique set of challenges to the OCT user, including acquiring, storing, and handling very large datasets, visualizing and understanding the data, and processing and analyzing the data. In this dissertation, three of these challenges are explored in depth: sub-resolution temporal analysis, 3-D modeling of fiber structures, and compressed sensing of large, multi-dimensional datasets. Exploration of these problems is followed by proposed solutions and demonstrations that rely on tools from multiple research areas, including digital image filtering, image de-noising, and sparse representation theory. By combining approaches from these fields, advanced solutions were developed that produce new and groundbreaking results. High-resolution video data showing cilia motion in unprecedented detail and scale was produced. An image processing method was used to create the first 3-D fiber model of uterine tissue from OCT images. Finally, a compressed-sensing approach was developed that is shown to guarantee high-accuracy image recovery for more complicated, clinically relevant samples than had previously been demonstrated. The culmination of these methods represents a step forward in OCT image analysis, showing that these cutting-edge tools can also be applied to OCT data and in the future be employed in a clinical setting.
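The compressed-sensing component rests on sparse representation theory. As a generic illustration (not the dissertation's actual reconstruction pipeline), the Python sketch below recovers a sparse signal from a small number of random linear measurements with the standard iterative soft-thresholding algorithm (ISTA); all sizes and the measurement model are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def ista(A, y, lam=0.05, iters=500):
    """Iterative soft-thresholding (ISTA) for min_x 0.5*||Ax - y||^2 + lam*||x||_1,
    the kind of sparse-recovery routine compressed-sensing reconstructions build on."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        z = x - grad / L                     # gradient step on the smooth term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold (L1 proximal step)
    return x

# Toy demo: recover a 20-sparse signal of length 400 from 120 noisy random measurements.
n, m, k = 400, 120, 20
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.normal(size=m)
x_hat = ista(A, y)
print("relative reconstruction error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```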
4

Interactive, Computation Assisted Design Tools

Garg, Akash January 2020 (has links)
Realistic modeling, rendering, and animation of physical and virtual shapes have matured significantly over the last few decades. Yet the creation and subsequent modeling of three-dimensional shapes remains a tedious task that requires not only artistic and creative talent, but also significant technical skill. The perfection witnessed in computer-generated feature films requires extensive manual processing and touch-ups. Every researcher working in graphics and related fields has likely experienced the difficulty of creating even a moderate-quality 3D model, whether based on a mental concept, a hand sketch, or inspiration from one or more photographs or existing 3D designs. This situation, frequently referred to as the content creation bottleneck, is arguably the major obstacle to making computer graphics as ubiquitous as it could be. Classical modeling techniques have primarily dealt with local or low-level geometric entities (e.g., points or triangles) and criteria (e.g., smoothness or detail preservation), lacking the freedom necessary to produce novel and creative content. A major unresolved challenge towards a new, unhindered design paradigm is how to support users who lack specialized skills and training in creating visually pleasing yet functional objects. Most existing geometric modeling tools are intended either for experts (e.g., computer-aided design [CAD] systems) or for modeling objects whose visual aspects are the only consideration (e.g., computer graphics modeling systems). Furthermore, rapid prototyping, brought on by technological advances in 3D printing, has drastically altered production and consumption practices; these technologies empower individuals to design and produce original objects customized to their own needs. Thus, a new generation of design tools is needed: tools that support the creation of designs within a domain's constraints, capturing the novice user's design intent while also meeting fabrication constraints, so that designs can be realized with minimal tweaking by experts. To fill this void, the premise of this thesis relies on two tenets: 1. Users benefit from an interactive design environment that allows novices to continuously explore a design space and immediately see the tradeoffs of their design choices. 2. The machine's processing power is used to assist and guide the user in maintaining constraints imposed by the problem domain (e.g., fabrication/material constraints) and in exploring feasible solutions close to their design intent. Finding the appropriate balance between interactive design tools and the computation needed for productive workflows is the problem addressed by this thesis, which makes the following contributions: 1. We take a close look at thin shells--materials whose thickness is significantly smaller than their other dimensions. Towards the goal of interactive and controllable simulation, we exploit a particular geometric insight to develop an efficient bending model for thin shells. Under isometric deformations (deformations that undergo little to no stretching), the nonlinear bending energy reduces to a cubic polynomial with a linear Hessian, which can be further approximated by a constant Hessian, providing significant speedups during simulation.
We also build upon this simple bending model and show how orthotropic materials can be modeled and simulated efficiently. 2. We study the theory of Chebyshev nets--a geometric model of woven materials using a two-dimensional net composed of inextensible yarns. The theory of Chebyshev nets sheds light on their limitations in globally covering a target surface. As it turns out, Chebyshev nets are a good geometric model for wire meshes: free-form surfaces composed of woven wires arranged in a regular grid. In the context of designing sculptures with wire mesh, we rely on the mathematical theory laid out by Hazzidakis (1879) to devise an artistically driven workflow for approximately covering a target surface with a wire mesh while globally maintaining material and fabrication constraints. This relieves the user from worrying about feasibility and allows them to focus on design. 3. Finally, we present a practical tool for the design and exploration of reconfigurables, defined as an object or collection of objects whose transformation between various states defines its functionality or aesthetic appeal (e.g., a mechanical assembly composed of interlocking pieces, a transforming folding bicycle, or a space-saving arrangement of apartment furniture). A novel space-time collision detection and response technique is presented that can be used to create an interactive workflow for managing and designing objects with various states. This work also considers a graph-based timeline during the design process instead of the traditional linear timeline, and shows both its benefits and its challenges for the design of reconfigurables.
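A minimal numerical sketch of the first contribution's key idea, namely that a constant bending Hessian turns each implicit time step into a single pre-factorized linear solve, is given below. It uses a 1-D second-difference operator as a stand-in for the shell's bending Hessian; the operator, step sizes and initial shape are assumptions for illustration, not the thesis's formulation.

```python
import numpy as np

# Toy stand-in: n nodes of a discrete rod.  D2 is a second-difference operator and
# K = D2^T D2 plays the role of the constant bending Hessian (for a shell, K would come
# from the mesh, but the point is the same: K never changes, so the implicit-step matrix
# is assembled and inverted/factorized once, and every time step is just a solve).
n, h = 200, 1e-2
D2 = np.zeros((n - 2, n))
for i in range(n - 2):
    D2[i, i:i + 3] = [1.0, -2.0, 1.0]
K = D2.T @ D2
M = np.eye(n)                                    # unit lumped mass matrix
step_matrix = np.linalg.inv(M + h * h * K)       # precomputed once (a sparse factorization in practice)

x = np.linspace(0.0, 1.0, n) + 0.05 * np.sin(8 * np.pi * np.linspace(0.0, 1.0, n))
v = np.zeros(n)
for _ in range(200):                             # implicit Euler: each step is one matrix-vector product
    v = step_matrix @ (M @ v - h * (K @ x))
    x = x + h * v
print("quadratic bending energy 0.5 * x^T K x =", 0.5 * x @ (K @ x))
```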
5

Three Dimensional Modeling of Hard Connective Tissues Using a Laser Displacement Sensor

Kanabar, Prachi 02 September 2008 (has links)
No description available.
6

Three-Dimensional Spherical Modeling of the Mantles of Mars and Ceres: Inference from Geoid, Topography and Melt History

Sekhar, Pavithra 03 April 2014 (has links)
Mars is one of the most intriguing planets in the solar system. It is the fourth terrestrial planet and is differentiated into a core, mantle and crust. The crust of Mars is divided into the Southern highlands and the Northern lowlands, and the largest volcano in the solar system, Olympus Mons, is found on the crustal dichotomy boundary. The presence of isolated volcanism on the surface suggests the importance of internal activity on the planet, and in addition to volcanism in the past there is evidence of present-day volcanic activity. Convective upwelling, including decompression melting, has remained an important contributor to the melting history of the planet. In this thesis, I investigate the production of melt in the mantle for a Newtonian rheology and compare it with the melt needed to create Tharsis. In addition to melt production, I analyze the 3D structure of the mantle for a stagnant lithosphere, varying different parameters in the Martian mantle to understand the early production of low- or high-degree structures that could explain the crustal dichotomy. This isothermal structure in the mantle contributes to the geoid and topography of the planet, and I also analyze how much the internal density contributes to the surface topography and areoid of Mars. In contrast to Mars, Ceres is a dwarf planet in the asteroid belt. Ceres is an icy body, and it is not yet clear whether it is differentiated into a core, mantle and crust; however, studies suggest that it is most likely a differentiated body whose mantle consists of ice and silicate. The presence of brucite and serpentine on the surface suggests internal activity. Because Ceres is a massive body believed to have existed since the beginning of the solar system, studying it will shed light on the conditions of the early solar system. Ceres has been of great interest to the scientific community, and its importance motivated NASA to launch the Dawn mission to study it; Dawn will collect data from the dwarf planet when it arrives in 2015. In my modeling studies, I apply a similar technique to Ceres as for Mars, focusing on the mantle convection process and on the geoid and topography. The silicate-ice mixture in the mantle gives rise to a non-Newtonian rheology that depends on the grain size of the ice particles. The geoid and topography obtained for different differentiation scenarios in my modeling can be compared with the data from the Dawn mission when it arrives at Ceres in 2015. / Ph. D.
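As a back-of-the-envelope companion to the convection modelling described above, the sketch below evaluates a thermal Rayleigh number and compares it with a critical value of order 10^3. The parameter values are round illustrative numbers assumed for the sketch, not the inputs used in this thesis.

```python
def rayleigh_number(rho, g, alpha, dT, d, kappa, eta):
    """Thermal Rayleigh number Ra = rho * g * alpha * dT * d^3 / (kappa * eta)."""
    return rho * g * alpha * dT * d**3 / (kappa * eta)

# Round, illustrative values for a Martian mantle layer (assumptions for this sketch only).
mars = dict(rho=3500.0,   # density, kg/m^3
            g=3.7,        # gravity, m/s^2
            alpha=3e-5,   # thermal expansivity, 1/K
            dT=1500.0,    # temperature contrast across the layer, K
            d=1.6e6,      # layer thickness, m
            kappa=1e-6,   # thermal diffusivity, m^2/s
            eta=1e21)     # dynamic viscosity, Pa s

Ra = rayleigh_number(**mars)
Ra_crit = 1e3             # critical Rayleigh number is of order 10^3 for simple boundary conditions
print(f"Ra = {Ra:.2e} -> {'convecting' if Ra > Ra_crit else 'conductive'}")
```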
7

Supraspinatus Musculotendinous Architecture: A Cadaveric and In Vivo Ultrasound Investigation of the Normal and Pathological Muscle

Kim, Soo Young 24 September 2009 (has links)
The purpose of the study was to investigate the static and dynamic architecture of supraspinatus throughout its volume in the normal and pathological state. The architecture was first investigated in cadaveric specimens free of any tendon pathology. Using a serial dissection and digitization method tailored for supraspinatus, the musculotendinous architecture was modeled in situ. The 3D model reconstructed in Autodesk Maya™ allowed for visualization and quantification of the fiber bundle architecture, i.e. fiber bundle length (FBL), pennation angle (PA), muscle volume (MV) and tendon dimensions. Based on attachment sites and architectural parameters, supraspinatus was found to have two architecturally distinct regions, anterior and posterior, each with three subdivisions. The findings from the cadaveric investigation served as a map and platform for developing an ultrasound (US) protocol that allowed the dynamic fiber bundle architecture to be quantified in vivo in normal subjects and in subjects with a full-thickness supraspinatus tendon tear. The architecture was studied in the relaxed state and in three contracted states (60° abduction with either neutral rotation, 80° external rotation, or 80° internal rotation). The dynamic changes in the architecture within the distinct regions of the muscle were not uniform and varied as a function of joint position. Mean FBL in the anterior region shortened significantly with contraction (p<0.05), but not in the posterior region. In the anterior region, mean PA was significantly smaller in the middle part than in the deep part (p<0.05). Comparison of the normal and pathological muscle revealed large differences in the percentage change of FBL and PA with contraction; the architectural parameter showing the largest changes with tendon pathology was PA. In sum, the results show that the static and dynamic fiber bundle architecture of supraspinatus is heterogeneous throughout the muscle volume and may influence tendon stresses. The architectural data collected in this study and the 3D muscle model can be used to develop future contractile models, and the US protocol may serve as an assessment tool to predict the functional outcome of rehabilitative exercises and surgery.
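The two architectural parameters at the heart of this study, fiber bundle length (FBL) and pennation angle (PA), reduce to simple geometry once fiber bundles are digitized as 3D point sequences. The Python sketch below computes both and their percentage change between states; the point coordinates and tendon axis are hypothetical values, not data from this study.

```python
import numpy as np

def fibre_bundle_length(points):
    """Polyline length of a digitized fibre bundle; points is an (N, 3) array from origin to insertion."""
    pts = np.asarray(points, float)
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

def pennation_angle(points, tendon_axis):
    """Angle (degrees) between the bundle's end-to-end direction and the tendon's line of action."""
    pts = np.asarray(points, float)
    fibre = pts[-1] - pts[0]
    tendon = np.asarray(tendon_axis, float)
    cos_a = np.dot(fibre, tendon) / (np.linalg.norm(fibre) * np.linalg.norm(tendon))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

# Hypothetical digitized bundle (cm) in relaxed and contracted states, and a tendon axis.
relaxed    = [[0, 0, 0], [1.5, 0.4, 0.1], [3.0, 0.9, 0.2], [4.6, 1.5, 0.2]]
contracted = [[0, 0, 0], [1.2, 0.5, 0.1], [2.4, 1.1, 0.2], [3.6, 1.8, 0.2]]
axis = [1.0, 0.0, 0.0]

fbl_r, fbl_c = fibre_bundle_length(relaxed), fibre_bundle_length(contracted)
pa_r, pa_c = pennation_angle(relaxed, axis), pennation_angle(contracted, axis)
print(f"FBL change with contraction: {100 * (fbl_c - fbl_r) / fbl_r:+.1f} %")
print(f"PA change with contraction:  {100 * (pa_c - pa_r) / pa_r:+.1f} %")
```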
8

Experimental Study of Rocking Motion of Rigid Bodies on Deformable Medium via Monocular Videogrammetry

Greenbaum, Raphael January 2014 (has links)
The study of rigid body rocking is applicable to a wide variety of structural and non-structural elements; current applications range from bridge pier and shallow footing design to hospital and industrial equipment, and even art preservation. Despite the increasing number of theoretical and simulation studies of rocking motion, few experimental studies exist. Of those that have been published, most focus on a constrained version of the complete problem, introducing modifications to the physical setup that eliminate sliding, uplift, or the three-dimensional response of the body. However, all of these phenomena may affect the response of an unrestrained rocking body. Furthermore, the majority of published experimental studies have used methods that are ill-suited to a comprehensive three-dimensional experimental analysis of the problem. The intent of this work is two-fold. First, to present a computer vision method that allows the rigid body translation and rotation time histories to be measured experimentally in three dimensions. Experimental results obtained with this method are presented to demonstrate that it achieves better than 97% accuracy when compared against National Institute of Standards and Technology traceable displacement sensors, and these results highlight important phenomena predicted by some state-of-the-art models of 3D rocking behavior. Second, to present experimental evidence of the importance of characterizing the support medium as deformable rather than rigid, as is commonly assumed. It is shown in this work that the rigid-support assumption may lead to non-conservative analysis that is unable to predict rocking motion and, in some cases, even failure.
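For context on the kind of response these experiments probe, the sketch below integrates the classical Housner model of a rigid block rocking freely on a rigid base, the very idealization whose limitations this work examines. The block dimensions, the simple sign-change impact rule and the integration scheme are assumptions of the sketch, not the thesis's model or data.

```python
import numpy as np

def simulate_free_rocking(alpha, R, theta0, g=9.81, dt=1e-4, t_end=5.0):
    """Free rocking of a rigid rectangular block on a rigid base (Housner's classical model):
    theta'' = -p^2 * sin(alpha*sgn(theta) - theta), with p^2 = 3g/(4R), and the angular
    velocity scaled by (1 - 1.5*sin(alpha)^2) whenever theta changes sign (an impact)."""
    p2 = 3.0 * g / (4.0 * R)
    eta = 1.0 - 1.5 * np.sin(alpha) ** 2         # Housner angular-velocity reduction at impact
    theta, omega, t = theta0, 0.0, 0.0
    history = []
    while t < t_end:
        s = 1.0 if theta >= 0.0 else -1.0
        omega += dt * (-p2 * np.sin(alpha * s - theta))
        theta_new = theta + dt * omega           # semi-implicit Euler step
        if theta_new * theta < 0.0:              # pivot switches corners: apply restitution
            omega *= eta
        theta = theta_new
        history.append((t, theta))
        t += dt
    return np.array(history)

# Example: block with half-diagonal R = 0.5 m and slenderness alpha = 0.25 rad,
# released from rest at 80% of its overturning angle.
hist = simulate_free_rocking(alpha=0.25, R=0.5, theta0=0.8 * 0.25)
print("peak |theta| during the final second:", np.abs(hist[hist[:, 0] > 4.0, 1]).max())
```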
9

Sensing Building Structure Using UWB Radios for Disaster Recovery

Lee, Jeong Eun 30 May 2018 (has links)
This thesis studies the problem of estimating the interior structure of a collapsed building using embedded Ultra-Wideband (UWB) radios as sensors. The two major sensing problems needed to build the mapping system are determining wall type and wall orientation. We develop sensing algorithms that determine (1) load-bearing wall composition, thickness, and location and (2) wall position within the indoor cavity, and we use extensive experimentation and measurement to develop those algorithms. To identify wall types and locations, our approach uses Received Signal Strength (RSS) measurements between pairs of UWB radios. We create an extensive database of UWB signal propagation data through various wall types and thicknesses; once the database is built, fingerprinting algorithms determine the best match between measurement data and database information. For wall mapping, we use measurements of Time of Arrival (ToA) and Angle of Arrival (AoA) between pairs of radios in the same cavity. Using these data and a novel algorithm, we demonstrate how to determine wall material type, thickness, location, and the topology of the wall. Our methodology uses experimental measurements to create the database of signal propagation through different wall materials, and also performs measurements to determine wall position in simulated scenarios. We ran the developed algorithms over the measurement data and characterized the error behavior of the solutions. The experimental test bed uses Time Domain UWB radios with a center frequency of 4.7 GHz and a bandwidth of over 3.2 GHz. The software was provided by Time Domain as well, including the Performance Analysis Tool, a ranging application, and an AoA application. For wall-type identification we use the P200 radio, and for wall mapping we built a special UWB radio with both angle and distance measurement capability using one P200 radio and one P210 radio. In our experimental design for wall identification, we varied the wall type and the distance between the radios while fixing the number of radios, the transmit power, and the number of antennas per radio. For wall mapping, we varied the locations of reference node sensors and receiver sensors on adjoining and opposite walls while fixing the cavity size, transmit power, and number of antennas per radio. As we present in the following chapters, our algorithms have very small estimation errors and can precisely identify wall types and wall positions.
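The fingerprinting step reduces to matching a measured attenuation profile against a database of profiles. The sketch below shows a nearest-neighbour version of that matching; the wall categories, distances and attenuation numbers are placeholders invented for the sketch, not values from the measurement campaign.

```python
import numpy as np

# Hypothetical fingerprint database: mean RSS attenuation (dB) through each wall type at
# several radio separations.  Every number here is a placeholder, not a measured value.
DISTANCES_M = [1.0, 2.0, 3.0, 4.0]
FINGERPRINTS = {
    "drywall, 10 cm":              [ 4.0,  5.0,  5.5,  6.0],
    "brick, 20 cm":                [12.0, 13.5, 14.0, 15.0],
    "reinforced concrete, 20 cm":  [22.0, 24.0, 25.5, 26.5],
    "reinforced concrete, 40 cm":  [34.0, 36.0, 37.5, 39.0],
}

def identify_wall(measured_attenuation_db):
    """Return the database wall whose attenuation profile is closest (Euclidean distance)."""
    m = np.asarray(measured_attenuation_db, float)
    return min(FINGERPRINTS, key=lambda wall: np.linalg.norm(m - np.asarray(FINGERPRINTS[wall])))

# A new through-wall measurement taken at the same four separations:
print(identify_wall([21.0, 24.5, 25.0, 27.0]))   # -> "reinforced concrete, 20 cm"
```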
10

Measurements and Three-Dimensional Modeling of Air Pollutant Dispersion in an Urban Street Canyon

Tsai, Meng-YU 06 June 2005 (has links)
In this study, three-dimensional (3D) airflow and pollutant dispersion were modeled under various excess wall temperatures and traffic rates using the RNG k-ε turbulence model and the Boussinesq approximation, solved numerically with the finite volume method. The street canyon is 60 m long (= L) and 20 m wide (= W). The five-story buildings on both sides of the street are about 16 m high (= H), so the canyon has an aspect ratio (AR = H/W) of 0.8 and a length-to-width ratio (L/W) of 3. Vehicle emissions were estimated from the measured traffic flow rates and modeled as banded line sources. The 3D simulations reveal that the vortex line joining the centers of the cross-sectional vortices of the street canyon meanders between the street buildings; notably, there is also a horizontal vortex within the canyon. Pollutant concentrations decline with height and are higher on the leeward side than on the windward side. The leeward-to-windward ratio of CO is related to wind velocity: for wind speeds below 0.7 m/s the ratio is 1.23, whereas for wind speeds above 1.2 m/s it is 2.03. The predicted CO concentrations generally follow the hourly zigzag traffic rate, indicating that CO is closely related to traffic emissions in the street canyon. The 3D airflow in the canyon is dominated by the wind fields both at the building tops and at the street exits, and the 3D simulations reveal an air flux about 50% higher than in 2D. Entrainment of outside air reduces pollutant concentrations, lowering CO, NOx and SO2 concentrations by about 51%, 68% and 70%, respectively. Thermal boundary layers are very thin; entrainment of outside air increases, and pollutant concentrations decrease, with stronger heating. For ΔT = 5 K, the upward velocity on the leeward side increases by about 10% and the downward velocity on the windward side decreases by about 28%; furthermore, the simulated average lateral inflow speed increases by about 100% compared with ΔT = 0 K. Hence, the pollutant concentrations at ΔT = 5 K are only 50% of those without heating. The simulations were compared with measurements in the street canyon. Without heating, the averaged simulated concentrations are about 11~24% and 22~36% lower than the measurements for CO and NOx, respectively. With heating and without outside traffic sources, the averaged simulated concentrations at ΔT = 2 K are 29~36% lower than the measurements. Even at ΔT = 5 K, the concentrations are only about 54% of those without heating, because buoyancy enhances pollutant dilution by entraining more outside air into the canyon. However, when traffic emissions beyond the two ends of the canyon were considered, the simulated CO concentrations are 23% and 19% higher than those without outside traffic sources at ΔT = 0 K and ΔT = 2 K, respectively. Traffic-produced turbulence (TPT) enhances the turbulent kinetic energy and the mixing of temperature and admixtures in the canyon. Although the simulated means with the TPT effect agree better with the measured means than those without it, the average reduction of CO concentration by TPT is only about 5% at a given height and heating condition. Factors behind the differences between this work and other studies are addressed and explained.
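As a small arithmetic aside on the banded line sources mentioned above, the sketch below converts a traffic count and a per-vehicle emission factor into a line-source strength; the traffic rate and emission factor are assumed round numbers, not the values measured in this study.

```python
def line_source_strength(vehicles_per_hour, emission_factor_g_per_veh_km):
    """CO line-source strength in grams per metre of street per second."""
    grams_per_vehicle_metre = emission_factor_g_per_veh_km / 1000.0
    vehicles_per_second = vehicles_per_hour / 3600.0
    return vehicles_per_second * grams_per_vehicle_metre

# Assumed round numbers for illustration: 1800 veh/h and 5 g CO per vehicle-km.
Q = line_source_strength(1800, 5.0)
print(f"Q = {Q:.2e} g m^-1 s^-1; over the 60 m canyon that is {Q * 60:.3f} g/s in total")
```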
