1

3D visibility analysis for visual quality assessment: approaches based on modelling tools, VGE and WebGIS / CUHK electronic theses & dissertations collection

January 2016 (has links)
In this thesis, the author explores the feasibility of 3D visibility analysis for visual quality assessment with the aid of modelling tools, virtual geographic environments (VGE) and WebGIS. Such analysis may help to establish a measurable standard for evaluating the visual environment of urban open spaces, and serve as a reference for creating more visually comfortable atmospheres in planning and design. In the long term, 3D visibility analysis for visual quality assessment has the potential to enable quantitative analysis of the relationship between urban open spaces and human visual perception, to provide an appropriate standard for evaluating the visual environment, and to put future urban planning and design on a more rational footing. / With the development of 3D modelling software and VGE, a few previous scholars have applied them to 3D visibility analysis, exploring the representation of urban environments in 3D and the feasibility of analysing spatial relationships with visual factors. Considerable progress has been made using modelling software such as AutoCAD and 3ds Max, and analytical platforms such as ArcGIS and VGE. Extending this earlier work, the author discusses improvements and innovations in the tools applied to visibility analysis and visual quality computation, such as OpenSimulator as a VGE platform. Moreover, because they are easily accessible to entry-level programmers and the public, Google SketchUp (a modelling package) and WebGIS were also tested for suitability, considerably reducing the difficulty of programming development. Both SketchUp and WebGIS are well accepted by the public: SketchUp has been popularised for 3D modelling, and WebGIS has long been familiar in the form of websites, which may enable the dissemination of visibility analysis to the public. 
/ Starting from a pilot study building on the research of past scholars, the author developed an improved method for 3D visibility analysis by mathematically deriving visual factors from the spatial relationships of buildings, terrain and other geographical features. Quantitative factors such as distance, solid angle and visual field (the distribution of occupied solid angle over all directions), evaluated in a spherical coordinate system, were adopted as the basic units of visibility. Starting from a prototype space, the research also examined several aspects possibly associated with visual effects in open spaces: openness, enclosure and ground coverage for edges; distribution and dispersion for skylines; and the visibility of individual buildings for landmarks. For further comparison, the variation of these figures as the scale of the prototype space changed was also recorded, in order to identify possible connections or trends. Moreover, experiments in 3D visibility analysis were designed and carried out on real scenes to discover similarities and differences between prototypical spaces and reality; Piazza del Campo (Siena, Italy), Piazza San Marco (Venice, Italy) and the centre of Olomouc (Czech Republic) were selected as the first group of candidates. As a complementary study, the central campus of the Chinese University of Hong Kong (CUHK) was also used as an experimental site, for comparison with those classical scenes. These serve as references for drawing conclusions about the similarity or discrepancy among various spaces, in order to identify general spatial patterns and to reveal how visual factor values actually behave in reality. 
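The solid-angle factor mentioned above can be illustrated with a small sketch: the solid angle a rectangular facade subtends at an on-axis viewpoint, computed both from the known closed-form expression and by direct numerical integration. The facade dimensions and viewing distance are hypothetical, and this is not the thesis's own implementation:

```python
import math

def rect_solid_angle(a, b, d):
    """Exact solid angle (steradians) of a 2a x 2b rectangle seen from a
    point at distance d on the axis through its centre."""
    return 4.0 * math.atan((a * b) / (d * math.sqrt(a * a + b * b + d * d)))

def rect_solid_angle_numeric(a, b, d, n=400):
    """Numerical cross-check: integrate the projected area element
    d*dA/r^3 over the rectangle with a midpoint rule."""
    total = 0.0
    dx, dy = 2 * a / n, 2 * b / n
    for i in range(n):
        x = -a + (i + 0.5) * dx
        for j in range(n):
            y = -b + (j + 0.5) * dy
            r = math.sqrt(x * x + y * y + d * d)
            total += d / r ** 3 * dx * dy
    return total

# Hypothetical 30 m wide x 20 m tall facade viewed from 25 m away:
exact = rect_solid_angle(15.0, 10.0, 25.0)
approx = rect_solid_angle_numeric(15.0, 10.0, 25.0)
```

Summing such contributions over all visible surfaces, per direction, would give the visual-field distribution the abstract describes.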
/ Lin, Tianpeng. / Thesis Ph.D. Chinese University of Hong Kong 2016. / Includes bibliographical references (leaves 110-116). / Abstracts also in Chinese. / Title from PDF title page (viewed on 05, October, 2016). / Detailed summary in vernacular field only.
2

The Venetian Galley of Flanders: From Medieval (2-Dimensional) Treatises to 21st Century (3-Dimensional) Model

Higgins, Courtney Rosali 2012 May 1900 (has links)
Nautical archaeologists and scholars often try to reconstruct how ships were built and maneuvered. Because older wooden vessels are so fragile, little archaeological evidence usually remains to aid these studies, and researchers must supplement what little they have with other resources, such as texts. By using computer programs to synthesize and enhance the information in the texts, scholars can better understand a vessel and explore questions that even hull remains may not be able to address. During the High to Late Middle Ages, Venice was a key city for trade and commerce. Its location on the Adriatic Sea connected merchants throughout mainland Europe and the Mediterranean. Since its founding in the early Middle Ages, Venice has been tied to the sea, leading to a long history of seafaring and shipbuilding. By the end of the Middle Ages, Venice had established several trade routes throughout the Mediterranean and Black Seas, and one long sea route into the Atlantic, to Lisbon, Flanders, and London. Although no archaeological evidence of these galleys has been found, several contemporary texts describe the merchant galleys of the 15th century. Two of these texts, dating to the first half of the 15th century, discuss the dimensions of the galley: the book of Michael of Rhodes and the book of Giorgio "Trombetta" da Modone. Perhaps complementary copies of the same original, these texts contain enough information to reconstruct a 3-dimensional model of the hull of the galley of Flanders, in this case using off-the-shelf software (Rhinoceros). From this computer model the vessel can then be analyzed volumetrically to better understand the hull's capacity and how the ship was laden.
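The volumetric step can be sketched with the standard naval-architecture approach: integrate immersed cross-section areas at evenly spaced stations along the hull with Simpson's rule. The station areas below are hypothetical, not values from the reconstructed galley:

```python
def simpson_volume(areas, spacing):
    """Displaced volume from cross-sectional areas at evenly spaced
    stations, via Simpson's rule (needs an odd number of stations)."""
    if len(areas) % 2 == 0:
        raise ValueError("Simpson's rule needs an odd number of stations")
    s = areas[0] + areas[-1]
    for i, a in enumerate(areas[1:-1], start=1):
        s += (4 if i % 2 == 1 else 2) * a  # alternating Simpson weights
    return s * spacing / 3.0

# Hypothetical section areas (m^2) at 11 stations spaced 4 m apart:
areas = [0.0, 1.2, 2.8, 4.1, 4.9, 5.2, 4.9, 4.0, 2.6, 1.1, 0.0]
volume = simpson_volume(areas, 4.0)  # displaced volume in m^3
```

A CAD package such as Rhinoceros computes this directly from the surface model; the sketch shows the same idea on tabulated sections.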
3

Generating Implicit Functions Model from Triangles Mesh Model by Using Genetic Algorithm

Chen, Ya-yun 09 October 2005 (has links)
The implicit function model is now widely applied in fields that need 3D content, such as computer games, animation, and special-effects film. So far, most hardware still supports the polygon-mesh model rather than the implicit function model, so the polygon mesh remains the mainstream representation in computer graphics. Translation between the two representations has therefore become a research topic in its own right. This paper presents a new method for translating a triangle mesh model into an implicit function model. The main idea is to use a binary space-partitioning (BSP) tree to divide the points and patches of the triangle mesh, creating a hierarchical structure. For each leaf node of this hierarchy, a corresponding implicit function is generated by a genetic algorithm. The internal nodes of the hierarchy are then combined with blending operators, which make the surface smooth and continuous. The proposed method greatly reduces the amount of data, because only the coefficients of the implicit surfaces are stored, and the genetic algorithm avoids high computational complexity.
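As a toy illustration of fitting an implicit function to leaf-node points with a genetic algorithm, the sketch below evolves the single coefficient r² of a sphere f(x,y,z) = x²+y²+z² − r². The population sizes, operators, and sample points are hypothetical, far simpler than the paper's general per-leaf functions:

```python
import random
random.seed(1)

def fitness(r2, points):
    # Sum of squared residuals of f(p) = |p|^2 - r2 over surface samples.
    return sum((px * px + py * py + pz * pz - r2) ** 2 for px, py, pz in points)

def evolve(points, pop_size=30, gens=60):
    """Toy GA: selection keeps the fitter half, children come from
    averaging crossover plus Gaussian mutation."""
    pop = [random.uniform(0.0, 10.0) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda r2: fitness(r2, points))
        parents = pop[:pop_size // 2]            # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            children.append(0.5 * (a + b) + random.gauss(0.0, 0.1))
        pop = parents + children
    return min(pop, key=lambda r2: fitness(r2, points))

# Points sampled from a sphere of radius 2 (so the ideal r^2 is 4):
pts = [(2, 0, 0), (0, 2, 0), (0, 0, 2), (-2, 0, 0), (0, -2, 0), (0, 0, -2)]
best_r2 = evolve(pts)
```

The real method evolves full coefficient vectors per leaf and blends the results up the BSP hierarchy; the sketch only shows the evolutionary loop.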
4

Multidimensional Data Processing for Optical Coherence Tomography Imaging

McLean, James Patrick January 2021 (has links)
Optical Coherence Tomography (OCT) is a medical imaging technique distinguished by acquiring microscopic-resolution images in vivo over millimeter-scale fields of view. The resulting images are not only high-resolution but often multi-dimensional, capturing 3-D biological structures or temporal processes. Multi-dimensional data presents a unique set of challenges to the OCT user, including acquiring, storing, and handling very large datasets; visualizing and understanding the data; and processing and analyzing the data. In this dissertation, three of these challenges are explored in depth: sub-resolution temporal analysis, 3-D modeling of fiber structures, and compressed sensing of large, multi-dimensional datasets. Exploration of these problems is followed by proposed solutions and demonstrations which rely on tools from multiple research areas, including digital image filtering, image de-noising, and sparse representation theory. Combining approaches from these fields, advanced solutions were developed to produce new and groundbreaking results. High-resolution video data showing cilia motion in unprecedented detail and scale was produced. An image processing method was used to create the first 3-D fiber model of uterine tissue from OCT images. Finally, a compressed sensing approach was developed which is shown to guarantee high-accuracy image recovery of more complicated, clinically relevant samples than had previously been demonstrated. Together these methods represent a step forward in OCT image analysis, showing that these cutting-edge tools can be applied to OCT data and could in the future be employed in a clinical setting.
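The sparse-representation idea underlying the compressed-sensing work can be shown in miniature with the l1 proximal operator (soft thresholding), the basic building block of iterative sparse-recovery solvers. The signal values below are hypothetical, and this is a one-line illustration, not the dissertation's reconstruction pipeline:

```python
def soft_threshold(y, lam):
    """Proximal operator of the l1 norm: the closed-form minimiser of
    0.5*(x - y)^2 + lam*|x|, applied element-wise. Small coefficients
    are zeroed; large ones shrink toward zero by lam."""
    return [max(abs(v) - lam, 0.0) * (1 if v > 0 else -1 if v < 0 else 0)
            for v in y]

# A sparse signal corrupted by small perturbations:
clean = [0.0, 0.0, 3.0, 0.0, -2.5, 0.0, 0.0, 1.8]
noise = [0.1, -0.2, 0.15, 0.05, -0.1, 0.2, -0.05, 0.1]
noisy = [v + d for v, d in zip(clean, noise)]
denoised = soft_threshold(noisy, 0.25)
```

Thresholding recovers the support of the sparse signal: entries that were zero in `clean` return to zero, while the large coefficients survive (slightly shrunk). Compressed sensing iterates this operator inside a measurement-consistency loop.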
5

Interactive, Computation Assisted Design Tools

Garg, Akash January 2020 (has links)
Realistic modeling, rendering, and animation of physical and virtual shapes have matured significantly over the last few decades. Yet the creation and subsequent modeling of three-dimensional shapes remains a tedious task which requires not only artistic and creative talent, but also significant technical skill. The perfection witnessed in computer-generated feature films requires extensive manual processing and touch-ups. Every researcher working in graphics and related fields has likely experienced the difficulty of creating even a moderate-quality 3D model, whether based on a mental concept, a hand sketch, or inspirations from one or more photographs or existing 3D designs. This situation, frequently referred to as the content creation bottleneck, is arguably the major obstacle to making computer graphics as ubiquitous as it could be. Classical modeling techniques have primarily dealt with local or low-level geometric entities (e.g., points or triangles) and criteria (e.g., smoothness or detail preservation), lacking the freedom necessary to produce novel and creative content. A major unresolved challenge towards a new unhindered design paradigm is how to support the design process to create visually pleasing and yet functional objects by users who lack specialized skills and training. Most of the existing geometric modeling tools are intended either for use by experts (e.g., computer-aided design [CAD] systems) or for modeling objects whose visual aspects are the only consideration (e.g., computer graphics modeling systems). Furthermore, rapid prototyping, brought on by technological advances in 3D printing, has drastically altered production and consumption practices. These technologies empower individuals to design and produce original objects, customized according to their own needs. 
Thus, a new generation of design tools is needed: tools that support the creation of designs within a domain's constraints, that capture the novice user's design intent, and that meet fabrication constraints so the designs can be realized with minimal tweaking by experts. To fill this void, the premise of this thesis relies on two tenets: 1. Users benefit from an interactive design environment that lets novices continuously explore a design space and immediately see the tradeoffs of their design choices. 2. The machine's processing power is used to assist and guide the user in maintaining constraints imposed by the problem domain (e.g., fabrication or material constraints) and in exploring feasible solutions close to their design intent. Finding the appropriate balance between interactive design tools and the computation needed for productive workflows is the problem addressed by this thesis. This thesis makes the following contributions: 1. We take a close look at thin shells, materials whose thickness is significantly smaller than their other dimensions. Towards the goal of interactive and controllable simulations, we exploit a particular geometric insight to develop an efficient bending model for thin shells. Under isometric deformations (deformations with little to no stretching), the nonlinear bending energy reduces to a cubic polynomial with a linear Hessian; this linear Hessian can be further approximated by a constant one, providing significant speedups during simulation. We also build on this simple bending model and show how orthotropic materials can be modeled and simulated efficiently. 2. We study the theory of Chebyshev nets, a geometric model of woven materials as a two-dimensional net of inextensible yarns. The theory sheds light on their limitations in globally covering a target surface. 
As it turns out, Chebyshev nets are a good geometric model for wire meshes: free-form surfaces composed of woven wires arranged in a regular grid. In the context of designing sculptures with wire mesh, we rely on the mathematical theory laid out by Hazzidakis (1879) to derive an artistically driven workflow for approximately covering a target surface with a wire mesh while globally maintaining material and fabrication constraints. This frees the user from worrying about feasibility and allows them to focus on design. 3. Finally, we present a practical tool for the design and exploration of reconfigurables: objects, or collections of objects, whose transformation between various states defines their functionality or aesthetic appeal (e.g., a mechanical assembly of interlocking pieces, a folding bicycle, or a space-saving arrangement of apartment furniture). A novel space-time collision detection and response technique is presented that supports an interactive workflow for managing and designing objects with multiple states. This work also adopts a graph-based timeline during the design process, instead of the traditional linear timeline, and shows both its benefits and its challenges for the design of reconfigurables.
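The hinge-based bending model in the first contribution can be sketched minimally with the classic discrete-shells hinge energy, E = k·(θ − θ̄)²·|e|²/(A₁ + A₂), evaluated on one pair of triangles. The vertex positions are hypothetical and this is the standard textbook form, not necessarily the thesis's exact formulation:

```python
import math

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def cross(a, b): return (a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def norm(a): return math.sqrt(dot(a, a))

def hinge_bending_energy(x0, x1, x2, x3, rest_angle=0.0, stiffness=1.0):
    """Bending energy of one hinge: triangles (x0,x1,x2) and (x1,x0,x3)
    sharing edge (x0,x1).  theta is the angle between face normals."""
    e = sub(x1, x0)
    n1 = cross(sub(x2, x0), e)   # unnormalised normal of first triangle
    n2 = cross(e, sub(x3, x0))   # unnormalised normal of second triangle
    cos_t = dot(n1, n2) / (norm(n1) * norm(n2))
    theta = math.acos(max(-1.0, min(1.0, cos_t)))
    a1, a2 = 0.5 * norm(n1), 0.5 * norm(n2)   # triangle areas
    return stiffness * (theta - rest_angle) ** 2 * dot(e, e) / (a1 + a2)

# A flat hinge stores no bending energy; folding one flap up by 90 deg does:
flat = hinge_bending_energy((0,0,0), (1,0,0), (0.5,1,0), (0.5,-1,0))
folded = hinge_bending_energy((0,0,0), (1,0,0), (0.5,1,0), (0.5,0,1))
```

The speedup the thesis describes comes from replacing the nonlinear θ dependence with a cubic polynomial whose Hessian is linear (then approximately constant) under near-isometric motion.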
6

Three-Dimensional Spherical Modeling of the Mantles of Mars and Ceres: Inference from Geoid, Topography and Melt History

Sekhar, Pavithra 03 April 2014 (has links)
Mars is one of the most intriguing planets in the solar system. It is the fourth terrestrial planet and is differentiated into a core, mantle and crust. The crust of Mars is divided into the Southern highlands and the Northern lowlands, and the largest volcano in the solar system, Olympus Mons, sits on the crustal dichotomy boundary. The presence of isolated volcanism on the surface points to the importance of internal activity on the planet, and in addition to past volcanism there is evidence of present-day volcanic activity. Convective upwelling, including decompression melting, has remained an important contributing factor in the melting history of the planet. In this thesis, I investigate the production of melt in the mantle for a Newtonian rheology and compare it with the melt needed to create Tharsis. In addition to melt production, I analyze the 3D structure of the mantle for a stagnant lithosphere, varying different parameters of the Martian mantle to understand how low- or high-degree structures could have been produced early on to explain the crustal dichotomy. This isothermal structure in the mantle contributes to the geoid and topography of the planet, and I also analyze how much the internal density contributes to the surface topography and areoid of Mars. In contrast to Mars, Ceres is a dwarf planet in the asteroid belt. Ceres is an icy body, and it is not yet clear whether it is differentiated into a core, mantle and crust; however, studies suggest that it is most likely a differentiated body, with a mantle consisting of ice and silicate. The presence of brucite and serpentine on the surface suggests internal activity. Because Ceres is a massive body believed to have existed since the beginning of the solar system, studying it will shed light on the conditions of the early solar system. Ceres has been of great interest to the scientific community, and its importance motivated NASA to launch the Dawn mission to study it. 
Dawn will collect data from the dwarf planet when it arrives in 2015. In my modeling studies, I apply a similar technique to Ceres as to Mars, focusing on the mantle convection process and on the geoid and topography. The silicate-ice mixture in the mantle gives rise to a non-Newtonian rheology that depends on the grain size of the ice particles. The geoid and topography obtained for the different differentiation scenarios in my modeling can be compared with data from the Dawn mission when it arrives at Ceres in 2015. / Ph. D.
7

Three Dimensional Modeling of Hard Connective Tissues Using a Laser Displacement Sensor

Kanabar, Prachi 02 September 2008 (has links)
No description available.
8

One Dimensional Computer Modeling of a Lithium-Ion Battery

Borakhadikar, Ashwin S. 05 June 2017 (has links)
No description available.
9

One Dimensional Analysis Program for Scramjet and Ramjet Flowpaths

Tran, Kathleen 03 February 2011 (has links)
One-dimensional modeling of dual-mode scramjet and ramjet flowpaths is a useful tool for scramjet conceptual design and wind tunnel testing. In this thesis, modeling tools that enable detailed analysis of the flow physics within the combustor are developed as part of a new one-dimensional MATLAB-based model named VTMODEL. VTMODEL divides a ramjet or scramjet flow path into four major components: inlet, isolator, combustor, and nozzle. The inlet module provides two options for one-dimensional supersonic inlet calculations: a correlation from MIL Spec 5007D, and a kinetic energy efficiency correlation. The kinetic energy efficiency correlation also lets the user account for inlet heat transfer through a total temperature term in the pressure recovery equation. The isolator module likewise provides two options for calculating the pressure rise and the isolator shock train: a combined Fanno flow and oblique shock system, and a rectangular shock train correlation. The combustor module gives the user two options for combustion calculations. The first is an equilibrium calculation with a "growing combustion sphere" combustion efficiency model, which can be used with any fuel. The second is a non-equilibrium, reduced-order hydrogen calculation that uses a mixing correlation based on Mach number and distance from the fuel injectors; this option is only usable for combustion with hydrogen fuel. Using these combustion reaction models, the combustor flow model calculates changes in Mach number and flow properties due to the combustion process and area change with an influence coefficient method, which can also account for heat transfer, changes in the specific heat ratio, changes in enthalpy, and other thermodynamic properties. The thesis provides a description of the flow models that were assembled to create VTMODEL. 
In calculated examples, flow predictions from VTMODEL were compared with experimental data obtained in the University of Virginia supersonic combustion wind tunnel, and with reported results from the scramjet models SSCREAM and RJPA. Results compared well with the experiment and models, and showed the capabilities provided by VTMODEL. / Master of Science
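The MIL Spec 5007D inlet option mentioned above has a simple closed form; the sketch below shows the correlation as it is commonly quoted (total-pressure recovery versus flight Mach number), with a hypothetical Mach 3 evaluation. This is an illustration of the standard correlation, not code from VTMODEL:

```python
def mil_spec_recovery(mach):
    """Inlet total-pressure recovery Pt2/Pt0 from the MIL-E-5007D
    correlation, a common default in one-dimensional ramjet/scramjet
    cycle codes."""
    if mach <= 1.0:
        return 1.0                          # no supersonic shock losses
    if mach <= 5.0:
        return 1.0 - 0.075 * (mach - 1.0) ** 1.35
    return 800.0 / (mach ** 4 + 935.0)      # high-Mach branch of the spec

recovery_m3 = mil_spec_recovery(3.0)        # recovery at Mach 3, about 0.81
```

A one-dimensional code applies this ratio to the freestream total pressure to set the inlet-exit state before the isolator calculation.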
10

Supraspinatus Musculotendinous Architecture: A Cadaveric and In Vivo Ultrasound Investigation of the Normal and Pathological Muscle

Kim, Soo Young 24 September 2009 (has links)
The purpose of the study was to investigate the static and dynamic architecture of supraspinatus throughout its volume in the normal and pathological state. The architecture was first investigated in cadaveric specimens free of any tendon pathology. Using a serial dissection and digitization method tailored for supraspinatus, the musculotendinous architecture was modeled in situ. The 3D model reconstructed in Autodesk Maya allowed for visualization and quantification of the fiber bundle architecture, i.e., fiber bundle length (FBL), pennation angle (PA), muscle volume (MV), and tendon dimensions. Based on attachment sites and architectural parameters, the supraspinatus was found to have two architecturally distinct regions, anterior and posterior, each with three subdivisions. The findings from the cadaveric investigation served as a map and platform for the development of an ultrasound (US) protocol that allowed the dynamic fiber bundle architecture to be quantified in vivo in normal subjects and in subjects with a full-thickness supraspinatus tendon tear. The architecture was studied in the relaxed state and in three contracted states (60º abduction with either neutral rotation, 80º external rotation, or 80º internal rotation). The dynamic changes in the architecture within the distinct regions of the muscle were not uniform and varied as a function of joint position. Mean FBL in the anterior region shortened significantly with contraction (p<0.05), but not in the posterior region. In the anterior region, mean PA was significantly smaller in the middle part than in the deep part (p<0.05). Comparison of the normal and pathological muscle revealed large differences in the percentage change of FBL and PA with contraction; the architectural parameter that showed the largest changes with tendon pathology was PA. 
In sum, the results showed that the static and dynamic fiber bundle architecture of supraspinatus is heterogeneous throughout the muscle volume and may influence tendon stresses. The architectural data collected in this study and the 3D muscle model can be used to develop future contractile models. The US protocol may serve as an assessment tool to predict the functional outcome of rehabilitative exercises and surgery.
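The two parameters at the heart of the study, FBL and PA, reduce to simple vector geometry on digitised fibre endpoints. A minimal sketch with hypothetical coordinates, not the study's actual Maya-based pipeline:

```python
import math

def fiber_metrics(origin, insertion, tendon_dir):
    """Fiber bundle length (distance between digitised endpoints) and
    pennation angle (angle between the fibre and the tendon axis, deg)."""
    f = [b - a for a, b in zip(origin, insertion)]
    fbl = math.sqrt(sum(c * c for c in f))
    t_len = math.sqrt(sum(c * c for c in tendon_dir))
    cos_p = sum(a * b for a, b in zip(f, tendon_dir)) / (fbl * t_len)
    pa_deg = math.degrees(math.acos(max(-1.0, min(1.0, cos_p))))
    return fbl, pa_deg

# Hypothetical digitised points (cm): a 5 cm fibre rising at 30 deg
# to a tendon axis along x.
fbl, pa = fiber_metrics((0.0, 0.0, 0.0),
                        (math.cos(math.radians(30)) * 5.0,
                         math.sin(math.radians(30)) * 5.0, 0.0),
                        (1.0, 0.0, 0.0))

# Percentage shortening between relaxed and contracted states,
# e.g. a relaxed 5.0 cm bundle measuring 4.2 cm on contraction:
shortening = 100.0 * (5.0 - 4.2) / 5.0
```

The regional comparisons in the abstract (anterior vs. posterior, middle vs. deep) are statistics over many such per-bundle measurements.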
