11

Surface profiling of micro-scale structures using partial differential equations

Gonzalez Castro, Gabriela, Spares, Robert, Ugail, Hassan, Whiteside, Benjamin R., Sweeney, John January 2010 (has links)
No abstract available.
12

Building and Tree Parameterization in Partially Occluded 2.5D DSM Data / Byggnads- och trädparametrisering i halvt skymda 2.5D digitala höjdmodeller

Källström, Johan January 2015 (has links)
Automatic 3D building reconstruction has been a hot research area, yet the task is still largely carried out manually today. Automating building reconstruction enables more applications where up-to-date information is of great importance. This thesis proposes a system for extracting parametric buildings and trees from dense aerial stereo image data. The method developed for tree identification and parameterization is an entirely new approach which has yielded good results. The focus has been on extracting the data in such a way that small flying platforms can use it for navigational purposes; the degree of simplification is therefore high. The building parameterization starts by identifying roof faces through region growing from random seeds in the digital surface model (DSM) until a coverage threshold is met. For each roof face, a plane is fitted using a least-squares approach. The actual parameterization begins with calculating the intersections between the roof faces. Given the nature of 2.5D DSM data there is no possibility to perform wall fitting; therefore all walls are constructed with a 2D line Hough transform applied to the border data of the roof faces. The tree parameterization is done by searching for roof-face topologies resembling the signature of a tree. For each possible tree topology, a second-degree polynomial surface is fitted to the DSM data covered by the faces in the topology. By examining the parameters of the fitted polynomial it is then possible to determine whether it is a tree or not. All extraction steps were implemented and evaluated in Matlab, and all algorithms are described, discussed and motivated in the thesis.
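The per-face plane fit described above can be sketched in a few lines. The thesis implementation was in Matlab; this is a minimal NumPy sketch with a hypothetical `fit_plane` helper, not the author's code — it fits z = a·x + b·y + c to the DSM points of one roof face by ordinary least squares:

```python
import numpy as np

def fit_plane(points):
    """Fit z = a*x + b*y + c to an (N, 3) array of DSM points
    belonging to one roof face, by ordinary least squares."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

# Synthetic roof face sampled from z = 0.5*x - 0.2*y + 10 (noise-free)
xs, ys = np.meshgrid(np.arange(5.0), np.arange(5.0))
pts = np.column_stack([xs.ravel(), ys.ravel(),
                       0.5 * xs.ravel() - 0.2 * ys.ravel() + 10.0])
a, b, c = fit_plane(pts)
```

Intersecting two such fitted planes then yields the ridge lines used in the parameterization step.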
13

Industriellt inspirerad A-projektering med parametriserade rumsobjekt : En studie över hur projekteringen kan gå från ETO-processer till processer grundade på masskundsanpassning / Industrially inspired A-planning using parameterized room objects

Östman, Kristoffer January 2017 (has links)
The digital wave that has engulfed the construction industry has in many ways changed how consultants work. The new working methods have created new conditions and opportunities for how architectural planning in construction can be conducted. Traditional planning is a project-oriented ETO process that can be compared with the manufacturing of complex one-of-a-kind products (Jensen, 2014). Many consultants' business models based on this process are criticized for not encouraging the reuse of solutions or the development of working methods (Lidelöw, Stehn, Lessing, & Engström, 2015). The manufacturing industry instead uses processes based on the concept of mass customization, whose purpose is to improve the flexibility of the final product while preserving standardization and economies of scale in the components (Jensen, 2014). Modularization and configuration are central to mass customization, and these concepts, along with automation, formed the basis of the study. The concept intended to incorporate these influences into traditional A-planning is called room objects. Room objects are parametrically driven objects that collect furnishings and equipment as modules in a BIM environment. In this study the room objects have been developed as product families with the intention and ability to meet underlying needs (Jiao, Simpson, & Siddique, 2007). The needs consisted of room layouts gathered from reference projects and regulations, and were identified through case studies. They were then revised in stages to ultimately yield the design parameters used in the construction of the room objects. In this thesis, the development of the room objects has been limited to WC environments. The technical use of the objects has been studied within the room-object concept, and the result of the study is presented as a methodology for their use and management inside Autodesk Revit.
A large part of the workflow consists of tools in the form of scripts developed in the software Dynamo. The tools allow some operations, and thus parts of the planning, to be automated. They also enable the creation of configurators outside of Revit that facilitate configuration of the room objects. The analysis and evaluation of the flexibility and usability of the room objects shows that room objects as a concept have the potential to work in the A-planning phase. Within the scope of this work, the flexibility and modularity of the room objects could meet 97% of the designs given by the reference projects and 100% of the designs given by the regulations. The limits of the adjustment possibilities lie in complex room shapes that require an advanced parametric basis. The usability of the objects proved to be limited by technical knowledge: configuring the room objects requires insight into their parametric construction, modifying them requires an understanding of parametric modelling at family level in Revit, and altering the scripts demands knowledge of visual programming. There is potential to solve or manage these limitations through technical means and a suitable project organization. How the concept can be implemented in terms of process should be investigated further. This study also discusses how the project group can be organized in order to fully utilize the benefits room objects can bring in terms of time savings, experience feedback and quality assurance.
14

En ny metod för att beräkna impuls- och värmeflöden vid stabila förhållanden / A new method for calculating momentum and heat fluxes under stable conditions

Belking, Anna January 2004 (has links)
De Bruin and Hartogensis have proposed a new method for determining the momentum flux and the sensible heat flux under stable conditions. The method assumes that the normalized standard deviations of the longitudinal wind component and of temperature are approximately constant, so only the mean wind, the mean temperature and their standard deviations are needed for the calculations. The method is tested in this study with data from Labans kvarnar on Gotland in the Baltic Sea and from Östergarnsholm, situated 4 km off Gotland. Labans kvarnar represents fluxes over land and Östergarnsholm fluxes over sea. The constants De Bruin and Hartogensis found, Cu = 2.5 for wind speed and CT = 2.3 for temperature, gave very little scatter in their flux calculations; their data were measured over a very flat grassland site in Kansas, USA. Here, different statistical measures have been tested to obtain values for the constants: the mean, median and mode of the normalized standard deviations of each quantity have been calculated. For land conditions in this study the values are slightly higher, Cu = 2.6 and CT = 2.6, than those De Bruin and Hartogensis obtained. When calculating fluxes over the sea, the wind direction is divided into two intervals: directions between 220° and 300° represent winds blowing from Gotland, and directions between 80° and 220° represent winds from the open sea. For open-sea conditions the constant for the momentum flux is slightly lower, Cu = 2.2, than the value De Bruin and Hartogensis found; for winds blowing from Gotland the constant is Cu = 3.0. The constants for the sensible heat flux are much harder to determine and give far poorer results over the sea than those for the momentum flux. This is partly because two constants are needed, and partly because the temperature structure in the marine boundary layer does not follow Monin-Obukhov similarity theory.
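The method described above reduces to two scaling relations, u* = σu/Cu and θ* = σT/CT, from which the momentum flux τ = ρu*² and the sensible heat flux follow. A minimal sketch under stated assumptions (hypothetical function name, an assumed constant air density, and a downward-flux sign convention for stable stratification):

```python
RHO = 1.25   # air density [kg m^-3], assumed constant here
CP = 1005.0  # specific heat of air at constant pressure [J kg^-1 K^-1]

def fluxes_from_std(sigma_u, sigma_T, Cu=2.5, CT=2.3):
    """Momentum and sensible heat flux from the De Bruin-Hartogensis
    assumption sigma_u/u* ~ Cu and sigma_T/|theta*| ~ CT (stable case)."""
    u_star = sigma_u / Cu                 # friction velocity [m s^-1]
    theta_star = sigma_T / CT             # temperature scale magnitude [K]
    tau = RHO * u_star**2                 # momentum flux [N m^-2]
    H = -RHO * CP * u_star * theta_star   # heat flux, downward when stable
    return tau, H

tau, H = fluxes_from_std(sigma_u=0.5, sigma_T=0.23)
```

Only the standard deviations and the chosen constants enter the calculation, which is the whole appeal of the method.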
15

Parametrizace rozdělení škod v neživotním pojištění / Parameterization of claims distribution in non-life insurance

Špaková, Mária January 2013 (has links)
Title: Parameterization of claims distribution in non-life insurance Author: Bc. Mária Špaková Department: Department of Probability and Mathematical Statistics Supervisor: RNDr. Michal Pešta, Ph.D., MFF UK Abstract: This thesis deals with the parameterization of claim size distributions in non-life insurance and consists of a theoretical and a practical part. In the first part we discuss the usual claim distributions and their properties, with one section devoted to extreme value distributions. We then present the best-known methods of parameter estimation: the maximum likelihood method, the method of moments and the method of weighted moments. The last theoretical chapter covers validation techniques and goodness-of-fit tests. In the practical part we apply some of the discussed approaches to real data, concentrating mainly on large-claims modeling: we first select a reasonable threshold for the data and then fit the claims with the generalized Pareto distribution using the introduced parameterization procedures. Based on the results of the applied validation methods, we choose appropriate models for the largest claims. Keywords: parameterization, non-life insurance, claims distribution.
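The large-claims procedure the abstract describes — pick a threshold, then fit a generalized Pareto distribution (GPD) to the excesses — can be sketched with SciPy. The data and the simple quantile-based threshold rule here are illustrative, not from the thesis:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
claims = rng.pareto(2.0, size=5000) * 1000.0  # synthetic heavy-tailed claims

threshold = np.quantile(claims, 0.95)          # an illustrative threshold choice
excesses = claims[claims > threshold] - threshold

# Fit the GPD to the excesses; floc=0 pins the location at the threshold
shape, loc, scale = genpareto.fit(excesses, floc=0.0)
```

In practice the threshold would be chosen with diagnostics such as a mean-excess plot, and the fit checked with the goodness-of-fit tests the thesis discusses.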
16

Assessment and Improvement of Snow Datasets Over the United States

Dawson, Nicholas January 2017 (has links)
Improved knowledge of the cryosphere state is paramount for continued model development and for accurate estimates of fresh water supply. This work focuses on evaluation and potential improvements of current snow datasets over the United States. Snow in mountainous terrain is most difficult to quantify due to the slope, aspect, and remote nature of the environment. Due to the difficulty of measuring snow quantities in the mountains, the initial study creates a new method to upscale point measurements to area averages for comparison to initial snow quantities in numerical weather prediction models. The new method is robust and cross validation of the method results in a relatively low mean absolute error of 18% for snow depth (SD). Operational models at the National Centers for Environmental Prediction which use Air Force Weather Agency (AFWA) snow depth data for initialization were found to underestimate snow depth by 77% on average. Larger error is observed in areas that are more mountainous. Additionally, SD data from the Canadian Meteorological Center, which is used for some model evaluations, performed similarly to models initialized with AFWA data. The use of constant snow density for snow water equivalent (SWE) initialization for models which utilize AFWA data exacerbates poor SD performance with dismal SWE estimates. A remedy for the constant snow density utilized in NCEP snow initializations is presented in the next study which creates a new snow density parameterization (SNODEN). SNODEN is evaluated against observations and performance is compared with offline land surface models from the National Land Data Assimilation System (NLDAS) as well as the Snow Data Assimilation System (SNODAS). SNODEN has less error overall and reproduces the temporal evolution of snow density better than all evaluated products. 
SNODEN is also able to estimate snow density for up to 10 snow layers, which may be useful for land surface models as well as for converting remotely-sensed SD to SWE. Due to the poor performance of the previously evaluated snow products, the last study evaluates openly-available remotely-sensed snow datasets to better understand the strengths and weaknesses of current global SWE datasets. A new SWE dataset developed at the University of Arizona (UA) is used for evaluation. While the UA SWE data has already been stringently evaluated, confidence is further increased by the favorable comparison of UA snow cover, derived from UA SWE, with multiple snow cover extent products. Poor performance of remotely-sensed SWE is evident even in products which combine ground observations with remotely-sensed data. Grid boxes that are predominantly tree-covered have a mean absolute difference of up to 87% of mean SWE, and SWE below 5 cm is routinely overestimated by 100% or more. Additionally, snow-covered area derived from global SWE datasets has mean absolute errors of 20%-154% of mean snow-covered area.
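The constant-density SWE conversion criticized above is simply SWE = depth · ρ_snow / ρ_water. A short sketch (illustrative values, not from the study) shows why a fixed density cannot track the seasonal densification that a parameterization like SNODEN models:

```python
def swe_from_depth(depth_m, density_kg_m3, rho_water=1000.0):
    """Snow water equivalent [m] from snow depth [m] and bulk snow
    density [kg m^-3]: SWE = depth * rho_snow / rho_water."""
    return depth_m * density_kg_m3 / rho_water

# One metre of snow maps to very different water amounts over a season:
swe_fresh = swe_from_depth(1.0, 100.0)  # fresh, light snow
swe_const = swe_from_depth(1.0, 200.0)  # a fixed-density assumption
swe_aged  = swe_from_depth(1.0, 400.0)  # dense spring snowpack
```

A constant density therefore misestimates SWE by a factor of two in both directions across this (hypothetical) range, which is the motivation for a time- and layer-dependent density parameterization.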
17

Microphysical Analysis and Modeling of Amazonic Deep Convection / Análise e Modelagem Microfísica da Convecção Profunda Amazônica

Basso, João Luiz Martins 16 July 2018 (has links)
Atmospheric moist convection is one of the main topics in weather and climate research. The purpose of this study is to understand why different, and similar, cloud microphysics parameterizations produce different patterns of precipitation at the ground, through several numerical sensitivity tests with the WRF model simulating a squall-line case observed over the Amazon region. Four bulk microphysics parameterizations (Lin, WSM6, Morrison, and Milbrandt) were tested, and the main results show that statistical errors do not change significantly among them across the four numerical domains (from 27 km down to 1 km grids). Correlations between radar rainfall data and the simulated precipitation fields show that the double-moment Morrison scheme displayed the best overall results: while the Morrison scheme shows a 0.6 correlation in the western box of the 1 km domain, the WSM6 and Lin schemes show 0.39 and 0.05, respectively. Because this scheme correlates well with the radar rain rates, it also shows a somewhat better system lifecycle, evolution, and propagation when compared with the satellite data. Although the complexity with which microphysical variables are treated in the one-moment and double-moment schemes does not strongly affect the simulation results in this case study, the three-dimensional vertical cross-sections show that the Purdue Lin and Morrison schemes produce more intense systems than the WSM6 and Milbrandt schemes, which may be associated with their different treatments of ice-phase microphysics. In the specific comparison between the double-moment schemes, the ice quantities generated by the Morrison and Milbrandt schemes strongly affected the system displacement and rainfall intensity. This also affects the intensity of the vertical velocities which, in turn, changes the size of the cold pools.
Differences in ice quantities were responsible for distinct amounts of total precipitable water, which is related to the vertically integrated ice mixing ratio generated by Morrison. The system moves faster with the Milbrandt scheme than with Morrison because it generates larger quantities of graupel, which is smaller in size than hail and evaporates more easily within the cloud. This also produced more intense, though horizontally smaller, cold pools for the Milbrandt scheme compared with Morrison.
18

Improved 3D Heart Segmentation Using Surface Parameterization for Volumetric Heart Data

Xing, Baoyuan 24 April 2013 (has links)
Imaging modalities such as CT, MRI, and SPECT have had a tremendous impact on diagnosis and treatment planning. These techniques give doctors the capability to visualize 3D anatomical structures of the human body and soft tissues non-invasively. Unfortunately, the 3D images produced by these modalities often have boundaries between organs and soft tissues that are difficult to delineate due to low signal-to-noise ratios and other factors. Image segmentation is employed to differentiate regions of interest in these images by creating artificial contours or boundaries. There are many different segmentation techniques, and automating them is an active area of research, but there are currently no generalized methods for automatic segmentation due to the complexity of the problem. Hand-segmentation is therefore still widely used in the medical community and is the "gold standard" against which all other segmentation methods are measured. However, existing manual segmentation techniques have several drawbacks: they are time consuming, they introduce slice-interpolation errors when segmenting slice-by-slice, and they are generally not reproducible. In this thesis, we present a novel semi-automated method for 3D hand-segmentation that uses mesh extraction and surface parameterization to project several 3D meshes onto a 2D plane. We hypothesize that allowing the user to better view the relationships between neighboring voxels will aid in delineating regions of interest, reducing segmentation time, alleviating slice-interpolation artifacts, and improving reproducibility.
19

Parameterization analysis and inversion for orthorhombic media

Masmoudi, Nabil 05 1900 (has links)
Accounting for azimuthal anisotropy is necessary for the processing and inversion of wide-azimuth and wide-aperture seismic data because wave speeds naturally depend on the wave propagation direction. Orthorhombic anisotropy is considered the most effective anisotropic model for approximating the azimuthal anisotropy we observe in seismic data. In the framework of full waveform inversion (FWI), the large number of parameters describing orthorhombic media introduces considerable trade-offs and increases the non-linearity of the inversion problem. Choosing a suitable parameterization for the model, and identifying which parameters in that parameterization can be well resolved, are essential to a successful inversion. In this thesis, I derive the radiation patterns for different acoustic orthorhombic parameterizations. By analyzing the angular dependence of the scattering of the parameters of different parameterizations, starting with the conventionally used notation, I assess the potential trade-off between the parameters and the resolution in describing the data and inverting for the parameters. In order to build practical inversion strategies, I suggest new parameters (called deviation parameters) for a new parameterization style in orthorhombic media. The novel parameters, denoted εd, ηd and δd, are dimensionless and represent a measure of deviation between the vertical planes in orthorhombic anisotropy. The main feature of the deviation parameters is that they keep the scattering of the vertical transversely isotropic (VTI) parameters stationary with azimuth. Using these scattering features, we can condition FWI to invert for the parameters to which the data are sensitive, at different stages, scales, and locations in the model.
With this parameterization, the data are mainly sensitive to the scattering of three parameters (out of the six that describe an acoustic orthorhombic medium): the horizontal velocity in the x1 direction; ε1, which provides scattering mainly near zero offset in the x1-x3 vertical plane; and εd, which is the ratio of the squared horizontal velocities in the x1 and x2 directions. Since, with this parameterization, the radiation pattern for the horizontal velocity is azimuth-independent, we can perform an initial VTI inversion for two parameters (velocity and ε1), then use εd to fit the azimuthal variation in the data. This can be done at the reservoir level or in any region of the model.
20

Laboratory Measurements of the Moist Enthalpy Transfer Coefficient

Jeong, Dahai 01 January 2008 (has links)
The enthalpy (sensible and latent heat) exchange processes within the surface layers at an air-water interface have been examined in the 15-m wind-wave tunnel at the University of Miami. The measurements yielded 72 mean values of fluxes and bulk variables at wind speeds (referred to 10 m) from 0.6 to 39 m/s, covering the full range of aerodynamic conditions from smooth to fully rough. Meteorological variables and bulk enthalpy transfer coefficients, measured at 0.2-m height, were adjusted to neutral stratification and 10-m height following the Monin-Obukhov similarity approach. The ratio of the bulk coefficients of enthalpy and momentum was estimated to evaluate Emanuel's (1995) hypothesis. Indirect "calorimetric" measurements gave reliable estimates of the enthalpy flux from the air-water interface, but the moisture gained in the lower air from evaporation of spray over the rough water remained uncertain, stressing the need for flux measurements along with simultaneous spray data to quantify spray's contribution to the turbulent air-water enthalpy fluxes.
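The bulk framework behind such measurements relates the enthalpy flux to a transfer coefficient Ck via Fk = ρ·Ck·U10·(k_sfc − k_air), where the moist enthalpy per unit mass is approximately k = cp·T + Lv·q. A hedged sketch of that bulk formula (constants, function names, and the input values are illustrative, not from the paper):

```python
CP = 1005.0  # specific heat of dry air [J kg^-1 K^-1]
LV = 2.5e6   # latent heat of vaporization [J kg^-1]
RHO = 1.2    # air density [kg m^-3], assumed

def moist_enthalpy(T_kelvin, q):
    """Moist enthalpy per unit mass, simplified form: k = cp*T + Lv*q."""
    return CP * T_kelvin + LV * q

def enthalpy_flux(Ck, U10, T_sfc, q_sfc, T_air, q_air):
    """Bulk estimate F_k = rho * Ck * U10 * (k_sfc - k_air) [W m^-2]."""
    dk = moist_enthalpy(T_sfc, q_sfc) - moist_enthalpy(T_air, q_air)
    return RHO * Ck * U10 * dk

# Illustrative warm-sea case: Ck ~ 1.2e-3, 10 m/s wind, 2 K and 7 g/kg contrasts
F = enthalpy_flux(Ck=1.2e-3, U10=10.0,
                  T_sfc=300.0, q_sfc=0.022,
                  T_air=298.0, q_air=0.015)
```

The experiment in the abstract effectively measures Fk and the bulk variables directly, then solves this relation for Ck.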
