421

Approche de la modélisation d'objets géologiques déformés : conception, structure logique et algorithmique, résultats / An approach to the modelling of deformed geological objects: design, logical and algorithmic structure, results

Cheaito, Mohamad 20 December 1993 (has links) (PDF)
This work forms part of a research effort on the 3D modelling of geological scenes. Its specific contributions are the following: 1) a general reflection is carried out on the geometry of the main types of geological bodies; 2) a further reflection is carried out, notably in light of earlier work; 3) following this overall analysis, we focus on a particular problem: the modelling of objects of arbitrary shape that may be deformed or combined with one another, and we show that the essential question underlying such modelling is the choice of a data structure; 4) a hybrid structure, the mixed BSP tree, is defined; 5) at the end of this work, we built the granite software.
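For readers unfamiliar with the data structure named in point 4, the following minimal Python sketch (not taken from the thesis; the class and its fields are hypothetical) shows the basic idea behind a BSP tree: internal nodes split space by a plane, and leaves label the resulting cells as solid or empty. The thesis's "mixed" BSP tree is a richer hybrid of this idea.

```python
# Minimal BSP-tree sketch: each internal node splits space by a plane
# (n . x = d); leaves carry a solid/empty label. Illustrative only.
from dataclasses import dataclass
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]

def dot(a: Vec3, b: Vec3) -> float:
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

@dataclass
class BSPNode:
    normal: Optional[Vec3] = None   # splitting plane normal (internal nodes only)
    offset: float = 0.0             # plane offset d in n . x = d
    front: Optional["BSPNode"] = None
    back: Optional["BSPNode"] = None
    solid: bool = False             # leaf label

    def contains(self, p: Vec3) -> bool:
        """Classify a point by descending the tree."""
        if self.normal is None:          # leaf reached
            return self.solid
        side = self.front if dot(self.normal, p) >= self.offset else self.back
        return side.contains(p)

# A unit slab 0 <= z <= 1 built from two nested splits.
slab = BSPNode(normal=(0, 0, 1), offset=0.0,
               back=BSPNode(solid=False),
               front=BSPNode(normal=(0, 0, 1), offset=1.0,
                             front=BSPNode(solid=False),
                             back=BSPNode(solid=True)))

print(slab.contains((0.2, 0.3, 0.5)))  # True  (inside the slab)
print(slab.contains((0.2, 0.3, 1.5)))  # False (above the slab)
```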
422

Estimation of the parameters of stochastic differential equations

Jeisman, Joseph Ian January 2006 (has links)
Stochastic differential equations (SDEs) are central to much of modern finance theory and have been widely used to model the behaviour of key variables such as the instantaneous short-term interest rate, asset prices, asset returns and their volatility. The explanatory and/or predictive power of these models depends crucially on the particularisation of the model SDE(s) to real data through the choice of values for their parameters. In econometrics, optimal parameter estimates are generally considered to be those that maximise the likelihood of the sample. In the context of the estimation of the parameters of SDEs, however, a closed-form expression for the likelihood function is rarely available and hence exact maximum-likelihood (EML) estimation is usually infeasible. The key research problem examined in this thesis is the development of generic, accurate and computationally feasible estimation procedures based on the ML principle that can be implemented in the absence of a closed-form expression for the likelihood function. The overall recommendation to come out of the thesis is that an estimation procedure based on the finite-element solution of a reformulation of the Fokker-Planck equation in terms of the transitional cumulative distribution function (CDF) provides the best balance across all of the desired characteristics. The recommended approach involves the use of an interpolation technique proposed in this thesis which greatly reduces the required computational effort.
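As a much simpler illustration of the ML principle discussed above (and not of the thesis's finite-element Fokker-Planck approach), the sketch below fits a Vasicek-type short-rate SDE by maximising an approximate likelihood built from the Euler (Gaussian) transition density. The model, parameter values and function names are assumptions made for the example only.

```python
# Quasi-maximum-likelihood sketch for a Vasicek-type SDE
#   dX = kappa * (theta - X) dt + sigma dW,
# using the Euler transition density N(x + kappa*(theta - x)*dt, sigma^2*dt).
# This stands in for, and is much cruder than, the transitional-CDF approach
# recommended in the thesis.
import numpy as np
from scipy.optimize import minimize

def simulate_vasicek(kappa, theta, sigma, x0, dt, n, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = x[i] + kappa * (theta - x[i]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

def neg_log_likelihood(params, x, dt):
    kappa, theta, sigma = params
    if sigma <= 0:
        return np.inf
    mean = x[:-1] + kappa * (theta - x[:-1]) * dt   # Euler conditional mean
    var = sigma ** 2 * dt                            # Euler conditional variance
    resid = x[1:] - mean
    return 0.5 * np.sum(np.log(2 * np.pi * var) + resid ** 2 / var)

x = simulate_vasicek(kappa=1.5, theta=0.05, sigma=0.02, x0=0.03, dt=1 / 252, n=2000)
fit = minimize(neg_log_likelihood, x0=np.array([1.0, 0.04, 0.01]),
               args=(x, 1 / 252), method="Nelder-Mead")
print("estimated (kappa, theta, sigma):", fit.x)
```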
423

Performance of regional atmospheric error models for NRTK in GPSnet and the implementation of a NRTK system

Wu, Suquin, s3102813@student.rmit.edu.au January 2009 (has links)
Many high-accuracy regional GPS continuously operating reference station (CORS) networks have been established globally. These networks are used to facilitate better positioning services, such as high-accuracy real-time positioning. GPSnet is the first state-wide CORS network in Australia. In order to maximize the benefits of this expensive CORS geospatial infrastructure, the state of Victoria, in collaboration with three universities (RMIT University, the University of NSW and the University of Melbourne), embarked on research into regional atmospheric error modelling for Network-based RTK (NRTK) via an Australian Research Council project in early 2005. The core of the NRTK technique is the modelling of the spatially correlated errors, and the accuracy of the regional error model is a determining factor for the performance of NRTK positioning. In this research, a number of error models are examined and comprehensively analysed. Among them, the following three models are tested: 1) the Linear Interpolation Method (LIM); 2) the Distance-Based Interpolation Method (DIM); and 3) the Low-Order Surface Model (LSM). The accuracy of the three models is evaluated using three different observation sessions and a variety of network configurations of GPSnet. Results show that the LIM and DIM can be used to significantly reduce the double-differenced (DD) residuals (up to 60% improvement), and that the LIM is slightly better than the DIM (mostly at the mm level). However, the DD residuals with the LSM corrections are, in some cases, not only much worse than those of the LIM and DIM but even greater than the DD residuals without any corrections applied at all. This indicates that there is no advantage in using the LSM for error modelling for NRTK in GPSnet, even though it is the method most commonly used by researchers. The performance of the LIM for different GPSnet configurations is also tested. Results show that, in most cases, the performance difference caused mainly by the number of reference stations used is not significant. This implies that more redundant reference stations may not contribute much to the accuracy improvement of the LIM, although they may mitigate station-specific errors (if any). The magnitude of the temporal variations of both the tropospheric and ionospheric effects in GPSnet observations is also investigated. Test results suggest that the frequency of generating and transmitting the tropospheric corrections should not be significantly different from that of the ionospheric corrections; thus a 1 Hz frequency (i.e. once every second) is recommended for the generation and transmission of both types of atmospheric corrections for NRTK in GPSnet. The algorithms of the NRTK software package used are examined and extensive analyses are conducted. The performance and limitations of the NRTK system in terms of network ambiguity resolution are assessed. The methodology for generating virtual reference station (VRS) observations in the system is presented, and the algorithms for the generated VRS observations are validated. This research is expected to be significant for both the selection of regional error models and the implementation of the NRTK technique in GPSnet or in the Victorian region.
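As a simplified reading of the LIM idea, the sketch below fits a plane through per-station correction values and evaluates it at a rover position. It is not the GPSnet processing software; the station coordinates and correction values are invented for illustration.

```python
# Sketch of a Linear Interpolation Method (LIM)-style correction model:
# fit a plane a + b*dE + c*dN through corrections observed at reference
# stations, then evaluate it at the rover position.
import numpy as np

def lim_correction(station_enu, station_corr, rover_enu):
    """Least-squares plane fit of per-station corrections, evaluated at the rover."""
    e, n = station_enu[:, 0], station_enu[:, 1]
    A = np.column_stack([np.ones_like(e), e, n])      # design matrix [1, dE, dN]
    coeffs, *_ = np.linalg.lstsq(A, station_corr, rcond=None)
    return coeffs[0] + coeffs[1] * rover_enu[0] + coeffs[2] * rover_enu[1]

# Four reference stations (east/north in km) with hypothetical DD corrections (m).
stations = np.array([[0.0, 0.0], [50.0, 5.0], [10.0, 60.0], [55.0, 55.0]])
corrections = np.array([0.012, 0.019, 0.015, 0.024])
rover = np.array([30.0, 30.0])
print("interpolated correction at rover: %.4f m" % lim_correction(stations, corrections, rover))
```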
424

Fast multipole methods for oblique derivative problems

Gutting, Martin January 2007 (has links)
Also published as: dissertation, Technical University of Kaiserslautern, 2007.
425

Methods for increased computational efficiency of multibody simulations

Epple, Alexander. January 2008 (has links)
Thesis (Ph. D.)--Aerospace Engineering, Georgia Institute of Technology, 2009. / Committee Chair: Olivier A. Bauchau; Committee Member: Andrew Makeev; Committee Member: Carlo L. Bottasso; Committee Member: Dewey H. Hodges; Committee Member: Massimo Ruzzene. Part of the SMARTech Electronic Thesis and Dissertation Collection.
426

Spatial interpolation : a simulated analysis of the effects of sampling strategy on interpolation method /

Davenhall, Brian R. January 1900 (has links)
Thesis (M.S.)--Humboldt State University, 2009. / Includes bibliographical references (leaves 46-48). Also available via Humboldt Digital Scholar.
427

Interpolatory refinement pairs with properties of symmetry and polynomial filling

Gavhi, Mpfareleni Rejoyce 03 1900 (has links)
Thesis (MSc (Mathematics))--University of Stellenbosch, 2008. / Subdivision techniques have, over the last two decades, developed into a powerful tool in computer-aided geometric design (CAGD). In some applications it is required that data be preserved exactly; hence the need for interpolatory subdivision schemes. In this thesis, we consider the fundamentals of the mathematical analysis of symmetric interpolatory subdivision schemes for curves, also with the property of polynomial filling up to a given odd degree, in the sense that, if the initial control point sequence is situated on such a polynomial curve, all the subsequent subdivision iterates fill up this curve, which eventually also becomes the limit curve. A subdivision scheme is determined by its mask coefficients, which we find convenient to describe mathematically as a bi-infinite sequence a with finite support. This sequence is in one-to-one correspondence with a corresponding Laurent polynomial A with coefficients given by the mask sequence a. After an introductory Chapter 1 on notation, basic definitions, and an overview of the thesis, we proceed in Chapter 2 to separately consider the issues of interpolation, symmetry and polynomial filling with respect to a subdivision scheme, eventually leading to a definition of the class A_{m,n} of mask symbols in which all of the above desired properties are combined. We proceed in Chapter 3 to deduce an explicit characterization formula for the class A_{m,n}, in the process also showing that its optimally local member is the well-known Dubuc–Deslauriers (DD) mask symbol D_m of order m. In fact, an alternative explicit characterization result appears in recent work by De Villiers and Hunter, in which the authors characterized mask symbols A ∈ A_{m,n} as arbitrary convex combinations of DD mask symbols. It turns out that A_{m,m} = {D_m}, whereas the class A_{m,m+1} has one degree of freedom, which we interpret here in the form of a shape parameter t ∈ R for the resulting subdivision scheme. In order to investigate the convergence of subdivision schemes associated with mask symbols in A_{m,n}, we first introduce in Chapter 4 the concept of a refinement pair (a, φ), consisting of a finitely supported sequence a and a finitely supported function φ, where φ is a refinable function in the sense that it can be expressed as a finite linear combination, as determined by a, of the integer shifts of its own dilation by a factor of 2. After presenting proofs of a variety of properties satisfied by a given refinement pair (a, φ), we next introduce the concept of an interpolatory refinement pair as one for which the refinable function φ interpolates the delta sequence at the integers. A fundamental result is then that the existence of an interpolatory refinement pair (a, φ) guarantees the convergence of the interpolatory subdivision scheme with subdivision mask a, with limit function Φ expressible as a linear combination of the integer shifts of φ, and with all the subdivision iterates lying on Φ. In Chapter 5, we first present a fundamental result by Micchelli, according to which an interpolatory refinable function exists for mask symbols in A_{m,n} if the mask symbol A is strictly positive on the unit circle in the complex plane. After showing that the DD mask symbol D_m satisfies this sufficient property, we proceed to compute the precise t-interval for such positivity on the unit circle to occur for the mask symbols A = A_m(t|·) ∈ A_{m,m+1}. Also, we compare our numerical results with analogous ones in the literature.
Finally, in Chapter 6, we investigate the regularity of the refinable functions φ = φ_m(t|·) corresponding to the mask symbols A_m(t|·). Using a standard result from the literature, in which a lower bound on the Hölder continuity exponent of a refinable function φ is given explicitly in terms of the spectral radius of a matrix obtained from the corresponding mask sequence a, we compute this lower bound for selected values of m.
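For the classical 4-point Dubuc–Deslauriers scheme (the lowest-order DD scheme reproducing cubics), each new point is generated by the weights (-1, 9, 9, -1)/16 applied to its four nearest existing points, while the old points are retained. The short Python sketch below (illustrative only, not code from the thesis) applies one refinement step and checks numerically that control points sampled from a cubic stay on that cubic, the polynomial-filling property described above.

```python
# One refinement step of the 4-point Dubuc-Deslauriers interpolatory scheme.
import numpy as np

def dd4_refine(points: np.ndarray) -> np.ndarray:
    """points: (n, d) control points; midpoints inserted only where the full stencil fits."""
    n = len(points)
    out = []
    for i in range(n - 1):
        out.append(points[i])                 # existing point is retained (interpolatory)
        if 1 <= i <= n - 3:                   # need points[i-1] and points[i+2]
            out.append((-points[i - 1] + 9 * points[i]
                        + 9 * points[i + 1] - points[i + 2]) / 16)
    out.append(points[-1])
    return np.array(out)

# Control points sampled from a cubic stay on that cubic
# (polynomial filling up to odd degree 3, as in the abstract).
x = np.arange(8.0)
ctrl = np.column_stack([x, x ** 3 - 2.0 * x])
ref = dd4_refine(ctrl)
print(np.allclose(ref[:, 1], ref[:, 0] ** 3 - 2.0 * ref[:, 0]))  # True
```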
428

Cardinal spline wavelet decomposition based on quasi-interpolation and local projection

Ahiati, Veroncia Sitsofe 03 1900 (has links)
Thesis (MSc (Mathematics))--University of Stellenbosch, 2009. / Wavelet decomposition techniques have grown over the last two decades into a powerful tool in signal analysis. Similarly, spline functions have enjoyed a sustained high popularity in the approximation of data. In this thesis, we study the cardinal B-spline wavelet construction procedure based on quasi-interpolation and local linear projection, before specialising to the cubic B-spline on a bounded interval. First, we present some fundamental results on cardinal B-splines, which are piecewise polynomials with uniformly spaced breakpoints at the dyadic points Z/2^r, for r ∈ Z. We start our wavelet decomposition method with a quasi-interpolation operator Q_{m,r} mapping, for every integer r, real-valued functions on R into S^r_m, where S^r_m is the space of cardinal splines of order m, such that the polynomial reproduction property Q_{m,r}p = p, for p ∈ π_{m−1} and r ∈ Z, is satisfied. We then give the explicit construction of Q_{m,r}. We next introduce, in Chapter 3, a local linear projection operator sequence {P_{m,r} : r ∈ Z}, with P_{m,r} : S^{r+1}_m → S^r_m, r ∈ Z, in terms of a Laurent polynomial solution of minimal length which satisfies a certain Bezout identity based on the refinement mask symbol A_m, which we give explicitly. With such a linear projection operator sequence, we define, in Chapter 4, the error space sequence W^r_m = {f − P_{m,r}f : f ∈ S^{r+1}_m}. We then show, by solving a certain Bezout identity, that there exists a finitely supported function ψ_m ∈ S^1_m such that, for every r ∈ Z, the integer shift sequence {ψ_m(2·−j)} spans the linear space W^r_m. According to our definition, we then call ψ_m the mth order cardinal B-spline wavelet. The wavelet decomposition algorithm based on the quasi-interpolation operator Q_{m,r}, the local linear projection operator P_{m,r}, and the wavelet ψ_m is then based on finite sequences, and is shown to possess, for a given signal f, the essential property of yielding relatively small wavelet coefficients in regions where the support interval of ψ_m(2^r·−j) overlaps with a C^m-smooth region of f. Finally, in Chapter 5, we explicitly construct minimally supported cubic B-spline wavelets on a bounded interval [0, n]. We also develop a corresponding explicit decomposition algorithm for a signal f on a bounded interval. Throughout Chapters 2 to 5, numerical examples are provided to graphically illustrate the theoretical results.
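The construction above rests on the refinability of cardinal B-splines. The following self-contained Python sketch (not code from the thesis) numerically verifies the two-scale relation of the centred cardinal cubic B-spline, B(x) = Σ_{k=-2..2} (1/8) C(4, k+2) B(2x − k), which is the refinement property underlying the mask symbol A_m for m = 4.

```python
# Numerical check of the refinement relation of the centred cardinal cubic
# B-spline with mask (1, 4, 6, 4, 1)/8.
import numpy as np
from math import comb

def cubic_bspline(x):
    """Centred cardinal cubic B-spline, support [-2, 2]."""
    x = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    inner = x <= 1
    outer = (x > 1) & (x < 2)
    out[inner] = 2.0 / 3.0 - x[inner] ** 2 + 0.5 * x[inner] ** 3
    out[outer] = (2.0 - x[outer]) ** 3 / 6.0
    return out

x = np.linspace(-3, 3, 601)
lhs = cubic_bspline(x)
rhs = sum(comb(4, k + 2) / 8.0 * cubic_bspline(2 * x - k) for k in range(-2, 3))
print(np.allclose(lhs, rhs))  # True: B is refinable with mask (1, 4, 6, 4, 1)/8
```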
429

VisualMet : um sistema para visualização e exploração de dados meteorológicos / VisualMet: a system for visualizing and exploring meteorological data

Manssour, Isabel Harb January 1996 (has links)
Weather forecast centers deal with a great volume of complex multivariate data, which usually have to be understood within a short time. Scientific visualization techniques can be used to support both daily forecasting and meteorological research. This work reports the architecture and facilities of a system, named VisualMet, that was implemented based on a case study of the tasks accomplished by the meteorologists responsible for the 8th Meteorological District, in the south of Brazil. This center collects meteorological data three times a day from 32 local stations and receives similar data from both the National Institute of Meteorology, located in Brasília, and the National Meteorological Center, located in the United States of America. Such data result from observations of variables such as temperature, pressure, wind velocity and type of clouds. The tasks of the meteorologists and the classes of application data were observed to define the system requirements. The architecture and implementation of VisualMet follow the tool-oriented approach and the object-oriented paradigm, respectively. Data taken from meteorological stations are instances of a class named Entity. Three other classes of tools which support the meteorologists' tasks are modeled. Objects in the system are presented to the user through two windows, "Entities Base" and "Tools Base". The current implementation of the "Tools Base" contains mapping tools (to produce contour maps, icon maps and graphs), recording tools (to save and load images generated by the system) and a query tool (to read variable values of selected stations). The results of applying the multiquadric method to interpolate data for the construction of contour maps are also discussed. Before describing the results obtained with the multiquadric method, this work also presents a study of interpolation methods for scattered data. The results (images) obtained with the contour map tool are discussed and compared with the maps drawn manually by the meteorologists of the 8th Meteorological District. Possible extensions to this work are also presented.
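For context on the contour-map tool, the sketch below implements Hardy's multiquadric interpolation of scattered station data in Python. The station coordinates, values and the shape parameter c are invented for the example; only the interpolation method itself corresponds to the abstract.

```python
# Hardy's multiquadric interpolation: s(x) = sum_j lam_j * sqrt(|x - x_j|^2 + c^2),
# with the coefficients lam obtained by solving the interpolation system.
import numpy as np

def multiquadric_fit(xy, values, c=1.0):
    """Solve Phi * lam = values with Phi_ij = sqrt(|x_i - x_j|^2 + c^2)."""
    d2 = np.sum((xy[:, None, :] - xy[None, :, :]) ** 2, axis=-1)
    phi = np.sqrt(d2 + c * c)
    return np.linalg.solve(phi, values)

def multiquadric_eval(xy, lam, grid_xy, c=1.0):
    d2 = np.sum((grid_xy[:, None, :] - xy[None, :, :]) ** 2, axis=-1)
    return np.sqrt(d2 + c * c) @ lam

# Hypothetical station positions (degrees) and temperatures (deg C).
stations = np.array([[0.0, 0.0], [1.0, 0.2], [0.3, 1.1], [1.2, 1.0], [0.6, 0.5]])
temps = np.array([24.1, 26.3, 22.8, 25.0, 24.6])
lam = multiquadric_fit(stations, temps, c=0.5)

gx, gy = np.meshgrid(np.linspace(0, 1.2, 4), np.linspace(0, 1.1, 4))
grid = np.column_stack([gx.ravel(), gy.ravel()])
field = multiquadric_eval(stations, lam, grid, c=0.5).reshape(gx.shape)
print(np.round(field, 2))  # gridded values ready for contouring
```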
430

Análise da poluição eletromagnética na região urbana de Mossoró-RN / Analysis of electromagnetic pollution in the urban area of Mossoró-RN

Santana, Talles Amony Alves de 01 February 2018 (has links)
The rapid pace of human progress and constant technological innovation in the area of telecommunications mean that more and more people are exposed to electromagnetic radiation of the most varied kinds. Concern about the possible health risks that this exposure may pose to the population has led several regulatory agencies to carry out studies aimed at establishing acceptable limits of human exposure to this type of radiation. Knowledge of these exposure levels, and of how electromagnetic fields are distributed spatially in a given region, is of paramount importance for developing protection techniques that are effective against radiation exposure, reducing the risks to people in these areas. This work studies the distribution of electromagnetic radiation in the urban area of Mossoró by measuring the intensity of electric fields, magnetic fields and power density at 200 points, using a suitable meter in the 10 MHz to 8 GHz range and following the methodology proposed in Resolution 303 of ANATEL. With these data, statistical parameters are used to determine the most appropriate interpolation technique for estimating the spatial distribution of these fields in non-sampled areas through contour maps, created with the Golden Surfer® software, which indicate the places most exposed to electromagnetic radiation. The measurement points were chosen based on the mean distance between the radio base stations in each of the analysis zones. The measured values were compared with those established by the regulatory organizations and analysed according to the established standards.
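As an illustration of ranking interpolation techniques with statistical parameters, the sketch below compares two simple spatial interpolators by leave-one-out RMSE over synthetic measurement points. It is not the thesis's own analysis, and the sample locations and field values are invented.

```python
# Leave-one-out comparison of inverse distance weighting vs. nearest-neighbour
# interpolation by RMSE over synthetic "power density" samples.
import numpy as np

def idw(xy, values, targets, power=2.0, eps=1e-12):
    d = np.sqrt(np.sum((targets[:, None, :] - xy[None, :, :]) ** 2, axis=-1))
    w = 1.0 / (d ** power + eps)
    return (w @ values) / w.sum(axis=1)

def nearest(xy, values, targets):
    d = np.sum((targets[:, None, :] - xy[None, :, :]) ** 2, axis=-1)
    return values[np.argmin(d, axis=1)]

def loo_rmse(interp, xy, values):
    errs = []
    for i in range(len(xy)):
        mask = np.arange(len(xy)) != i                       # hold out point i
        pred = interp(xy[mask], values[mask], xy[i:i + 1])[0]
        errs.append(pred - values[i])
    return float(np.sqrt(np.mean(np.square(errs))))

rng = np.random.default_rng(1)
pts = rng.uniform(0, 10, size=(200, 2))                      # 200 measurement locations
field = 2.0 + np.sin(pts[:, 0] / 2) + 0.1 * pts[:, 1]        # synthetic field values
print("IDW      LOO-RMSE:", round(loo_rmse(idw, pts, field), 4))
print("Nearest  LOO-RMSE:", round(loo_rmse(nearest, pts, field), 4))
```

The interpolator with the lower leave-one-out RMSE would be the one preferred for gridding the unsampled areas.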
