311

Essays on applied spatial econometrics and housing economics

Kiefer, Hua, January 2007 (has links)
Thesis (Ph. D.)--Ohio State University, 2007. / Title from first page of PDF file. Includes bibliographical references (p. 112-115).
312

Empirical likelihood with applications in time series

Li, Yuyi January 2011 (has links)
This thesis investigates the statistical properties of the Kernel Smoothed Empirical Likelihood (KSEL; e.g. Smith, 1997, 2004) estimator and of various associated inference procedures for weakly dependent data. New tests for structural stability are proposed and analysed, and asymptotic analysis and Monte Carlo experiments are used to assess them theoretically and empirically. Chapter 1 reviews and discusses estimation and inferential properties of Empirical Likelihood (EL; Owen, 1988) for independently and identically distributed data and compares it with Generalised EL (GEL), GMM and other estimators. KSEL is treated extensively by specialising the kernel-smoothed GEL of Smith's (2004) working paper, some of whose results and proofs are extended and refined in Chapter 2; asymptotic properties of some tests in Smith (2004) are also analysed under local alternatives. This treatment of KSEL lays the foundation for the analyses in Chapters 3 and 4, which would not otherwise follow straightforwardly. In Chapters 3 and 4, subsample KSEL estimators are proposed to support the development of KSEL structural stability tests for a given breakpoint and for an unknown breakpoint, respectively, building on related work using GMM (e.g. Hall and Sen, 1999; Andrews and Fair, 1988; Andrews and Ploberger, 1994). A further original contribution of these two chapters is that moment functions are allowed to be kernel-smoothed either before or after the sample split, and the two smoothing orders are rigorously proved to be asymptotically equivalent. The overall null hypothesis of structural stability is decomposed according to the identifying and overidentifying restrictions, as Hall and Sen (1999) advocate for GMM, leading to a more practical and precise structural-stability diagnosis procedure. Within this framework, the KSEL structural stability tests are also proved, via asymptotic analysis, to be capable of identifying different sources of instability, arising either from a change in parameter values or from violation of the overidentifying restrictions, and they are shown to follow the same limit distributions as their GMM counterparts. To examine the finite-sample performance of the KSEL structural stability tests relative to the GMM ones, Monte Carlo simulations are conducted in Chapter 5 using a simple linear model considered by Hall and Sen (1999); this chapter details the relevant computational algorithms and allows for different smoothing orders, kernel types and prewhitening options. Overall, the simulation evidence suggests that the newly proposed KSEL tests often perform comparably to the GMM tests, although in some cases their sizes are slightly larger and false null hypotheses are rejected with much higher frequency. These KSEL-based tests are thus valid theoretical and practical alternatives to the GMM-based tests.
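As a schematic reminder of the objects named above (standard definitions, not reproduced from the thesis): for a parameter θ identified by the moment condition E[g(x_t, θ)] = 0, the EL estimator of Owen (1988) solves

$$ \hat{\theta}_{EL} = \arg\max_{\theta}\ \max_{p_1,\dots,p_T}\ \sum_{t=1}^{T}\log p_t \quad\text{s.t.}\quad \sum_{t=1}^{T}p_t = 1,\qquad \sum_{t=1}^{T}p_t\,g(x_t,\theta)=0, $$

and the kernel-smoothed variant (KSEL) replaces each g(x_t, θ) with a kernel-weighted local average of neighbouring moment contributions, e.g.

$$ g_t^{*}(\theta) \;=\; \frac{\sum_{s} k\!\left(\tfrac{s}{S_T}\right) g(x_{t-s},\theta)}{\sum_{s} k\!\left(\tfrac{s}{S_T}\right)}, $$

where k is a kernel and S_T a bandwidth, so that weak dependence is accommodated; the exact normalisation and truncation used in the thesis follow Smith (2004).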
313

Air-sea interaction at the synoptic- and the meso-scale

Moulin, Aimie 04 November 2015 (has links)
This thesis considers air-sea interaction, due to momentum exchange, in an idealized but consistent model. Two superposed one-layer fine-resolution shallow-water models are integrated numerically: the upper layer represents the atmosphere and the lower layer the ocean. The interaction is due only to the shear between the two layers, implemented with a quadratic drag law; the shear applied to the ocean is calculated from the velocity difference between the atmosphere and the ocean. In one-way interaction the shear applied to the atmosphere neglects the ocean dynamics and is calculated from the atmospheric wind only; in two-way interaction it is the opposite of the shear applied to the ocean. Three idealized configurations are explored.

First, a new mechanism that induces barotropic instability in the ocean is discussed. It arises from air-sea interaction with a quadratic drag law combined with horizontal viscous dissipation in the atmosphere. In the one-way case the atmospheric shear leads to a barotropic instability in the ocean; in the two-way case the instability is amplified, in amplitude and scale, and also triggers an instability in the atmosphere. The preferred spatial scale of the instability is that of the oceanic baroclinic Rossby radius of deformation, so it can only be represented in numerical models when the dynamics at this scale is resolved in both the atmosphere and the ocean.

Second, the air-sea interaction at the atmospheric synoptic and mesoscale due to momentum transfer alone is considered. Experiments with different values of the surface drag coefficient are performed, with an atmospheric forcing different from that of the first configuration, leading to turbulent dynamics in both the atmosphere and the ocean. The actual energy loss of the atmosphere and the energy gain of the ocean due to the interfacial shear are determined and compared to estimates based on average speeds, and the correlation between the vorticity in the atmosphere and in the ocean is determined at the atmospheric synoptic and mesoscale. The results differ from previous investigations in which the exchange of momentum was considered at basin scale. The ocean is shown to play a passive role, absorbing kinetic energy at nearly all times and locations; because of the feeble ocean velocities, the energy transfer depends only weakly on the ocean velocity. The ocean dynamics nevertheless leaves its imprint on the atmospheric dynamics, leading to a quenched-disordered state of the atmosphere-ocean system for the highest value of the friction coefficient considered. This finding questions the ergodic hypothesis, which underlies a large number of experimental, observational and numerical results in ocean, atmosphere and climate dynamics.

The last configuration considers the air-sea interaction, due to momentum exchange, around a circular island. In today's simulations of the ocean dynamics, the atmospheric forcing fields are usually too coarse to include the presence of smaller islands (typically < 100 km). In the calculations presented here, the island is represented in the atmospheric layer by a drag coefficient a hundred times larger above the island than over the ocean, which generates atmospheric vorticity in the vicinity and in the wake of the island. The influence of the atmospheric vorticity on the ocean vorticity, upwelling, turbulence and energy transfer is considered using fully coupled simulations of the atmosphere-ocean dynamics. The results are compared to simulations with an atmospheric forcing that is constant in space and time (no wake) and to simulations with one-way coupling only (in which the ocean velocity has no influence on the atmosphere). The results agree with previously published work and observations, and confirm that the wind wake is the main process generating mesoscale oceanic eddies in the lee of the island. It is also shown that vorticity is injected not only directly by the wind curl, but also by the wind stress acting perpendicular to the thickness gradient of the oceanic surface layer.
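A minimal sketch of the interfacial friction described above, in my own notation (layer densities and thicknesses absorbed into the coefficient): with atmospheric velocity $\mathbf{u}_a$ and oceanic velocity $\mathbf{u}_o$, the quadratic drag law gives the interfacial stress

$$ \boldsymbol{\tau} \;=\; c_D\,\lvert \mathbf{u}_a-\mathbf{u}_o\rvert\,(\mathbf{u}_a-\mathbf{u}_o), $$

which always forces the ocean layer; in the one-way case the atmosphere is slowed by $-c_D\lvert\mathbf{u}_a\rvert\,\mathbf{u}_a$, computed from the wind alone, whereas in the two-way case it feels exactly $-\boldsymbol{\tau}$, so that the momentum lost by the atmosphere equals the momentum gained by the ocean.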
314

A computer simulation study of tilted smectic mesophases

Withers, Ian Michael January 2000 (has links)
Results are presented from a series of simulations undertaken to determine the effect of a novel form of molecular biaxiality upon the phase behaviour of the well-established Gay-Berne (GB) liquid crystal model. Firstly, the simulation of a bulk system interacting via the Internally-Rotated Gay-Berne (IRGB) potential, which offers a single-site representation of a molecule rigidly constrained into a zig-zag conformation, is presented. The results of simulations performed for systems of IRGB particles with an aspect ratio of 3:1 confirm that the introduction of biaxiality into the model results in the destabilisation of the orientationally ordered phases. For particles with a sufficiently pronounced zig-zag conformation, this results in the complete destabilisation of the smectic A phase and the replacement of the smectic B phase by the tilted smectic J phase. Following these observations, the effect upon the phase behaviour of increasing molecular elongation is also considered, with an increase in the aspect ratio from 3:1 to 4:1 resulting in the nematic and smectic J phases being replaced by smectic A and smectic G phases, respectively. Secondly, a version of the IRGB potential modified to include a degree of molecular flexibility is considered. Results obtained from bulk systems interacting via the flexible IRGB potential for 3:1 and 4:1 molecules show that the introduction of flexibility results in the destabilisation of the smectic A phase and the stabilisation of the nematic and tilted hexatic phases. Finally, the effect upon the phase behaviour of the rigid IRGB model of including a longitudinal linear quadrupole is examined. These results show that an increasing quadrupole moment destabilises the tilted hexatic phase, although the biaxial order parameter increases with increasing quadrupole moment. There is no clear correlation between the quadrupole magnitude and the other observed phase transitions, with the nematic and smectic A phases being variously stabilised and destabilised as the quadrupole magnitude increases. For the 4:1 molecules with large quadrupole moments, buckled smectic layers are observed in which some molecules are tilted with respect to a local layer normal. Of all the systems considered here, this buckled structure is the one which most closely resembles the elusive smectic C phase.
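For context, the underlying single-site Gay-Berne interaction between particles with orientations $\hat{u}_i$, $\hat{u}_j$ and separation $\mathbf{r} = r\hat{r}$ has the standard shifted Lennard-Jones form (stated here from the literature, not taken from the thesis):

$$ U_{GB} \;=\; 4\,\varepsilon(\hat{u}_i,\hat{u}_j,\hat{r}) \left[ \left(\frac{\sigma_0}{\,r-\sigma(\hat{u}_i,\hat{u}_j,\hat{r})+\sigma_0\,}\right)^{12} - \left(\frac{\sigma_0}{\,r-\sigma(\hat{u}_i,\hat{u}_j,\hat{r})+\sigma_0\,}\right)^{6} \right], $$

where the orientation-dependent range $\sigma$ and well depth $\varepsilon$ encode the aspect ratio (3:1 or 4:1 here) and the anisotropy of the attractive well; the IRGB variants described above build the rigid or flexible zig-zag conformation into this single-site anisotropy.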
315

Fast Numerical Algorithms for 3-D Scattering from PEC and Dielectric Random Rough Surfaces in Microwave Remote Sensing

January 2016 (has links)
abstract: We present fast and robust numerical algorithms for 3-D scattering from perfectly electrically conducting (PEC) and dielectric random rough surfaces in microwave remote sensing. The Coifman wavelets, or Coiflets, are employed to implement Galerkin's procedure in the method of moments (MoM). Owing to the high-precision one-point quadrature, the Coiflets yield fast evaluations of most off-diagonal entries, reducing the matrix-fill effort from O(N^2) to O(N). The orthogonality and Riesz-basis property of the Coiflets generate a well-conditioned impedance matrix, with rapid convergence of the conjugate gradient solver. The resulting impedance matrix is further sparsified by the matrix-form standard fast wavelet transform (SFWT). By properly selecting the multiresolution levels of the total transformation matrix, the solution precision can be enhanced without noticeably sacrificing matrix sparsity or memory consumption. The unified fast scattering algorithm for dielectric random rough surfaces reduces asymptotically to the PEC case as the loss tangent grows extremely large. Numerical results demonstrate that the reduced PEC model does not suffer from ill-posedness, and good agreement is observed with previous publications and laboratory measurements. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2016
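The fast-fill claim rests on the one-point quadrature property of Coifman scaling functions, a standard result stated here in generic notation: if the scaling function $\varphi$ has vanishing moments $\int x^{\ell}\varphi(x)\,dx = 0$ for $\ell = 1,\dots,L-1$ with $\int\varphi(x)\,dx = 1$, then for a smooth integrand

$$ \int f(x)\,\varphi_{j,k}(x)\,dx \;=\; 2^{-j/2}\, f\!\left(2^{-j}k\right) \;+\; O\!\left(2^{-j(L+1/2)}\right), $$

so each smooth (far-interaction) Galerkin entry collapses to essentially a single evaluation of the kernel rather than a full numerical quadrature.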
316

Concepts of heat and temperature from the perspective of the initial-problematization pedagogical moment

Araújo, Artur Torres de 24 February 2015 (has links)
This research was carried out at the Escola Estadual Imaculada Conceição in the city of Cabedelo, with 40 high-school students. Two learning situations (SA) were developed, focusing on the "initial problematization", the first of the three stages of the dynamic of pedagogical moments proposed by Delizoicov, and a teaching situation (SE) was developed to guide the application of the SA. The SA were carried out with the students in the form of texts containing history of science together with questions that the students had to debate, argue about and answer, concerning both the text and everyday experience of the concepts of heat and temperature. The practice was completed with a pre-test of multiple-choice questions on the concepts of heat and temperature. The students' answers were analysed using the conceptual-profile approach proposed by Mortimer, and the ideas expressed by the students were confronted with current scientific ideas in order to diagnose conceptual cognitive barriers. In the analysis of the students' responses to the SA, 25 conceptual cognitive barriers were identified, of which 10 fell in the category related to the concepts of cold, heat and temperature and were classified as belonging to the animist, substantialist and realist zones of the conceptual profile. In the pre-test 9 were identified, 2 of which had already appeared in the students' responses to the SA; these 9 conceptual cognitive barriers were classified as belonging to the substantialist and realist zones of the conceptual profile. These conceptual cognitive barriers should be taken into account by the teacher when planning classes, and the teacher should seek to develop practices aimed at overcoming them, making students aware of the need to build new (scientific) knowledge that better explains everyday phenomena and leading them to rethink the world they live in. / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
317

Asymptotic and Numerical Algorithms in Applied Electromagnetism

January 2012 (has links)
abstract: Asymptotic and numerical methods are popular in applied electromagnetism. In this work the two methods are applied to collimated antennas and calibration targets, respectively. As an asymptotic method, the diffracted Gaussian beam approach (DGBA) is developed for the design and simulation of collimated multi-reflector antenna systems, based upon Huygens' principle and an expansion in independent Gaussian beams, referred to as frames. Simulating a reflector antenna hundreds to thousands of wavelengths in size requires 1E7 - 1E9 independent Gaussian beams; to this end, high-performance parallel computing is implemented, based on the Message Passing Interface (MPI). The second part of the dissertation treats plane-wave scattering from a target consisting of a doubly periodic array of sharp conducting circular cones, using the magnetic field integral equation (MFIE) via a Coiflet-based Galerkin procedure in conjunction with the Floquet theorem. Owing to the orthogonality, compact support, continuity and smoothness of the Coiflets, well-conditioned impedance matrices are obtained. The majority of the matrix entries are obtained in the spectral domain by one-point quadrature with high precision. For the oscillatory entries, spatial-domain computation is applied, bypassing the slow convergence of the spectral summation of the non-damping propagating modes. The simulation results are compared with solutions from the RWG-MLFMA-based commercial software FEKO, and excellent agreement is observed. / Dissertation/Thesis / Ph.D. Electrical Engineering 2012
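Since the DGBA sums contributions from 1E7 - 1E9 independent Gaussian beams, the parallelisation is naturally embarrassingly parallel over beams. The sketch below is my own illustration of that MPI pattern, not the dissertation's code; `beam_field` and all parameters are hypothetical placeholders. Each rank sums a disjoint subset of beams and the partial fields are reduced on rank 0.

```python
# Sketch: distribute independent Gaussian-beam contributions over MPI ranks.
# Requires numpy and mpi4py; run e.g. with `mpiexec -n 4 python beams.py`.
import numpy as np
from mpi4py import MPI

def beam_field(beam_index, obs_points):
    """Hypothetical per-beam evaluator: field of one Gaussian beam (frame) at the
    observation points. A real DGBA code would propagate/diffract the beam
    through the multi-reflector system here."""
    rng = np.random.default_rng(beam_index)
    return rng.standard_normal(obs_points.size) + 1j * rng.standard_normal(obs_points.size)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_beams = 10_000                              # 1E7-1E9 beams in the actual simulations
obs_points = np.linspace(-1.0, 1.0, 256)      # hypothetical observation grid

# Static round-robin partition: rank r handles beams r, r+size, r+2*size, ...
local = np.zeros(obs_points.size, dtype=complex)
for b in range(rank, n_beams, size):
    local += beam_field(b, obs_points)

# Element-wise sum of the partial fields on rank 0.
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print("field magnitude at first observation point:", abs(total[0]))
```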
318

Plasma-phonon dynamics and terahertz emission in GaAs and Te surfaces excited via ultrafast pulses

Fabricio Macedo de Souza 10 April 2000 (has links)
Above-band-gap optical excitation of semiconductors generates highly non-equilibrium photocarriers which interact with the lattice, exciting longitudinal optical phonon modes. This interaction induces refractive-index changes via the electro-optic effect and gives rise to time-dependent polarizations that emit electromagnetic radiation at characteristic (terahertz) frequencies. Both effects have been measured by ultrafast time-resolved spectroscopy: recent pump-probe experiments have found strong modulations of the internal electric field through time-resolved reflectivity (electro-optic) measurements, and the emitted radiation has been detected with antennas operating in the terahertz range. Both electro-optic and terahertz-emission measurements therefore provide information about the coupled dynamics of photocarriers and phonons after optical excitation. In this work we simulate the dynamics of plasmon-phonon coupled modes in bulk n-GaAs and tellurium following ultrafast laser excitation. The time evolution of the photocarrier densities and currents is described semiclassically by hydrodynamic equations (moments of the Boltzmann equation), and phonon effects are accounted for by a phenomenological driven-harmonic-oscillator equation describing the longitudinal optical lattice oscillations; both are coupled to the electron-hole plasma through Poisson's equation, from which the field generated by the plasma and the lattice polarization is calculated. These equations form a coupled set of six differential equations (four of them partial), which we solve by finite differencing. From the numerical results for the evolution of the internal electric field we determine the characteristic oscillation frequencies of the system and calculate the radiated field. Our results are in qualitative agreement with the experiments.
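A minimal sketch of the finite-difference treatment of the phenomenological driven-oscillator equation mentioned above (my own illustration: the damping, frequency and driving term are generic placeholders, not the thesis's coefficients, and the hydrodynamic/Poisson coupling is omitted).

```python
# Sketch: explicit finite differencing of a damped, driven harmonic oscillator
#   w'' + gamma*w' + omega0^2 * w = S(t)
# standing in for the phenomenological LO-phonon equation driven by the
# photoexcited plasma; the plasma/Poisson coupling is left out here.
import numpy as np

omega0 = 2 * np.pi * 8.8e12     # illustrative LO-phonon angular frequency (rad/s)
gamma = 1.0e12                  # illustrative damping rate (1/s)
dt = 1.0e-15                    # time step (s); must resolve 2*pi/omega0
n_steps = 4000

def drive(t):
    """Placeholder for the driving field set up by the ultrafast excitation."""
    t0, width = 200e-15, 50e-15
    return np.exp(-((t - t0) / width) ** 2)

w = np.zeros(n_steps)           # lattice displacement coordinate
for n in range(1, n_steps - 1):
    dw = (w[n] - w[n - 1]) / dt                       # backward-difference velocity
    w[n + 1] = 2 * w[n] - w[n - 1] + dt**2 * (drive(n * dt) - gamma * dw - omega0**2 * w[n])

# In the full model the resulting polarization, via Poisson's equation, feeds back
# on the carriers and shapes the emitted terahertz spectrum, e.g.
spectrum = np.abs(np.fft.rfft(w))
```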
319

Evaluation and new hydrologic regionalization proposal for the State of São Paulo

Wagner Wolff 06 February 2013 (has links)
Hydrological regionalization is a technique that allows information to be transferred between similar watersheds in order to estimate the hydrological variables of interest at sites for which no data are available; it is therefore a useful tool for granting water-use rights, an instrument provided for by Brazilian Law 9433/97. Because the current hydrological regionalization model for the State of São Paulo, proposed in the 1980s, is outdated, this study aims to assess whether that model remains fit for use once its database is updated, and to propose a new model that overcomes the limitations of the old one. The study was conducted in the State of São Paulo, which has an area of approximately 248,197 km² and lies between longitudes -44°9' and -53°5' and between latitudes -22°40' and -22°39'. Data from 176 gauging stations administered by DAEE and ANA, available at http://www.sigrh.sp.gov.br, were used initially. For each station, the mean annual rainfall of the watershed (P), the long-term mean streamflow (Q), the minimum mean streamflow over 7 consecutive days with a 10-year return period (Q7,10) and the streamflows exceeded 90% and 95% of the time (Q90 and Q95) were determined. A consistency analysis then excluded the inconsistent stations, leaving 172 to be used in the evaluation of the existing model and the formulation of a new one. The model was evaluated with the confidence index (c), defined as the product of the correlation coefficient (r) and the index of agreement (d), using the streamflows generated by the model as estimates and those computed from the gauging stations as reference values. All the streamflows evaluated were classified as optimal, with a confidence index (c) above 0.85; the current model therefore rejected the hypothesis that updating its database would impair its predictive ability, and it can still be used to obtain the studied streamflows, which are the reference for issuing water-use grants in several Brazilian states. The model nevertheless showed some limitations, such as extrapolation to drainage areas smaller than those used to formulate it, and a problem in its computational application, which reports the mean annual precipitation at the geographic coordinates of the water-intake point rather than over the drainage basin upstream of that point. A new model was therefore formulated that overcomes these limitations and provides greater predictive ability than the old one.
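A minimal sketch of the evaluation statistic described above, assuming Pearson's r and Willmott's index of agreement for d (the abstract names them only generically, so the exact variant is my assumption); the station and model streamflow values are hypothetical.

```python
# Sketch: confidence index c = r * d used to grade the regionalization model,
# with r = Pearson correlation and d = (assumed) Willmott's index of agreement.
import numpy as np

def confidence_index(observed, predicted):
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    r = np.corrcoef(obs, pred)[0, 1]                     # Pearson correlation
    d = 1.0 - np.sum((pred - obs) ** 2) / np.sum(
        (np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2
    )                                                    # index of agreement
    return r * d, r, d

# Hypothetical example: streamflows at gauged stations vs. regionalized estimates (m3/s).
q_station = [12.1, 8.4, 25.3, 4.9, 17.0]
q_model = [11.5, 9.0, 24.1, 5.3, 18.2]
c, r, d = confidence_index(q_station, q_model)
print(f"c = {c:.3f} (r = {r:.3f}, d = {d:.3f})")         # c > 0.85 was graded 'optimal'
```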
320

The effects of ownership structure on the dividend policy of Brazilian companies

Jose Wellington Brandão 27 August 2014 (has links)
Despite decades of findings on dividend policy, the decision to pay dividends remains a topic of debate. Several factors have been proposed to explain dividend policy, such as profit/profitability, the previous dividend (persistence of the dividend policy), firm size, leverage, and growth opportunities. More recently, the literature has explored the influence that ownership structure may have on the distribution of dividends. In this context arise propositions such as the hypotheses relating dividend policy to its use as an instrument for controlling executive management, and to the possible expropriation of minority shareholders by controlling shareholders. The objective of this research is to evaluate, within the theoretical framework of agency theory, whether dividend policy is used in the Brazilian market as an instrument for monitoring management or for expropriating minority shareholders. The sample is a data panel of 1,890 annual observations of 223 companies over the period 1996-2012, built from data collected in the Economática system for companies listed on the São Paulo Stock Exchange. Estimation of a set of explanatory models of dividend policy indicates that the presence of a majority shareholder has a negative effect on dividend policy, in line with the expropriation hypothesis. Another relevant result is the positive effect of the presence of another non-financial company, as majority or principal shareholder, on the level of dividend distribution, which is consistent with the executive-monitoring hypothesis.
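A minimal sketch of the kind of panel specification described above (all variable names, the file name and the fixed-effects/clustering choices are my assumptions for illustration; the thesis estimates its own set of models from the Economática data).

```python
# Sketch: dividend payout regressed on ownership and control variables with
# firm fixed effects and firm-clustered standard errors; names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per firm-year, e.g. columns:
#   firm, year, payout, majority_holder, nonfin_blockholder, size, leverage, roa
df = pd.read_csv("panel_dividends.csv")   # hypothetical input file

model = smf.ols(
    "payout ~ majority_holder + nonfin_blockholder + size + leverage + roa + C(firm)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["firm"]})
print(model.summary())

# A negative coefficient on majority_holder is consistent with the expropriation
# hypothesis; a positive one on nonfin_blockholder with the monitoring hypothesis.
```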
