  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
521

Expertise in credit granting : studies on judgment and decision-making behavior

Andersson, Patric January 2001 (has links)
How do experienced lenders make decisions? This dissertation addresses this question by investigating the judgment and decision-making behavior of loan officers in banks as well as credit managers in supplying companies. The dissertation applies an integrated economic-psychological perspective and consists of seven parts: a comprehensive literature review and six separate empirical papers. Reviewed areas are research on judgment and decision-making (JDM), research on expert decision-makers, and earlier empirical work on experienced lenders. The six papers shed light on: (1) desirable personal attributes of expert credit analysts; (2) the use of software to track JDM behavior; (3) differences between novices' and experienced loan officers' JDM behavior; (4) the relationships between information acquisition, risk attitude, and experience; (5) attitudes towards credit decision support systems; and (6) attitudes towards requesting collateral. Employed methods were in-depth interviews, a nation-wide survey, and a computer-based experiment. On the whole, the empirical findings give an ambiguous picture of the alleged superiority of experienced lenders' judgment and decision-making behavior. On the one hand, experienced lenders seem to be capable, careful, and conscious of their responsibility. On the other hand, they tend to disagree and make contradictory judgments and decisions. The applied perspective and methodology are not only aimed at providing better insights into how experienced lenders make decisions, but can also stimulate future research on how professionals in domains other than credit granting make decisions. / Diss. Stockholm : Handelshögsk., 2001
522

Opening New Radio Windows and Bending Twisted Beams

Nordblad, Erik January 2011 (has links)
In ground based high frequency (HF) radio pumping experiments, absorption of ordinary (O) mode pump waves energises the ionospheric plasma, producing optical emissions and other effects. Pump-induced or natural kilometre-scale field-aligned density depletions are believed to play a role in self-focussing phenomena such as the magnetic zenith (MZ) effect, i.e., the increased plasma response observed in the direction of Earth's magnetic field. Using ray tracing, we study the propagation of ordinary (O) mode HF radio waves in an ionosphere modified by density depletions, with special attention to transmission through the radio window (RW), where O mode waves convert into the extraordinary (X, or Z) mode. The depletions are shown to shift the position of the RW, or to introduce RWs at new locations. In a simplified model neglecting absorption, we estimate the wave electric field strength perpendicular to the magnetic field at altitudes normally inaccessible. This field could excite upper hybrid waves on small scale density perturbations. We also show how transmission and focussing combine to give stronger fields in some directions, notably at angles close to the MZ, with possible implications for the MZ effect. In a separate study, we consider electromagnetic (e-m) beams with helical wavefronts (i.e., twisted beams), which are associated with orbital angular momentum (OAM). By applying geometrical optics to each plane wave component of a twisted nonparaxial e-m Bessel beam, we calculate analytically the shift of the beam's centre of gravity during propagation perpendicularly and obliquely to a weak refractive index gradient in an isotropic medium. In addition to the so-called Hall shifts expected from paraxial theory, the nonparaxial treatment reveals new shifts in both the transverse and lateral directions. In some situations, the new shifts should be significant also for nearly paraxial beams.
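As a rough illustration of the kind of ray tracing involved, the sketch below integrates the standard ray equation d/ds(n dr/ds) = grad n through a plasma layer containing a kilometre-scale density depletion. It is a deliberately simplified, isotropic (unmagnetised) cold-plasma model, so it has no O/X mode distinction or radio window; all densities, scales and frequencies are invented placeholders, not values from the thesis.

    import numpy as np

    F_PUMP = 5.0e6           # pump wave frequency in Hz (illustrative)
    K = 80.6                 # f_p^2 [Hz^2] = 80.6 * n_e [m^-3]

    def electron_density(x, z):
        """Gaussian ionospheric layer with a field-aligned depletion (illustrative values)."""
        layer = 4.0e11 * np.exp(-((z - 250e3) / 50e3) ** 2)
        depletion = 1.0 - 0.3 * np.exp(-((x - 5e3) / 1e3) ** 2)   # 30 % density dip, ~1 km wide
        return layer * depletion

    def n_refr(x, z):
        """Isotropic (unmagnetised) cold-plasma refractive index."""
        n2 = 1.0 - K * electron_density(x, z) / F_PUMP ** 2
        return np.sqrt(max(n2, 0.0))

    def grad_n(x, z, eps=10.0):
        """Central-difference gradient of the refractive index."""
        gx = (n_refr(x + eps, z) - n_refr(x - eps, z)) / (2 * eps)
        gz = (n_refr(x, z + eps) - n_refr(x, z - eps)) / (2 * eps)
        return np.array([gx, gz])

    def trace_ray(x0, z0, elevation_deg, ds=50.0, n_steps=20000):
        """Explicit integration of the ray equation d/ds (n dr/ds) = grad n."""
        r = np.array([x0, z0], float)
        t = np.array([np.cos(np.radians(elevation_deg)), np.sin(np.radians(elevation_deg))])
        path = [r.copy()]
        for _ in range(n_steps):
            p = n_refr(*r) * t + ds * grad_n(*r)   # update the ray momentum n * dr/ds
            t = p / np.linalg.norm(p)
            r = r + ds * t
            path.append(r.copy())
            if r[1] < 0 or n_refr(*r) == 0.0:      # ray returned to ground / reached cutoff
                break
        return np.array(path)

    ray = trace_ray(0.0, 100e3, 80.0)
    print("apex altitude: %.1f km" % (ray[:, 1].max() / 1e3))

The depletion lowers the local plasma frequency and so bends nearby rays, which is the basic mechanism behind the shifted or newly introduced radio windows discussed in the abstract.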
523

Gravity wave coupling of the lower and middle atmosphere.

Love, Peter Thomas January 2009 (has links)
A method of inferring tropospheric gravity wave source characteristics from middle atmosphere observations has been adapted from previous studies for use with MF radar observations of the equatorial mesosphere-lower thermosphere at Christmas Island in the central Pacific. The nature of the techniques applied also permitted an analysis of the momentum flux associated with the characterised sources and its effects on the equatorial mean flow and diurnal solar thermal tide. An anisotropic function of gravity wave horizontal phase speed was identified as being characteristic of convectively generated source spectra. This was applied stochastically to a ray-tracing model to isolate numerical estimates of the function parameters. The inferred spectral characteristics were found to be consistent with current theories relating convective gravity wave spectra to tropospheric conditions and parameters characterising tropical deep convection. The results obtained provide observational constraints on the model spectra used in gravity wave parameterisations in numerical weather prediction and general circulation models. The interaction of gravity waves with the diurnal solar thermal tide was found to cause an amplification of the tide in the vicinity of the mesopause. The gravity wave-tidal interactions were highly sensitive to spectral width and amplitude. Estimates were made of the high-frequency gravity wave contribution to forcing the mesospheric semiannual oscillation (MSAO), with variable results. The data used in the analysis are part of a large archive which now has the potential to provide tighter constraints on wave spectra through the use of the methods developed here. / http://proxy.library.adelaide.edu.au/login?url= http://library.adelaide.edu.au/cgi-bin/Pwebrecon.cgi?BBID=1352362 / Thesis (Ph.D.) -- University of Adelaide, School of Chemistry and Physics, 2009
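The wind filtering that shapes which waves reach the mesosphere can be sketched with the non-rotating Boussinesq dispersion relation m^2 = k_h^2 (N^2/omega_hat^2 - 1), where omega_hat = k_h (c - U(z)) is the intrinsic frequency: waves are absorbed at critical levels (omega_hat -> 0) and reflected where m^2 <= 0. The toy ray trace below uses an invented wind profile and a constant buoyancy frequency; it is far simpler than the stochastic ray-tracing model used in the thesis.

    import numpy as np

    N_BV = 0.02              # Brunt-Vaisala frequency [rad/s] (illustrative constant)

    def background_wind(z):
        """Idealised zonal wind profile [m/s]; purely illustrative."""
        return 20.0 * np.sin(2 * np.pi * z / 40e3)

    def trace_gravity_wave(phase_speed, horizontal_wavelength, dz=250.0, z_top=100e3):
        """Step a gravity wave ray upward and return (altitude, fate) using
        m^2 = k_h^2 (N^2/omega_hat^2 - 1) with omega_hat = k_h (c - U(z))."""
        k_h = 2 * np.pi / horizontal_wavelength
        for z in np.arange(0.0, z_top, dz):
            omega_hat = k_h * (phase_speed - background_wind(z))
            if abs(omega_hat) < 1e-6:
                return z, "critical level (absorbed)"
            m2 = k_h ** 2 * (N_BV ** 2 / omega_hat ** 2 - 1.0)
            if m2 <= 0:
                return z, "turning level (reflected)"
        return z_top, "reaches the mesosphere-lower thermosphere"

    for c in (5.0, 25.0, 50.0):      # ground-relative phase speeds [m/s]
        print(c, trace_gravity_wave(c, 50e3))

Only the waves that survive this filtering can deposit momentum near the mesopause, which is where the tidal amplification and MSAO forcing discussed above come in.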
524

An assessment of recent changes in catchment sediment sources and sinks, central Queensland, Australia

Hughes, Andrew Owen, Physical, Environmental & Mathematical Sciences, Australian Defence Force Academy, UNSW January 2009 (has links)
Spatial and temporal information on catchment sediment sources and sinks can provide an improved understanding of catchment response to human-induced disturbances. This is essential for the implementation of well-targeted catchment-management decisions. This thesis investigates the nature and timing of catchment response to human activities by examining changes in sediment sources and sinks in a dry-tropical subcatchment of the Great Barrier Reef (GBR) catchment area, in northeastern Australia. Changes in catchment sediment sources, both in terms of spatial provenance and erosion type, are determined using sediment tracing techniques. Results indicate that changes in sediment source contributions over the last 250 years can be linked directly to changes in catchment land use. Sheetwash and rill erosion from cultivated land (40-60%) and channel erosion from grazed areas (30-80%) currently contribute most sediment to the river system. Channel erosion, on a basin-wide scale, appears to be more important than previously considered in this region of Australia. Optically stimulated luminescence and 137Cs dating are used to determine pre- and post-European settlement (ca. 1850) alluvial sedimentation rates. The limitations of using 137Cs as a floodplain sediment dating tool in a low fallout environment, dominated by sediment derived from channel and cultivation sources, are identified. Low magnitude increases in post-disturbance floodplain sedimentation rates (3 to 4 times) are attributed to the naturally high sediment loads in the dry-tropics. These low increases suggest that previous predictions which reflect order of magnitude increases in post-disturbance sediment yields are likely to be overestimates. In-channel bench deposits, formed since European settlement, are common features that appear to be important stores of recently eroded material. The spatially distributed erosion/sediment yield model SedNet is applied, both with generic input parameters and locally-derived data. Outputs are evaluated against available empirically-derived data. The results suggest that previous model estimates using generic input parameters overestimate post-disturbance and underestimate pre-disturbance sediment yields, exaggerating the impact of European catchment disturbance. This is likely to have important implications for both local-scale and catchment-wide management scenarios in the GBR region. Suggestions for future study and the collection of important empirical data to enable more accurate model performance are made.
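Sediment tracing of this general kind rests on a linear mixing model: the tracer signature of a downstream sediment sample is treated as a proportion-weighted mixture of the candidate source signatures, and the proportions are estimated by constrained least squares. The sketch below uses entirely invented tracer values and generic source names; it illustrates the idea only and is not the fingerprinting model applied in the thesis.

    import numpy as np

    # Hypothetical tracer concentrations (rows: tracers, columns: sources).
    # Sources: cultivated topsoil, grazed topsoil, channel bank material.
    sources = np.array([
        [12.0,  8.0,  3.0],    # 137Cs activity (Bq/kg)
        [ 4.5,  3.8,  1.2],    # excess 210Pb (Bq/kg)
        [ 1.9,  1.4,  2.6],    # total phosphorus (g/kg)
    ])
    mixture = np.array([6.8, 2.6, 2.1])   # downstream sediment sample (made-up values)

    # Enforce sum-to-one with a strongly weighted extra equation, solve by least squares,
    # then clip and renormalise so the proportions stay physically meaningful.
    w = 1e3
    A = np.vstack([sources, w * np.ones(3)])
    b = np.append(mixture, w * 1.0)
    proportions, *_ = np.linalg.lstsq(A, b, rcond=None)
    proportions = np.clip(proportions, 0.0, 1.0)
    proportions /= proportions.sum()

    for name, p in zip(["cultivated", "grazed", "channel"], proportions):
        print(f"{name:10s} {100 * p:5.1f} %")

With these invented numbers the solver attributes roughly 30/20/50 % to the three sources, the same kind of output (e.g. the 40-60 % cultivated and 30-80 % channel contributions above) that the real multi-tracer analysis produces.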
525

Développement d'une méthodologie de la «modélisation compartimentale» des systèmes en écoulement avec ou sans réaction chimique à partir d'expériences de traçage et de simulations de mécanique des fluides numérique / Development of "compartmental modelling" methodology of flowing systems with or without chemical reaction using tracing experiments and computational fluid dynamics simulations

Haag, Jérémie 05 December 2017 (has links)
Cette thèse traite de la modélisation des réacteurs chimiques par la « modélisation compartimentale », qui consiste à diviser le système en un réseau d'une dizaine à quelques centaines de volumes interconnectés, appelés compartiments. La structure du réseau est déduite à partir d'informations provenant d'expériences de traçage, d'informations techniques sur le réacteur chimique, de simulations de mécanique des fluides numérique et des objectifs de la modélisation. Cette méthode procure un bon compromis entre temps de calcul et finesse des résultats. Quand ils sont correctement menés, les modèles à compartiments donnent des prédictions similaires, en termes de réactions chimiques, à ceux issus des simulations de mécanique des fluides numérique réactive avec un temps de calcul plus court et une représentation physique plus concrète du comportement du réacteur. Chaque étude issue de la littérature est consacrée à un réacteur spécifique avec une approche particulière qui ne peut pas être directement transposée sur un autre réacteur. L'objectif de cette thèse est d'apporter une contribution au développement d'une méthodologie la plus générale possible et de développer un outil de génération automatique et de résolution du système d'équations différentielles qui doit être résolu. Dans le premier chapitre, un état de l'art est réalisé, définissant le champ d'application de notre méthode, dans le but d'identifier les méthodes de découpage les plus pertinentes et les différentes méthodes pour calculer les échanges entre les compartiments. Dans un second chapitre, une méthode générale pour de la modélisation compartimentale est développée. Une approche polyvalente est proposée, consistant à découper le réacteur en tranches identiques. Le calcul des échanges entre compartiments, dus à la convection et la turbulence, est présenté en détail, avec la description des trois méthodes de calcul des échanges turbulents. Une interface a été développée permettant de construire n'importe quel réseau de compartiments. À partir de cette interface, les équations sont écrites et automatiquement résolues. La méthode est appliquée dans un troisième chapitre sur un cas défavorable au découpage en tranches. Cela a permis de tester les limites de cette approche. En particulier, deux points ont été étudiés : (1) l'applicabilité du découpage en tranches identiques et (2) la comparaison entre les méthodes de calcul des échanges turbulents. Le premier test a prouvé la robustesse de l'approche par division mais le second test n'a pas permis d'établir si une méthode de calcul est meilleure qu'une autre. Finalement, la méthode a été valorisée et transférée en implémentant les algorithmes développés dans un logiciel commercial. Ce logiciel permet de simuler la dispersion d'espèces réactives et non réactives (traceurs), dans un modèle contenant plusieurs centaines de compartiments organisés en tranches identiques / This PhD deals with the modelling of chemical reactors using the "compartmental modelling" approach, which consists in dividing the system into a network of a dozen to several hundred interconnected volumes, called compartments. The structure of the network is deduced from tracer experiments, technical information about the chemical reactor, computational fluid dynamics simulations and the objectives of the modelling. This method provides a good compromise between computation time and accuracy of the results. When they are properly set up, compartmental models give predictions similar, in terms of chemical reactions, to those of reactive CFD simulations, with a shorter calculation time and a more concrete physical representation of the reactor behavior. Each study in the literature is devoted to a specific reactor, with a particular approach that cannot be straightforwardly transposed to other reactors. The aim of this PhD is to contribute to the development of as general a methodology as possible and to develop a tool that automatically generates and solves the system of differential equations that must be solved. In the first chapter, a state of the art is presented, defining the field of application of our method, in order to identify the most relevant division methods and the different methods for calculating the exchanges between compartments. In the second chapter, a general methodology for compartmental modelling is developed. A versatile approach is proposed, consisting in dividing the reactor into identical slices. The calculation of the exchanges between compartments, due both to convection and to turbulence, is presented in detail, with a description of three calculation methods for turbulent exchange. An interface has been developed that allows any network of compartments to be built; from this interface, the equations are written and solved automatically. In the third chapter, the methodology is applied to a case that is unfavourable to slicing, which made it possible to test the limits of this approach. In particular, two points were studied: (1) the applicability of the division into identical slices and (2) the comparison between the turbulent exchange calculation methods. The first test proved the robustness of the division approach, but the second did not establish whether one calculation method is better than another. Finally, the methodology was promoted and transferred by implementing the developed algorithms in a commercial software package. This software makes it possible to simulate the dispersion of reactive and non-reactive (tracer) species in a model containing several hundred compartments organized in identical slices.
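Concretely, a compartmental model of this kind is a set of coupled mass balances: each compartment i of volume V_i exchanges volumetric flow rates Q_ij with its neighbours (convective flows plus turbulent exchange terms), giving V_i dC_i/dt = sum_j (Q_ji C_j - Q_ij C_i), possibly plus reaction terms. A minimal tracer example on a loop of identical compartments, with invented volumes and flow rates, might look like the following sketch; it is an illustration of the principle, not the tool developed in the thesis.

    import numpy as np

    # Hypothetical network: 5 identical compartments in a loop (slice-type division),
    # with a main circulation flow and a smaller turbulent back-exchange.
    n = 5
    V = np.full(n, 0.2)                  # compartment volumes [m^3]
    Q_fwd, Q_back = 1.0e-3, 2.0e-4       # flow rates [m^3/s] (illustrative)

    # Q[i, j] = volumetric flow rate from compartment i to compartment j.
    Q = np.zeros((n, n))
    for i in range(n):
        Q[i, (i + 1) % n] += Q_fwd       # main circulation
        Q[(i + 1) % n, i] += Q_back      # turbulent exchange, modelled as a counter-flow

    def dCdt(C):
        """Tracer mass balance: V_i dC_i/dt = sum_j (Q_ji C_j - Q_ij C_i)."""
        inflow = Q.T @ C                 # sum_j Q_ji * C_j
        outflow = Q.sum(axis=1) * C      # C_i * sum_j Q_ij
        return (inflow - outflow) / V

    # Pulse of tracer injected into compartment 0, then explicit Euler integration.
    C = np.zeros(n)
    C[0] = 1.0
    dt = 1.0                             # s
    for _ in range(2000):
        C = C + dt * dCdt(C)

    print("tracer concentrations after 2000 s:", np.round(C, 4))
    print("tracer mass conserved:", np.isclose((C * V).sum(), 0.2))

The automatic generator described in the thesis essentially builds the matrix Q (and the reaction terms) from the CFD solution for networks of hundreds of compartments, then solves the resulting ODE system.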
526

Débogage des systèmes embarqués multiprocesseur basé sur la ré-exécution déterministe et partielle / Deterministic and partial replay debugging of multiprocessor embedded systems

Georgiev, Kiril 04 December 2012 (has links)
Les plates-formes MPSoC permettent de satisfaire les contraintes de performance, de flexibilité et de consommation énergétique requises par les systèmes embarqués émergents. Elles intègrent un nombre important de processeurs, des blocs de mémoire et des périphériques, hiérarchiquement organisés par un réseau d'interconnexion. Le développement du logiciel est réputé difficile, notamment dû à la gestion d'un grand nombre d'entités (tâches/threads/processus). L'exécution concurrente de ces entités permet d'exploiter efficacement l'architecture mais complexifie le processus de mise au point et notamment l'analyse des erreurs. D'une part, les exécutions peuvent être non-déterministes notamment dû à la concurrence, c'est à dire qu'elles peuvent se dérouler d'une manière différente à chaque reprise. En conséquence, il n'est pas garanti qu'une erreur se produirait durant la phase de mise au point. D'autre part, la complexité de l'architecture et de l'exécution peut rendre trop important le nombre d'éléments à analyser afin d'identifier une erreur. Il pourrait donc être difficile de se focaliser sur des éléments potentiellement fautifs. Un des défis majeurs du développement logiciel MPSoC est donc de réduire le temps de la mise au point. Dans cette thèse, nous proposons une méthodologie de mise au point qui aide le développeur à identifier les erreurs dans le logiciel MPSoC. Notre premier objectif est de déboguer une même exécution plusieurs fois afin d'analyser des sources potentielles de l'erreur jusqu'à son identification. Nous avons donc identifié les sources de non-déterminisme MPSoC et proposé des mécanismes de ré-exécution déterministe les plus adaptés. Notre deuxième objectif vise à minimiser les ressources pour reproduire une exécution afin de satisfaire la contrainte MPSoC de maîtrise de l'intrusion. Nous avons donc utilisé des mécanismes efficaces de ré-exécution déterministe et considéré qu'une partie du comportement non-déterministe. Le troisième objectif est de permettre le passage à l'échelle, c'est à dire de déboguer des exécutions caractérisées par un nombre d'éléments de plus en plus croissant. Nous avons donc proposé une méthode qui permet de circonscrire et de déboguer qu'une partie de l'exécution. De plus, cette méthode s'applique aux différents types d'architectures et d'applications MPSoC. / MPSoC platforms provide the high performance, low power consumption and flexibility required by emerging embedded systems. They incorporate many processing units, memory blocks and peripherals, hierarchically organized by an interconnection network. Software development is known to be difficult, notably because of the management of multiple entities (tasks/threads/processes). The concurrent execution of these entities makes it possible to exploit the architecture efficiently but complicates the refinement process of the software, and especially the debugging activity. On the one hand, executions of the software can be non-deterministic, notably due to concurrency, i.e. they can unfold differently each time; consequently, there is no guarantee that an error will occur during the debugging activity. On the other hand, the complexity of the architecture and of the execution can increase the number of elements to be analyzed in the debugging process, making it difficult to concentrate on the potentially faulty elements. Therefore, one of the most important challenges in the development process of MPSoC software is to reduce the time of the refinement process. In this thesis, we propose a new methodology for refining MPSoC software which helps developers with the debugging activity. Our first objective is to be able to debug the same execution several times in order to analyze potential sources of the error. To do so, we identified the sources of non-determinism in MPSoC software executions and propose the most appropriate methods to record and replay them. Our second objective is to reduce the execution overhead required by the record mechanisms, to limit the intrusiveness, which is an important MPSoC constraint. To accomplish this objective, we consider only a part of the non-deterministic behaviour and selected efficient record-replay methods. The third objective is to provide a scalable solution, i.e. to be able to debug increasingly complex executions, characterized by a growing number of elements. Therefore, we propose a partial replay method which makes it possible to isolate and debug a fraction of the execution elements. Moreover, this method applies to different types of MPSoC architectures and applications.
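The core idea of deterministic record/replay can be shown with a deliberately tiny example: during the record phase, the outcomes of non-deterministic events (here, the interleaving of two tasks contending for a shared counter) are logged; during replay, the scheduler consults the log and forces the same interleaving, so the buggy execution can be re-examined as many times as needed. This is only a conceptual Python sketch, far removed from the MPSoC tracing infrastructure described in the thesis.

    import random

    def run(tasks, schedule=None, log=None):
        """Interleave generator-based tasks one step at a time.
        Without 'schedule', the next task is chosen at random (record phase);
        with 'schedule', the logged choices are followed exactly (replay phase)."""
        tasks = dict(tasks)
        step = 0
        while tasks:
            name = schedule[step] if schedule is not None else random.choice(list(tasks))
            if log is not None:
                log.append(name)
            try:
                next(tasks[name])          # let the chosen task run to its next yield
            except StopIteration:
                del tasks[name]
            step += 1

    state = {"counter": 0}

    def worker(shared):
        """Non-atomic read-modify-write of a shared counter: a classic lost-update race."""
        for _ in range(3):
            local = shared["counter"]      # read
            yield                          # interleaving point (source of non-determinism)
            shared["counter"] = local + 1  # write back

    # Record phase: run once and log the interleaving that actually happened.
    log = []
    run({"A": worker(state), "B": worker(state)}, log=log)
    recorded = state["counter"]

    # Replay phase: the logged schedule forces the same interleaving and the same result.
    state["counter"] = 0
    run({"A": worker(state), "B": worker(state)}, schedule=log)
    assert state["counter"] == recorded
    print("schedule:", log, "-> counter =", recorded, "(reproduced deterministically)")

Partial replay, as proposed in the thesis, amounts to recording and re-executing only a subset of such events or tasks, which keeps the logging overhead and the replayed state within the intrusiveness budget of the platform.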
527

Compréhension expérimentale et numérique des chemins de l'eau sur l'ensemble du champ captant de la Métropole de Lyon / Experimental and numerical understanding of water paths on the well field of Lyon agglomeration

Réfloch, Aurore 31 May 2018 (has links)
L’alimentation en eau potable des 1 300 000 habitants de la Métropole de Lyon provient essentiellement de réserves souterraines, puisées sur le site du champ captant de Crépieux-Charmy. Ce captage est un système complexe de par sa superficie (375 ha), le nombre d’ouvrages de pompage (111 puits et forages), le système de réalimentation artificielle (12 bassins d’infiltration), la présence de différents bras du Rhône en interaction avec l’eau souterraine, mais également du fait de la complexité lithologique naturelle du sous-sol. La compréhension des interactions entre les compartiments de ce système est nécessaire pour assurer la pérennisation quantitative et qualitative de la ressource.La caractérisation des écoulements repose sur trois outils essentiels : l’observation, l’expérimentation et la modélisation numérique.L’observation, basée sur les nombreuses données acquises in-situ, met en évidence le rôle prépondérant de l’exploitation hydrique du site sur les écoulements (pompages et bassins). La réalimentation artificielle met en jeu, annuellement, un volume d’eau qui équivaut à la moitié du volume puisé sur l’ensemble du site, et entraîne un réchauffement non négligeable de la nappe en période estivale. Les cartes piézométriques et thermiques à l’échelle du champ captant permettent de visualiser les évolutions spatiales et temporelles des écoulements. D’après l’analyse de données, le dôme hydraulique créé par la réalimentation artificielle (et destiné à obtenir une barrière hydraulique de protection contre une contamination accidentelle des eaux de surface) semble perdurer au maximum 1 à 2 jours après l’arrêt de l’alimentation des bassins. Un indice d’infiltrabilité est défini pour déterminer la capacité d’infiltration de chaque bassin : tenant compte des diverses variables affectant la vitesse d’infiltration, une diminution temporelle de l’indice d’infiltrabilité reflète le colmatage progressif de la couche de sable de fond de bassin. Cet indice est de ce fait un outil d’aide à la décision pour la priorisation des bassins à réhabiliter.Le volet expérimental se décline en deux points : la caractérisation des fonds de bassins par essais d’infiltration (gain d’infiltrabilité par renouvellement du sable, couche compactée sous le sable caractérisée par une forte anisotropie de sa conductivité hydraulique) et la caractérisation des sens d’écoulement par traçage thermique à l’échelle d’un bassin. Un dispositif expérimental, créé de part et d’autre d’un des bassins permet de suivre l’évolution piézométrique et thermique lors des cycles de remplissage. La création des 31 ouvrages de ce dispositif expérimental a permis de mieux caractériser la lithologie en présence, de valider la présence de la zone non saturée règlementaire au droit du bassin, de confirmer l’existence d’écoulements sous le Vieux-Rhône mais aussi de mettre en évidence le fonctionnement 3D des écoulements.Enfin, un modèle numérique a été créé pour simuler les transferts d’eau et de chaleur, sur l’ensemble du site de captage. Cet outil permet d’identifier et de quantifier les sources d’alimentation de la zone de captage, de mettre en évidence la protection partielle des ouvrages de pompage par les dômes hydrauliques créés par les bassins, et de montrer la complexité des relations nappe-rivière, notamment leur dépendance au niveau d’eau. 
D’ores et déjà opérationnel pour des temps longs (supérieurs à 15 jours), l’outil numérique proposé est exploitable pour des scénarios d’évolutions climatiques ou d’évolutions de l’exploitation du site. Pour les temps inférieurs à deux semaines, le modèle nécessite une amélioration de la connaissance des interactions nappe-rivière et des transferts thermiques (prise en compte du non-équilibre thermique local).Mots clés : Hydrogéologie, réalimentation artificielle, essais d’infiltration, traçages thermiques, modélisation hydro-thermique 3D. / The supply of drinking water for the 1,300,000 inhabitants of Lyon Metropole mainly comes from underground reserves in the well field of Crépieux-Charmy. This well field is a complex system because of its surface area (375 ha), the number of pumping wells (111 wells), the artificial recharge system (12 infiltration ponds), the interaction between the Rhône River and groundwater, as well as its natural lithological complexity. Understanding the interactions between the compartments of this system is necessary to ensure quantitative and qualitative sustainability of the water resource.The characterization of field-scale flows is based on three essential tools: observation, experimentation and numerical modelling.The observations, based on a lot of operational field data, highlight the influence of site operation on the flows (pumping and basins). Annually, artificial recharge requires a volume of water which accounts for half of the volume pumped on the whole site. This also leads to a significant rise in water table temperatures during summer periods. Piezometric and water temperature maps at the well field scale allow for visualization of the spatial and temporal evolutions of the flow directions. According to the data analysis, hydraulic domes created by the artificial recharge (and designed to provide a hydraulic barrier to protect against accidental contamination of superficial water) seem to persist for a maximum of 1 to 2 days after water supply of the basins has been stopped. An infiltration index has been defined in order to determine the infiltration capacity of each basin. It takes into account the multiple variables affecting the infiltration rate. The temporal evolution of the infiltration capacity of each basin illustrates the fouling of the basement sand layer. The infiltration index is also a decision support tool for the prioritization of basins to be rehabilitated.The experimental component is divided into two parts: basins characterization by infiltrometer tests (increase of infiltration by renewal of the sand layer, compacted layer under the sand characterized by a strong anisotropy of its hydraulic conductivities) and characterization of the flow direction by heat tracing at scale of an infiltration pond. An experimental system, created on both sides of one of the basins allows tracking of the evolution of piezometric and water temperature during filling cycles.The creation of the 31 piezometers of this experimental system enabled better characterization of the lithology of the ground, to validate the conservation of the unsaturated zone under the basin, to confirm the existence of flows under the Vieux-Rhône River, and to highlight the three-dimensional flows.A digital model has been created to reproduce water and heat transfer on the entire well field. 
This tool is used to identify and quantify the sources of water of the water catchment area, to highlight the partial protection of the pumping wells by the hydraulic domes, and to show the complexity of the groundwater-river relationship, in particular their dependence on the water level. Already operational on long periods (over 15 days), the proposed digital model is useful for scenarios of climate change or changes in operational conditions. For periods shorter than two weeks, the model requires an improvement in the knowledge of groundwater-river interactions and heat transfer (taking into account the local thermal non-equilibrium).Key words: Hydrogeology, artificial recharge, infiltrometer tests, heat tracing, 3D hydro-thermal modelling.
528

[en] APPLICATION OF COMPUTATIONALLY-INTENSIVE PROPAGATION MODELS TO THE PREDICTION OF PATH LOSSES DUE TO MOUNTAINOUS TERRAIN IN THE VHF FREQUENCY BAND / [pt] APLICAÇÃO DE MODELOS COMPUTACIONALMENTE INTENSIVOS NA PREVISÃO DAS PERDAS DE PROPAGAÇÃO DEVIDAS A TERRENOS IRREGULARES NA FAIXA DE VHF

MARCO AURELIO NUNES DA SILVA 21 March 2006 (has links)
[pt] Os efeitos da difração na propagação de ondas de rádio sobre terreno irregular em VHF e outras bandas a ser usado por futuras aplicações da TV digital são normalmente estimados usando um dos muitos modelos clássicos. Nesta dissertação é feita uma comparação dos erros cometidos na previsão do sinal recebido por três modelos de propagação computacionalmente intensivos. Os resultados da presente comparação indicarão se os esforços computacionais envolvidos na aplicação destes métodos são capazes de diminuir o valor médio e desvio padrão das diferenças entre as medidas e predições determinadas pelos métodos clássicos. / [en] Diffraction effects on the propagation of radio waves over irregular terrain in the VHF band and other bands to be used by future digital TV applications are normally estimated using one of many classical models. In this dissertation, a comparison is made of the errors committed in predicting the received signal by three computationally intensive propagation models. The results of this comparison will indicate whether the computational effort involved in applying these methods is capable of decreasing the mean value and the standard deviation of the differences between measurements and the predictions determined by the classical methods.
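Whatever propagation model is being evaluated, the comparison itself reduces to statistics of the prediction error over the measurement points: error = measured loss - predicted loss, summarised by its mean and standard deviation. The sketch below uses invented measurements and the standard single knife-edge approximation from ITU-R P.526 purely as a stand-in predictor; it is not one of the computationally intensive models compared in the dissertation.

    import numpy as np

    def knife_edge_loss_db(h, d1, d2, freq_hz):
        """Single knife-edge diffraction loss in dB (ITU-R P.526 approximation),
        used here only as an example predictor. h: edge height above the line of
        sight [m]; d1, d2: distances from the terminals to the edge [m]."""
        lam = 3e8 / freq_hz
        v = h * np.sqrt(2.0 / lam * (1.0 / d1 + 1.0 / d2))
        return np.where(v > -0.78,
                        6.9 + 20 * np.log10(np.sqrt((v - 0.1) ** 2 + 1) + v - 0.1),
                        0.0)

    # Invented measurement campaign: obstacle geometry and measured excess losses [dB].
    h = np.array([20.0, 35.0, 5.0, 50.0, 12.0])
    d1 = np.array([8e3, 12e3, 5e3, 20e3, 7e3])
    d2 = np.array([6e3, 9e3, 10e3, 15e3, 4e3])
    measured = np.array([11.0, 13.5, 8.0, 15.0, 7.5])

    predicted = knife_edge_loss_db(h, d1, d2, 200e6)      # 200 MHz, VHF band III
    errors = measured - predicted
    print("mean error: %.1f dB, standard deviation: %.1f dB"
          % (errors.mean(), errors.std(ddof=1)))

A computationally intensive model is worthwhile only if it shrinks these two statistics relative to what a simple predictor like the one above already achieves, which is exactly the question the dissertation examines.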
529

Análise do rumo profissional do trabalhador em relação a sua área de formação / An analysis of workers' career direction in relation to their field of study

Tavares, Celso Icaro Grijó Costa January 2006 (has links)
This study aims to examine the behavior of Brazilian workers residing in the state of Rio de Janeiro throughout their working lives, with regard to their ability to remain, and specialize, in the professional area originally chosen for their undergraduate degree. As the proposed theme goes beyond the available bibliography, the author needed to analyze the Labor Ministry's statistical databases, which provide information on Brazilian formal employment and unemployment. The study started with the selection of three groups of professional occupations, covering the period from 1994 to the end of 2002. All professionals who were employed in 1994 had their trajectories traced up to 2002, and entry into the formal labor market in 1994 was also analyzed; thus, the study encompasses not only existing employment but also the new job posts created in 1994. For the trajectory tracing, a group of 1994 graduates from several universities in the state of Rio de Janeiro was selected. The study clearly shows the impact that the availability of jobs in the graduate's own field has on workers' decisions to remain in or change their original career. Such information and conclusions can be a valuable instrument to help students and professionals plan and decide their future careers, to inform public policies on educational investment, and to help Brazilian educational institutions adjust their course offerings to the capacity of the labor market. / Este estudo tem o objetivo de verificar até que ponto os trabalhadores brasileiros, residentes no estado do Rio de Janeiro, têm a capacidade de, ao longo de sua vida laboral, manter-se e aprofundar-se na área de formação originalmente escolhida em sua graduação. Para a consecução de tal objetivo, além da bibliografia referente ao assunto foram analisados dados do mercado de trabalho formal brasileiro. A partir da seleção de três grupos de ocupações profissionais, foi analisado o desempenho dos mesmos no período de 1994 até final de 2002 dentro do estado do Rio de Janeiro. Todos os profissionais que estavam empregados em 1994 foram acompanhados ou tiveram sua trajetória acompanhada até 2002, como também foi analisado o comportamento do ingresso no mercado de trabalho no ano de 1994. Foi feito um paralelo com um grupo de formandos de 1994 escolhidos em algumas instituições de ensino superior do estado do Rio de Janeiro visando a traçar o perfil da trajetória profissional dos indivíduos deste grupo em estudo. O resultado apresentou uma análise do quanto o mercado influencia a manutenção de carreira aderente à sua área de formação original. Tais informações podem ser instrumentos preciosos para decisões de futuros profissionais, para políticas públicas bem como para as instituições de ensino do Brasil.
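At its core, the longitudinal analysis is a record linkage: each worker's yearly formal-employment records are grouped by an identifier and compared with the occupational group corresponding to the 1994 degree. A toy version of that linkage, with invented records (the actual study uses the Labor Ministry's formal-employment databases), could look like:

    from collections import defaultdict

    # Invented longitudinal records: (worker_id, year, occupation_group).
    records = [
        ("w1", 1994, "engineering"), ("w1", 1998, "engineering"), ("w1", 2002, "engineering"),
        ("w2", 1994, "law"),         ("w2", 1998, "management"),  ("w2", 2002, "management"),
        ("w3", 1994, "medicine"),    ("w3", 2002, "medicine"),
    ]
    degree_field = {"w1": "engineering", "w2": "law", "w3": "medicine"}  # 1994 graduation

    # Group each worker's observed occupations and classify the trajectory.
    history = defaultdict(list)
    for worker, year, occupation in sorted(records):
        history[worker].append(occupation)

    stayed = sum(all(occ == degree_field[w] for occ in occs) for w, occs in history.items())
    print(f"{stayed}/{len(history)} workers remained in their field of graduation through 2002")

The real analysis applies the same logic to millions of records for the three selected occupation groups, which is what allows the study to relate career persistence to the availability of jobs in each field.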
530

Ray Tracing and Spectral Modelling of Excited Hydroxyl Radiation from Cryogenic Flames in Rocket Combustion Chambers

Perovšek, Jaka January 2018 (has links)
A visualisation procedure was developed which predicts excited hydroxyl (OH*) radiation from the Computational Fluid Dynamics (CFD) solutions of cryogenic hydrogen-oxygen rocket flames. A model of backward ray tracing through inhomogeneous media with a continuously changing refractive index was implemented. It obtains the optical paths of light rays that originate in the rocket chamber, pass through the window and enter a simulated camera. Through the use of spectral modelling, the emission and absorption spectra eλ and κλ are simulated on the ray path from information about temperature, pressure and concentration of constituent species at relevant points. By solving a radiative transfer equation with line-by-line integration of the emission and absorption spectra along the ray, a spectral radiance is calculated, multiplied by the spectral filter transmittance and then integrated to obtain the total radiance. The values of total radiance at the window edge are visualised as a simulated 2D image. Such images are comparable with the OH* measurement images. Modelling the refraction effects results in absolute differences of up to 20 % of the total radiance range compared with line-of-sight integration. The implementation of accurate self-absorption corrects significant over-prediction, which occurs if the flame is assumed to be optically thin. Modelling of refraction results in images with recognisable areas where the effect of a liquid oxygen (LOx) jet core can be observed, as the light is significantly refracted. The algorithm is parallelised and thus ready for use on large computational clusters. It uses partial pre-computation of spectra to reduce computational effort.
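The spectral step described here amounts to solving the radiative transfer equation dI_λ/ds = e_λ - κ_λ I_λ along each ray, line by line, and then weighting by the filter transmittance. A minimal numerical version, with random emission and absorption profiles standing in for the real OH* spectral model and an idealised filter curve, might look like:

    import numpy as np

    def ray_radiance(e_lam, k_lam, ds, filter_t, wavelengths):
        """March dI/ds = e - k*I along one ray, line by line, then apply the
        filter transmittance and integrate over wavelength.
        e_lam, k_lam: arrays of shape (n_cells, n_wavelengths), one row per ray segment."""
        I = np.zeros(e_lam.shape[1])             # spectral radiance entering the ray path
        for e, k in zip(e_lam, k_lam):           # march cell by cell towards the camera
            att = np.exp(-k * ds)                # transmission of one homogeneous cell
            I = I * att + (e / k) * (1.0 - att)  # exact solution over the cell (k > 0 assumed)
        dw = wavelengths[1] - wavelengths[0]     # uniform spectral grid assumed
        return np.sum(I * filter_t) * dw         # filtered, wavelength-integrated radiance

    # Invented profiles: 200 cells of 5 mm along the ray, 50 spectral points near 310 nm.
    rng = np.random.default_rng(0)
    wl = np.linspace(306e-9, 312e-9, 50)
    emission = rng.random((200, 50)) * 1e3        # e_lambda   [W m^-3 sr^-1 m^-1]
    absorption = rng.random((200, 50)) * 5 + 0.1  # kappa_lambda [1/m], kept strictly positive
    filt = np.exp(-((wl - 309e-9) / 2e-9) ** 2)   # idealised band-pass filter around OH* lines

    L = ray_radiance(emission, absorption, 0.005, filt, wl)
    print("total radiance along this ray: %.3e W m^-2 sr^-1" % L)

Because absorption is applied cell by cell, the self-absorption effect mentioned above falls out naturally; the optically thin assumption corresponds to dropping the attenuation factor and simply summing the emission, which is what leads to the over-prediction noted in the abstract.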
