91 |
A atividade de trabalho do dentista "traduzida" pela linguagem dos advogados, em processos civis / The work activity of the dentist "translated" by the lawyers language, in civil suits
Otranto, Maria Inês Sarno, 17 December 2009 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico / Since the Consumer's Defense Code (CDC - Law n. 8.078) was issued on September 11th, 1990, the number of legal proceedings against health professionals has greatly increased. This problem, and the paucity of research on the topic in the Dentistry literature, justifies the interest in the subject. The current study reflects on the civil liability of the dentist, from the perspective of the discourse analyst, by focusing not on the activity in its actual setting (the dental office), but on documents from civil lawsuits brought against dentists. Based on the theoretical-methodological principles of MAINGUENEAU's enunciative-discursive Discourse Analysis (primacy of the interdiscourse; discursive field and space; global semantics of the initial petition and refutation discursive positionings), this study analyses how such liability is construed in the argumentation of the Plaintiff's and the Defendant's lawyers. The research methodology included interviews (PUCSP Research Ethics Committee, Protocol n. 70/2006) with the legal consultant of the São Paulo Dental Surgeons Association, in order to: i) obtain information on civil procedural law; ii) select lawsuits of a similar legal nature from her own files, since the legal consultant had acted as the dentists' lawyer in all cases; and iii) collect the selected procedural documents. Thus, by crossing the frontier of language studies stricto sensu and reflecting on the dentist's legal duty to inform the patient/consumer in a clear, sufficient and adequate form, under the terms of the caput of Article 14 of the CDC, a bridge is built between the Human Sciences (Discourse Analysis; Law), the Health Sciences (Dentistry) and the discipline that studies the human activity of work (Ergology). The corpus of analysis consists of the initial petition and the corresponding refutation of four indemnity suits for physical and moral losses and damages, filed in different Civil Courts of São Paulo between 2001 and 2005. The analysis of these documents took into account: (a) the constitutive polemic between the discourses of these two enunciative positionings; and (b) the threefold function of language in the lawyers' reported discourse: the language (of the dentist) at work; the language (of the lawyers) as work; and the language (of both lawyers) about the work (of the dentist). The results of the analysis suggest that, in recovering the facts that occurred in the dental office, the lawyers' argumentation discursively "translates" the complexity of the dentist's work activity; and they demonstrate that, when a disagreement between dentist and patient occurs, the filing of a civil liability suit is always a possibility, owing to the non-transparency of language. / Depois da promulgação do Código de Defesa do Consumidor (CDC - Lei nº 8.078, de 11/set./1990), aumentou muito o número de ações legais contra profissionais da área da saúde. Tal problema e a escassez de pesquisas relacionadas ao tema, na área da Odontologia, justificam o interesse pelo assunto. Este trabalho se propõe a refletir, sob a perspectiva do analista do discurso, sobre a questão jurídica da responsabilidade civil do odontologista, focalizando não a atividade em situação (no consultório odontológico), mas documentos constituídos pelos discursos dos advogados do paciente e do dentista, em processos civis.
Ancorado nos princípios teórico-metodológicos da Análise do Discurso enunciativo-discursiva de MAINGUENEAU (primazia do interdiscurso; campo e espaço discursivos; semântica global dos posicionamentos discursivos petição inicial e contestação), este estudo analisa como tal responsabilidade é interpretada por meio da argumentação dos advogados de Autor e Réu. A metodologia de pesquisa incluiu entrevistas (Protocolo do Comitê de Ética em Pesquisa da PUCSP nº 70/2006) com a assessora jurídica da Associação Paulista de Cirurgiões-Dentistas, para: i) a obtenção de informações relativas ao direito processual civil; ii) a escolha de processos de natureza jurídica semelhante, retirados do próprio arquivo da assessora - a advogada dos dentistas-Réus, em todos os casos; e iii) a coleta das peças processuais selecionadas. Assim, ao cruzar a fronteira dos estudos da linguagem stricto sensu e refletir sobre o dever jurídico de o dentista informar o paciente/consumidor de forma clara, suficiente e adequada, nos termos do caput do Artigo 14º do CDC, se estabelece uma ponte entre Ciências Humanas (Análise do Discurso; Direito), Ciências da Saúde (Odontologia) e disciplina do trabalho (Ergologia). O objeto de análise da pesquisa são a petição inicial e a correspondente contestação de quatro ações de indenização por perdas e danos, físicos e morais, instauradas em diferentes Varas Cíveis de São Paulo, entre os anos de 2001 e 2005. Para a análise desses documentos, foram levadas em conta: (a) a polemicidade constitutiva dos discursos desses dois posicionamentos enunciativos; e (b) a tripla função da linguagem, no discurso relatado dos advogados: a linguagem (do dentista) no trabalho; a linguagem (dos advogados) como trabalho, e a linguagem (de ambos os advogados) sobre o trabalho (do dentista). Os resultados de tal análise sugerem que, ao recuperar os fatos ocorridos no consultório odontológico, a argumentação dos advogados traduz , discursivamente, a complexidade da atividade de trabalho do dentista; e demonstram que, ao ocorrer um desentendimento entre dentista e paciente, a eventualidade da instauração de um processo de responsabilidade civil é sempre passível de acontecer, em função da não transparência da linguagem
|
92 |
UK equity market microstructure in the age of machine
Sun, Yuxin, January 2018 (has links)
Financial markets perform two major functions. The first is the provision of liquidity in order to facilitate direct investment, hedging and diversification; the second is to ensure the efficient price discovery required in order to direct resources to where they can be best utilised within an economy. How well financial markets perform these functions is critical to the financial welfare of every individual in modern economies. As an example, retirement savings across the world are mostly invested in capital markets. Hence, the functioning of financial markets is linked to the standard of living of individuals. Technological advancements and new market regulations have in recent times significantly impacted how financial markets function, with no period in history having witnessed a more rapid pace of change than the last decade. Financial markets have become very complex, with most of the order execution now done by computer algorithms. New high-tech trading venues, such as dark pools, also now play outsized roles in financial markets. Many of the impacts of these developments are poorly understood. In the EU particularly, the introduction of the Markets in Financial Instruments Directive (MiFID) and advancements in technology have combined to unleash a dramatic transformation of European capital markets. In order to better understand the role of high-tech trading venues in the modern financial markets' trading environment generally, and in the UK in particular, I conduct three studies investigating questions linked to the three major developments in financial markets over the past decade: algorithmic/high-frequency trading, market fragmentation and dark trading. In the first study, I examine the changing relationship between the price impact of block trades and informed trading, by considering this phenomenon within a high-frequency trading environment on intraday and inter-day bases. I find that the price impact of block trades is stronger during the first hour of trading; this is consistent with the hypothesis that information accumulates overnight during non-trading hours. Furthermore, private information is gradually incorporated into prices despite heightened trading frequency. Evidence suggests that informed traders exploit superior information across trading days, and stocks with lower transparency exhibit stronger information diffusion effects when traded in blocks; thus, informed block trading facilitates price discovery. The second study exploits the regulatory differences between the US and the EU to examine the impact of market fragmentation on dimensions of market quality. Unlike the US's Regulation National Market System, the EU's MiFID does not impose a formal exchange trading linkage or guarantee a best execution price. This has raised concerns about consolidated market quality in increasingly fragmented European markets. The second study therefore investigates the impact of visible trading fragmentation on the quality of the London equity market and finds a quadratic relationship between fragmentation and adverse selection costs. At low levels of fragmentation, order flow competition reduces adverse selection costs, improves market transparency and enhances market efficiency by reducing arbitrage opportunities. However, high levels of fragmentation increase adverse selection costs. The final study compares the contributions of lit and dark venues' liquidity to aggregate market liquidity.
I find that compared with lit venues, dark venues proportionally contribute more liquidity to the aggregate market. This is because dark pools facilitate trades that otherwise might not easily have occurred in lit venues when the spread widens and the limit order queue builds up. I also find that informed and algorithmic trading hinder liquidity creation in lit and dark venues, while evidence also suggests that stocks exhibiting low levels of informed trading across the aggregate market drive dark venues' liquidity contribution.
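To illustrate the U-shaped relationship reported above (adverse selection costs falling at low fragmentation and rising again at high fragmentation), here is a minimal sketch of a quadratic least-squares fit. The data, variable names and fragmentation proxy are assumptions made for the example and do not reproduce the study's actual econometric specification.

```python
# Illustrative quadratic fit of adverse selection costs against market
# fragmentation. Data and variable names are synthetic; the thesis' actual
# specification (variables, controls, fixed effects) is not reproduced here.
import numpy as np

rng = np.random.default_rng(42)
frag = rng.uniform(0.0, 0.8, 500)        # e.g. 1 - Herfindahl index of venue market shares
adv_sel = 2.0 - 3.0 * frag + 2.5 * frag**2 + rng.normal(0, 0.2, frag.size)

# Fit adv_sel = b0 + b1*frag + b2*frag^2 by ordinary least squares
X = np.column_stack([np.ones_like(frag), frag, frag**2])
beta, *_ = np.linalg.lstsq(X, adv_sel, rcond=None)
b0, b1, b2 = beta

# A negative b1 with a positive b2 reproduces the U-shape described above,
# with adverse selection costs minimised at fragmentation level -b1 / (2*b2).
print(beta, -b1 / (2 * b2))
```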
|
93 |
Le rôle des États-Unis dans le maintien des pratiques d’opacité financière
Jasmin-Benoit, Jonathan, 11 1900 (has links)
Notre recherche vise à vérifier s'il existe un lien entre l'intérêt des Américains pour les investissements étrangers et le maintien des structures opaques de la finance internationale. Les pratiques d'opacité financière (utilisation abusive du secret bancaire, faibles mesures d'identification du client, faible règlementation bancaire, absence d’échange d’information fiscale, absence d’obligations d’enregistrements de compagnies et de fiducies, possibilité d’établir la propriété d’une société avec des prête-noms, utilisation de bons au porteur, manque d’encadrement des fiducies, etc.) semblent accommoder les États qui les dénoncent. Utilisant les théories des jeux à deux niveaux de Putnam, de la règlementation et de l’équilibre de Nash, nous faisons le lien entre le niveau national et international. Notre recherche consiste en deux études de cas. La première traite d’un projet de règlement de l’Internal Revenue Service visant à assurer la déclaration de revenus d’intérêt sur les dépôts bancaires des non-résidents. La seconde traite d’une série de projets de loi déposés au Sénat et à la Chambre des représentants des États-Unis. Ils cherchent à assurer la transparence du processus d’enregistrement de compagnies de manière à faciliter l’accès des agences d’application de la loi à l’information sur les bénéficiaires effectifs des compagnies formées en sol américain. Notre recherche ne permet pas de confirmer notre hypothèse avec certitude. Cependant, nos données tendent à indiquer que les groupes d’intellectuels et les groupes de pression financiers incitent le gouvernement des États-Unis à freiner la mise en application de certaines mesures prévues par le régime antiblanchiment (particulièrement l’identification du client et le partage d’information avec des pays tiers) pour attirer l’investissement étranger. / Our study aims to verify whether the United States' interest in attracting foreign investment is linked to the persistence of international financial opacity. Financial opacity practices (abusive use of bank secrecy, weak "Know Your Client" procedures, an under-regulated banking sector, no tax information exchange, absence of corporate and fiduciary registration obligations, the possibility of setting up corporations with nominees, the possibility of issuing bearer shares, lack of regulation of trusts, etc.) appear to accommodate even the states openly opposed to them. Using Putnam's theory of two-level games, regulation theory and Nash's theory of equilibrium, we establish a link between the national and international levels. Our analysis includes two case studies. The first deals with a proposed regulation of the Internal Revenue Service intended to ensure the reporting of deposit interest paid to non-resident aliens. The second is a study of a series of bills presented to the United States Senate and House of Representatives; those bills would provide for incorporation transparency to assist law enforcement in its efforts to identify the beneficial owners of companies formed in the United States. Our hypothesis is not fully confirmed. However, our data tend to indicate that think tanks and financial lobby groups push the United States into stalling the enactment of some anti-money-laundering measures (particularly client identification and information sharing with third countries) in order to attract foreign investment.
|
94 |
Corneal injury to ex-vivo eyes exposed to a 3.8 micron laser
Fyffe, James G., January 2005 (has links) (PDF)
Thesis (M.S.)--Uniformed Services University of the Health Sciences, 2005. / Typescript (photocopy).
|
95 |
Conceptual expression and depictive opacity: Changing attitudes towards architectural drawings between 1960 and 1990
Kim, Hoyoung, 07 January 2016 (has links)
This dissertation is a study of a remarkable change that came about in the kind of drawings that architects used to present their work between the decades of 1960 and 1990. Drawings in this period, visually rich and compositionally complex, seemed to mark an entirely new sensibility towards their function; their goal seemed to be not so much to clearly depict the forms of a proposed building, but to instead focus on its conceptual aspects. In fact, in several cases, drawings seemed to be treated as graphic projects in their own right, over and above the work they presented. This trend was accompanied by two other developments. Around the same time, there was a sudden increase in theoretical interest in drawings within the architectural community leading to a flurry of published articles, essays and books on the topic. And all this happened to coincide with the time that the Postmodern movement came to dominate architecture. The study aims to understand the relationship between these trends, and to develop a better understanding of the reasons for these changes to have occurred. It does so by, first, developing a theoretical framework to help understand the nature and impact of the changes in drawings. Next, it presents a detailed historical account of these changes. This is followed by an in-depth study of a single architect, James Stirling, to show how the new types of drawings were not simply a means to present ideas, but played a formative role in design as well. Apart from developing a contextualized historical account of an important development in contemporary architectural history, the study also finds that the change in the drawing practice and the theoretical interests were not simply an outcome of Postmodern cultural theory of the period, but were instigated by concerns that arose from within architecture itself. It thus offers a useful case-study on how changes in disciplinary practice are brought about.
|
96 |
The prompt emission of Gamma-Ray Bursts : analysis and interpretation of Fermi observations / L'émission prompte des sursauts gamma : analyse et interprétation des observations de Fermi
Yassine, Manal, 11 September 2017 (has links)
Les sursauts gamma (GRBs pour "Gamma-Ray Bursts" en anglais) sont de brèves bouffées très énergétiques de rayonnement de haute énergie qui sont émises sur de courtes échelles de temps (fraction de seconde à plusieurs minutes). L'émission intense des sursauts gamma à haute énergie est supposée provenir d'un trou noir de masse stellaire nouvellement formé, accompagné d'un vent collimaté (i.e. un jet) se propageant à vitesse relativiste. L'émission est observée suivant deux phases successives, la phase prompte très erratique, et la phase de rémanence, moins lumineuse. Les deux instruments embarqués sur le satellite Fermi, le "Gamma-ray Burst Monitor" (GBM) et le "Large Area Telescope" (LAT), permettent d'étudier l'émission prompte des sursauts gamma sur une grande plage d'énergie (de ~10 keV à ~100 GeV). L'objectif principal de ma thèse est l'analyse et l'interprétation des propriétés spectrales et temporelles de l'émission prompte des GRBs observés par Fermi, en particulier avec les nouvelles données du LAT (Pass 8) qui ont été rendues publiques en juin 2015.La première partie de mon travail est une analyse spectrale résolue en temps de la phase prompte du sursaut GRB 090926A avec les données du GBM et du LAT. Mes résultats confirment avec un meilleur niveau de confiance la présence d'une cassure spectrale à ~400 MeV, qui est observée en coincidence avec un pic d'émission très court. Ils révèlent que cette atténuation spectrale est présente durant toute l'émission prompte du sursaut, et que l'énergie de cassure augmente jusqu'au GeV. L'interprétation de la cassure spectrale en termes d'absorption gamma ou de courbure naturelle du spectre d'émission Compton inverse (CI) dans le régime Klein-Nishina fournit des contraintes fortes sur le facteur de Lorentz du jet. Mes résultats conduisent en outre à des rayons d'émission R ∼10^14 cm qui sont compatibles avec une origine interne de l'émission du keV au GeV au-dessus de la photosphère du jet.La seconde partie de mon travail est une exploration du modèle de chocs internes développé par des collaborateurs à l'Institut d'Astrophysique de Paris (IAP). Ce modèle simule la dynamique du jet et les processus d'émission (synchrotron et CI) d'une population d'électrons accélérés aux chocs. J'ai simulé la réponse instrumentale de Fermi à un sursaut synthétique fourni par ce code numérique, et j'ai construit une fonction paramétrique qui peut être utilisée pour ajuster le modèle aux spectres de sursauts du keV au MeV. J'ai appliqué cette fonction avec succès à un échantillon de 64 sursauts brillants détectés par le GBM. J'ai aussi confronté le modèle de l'IAP au spectre d'émission prompte de GRB 090926A. Mes résultats montrent un bon accord, et j'ai identifié quelques pistes pour les améliorer. Les spectres synthétiques sont plus larges que tous les spectres dans l'échantillon du GBM. En conséquence, je discute brièvement quelques pistes de développements théoriques qui pourraient améliorer l'accord du modèle avec les observations, ainsi que des avancées observationnelles attendues dans le futur. / Gamma-Ray Bursts (GRBs) are very energetic and brief flashes of high-energy radiations which are emitted in a short time scale (fraction of a second to several minutes). The GRB bright emission is thought to be powered by a newly formed stellar-mass black hole that is accompanied by a collimated outflow (i.e. a jet) moving at a relativistic speed. 
The emission is observed as two successive phases: the highly variable "prompt" phase and the later, less luminous "afterglow" phase. The two instruments on board the Fermi space telescope, the Gamma-ray Burst Monitor (GBM) and the Large Area Telescope (LAT), allow the study of GRB prompt emission over a broad energy range (from ~10 keV to ~100 GeV). In June 2015, a new set of LAT data (Pass 8) was publicly released, generated using improved algorithms for the reconstruction and classification of gamma-ray events. The main goal of my thesis is the analysis and interpretation of the spectral and temporal properties of the prompt emission phase of the GRBs observed by Fermi, especially using LAT Pass 8 data. In the first part of my work, I performed a detailed time-resolved spectral analysis of the prompt phase of GRB 090926A with GBM and LAT data. My results confirm, with greater significance, the spectral break at ∼400 MeV that is observed during a fast variability pulse, and they also reveal the presence of a spectral attenuation throughout the GRB prompt emission, as well as an increase of the break energy up to the GeV domain. I interpreted the spectral break in terms of gamma-ray absorption or as a natural curvature of the inverse Compton (IC) emission in the Klein-Nishina regime. Strong constraints on the jet Lorentz factor were obtained in both scenarios. My results also lead to emission radii R ∼10^14 cm, which are consistent with an internal origin of both the keV-MeV and GeV prompt emissions above the jet photosphere. The second part of my work is an exploration of the internal shock model developed by collaborators at the Institut d'Astrophysique de Paris (IAP). This model simulates the GRB jet dynamics and the radiation (synchrotron and IC processes) from a population of shock-accelerated electrons. I simulated the response of the Fermi instruments to the synthetic GRB spectra provided by this numerical code. From these simulations, I built a new parametric function that can be used to fit the keV-MeV spectra of GRBs with the model. I applied this function successfully to a sample of 64 bright GBM GRBs. I also confronted the IAP model with the prompt emission spectrum of GRB 090926A; I obtained a relatively good agreement and identified a couple of solutions that may improve it. The synthetic spectra are wider than any GRB spectrum in the GBM sample. I present some theoretical developments that could improve the data-model agreement in the future, and I discuss possible advances from future GRB missions as well.
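For readers unfamiliar with empirical GRB spectral fitting, the sketch below evaluates the classic Band et al. (1993) function, the standard parametric form for keV-MeV GRB spectra. It is shown only as background: it is not the new parametric function built from the internal shock model in this work, and the parameter values are arbitrary.

```python
# Classic Band (1993) GRB spectral form, for illustration only. This is NOT
# the thesis' new model-based parametric function; parameter values are
# arbitrary choices made for this example.
import numpy as np

def band(E, A, alpha, beta, E_peak):
    """Photon flux (photons / cm^2 / s / keV) at energy E in keV."""
    E = np.asarray(E, dtype=float)
    E_break = (alpha - beta) * E_peak / (2.0 + alpha)   # low/high-energy junction
    low = A * (E / 100.0) ** alpha * np.exp(-E * (2.0 + alpha) / E_peak)
    high = (A * ((alpha - beta) * E_peak / (100.0 * (2.0 + alpha))) ** (alpha - beta)
            * np.exp(beta - alpha) * (E / 100.0) ** beta)
    return np.where(E < E_break, low, high)

energies = np.logspace(1, 4, 50)   # 10 keV to 10 MeV
print(band(energies, A=0.01, alpha=-1.0, beta=-2.3, E_peak=300.0))
```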
|
97 |
Lights and shadows : multi-wavelength analysis of young stellar objects and their protoplanetary discs
Rigon, Laura, January 2016 (has links)
Stars form from the collapse of molecular clouds and evolve in an environment rich in gas and dust before becoming Main Sequence stars. During this phase, characterised by the presence of a protoplanetary disc, stars manifest changes in structure and luminosity. This thesis performs a multi-wavelength analysis, from the optical to the mm range, on a sample of young stellar objects (YSOs), mainly Classical T Tauri stars (CTTS). The purpose is to study optical and infrared variability and its relation to the protoplanetary disc. Longer wavelengths, in the mm range, are used instead to investigate the evolution of the disc in terms of dust growth. In the optical, an F-test on a sample of 39 CTTS reveals that 67% of the stars are variable. The variability, quantified through the pooled sigma, is visible both in magnitude amplitudes and in changes over time. Time-series analysis applied to the more variable stars finds quasi-periodicity, with periods longer than two weeks, interpreted either as eclipsing material in the disc occurring on a non-regular basis, or as a consequence of star-disc interaction via magnetic field lines. The variability of YSOs is also confirmed in the infrared, although with lower amplitude. No strong correlations are found between optical and infrared variability, which implies a different cause or a time shift between the two events. Using a toy model to explore their origin, I find that infrared variations are likely to stem from emission in the inner disc. The evolution of discs in terms of dust growth is confirmed in most discs by the analysis of the slope of the spectral energy distribution (SED), after correcting for wind emission and optical depth effects. However, the comparison with a radiative transfer model highlights that a number of disc parameters, in particular disc mass and temperature, dust size distribution and composition, can also affect the slope of the SED.
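As an illustration of the kind of variability test mentioned above, here is a minimal sketch of an F-test that compares the observed scatter of a light curve against the scatter expected from the photometric errors. The synthetic data, significance level and function names are assumptions made for this example and do not reproduce the thesis pipeline; roughly speaking, a pooled-sigma statistic builds on the same idea by combining scatter estimates over different time baselines.

```python
# Minimal sketch of an F-test for photometric variability: a light curve is
# flagged as variable when its observed scatter significantly exceeds the
# scatter expected from the photometric errors alone. Data, threshold and
# function names are assumptions for this example, not the thesis pipeline.
import numpy as np
from scipy import stats

def is_variable(mag, mag_err, alpha=0.05):
    mag = np.asarray(mag, dtype=float)
    n = mag.size
    observed_var = np.var(mag, ddof=1)                # scatter actually seen
    expected_var = np.mean(np.asarray(mag_err) ** 2)  # scatter from noise alone
    f_stat = observed_var / expected_var
    p_value = stats.f.sf(f_stat, n - 1, n - 1)        # upper-tail F probability
    return f_stat, p_value, p_value < alpha

# Synthetic example: a quasi-periodic signal (period ~18 d) plus 0.02 mag noise
rng = np.random.default_rng(0)
t = np.linspace(0, 60, 120)                           # days
mag = 14.0 + 0.15 * np.sin(2 * np.pi * t / 18.0) + rng.normal(0, 0.02, t.size)
err = np.full(t.size, 0.02)
print(is_variable(mag, err))
```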
|
98 |
La transparence fiscale, modèle d'imposition du bénéfice des sociétés / No English title available
Pruvost, Édouard, 13 July 2016 (has links)
Le régime de transparence fiscale est traditionnellement associé au régime fiscal des sociétés de personnes. Cependant, une certaine lecture de la législation fiscale semble démontrer que ce régime pourrait connaître un champ d’application beaucoup plus large. En effet, l’évolution de la législation semble rompre avec le traditionnel critère d’attribution de ce régime d’imposition fondé sur la responsabilité illimitée de l’associé pour y substituer le critère de l’investissement personnel de ce dernier. En outre, il apparaît que le législateur ait depuis toujours été animé par le souci d’éviter une double imposition du bénéfice des sociétés de capitaux. De ce fait, il semblerait que certains critères de la transparence fiscale puissent également être remplis par les sociétés réputées « fiscalement opaques ». A cet égard, cette thèse a pour ambition de démontrer que le régime de transparence fiscale constitue le modèle d’imposition du bénéfice des sociétés. Enfin, cette étude propose une mise en perspective des avantages mais également des limites que présente le régime de transparence fiscale afin de proposer un régime prospectif plus souple et plus cohérent constituant le socle d’une fiscalité novatrice. / This thesis proposes to revisit the issue of tax transparency as the model for taxing corporate income, based on the criteria of this regime. In this respect, the research aims to highlight both the criteria for granting this regime and the criteria for identifying it. Historically, fiscal transparency has been analyzed as the tax treatment reserved for partnerships. However, changes in tax law appear to have challenged the traditional criteria for granting this regime, which is now extended to certain companies other than partnerships. Moreover, some of the identification criteria of fiscal transparency appear to be reflected in the corporate income tax system. This research therefore aims to demonstrate that fiscal transparency is the model for taxing corporate earnings, and to outline a prospective, more flexible and more coherent tax regime based on fiscal transparency.
|
99 |
Conception d'un environnement de simulation pour le calcul des profils d'élargissement Stark des raies d'hélium neutre
Tremblay, Patrick, 06 1900 (has links)
Ce mémoire porte sur l'étude des étoiles naines blanches de type DB, dont le spectre est dominé par les raies d’hélium neutre. Des travaux récents ont révélé que les paramètres physiques mesurés à l'aide de la méthode dite spectroscopique —notamment la température effective et la masse de l'étoile — posaient problème pour ce type de naine blanche. Nous avons dans cette étude réexaminé un des ingrédients essentiels de cette méthode, soit le calcul de profils d'élargissement Stark des raies d'hélium neutre. Ce problème fut abordé il y a 25 ans par notre groupe de recherche en utilisant la théorie standard de l'élargissement Stark. Les profils semi-analytiques calculés dans le cadre de cette théorie considèrent les électrons comme étant dynamiques et les ions statiques. Avec l'amélioration de la puissance numérique des ordinateurs, d’autres groupes de recherche ont grandement amélioré le traitement de l'élargissement Stark en produisant des simulations numériques décrivant en détail la dynamique et les interactions des perturbateurs (ions et électrons) près de l'émetteur (l'atome d'hélium dans notre cas). Ils n'ont cependant généré de tables de profils Stark, applicables au calcul de spectres synthétiques d'étoiles naines blanches, que pour deux raies de l'atome d'hélium. Dans ce mémoire, nous décrivons la conception de notre propre environnement de simulation incluant certains aspects importants considérés dans les travaux précédents (unification du traitement des ions et électrons, correction pour la dynamique des ions, transition de la contribution des électrons à l'élargissement, du coeur aux ailes du profil, intégration numérique de l'opérateur quantique d'évolution temporelle de l'hélium perturbé par un champ électrique fluctuant, correction de Debye pour la corrélation du mouvement des perturbateurs chargés, variation de densité locale et réinjection de particules) afin de mieux représenter l'environnement dynamique de l'atome d'hélium. Des étapes intermédiaires, comme la construction de l'espace de simulation et le modèle quasi-statique, nous ont permis de valider cet espace de simulation ainsi que le respect de la statistique du système. Une fois ces étapes de validation franchies, nous avons produit des grilles de profils Stark pour les deux raies les plus importantes de l'atome d'hélium dans le domaine du visible, soit He ɪ λ4471 et He ɪ λ4922, pour des températures entre 10,000 K et 40,000 K et des densités électroniques entre 1 × 10¹⁴ cm⁻³ et 6 × 10¹⁷ cm⁻³. Une comparaison avec d'autres résultats publiés démontre que nos calculs rivalisent avec les meilleurs profils obtenus dans le domaine des simulations numériques de l'élargissement Stark. L'élaboration de cet outil ouvre la voie à la création d'une nouvelle génération de modèles d'atmosphères de naines blanches qui nous permettra de raffiner l'analyse spectroscopique de ces objets. / This thesis deals with the study of white dwarf stars of the DB type, whose spectrum is dominated by neutral helium lines. Recent work has revealed that physical parameters — namely the effective temperature and the stellar mass — measured using the so-called spectroscopic technique are problematic for this type of white dwarf. In this study, we re-examine one of the essential ingredients of this method, namely the calculation of Stark broadening profiles of neutral helium lines. This problem was addressed 25 years ago by our research group using the standard Stark broadening theory. 
Semi-analytical profiles calculated under this theory treat the electrons as dynamic and the ions as static. With the growth in computing power, other research groups have significantly improved the treatment of Stark broadening by producing numerical simulations that describe in detail the dynamics and interactions of the perturbers (ions and electrons) near the emitter (the helium atom in our case). However, they generated Stark profile tables, applicable to the computation of synthetic spectra of white dwarf stars, for only two lines of the helium atom. In this thesis, we describe the creation of our own simulation environment, including several important aspects considered in previous work (unified treatment of ions and electrons, correction for ion dynamics, transition of the electron contribution to the broadening from the core to the wings of the profile, numerical integration of the quantum time-evolution operator of helium perturbed by a fluctuating electric field, Debye correction for the correlation of the motion of charged perturbers, local density variations and particle reinjection), in order to better represent the dynamical environment of the helium atom. Intermediate steps, such as the construction of the simulation space and the quasi-static model, allowed us to validate this simulation space and to verify that the statistics of the system were respected. Once these validation steps were completed, we produced grids of Stark profiles for the two most important optical lines of the helium atom, He ɪ λ4471 and He ɪ λ4922, for temperatures between 10,000 K and 40,000 K and electron densities between 1 × 10¹⁴ cm⁻³ and 6 × 10¹⁷ cm⁻³. A comparison with other published results shows that our calculations rival the best profiles obtained from numerical simulations of Stark broadening. The development of this tool paves the way for a new generation of white dwarf atmosphere models that will allow us to refine the spectroscopic analysis of these objects.
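To make the idea of numerically integrating the quantum time-evolution operator under a fluctuating electric field more concrete, the sketch below steps a toy two-level emitter through a piecewise-constant field history. The Hamiltonian, coupling, field statistics and time step are invented for illustration and are unrelated to the actual simulation environment described above.

```python
# Toy illustration of stepping a quantum time-evolution operator through a
# fluctuating perturbing electric field (two-level emitter, hbar = 1,
# arbitrary units). Hamiltonian, coupling, field statistics and time step are
# invented for this sketch and are unrelated to the thesis' simulation code.
import numpy as np
from scipy.linalg import expm

H0 = np.diag([0.0, 1.0])                     # unperturbed level energies
dipole = np.array([[0.0, 0.3], [0.3, 0.0]])  # coupling of the emitter to the field

def evolve(fields, dt):
    """Accumulate U(t) over a piecewise-constant perturbing field history."""
    U = np.eye(2, dtype=complex)
    for E in fields:
        H = H0 + E * dipole                  # instantaneous Hamiltonian
        U = expm(-1j * H * dt) @ U           # one integration step of U(t)
    return U

rng = np.random.default_rng(1)
fields = rng.normal(0.0, 0.5, size=2000)     # crude stand-in for the local microfield
U = evolve(fields, dt=0.01)
# Averaging the emitter's dipole autocorrelation over many such realisations
# and Fourier transforming it is what yields the broadened line profile.
print(np.abs(U))
```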
|
100 |
Generation Of A Novel TiO₂-Composite – A Feasibility Study
Lindstrom, Mathias E.V., 23 July 2002 (has links)
No description available.
|