441

Ambiguidades no lápis da natureza / Ambiguities in the pencil of nature

Ruiz, Paulo Eduardo Rodrigues 23 November 2010 (has links)
The aim of this dissertation is to analyze the question of photographic ambiguity in the 19th century, starting from the theoretical assumption that photography was considered, in its origins, a kind of "pencil of nature". This concept was developed by the English researcher and photographer William Henry Fox Talbot in his work The Pencil of Nature, published between 1844 and 1846. Our discussion is based on the hypothesis that The Pencil of Nature already presented the ambiguities and ambivalences typical of photographic practice in the 19th century and, to an extent, up to the present day, with regard to the technical, scientific, commercial and artistic appeal of an industrial society. We draw examples from the photographic production of the time, using concepts developed by philosophers and theorists of photography such as Martin Heidegger, Walter Benjamin, Roland Barthes, Susan Sontag, Philippe Dubois, André Rouillé and Boris Kossoy, related to some basic notions of photographic language: the question of technique and photographic verisimilitude, the shift in the perception of reality through photography, and the presence of death in the photographic image.
442

Impact de l’asymétrie de statut groupal sur les stratégies d’ajustement identitaire et comportemental : le rôle des processus cognitifs et situationnels dans la perception de la discrimination / The impact of group-status asymmetry on identity and behavioural adjustment strategies: the role of cognitive and situational processes in the perception of discrimination

Fares, Rabie 24 November 2016 (has links)
This thesis, based on studies conducted with French citizens of Maghrebi (North African) origin, seeks to identify the cognitive, affective and motivational processes that can condition the perception of discrimination in the workplace and shape the adjustment strategies deployed in the face of deprivation of employment. Study 1 assessed the direct and indirect effects of "acquired social status" on the perception of discrimination at the individual and group levels, and opened a line of inquiry into their repercussions on self-esteem and on cognitive and identity adjustment strategies. Building on work on attributional ambiguity (Crocker & Major, 1989), Study 2 examined the emotional, cognitive and behavioural effects of activating a situation of employment discrimination, depending on whether the discrimination was explicit or ambiguous. Study 3, conducted in two phases, investigated how discrimination is perceived according to its source (ingroup vs. outgroup). Finally, Study 4, also conducted in two phases, examined the impact of the comparison process (intergroup vs. intragroup) on the devaluation of work and on Belief in a Just World.
443

Teacher Clarity Strategies of Highly Effective Teachers

Hall, Megan Olivia 01 January 2019 (has links)
Teacher clarity supports both cognitive and affective learning for all learners. The scholarly literature lacks research related to teacher clarity in nonlecture learning environments. The purpose of this qualitative study was to discover teacher clarity strategies that effectively promote student learning, particularly in nonlecture learning environments. The conceptual framework involved cognitive load theory and constructivism. The research questions explored how highly effective teachers experience clarity to promote student learning in nonlecture learning environments and what innovative strategies highly effective teachers practice to ensure clarity in nonlecture learning environments. For this in-depth qualitative interview study, data were collected through virtual synchronous focus groups and interviews with 10 State Teachers of the Year and State Teacher of the Year finalists and analyzed using manual and digital coding of emergent themes. Key nonlecture teacher clarity strategies discovered emphasized the importance of interaction, facilitation, and responsiveness through the establishment of safe and inclusive learning environments, active monitoring of student work and understanding, individualized application of strategic ambiguity, and utilization of technology tools. Further research is recommended in strategic ambiguity, interaction through facilitation, safe and inclusive environments, and teacher clarity through technology tools. By contributing to the body of knowledge of educational practices that improve student learning, my study has the potential to empower individual teachers to benefit all learners, and to support organizations in delivering equitable instruction in diverse secondary school settings.
444

Ignorance is bliss: the information malleability effect

Mishra, Himanshu Kumar 01 January 2006 (has links)
In this dissertation, I propose that, post-action, people tend to be more optimistic about outcomes when their actions were based on malleable (vague) information than when their actions were based on unmalleable (precise) information. Pre-action, however, no such difference occurs. I term this inconsistency in optimism between the pre- and post-action stages the Information Malleability Effect (IME). The actions in question could include choosing a product, drawing a ball from an urn, or consuming a food item. Prior research on ambiguity aversion has reliably documented that people are generally averse to making decisions based on malleable information. On the other hand, research on situated optimism has demonstrated that people exhibit a high level of optimism for events they consider more desirable and distort the available information to make those desirable events seem more likely to occur. I review these two streams of literature and show that although both make predictions in either the pre- or the post-action stage, neither alone can explain the IME. I propose a theoretical framework for the underlying cause of the IME that combines these two streams of literature and draws on the motivated-reasoning account. Based on this framework, I posit hypotheses that are tested across a series of experiments. Experiments 1a and 1b demonstrate the IME in between- and within-participant designs. Experiment 2 demonstrates that the interpretational flexibility of malleable information makes positive outcomes appear more plausible, and negative outcomes less plausible, than when information is unmalleable. Experiment 3 provides support for the proposed underlying process by priming accuracy and desired goals.
445

Evasion in Australia's parliamentary question time: the case of the Iraq war

Rasiah, Parameswary January 2008 (has links)
Given that the basic functions of parliamentary Question Time are to provide information and to hold the Government accountable for its actions, the possibility of evasion occurring in such a context is of crucial importance. Evasion (equivocation) has been identified as a matter of concern in political interviews, but no systematic study has been undertaken in the context of parliamentary discourse, notably Question Time, anywhere in the world. This study applies and adapts Harris's (1991) coding framework on various types of responses, Bull and Mayer's (1993) typology of non-replies and Clayman's (2001) work on how politicians 'resist' answering questions, all of which are based on political news interviews, to the study of evasion in Australia's House of Representatives' Question Time. A comprehensive, unified framework for the analysis of evasion is described, a decision flow-chart for the framework is provided, and an illustrative example of the applied framework is given based on Australia's Federal House of Representatives' Question Time. Put simply, the study was undertaken to determine if evasion occurred, how frequently it occurred and how it occurred. It involved the classification of responses as 'answers' (direct or indirect), 'intermediate responses' (such as pointing out incorrect information in the question), and 'evasions' based on specific criteria. Responses which were considered evasions were further analysed to determine the levels of evasion, whether they were covert or overt in nature and the types of 'agenda shifts' that occurred, if any. The thesis also involved a discourse-analytical study of other factors that appear to facilitate Ministerial evasion in Australia's House of Representatives, including the Speaker's performance and the use of 'Dorothy Dixers'. The research data was sourced from Question Time transcripts from the House of Representatives Hansard for the months of February and March 2003, dealing only with questions and responses on the topic of Iraq. In those months there were 87 questions on the topic of Iraq, representing more than two thirds of all questions on Iraq for the whole of 2003. Of these 87 questions, the majority (48) came from the Opposition party, through its leader. The balance (39) was asked by Government MPs. Analysis of the question/answer discourse for all 87 questions revealed that every question asked by Government members was answered compared to only 8 of the 48 Opposition questions. Of the 40 remaining Opposition questions, 21 were given intermediate responses and 19 were evaded outright. The fact that the overwhelming majority (83%) of Opposition questions were not answered together with other findings such as instances of partiality on the part of the Speaker; the use of 'friendly', prearranged questions by Government MPs; and the 'hostile' nature of questions asked by Opposition MPs casts serious doubt on the effectiveness of Question Time as a means of ensuring the Government is held accountable for its actions. The study provides empirical evidence that evasion does occur in Australia's House of Representatives' Question Time.
446

語境預測力對中文一詞多義處理歷程的影響:文句閱讀的眼動研究 / The influence of contextual predictability on processing Chinese polysemy: Eye movement experiments of sentence reading

高佩如, Kao, Pei Ju Unknown Date (has links)
This thesis reports two eye-movement experiments investigating the roles of context and of a word's multiple senses in the processing of polysemy. Frazier and Rayner (1990) suggested that, at the start of semantic processing, a polysemous word is comprehended with only partial specification: because its senses overlap, there is no immediate need to commit to a complete sense. This view supports the Immediate Partial Interpretation Hypothesis. Frisson and Pickering's (2001) Underspecification Model further elaborates the role of context in retrieving and selecting one of a polysemous word's senses: the meaning first retrieved is an underspecified one, and context intervenes in sense selection only after semantic retrieval is complete. This thesis manipulated contextual guidance and sense overlap to examine sense retrieval and selection in polysemy. Experiment One manipulated the contextual predictability (predictable vs. unpredictable context) and the number of senses (one sense, monosemy vs. many senses, polysemy) of target words in sentences; fixation times on the target words and post-target regions were recorded and analysed. Fixation times on target words were significantly shorter for polysemous than for monosemous words, supporting the Immediate Partial Interpretation Hypothesis. However, Experiment One showed no inhibitory or competitive effect in the post-target region, indicating no delayed semantic commitment in comprehending polysemy. Experiment Two investigated how this outcome relates to the degree of sense overlap and to whether the context makes a semantic commitment necessary. It manipulated the contextual predictability of target words (as in Experiment One) and the degree of sense overlap (monosemy vs. moderate sense overlap vs. high sense overlap), with the context controlled to bias toward the subordinate sense. Late eye-movement measures on the target words showed interactions between sense overlap (both moderate vs. monosemy and high vs. monosemy) and contextual predictability, indicating that contextual predictability takes effect at a later stage. In the post-target regions, inhibitory effects appeared for both moderate and high sense overlap (relative to monosemy) when the context was unpredictable, consistent with the delayed semantic commitment predicted by the Underspecification Model. Moreover, the effects of sense overlap (moderate vs. high) on late measures at the target words were modulated by contextual predictability, showing that sense overlap affects whether a semantic commitment is necessary. In sum, the semantic retrieval of polysemy is best explained by the Immediate Partial Interpretation Hypothesis, and the involvement of contextual constraint supports the Underspecification Model: context does not affect the initial phase of semantic retrieval, and with the senses underspecified, delayed semantic commitment occurs only if the context makes a commitment necessary. Furthermore, the facilitation effect depends on the number of senses: as the number of senses decreases, the facilitation wanes and disappears. Finally, delayed semantic commitment occurs only when the context biases toward a subordinate sense of the polysemous word.
447

Ambiguity Detection for Programming Language Grammars

Basten, Bas 15 December 2011 (has links) (PDF)
Context-free grammars are the most suitable and most widely used method for describing the syntax of programming languages. They can be used to generate parsers, which transform a piece of source code into a tree-shaped representation of the code's syntactic structure. These parse trees can then be used for further processing or analysis of the source text. In this sense, grammars form the basis of many engineering and reverse engineering applications, like compilers, interpreters and tools for software analysis and transformation. Unfortunately, context-free grammars have the undesirable property that they can be ambiguous, which can seriously hamper their applicability. A grammar is ambiguous if at least one sentence in its language has more than one valid parse tree. Since the parse tree of a sentence is often used to infer its semantics, an ambiguous sentence can have multiple meanings. For programming languages this is almost always unintended. Ambiguity can therefore be seen as a grammar bug. A category of context-free grammars that is particularly sensitive to ambiguity is that of character-level grammars, which are used to generate scannerless parsers. Unlike traditional token-based grammars, character-level grammars include the full lexical definition of their language. This has the advantage that a language can be specified in a single formalism, and that no separate lexer or scanner phase is necessary in the parser. However, the absence of a scanner does require some additional lexical disambiguation. Character-level grammars can therefore be annotated with special disambiguation declarations to specify which parse trees to discard in case of ambiguity. Unfortunately, it is very hard to determine whether all ambiguities have been covered. The task of searching for ambiguities in a grammar is very complex and time-consuming, and is therefore best automated. Since the invention of context-free grammars, several ambiguity detection methods have been developed to this end. However, the ambiguity problem for context-free grammars is undecidable in general, so the perfect detection method cannot exist. This implies a trade-off between accuracy and termination. Methods that apply exhaustive searching are able to correctly find all ambiguities, but they might never terminate. On the other hand, approximative search techniques always produce an ambiguity report, but these might contain false positives or false negatives. Nevertheless, the fact that every method has flaws does not mean that ambiguity detection cannot be useful in practice.

This thesis investigates ambiguity detection with the aim of checking grammars for programming languages. The challenge is to improve upon the state of the art by finding methods that are accurate enough and that scale to realistic grammars. First we evaluate existing methods with a set of criteria for practical usability. Then we present various improvements to ambiguity detection in the areas of accuracy, performance and report quality. The main contributions of this thesis are two novel techniques. The first is an ambiguity detection method that applies both exhaustive and approximative searching, called AMBIDEXTER. The key ingredient of AMBIDEXTER is a grammar filtering technique that can remove harmless production rules from a grammar. A production rule is harmless if it does not contribute to any ambiguity in the grammar. Any found harmless rules can therefore safely be removed. This results in a smaller grammar that still contains the same ambiguities as the original one. However, it can now be searched with exhaustive techniques in less time. The grammar filtering technique is formally proven correct, and experimentally validated. A prototype implementation is applied to a series of programming language grammars, and the performance of exhaustive detection methods is measured before and after filtering. The results show that a small investment in filtering time can substantially reduce the run-time of exhaustive searching, sometimes by several orders of magnitude. After this evaluation on token-based grammars, the grammar filtering technique is extended for use with character-level grammars. The extensions deal with the increased complexity of these grammars, as well as their disambiguation declarations. This enables the detection of productions that are harmless due to disambiguation. The extensions are experimentally validated on another set of programming language grammars from practice, with results similar to before. Measurements show that, even though character-level grammars are more expensive to filter, the investment is still very worthwhile. Exhaustive search times were again reduced substantially.

The second main contribution of this thesis is DR. AMBIGUITY, an expert system to help grammar developers understand and solve found ambiguities. If applied to an ambiguous sentence, DR. AMBIGUITY analyzes the causes of the ambiguity and proposes a number of applicable solutions. A prototype implementation is presented and evaluated with a mature Java grammar. After removing disambiguation declarations from the grammar, we analyze sentences that have become ambiguous through this removal. The results show that in all cases the removed filter is proposed by DR. AMBIGUITY as a possible cure for the ambiguity. In conclusion, this thesis improves ambiguity detection with two novel methods. The first is the ambiguity detection method AMBIDEXTER, which applies grammar filtering to substantially speed up exhaustive searching. The second is the expert system DR. AMBIGUITY, which automatically analyzes found ambiguities and proposes applicable cures. The results obtained with both methods show that automatic ambiguity detection is now ready for realistic programming language grammars.
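The core notions here — a sentence with more than one parse tree, and exhaustive search as the accurate-but-possibly-non-terminating strategy — can be made concrete with a small sketch. The following is a hypothetical illustration, not the AMBIDEXTER tool described in the abstract: the toy grammar, symbol names and sentence-length bound are invented for the example.

```python
# A minimal, hypothetical sketch of exhaustive ambiguity detection -- not AMBIDEXTER.
# The grammar below is a Chomsky-normal-form encoding of the classically ambiguous
# E -> E "+" E | "a".  We count parse trees with a CYK-style dynamic program and
# search short sentences for one with more than one tree; such a sentence
# witnesses an ambiguity in the grammar.
from itertools import product

RULES = [
    ("E", ("E", "P")),   # E -> E P
    ("E", ("a",)),       # E -> a
    ("P", ("O", "E")),   # P -> O E
    ("O", ("+",)),       # O -> +
]
TERMINALS = ["a", "+"]
START = "E"

def count_parses(tokens):
    """Number of distinct parse trees deriving the token sequence from START."""
    n = len(tokens)
    # table[i][j][A] = number of ways nonterminal A derives tokens[i:j]
    table = [[{} for _ in range(n + 1)] for _ in range(n + 1)]
    for i, tok in enumerate(tokens):
        for lhs, rhs in RULES:
            if rhs == (tok,):
                table[i][i + 1][lhs] = table[i][i + 1].get(lhs, 0) + 1
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for lhs, rhs in RULES:
                    if len(rhs) == 2:
                        left = table[i][k].get(rhs[0], 0)
                        right = table[k][j].get(rhs[1], 0)
                        if left and right:
                            table[i][j][lhs] = table[i][j].get(lhs, 0) + left * right
    return table[0][n].get(START, 0)

def find_ambiguous_sentence(max_len=7):
    """Brute-force search: return the first sentence with more than one parse."""
    for length in range(1, max_len + 1):
        for tokens in product(TERMINALS, repeat=length):
            if count_parses(list(tokens)) > 1:
                return " ".join(tokens)
    return None

print("ambiguous sentence:", find_ambiguous_sentence())  # prints: a + a + a
```

Exhaustive enumeration of this kind is exactly what blows up on realistic grammars, which is why removing harmless production rules first, as AMBIDEXTER does, pays off: the smaller grammar leaves a much smaller space for the exhaustive phase to search.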
448

Imprecise probability analysis for integrated assessment of climate change

Kriegler, Elmar January 2005 (has links)
We present an application of imprecise probability theory to the quantification of uncertainty in the integrated assessment of climate change. Our work is motivated by the fact that uncertainty about climate change is pervasive, and therefore requires a thorough treatment in the integrated assessment process. Classical probability theory faces some severe difficulties in this respect, since it cannot capture very poor states of information in a satisfactory manner. A more general framework is provided by imprecise probability theory, which offers a similarly firm evidential and behavioural foundation, while at the same time allowing more diverse states of information to be captured. An imprecise probability describes the information in terms of lower and upper bounds on probability.

For the purpose of our imprecise probability analysis, we construct an energy-balance climate model with a diffusive ocean that parameterises the global mean temperature response to secular trends in the radiative forcing in terms of climate sensitivity and effective vertical ocean heat diffusivity. We compare the model behaviour to the 20th-century temperature record in order to derive a likelihood function for these two parameters and the forcing strength of anthropogenic sulphate aerosols. Results show a strong positive correlation between climate sensitivity and ocean heat diffusivity, and between climate sensitivity and the absolute strength of the sulphate forcing.

We identify two suitable imprecise probability classes for an efficient representation of the uncertainty about the climate model parameters and provide an algorithm to construct a belief function for the prior parameter uncertainty from a set of probability constraints that can be deduced from the literature or observational data. For the purpose of updating the prior with the likelihood function, we establish a methodological framework that allows us to perform the updating procedure efficiently for two different updating rules: Dempster's rule of conditioning and the Generalised Bayes' Rule. Dempster's rule yields a posterior belief function in good qualitative agreement with previous studies that tried to constrain climate sensitivity and sulphate aerosol cooling. In contrast, we are not able to produce meaningful imprecise posterior probability bounds from the application of the Generalised Bayes' Rule. We attribute this result mainly to our choice of representing the prior uncertainty by a belief function.

We project the Dempster-updated belief function for the climate model parameters onto estimates of future global mean temperature change under several emissions scenarios for the 21st century, and several long-term stabilisation policies. Within the limitations of our analysis, we find that a stringent stabilisation level of around 450 ppm carbon-dioxide-equivalent concentration is required to obtain a non-negligible lower probability of limiting the warming to 2 degrees Celsius. We discuss several frameworks of decision-making under ambiguity and show that they can lead to a variety of, possibly imprecise, climate policy recommendations. We find, however, that poor states of information do not necessarily impede useful policy advice.

We conclude that imprecise probabilities are indeed a promising candidate for the adequate treatment of uncertainty in the integrated assessment of climate change. We have constructed prior belief functions that require much weaker assumptions about the prior state of information than a prior probability would, and that can nevertheless be propagated through the entire assessment process. As a caveat, the updating issue needs further investigation. Belief functions are a sensible choice for representing prior uncertainty only if more restrictive updating rules than the Generalised Bayes' Rule are available.
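As a rough, self-contained illustration of the lower/upper-probability machinery — not the thesis's climate-model code, and using the generic Dempster's rule of combination rather than the conditioning and Generalised Bayes' rules analysed above — the sketch below combines two mass functions over a made-up frame of coarse climate-sensitivity classes and reads off belief (lower probability) and plausibility (upper probability) for an event.

```python
# Hypothetical Dempster-Shafer sketch: the frame, class labels and mass values
# are invented for illustration and are not taken from the thesis.
FRAME = frozenset({"low", "mid", "high"})  # coarse climate-sensitivity classes

def belief(mass, event):
    """Lower probability: mass of focal sets wholly contained in the event."""
    return sum(m for a, m in mass.items() if a and a <= event)

def plausibility(mass, event):
    """Upper probability: mass of focal sets that overlap the event."""
    return sum(m for a, m in mass.items() if a & event)

def combine(m1, m2):
    """Dempster's rule of combination: intersect focal sets, renormalise conflict."""
    out, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                out[inter] = out.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {a: v / (1.0 - conflict) for a, v in out.items()}

# A vague prior (most mass on "don't know") combined with evidence favouring
# the mid-to-high range, standing in for a likelihood.
prior = {FRAME: 0.6, frozenset({"low", "mid"}): 0.4}
evidence = {frozenset({"mid", "high"}): 0.7, FRAME: 0.3}
posterior = combine(prior, evidence)

event = frozenset({"mid"})
print("lower P(mid) =", belief(posterior, event))        # 0.28
print("upper P(mid) =", plausibility(posterior, event))  # 1.0
```

The gap between the lower and upper probability is what expresses the poverty of the information; a classical prior would have forced that gap to zero from the outset.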
449

The impact of role stress on job satisfaction and the intention to quit among call centre representatives in a financial company

Diamond, Kenneth Lungile January 2010 (has links)
The call centre industry has been one of the fastest growing industries in South Africa. Call centres have, for most companies, become a basic business requirement for servicing customers. Zapf, Isic, Bechtoldt and Blau (2003: 311) argue that there are high levels of stress amongst employees in call centres, which they believe to be the result of both the work tasks and the interactions with customers. The aim of this study was to establish whether call centre work design and structure contributed to role stress amongst client service representatives (CSRs). It was also the aim of this study to establish whether role stress affected the CSRs' levels of job satisfaction and their intentions to quit their jobs.
450

Measuring broadband, ultraweak, ultrashort pulses

Shreenath, Aparna Prasad 14 July 2005 (has links)
Many essential processes and interactions on atomic and molecular scales occur on ultrafast timescales. The ability to measure and manipulate ultrashort pulses holds the key to probing and understanding the processes that physicists, engineers, chemists and biologists study today. Measuring an ultrashort pulse means measuring both its intensity (as a function of time) and its phase in time. Alternatively, we can measure the spectrum and spectral phase (in the corresponding Fourier domain). In the early 1990s, the invention of FROG opened up the field of ultrashort-pulse measurement with its ability to measure the complete pulse. Since then, a whole host of pulse-measurement techniques have been invented to measure all sorts of ultrashort pulses. However, neither any variation of FROG nor any other femtosecond pulse-measurement technique has yet been able to completely measure arbitrary ultraweak femtosecond light pulses such as those found in nature. In this thesis, we explore a couple of highly sensitive methods in a quest to measure ultraweak ultrashort pulses. We explore the use of Spectral Interferometry, a technique known for its sensitivity, as one possibility. We find that it has certain drawbacks that make it not necessarily suitable for tackling this problem. In the course of our quest, however, we find that this technique is highly suitable for measuring shaped pulses tens of picoseconds long, and we discuss a couple of developments that make SI highly practical for such shaped-pulse measurements. We also develop a new technique, a variation of FROG based on the nonlinearities of Difference Frequency Generation and Optical Parametric Amplification, which can amplify pulses as weak as a few hundred attojoules so that they can be spectrally resolved and their full intensity and phase measured. This technique offers great potential for measuring arbitrary ultraweak ultrashort pulses.
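The equivalence mentioned at the start of the abstract can be written out explicitly (a textbook relation, stated here up to normalisation constants rather than quoted from the thesis): a pulse with temporal intensity I(t) and temporal phase \phi(t) has field

E(t) = \sqrt{I(t)}\, e^{i\phi(t)}, \qquad \tilde{E}(\omega) = \int E(t)\, e^{-i\omega t}\, dt = \sqrt{S(\omega)}\, e^{i\varphi(\omega)},

so measuring intensity and phase in time is equivalent, via the Fourier transform, to measuring the spectrum S(\omega) and spectral phase \varphi(\omega); a complete pulse-measurement technique such as FROG must recover one of these two pairs.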
