1271 |
A STYLISTIC COMPARISON OF TWO SHORT STORIES BY ERNEST HEMINGWAY: "A Clean, Well-Lighted Place" and "Hills Like White Elephants" Hietanen, Marko January 2009 (has links)
The purpose of this essay is to investigate how Ernest Hemingway uses his style of writing in his short stories “A Clean, Well-Lighted Place” and “Hills Like White Elephants”. The questions at issue are: What is characteristic of Hemingway's style when looking at the use of adjectives and sentence complexity? How is the Iceberg Technique used? What stylistic differences and similarities are there between the stories? In my investigation I used a stylistic approach, in which adjectives are counted and sentence length is measured (creating a mainly quantitative analysis). The frequency of adjectives is calculated and compared against the norm in imaginative prose. Sentence length is compared against the norm for modern English. Previous research has provided a foundation for further analysis of the Iceberg Technique. The analysis shows that the frequency of adjectives is very low compared with the norm and that many adjectives are used repeatedly. The sentences are very short, not even reaching half the length of the norm presented. Hemingway’s Iceberg Technique is evident in the sparse use of dialogue tags and a plot that does not reveal much about the characters or the setting. The real plot is often hidden, leaving it to the reader to interpret and “feel” what the story is really about. In conclusion, both short stories are told in a minimalistic style, using only what is necessary to tell the story. They have a simple plot and simple characters, just like the Hemingway style we know.
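As a rough illustration of the kind of counts this study relies on, the sketch below computes adjective frequency and mean sentence length for a passage of prose. It is an assumed workflow only: NLTK's tokenizers and part-of-speech tagger stand in for the study's (presumably manual) counting, and the sample sentence is invented.

```python
# Hedged sketch: approximate adjective frequency and mean sentence length,
# using NLTK's tagger instead of manual counting. Requires the 'punkt' and
# 'averaged_perceptron_tagger' NLTK data packages to be downloaded first.
import nltk

def stylistic_profile(text):
    sentences = nltk.sent_tokenize(text)
    words = [w for s in sentences for w in nltk.word_tokenize(s) if w.isalpha()]
    tagged = nltk.pos_tag(words)
    adjectives = [w.lower() for w, tag in tagged if tag.startswith("JJ")]
    return {
        "adjective_frequency": len(adjectives) / len(words),   # share of all words
        "mean_sentence_length": len(words) / len(sentences),   # words per sentence
        "distinct_adjectives": len(set(adjectives)),            # repetition indicator
    }

sample = ("It was late and everyone had left the cafe except an old man "
          "who sat in the shadow the leaves of the tree made against the electric light.")
print(stylistic_profile(sample))
```

The two frequencies could then be compared against the published norms for imaginative prose and modern English mentioned in the abstract.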
|
1272 |
Newspaper Readability: A Broadsheet vs. a Tabloid Järvbäck Hillbom, Kristina January 2009 (has links)
Is it possible to trace differences in the syntax used in different newspapers, and do these differences influence readability? Earlier studies suggest that it is, and show that a broad distinction can be drawn between, for example, the language used in a broadsheet and the language used in a tabloid. In this study, both sentence length and sentence complexity of a broadsheet and a tabloid with a similar political stance were examined in order to find out whether differences in readability between the two newspapers can be shown. The articles used in this study are on-line articles and have thus been taken from a search on the internet. In order to obtain adequate research material, ten articles from each newspaper have been used. Five articles from each newspaper website are news articles, whereas the remaining five were taken from the culture pages. Regarding sentence length, the average for each article has been calculated. For sentence complexity, the ratios of simple, complex, and compound sentences have been investigated. The analysis revealed no substantial differences in sentence length or sentence complexity between the examined newspapers. However, in contrast to the hypothesis of this study, the examined articles in the tabloid consisted of longer sentences and more complex sentence constructions, which, according to earlier research, would indicate a more formal language and probably affect readability. Since both examined newspapers are supposed to support the Conservative party, the results of this study make it possible to claim that both newspapers target the same audience.
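The sketch below is a rough, assumed version of the two measures used here: average sentence length per article and the ratio of simple, compound and complex sentences. The sentence-type classification is a crude keyword heuristic; the study itself presumably classified sentences by hand.

```python
# Hedged sketch: average sentence length and simple/compound/complex ratios.
# The classification is a naive heuristic (presence of subordinators vs.
# coordinators), not the study's actual method.
import re

SUBORDINATORS = {"because", "although", "while", "when", "since", "that", "which", "who", "if"}
COORDINATORS = {"and", "but", "or", "so", "yet", "nor"}

def classify(sentence):
    words = {w.lower() for w in re.findall(r"[A-Za-z']+", sentence)}
    if words & SUBORDINATORS:
        return "complex"
    if words & COORDINATORS:
        return "compound"
    return "simple"

def readability_profile(article):
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", article.strip()) if s]
    words_per_sentence = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]
    counts = {"simple": 0, "compound": 0, "complex": 0}
    for s in sentences:
        counts[classify(s)] += 1
    return {
        "mean_sentence_length": sum(words_per_sentence) / len(sentences),
        "ratios": {k: v / len(sentences) for k, v in counts.items()},
    }

print(readability_profile("The minister resigned. She left because the report was damning, "
                          "and the party quickly backed her rival."))
```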
|
1273 |
Patterns of Consumption: Ceramic Residue Analysis at Liangchengzhen, Shandong, China Lanehart, Rheta E. 01 January 2015 (has links)
The purpose of this thesis was to identify the different patterns of food consumption across space and time at Liangchengzhen, a Longshan (ca. 2600-1900 B.C.) site located in Shandong Province, China. The primary hypothesis of the research contended that evidence of increasing social inequality with respect to food consumption would be found from early to late phases at Liangchengzhen. In addition, rice and meat from mammals, especially pigs, were hypothesized as the most likely types of prestigious foods for daily and ritual activities. Fish and marine foods in general were hypothesized to be foods that average households could obtain, since Liangchengzhen was close to the sea, and to have a lower value than mammal meat.
Pottery was sampled from Early Phase storage/trash and ritual pits as well as Late Phase storage/trash and ritual pits located in Excavation Area One. Pottery types included ding and guan, hypothesized for cooking meat, and yan, hypothesized for steaming vegetables and grains. Lipid residue analysis was performed using gas chromatography/mass spectrometry (GCMS) to quantify the amount of C15 and C17 alkane peaks in the pottery and compare these quantities to the amount of C15 and C17 alkane peaks in terrestrial and marine food reference sources.
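The comparison step can be pictured with the toy sketch below: it turns integrated C15 and C17 alkane peak areas from a sherd into a simple ratio and matches it against reference food profiles. The reference values and sherd readings are invented placeholders, not the thesis's data, and the real analysis of GC-MS output is considerably more involved.

```python
# Hedged sketch: compare the relative C15/C17 alkane signal of a sherd against
# reference food sources. All numbers below are illustrative assumptions.
def c15_share(peak_areas):
    """Return the C15 share of the combined C15 + C17 alkane signal."""
    c15, c17 = peak_areas["C15"], peak_areas["C17"]
    return c15 / (c15 + c17)

# Hypothetical reference profiles (C15 share) for candidate food sources.
REFERENCES = {"marine": 0.70, "rice": 0.45, "millet": 0.35, "pig": 0.25}

def closest_source(peak_areas):
    share = c15_share(peak_areas)
    return min(REFERENCES, key=lambda src: abs(REFERENCES[src] - share))

sherd = {"C15": 180.0, "C17": 95.0}   # made-up integrated peak areas from one vessel
print(closest_source(sherd))
```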
Results indicated that socially valued food consumption transitioned from marine food sources in the early phase ritual pits to rice and pig in the late phase ritual pits. Millet and plant residues were consistently present in storage/trash pits from both early and late phases. Findings also indicated that the use of pottery types for cooking was not limited to one source: marine, rice, millet and plant residues were found in all pottery types, while pig residues were found in ding and yan pottery.
Results of the lipid residue analysis provide partial support for increasing social inequality with respect to food consumption from early to late phases at Liangchengzhen. The findings from the lipid residue analysis in this thesis more closely resemble an integrative, communal consumption pattern in the early phase and a hierarchical consumption pattern in the late phase. Fish, more abundant in the early phase, was almost non-existent by the late phase. Pig and rice, hypothesized as preferred foods, were found only during the late phase, primarily in the ritual pit H31. Millet and plant residues were conspicuously present during both phases, but had greater separation from ritual pits during the late phase. However, these findings are surprising, since they do not match the material remains of rice and pig found in early phase pits or late phase storage/trash pits from Excavation Area One.
It can be concluded that patterns of consumption at Liangchengzhen changed substantially from the early phase to the late phase with regard to food residues found in hypothesized ritual pits. Considering these data with the understanding that food in China has historically been used as a tool to wield influence and power, it can be hypothesized that a social hierarchy may have developed by the late phase that was not present during the early phase. However, participation in the activities held in late phase ritual pits may have been inclusive for all Liangchengzhen residents rather than exclusive to higher status individuals.
The current research provides a starting point for further investigation into the foodways at Liangchengzhen. This thesis is the first systematic study of food residues from the interior of Neolithic vessels from ancient China that relates the results of the residue analysis to patterns of food consumption and social change.
|
1274 |
Multiple group membership and individual resilience and well-being: the impact of social identity complexity, stigmatization and compatibility Sønderlund, Anders Larrabee January 2015 (has links)
A growing body of research points to the value of multiple group memberships for individual well-being. However, much of this work considers group memberships very broadly and in terms of number alone, and in so doing advances an argument that when it comes to group memberships, more is better. We conducted five studies to delve further into this idea. Specifically, across these studies we considered how different features of groups may impact on how group memberships combine with one another and affect individual well-being. In two correlational studies, we found that multiple group membership indeed contributed to well-being, but also that this effect was moderated by the distinctiveness of those groups within the overall self-concept (Study 1), and by the social value and visibility of individual group memberships (i.e., stigma; Study 2). In both studies, these effects were mediated by perceived access to social support and by the reported ability to engage in identity expression (i.e., to communicate to others who one “really is”). Across another three studies we experimentally demonstrated that multiple group membership increased well-being and resilience to stress (Studies 3 and 4), but only when the given groups were perceived as compatible in nature (Studies 3 and 5). Together, these studies suggest that the benefits of multiple group membership depend on factors that go beyond their sheer number. Indeed, the content and social meaning of group memberships, individually and in combination, and the way in which these features guide self-expression and social action, determine whether multiple group memberships are a benefit or a burden for individual well-being and resilience.
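For readers unfamiliar with the moderation analyses reported here, the sketch below shows the general form of such a test on simulated data: well-being regressed on number of group memberships, a moderator, and their interaction. The variable names and effect sizes are illustrative assumptions, not the studies' actual measures or results.

```python
# Hedged sketch: a moderation test of the kind described (simulated data).
# A significant groups:distinct interaction term indicates moderation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
groups = rng.poisson(3, n)        # number of group memberships (simulated)
distinct = rng.normal(0, 1, n)    # hypothetical moderator: distinctiveness of groups
wellbeing = 0.3 * groups + 0.2 * distinct + 0.25 * groups * distinct + rng.normal(0, 1, n)

df = pd.DataFrame({"wellbeing": wellbeing, "groups": groups, "distinct": distinct})
model = smf.ols("wellbeing ~ groups * distinct", data=df).fit()
print(model.summary().tables[1])  # inspect the interaction coefficient
```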
|
1275 |
Conflict complexity in Ethiopia: case study of Gambella Regional State Adeto, Yonas Adaye January 2014 (has links)
The causes of violent conflicts in Ethiopia in general, and in Gambella in particular, are complex. Critically examining and explaining the causes entails going beyond labelling them solely in terms of one variable, such as 'ethnic conflict'. The contention of the study is that contemporary conflicts in Ethiopia have remained protracted, untransformed and recurring. This is largely because the past processes which gave rise to them were not properly taken into account and not properly comprehended, thereby giving rise to much superficiality in their explanations, inappropriate policies and a failure of efforts to address them. The thesis identifies four major factors and two contrasting narratives which have framed the analysis of conflict complexity in Gambella. Qualitatively designed, the study focuses mainly on the structural causes of violent conflicts since 1991 and how their constituent elements were conceived and explained by different actors. First, asymmetrical centre-periphery relations entrenched in the state building processes of the imperial and military regimes continued under the present regime, rendering Gambella an object of extraction and repression. Consequently, competing claims of ownership of Gambella between the Anywaa and the Nuer ethnic groups evolved, entailing shifting allegiances to the central government. Second, the ethnic politics of the new social contract ushered in a new thinking of 'each ethnic group for itself'; it made ethnic federalism a means of consolidating the regime's political philosophy, depriving the local community of genuine political representation and leading to broader, deeper and more serious violence. Third, the land policy of the incumbent regime favoured its political party affiliates and foreign investors, thus inducing more violence. Finally, external dynamics impacted on internal conflict complexity. The study has argued that single-factor approaches are inadequate to explain what has constituted violent conflicts in Gambella since 1991; it has concluded that internal conflicts are complex, and their constituent elements are conceived of, and explained, differently by the local peoples and different levels of government. Nevertheless, given commitment and political will, the local and national governments, as well as peoples at the grassroots level, have the capacity to transform the present, and to prevent future violent conflicts in the region.
|
1276 |
Thriving at the Edge of Chaos Bengtsson, Jonas January 2004 (has links)
In this master thesis two different worldviews are compared: a mechanistic and an organic worldview. The way we think the world and nature work is reflected in how we think organizations work, or how they ought to work. The mechanistic worldview has dominated our way of thinking since the seventeenth century, and it compares the world with a machine. The organic worldview could use a number of different metaphors, but the one addressed in this thesis is complexity theory. Complexity theory is related to chaos theory and is concerned with complex adaptive systems (cas). Complex adaptive systems exist everywhere and include systems such as the human immune system, economies, and ecosystems. What complexity theory tries to do is to understand these systems—how they arise, how they function and how order emerges in them. When looking at complex adaptive systems you can’t just look at the different parts. You must take a more holistic view and look at the whole and the interaction of the parts. If you just look at the parts you will miss the emergent properties that have emerged as the system has self-organized. One prominent aspect of these systems is that they don’t have any central authority, yet somehow order does arise. In relation to organizations, complexity theory has something to say about almost all aspects of organizations: from what kind of leadership is needed and how teams should be organized, to the physical structure of the organization. To understand what complexity theory is and how to relate it to (software developing) organizations is the main focus of this thesis. Scrum is an agile and lightweight process which can be applied to development projects in general, but has been used in examples as diverse as software development projects, marketing programs, and business process reengineering (BPR) initiatives. In this thesis Scrum is used as an example of how to apply complexity theory to organizations. The result of the thesis showed that Scrum is highly influenced by and compatible with complexity theory, which implies that complexity theory is of some use in software development. However, more work needs to be done to determine how effective it is, how to introduce it into organizations, and to explore more specific implementations. This master thesis should give the reader a good understanding of what complexity theory is, some specific issues to consider when applying complexity theory to organizations, and some specific examples of how to apply complexity theory to organizations.
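As a toy illustration of the self-organization idea discussed above (not something taken from the thesis), the sketch below runs a one-dimensional majority-rule cellular automaton: each cell simply copies the local majority of its neighbourhood, and ordered domains emerge from random noise with no central authority.

```python
# Hedged sketch: order emerging from purely local rules, no central controller.
import random

def step(cells):
    n = len(cells)
    return [
        1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
        for i in range(n)
    ]

cells = [random.randint(0, 1) for _ in range(60)]
for _ in range(10):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)   # stable ordered domains form after a few steps
```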
|
1277 |
An Analysis of Platform Game Design: Implementation Categories and Complexity Measurements Gustafsson, Adam January 2014 (has links)
This thesis addresses design and development related problems identified within the platform-game genre. The problem described originates from the fluctuating curve of interest towards the platform-game genre that can be observed between the 1980s and today. The problem stated in this thesis is that modern platform-game developers may often overlook and/or deprioritize important design and gameplay related components that we find recurring in previously popular games within the genre.

This thesis strives to address such problems by decomposing the development process of a platform game into a light framework titled Implementation categories. All included categories represent a set of design and development related platform-game components, primarily identified through previous research in the field. In order to create an understanding of each category's complexity, as well as account for the possibilities of using the categories as a guideline when developing a platform game, a prototype game was developed. The Implementation categories included in the prototype were then measured with a set of software complexity metrics. This thesis will motivate and explain the selection of implementation categories, account for the usage of software complexity metrics and present a detailed documentation of the prototype development.

The result of this thesis is a thorough presentation of the Implementation categories, attached with complexity examples for each category, as well as a complete game prototype. The complete results of this thesis will hopefully be of assistance to small-scale, independent or academic game projects in regard to design, decision making, prioritization and time planning.
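The measurement step can be imagined along the lines of the sketch below: an approximate cyclomatic-complexity count per implementation category, obtained by counting decision points in the code belonging to each category. The category-to-code mapping and the snippets are hypothetical, and the thesis's prototype and chosen complexity metrics may well differ.

```python
# Hedged sketch: rough cyclomatic-complexity estimate per implementation
# category, counting decision points (branches, loops, boolean operators).
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.With, ast.BoolOp)

def cyclomatic_estimate(source):
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))

categories = {  # made-up examples standing in for the prototype's code
    "player_movement": "def move(p, keys):\n    if 'left' in keys:\n        p -= 1\n    if 'right' in keys:\n        p += 1\n    return p\n",
    "collision": "def overlaps(a, b):\n    return a[0] < b[0] + b[2] and a[0] + a[2] > b[0]\n",
}

for name, source in categories.items():
    print(f"{name}: approximate cyclomatic complexity {cyclomatic_estimate(source)}")
```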
|
1278 |
Exploring HCI-issues within error-sensitive intensive healthcare systems: An Ethnographic case study Axelsson, Lenny January 2014 (has links)
People are used to working routines that are taught and transferred from one person to another, routines such as how to interact with an information system and how to use it in a specific context. While user experience and usability have been two issues of interest within the field of HCI, there is a lack of research exploring usage and behavior while interacting with complex error-sensitive systems, in the sense of systems where an action cannot be undone once performed. This thesis explores the error-sensitive aspects of complexity within the interactions involved in administering medical prescriptions at an intensive healthcare unit. The aim is to investigate the interactions of computer-supported cooperative work environments used for information transformation activities for medical prescriptions. The results reveal a number of HCI-related issues in which clinicians socially bypass system interactions by making incomplete data inputs while assuming a given level of understanding on the part of other employees.
|
1279 |
Financial Networks, Complexity and Systemic Risk Roukny, Tarik 11 January 2016 (has links)
The recent financial crisis has brought to the fore the need to better understand systemic risk, that is, the risk of collapse of a large part of the financial system and its potential effects on the real economy. In this thesis, we argue that a proper assessment of systemic risk must include an analysis of the network of interdependencies that exists between the different financial institutions. In fact, today's level of financial interconnectedness between and among markets has proven to have ambiguous effects. On the one hand, a highly connected system allows risk to be diversified at the micro level. On the other hand, too many interdependencies provide various paths for contagion to take place and propagate at the macro level. In what follows, we analyze financial markets as networks of interactions and dependencies between financial agents. Through this lens, we investigate three major aspects: (i) how the structure of financial networks can amplify or mitigate the propagation of financial distress, (ii) what the implications are for macro-prudential regulation, and (iii) which patterns of interactions characterize real financial networks.

We start out by delivering a stability analysis of a network model of interbank contagion that accounts for panics and bank runs. We identify the effects of market architecture, banks' capital ratios, market liquidity and shocks. Our results show that no single network architecture is always superior to others. In particular, highly concentrated networks can be both the most robust and the most fragile, depending on other market characteristics, mainly liquidity.

We then move on to tackle issues related to the building of regulatory frameworks that adequately account for the effects of financial interdependencies. We propose a new methodology to compute individual and systemic probabilities of default and show that certain network characteristics give rise to uncertainty. More precisely, we find that network cycles are responsible for the emergence of multiple equilibria even in the presence of complete knowledge. In turn, multiple equilibria give rise to uncertainty for the regulator in the determination of default probabilities. We also quantify the effects of network structures, leverage, volatility and correlations.

Having introduced a way to overcome multiplicity, we deliver a method that quantifies the price of complexity in financial markets based on the above-mentioned model. This method consists of determining the scope of possible levels of systemic risk that can be obtained when some parameters are subject to small deviations from their true value. The results show a price to the interconnected nature of credit markets even when the equilibrium is unique: small errors can lead to large mistakes in measuring the likelihood of systemic default. Extending the model to account for derivative contracts, we show that error effects increase dramatically as more types of contracts are present in the system. While there is an intuition for such a phenomenon, our framework formalizes the idea and quantifies its determinants.

In the last part of this thesis, we contribute to the quantitative analysis of real financial networks. We start with a temporal network analysis of one of the major national interbank markets, the German interbank market. We report on the structural evolution of two of the most important over-the-counter markets for liquidity: the interbank market for credit and for derivatives.
We find that the majority of interactions is concentrated on a small set of market participants. There also exists an important correlation between the borrowing and lending activities of each bank in terms of numbers of counterparties. In contrast with other works, we find little impact of the 2008 crisis on the structure of the credit market. The derivative market, however, exhibits a peak of concentration in the run-up to the crisis. Globally, both markets exhibit large levels of stability for most of the network metrics and high correlation amongst them.

Finally, we analyze how banks interact with the real economy by investigating the network of loans from banks to industries in Japan. We find evidence of a particular structure of interactions resulting from the coexistence of specific strategies on both the lending side and the borrowing side: generalist agents and specialist agents. Generalist banks have a diversified portfolio (i.e. they provide liquidity to almost all industries) while specialist banks focus their activity on a narrow set of industries. Similarly, generalist industries obtain credit from all banks while specialist industries have a restricted number of creditors. Moreover, the arrangement of interactions is such that specialists tend to interact only with generalists from the other side. Our model allows us to structurally characterize highly persistent, and economically meaningful, sets of generalists and specialists. We further provide an analysis of the factors that predict whether a given bank or industry is a generalist. We show that size is an important determinant, both for banks and industries, but we also highlight additional relevant factors. Finally, we find that generalist banks tend to be less vulnerable. Hence, how banks position themselves in the network has important implications for their risk profile.

Overall, the results presented in this thesis highlight the complex role played by financial interlinkages. Therefore, they demonstrate the need to embed the network dimension in the regulatory framework to properly assess the stability profile of financial systems. Such findings are relevant for both theoretical modeling and empirical investigations. We believe that they also shed light on crucial aspects of systemic risk relevant for policy making and regulation of today's complex financial systems.
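To make the contagion analysis in the first part of the thesis more concrete, the sketch below runs a stylized interbank default cascade: banks hold capital buffers and interbank exposures, and when a counterparty defaults a fraction of the exposure is written off; banks whose cumulated losses exceed their capital default in turn. This is a generic textbook-style cascade under assumed numbers, not the thesis's actual model, which also accounts for panics and bank runs.

```python
# Hedged sketch: a simple interbank default cascade on an exposure network.
def cascade(exposures, capital, initial_default, loss_given_default=0.6):
    """exposures[i][j] = amount bank i has lent to bank j."""
    n = len(capital)
    defaulted = {initial_default}
    losses = [0.0] * n
    newly = {initial_default}
    while newly:
        nxt = set()
        for j in newly:                      # j has just defaulted
            for i in range(n):
                if i not in defaulted and exposures[i][j] > 0:
                    losses[i] += loss_given_default * exposures[i][j]
                    if losses[i] > capital[i]:
                        nxt.add(i)           # i's buffer is wiped out
        defaulted |= nxt
        newly = nxt
    return defaulted

# Three banks (made-up numbers): bank 2's failure wipes out bank 0's thin buffer.
exposures = [[0, 0, 10], [0, 0, 10], [5, 5, 0]]
capital = [4.0, 8.0, 6.0]
print(cascade(exposures, capital, initial_default=2))   # {0, 2}
```

Re-running such a cascade over different network architectures and capital/liquidity configurations is the kind of exercise that underlies the robust-yet-fragile result described above.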
|
1280 |
Essays on Complexity in the Financial System Geraci, Marco Valerio 15 September 2017 (has links)
The goal of this thesis is to study the two key aspects of complexity of the financial system: interconnectedness and nonlinear relationships. In Chapter 1, I contribute to the literature that focuses on modelling the nonlinear relationship between variables at the extremes of their distribution. In particular, I study the nonlinear relationship between stock prices and short selling. Whereas most of the academic literature has focused on measuring the relationship between short selling and asset returns on average, in Chapter 1, I focus on studying the relationship that arises in the extremes of the two variables. I show that the association between financial stock prices and short selling can become extremely strong under exceptional circumstances, while at the same time being weak in normal times. The tail relationship is stronger for small cap firms, a result that is intuitively in line with the empirical findings that stocks with lower liquidity are more price-sensitive to short selling. Finally, results show that the adverse tail correlation between increases in short selling and declines in stock prices was not always lower during the ban periods, but had declined markedly towards the end of the analysis window. Such results cast doubts about the effectiveness of bans as a way to prevent self-reinforcing downward price spirals during the crisis. In Chapter 2, I propose a measure of interconnectedness that takes into account the time-varying nature of connections between financial institutions. Here, the parameters underlying comovement are allowed to evolve continually over time through permanent shifts at every period. The result is an extremely flexible measure of interconnectedness, which uncovers new dynamics of the US financial system and can be used to monitor financial stability for regulatory purposes. Various studies have combined statistical measures of association (e.g. correlation, Granger causality, tail dependence) with network techniques, in order to infer financial interconnectedness (Billio et al. 2012; Barigozzi and Brownlees, 2016; Hautsch et al. 2015). However, these standard statistical measures presuppose that the inferred relationships are time-invariant over the sample used for the estimation. To retrieve a dynamic measure of interconnectedness, the usual approach has been to divide the original sample period into multiple subsamples and calculate these statistical measures over rolling windows of data. I argue that this is potentially unsuitable if the system studied is time-varying. By relying on short subsamples, rolling windows lower the power of inference and induce dimensionality problems. Moreover, the rolling window approach is known to be susceptible to outliers because, in small subsamples, these have a larger impact on estimates (Zivot and Wang, 2006). On the other hand, choosing longer windows will lead to estimates that are less reactive to change, biasing results towards time-invariant connections. Thus, the rolling window approach requires the researcher to choose the window size, which involves a trade-off between precision and flexibility (Clark and McCracken, 2009). The choice of window size is critical and can lead to different results regarding interconnectedness. The major novelty of the framework is that I recover a network of financial spillovers that is entirely dynamic. To do so, I make the modelling assumption that the connection between any two institutions evolves smoothly through time. 
I consider this assumption reasonable for three main reasons. First, since connections are the result of many financial contracts, it seems natural that they evolve smoothly rather than abruptly. Second, the assumption implies that the best forecast of a connection in the future is the state of that connection today. This is consistent with the notion of forward-looking prices. Third, the assumption allows for high flexibility and for the data to speak for itself. The empirical results show that financial interconnectedness peaked around two main events: the Long-Term Capital Management crisis of 1998 and the great financial crisis of 2008. During these two events, I found that large banks and broker/dealers were among the most interconnected sectors and that real estate companies were the most vulnerable to financial spillovers. At the individual financial institution level, I found that Bear Stearns was the most vulnerable financial institution; however, it was not a major propagator, and this might explain why its default did not trigger a systemic crisis. Finally, I ranked financial institutions according to their interconnectedness and I found that rankings based on the time-varying approach were more stable than rankings based on other market-based measures (e.g. marginal expected shortfall by Acharya et al. (2012) and Brownlees and Engle (2016)). This aspect is significant for policy makers because highly unstable rankings are unlikely to be useful to motivate policy action (Danielsson et al. 2015; Dungey et al. 2013).

In Chapter 3, rather than assuming interconnectedness as an exogenous process that has to be inferred, as is done in Chapter 2, I model interconnectedness as an endogenous function of market dynamics. Here, I take interconnectedness as the realized correlation of asset returns. I seek to understand how short selling can induce higher interconnectedness by increasing the negative price pressure on pairs of stocks. It is well known that realized correlation varies continually through time and becomes higher during market events, such as the liquidation of large funds. Most studies model correlation as an exogenous stochastic process, as is done, for example, in Chapter 2. However, recent studies have proposed to interpret correlation as an endogenous function of the supply and demand of assets (Brunnermeier and Pedersen, 2005; Brunnermeier and Oehmke, 2014; Cont and Wagalath, 2013; Yang and Satchell, 2007). Following these studies, I analyse the relationship between short selling and correlation between assets. First, thanks to new data on public short selling disclosures for the United Kingdom, I connect stocks based on the number of common short sellers actively shorting them. I then analyse the relationship between common short selling and excess correlation of those stocks. To this end, I measure excess correlation as the monthly realized correlation of four-factor Fama and French (1993) and Carhart (1997) daily returns. I show that common short selling can predict one-month-ahead excess correlation, controlling for similarities in size, book-to-market, momentum, and several other common characteristics. I confirm the predictive ability of common short selling out-of-sample, which could prove useful for risk and portfolio managers attempting to forecast the future correlation of assets. Moreover, I showed that this predictive ability can be used to establish a trading strategy that yields positive cumulative returns over 12 months.
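A minimal sketch of how a factor-adjusted "excess correlation" measure of this kind can be built is given below, under the assumption that it is computed from the residuals of a four-factor regression: each stock's daily returns are regressed on the four factors and the monthly realized correlation of the residuals is taken for the pair. Factor and return data here are simulated; the chapter's exact construction and data sources may differ.

```python
# Hedged sketch: monthly "excess correlation" of a stock pair from four-factor
# residual returns, using simulated factor and return data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
days = pd.bdate_range("2012-01-02", periods=250)
factors = pd.DataFrame(rng.normal(0, 0.01, (len(days), 4)),
                       index=days, columns=["MKT", "SMB", "HML", "MOM"])

def residual_returns(returns, factors):
    X = np.column_stack([np.ones(len(factors)), factors.values])
    beta, *_ = np.linalg.lstsq(X, returns.values, rcond=None)
    return pd.Series(returns.values - X @ beta, index=returns.index)

# Two simulated stocks that load on the factors and share a residual shock.
common_shock = rng.normal(0, 0.005, len(days))
stock_a = factors.dot([1.0, 0.3, 0.2, 0.1]) + common_shock + rng.normal(0, 0.01, len(days))
stock_b = factors.dot([0.9, -0.1, 0.4, 0.0]) + common_shock + rng.normal(0, 0.01, len(days))

residuals = pd.DataFrame({"a": residual_returns(stock_a, factors),
                          "b": residual_returns(stock_b, factors)})
excess_corr = residuals.groupby(residuals.index.to_period("M")).apply(lambda m: m["a"].corr(m["b"]))
print(excess_corr)   # one excess-correlation value per month for the pair
```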
In the second part of the chapter I concentrate on possible mechanisms that could give rise to this effect. I focus on three non-exclusive mechanisms. First, short selling can induce higher correlation in asset prices through the price-impact mechanism (Brunnermeier and Oehmke, 2014; Cont and Wagalath, 2013). According to this mechanism, short sellers can contribute to price declines by creating sell-order imbalances, i.e. by increasing the excess supply of an asset. Thus, short selling across several stocks should increase the realized correlation of those stocks. Second, common short selling can be associated with higher correlation if short sellers are acting as voluntary liquidity providers. According to this mechanism, short sellers might act as liquidity providers in times of high buy-order imbalances (Diether et al. 2009b). In these cases, the low returns observed after short sales might be compensation to short sellers for providing liquidity. In a multi-asset setting, this mechanism would result in short selling being associated with higher correlation. Both of the above-mentioned mechanisms deliver a testable hypothesis that I verify. In particular, both mechanisms posit that the association between short selling and correlation should be stronger for stocks which are low on liquidity. For the first mechanism, the price impact effect should be stronger for illiquid stocks and stocks with low market depth. For the liquidity provision mechanism, the compensation for providing liquidity should be higher for illiquid stocks. The empirical results cannot confirm that the uncovered association between short selling and correlation is stronger for illiquid stocks, thus not supporting the price-impact and liquidity provision hypotheses. I thus examine a third possible mechanism that could explain the uncovered association between short selling and correlation, i.e. the informative trading mechanism. Short sellers have been found to be sophisticated market agents who can predict future returns (Dechow et al. 2001). If this is indeed the case, then short selling should be associated with higher future correlation. I found that informed common short selling, i.e. common short selling that is linked to informative trading, was strongly associated with future excess correlation. This evidence supports the informative trading mechanism as an explanation for the association between short selling and correlation. In order to further verify this mechanism, I checked if informed short selling takes place in the data, whilst controlling for several of the determinants of short selling, including short selling costs. The results show evidence of both informed and momentum-based non-informed short selling taking place. Overall, the results have several policy implications for regulators. The results suggest that the relationship between short selling and future excess correlation is driven by informative short selling, thus confirming the sophistication of short sellers and their proven importance for market efficiency and price informativeness (Boehmer and Wu, 2013). On the other hand, I could not rule out that non-informative, momentum-based short selling also takes place in the sample. The good news is that I did not find evidence of a potentially detrimental price-impact effect of common short selling for illiquid stocks, which is the sort of predatory effect that regulators often fear.
|