41. The scaling of power in West Cumbria and the role of the nuclear industry. Haraldsen, Stephen, 2018.
This thesis explores the relationship between a global industrial actor and its regional host, and what that relationship can tell us about neoliberalism and globalisation. The specific focus for the data collection and analysis is the relationship between the nuclear industry, in particular the Sellafield site, and the West Cumbrian region where it is located. West Cumbria, an isolated region in the far north-west corner of England, was the site of the UK's first nuclear reactors. Over seven decades, as other industries have declined, West Cumbria has become home to, and economically dependent on, one of the largest and most complex nuclear sites in the world. The core concepts employed to analyse this relationship are power and scale. In particular, this thesis analyses how power is rescaled in the context of state restructuring and the wider changes associated with globalisation. To analyse power it was necessary to develop an applied understanding of the concept, informed by a diverse literature and taking an implicitly geographical and relational view of the exercise of power in its diverse forms, bases and uses. Firstly, policy documentation is analysed to understand the impact of changes to the governance and management of the UK's oldest and most hazardous nuclear sites. Secondly, survey and focus group data are analysed, focussing on the position of the nuclear industry in the local economy and on specific changes made as a result of the part-privatisation of the industry in 2008. Thirdly, economic development plans that aim to grow West Cumbria's economy are analysed; these demonstrate an increasing priority being given to new nuclear developments. Finally, these three areas are brought together to explore how power is rescaled, its implications, and the wider relevance of the thesis to other locations and policy areas.
42. Topic modeling using Latent Dirichlet Allocation on disaster tweets. Patel, Virashree Hrushikesh, 1900.
Master of Science, Department of Computer Science. Cornelia Caragea, Doina Caragea. Social media has changed the way people communicate information. Social media platforms like Twitter are increasingly being used by both the public and authorities in the wake of natural disasters. The year 2017 was a historic year for the USA in terms of natural calamities and associated costs. According to NOAA (National Oceanic and Atmospheric Administration), during 2017 the USA experienced 16 separate billion-dollar disaster events, including three tropical cyclones, eight severe storms, two inland floods, a crop freeze, drought, and wildfire. During natural disasters, due to the collapse of infrastructure and telecommunication, it is often hard to reach people in need or to determine which areas are affected. In such situations, Twitter can be a lifesaving tool for local government and search and rescue agencies. Using the Twitter streaming API service, disaster-related tweets can be collected and analyzed in real time. Although tweets can be sparse, noisy and ambiguous, some contain information useful for situational awareness. For example, some tweets express emotions such as grief or anguish, or call for help; others provide information specific to a region, place or person; while others simply help spread information from news or environmental agencies. To extract information useful to disaster response teams, disaster tweets need to be cleaned and classified into various categories. Topic modeling can help identify topics in a collection of such disaster tweets; subsequently, a topic (or a set of topics) can be associated with each tweet. Thus, in this report, we use Latent Dirichlet Allocation (LDA) to perform topic modeling on a dataset of disaster tweets.
43. Topic-focused and summarized web information retrieval. Yoo, Seung Yeol (Computer Science & Engineering, Faculty of Engineering, UNSW), 2007.
Since the Web is growing rapidly, with an ever-increasing number of heterogeneous Web pages, Web users often suffer from two problems: P1) irrelevant information and P2) information overload. Irrelevant information refers to the weak relevance between the retrieved information and a user's information need. Information overload means that the retrieved information may contain 1) redundant information (e.g., information common to two retrieved Web pages) or 2) more information than a user can easily understand. We consider four major causes of these two problems:
- Firstly, ambiguous query terms.
- Secondly, ambiguous terms in a Web page.
- Thirdly, a query and a Web page cannot be semantically matched, because of the first and second causes.
- Fourthly, the whole content of a Web page is too coarse a context-boundary for measuring the similarity between the Web page and a query.
To address these two problems, we consider the meanings of words in a Web page and a query to be primitive hints for understanding the related semantics of the Web page. Thus, in this dissertation, we developed three cooperative technologies: Word Sense Based Web Information Retrieval (WSBWIR), the Subjective Segment Importance Model (SSIM) and Topic Focused Web Page Summarization (TFWPS).
- WSBWIR allows a user to 1) describe their information needs at sense level and 2) conceptually explore information existing within Web pages.
- SSIM discovers a semantic structure of a Web page that respects not only the Web page author's logical presentation structure but also a user's specific topic interests in the Web page at query time.
- TFWPS dynamically generates extractive summaries respecting a user's topic interests.
WSBWIR, SSIM and TFWPS are implemented and evaluated through several case studies and classification and clustering tasks. Our experiments demonstrated that 1) exploring Web pages using word senses is comparably effective, and 2) the segments partitioned by SSIM and the summaries generated by TFWPS provide more topically coherent features for classification and clustering purposes.
44. Functional similarities between bimanual coordination and topic/comment structure. Krifka, Manfred, 2007.
Human manual action exhibits a differential use of a non-dominant (typically, left) and a dominant (typically, right) hand. Human communication exhibits a pervasive structuring of utterances into topic and comment. I will point out striking similarities between the coordination of hands in bimanual actions and the structuring of utterances into topics and comments. I will also show how principles of bimanual coordination influence the expression of topic/comment structure in sign languages and in gestures accompanying spoken language, and suggest that bimanual coordination might have been a preadaptation for the development of information structure in human communication.
45. The use of weighted logrank statistics in group sequential testing and non-proportional hazards. Gillen, Daniel L., 2003.
Thesis (Ph.D.), University of Washington, 2003. Includes vita and bibliographical references (pp. 158-160).
46. 'They opened up a whole new world': feminine modernity and the feminine imagination in women's magazines, 1919-1939. Hackney, Fiona Anne Seaton, 2010.
“They opened up a whole new world”, or something like it, was a phrase I heard repeatedly when I spoke to women about their memories of magazine reading in the interwar years. How the magazine operated as an imaginative window, a frame, space or mirror for encountering, shaping, negotiating, rethinking, rejecting, mocking and enjoying the self and others became the central question driving this thesis. The expansion of domestic ‘service’ magazines in the 1920s responded to and developed a new female readership amongst middle-class and working-class women, preparing the way for high-selling mass-market publications. The multiple models of modern womanhood envisaged in magazines, meanwhile, from the shocking ‘lipstick girl’ of the mid-1920s to the 1930s ‘housewife heroine’, show that what being a woman and modern in the period meant was far from settled, changed over time and differed according to a magazine’s ethos and target readership. In a period that witnessed the introduction of the franchise for women, divorce legislation, birth control, the companionate marriage, cheap mortgages, a marriage bar in the workplace, growth in the number of single women and panic over population decline, amongst other things, magazines helped to resolve tensions and to set new patterns of behaviour and expectation. This thesis, which examines the magazine as a material artefact produced in a specific historical context, argues that its complex ‘environment’ of coloured pictures, inserts, instructional photographs, escapist fiction, chatty editorial and advertising opened women up to conscious and unconscious desires to be a sports woman, a worker, a mother, a lover, or to look like their favourite film star; a ‘window’, that is, through which women without the benefit of £500 a year and a ‘room of their own’ could gaze and imagine themselves, their lives and those of their families, differently.
47. A grammatical approach to topic and focus: a syntactic analysis with preliminary evidence from language acquisition. Lyu, Hee Young, 25 October 2011.
The goal of this dissertation is to argue, within the minimalist framework, that the topichood of sentence topics and contrastive focus results from derivational and structural differences in the left periphery, and to provide acquisition data from child language to support this claim, using data from Korean, a free word-order and pro-drop language in which topics and contrastive foci are realized morphologically. In Korean, topic phrases merge in the left periphery, while contrastive focus phrases undergo scrambling, one of the shared properties of free word-order languages. Both in fixed word-order languages such as Italian and Hungarian and in a free word-order language like Korean, topics merge and contrastive foci move to the left. Topics precede contrastive foci: topics merge in TopP, a functional projection higher than FocP, to which focus phrases move.
In the process of language acquisition, the derivational and structural differences between topic phrases and contrastive focus phrases may have influences on the developmental order of grammar acquisition. In acquisition data from two-year-old Korean children, topics emerge earlier than contrastive foci, indicating that topic and contrastive focus are also acquisitionally different.
This study is the first attempt to examine, in acquisition data from a free word-order and pro-drop language, the structural differences between morphologically derived topic phrases and contrastive focus phrases and their influence on language acquisition. It shows the structural consistency of topic and contrastive focus between a free word-order language and fixed word-order languages. The syntactic and acquisitional distinction between topic merge and contrastive focus movement is compatible with semantic and pragmatic approaches to topic and focus. This study provides evidence of the syntactic differences between topic and contrastive focus without dependence on phonetic features; it therefore serves as a base for drawing a map of the left periphery of human languages.
48. Bayesian nonparametric models for name disambiguation and supervised learning. Dai, Andrew Mingbo, 2013.
This thesis presents new Bayesian nonparametric models, and approaches for their development, for the problems of name disambiguation and supervised learning. Bayesian nonparametric methods form an increasingly popular approach for solving problems that demand a high degree of model flexibility. However, the field is relatively new, and many areas need further investigation. Previous work on Bayesian nonparametrics has not fully explored the problems of entity disambiguation and supervised learning, nor the advantages of nested hierarchical models. Entity disambiguation is a widely encountered problem in which different references need to be linked to a real underlying entity; it is often unsupervised, as there is no previously known information about the entities. Furthermore, effective use of Bayesian nonparametrics offers a new approach to tackling supervised problems, which are frequently encountered.

The main original contribution of this thesis is a set of new structured Dirichlet process mixture models for name disambiguation and supervised learning that can also have a wide range of other applications. These models use techniques from Bayesian statistics, including hierarchical and nested Dirichlet processes, generalised linear models, Markov chain Monte Carlo methods and optimisation techniques such as BFGS. The new models have tangible advantages over existing methods in the field, as shown by experiments on real-world datasets including citation databases and classification and regression datasets.

I develop the unsupervised author-topic space model for author disambiguation, which, unlike traditional author disambiguation approaches, uses free text to perform disambiguation. The model incorporates a name variant model based on a nonparametric Dirichlet language model, handles novel unseen name variants, and can model the unknown authors of the text of the documents. Through this, the model can disambiguate authors with no prior knowledge of the number of true authors in the dataset, even when the authors have identical names.

I use a model for nesting Dirichlet processes, named the hybrid NDP-HDP, which allows Dirichlet processes to be clustered together and adds an additional level of structure to the hierarchical Dirichlet process. I also develop a new hierarchical extension to the hybrid NDP-HDP, and develop this model into the grouped author-topic model for the entity disambiguation task. The grouped author-topic model uses clusters to model the co-occurrence of entities in documents, which can be interpreted as research groups. Since this model does not require entities to be linked to specific words in a document, it overcomes the problems of some existing author-topic models. The model also incorporates a new method for modelling name variants, so that domain-specific name variant models can be used.

Lastly, I develop extensions to supervised latent Dirichlet allocation, a type of supervised topic model. The keyword-supervised LDA model predicts document responses more accurately by modelling the effect of individual words and their contexts directly. The supervised HDP model gains model flexibility by using Bayesian nonparametrics for supervised learning. These models are evaluated on a number of classification and regression problems, and the results show that they outperform existing supervised topic modelling approaches. The models can also be extended to incorporate additional information, such as entities and document titles, to improve prediction.
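The Dirichlet process underlying the hierarchical and nested models described above is often simulated via its stick-breaking construction. The following is a generic sketch of a truncated stick-breaking draw of DP mixture weights, not code from the thesis; the concentration parameter and truncation level are illustrative choices.

```python
# Truncated stick-breaking construction for Dirichlet process mixture weights.
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking_weights(alpha: float, truncation: int) -> np.ndarray:
    """Draw mixture weights from DP(alpha) via truncated stick-breaking.

    Each beta_k ~ Beta(1, alpha) breaks off a fraction of the remaining
    stick; weight_k = beta_k * prod_{i<k} (1 - beta_i).
    """
    betas = rng.beta(1.0, alpha, size=truncation)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    return betas * remaining

weights = stick_breaking_weights(alpha=2.0, truncation=50)
# Weights are non-negative and sum to just under 1; the leftover mass
# belongs to the truncated tail of the infinite stick.
print(weights[:5], weights.sum())
```

Small alpha concentrates mass on a few components, while large alpha spreads it over many, which is how a DP mixture lets the effective number of clusters (authors, name variants, research groups) grow with the data.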
49. The acquisition of obligatory English subjects by speakers of discourse-oriented Chinese. Kong, Stano Pei Yin, 2000.
No description available.