631
Card[ing] Capital: A Political Sociological Analysis of the Police Practice of ‘Carding’ in Toronto
Levins, Kyle, 18 October 2019 (has links)
The relationship between the public and the police has been strained for decades, most recently as a result of documented high arrest rates in marginalized communities. Stop-and-frisk practices have been active in the United States since the 1950s and have been studied academically there since the 1990s; however, research drawing on Canadian data is limited.
This project uses Bourdieusian concepts (field, habitus, capital, and doxa), together with research on resistance to change and police culture, to address gaps in the literature surrounding the practice of ‘carding’ in Canada by identifying the strategies and forms of capital used by parties to defend and contest the police practice in the city of Toronto.
Using a form of document analysis, this project created inductive categories from reports and recommendations submitted by the Toronto Police, several activist groups, and the province of Ontario between 2012 and 2015.
Findings from this paper were similar to previous literature; however, an emotional argument surrounding the use of risk emerged among those justifying the police practice of ‘carding’. This argument relied on a platform of fear and risk discourse, contending that placing limited faith in the police not only goes against previously accepted practices but also places communities in greater potential danger.
Furthermore, our findings showed that the narratives presented by those contesting the practice of ‘carding’ featured legal and factual arguments not seen in previous literature. These arguments focused on constitutionality and statistics rather than on the emotional appeals reported in earlier studies.
This project provides a snapshot of the case in Toronto to help understand the issue in a Canadian context. Many of the themes that emerged were similar to previous literature; however, a new emotional argument surrounding risk discourse appeared, and those contesting ‘carding’ accessed the legal ‘field’ to express their concerns. Directions for future research are presented at the end of this study.
632
LAMS: a framework for XML web service management
Mifsud, Trent, 1976-, January 2004 (has links)
Abstract not available
633
An Investigation into User Text Query and Text Descriptor Construction
Pfitzner, Darius Mark, pfit0022@flinders.edu.au, January 2009 (has links)
Cognitive limitations such as those described in Miller's (1956) work on channel capacity and Cowan's (2001) work on short-term memory are factors in determining user cognitive load and, in turn, task performance. Inappropriate user cognitive load can reduce user efficiency in goal realization. For instance, if the user's attentional capacity is not appropriately applied to the task, distractor processing tends to draw capacity away from it. Conversely, if a task drives users beyond their short-term memory envelope, information may be lost in its translation to long-term memory and in its subsequent retrieval for task-based processing.
To manage user cognitive capacity in the task of text search, the interface should allow users to draw on their powerful and innate pattern recognition abilities. This harmonizes with Johnson-Laird's (1983) proposal that propositional representation is tied to mental models. Combined with the theory that knowledge is highly organized when stored in memory, an appropriate approach to cognitive load optimization is to graphically present single documents, or clusters thereof, with an appropriate number and type of descriptors. These descriptors are commonly words and/or phrases.
Information theory research suggests that words have different levels of importance in document topic differentiation. Although keyword identification is well researched, there is a lack of basic research into human preferences regarding query formation and the heuristics users employ in search. This lack extends to features as elementary as the number of words preferred to describe and/or search for a document. Understanding these preferences will help balance the processing overheads of tasks like clustering against user cognitive load, yielding a more efficient document retrieval process. Common approaches such as search engine log analysis cannot provide this degree of understanding and do not allow clear identification of the intended set of target documents.
This research endeavours to improve the manner in which text search returns are presented so that user performance in real-world situations is enhanced. To this end we explore both how to present search information and results graphically so as to optimize cognitive and perceptual load, and how people use textual information in describing documents or constructing queries.
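To make the idea of descriptor importance concrete, here is a minimal sketch (not from the thesis) that ranks candidate descriptor words for a document by TF-IDF weight; the toy corpus, the tokenizer, and the cut-off of five words are assumptions made only for this example.

```python
# Minimal sketch, not from the thesis: rank candidate descriptor words
# for one document by TF-IDF so a small, informative set can be displayed.
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def tfidf_descriptors(documents, doc_index, top_k=5):
    """Return the top_k highest-weighted words of documents[doc_index]."""
    tokenized = [tokenize(d) for d in documents]
    n_docs = len(tokenized)
    # Document frequency: number of documents containing each word.
    df = Counter(w for doc in tokenized for w in set(doc))
    tf = Counter(tokenized[doc_index])
    scores = {
        w: (count / len(tokenized[doc_index])) * math.log(n_docs / df[w])
        for w, count in tf.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

docs = [
    "stop and frisk policing practices in toronto",
    "xml database query evaluation and twig joins",
    "text clustering and descriptor selection for search interfaces",
]
print(tfidf_descriptors(docs, 2, top_k=3))
```

In a search interface of the kind discussed above, the top-ranked words could serve as the graphical descriptors for a document or cluster, with their number kept within the user's short-term memory envelope.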
634
in|form: the performative object: the exploration of body, motion and form
Newrick, Tiffany Rewa, January 2008 (has links)
Through the sculptural object, this thesis, in|form: The performative object, explores the relationships between body and object, viewer and artist, performance and the performative. By exploring the performativity of an object (and questioning how an object performs in relation to the body), the documented performances activate an inter-relational act between artist and object (I perform the object; the object performs me, simultaneously). The work that unfolds from this investigation considers the placement of the viewer’s body in relation to the artist’s. A dialogue is formed between the three bodies: object, artist and viewer, creating a sense of embodiment within the work through this relationship. in|form explores this embodiment through the role of video documentation. The performances are constructed to be viewed solely through the documentation, which creates a discussion between the ‘live’ moment and the documented event, and how the viewer then relates to this. The performances take place as solo acts, but are constructed with the viewer in mind. As the viewer watches the documented performance of the action between artist and object in space, the relational nature of the work creates a second performance which embodies the viewer. This sole action, recorded and then viewed, considers the relational value of the body, specifically engaging with the abstraction of bodily formlessness within the object to reveal a bodily nature. Using the object to trace the movement of the body creates a language that communicates to, and about, both viewer and artist: through the awareness of passing time, through the large-scale projection of the documentation, through the bodily nature of the object, and through the performativity of the object’s responsive nature to the artist’s body as the pair navigate through space. in|form explores how the absence of the body (in a literal sense) considers the body as an object bound by time, at once physical yet transient. By tracing the motion of the body through the object, the viewer experiences the body through sensibility. Ultimately, the function of the body negotiating as a time-bound object is imitated through the performativity of the object with the artist, and the elusiveness of this action is emphasized by its documentation.
635
元代硬譯公牘文體 -以《元典章》為例 / Stiff Translatorese of the Official Documents of the Yuan Dynasty
胡斐穎 (Hu, Fei Ying), Unknown Date (has links)
As is well known, the Yuan dynasty was a regime established with the Mongols at its centre, in alliance with other peoples. Because the empire was vast and its peoples numerous, government offices at every level employed translation personnel such as yishi (譯史), tongshi (通事), and qielimachi (怯里馬赤) to carry out translation between Mongolian and Chinese or other languages and scripts.
However, when translating Mongolian official documents, some of these translation clerks followed the grammatical forms of Mongolian too closely, so that the resulting Chinese became quite stiff: a kind of ‘Mongolian-style Chinese’ officialese, that is, Chinese translations bearing the grammatical features of Mongolian, which read awkwardly and can even be convoluted and difficult to understand. The style of such translations is called the ‘stiff translation style’; and because many documents of this kind appear among the official papers of the Yuan dynasty, the style of such texts is referred to as the ‘stiff-translation style of Yuan official documents’.
Some scholars, however, regard this as the so-called vernacular Chinese of the Yuan dynasty. While we do not deny that these translated documents were mostly written in Yuan-era vernacular Chinese, their pronounced Mongolian grammatical features make them resemble a ‘mixed language’, a new ‘language’ distinct from both Chinese and Mongolian; whether it can still be called ‘Yuan vernacular’ or ‘Yuan vernacular Chinese’ is therefore open to question.
The author discusses the stiff-translation style of Yuan official documents from three angles: the background of its emergence, its causes, and its grammatical features, in the hope of clarifying some concepts and understanding its content.
636
Managing dynamic XML data
Fisher, Damien Kaine, School of Computer Science & Engineering, UNSW, January 2007 (has links)
Recent years have seen a surge in the popularity of XML, a markup language for representing semi-structured data. Some of this popularity can be attributed to the success that the semi-structured data model has had in environments where the relational data model has been insufficiently expressive. Concomitant with XML's growing popularity, the world of database research has seen a rebirth of interest in tree-structured, hierarchical database systems. This thesis analyzes several problems that arise when constructing XML data management systems, particularly where such systems must handle dynamic content.
In the first chapter, we consider the problem of incremental schema validation, which arises in almost any XML database system. We build upon previous work by finding several classes of schemas for which very efficient algorithms exist. We also develop an algorithm that works for any schema, and prove that it is optimal.
In the second chapter, we turn to the problem of improving query evaluation times on extremely large database systems. In particular, we boost the performance of the structural and twig joins, fundamental XML query evaluation techniques, through the use of an adaptive index. This index tunes itself to the query workload, providing a 20-80% boost in speed for these join operators. The adaptive nature of the index also allows updates to the database to be easily tracked.
Accurate selectivity estimation is a critical problem in any database system because of its importance in choosing optimal query plans, yet there has been very little work on selectivity estimation in the presence of updates. We ask whether it is possible to design a structure for selectivity estimation in XML databases that is updateable and can return results with theoretically sound error guarantees. Through a combination of lower and upper bounds, we give strong evidence suggesting that this is unlikely in practice. Motivated by these results, we then develop a heuristic selectivity estimation structure for XML databases. This structure is the first such synopsis that can handle all aspects of core XPath, and is also updateable. Our experimental results demonstrate the efficacy of the approach.
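For readers unfamiliar with the structural join mentioned above, the following is a rough sketch (not the thesis's implementation) of a stack-based structural join over the common (start, end, level) region encoding of XML nodes; the element names and encoding values are made up for this example.

```python
# Minimal sketch, assuming a (start, end, level) region encoding of XML nodes:
# node A is an ancestor of node D iff A.start < D.start and D.end < A.end.
# (The level field is only needed for parent/child steps, not used here.)
from collections import namedtuple

Node = namedtuple("Node", "label start end level")

def structural_join(ancestors, descendants):
    """Merge two lists sorted by start position, returning (ancestor,
    descendant) pairs where the ancestor contains the descendant."""
    results = []
    stack = []  # chain of ancestors whose intervals are still open
    a_i = 0
    for d in descendants:
        # Push every ancestor that starts before the current descendant.
        while a_i < len(ancestors) and ancestors[a_i].start < d.start:
            while stack and stack[-1].end < ancestors[a_i].start:
                stack.pop()
            stack.append(ancestors[a_i])
            a_i += 1
        # Drop ancestors whose interval closed before this descendant starts.
        while stack and stack[-1].end < d.start:
            stack.pop()
        results.extend((a, d) for a in stack if d.end < a.end)
    return results

# Tiny hypothetical document: <book><chapter><title/></chapter><title/></book>
book = Node("book", 1, 10, 0)
chapter = Node("chapter", 2, 5, 1)
title1 = Node("title", 3, 4, 2)
title2 = Node("title", 6, 7, 1)
print(structural_join([book, chapter], [title1, title2]))
```

According to the abstract, the thesis accelerates joins of this kind with an adaptive index tuned to the query workload; the sketch above deliberately omits any indexing.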
637
Improving scalability and accuracy of text mining in grid environment
Zhai, Yuzheng, January 2009 (has links)
Advances in technologies such as massive storage devices and high-speed internet have led to an enormous increase in the volume of documents available in electronic form. These documents represent information in a complex and rich manner that cannot be analysed using conventional statistical data mining methods. Consequently, text mining has developed as a growing new technology for discovering knowledge from textual data and managing textual information. Processing and analysing textual information can yield valuable and important insights, yet these tasks also require an enormous amount of computational resources due to the sheer size of the data available. Therefore, it is important to enhance the existing methodologies to achieve better scalability, efficiency and accuracy. / The emerging Grid technology shows promising results in solving the problem of scalability by splitting the work of text clustering algorithms into a number of jobs, each to be executed separately and simultaneously on different computing resources. This allows for a substantial decrease in processing time while maintaining a similar level of quality. / To improve the quality of the text clustering results, a new document encoding method is introduced that takes into consideration the semantic similarities of words. In this way, documents that are similar in content are more likely to be grouped together. / One of the ultimate goals of text mining is to help us gain insights into a problem and to assist in the decision-making process together with other sources of information. Hence we tested the effectiveness of incorporating text mining methods in the context of stock market prediction. This is achieved by integrating the outcomes obtained from text mining with those from data mining, which results in a more accurate forecast than using either method alone.
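As an illustration of how clustering work can be split into independent jobs, here is a minimal sketch (not the thesis's grid implementation) that distributes the assignment step of k-means across worker processes standing in for grid nodes; the vector dimensions, cluster count, and job count are arbitrary assumptions.

```python
# Minimal sketch, not the thesis code: the assignment step of k-means is
# split into independent chunks, each chunk standing in for a grid job.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def assign_chunk(args):
    """One 'job': assign each vector in the chunk to its nearest centroid."""
    chunk, centroids = args
    dists = np.linalg.norm(chunk[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

def parallel_assign(vectors, centroids, n_jobs=4):
    chunks = np.array_split(vectors, n_jobs)
    with ProcessPoolExecutor(max_workers=n_jobs) as pool:
        parts = pool.map(assign_chunk, [(c, centroids) for c in chunks])
    return np.concatenate(list(parts))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    docs = rng.random((1000, 50))     # e.g. 1000 encoded document vectors
    centroids = rng.random((5, 50))   # 5 cluster centroids
    print(parallel_assign(docs, centroids)[:10])
```

Because each chunk is processed independently, the same decomposition could be submitted to separate grid resources rather than to local processes, which is the scalability idea the abstract describes.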
638
Rôle des organisateurs para-linguistiques dans la consultation des documents électroniques / The Role of Paralinguistic Organizers in the Consultation of Electronic Documents
Caro Dambreville, Stéphane, 15 December 1995 (has links) (PDF)
This thesis examines the role of paralinguistic organizers in the design of technical texts for on-screen reading. The organizers studied in particular are parentheses, footnotes, explicit type labels (such as "Example:" preceding a passage), and 'escamots' (pop-up windows). Methods from experimental psychology were used to analyse, on the one hand, writers' productions and, on the other, the influence of paralinguistic organizers on reading activity (memorization and information search). The initial idea is that texts can be divided into units corresponding to the writer's communicative intentions (for example, stressing or downplaying the relative importance of a unit). The text thus becomes a set of textual units (TUs) reflecting different intentions of the writer. These intentions can be encoded by different means of material formatting (layout and typography). A typology of textual units in technical texts, organized by the writer's intentions, is proposed. It is shown experimentally that this typology has psychological reality and that a material formatting of the text based on it influences reading and consultation.
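As a purely illustrative sketch (not from the thesis), the snippet below models textual units tagged with a writer's intention and maps each intention to a presentation rule; the intention names and formatting choices are assumptions made for the example.

```python
# Illustrative sketch only: textual units (TUs) carry a writer's intention,
# and a presentation rule renders each intention differently on screen.
from dataclasses import dataclass
from enum import Enum

class Intention(Enum):
    EMPHASIZE = "emphasize"   # stress the relative importance of a unit
    MINIMIZE = "minimize"     # downplay it, e.g. a parenthetical aside
    EXEMPLIFY = "exemplify"   # explicit type label such as "Example:"
    DEFER = "defer"           # secondary detail, e.g. footnote or pop-up

@dataclass
class TextUnit:
    text: str
    intention: Intention

PRESENTATION = {
    Intention.EMPHASIZE: lambda t: t.upper(),
    Intention.MINIMIZE: lambda t: f"({t})",
    Intention.EXEMPLIFY: lambda t: f"Example: {t}",
    Intention.DEFER: lambda t: f"[note: {t}]",
}

def render(units):
    return " ".join(PRESENTATION[u.intention](u.text) for u in units)

doc = [
    TextUnit("Save your work before updating.", Intention.EMPHASIZE),
    TextUnit("the update takes about a minute", Intention.MINIMIZE),
    TextUnit("run the installer from the Tools menu.", Intention.EXEMPLIFY),
]
print(render(doc))
```

The point of the sketch is the separation the thesis argues for: the intention is attached to the unit, while the visible formatting is decided by a rule that can be varied and tested experimentally.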
639
Fédération et amélioration des activités documentaires par la pratique d'annotation collective / Federating and Improving Documentary Activities through the Practice of Collective Annotation
Cabanac, Guillaume, 05 December 2008 (has links) (PDF)
The documentary activities commonly carried out on paper documents are now transposed to their electronic counterparts. A host of systems thus supports document-related activities: they make it possible, in particular, to search for information used to write a document, which can then be distributed, exploited, and organized by its readers in their document space. Our study of existing systems revealed two main limitations. First, a system generally supports only one, or at most two, activities. This compartmentalization of activities is detrimental both to users (who must master and juggle numerous tools) and to the systems (which hold only a fragmentary representation of users' needs). Second, systems do not exploit the results of the documentary activities of organizational members.
Our contribution has two parts. First, we propose a model that federates documentary activities around the practice of collective annotation. Collective processes are associated with it so that each documentary activity enriches the others, thereby assisting each individual by drawing on the group, and vice versa. The goal of this original approach is twofold: to simplify the access to and appropriation of documents while anticipating the user's needs in order to offer non-intrusive assistance. Second, we propose to exploit the document spaces of organizational members. Although they contain information of high value to the organization, collected at the cost of considerable effort, these spaces paradoxically remain dormant. To take advantage of them, we propose a multi-faceted interface for accessing an organization's documentary capital, allowing the organization's documents and individuals to be explored along different axes and levels of granularity. Our proposals were validated through several experiments and through the development of the TafAnnote prototype, which demonstrates the feasibility of our approach federating documentary activities around collective annotation.
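To illustrate the general idea of collective annotation feeding back into other activities, here is a minimal sketch (not the TafAnnote prototype) of a shared annotation store whose aggregate signals could assist another user's search; the class names and the "distinct annotators" heuristic are assumptions.

```python
# Illustrative sketch only: shared annotations on document passages,
# aggregated so one person's activity can inform another's document search.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Annotation:
    doc_id: str
    passage: str   # annotated span of text
    author: str
    note: str

class AnnotationStore:
    def __init__(self):
        self.by_doc = defaultdict(list)

    def add(self, ann):
        self.by_doc[ann.doc_id].append(ann)

    def hot_documents(self, top_k=3):
        """Documents that attracted the most distinct annotators."""
        counts = {d: len({a.author for a in anns})
                  for d, anns in self.by_doc.items()}
        return sorted(counts, key=counts.get, reverse=True)[:top_k]

store = AnnotationStore()
store.add(Annotation("report-2008", "section 2.1", "alice", "key result"))
store.add(Annotation("report-2008", "section 3", "bob", "check the figures"))
store.add(Annotation("memo-14", "intro", "alice", "outdated?"))
print(store.hot_documents())
```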
640
Une approche de l'édition structurée des documents / An Approach to Structured Document Editing
Quint, Vincent, 04 May 1987 (has links) (PDF)
Editing a document can be seen as the manipulation of an abstract structure representing the logical organization of the document's components. Starting from this principle, a meta-model is proposed that allows the description of the logical structures of all kinds of documents and of various types of objects frequently found in documents: mathematical formulas, tables, diagrams, and so on. Presentation rules are associated with the logical structures, determining the graphical appearance of their components. The value of this approach is demonstrated by presenting two interactive systems built on this model: the mathematical formula editor Edimath and the document editor Grif. The presentation of these systems draws on a state of the art of computerized typography.
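To give a flavour of the separation between logical structure and presentation described above, here is a minimal sketch (not the Edimath or Grif code) in which a document is a tree of typed elements and presentation rules are attached to element types; the element kinds and rendering rules are assumptions for the example.

```python
# Minimal sketch, not from the thesis: a document as a logical tree, with
# presentation rules attached to element types rather than to the content.
from dataclasses import dataclass, field

@dataclass
class Element:
    kind: str                 # e.g. "section", "title", "paragraph"
    text: str = ""
    children: list = field(default_factory=list)

# Hypothetical presentation rules: how each logical kind is rendered.
RULES = {
    "title": lambda text, body: f"== {text} ==\n",
    "paragraph": lambda text, body: f"{text}\n\n",
    "section": lambda text, body: body,
    "document": lambda text, body: body,
}

def render(el):
    body = "".join(render(c) for c in el.children)
    return RULES[el.kind](el.text, body)

doc = Element("document", children=[
    Element("section", children=[
        Element("title", "Structured editing"),
        Element("paragraph", "The logical structure is edited; presentation follows."),
    ]),
])
print(render(doc))
```

Changing a rule in RULES changes the graphical appearance of every element of that kind without touching the logical tree, which is the editing principle the abstract describes.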