21

Researching Plagiarism and Technology in Second Language Writing: “Becomings”

Vasilopoulos, Eugenia 13 May 2021 (has links)
This dissertation is an experimentation in plugging in the work of Deleuze (1990, 1994, 1995) and Deleuze and Guattari (1987, 1983, 1994) to create new concepts and methods in educational research. In doing so, I experiment in 'the real' through the process of learning, by designing, conducting, and reporting a qualitative empirical study on how second language (L2) writers in an English for Academic Purposes (EAP) program engage with technology in their academic writing, and how plagiarism may, or may not, relate to this process. As such, the research objectives of this study can be understood as: 1) to think differently about the interconnections between plagiarism and technology in L2 writing; and 2) to see what happens to the research when we do so. At the heart of this study, and forming the onto-epistemological lens for inquiry, is a philosophy of immanence, transcendental empiricism, difference, and the actual/virtual. Additional concepts – assemblage, becoming, affect, rhizome, molar/molecular, order-word, smooth/striated, event, learning, nomad, and war machine – are deployed to reconceptualize how plagiarism and technology shape L2 students' writing, as well as the treatment of plagiarism within academic learning and educational research. In more concrete terms, this study was conducted at a university-affiliated EAP program designed for international students who hold conditional admission to their respective degree programs. Seven students and their teachers were recruited over the course of two semesters. Data sources include ongoing in-depth interviews, document analysis of students' drafts, screen-cast recordings of the students' writing process, and a researcher diary. Rhizoanalysis, a Deleuzian-inspired non-method (Masny, 2016), was used to read the data and map connections between elements. Five cartographic mappings are presented in lieu of 'findings'. These mappings do not attempt to provide a complete picture of the reality represented in the data; instead they seek to disrupt and problematize, and then create open space to think about what might be happening and how it might be happening differently. Seemingly straightforward 'data' is complicated in terms of: 1) the affective force of plagiarism; 2) the conditions for learning; 3) digital tools and plagiarism detection; 4) the materiality of text; and 5) researcher-becoming. Consistent with the call for concept creation to generate new thinking, I propose the concept of virtual-plagiarism to un-do our habit of tracing texts (as a response to alleged plagiarism) and move towards mapping the elements, intensities, forces, and flows by which plagiarism is actualized. Put to work, the concept of virtual-plagiarism de/reterritorializes both the student writers' assemblage and the researcher assemblage, and ultimately disrupts the pedagogic and research practices in L2 academic writing that have long bound the issue of plagiarism to student ethics and/or student aptitude and intention. Just as this project aspires to rethink how plagiarism and technology shape L2 students' writing and how this phenomenon can be researched, it also invites the reader to follow suit and reimagine how Deleuze-inspired methods and concepts can affect (their own) teaching and educational research practices.
22

Contribution aux Assemblages Hybrides (Boulonnés/Collés) – Application aux Jonctions Aéronautiques / Contribution to Hybrid (Bolted/Bonded) Joints – Application to Aeronautical Junctions

Paroissien, Eric 21 November 2006 (has links) (PDF)
A joint is called hybrid when it combines two different joining techniques. The present study is concerned with the combination of adhesive bonding and bolting. The longitudinal fuselage joints of civil aircraft, assembled in single lap shear, constitute an application case for this work. Parametric tools, in the form of computer programs, are developed; they give the distribution of load transfer in the substrates and in the fasteners, as well as the distribution of adhesive stresses, as functions of the geometric and mechanical parameters of the joint. These tools rest on 1D and 2D analytical models, which assume elastic and isotropic material behaviour and are based on new finite elements formulated to simulate two bonded substrates. A 3D finite-element model is developed to access local mechanical behaviour; it is useful in particular for determining the fastener stiffnesses used in the analytical models. Experimental measurement of the load transfer rate at the fasteners under low loads, using instrumented bolts, validates and calibrates the analytical and numerical approaches, while fatigue tests estimate the gains in fatigue life.
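To make the kind of 1D analytical model mentioned above concrete, here is a minimal Python sketch of the classical Volkersen shear-lag solution for a purely bonded, balanced single-lap joint (identical elastic adherends), a simpler relative of the hybrid bolted/bonded models developed in the thesis. The function name and all parameter values are illustrative assumptions, not the thesis's actual tools.

```python
import numpy as np

def volkersen_shear_stress(P, overlap, E, t, Ga, ta, n=200):
    """Adhesive shear stress tau(x) along a balanced single-lap bonded joint
    (Volkersen shear-lag model, identical elastic and isotropic adherends).

    P       : applied load per unit width [N/mm]
    overlap : overlap length [mm]
    E, t    : adherend Young's modulus [MPa] and thickness [mm]
    Ga, ta  : adhesive shear modulus [MPa] and thickness [mm]
    """
    c = overlap / 2.0                       # half overlap length
    lam = np.sqrt(2.0 * Ga / (ta * E * t))  # shear-lag parameter [1/mm]
    x = np.linspace(-c, c, n)               # abscissa measured from mid-overlap
    tau = (P * lam / 2.0) * np.cosh(lam * x) / np.sinh(lam * c)
    return x, tau

# Illustrative (assumed) values: aluminium adherends, epoxy adhesive.
x, tau = volkersen_shear_stress(P=200.0, overlap=25.0, E=70e3, t=2.0, Ga=800.0, ta=0.2)
print(f"average shear stress : {200.0 / 25.0:5.1f} MPa")
print(f"peak shear stress    : {tau.max():5.1f} MPa (at the overlap ends)")
```

The adhesive shear-stress peaks that this model predicts at the ends of the overlap are the kind of quantity the thesis's parametric tools compute as a function of the joint's geometric and mechanical parameters, once fasteners are added to share the load transfer.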
23

Art making and thesis writing: an assemblage of becomings

Booth, Gillian. 10 April 2008 (has links)
No description available.
24

Engaging Lives: a Nomadic Inquiry Into the Spatial Assemblages and Ethico-aesthetic Practices of Three Makers

Coats, Cala R. 05 1900 (has links)
This research is a nomadic inquiry into the ethics and aesthetics of three makers’ social and material practices. Deleuze’s concept of the nomad operated in multiple ways throughout the process, which was embedded in performative engagements that produced narratives of becoming. Over four months, I built relationships with three people as I learned about the ethico-aesthetic significance of their daily practices. The process started by interviewing participants in their homes and expanded over time to formal and informal engagements in school, community, and agricultural settings. I used Guattari’s ecosophical approach to consider how subjectivity was produced through spatial assemblages by spending time with participants, discussing material structures and objects, listening to personal histories, and collaboratively developing ideas. Participants included a builder who repurposed a missile base into a private residence and community gathering space, an elementary art teacher who practiced urban homesteading, and a young artist who developed an educational farm. The research considers the affective force of normalized social values, the production of desire by designer capitalism, and the mutation of life from neoliberal policies. Our experiences illuminate the community-building potential of direct encounters and direct exchanges. The project generates ideas for becoming an inquirer in the everyday and reveals possibilities for producing pedagogical experiences through collective and dissensual action. Ultimately, the project produces hope for performative and anti-disciplinary approaches to education, rupturing false divisions that fragment the force of thought, to produce, instead, aesthetic experiences that privilege processes and are based in direct and collective engagements with life.
25

A Study of Heavy Minerals Found in a Unique Carbonate Assemblage from the Mt. Mica Pegmatite, Oxford County, Maine

Johnson, Christopher M. 01 May 2013 (has links)
This thesis focuses on heavy mineral species found in a unique carbonate assemblage in the Mt. Mica pegmatite in order to determine the conditions of their formation and their mineral paragenesis, as well as to gain insight into the origin of this very unusual carbonate-rich unit.
26

Morphological and functional analysis of the postcranial anatomy of two dicynodont morphotypes from the Cynognathus Assemblage Zone of South Africa and their taxonomic implications

Govender, Romala 26 February 2007 (has links)
Student Number: 9202936M - PhD thesis - School of Geoscience - Faculty of Science / Kannemeyeria simocephalus is probably the best known Middle Triassic dicynodont from South Africa and has been the standard against which other Triassic dicynodonts are compared. In the past, studies have concentrated on the cranial morphology of K. simocephalus and how this affected Triassic dicynodont taxonomy and phylogeny. There has been little work on the postcranial anatomy of K. simocephalus, which remains poorly understood. The current study undertook a detailed descriptive analysis of the postcranial anatomy of K. simocephalus that led to the identification of diagnostic characters of the postcranial skeleton. During the course of this analysis it was noted that material previously assigned to this taxon differs significantly from that recognised as K. simocephalus. Unfortunately, this material consists only of postcranial elements, and it is therefore referred to as Morphotype B rather than as a new species of Kannemeyeria or a new taxon from the Cynognathus Assemblage Zone (subzone B). A phylogenetic analysis that included K. simocephalus and Morphotype B was performed using cranial and postcranial characters. The preliminary phylogenetic results show that there are possibly two taxa of medium to large dicynodonts in the Cynognathus Assemblage Zone (subzone B): one a kannemeyeriid and the second a stahleckeriid. It is also evident that more attention needs to be paid to the study of the postcranial anatomy of Triassic dicynodonts, especially those from Africa and Asia.
27

Approximation de superchaîne, indexation et assemblage de génome / Approximation of superstrings, indexing, and genome assembly

Cazaux, Bastien 07 December 2016 (has links)
Current sequencing technologies cannot read the sequence of a whole genome at once; instead they produce short, error-prone fragments of that genome. These sequences (called reads) must then be assembled to recover the complete genome sequence. A theoretical version of this problem is the shortest superstring problem: given a set of words (our set of reads), find the shortest string that contains every word as a substring (the original genome). Studied since the 1960s, this problem is notoriously hard to solve both exactly and approximately. Assembly requires some preprocessing of the reads, for instance correcting the sequencing errors they contain. Some correction tools (and other preprocessing steps) use an indexing data structure over the sequences to locate errors; after correction, however, this data structure is discarded and the assembly uses only the corrected reads. In this thesis we ask how indexing structures can be used to facilitate or improve the quality of assembly. First, we show that the graphs used by assembly algorithms (the de Bruijn graph, the contracted de Bruijn graph, and the overlap graph) can be rebuilt quickly from an indexing structure. In addition, we introduce a new graph, the Hierarchical Overlap Graph, which summarises the information carried by the classical assembly graphs. Second, we ask how an indexing structure can directly help to solve the theoretical shortest superstring problem. To this end, we study the solutions produced by the greedy algorithm (their approximation ratio, their combinatorics, ...) and several variants of the problem (reversed and complemented words, cyclic superstrings, covers by a set of superstrings). This settles several questions on the complexity and approximability of these problems; in particular, the greedy algorithm solves the shortest cyclic cover of strings in linear time. Although the greedy algorithm is the simplest and one of the most studied for these problems, it remains a mystery. Our study brings out a new graph, the Superstring Graph, which corresponds to an embedding of the greedy algorithm's solutions in the suffix tree indexing structure; in other words, the Superstring Graph summarises the whole set of greedy solutions in linear space. Finally, we turn to the algorithms behind the best assemblers used in practice (IDBA, SPAdes), which have improved the assembly of short reads by using several assembly graphs. We first show that the Superstring Graph stores more information than these assemblers do, with a much lower space complexity. We then show that the greedy algorithm for a variant of the shortest superstring problem yields sequences that include the contigs found by these assemblers. These results link practical assembly to the superstring problems and give a strong theoretical framework for studying these heuristic algorithms.
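As a concrete illustration of the greedy algorithm studied in the thesis (this is not the thesis's own code; the function names and example reads are mine), the sketch below repeatedly merges the pair of strings with the largest suffix-prefix overlap until a single superstring remains.

```python
from itertools import permutations

def overlap(a: str, b: str) -> int:
    """Length of the longest suffix of a that is also a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_superstring(words: list[str]) -> str:
    """Greedy approximation of the shortest common superstring:
    repeatedly merge the two strings with maximum overlap."""
    # Drop words that are substrings of other words: they contribute nothing.
    words = [w for w in words if not any(w != v and w in v for v in words)]
    while len(words) > 1:
        a, b = max(permutations(words, 2), key=lambda p: overlap(*p))
        merged = a + b[overlap(a, b):]
        words = [w for w in words if w not in (a, b)] + [merged]
    return words[0]

reads = ["GCATTA", "ATTAGC", "TAGCGG", "CGGCAT"]
print(greedy_superstring(reads))  # prints a superstring containing every read
```

This is the simple rule whose solutions the Superstring Graph summarises; despite its simplicity, its exact approximation ratio is still open (it is conjectured to be a 2-approximation), which is part of why the thesis calls it a mystery.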
28

Rôle de la protéine Vpu dans l'assemblage du VIH-1 au niveau des radeaux lipidiques / Role of the Vpu protein in HIV-1 assembly at lipid rafts

St-Onge, Geneviève January 2005 (has links)
Master's thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
29

Border enacted: unpacking the everyday performances of border control and resistance

Fisher, Daniel Xavier Odhrasgair January 2018 (has links)
For over a decade European governments have invested in technological systems to develop new forms of border security in their attempts to regulate migration. Numerous innovations have been designed in order to grant border agencies an unbroken vision of the borderspace, thus allowing states to continuously enact the border beyond their territorial boundaries. Meanwhile, other strategies have been designed in order to control the movements and actions of 'irregular migrants' and asylum seekers following their successful attempts at reaching the territorial boundaries of the European Union (EU). In this thesis I seek to tease apart these technocratic claims of omni-voyance and pervasive control by focusing on the everyday realities of border control and the ways in which these are negotiated and resisted by those who seek to evade them. To this aim, I approach the border by drawing on assemblage theory, as well as feminist geopolitics' attention to performance and embodiment. Such an approach re-centres attention on the human performances of border control, emphasises the agency of 'non-human' actors, foregrounds the messy realities of borderspaces, and engages with the multiplicity of borders. In applying this approach, I argue that the border should not be thought of as a static entity; neither in its location in space, nor in terms of the actors that perform it. Instead, I have oriented my approach towards conceptualising the border as in a constant state of becoming - with actors being continuously added to and subtracted from the security assemblages which constitute the border. In particular I focus on the ways in which 'non-state' actors are increasingly being coerced into performing the border and what the effects of this are on those who seek to evade its violent gaze. In order to put this approach to work, I employ a multi-sited ethnographic study of three European borderspaces: the Frontex headquarters in Warsaw, the Straits of Gibraltar and an anonymised city in the United Kingdom (UK). In Warsaw and the Straits of Gibraltar (specifically the cities of Algeciras and Ceuta) my research was focused on two border surveillance assemblages: (1) The European Border Surveillance System (EUROSUR) operated by Frontex and (2) Spain's Sistema Integrado de Vigilancia del Exterior (SIVE) maritime surveillance system. I argue that the 'messiness' of the borderspace proves too complex for the surveillance system to control, the vision produced through SIVE being fragmented and stuttered through both human and technological flaws. I also highlight how securing the border is as much a temporal negotiation as it is a spatial one; the struggle for control over the borderspace comprising a contest of speed. The effect is a geography of the border that foregrounds the 'little details' of borderwork; exposing the flaws behind a scopic narrative that claims unceasing vision and an unhindered reach. While in Ceuta I also challenged the formal performances of the enclave as a 'humanitarian space'. Indeed, I argue that it is as a result of framing the enclave's detention centre as a reception centre for humanitarianism that irregular migrants can be detained in the autonomous city indefinitely. Yet the actors that perform the borders of the enclave do so in an untidy alliance which regularly springs leaks. I also discuss the tactics of the migrants who have made it to the enclave and who now seek to leave it again. 
In particular I note how their tactics of resistance have become entangled with the bordering strategies specific to the enclave. I also question the extent to which the border enclave and the specific identities forged by the migrants who pass through it will remain with them as they pass through future checkpoints of the European border - the evidence of their time spent in Ceuta locked in their fingertips. In the anonymised city in the UK my aims were to question the reach of the state into the everyday lives of asylum seekers. While the lives of asylum seekers are often described as being in 'limbo', I sought to question the temporalities and materialities of urban living for people stuck in the asylum system. I argue that the strategies used by the UK Home Office are intended to limit the movements and actions of asylum seekers in the city through securitising the support that asylum seekers are entitled to. I focus on the ways in which the border is carried by asylum seekers in the city through their use of ARC and Azure cards, especially, and the ways in which these cards serve to 'fix' people with the negative qualities and stereotypes associated with asylum seekers. Through volunteering for a group offering solidarity support to asylum seekers in the city, I also argue that this strategy of limiting movements can be resisted. Like the tactics encountered in Ceuta, however, these tactics frequently become entangled in the strategies of border control.
30

Algorithmes pour la reconstruction de séquences de marqueurs conservés dans des données de métagénomique / Algorithms for reconstructing conserved marker sequences from metagenomic data

Pericard, Pierre 27 October 2017 (has links)
Recent advances in DNA sequencing now give access to the genetic material of microbial communities extracted directly from natural environmental samples. This new research field, called metagenomics, has many applications, for example in health, in the agri-food industry, and in ecology. Analysing such samples, however, requires new bioinformatics methods to determine the taxonomic composition of the community under study, since accurate identification of the organisms present is an essential step towards understanding even the simplest ecosystems. Current sequencing technologies produce short, noisy DNA fragments that only partially cover complete gene sequences, which poses a real challenge for high-resolution taxonomic analysis. We developed MATAM, a new bioinformatics method dedicated to the fast, low-error reconstruction of complete sequences of conserved phylogenetic markers from raw sequencing data. The method is a succession of steps that build and analyse a read overlap graph. We applied it to the assembly of the small-subunit ribosomal RNA in simulated, synthetic, and real metagenomes. The results obtained are of very high quality and improve on the state of the art.
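As a minimal illustration of the read overlap graph at the heart of the method described above (this is not MATAM's implementation; the example reads, names, and overlap threshold are assumptions), the sketch below links read a to read b whenever a suffix of a matches a prefix of b over at least min_overlap characters.

```python
def suffix_prefix_overlap(a: str, b: str, min_overlap: int) -> int:
    """Longest suffix of a equal to a prefix of b, if at least min_overlap long, else 0."""
    for k in range(min(len(a), len(b)), min_overlap - 1, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def build_overlap_graph(reads: list[str], min_overlap: int = 3) -> dict[str, list[tuple[str, int]]]:
    """Directed overlap graph: graph[a] lists (b, overlap_length) for each overlap a -> b."""
    graph = {r: [] for r in reads}
    for a in reads:
        for b in reads:
            if a != b:
                k = suffix_prefix_overlap(a, b, min_overlap)
                if k:
                    graph[a].append((b, k))
    return graph

reads = ["ATCGGA", "CGGATT", "GATTAC", "TTACGT"]
for read, edges in build_overlap_graph(reads).items():
    for successor, k in edges:
        print(f"{read} -({k})-> {successor}")
```

A real pipeline would not perform this quadratic all-pairs comparison on millions of reads; the point is only to show the structure that the later analysis steps traverse.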
