261

Visualizing genetic transmission patterns in plant pedigrees

Shaw, Paul David January 2016 (has links)
Ensuring food security in a world with an increasing population and demand on natural resources is becoming ever more pertinent. Plant breeders are using an increasingly diverse range of data types, such as phenotypic and genotypic data, to identify plant lines with desirable characteristics suitable to be taken forward in plant breeding programmes. These characteristics include a number of key morphological and physiological traits, such as disease resistance and yield, that need to be maintained and improved upon if a commercial plant variety is to be successful. The ability to predict and understand the inheritance of alleles that facilitate resistance to pathogens or any other commercially important characteristic is crucially important to experimental plant genetics and commercial plant breeding programmes. However, derivation of the inheritance of such traits by traditional molecular techniques is expensive and time consuming, even with recent developments in high-throughput technologies. This is especially true in industrial settings where, due to time constraints relating to growing seasons, many thousands of plant lines may need to be screened quickly, efficiently and economically every year. Thus, computational tools that provide the ability to integrate and visualize diverse data types with an associated plant pedigree structure will enable breeders to make more informed, and consequently better, decisions on the plant lines that are used in crossings. This will help meet the demands for increased yield and production as well as adaptation to climate change. Traditional family-tree style layouts are commonly used and simple to understand, but are unsuitable for the data densities that are now commonplace in large breeding programmes. The size and complexity of plant pedigrees means that there is a cognitive limitation in conceptualising large plant pedigree structures; novel techniques and tools are therefore required by geneticists and plant breeders to improve pedigree comprehension. Taking a user-centred, iterative approach to design, a pedigree visualization system was developed for exploring a large and unique set of experimental barley (H. vulgare) data. This work progressed from the development of a static pedigree visualization to interactive prototypes and finally the Helium pedigree visualization software. At each stage of the development process, user feedback in the form of informal and more structured evaluation by domain experts guided the development lifecycle, with users' concerns addressed and additional functionality added. Plant pedigrees are very different from those of humans and farmed animals, and consequently the development of the pedigree visualizations described in this work focussed on implementing currently accepted techniques used in pedigree visualization and adapting them to meet the specific demands of plant pedigrees. Helium includes techniques to address problems with user understanding identified through user testing; examples include difficulties that arise where crosses between varieties are situated in different regions of the pedigree layout. There are good biological reasons why this happens, but it has been shown, through testing, that it leads to problems with users' comprehension of the relatedness of individuals in the pedigree. The inclusion of visual cues and the use of localised layouts have allowed complications like these to be reduced.
Other examples include the use of node sizing to show the frequency of usage of specific plant lines, which has been shown to act as a positional reference point for users and subsequently brings a secondary level of structure to the pedigree layout. The use of these novel techniques has allowed the classification of three main types of plant line, which have been coined principal, flanking and terminal plant lines. This technique has also shown visually the most frequently used plant lines, which, while previously known from text records, were never quantified. Helium's main contributions are two-fold. Firstly, it has taken visualization techniques used in traditional pedigrees and applied them to the domain of plant pedigrees; this has addressed problems with handling large experimental plant pedigrees. The scale, complexity and diversity of data and the number of plant lines that Helium can handle exceed those of other currently available plant pedigree visualization tools. These techniques (including layout and phenotypic and genotypic encoding) have been adapted to deal with the differences that exist between human/mammalian and plant pedigrees, taking account of problems such as the complexity of crosses and routine inbreeding. Secondly, the effectiveness of the visualizations has been verified by performing user testing on a group of 28 domain experts. The improvements have advanced user understanding of pedigrees and allowed a much greater density and scale of data to be visualized. User testing has shown that the implementation and extension of visualization techniques have improved user comprehension of plant pedigrees when users are asked to perform real-life tasks with barley datasets. Results have shown an increase in correct responses between the prototype interface and Helium, and a SUS analysis has shown a high acceptance rate for Helium.
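The node-sizing idea mentioned in the abstract — scaling a plant line's node by how often it is used as a parent, so heavily used lines become visual reference points — can be illustrated with a minimal sketch. This is not Helium's actual implementation; the pedigree record format, the field names and the size mapping below are assumptions made purely for illustration.

```python
from collections import Counter

def parent_usage(crosses):
    """Count how often each plant line appears as a parent in a pedigree.

    `crosses` is a list of (child, parent_a, parent_b) tuples -- a simplified
    pedigree record, not Helium's real input format.
    """
    usage = Counter()
    for _child, parent_a, parent_b in crosses:
        usage[parent_a] += 1
        usage[parent_b] += 1
    return usage

def node_sizes(usage, base=10, scale=4):
    """Map usage frequency to a drawing size so frequently used lines
    stand out as positional reference points in the layout."""
    return {line: base + scale * count for line, count in usage.items()}

if __name__ == "__main__":
    crosses = [
        ("Line-C", "Line-A", "Line-B"),
        ("Line-D", "Line-A", "Line-C"),
        ("Line-E", "Line-A", "Line-D"),
    ]
    print(node_sizes(parent_usage(crosses)))  # Line-A dominates, largest node
```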
262

När filologerna refererar : En referensanalys av svenska doktorsavhandlingar i ämnet latin / When Philologists cite : A Citation Analysis of Swedish Doctoral Dissertations in the Subject Field of Latin

Ramstedt, Erik January 2018 (has links)
This study presents a citation analysis of 20 doctoral dissertations in the subject field of Latin, which is part of the broader field of classical philology. The dissertations were all written at Swedish universities and were published during two measurement periods between 1979 and 2017. The aim of the study is to provide a basis for decision-making for librarians who are responsible for collections of books and journals on classical philology at Swedish university libraries. The study takes as its starting point and theoretical background a citation analysis made by Gregory A. Crawford and published in an article in 2013. That citation analysis took a philological journal as its empirical object and found a remarkable stability over time in the citation practices of scholars involved in classical philology, especially regarding the language, age and type of material cited. With Crawford's results as background, the present study finds similar patterns of stability in citation practices in the Swedish dissertations analysed. The conclusion of this study is that Swedish university libraries should retain their older books on classical philology as well as continue to develop their collections with books and journals written in English, German, French and Italian. This is a two-year master's thesis in Library and Information Science.
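The core of a citation analysis of this kind is tallying cited references by language and by age at the time of citation. The following is a minimal sketch of that bookkeeping; the reference dictionaries and field names are invented for the example and are not taken from the study's data.

```python
from collections import Counter

def citation_profile(references, publication_year):
    """Tally cited references by language and report the median citation age.

    Each reference is a dict such as {"language": "German", "year": 1952};
    these fields are illustrative placeholders.
    """
    by_language = Counter(ref["language"] for ref in references)
    ages = sorted(publication_year - ref["year"] for ref in references)
    median_age = ages[len(ages) // 2] if ages else None
    return by_language, median_age

refs = [
    {"language": "Latin", "year": 1890},
    {"language": "German", "year": 1931},
    {"language": "English", "year": 2001},
]
print(citation_profile(refs, publication_year=2017))
```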
263

”Jag skriver begripligt” : hur, varför och till vem förmedlas forskningen? / ”Understanding what I write” : How, why and for whom is research presented?

Almqvist, Elin, Winquist, Emma January 2002 (has links)
"Understanding what I write" is a study of university-scientists’ work concerning science-information, with the basis in Pedagogical Theory. The empirical material is collected through an inquiry and several interviews with scinentists at the ten universities of Sweden. This inquiry was done within the timeframe of November 2000 to Janurary 2001. The purpose is to find out wich methods scientists use to spread their knowledge to the society and what possibilities there are to improve these. / "Jag skriver begripligt "är en studie av universitetsforskares arbete med forskningsinformation med utgångspunkt i pedagogisk teori. Det empiriska materialet är insamlat i form av en enkät och ett flertal intervjuer med forskare på Sveriges tio universitet, under tidsperioden första november 2000 till sista januari 2001. Syftet med undersökningen är att ta reda på vilka metoder forskare använder för att sprida sin kunskap till det omgivande samhället och vilka möjligheter det finns att förbättra dessa.
264

Museu-monstro: insumos para uma museologia da monstruosidade / Museum-monster: inputs for a museology of monstrosity

Pires, Vladimir Sibylla 16 May 2014 (has links)
We are today facing a new productive paradigm, the cognitive one. This shift brings out a set of concepts that problematize the way we analyse the role of the museum in the contemporary world. The hegemony of the immaterial dimensions of labour is at the heart of this paradigm shift, and this new centrality poses analytical and methodological challenges for Information Science and for Museology. In view of this, another understanding of the museum announces itself: no longer centred on a contractual relationship, but attentive to the production of the common; no longer restricted to the building or the territory, but related to a network of networks; no longer at the service of the development of a public or a population, but a tool for the autonomy of the multitude; no longer focused on the object or on heritage as we know it, but on our info-communicational dynamics. A non-museum, a post-museum beyond the models of the "open work" and the "sites of memory". A museum of the evental, of the encounter between praxis and poiesis. A museum-monster, of the creative excess of the multitude: in the face of its contemporary uprising, a museology of monstrosity?
265

Optimizing Sample Design for Approximate Query Processing

Rösch, Philipp, Lehner, Wolfgang 30 November 2020 (has links)
The rapid increase of data volumes makes sampling a crucial component of modern data management systems. Although there is a large body of work on database sampling, the problem of automatically determining the optimal sample for a given query has remained (almost) unaddressed. To tackle this problem, the authors propose a sample advisor based on a novel cost model. Primarily designed for advising samples for a few queries specified by an expert, the sample advisor is additionally extended in two ways. The first extension enhances its applicability by utilizing recorded workload information and taking memory bounds into account. The second extension increases its effectiveness by merging samples in the case of overlapping pieces of sample advice. For both extensions, the authors present exact and heuristic solutions. In their evaluation, the authors analyze the properties of the cost model and demonstrate the effectiveness and efficiency of the heuristic solutions with a variety of experiments.
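The merging idea — when two pieces of sample advice overlap, a combined sample may be cheaper than keeping both — can be sketched as a simple greedy loop. This is only an illustration under assumed representations: sample advice is reduced to a set of column names and the cost model is a toy function, neither of which is taken from the thesis's exact or heuristic solutions.

```python
def merge_overlapping_advice(advice, cost):
    """Greedily merge pieces of sample advice whose column sets overlap,
    whenever the merged sample is no more expensive than the two separate ones.

    `advice` is a list of frozensets of column names (one per recommended
    sample); `cost(columns)` estimates the memory footprint of a sample over
    those columns. Both are simplifications for illustration.
    """
    merged = list(advice)
    changed = True
    while changed:
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                a, b = merged[i], merged[j]
                if a & b and cost(a | b) <= cost(a) + cost(b):
                    merged[i] = a | b   # keep the merged sample
                    del merged[j]       # drop the now-redundant one
                    changed = True
                    break
            if changed:
                break
    return merged

# toy cost model: footprint grows with the number of sampled columns
print(merge_overlapping_advice(
    [frozenset({"a", "b"}), frozenset({"b", "c"}), frozenset({"x"})],
    cost=lambda cols: 1 + len(cols),
))
```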
266

Les Techniques De Recommandation Et De Visualisation Pour Les Données A Une Grande Echelle / Recommendation and Visualization Techniques for Large-Scale Data

Moin, Afshin 09 July 2012 (has links) (PDF)
We have witnessed the rapid development of information technology over the last decade. On the one hand, the processing and storage capacity of digital devices is constantly increasing thanks to advances in manufacturing methods. On the other hand, interaction between these powerful devices has been made possible by networking technology. A natural consequence of this progress is that the volume of data generated in different applications has grown at an unprecedented rate. We are now confronted with new challenges in efficiently processing and representing the enormous mass of data at our disposal. This thesis is centred on the two axes of recommending relevant content and visualizing it correctly. The role of recommender systems is to help users in the decision-making process of finding items with relevant content and satisfactory quality within the vast set of possibilities existing on the Web. In turn, the correct representation of the processed data is central both to increasing the usefulness of the data for the end user and to designing effective analysis tools. In this thesis, the main approaches to recommender systems as well as the most important techniques for visualizing data in the form of graphs are discussed. Furthermore, it is shown how some of the same techniques applied to recommender systems can be modified to take visualization requirements into account.
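One of the classical recommender-system approaches such a survey covers is neighbourhood-based collaborative filtering, which predicts a user's rating of an item as a similarity-weighted average over other users. The sketch below is a generic illustration of that idea, not the thesis's method; the rating matrix and the choice of cosine similarity are assumptions for the example.

```python
import math

def cosine(u, v):
    """Cosine similarity between two users' rating dicts (item -> rating)."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = (math.sqrt(sum(r * r for r in u.values()))
           * math.sqrt(sum(r * r for r in v.values())))
    return num / den

def predict(ratings, user, item):
    """Predict `user`'s rating of `item` as a similarity-weighted average
    over the other users who rated it (user-based collaborative filtering)."""
    num = den = 0.0
    for other, their in ratings.items():
        if other == user or item not in their:
            continue
        s = cosine(ratings[user], their)
        num += s * their[item]
        den += abs(s)
    return num / den if den else None

ratings = {
    "alice": {"m1": 5, "m2": 3},
    "bob":   {"m1": 4, "m2": 2, "m3": 4},
    "carol": {"m1": 1, "m3": 2},
}
print(predict(ratings, "alice", "m3"))  # weighted mostly by bob, who is more similar
```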
267

Indexation des émotions dans les documents audiovisuels à partir de la modalité auditive / Indexing emotions in audiovisual documents from the audio modality

Lê, Xuân Hùng 01 July 2009 (has links) (PDF)
This thesis concerns the detection of emotions in multilingual audio utterances. One of the intended applications is the indexing of emotional states in audiovisual documents for content-based retrieval. The work begins with a study of emotion and of the models used to represent it: discrete, continuous and hybrid models. In the rest of the work, only the discrete model is used, for practical reasons of evaluation but also because it is more easily usable in the targeted applications. A state of the art on the different approaches used for emotion recognition is then presented. The problem of producing annotated corpora for training and evaluating emotional-state recognition systems is also addressed, and an overview of the available corpora is given. One difficulty here is obtaining realistic corpora for the intended applications. In order to obtain more spontaneous data in a wider range of languages, two corpora were created from feature films, one in English and the other in Vietnamese. The rest of the work is divided into four parts: a study of the best parameters for representing the acoustic signal for emotion recognition, a study of the best classification models and systems for the same problem, experiments on cross-language emotion recognition, and finally the production of an annotated corpus in Vietnamese and the evaluation of emotion recognition in that language, which has the particularity of being tonal. In the first two studies, the single-speaker, multi-speaker and speaker-independent cases were considered. The search for the best parameters was carried out over a large set of local and global parameters classically used in automatic speech processing, as well as derivations of them. An approach based on sequential forward selection was used to choose the optimal combinations of acoustic parameters. The same approach can be applied to different types of data, although the final result depends on the type considered. Among MFCC, LFCC and LPC coefficients, fundamental frequency, intensity, speech rate and other coefficients extracted from the time domain, MFCC-type parameters gave the best results in the cases considered. A symbolic normalization approach improved performance in the speaker-independent case. To find the best model and associated classification system, an approach of successive elimination over cases of increasing complexity (single-speaker, multi-speaker and speaker-independent) was used. GMM, HMM, SVM and VQ (vector quantization) models were studied, and the GMM model gave the best results on the data considered. Cross-language experiments (German and Danish) showed that the developed methods transfer well from one language to another, but that parameter optimization specific to each language or each type of data is necessary to obtain the best results. These languages are, however, non-tonal. Tests with the corpus created in Vietnamese showed much poorer generalization in that case. This may be because Vietnamese is a tonal language, but it may also be due to the difference in the conditions under which the corpora were created: acted in the first cases and more spontaneous for Vietnamese.
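The sequential forward selection used to pick combinations of acoustic parameters can be sketched generically: starting from an empty set, repeatedly add the candidate feature group that most improves the evaluation score. The candidate names and the evaluation function below are placeholders, not the thesis's actual acoustic parameters or its GMM-based evaluation.

```python
def sequential_forward_selection(features, evaluate, k):
    """Greedy sequential forward selection of feature groups.

    `features` lists candidate feature-group names (e.g. "MFCC", "F0");
    `evaluate(subset)` returns a recognition score for a classifier trained
    on that subset. Both are illustrative stand-ins.
    """
    selected = []
    while len(selected) < k:
        best_feature, best_score = None, float("-inf")
        for f in features:
            if f in selected:
                continue
            score = evaluate(selected + [f])
            if score > best_score:
                best_feature, best_score = f, score
        selected.append(best_feature)
    return selected

# toy evaluation: pretend MFCCs dominate, with small extra gains from prosody
gains = {"MFCC": 0.60, "F0": 0.05, "intensity": 0.04, "LPC": 0.02}
print(sequential_forward_selection(
    list(gains), lambda subset: sum(gains[f] for f in subset), k=2))
```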
268

O papel do software livre na inclusão digital / The role of free software in digital inclusion

Elias, Paulo César 04 October 2006 (has links)
Contemporary society increasingly shows the need for individuals to have control over the selection, processing, communication and use of information. The computerization of society is evident and growing, mediated mainly by new technologies capable of establishing links between different and distant geographic spaces and converging a great amount of information in the most diverse areas of human intelligence, whether for cultural, business, political and governmental use or even for entertainment. On the other hand, this new society also reveals a great inequality between those who have access to the new information technologies and their great networks and those who have no access at all, whether physical or cognitive, constituting the so-called excluded. With the technological transformations that have occurred since the effective implementation of the Internet, new forms of organization and production of software have appeared, most notably the free software movement and the current discourses claiming that it would establish itself as a liberating force in the sharing of information and knowledge, able to act as a new tool of digital inclusion. This study examines whether this hypothesis holds, carrying out theoretical research grounded in Information Science and in discussions of the political economy of information, and investigating the current configuration of society in the face of the new technologies and the role that free software has been playing as a tool of digital inclusion.
269

O controle de vocabulário como dispositivo para a organização, tratamento e recuperação da informação arquivística / Vocabulary control as a device for the organization, treatment and retrieval of archival information

Aguiar, Francisco Lopes de 14 February 2008 (has links)
The objective is to understand the theoretical-conceptual and methodological specificities involved in the elaboration of vocabulary control (the documentary process) and of the controlled vocabulary (the documentary product) from the archival point of view. Through an exploratory approach of a qualitative nature, the study revisits, in dialogue with Information Science and particularly with the area of Organization and Treatment of Information, the main theoretical-conceptual and methodological postulates that can support the construction of this process. It presents an overview of the evolution of archival thinking and practice, aiming to understand the historical-social movement of the field, and offers a brief systematization pointing out institutional differences and similarities among archives, libraries and documentation centres. It also presents the main contributions of the Documentation Movement to the deconstruction of paradigms and its impact on the practices of organizing and treating information. It revisits the conceptual evolution of the triad archive, document and information from the custodial to the post-custodial paradigm, and conceptually delimits the specificities of archival information. It emphasizes the need to understand archival institutions as information systems within the informational perspective imposed by the context of post-modernity, highlighting the theoretical-conceptual implications of the processes of representation and retrieval of documentary content and seeking to delimit conceptually the elements document, data, information and knowledge as objects of management in information retrieval systems. It addresses the theoretical-conceptual framework concerning the processes of organization, representation and retrieval of archival information, and makes some considerations regarding the legitimacy of subject/theme as an access point in permanent archives. It presents a brief contribution of the General Theory of Terminology to support the vocabulary control process, and systematizes a short historical and theoretical-conceptual overview of the controlled vocabulary (the documentary product). Finally, it presents propositions towards a methodology for the development of controlled vocabularies within the archival field. It concludes that vocabulary control and the controlled vocabulary provide resources and methodological devices to support the organization and treatment of archival information.
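In practical terms, a controlled vocabulary acts as a mapping from variant entry terms to a single preferred descriptor used for indexing and retrieval. The tiny sketch below illustrates only that lookup step; the terms and mappings are invented examples, not drawn from the thesis.

```python
# Minimal controlled-vocabulary lookup: variant entry terms are mapped to a
# single preferred descriptor before indexing or searching. The terms below
# are invented examples.
controlled_vocabulary = {
    "birth certificate": "civil registration records",
    "certificate of birth": "civil registration records",
    "minutes": "meeting minutes",
    "proceedings": "meeting minutes",
}

def normalize(term):
    """Return the preferred descriptor for a term, or the term itself
    if it is not controlled."""
    return controlled_vocabulary.get(term.lower().strip(), term)

for query in ["Certificate of birth", "proceedings", "budget report"]:
    print(query, "->", normalize(query))
```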
270

Méthodes qualitatives et quantitatives pour la détection d'information cachée / Qualitative and quantitative methods for detecting hidden information

Sassolas, Mathieu 28 November 2011 (has links) (PDF)
Computer systems have become omnipresent and are used daily to manage ever more information. This information is increasingly confidential: strategic military or financial information, personal data. Leaking such information can therefore have serious consequences, such as loss of life, financial losses, violations of privacy or identity theft. The contributions of this thesis fall into three parts. First, we study the problem of synthesizing a communication channel in a system described by a transducer. Despite the limits imposed by this model, we show that the synthesis problem is undecidable in general. However, when the system is functional, that is, when its external behaviour is always the same, the problem becomes decidable. We then generalize the concept of opacity to probabilistic systems, giving measures grouped into two families. When the system is opaque, we evaluate the robustness of this opacity with respect to the information given by the system's probability distributions. When the system is not opaque, we evaluate the size of the security flaw induced by this non-opacity. Finally, we study the model of interrupt timed automata (ITA), in which information on the passage of time is organized in levels comparable to clearance levels. We study the regularity and closure properties of the timed languages generated by these automata and propose model-checking algorithms for fragments of timed temporal logics.
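The intuition behind measuring opacity in a probabilistic system can be illustrated by comparing how likely the secret is once a given trace has been observed: if some observation pins the secret down with certainty, the system leaks. The sketch below is a toy posterior computation over an enumerated set of runs; the run representation and the measure are assumptions chosen to illustrate the idea, not the thesis's formal definitions.

```python
from collections import defaultdict

def posterior_secret_probability(runs):
    """For each observable trace, compute the probability that the secret
    held, given that trace.

    `runs` is a list of (probability, observation, secret_holds) triples --
    a toy enumeration of a probabilistic system's runs.
    """
    obs_total = defaultdict(float)
    obs_secret = defaultdict(float)
    for p, obs, secret in runs:
        obs_total[obs] += p
        if secret:
            obs_secret[obs] += p
    return {obs: obs_secret[obs] / obs_total[obs] for obs in obs_total}

runs = [
    (0.4, "ab", True),   # a secret run and a non-secret run share "ab",
    (0.4, "ab", False),  # so observing "ab" leaves the observer uncertain
    (0.2, "ac", True),   # "ac" is produced only by secret runs: full leak
]
print(posterior_secret_probability(runs))   # {'ab': 0.5, 'ac': 1.0}
```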
