  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
171

Analysis and Design of Raptor Codes for Multicast Wireless Channels

Venkiah, Auguste 01 November 2008 (has links) (PDF)
In this thesis, we investigate the optimization of Raptor codes for various channels of interest in practical wireless systems. First, we present an analytical asymptotic analysis of jointly decoded Raptor codes over a BIAWGN channel. Based on this analysis, we derive an optimization method for the design of efficient output degree distributions. We show that even though Raptor codes are not universal on channels other than the BEC, Raptor codes optimized for a given channel capacity also perform well over a wide range of channel capacities when joint decoding is considered. Then, we propose a rate-splitting strategy that is efficient for the design of finite-length Raptor codes. We next extend the analysis to the uncorrelated Rayleigh-fading channel with perfect channel state information (CSI) at the receiver, and optimize Raptor codes for quasi-static fading channels when CSI is available at the receiver but not at the transmitter. Finally, we show that in the presence of imperfect CSI at the receiver, performance can be improved with no additional complexity by using an appropriate metric for the computation of the LLRs at the output of the channel. In the second part of this thesis, we investigate the construction of efficient finite-length LDPC codes. In particular, we present improvements to the Progressive Edge-Growth algorithm that allow the construction of minimal graphs. The proposed algorithm is used to construct protographs with large girth that perform well under iterative decoding. Moreover, we propose an efficient structured search procedure for the design of quasi-cyclic LDPC codes.
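The final point above, about the metric used to compute LLRs at the channel output, can be made concrete. For BPSK (±1) over a BIAWGN channel with noise variance σ², the standard channel LLR is 2y/σ²; a receiver with imperfect CSI plugs in its noise-variance estimate instead, giving a mismatched metric. A minimal sketch (illustrative, not code from the thesis):

```python
def biawgn_llr(y: float, sigma: float) -> float:
    # For BPSK (+1/-1) over AWGN with noise variance sigma^2,
    # the channel LLR is log p(y|+1)/p(y|-1) = 2*y / sigma^2.
    return 2.0 * y / sigma ** 2

# With imperfect CSI the receiver only knows an estimate sigma_hat;
# using it in the same formula yields a mismatched decoding metric.
received = [0.9, -1.2, 0.3]
true_sigma, sigma_hat = 0.8, 1.0
matched = [biawgn_llr(y, true_sigma) for y in received]
mismatched = [biawgn_llr(y, sigma_hat) for y in received]
```

The thesis's improved metric for imperfect CSI is not reproduced here; the sketch only shows the baseline computation it would replace.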
172

Codes LDPC multi-binaires hybrides et méthodes de décodage itératif / Hybrid multi-binary LDPC codes and iterative decoding methods

Sassatelli, Lucile 03 October 2008 (has links) (PDF)
This thesis deals with the analysis and design of channel codes defined on sparse graphs. The goal is to construct codes with very good performance over wide ranges of signal-to-noise ratios when they are decoded iteratively. The first part introduces a new class of LDPC codes, called hybrid LDPC codes. This class is analyzed for memoryless symmetric channels, leading to the optimization of its parameters for the binary-input Gaussian channel. The resulting hybrid LDPC codes not only have good convergence properties but also a very low error floor for codeword lengths below three thousand bits, thereby competing with multi-edge LDPC codes. Hybrid LDPC codes thus achieve an interesting trade-off between the convergence region and the error floor using non-binary coding techniques. The second part of the thesis studies what machine-learning methods could contribute to the design of good codes and good iterative decoders for short codeword lengths. We first investigated how to construct a code by removing edges from the Tanner graph of a mother code according to a learning algorithm, with the aim of optimizing the minimum distance. We then turned to the design of an iterative decoder by machine learning, with the goal of outperforming the BP decoder, which becomes suboptimal as soon as the code graph contains cycles. The third part of the thesis deals with quantized decoding, with the same aim as before: finding decoding rules capable of correcting difficult error configurations. We propose a class of decoders that use two bits of quantization for the decoder messages. 
We prove sufficient conditions under which an LDPC code with column weight four, and whose smallest cycle has length at least six, corrects any triple of errors. These conditions show that decoding with this two-bit rule guarantees a correction capability of three errors for codes of higher rates than with a one-bit decoding rule.
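As a hypothetical illustration of the two-bit message alphabet such decoders operate on (the thesis's actual update rules are not reproduced here), messages can be quantized to four levels, with the sign carrying the bit estimate and the magnitude a one-bit reliability flag. The threshold and level values below are illustrative assumptions:

```python
def quantize_2bit(llr: float, threshold: float = 2.0, strong: int = 3) -> int:
    # Map a real-valued message to the four-level alphabet
    # {-strong, -1, +1, +strong}: the sign gives the bit estimate,
    # the magnitude says whether the message is "strong" or "weak".
    sign = 1 if llr >= 0 else -1
    return sign * (strong if abs(llr) >= threshold else 1)
```

A two-bit decoder then runs its message-passing updates entirely on this four-symbol alphabet, which is what makes the guaranteed-correction analysis above tractable.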
173

Utilisation d'ontologies comme support à la recherche et à la navigation dans une collection de documents / Using ontologies to support search and navigation in a document collection

Sy, Mohameth-François 11 December 2012 (has links) (PDF)
Ontologies model the knowledge of a domain as a hierarchy of concepts. This thesis deals with their use in Information Retrieval (IR) systems to estimate the relevance of documents to a query. We compute this relevance using a model of the user's preferences and a semantic similarity measure associated with the ontology. This approach makes it possible to explain to users, through an original visualization, why the selected documents are relevant. Since IR is an iterative process, users must be guided in reformulating their queries. A conceptual query reformulation strategy is formalized as an optimization problem that uses the user's feedback on the first results returned as a training set. Our models are validated on the basis of performance obtained on standard test collections and on case studies involving expert biologists.
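To illustrate the kind of semantic similarity measure such a system can attach to a concept hierarchy (the thesis does not specify which measure it uses; Wu-Palmer is one common choice), here is a minimal sketch over a toy is-a hierarchy. The concept names are invented for the example:

```python
# A toy is-a hierarchy: child concept -> parent concept.
parent = {"enzyme": "protein", "protein": "molecule",
          "dna": "molecule", "molecule": "entity"}

def ancestors(c: str) -> list:
    # Path from a concept up to the root, inclusive.
    path = [c]
    while c in parent:
        c = parent[c]
        path.append(c)
    return path

def wu_palmer(c1: str, c2: str) -> float:
    # Wu-Palmer similarity: 2*depth(LCS) / (depth(c1) + depth(c2)),
    # where LCS is the lowest common subsumer and depth counts
    # the nodes on the path to the root.
    a1, a2 = ancestors(c1), ancestors(c2)
    lcs = next(a for a in a1 if a in a2)
    depth = lambda c: len(ancestors(c))
    return 2 * depth(lcs) / (depth(c1) + depth(c2))
```

Concepts sharing a deep common ancestor score close to 1; concepts related only through the root score near 0, which gives the retrieval model a graded notion of conceptual relevance.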
174

Informationsöverflödets dystopi : En intertextuell diskursanalys från Future Shock till The Shallows / Information Overload Dystopia : An intertextual discursive analysis from Future Shock to The Shallows

Johansson, Ingrid January 2013 (has links)
Today it is common to state that we live in an information-overloaded society. But there are many different definitions of what constitutes Information Overload, and there is a lack of substantial research on the subject. Conclusions in the available literature on Information Overload are often drawn from anecdotal evidence and carry a dramatized picture of the causes and effects of the phenomenon. Using the tools of discourse analysis, this two-year master's thesis explores how the phenomenon of Information Overload is portrayed in six popular science books that deal with the subject: Alvin Toffler (1970) Future Shock, Orrin Klapp (1986) Overload and Boredom, Richard Wurman (1989) Information Anxiety, Andrew Keen (2007) The Cult of the Amateur, Maggie Jackson (2008) Distracted, and Nicholas Carr (2010) The Shallows. The analysis shows that there is a common discourse of how the subject of Information Overload is represented, which stretches within and between the books intertextually. In this study that discourse is called the dystopian discourse of Information Overload. It is structured by a unified use of narratives, concepts, themes, metaphors and statements, and by its separation from the opposite utopian discourse of Information Overload. In the final discussion the results of the analysis are compared to postmodern theory, a problematisation of the concept of distraction, and the Swedish government's 2012 investigation of the reading habits of young people in the country. The conclusion of the study is that the two binary discourses discovered in the analysis, the dystopian and the utopian, should be avoided in the debate and research on Information Overload. Instead the discussion should be informed by pluralism, complexity and awareness.
175

Connecting Science Communication To Science Education: A Phenomenological Inquiry Into Multimodal Science Information Sources Among 4th And 5th Graders

Gelmez Burakgazi, Sevinc 01 November 2012 (has links) (PDF)
Science communication, as a multidisciplinary field, serves to transfer scientific information to individuals to promote interest and awareness in science. This process resembles science education. Rooted in science education and science communication studies, this study examines 4th and 5th grade students' use of prominent science information sources (SIS), the features of these sources, and their effective and ineffective uses and processes in communicating science to students. Guided by situated learning and uses and gratifications (U&G) theories, this study is a phenomenological qualitative inquiry. Data were gathered through approximately 64 hours of classroom observations, focus group interviews, and individual interviews in four elementary schools (two public, two private) in Ankara, Türkiye. Focus group interviews were conducted with 47 students, and individual interviews were carried out with 17 teachers and 10 parents. The data were analyzed manually and with MAXQDA software, respectively. The results revealed that students used various SIS in school-based and out-of-school contexts to satisfy their cognitive, affective, personal, and social integrative needs. They used SIS for (a) science courses, (b) homework/project assignments, (c) exam/test preparations, and (d) individual science-related research. Moreover, the results indicated that the comprehensible, enjoyable, entertaining, interesting, credible, brief, updated, and visual aspects of content and content presentation were among the key drivers of students' use of SIS. The results also revealed that the accessibility of SIS was an important variable in students' use of these sources. The results further shed light on the connection between science education and science communication in terms of promoting science learning.
176

An open framework for developing distributed computing environments for multidisciplinary computational simulations

Bangalore, Purushotham Venkataramaiah. January 2003 (has links)
Thesis (Ph.D.)--Mississippi State University, Department of Computational Engineering. Title from title screen. Includes bibliographical references.
177

Decisions to delete : subjectivity in information deletion and retention

Macknet, David Taylor January 2012 (has links)
This research examines the decision-making process of computer users with reference to the deletion and preservation of digital objects. Of specific interest to this research is whether people provide different reasons for deleting or preserving various types of digital object dependent upon whether they are making such decisions at home or at work, whether such decisions are to any extent culturally determined, and whether they consider others in the course of making such decisions. This study considers the sociological implications of such decisions within organisations, and the various psychological errors to be expected when such decisions are made. It analyses the reasons given for these decisions within the contexts of home and work computing. It quantifies the frequency with which these activities are undertaken, the locations in which such objects are stored, and what aids the user in making such decisions. This research concludes that, while computer users generally desire their digital objects to be organised, they are not provided with adequate support from their computer systems in the decision to delete or preserve digital objects. It also concludes that such decisions are made without taking advantage of metadata, and that these decisions are made for the same reasons both at home and at work: there is no discernible difference between the two contexts in terms of the reasons given for such decisions. This study finds no correlation between subjects' culture and the reasons given for deletion/preservation decisions, nor does it find any correlation between age and such reasons. This study further finds that users are generally averse to conforming to records management policies within the organisation. For archivists and records managers, this research will be of particular interest in its consideration of the usage of and attitudes towards records management systems. 
Specifically, in organisations possessing formal records management systems, this research investigates the frequency with which individuals violate records management procedures and why they consider such violations to be necessary or desirable. This research also argues towards a more proceduralised decision-making process on the part of the ordinary user and a deeper integration between records management systems and computer operating systems. Designers of formal information systems should consider this research for its implications regarding the way in which decisions are affected by the context in which those decisions are made. Information systems design may be best suited to understanding---and ameliorating---certain types of cognitive error such that users are enabled to make better deletion and preservation decisions. User interface designers are uniquely positioned to address certain cognitive errors simply by changing how information is presented; this research provides insight into just what those errors are and offers suggestions towards addressing them. For sociologists concerned with institutional memory, this research should be of interest because the deletion and preservation decisions of members of an organisation are those which shape the collection of digital artefacts available for study. Understanding the reasons for these decisions is likely to inform what interpretations can be drawn from the study of such collections. Also of interest to sociologists will be the variety of reasons given for deletion or preservation, as those reasons and decisions are what shape, to some extent, institutional memory.
178

Automation bias and prescribing decision support : rates, mediators and mitigators

Goddard, Kate January 2012 (has links)
Purpose: Computerised clinical decision support systems (CDSS) are implemented within healthcare settings as a method to improve clinical decision quality, safety and effectiveness, and ultimately patient outcomes. Though CDSSs tend to improve practitioner performance and clinical outcomes, relatively little is known about the specific impact of inaccurate CDSS output on clinicians. Although there is high heterogeneity between CDSS types and studies, reviews of the ability of CDSS to prevent medication errors arising from incorrect decisions have generally been consistently positive, with CDSS working by improving clinical judgement and decision making. However, it is known that the occasional piece of incorrect advice may tempt users to reverse a correct decision, and thus introduce new errors. These systematic errors can stem from Automation Bias (AB), an effect which has had little investigation within the healthcare field, whereby users tend to use automated advice heuristically. Research is required to assess the rate of AB, identify the factors and situations involved in overreliance, and propose ways to mitigate risk and refine the appropriate usage of CDSS; this can promote awareness of the effect and help maximise the benefits gained from the implementation of CDSS. Background: A broad literature review was carried out, coupled with a systematic review of studies investigating the impact of automated decision support on user decisions across various clinical and non-clinical domains. This aimed to identify gaps in the literature and build an evidence-based model of reliance on Decision Support Systems (DSS), particularly a bias towards over-using automation. 
The literature review and systematic review revealed a number of postulates: that CDSS are socio-technical systems, and that the factors involved in CDSS misuse range from overarching social or cultural factors and individual cognitive variables to more specific technology design issues. However, the systematic review revealed a paucity of deliberate empirical evidence for this effect. The reviews identified the variables involved in automation bias, informing a conceptual model of overreliance, the initial development of an ontology for AB, and ultimately an empirical study investigating the potential factors involved: task difficulty, time pressure, CDSS trust, decision confidence, CDSS experience and clinical experience. The domain of primary care prescribing was chosen for the empirical study, due to the evidence supporting CDSS usefulness in prescribing and the high rate of prescribing error. Empirical Study Methodology: Twenty simulated prescribing scenarios with associated correct and incorrect answers were developed and validated by prescribing experts. An online Clinical Decision Support Simulator was used to display scenarios to users. NHS General Practitioners (GPs) were contacted via email through associates of the Centre for Health Informatics and through a healthcare mailing list company. Twenty-six GPs participated in the empirical study. The study was designed so that each participant viewed and gave prescriptions for 20 prescribing scenarios, 10 coded as "hard" and 10 coded as "medium" (N = 520 prescribing cases were answered overall). Scenarios were accompanied by correct advice 70% of the time and incorrect advice 30% of the time (in equal proportions in either task difficulty condition). Both the order of scenario presentation and the correct/incorrect nature of the advice were randomised to prevent order effects. The planned time pressure condition was dropped due to the low response rate. 
Results: To allow comparison with previous literature, which took overall decisions into account, individual cases were analysed (N = 520): the pre-advice accuracy rate of the clinicians was 50.4%, which improved to 58.3% post advice. The CDSS improved the decision accuracy in 13.1% of prescribing cases. The rate of AB, as measured by decision switches from correct pre advice to incorrect post advice, was 5.2% of all cases at a CDSS accuracy rate of 70%, leading to a net improvement of 8%. However, this by-case type of analysis may not enable generalisation of results (though it illustrates the rates in this specific situation); individual participant differences must be taken into account. By participant (N = 26), when advice was correct, decisions were more likely to be switched to a correct prescription; when advice was incorrect, decisions were more likely to be switched to an incorrect prescription. There was a significant correlation between decision switching and AB error. By participant, more immediate factors such as trust in the specific CDSS, decision confidence, and task difficulty influenced the rate of decision switching. Lower clinical experience was associated with more decision switching (but not a higher AB rate). The rate of AB was somewhat problematic to analyse due to the low number of instances; the effect could potentially have been greater. The between-subjects effect of time pressure could not be investigated due to the low response rate. Age, DSS experience and trust in CDSS in general were not significantly associated with decision switching. Conclusion: There is a gap in the current literature investigating inappropriate CDSS use, but the general literature supports an interactive multi-factorial aetiology for automation misuse. Automation bias is a consistent effect with various potential direct and indirect causal factors. 
It may be mitigated by altering advice characteristics to aid clinicians’ awareness of advice correctness and support their own informed judgement – this needs further empirical investigation. Users’ own clinical judgement must always be maintained, and systems should not be followed unquestioningly.
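The headline by-case figures reported above are mutually consistent, which a quick arithmetic check makes explicit (illustrative arithmetic only, not code from the thesis):

```python
# By-case rates reported in the study (N = 520 prescribing cases).
pre_accuracy = 0.504    # correct before seeing CDSS advice
post_accuracy = 0.583   # correct after seeing advice
improved_rate = 0.131   # incorrect -> correct switches (advice helped)
ab_switch_rate = 0.052  # correct -> incorrect switches (automation bias)

# Net effect of advice = beneficial switches - harmful switches,
# which should match the observed accuracy gain.
net = improved_rate - ab_switch_rate  # ~0.079, i.e. the ~8% net improvement
assert abs((post_accuracy - pre_accuracy) - net) < 0.005
```

The 13.1% gain and the 5.2% automation-bias rate thus account for the 50.4% to 58.3% accuracy shift.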
179

Science as ideology : the problem of science and the media reconsidered

Dornan, Chris. January 1987 (has links)
This study undertakes an analysis of the topic of 'science and the media' as it has been constituted in academic discourse since the end of the Second World War. It finds that concern has polarized into two distinct camps. The larger, a participant in the traditional project of North American media studies, blames the press for what it perceives as a widespread and deleterious "scientific illiteracy" on the part of the laity. The more recent, indebted to critical developments in social theory, the philosophy of science, and the study of mass communication, works to expose the assumptions on which press coverage of science has been based and the interests that have benefited. The thesis argues that the adequacy of the dominant concern to its object of analysis is at best suspect, but that its agitations have nevertheless been chiefly responsible for the form which popular science has predominantly assumed.
180

Modélisation et recherche de graphes visuels : une approche par modèles de langue pour la reconnaissance de scènes / Visual graph modeling and retrieval : a language modeling approach to scene recognition

Pham, Trong-Ton 02 December 2010 (has links) (PDF)
Content-based image indexing and retrieval (CBIR) systems need to consider several types of visual features and the spatial information among them (i.e., different points of view) for better image representation. This thesis presents a novel approach that extends the language modeling approach from information retrieval to the problem of graph-based image retrieval. Such a versatile graph model is needed to represent the multiple points of view of images. The graph-based framework is composed of three main stages. The image processing stage extracts image regions from the image and computes the numerical feature vectors associated with them. The graph modeling stage consists of two main steps: first, extracted image regions that are visually similar are grouped into clusters using an unsupervised learning algorithm, and each cluster is associated with a visual concept; second, the spatial relations between the visual concepts are generated. Each image is then represented by a visual graph composed of a set of visual concepts and a set of spatial relations among them. The graph retrieval stage retrieves images relevant to a new image query. Query graphs are generated following the graph modeling stage. Inspired by the language model for text retrieval, we extend this framework to match the query graph against the document graphs from the database. Images are then ranked based on the relevance values of the corresponding image graphs. Two instances of the visual graph model have been applied to the problems of scene recognition and robot localization. We performed experiments on two image collections: one containing 3,849 touristic images and another composed of 3,633 images captured by a mobile robot. The results show that the visual graph model outperforms the standard language model and the Support Vector Machine method by more than 10% in accuracy.
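A minimal sketch of the query-likelihood idea underlying the graph retrieval stage, restricted to the unigram (visual concept) part and ignoring spatial relations; the function names, toy data, and the Jelinek-Mercer smoothing choice are illustrative assumptions, not details from the thesis:

```python
import math
from collections import Counter

def lm_score(query_concepts, doc_concepts, collection_counts,
             collection_size, lam=0.8):
    # Jelinek-Mercer smoothed query likelihood over visual concepts:
    # log P(q|d) = sum over c in q of
    #   log( lam * P(c|doc) + (1 - lam) * P(c|collection) )
    # Smoothing with collection statistics avoids zero probabilities
    # for query concepts absent from a document graph.
    doc = Counter(doc_concepts)
    n = len(doc_concepts)
    score = 0.0
    for c in query_concepts:
        p_doc = doc[c] / n
        p_coll = collection_counts[c] / collection_size
        score += math.log(lam * p_doc + (1 - lam) * p_coll)
    return score

# Toy collection: counts of concept occurrences across all images.
coll = Counter({"sky": 5, "tree": 3, "road": 2})
query = ["sky", "tree"]
doc_a = ["sky", "sky", "tree"]    # shares the query concepts
doc_b = ["road", "road", "road"]  # does not
```

A document graph whose concept distribution matches the query scores higher; the full model additionally scores the spatial relations between concepts, which is omitted here.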
