81

Change during an implementation of a voice recognition program : a case study with NU-sjukvården

Magnusson, Caroline, Bremert, Lina January 2005
No description available.
82

Object Based Concurrency for Data Parallel Applications : Programmability and Effectiveness

Diaconescu, Roxana Elena January 2002
Increased programmability for concurrent applications in distributed systems requires automatic support for several aspects of concurrent computing: the decomposition of a program into parallel threads, the mapping of threads to processors, the communication between threads, and the synchronization among threads. Thus, a highly usable programming environment for data parallel applications strives to conceal data decomposition, data mapping, data communication, and data access synchronization.

This work investigates the problem of programmability and effectiveness for scientific, data parallel applications with irregular data layout. The complicating factor for such applications is the recursive, or indirection, data structure representation. That is, an efficient parallel execution requires a data distribution and mapping that ensure data locality, but the recursive and indirect representations yield poor physical data locality. We examine techniques for efficient, load-balanced data partitioning and mapping for irregular data layouts. Moreover, in the presence of non-trivial parallelism and data dependences, a general data partitioning procedure complicates locating arbitrarily distributed data across address spaces. We formulate the general data partitioning and mapping problems and show how a general data layout can be used to access data across address spaces in a location-transparent manner.

Traditional data parallel models promote instruction-level, or loop-level, parallelism. Compiler transformations and optimizations for discovering and/or increasing parallelism in Fortran programs apply to regular applications. However, many data-intensive applications are irregular (sparse matrix problems, applications that use general meshes, etc.). Discovering and exploiting fine-grain parallelism for applications that use indirection structures (e.g. indirection arrays, pointers) is very hard, or even impossible. The work in this thesis explores a concurrent programming model that enables coarse-grain parallelism in a highly usable, efficient manner. Hence, it explores the issues of implicit parallelism in the context of objects as a means for encapsulating distributed data. The computation model results in a trivial SPMD (Single Program Multiple Data) style, where the non-trivial parallelism aspects are solved automatically.

This thesis makes the following contributions:

- It formulates the general data partitioning and mapping problems for data parallel applications. Based on these formulations, it describes an efficient distributed data consistency algorithm.
- It describes a data parallel object model suitable for regular and irregular data parallel applications. Moreover, it describes an original technique to map data to processors so as to preserve locality. It also presents an inter-object consistency scheme that tries to minimize communication.
- It provides evidence of the efficiency of the data partitioning and consistency schemes. It describes a prototype implementation of a system supporting implicit data parallelism through distributed objects. Finally, it presents results showing that the approach is scalable on various architectures (e.g. Linux clusters, SGI Origin 3800).
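As a rough illustration of the partitioning-and-mapping idea this abstract describes (not the thesis's actual algorithm; the round-robin assignment, class names, and simulated remote fetch are all invented for the sketch), distributed data can be reached location-transparently through an owner map:

```python
# Hypothetical sketch of partitioning with location-transparent access;
# not the thesis's implementation.

def block_partition(item_ids, num_procs):
    """Assign each item to a processor, balancing counts."""
    owner = {}
    for i, item in enumerate(sorted(item_ids)):
        owner[item] = i % num_procs          # round-robin for load balance
    return owner

class DistributedArray:
    """Each 'processor' holds only the items it owns; reads of
    non-local items are forwarded to the owner (simulated here)."""

    def __init__(self, rank, owner, all_data):
        self.rank = rank
        self.owner = owner                   # global item -> owning rank
        self.local = {k: v for k, v in all_data.items()
                      if owner[k] == rank}   # local partition only

    def read(self, key, peers):
        if self.owner[key] == self.rank:     # local access
            return self.local[key]
        return peers[self.owner[key]].local[key]  # simulated remote fetch

# Usage: 10 irregularly keyed items across 3 address spaces.
data = {f"node{i}": i * i for i in range(10)}
owner = block_partition(data, num_procs=3)
peers = {r: DistributedArray(r, owner, data) for r in range(3)}
print(peers[0].read("node7", peers))         # same call, local or remote
```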
83

"The clinical eye" : constructing and computerizing an anesthesia patient record

Beckerman, Carina January 2006
The overall purpose of this research has been to investigate what happens when somebody or something intervenes in a knowledge worker's everyday life. Empirically, the author has chosen to explore how an anesthesia patient record is constructed to be what it becomes and then computerized, and the implications of this for the anesthetist and the anesthesia nurse. The research takes place among a group of people who call themselves emergency people. Some of them think that the art of the performance will be at risk if the anesthesia patient record is computerized.

The author has used a theoretical framework integrating ideas about knowledge management with concepts from structuration theory and theories about sensemaking, representations and schema use. Integrating knowledge management with structuration theory makes it possible to capture the complexity of what takes place when a knowledge worker shuttles between transformation and routine in an organizational setting in the knowledge society. “The clinical eye” emerges as a concept that influences how an anesthetist searches for information, how knowledge is exercised in anesthesia and how a patient record should be designed. The author concludes that the clinical eye is a central concept for understanding how an anesthetist exercises his or her knowledge, how the content of a patient record is constructed and designed, and how reactions to a change evolve.

The author introduces two interrelated concepts, “knowledge structuring” and “knowledge domination”, that are considered important. Exercising knowledge is a structured activity: in our heads we make plans for what to do, how to do it and what to do next. When an organizational setting is structured, the knowledge that is exercised in this setting also becomes structured. An anesthetist exercises the practice of anesthesia in a structured order, in a certain space, during a certain time period. When upgrading and computerizing the anesthesia patient record, both a transformation and an additional structuring of how knowledge is exercised take place. The question then becomes how this new structuring influences the practice of performing anesthesia. In addition, the author theorizes that if the computerized patient record is conceptualized as a knowledge management system, the way it is used changes: many more services are included, and it is not “just” a patient record anymore. / Diss. Stockholm : Handelshögskolan i Stockholm, 2006
84

Web-distributed pedagogical multimedia production : a study of the interdisciplinary nature of design work

Vaktel, Andreas, Ohlsson, Johannes January 2005
No description available.
86

Comparison of databases – Is the law an obstacle?

Ankarberg, Alexander January 2006
Title: Comparison of databases – Is the law an obstacle?

Author: Alexander Ankarberg, Applied Information Science.

Tutor: Lars-Eric Ljung

Problem: Cross-running databases is becoming more and more significant as the information flow develops. There would be huge benefits if we started to use the technique that already exists, but the law is today an obstacle. So what would happen if the law were not so stern? My question is: why don't we cross-run databases more efficiently between parts of institutions?

Aim: The purpose of this essay is to evaluate why institutions do not cross-run databases, and to start a discussion. There are possibilities that we do not use today. One aim is also to find solutions so that we can start to use these techniques. The essay explains the fundamentals and discusses both the advantages and the disadvantages in depth.

Method: The author has approached the problem from two directions, induction and deduction, which combined constitute abduction. The hope is that this yields as many angles of approach as possible, and answers that are as complete as possible. The essay also includes an inquiry based on interviews with ordinary people.

Conclusions: The law is neither up to date nor made for today's technology. It is in some ways an obstacle to a more efficient system, and such a system could save enormous amounts of money for both the government and the common man. There is hope, though, and small revolutions happen every day. There are also ways to make things possible within the law and make the system more efficient: namely, with the consent of the person the information is about. Another possibility is security classes, putting a number on the sensitivity of each piece of information.
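As a toy illustration of the essay's concluding idea, consent-gated cross-matching of two registers might look like the following sketch (the registers, field names and values are all invented):

```python
# Hypothetical consent-gated cross-matching of two registers on a
# shared personal ID; illustrative only, all data invented.

tax_register = [
    {"pid": "19800101-1234", "income": 312000},
    {"pid": "19750505-5678", "income": 287000},
]
benefits_register = [
    {"pid": "19800101-1234", "benefit": 12000, "consent": True},
    {"pid": "19750505-5678", "benefit": 9000,  "consent": False},
]

def cross_match(left, right, key="pid"):
    """Join two registers on `key`, but only for records where the
    data subject has consented, as the essay's conclusion suggests."""
    consented = {r[key]: r for r in right if r.get("consent")}
    return [{**l, **consented[l[key]]}        # merged record
            for l in left if l[key] in consented]

for match in cross_match(tax_register, benefits_register):
    print(match)   # only the consenting person appears in the output
```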
87

Operational semantics for PLEX : a basis for safe parallelization

Lindhult, Johan January 2008
Lic. thesis, Västerås : Mälardalens högskola, 2008. / Pp. 75-79: bibliography.
88

A Pipeline for Automatic Lexical Normalization of Swedish Student Writings

Liu, Yuhan January 2018
In this thesis, we aim to explore the combination of different lexical normalization methods and provide a practical lexical normalization pipeline for Swedish student writings within the framework of SWEGRAM (Näsman et al., 2017). An important improvement in our implementation is that the pipeline design takes into account the unique morphological and phonological characteristics of the Swedish language. This kind of localization makes the system more robust for Swedish, at the cost of being less applicable to other languages in similar tasks. The core of the localization lies in a phonetic algorithm we designed specifically for the Swedish language and a compound-processing step for the Swedish compounding phenomenon. The proposed pipeline consists of four steps: preprocessing, identification of out-of-vocabulary words, generation of normalization candidates, and candidate selection. For each step we use different approaches. We perform experiments on the Uppsala Corpus of Student Writings (UCSW) (Megyesi et al., 2016), and evaluate the results in terms of precision, recall and accuracy measures. The techniques applied to the raw data and their impacts on the final result are presented. In our evaluation, we show that the pipeline can be useful in the lexical normalization task and that our phonetic algorithm is effective for the Swedish language.
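A minimal sketch of the four-step shape such a pipeline might take (an invented toy, not the thesis's SWEGRAM implementation; the lexicon, candidate generation and scoring are all placeholder assumptions standing in for the thesis's phonetic algorithm and compound processing):

```python
# Toy four-step normalization pipeline: preprocess -> OOV detection ->
# candidate generation -> candidate selection. Illustrative only.
import difflib

LEXICON = {"jag", "också", "mycket", "roligt"}   # stand-in for a real lexicon

def preprocess(text):
    return text.lower().split()                  # tokenize, lowercase

def is_oov(token):
    return token not in LEXICON                  # out-of-vocabulary check

def candidates(token):
    # Stand-in for edit-distance + phonetic-key lookup over the lexicon.
    return difflib.get_close_matches(token, LEXICON, n=3, cutoff=0.6)

def select(token, cands):
    # Stand-in scoring: pick the closest candidate, keep token if none.
    return cands[0] if cands else token

def normalize(text):
    return " ".join(select(t, candidates(t)) if is_oov(t) else t
                    for t in preprocess(text))

print(normalize("jag hade osgå myket roligt"))
# -> "jag hade också mycket roligt" ("hade" kept: no close candidate)
```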
89

On High-Dimensional Transformation Vectors

Feuchtmüller, Sven January 2018
No description available.
90

Semantic Text Matching Using Convolutional Neural Networks

Wang, Run Fen January 2018
Semantic text matching is a fundamental task for many applications in Natural Language Processing (NLP). Traditional methods using term frequency-inverse document frequency (TF-IDF) to match exact words in documents have one strong drawback: TF-IDF is unable to capture semantic relations between closely related words, which leads to disappointing matching results. Neural networks have recently been used for various applications in NLP, and have achieved state-of-the-art performance on many tasks. Recurrent Neural Networks (RNN) have been tested on text classification and text matching, but did not yield any remarkable results, because RNNs work more effectively on short texts than on long documents. In this paper, Convolutional Neural Networks (CNN) are applied to match texts in a semantic aspect. The model uses word embedding representations of two texts as inputs to the CNN construction to extract the semantic features between the two texts, and gives a score as output indicating how certain the CNN model is that they match. The results show that after some tuning of the parameters the CNN model could produce accuracy, precision, recall and F1-scores all over 80%. This is a great improvement over the previous TF-IDF results, and further improvements could be made by using dynamic word vectors, better pre-processing of the data, generating larger and more feature-rich data sets, and further tuning of the parameters.
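A rough sketch of this kind of matching architecture (not the thesis's actual model; the framework choice, vocabulary size, filter counts, pooling and scoring head are all assumed for illustration) might look like this in PyTorch:

```python
# Hypothetical CNN text matcher: shared embedding + Conv1d encoder for
# each text, concatenated features scored by a linear layer. Sketch only.
import torch
import torch.nn as nn

class CNNMatcher(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, n_filters=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.score = nn.Linear(2 * n_filters, 1)

    def encode(self, tokens):                     # tokens: (batch, seq_len)
        x = self.embed(tokens).permute(0, 2, 1)   # -> (batch, emb, seq)
        x = torch.relu(self.conv(x))              # extract n-gram features
        return x.max(dim=2).values                # max-pool over positions

    def forward(self, text_a, text_b):
        pair = torch.cat([self.encode(text_a), self.encode(text_b)], dim=1)
        return torch.sigmoid(self.score(pair))    # match probability

# Usage with dummy token IDs for two texts of different lengths:
model = CNNMatcher()
a = torch.randint(0, 10000, (2, 20))              # batch of 2, length 20
b = torch.randint(0, 10000, (2, 25))
print(model(a, b))                                # scores in (0, 1)
```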
