161

Förändringsarbete : faktorer som påverkar en anställds förändringsbenägenhet vid införandet av en ny systemutvecklingsmetod [Change management : factors that influence an employee's readiness for change when a new systems development method is introduced]

Furåker, Linda January 2000

Today's society is changing at an ever-increasing pace. For companies and organisations to keep up with this development, changes and reorganisations are required there as well. For a change to achieve the desired result, however, certain factors should be taken into account. Some of these, such as information, participation and a sense of security, can to some extent be influenced by management, but an individual's personality can also determine whether a change succeeds or not.

One of the most important factors in any change effort is that the employees are told why the change is being carried out. It is important that those affected by a change can see its benefit.

The work is based on a case study carried out at Volvo IT, where the change that was implemented resulted in a new systems development method.
162

The Business Value of Data Warehouses : Opportunities, Pitfalls and Future Directions

Strand, Matthias January 2000

Organisations have spent billions of dollars (USD) on investments in data warehouses. Many have succeeded, but many have also failed. These failures are considered to be mostly of an organisational nature and not of a technological one, as might have been expected. Because of these failures, organisations struggle to derive business value from their data warehouse investments. Obtaining business value from data warehouses is necessary, since the investment is of such a magnitude that it is clearly visible in the balance sheet. In order to investigate how the business value may be increased, we have conducted an extensive literature study aimed at identifying opportunities and future directions that may alleviate the problem of low return on investment. To balance the work, we have also identified pitfalls that may hinder organisations from deriving business value from their data warehouses.

Based on the literature survey, we have identified and motivated possible research areas, which we consider relevant if organisations are to derive real business value from their data warehouses. These areas are:

* Integrating data warehouses in knowledge management.
* Data warehouses as a foundation for information data super stores.
* Using data warehouses to predict the need for business change.
* Aligning data warehouses and business processes.

As the areas are rather broad, we have also included examples of more specific research problems within each possible research area. Furthermore, we have given initial ideas regarding how to investigate those specific research problems.
163

Svenska patientjournaler på Internet är det möjligt? [Swedish patient records on the Internet: is it possible?]

Vallgren, Annika January 2001

Today it is possible to publish and retrieve information from databases on the Internet. In health care, the quality of care could be improved if patient records were available on the Internet. This study examines whether that is feasible under Swedish conditions. It aims to highlight the advantages and disadvantages of publishing patient records on the Internet, as well as the specific properties of patient records that must be taken into account in any such publication.

Information about a patient must be protected so that unauthorised parties can neither access nor alter it. It must also be available exactly when an authorised user requires it. Health care is a field where inadequate security can be life-threatening.

The Swedish laws that must be considered when patient records are made accessible on the Internet are: the Personal Data Act (Personuppgiftslagen), the Care Register Act (Lag om vårdregister), the Secrecy Act (Sekretesslagen), the Patient Records Act (Patientjournallagen) and the Health Data Register Act (Lag om hälsodataregister). These laws pose no obstacle to making patient records accessible on the Internet, provided that this can be done in a way that is safe for the patient.
164

Kunskapsöverföring : En teoretisk verklighet? [Knowledge transfer : a theoretical reality?]

Andersson, Johan, Serbner, Martin, Ståhl, Maria January 2008

The demands on today's companies have increased markedly in recent years. It is important for companies to be able to measure themselves constantly against, and preferably stay one step ahead of, their competitors. An important part of this is having the right kind of competence in different areas and positions within the company. To achieve this, it is common today for companies to arrange trainee programmes. This thesis describes the difficulties of transferring knowledge to a trainee and the problem areas surrounding that process. For a company to design a well-functioning trainee programme, it is very important to understand which goals the company wants to achieve by running one.

Our approach and choice of method was to interview people who had completed the case company's trainee programme. We formulated our research problem as: how is knowledge transferred to trainees within our case company today, and can it be improved?

In the course of the work we discovered shortcomings in the case company's trainee programme, but we also found many positive elements. The empirical section gave us much valuable knowledge that we drew on in the analysis and conclusion sections. In our conclusion we present various suggestions and recommendations for possible measures concerning the choice of methods for knowledge transfer.
165

Textual information retrieval : An approach based on language modeling and neural networks

Georgakis, Apostolos A. January 2004

This thesis covers topics relevant to information organization and retrieval. The main objective of the work is to provide algorithms that can elevate the recall-precision performance of retrieval tasks in a wide range of applications, ranging from document organization and retrieval to web-document pre-fetching and, finally, clustering of documents based on novel encoding techniques.

The first part of the thesis deals with the concept of document organization and retrieval using unsupervised neural networks, namely the self-organizing map, and statistical encoding methods for representing the available documents as numerical vectors. The objective of this section is to introduce a set of novel variants of the self-organizing map algorithm that address certain shortcomings of the original algorithm.

In the second part of the thesis, the latencies perceived by users surfing the Internet are shortened through a novel transparent and speculative pre-fetching algorithm. The proposed algorithm relies on a model of the user's browsing behaviour and predicts the user's future actions. In modeling this behaviour, the algorithm relies on the contextual statistics of the web pages visited by the user.

Finally, the last chapter of the thesis provides preliminary theoretical results along with a general framework for current and future scientific work. The chapter describes the usage of the Zipf distribution for document organization and the usage of the AdaBoost algorithm for elevating the performance of pre-fetching algorithms.
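As a concrete point of reference for the first part, the sketch below is a minimal classic self-organizing map trained on document vectors. It is only the textbook baseline, not any of the thesis's novel variants, and the grid size, schedules and function names are illustrative assumptions.

```python
import numpy as np

def train_som(docs, grid_h=10, grid_w=10, epochs=20, lr0=0.5, sigma0=3.0):
    """Fit a self-organizing map to document vectors (one row per document)."""
    dim = docs.shape[1]
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(grid_h, grid_w, dim))   # codebook vectors
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]              # grid coordinates
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                    # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5        # shrinking neighbourhood
        for v in docs:
            # Best-matching unit: node whose codebook vector is closest to v.
            d = np.linalg.norm(weights - v, axis=2)
            by, bx = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighbourhood centred on the BMU pulls nearby nodes toward v.
            h = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (v - weights)
    return weights
```

Trained on, say, TF-IDF vectors, documents mapped to nearby grid nodes end up topically related, which is the property such maps exploit for document organization and browsing.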
166

Object Based Concurrency for Data Parallel Applications : Programmability and Effectiveness

Diaconescu, Roxana Elena January 2002

Increased programmability for concurrent applications in distributed systems requires automatic support for some of the concurrent computing aspects. These are: the decomposition of a program into parallel threads, the mapping of threads to processors, the communication between threads, and synchronization among threads.

Thus, a highly usable programming environment for data parallel applications strives to conceal data decomposition, data mapping, data communication, and data access synchronization.

This work investigates the problem of programmability and effectiveness for scientific, data parallel applications with irregular data layout. The complicating factor for such applications is the recursive, or indirection-based, data structure representation. That is, an efficient parallel execution requires a data distribution and mapping that ensure data locality. However, recursive and indirect representations yield poor physical data locality. We examine techniques for efficient, load-balanced data partitioning and mapping for irregular data layouts. Moreover, in the presence of non-trivial parallelism and data dependences, a general data partitioning procedure makes it harder to locate arbitrarily distributed data across address spaces. We formulate the general data partitioning and mapping problems and show how a general data layout can be used to access data across address spaces in a location-transparent manner.

Traditional data parallel models promote instruction-level, or loop-level, parallelism. Compiler transformations and optimizations for discovering and/or increasing parallelism in Fortran programs apply to regular applications. However, many data intensive applications are irregular (sparse matrix problems, applications that use general meshes, etc.). Discovering and exploiting fine-grain parallelism for applications that use indirection structures (e.g. indirection arrays, pointers) is very hard, or even impossible.

The work in this thesis explores a concurrent programming model that enables coarse-grain parallelism in a highly usable, efficient manner. Hence, it explores the issues of implicit parallelism in the context of objects as a means for encapsulating distributed data. The computation model results in a trivial SPMD (Single Program, Multiple Data) style, where the non-trivial parallelism aspects are solved automatically.

This thesis makes the following contributions:

- It formulates the general data partitioning and mapping problems for data parallel applications. Based on these formulations, it describes an efficient distributed data consistency algorithm.

- It describes a data parallel object model suitable for regular and irregular data parallel applications. Moreover, it describes an original technique to map data to processors so as to preserve locality. It also presents an inter-object consistency scheme that tries to minimize communication.

- It brings evidence of the efficiency of the data partitioning and consistency schemes. It describes a prototype implementation of a system supporting implicit data parallelism through distributed objects. Finally, it presents results showing that the approach is scalable on various architectures (e.g. Linux clusters, SGI Origin 3800).
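To make the location-transparency idea concrete, here is a deliberately simplified sketch: a block partition assigns each element of a distributed structure to an owning process, and every read goes through the owner map, so the caller never needs to know where the data physically lives. The function names and the simulated remote fetch are our own illustrative assumptions, not the thesis's system.

```python
# Illustrative sketch only: block-partition a structure and resolve every
# access through an owner map (location transparency).

def block_partition(num_elems: int, num_procs: int) -> list[int]:
    """Map element index -> owning process, in contiguous blocks."""
    block = (num_elems + num_procs - 1) // num_procs  # ceiling division
    return [i // block for i in range(num_elems)]

def read(owner_map, local_store, elem: int, my_rank: int):
    """Location-transparent read: local access or a (simulated) remote fetch."""
    owner = owner_map[elem]
    if owner == my_rank:
        return local_store[elem]        # data is in our own address space
    return fetch_remote(owner, elem)    # would be real communication

def fetch_remote(owner: int, elem: int):
    """Stand-in for inter-process communication in this sketch."""
    raise NotImplementedError(f"element {elem} lives on process {owner}")
```

A real system would replace the naive block partition with a locality-preserving mapping of the kind the thesis develops (block partitions interact badly with indirection-based structures) and fetch_remote with actual message passing.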
167

Interactive Process Models

Jørgensen, Håvard D. January 2004

Contemporary business process systems are built to automate routine procedures. Automation demands well-understood domains, repetitive processes, clear organisational roles, an established terminology, and predefined plans. Knowledge work is not like that. Plans for knowledge intensive processes are elaborated and reinterpreted as the work progresses. Interactive process models are created and updated by the project participants to reflect evolving plans. The execution of such models is controlled by users and only partially automated. An interactive process system should

- Enable modelling by end users,
- Integrate support for ad-hoc and routine work,
- Dynamically customise functionality and interfaces, and
- Integrate learning and knowledge management in everyday work.

This thesis reports on an engineering project in which an interactive process environment called WORKWARE was developed. WORKWARE combines workflow and groupware. Following an incremental development method, multiple versions of the system were designed, implemented and used. In each iteration, usage experience, validation data, and the organisational science literature generated requirements for the next version.
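To make "user-controlled, partially automated execution" concrete, the sketch below models a plan whose tasks participants can add while work is in progress; routine tasks run automatically, while knowledge-work tasks wait for a user. This is a generic illustration under our own assumptions, not WORKWARE's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    automated: bool = False   # routine tasks the system can run by itself
    done: bool = False

@dataclass
class InteractiveProcess:
    tasks: list = field(default_factory=list)

    def add_task(self, name: str, automated: bool = False) -> None:
        """Participants may extend or reinterpret the plan at any time."""
        self.tasks.append(Task(name, automated))

    def step(self):
        """Advance the process: automate what can be automated,
        hand everything else to a user."""
        for task in self.tasks:
            if not task.done:
                if task.automated:
                    task.done = True                      # executed by the system
                else:
                    print(f"awaiting user: {task.name}")  # user decides completion
                return task
        return None

process = InteractiveProcess()
process.add_task("archive meeting notes", automated=True)
process.add_task("draft project plan")   # knowledge work, user-controlled
process.step()   # runs the routine task silently
process.step()   # prints: awaiting user: draft project plan
```

The point of the design is that the model itself stays editable during execution, which is exactly what distinguishes interactive process models from fully predefined workflow definitions.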
168

Discernibility and Rough Sets in Medicine: Tools and Applications

Øhrn, Aleksander January 2000

This thesis examines how discernibility-based methods can be equipped to possess several qualities that are needed for analyzing tabular medical data, and how these models can be evaluated according to current standard measures used in the health sciences. To this end, tools have been developed that make this possible, and some novel medical applications have been devised in which the tools are put to use.

Rough set theory provides a framework in which discernibility-based methods can be formulated and interpreted, and also forms an appealing foundation for data mining and knowledge discovery. When the medical domain is targeted, several factors become important. This thesis examines some of these factors, and holds them up to the current state of the art in discernibility-based empirical modelling. Bringing together pertinent techniques, suitable adaptations of relevant theory for model construction and assessment are presented. Rough set classifiers are brought together with ROC analysis, and it is outlined how attribute costs and semantics can enter the modelling process.

ROSETTA, a comprehensive software system for conducting data analyses within the framework of rough set theory, has been developed. Under the hypothesis that the accessibility of such tools lowers the threshold for abstract ideas to migrate into concrete realization, this aids in reducing the gap between theoreticians and practitioners, and enables existing problems to be more easily attacked. The ROSETTA system boasts a set of flexible and powerful algorithms, and sets these in a user-friendly environment designed to support all phases of the discernibility-based modelling methodology. Researchers world-wide have already put the system to use in a wide variety of domains.

By and large, discernibility-based data analysis can be varied along two main axes: which objects in the universe of discourse we deem it necessary to discern between, and how we define that discernibility among these objects is allowed to take place. Using ROSETTA, this thesis has explored various facets of this in three novel and distinctly different medical applications:

* A method is proposed for identifying population subgroups for which expensive tests may be avoided, and experiments with a real-world database on a cardiological prognostic problem suggest that significant savings are possible.

* A method is proposed for anonymizing medical databases with sensitive contents via cell suppression, thus helping to preserve patient confidentiality.

* Very simple rule-based classifiers are employed to diagnose acute appendicitis, and their performance is compared to that of a team of experienced surgeons. The added value of certain biochemical tests is also demonstrated.
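For readers unfamiliar with the discernibility matrix at the core of such methods, here is a minimal sketch: for every pair of objects with different decision values, it records the condition attributes on which the two objects differ. This is the textbook construction only, not ROSETTA's implementation, and the toy table is invented for illustration.

```python
from itertools import combinations

def discernibility_matrix(table, decisions):
    """For each pair of objects with different decisions, collect the
    indices of the condition attributes on which the objects differ."""
    matrix = {}
    for (i, row_i), (j, row_j) in combinations(enumerate(table), 2):
        if decisions[i] != decisions[j]:
            matrix[(i, j)] = {
                a for a, (x, y) in enumerate(zip(row_i, row_j)) if x != y
            }
    return matrix

# Toy decision table: rows are patients, columns are condition attributes.
table = [[1, 0, 1],
         [1, 1, 0],
         [0, 0, 1]]
decisions = [0, 1, 1]
print(discernibility_matrix(table, decisions))
# {(0, 1): {1, 2}, (0, 2): {0}}
```

Reducts, the minimal attribute sets that preserve all of this discernibility, can then be read off the matrix, and rule-based classifiers like those evaluated in the thesis are built from such reducts.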
169

Hur förankras en policy? : En studie av Stockholms stads informationssäkerhet [How is a policy anchored? : A study of the City of Stockholm's information security]

Granström, Carl, Mårtensson, Markus January 2005
No description available.
170

Statistical Considerations in the Analysis of Matched Case-Control Studies : With Applications in Nutritional Epidemiology

Hansson, Lisbeth January 2001

The case-control study is one of the most frequently used study designs in analytical epidemiology. This thesis focuses on some methodological aspects in the analysis of the results from this kind of study.

A population-based case-control study was conducted in northern Norway and central Sweden in order to study the associations of several potential risk factors with thyroid cancer. Cases and controls were individually matched, and the information on the factors under study was provided by means of a self-completed questionnaire. The analysis was conducted with logistic regression. No association was found with pregnancies, oral contraceptives or hormone replacement after menopause. Early pregnancy and artificial menopause were associated with an increased risk, and cigarette smoking with a decreased risk, of thyroid cancer (paper I). The relation with diet was also examined. High consumption of a fat- and starch-rich diet was associated with an increased risk (paper II).

Conditional and unconditional maximum likelihood estimation of the parameters in a logistic regression were compared through a simulation study. Conditional estimation had a higher root mean square error but better model fit than unconditional estimation, especially for 1:1 matching, with relatively little effect of the proportion of missing values (paper III). Two common approaches to handling partial non-response in a questionnaire when calculating nutrient intake from diet variables were compared. In many situations it is reasonable to interpret omitted self-reports of food consumption as an indication of "zero consumption" (paper IV).

The reproducibility of dietary reports was presented, and problems with its measurement and analysis were discussed. The most advisable approach to measuring repeatability is to look at different correlation methods. Among the factors affecting reproducibility, frequency and homogeneity of consumption are presumably the most important (paper V). Nutrient variables can often have a mixed distribution, and transformation to normality can therefore be troublesome. When analysing nutrients we therefore recommend comparing the result of a parametric test with an analogous distribution-free test. Different methods to transform nutrient variables to achieve normality were discussed (paper VI).
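The conditional estimation compared in paper III rests on a convenient fact worth making explicit: for 1:1 matching, the conditional likelihood reduces to ordinary logistic regression on the within-pair covariate differences (case minus control) with no intercept. The sketch below implements exactly that reduction; the simulated data and all parameter choices are our own illustrative assumptions, not the thesis's study.

```python
import numpy as np
from scipy.optimize import minimize

def clogit_1to1(x_case, x_ctrl):
    """Conditional logistic regression for 1:1 matched pairs.
    Maximises sum_i log sigmoid(beta . (x_case_i - x_ctrl_i)),
    i.e. logistic regression on pair differences with no intercept."""
    d = x_case - x_ctrl                          # within-pair differences
    def negloglik(beta):
        eta = d @ beta
        return np.logaddexp(0.0, -eta).sum()     # -sum log sigmoid(eta), stable
    res = minimize(negloglik, np.zeros(d.shape[1]), method="BFGS")
    return res.x                                 # estimated log odds ratios

# Simulate 200 matched pairs under the conditional model itself:
# given one case per pair, P(a is the case) = sigmoid(beta . (a - b)).
rng = np.random.default_rng(42)
beta_true = np.array([0.7, -0.3])
a = rng.normal(size=(200, 2))
b = rng.normal(size=(200, 2))
p = 1.0 / (1.0 + np.exp(-(a - b) @ beta_true))
swap = rng.random(200) >= p                      # b becomes the case when swapped
x_case = np.where(swap[:, None], b, a)
x_ctrl = np.where(swap[:, None], a, b)
print(clogit_1to1(x_case, x_ctrl))               # approximately (0.7, -0.3)
```

Because the matched-pair intercepts cancel out of the conditional likelihood, no nuisance parameters are estimated, which is precisely why conditional and unconditional estimation can behave so differently in small matched designs.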
