171

New Directions in Symbolic Model Checking

d'Orso, Julien January 2003 (has links)
In today's computer engineering, requirements for high reliability have pushed the notion of testing to its limits. Many disciplines are moving, or have already moved, to more formal methods to ensure correctness. This is done by comparing the behavior of the system as it is implemented against a set of requirements. The ultimate goal is to create methods and tools that are able to perform this kind of verification automatically: this is called Model Checking.

Although the notion of model checking has existed for two decades, adoption by industry has been hampered by its poor applicability to complex systems. During the 90's, researchers introduced an approach to cope with large (even infinite) state spaces: Symbolic Model Checking. The key notion is to represent large (possibly infinite) sets of states by a small formula (as opposed to enumerating all members). In this thesis, we investigate applying symbolic methods to different types of systems:

Parameterized systems. We work within the framework of Regular Model Checking. In regular model checking, we represent a global state as a word over a finite alphabet. A transition relation is represented by a regular length-preserving transducer. An important operation is the so-called transitive closure, which characterizes composing a transition relation with itself an arbitrary number of times. Since completeness cannot be achieved, we propose methods of computing closures that work as often as possible.

Games on infinite structures. Infinite-state systems for which the transition relation is monotonic with respect to a well quasi-ordering on states can be analyzed. We lift the framework of well quasi-ordered domains toward games. We show that monotonic games are in general undecidable. We identify a subclass of monotonic games: downward-closed games. We propose an algorithm to analyze such games with a winning condition expressed as a safety property.

Probabilistic systems. We present a framework for the quantitative analysis of probabilistic systems with an infinite state space: given an initial state s_init, a set F of final states, and a rational Θ > 0, compute a rational ρ such that the probability of reaching F from s_init is between ρ and ρ + Θ. We present a generic algorithm and sufficient conditions for termination.
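To make the transitive-closure operation above concrete, here is a minimal explicit-state sketch (our own illustration, not the thesis's symbolic algorithm): it iterates relational composition to a fixpoint on a finite toy relation, a token-passing ring with three slots. The thesis computes such closures symbolically with regular transducers precisely so that they also apply to infinite or very large state sets, which this enumeration-based sketch does not attempt.

```python
# Explicit-state sketch of the transitive closure of a length-preserving
# relation on words. Illustrative only: symbolic model checking represents
# both the state sets and the relation as formulas/transducers instead.

def compose(r1, r2):
    """Relational composition: {(u, w) | exists v with (u, v) in r1 and (v, w) in r2}."""
    by_first = {}
    for v, w in r2:
        by_first.setdefault(v, []).append(w)
    return {(u, w) for (u, v) in r1 for w in by_first.get(v, ())}

def transitive_closure(rel):
    """Least fixpoint of rel ∪ rel∘rel ∪ ... (terminates only for finite relations)."""
    closure = set(rel)
    while True:
        new = closure | compose(closure, rel)
        if new == closure:
            return closure
        closure = new

# Toy token-passing ring of length 3: the token 't' moves one slot to the right.
step = {("tnn", "ntn"), ("ntn", "nnt"), ("nnt", "tnn")}
print(sorted(transitive_closure(step)))  # every rotation is reachable from every state
```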
172

Statistical Considerations in the Analysis of Matched Case-Control Studies. With Applications in Nutritional Epidemiology

Hansson, Lisbeth January 2001 (has links)
The case-control study is one of the most frequently used study designs in analytical epidemiology. This thesis focuses on some methodological aspects of the analysis of results from this kind of study. A population-based case-control study was conducted in northern Norway and central Sweden in order to study the associations of several potential risk factors with thyroid cancer. Cases and controls were individually matched, and the information on the factors under study was provided by means of a self-completed questionnaire. The analysis was conducted with logistic regression. No association was found with pregnancies, oral contraceptives, or hormone replacement after menopause. Early pregnancy and artificial menopause were associated with an increased risk, and cigarette smoking with a decreased risk, of thyroid cancer (paper I). The relation with diet was also examined. High consumption of a fat- and starch-rich diet was associated with an increased risk (paper II). Conditional and unconditional maximum likelihood estimation of the parameters in a logistic regression were compared through a simulation study. Conditional estimation had higher root mean square error but better model fit than unconditional estimation, especially for 1:1 matching, with relatively little effect of the proportion of missing values (paper III). Two common approaches to handling partial non-response in a questionnaire when calculating nutrient intake from diet variables were compared. In many situations it is reasonable to interpret omitted self-reports of food consumption as an indication of "zero consumption" (paper IV). The reproducibility of dietary reports was presented, and problems with its measurement and analysis were discussed. The most advisable approach to measuring repeatability is to look at different correlation methods. Among the factors affecting reproducibility, frequency and homogeneity of consumption are presumably the most important (paper V). Nutrient variables often have a mixed distribution, so transformation to normality can be troublesome. When analysing nutrients we therefore recommend comparing the result of a parametric test with that of an analogous distribution-free test. Different methods to transform nutrient variables to achieve normality were discussed (paper VI).
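As a rough illustration of the conditional approach compared in paper III, the following sketch (our own, with simulated data, not the thesis's simulation code) uses the standard fact that for 1:1 matched pairs the conditional likelihood reduces to a logistic model, without intercept, on the within-pair covariate differences:

```python
# Hedged sketch: conditional ML estimation for 1:1 matched case-control data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_pairs, beta_true = 500, 0.7
x1 = rng.normal(size=n_pairs)  # exposure of pair member 1
x2 = rng.normal(size=n_pairs)  # exposure of pair member 2

# Under the conditional model, P(member 1 is the case | one case per pair)
# equals expit(beta * (x1 - x2)).
p1 = 1.0 / (1.0 + np.exp(-beta_true * (x1 - x2)))
first_is_case = rng.random(n_pairs) < p1
d = np.where(first_is_case, x1 - x2, x2 - x1)  # case-minus-control difference

def neg_cond_loglik(beta):
    # Negative sum of log expit(beta * d) over matched pairs.
    return np.sum(np.log1p(np.exp(-beta[0] * d)))

fit = minimize(neg_cond_loglik, x0=[0.0])
print("conditional ML estimate of beta:", fit.x[0])  # should be close to 0.7
```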
173

Object Based Concurrency for Data Parallel Applications : Programmability and Effectiveness

Diaconescu, Roxana Elena January 2002 (has links)
Increased programmability for concurrent applications in distributed systems requires automatic support for some aspects of concurrent computing: the decomposition of a program into parallel threads, the mapping of threads to processors, the communication between threads, and synchronization among threads. Thus, a highly usable programming environment for data parallel applications strives to conceal data decomposition, data mapping, data communication, and data access synchronization.

This work investigates the problem of programmability and effectiveness for scientific, data parallel applications with irregular data layout. The complicating factor for such applications is their recursive or indirection-based data structure representation. Efficient parallel execution requires a data distribution and mapping that ensure data locality, but recursive and indirect representations yield poor physical data locality. We examine techniques for efficient, load-balanced data partitioning and mapping for irregular data layouts. Moreover, in the presence of non-trivial parallelism and data dependences, a general data partitioning procedure makes it harder to locate arbitrarily distributed data across address spaces. We formulate the general data partitioning and mapping problems and show how a general data layout can be used to access data across address spaces in a location-transparent manner.

Traditional data parallel models promote instruction-level or loop-level parallelism. Compiler transformations and optimizations for discovering and/or increasing parallelism in Fortran programs apply to regular applications. However, many data-intensive applications are irregular (sparse matrix problems, applications that use general meshes, etc.). Discovering and exploiting fine-grain parallelism for applications that use indirection structures (e.g. indirection arrays, pointers) is very hard, or even impossible. The work in this thesis explores a concurrent programming model that enables coarse-grain parallelism in a highly usable, efficient manner. Hence, it explores the issues of implicit parallelism in the context of objects as a means for encapsulating distributed data. The computation model results in a trivial SPMD (Single Program, Multiple Data) style, where the non-trivial parallelism aspects are solved automatically.

This thesis makes the following contributions:
- It formulates the general data partitioning and mapping problems for data parallel applications. Based on these formulations, it describes an efficient distributed data consistency algorithm.
- It describes a data parallel object model suitable for regular and irregular data parallel applications. Moreover, it describes an original technique to map data to processors so as to preserve locality. It also presents an inter-object consistency scheme that tries to minimize communication.
- It provides evidence of the efficiency of the data partitioning and consistency schemes. It describes a prototype implementation of a system supporting implicit data parallelism through distributed objects. Finally, it presents results showing that the approach is scalable on various architectures (e.g. Linux clusters, SGI Origin 3800).
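As a purely illustrative sketch of the locating problem described above (a toy example of ours, not the thesis's prototype and not its locality-preserving mapping technique), the snippet below block-partitions an irregular, indirection-addressed index set across a few "processes" and shows how any element can look up the owner of each node it references, which is what makes location-transparent access possible:

```python
# Sketch: block partition of an irregular index set plus a global owner map.
def block_partition(n_items, n_procs):
    """Assign each item to a process; contiguous blocks balanced to within one item."""
    base, extra = divmod(n_items, n_procs)
    owner, start = [0] * n_items, 0
    for p in range(n_procs):
        size = base + (1 if p < extra else 0)
        for i in range(start, start + size):
            owner[i] = p
        start += size
    return owner

# Indirection structure: each "element" references arbitrary node indices.
elements = [[0, 4, 7], [2, 3, 9], [5, 6, 8], [1, 4, 9]]
owner = block_partition(10, n_procs=3)

# A process that needs element e's nodes can resolve where each node lives:
for e, nodes in enumerate(elements):
    print(f"element {e}: node owners = {[owner[v] for v in nodes]}")
```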
174

Interactive Process Models

Jørgensen, Håvard D. January 2004 (has links)
Contemporary business process systems are built to automate routine procedures. Automation demands well-understood domains, repetitive processes, clear organisational roles, an established terminology, and predefined plans. Knowledge work is not like that. Plans for knowledge-intensive processes are elaborated and reinterpreted as the work progresses. Interactive process models are created and updated by the project participants to reflect evolving plans. The execution of such models is controlled by users and only partially automated. An interactive process system should
- enable modelling by end users,
- integrate support for ad-hoc and routine work,
- dynamically customise functionality and interfaces, and
- integrate learning and knowledge management in everyday work.

This thesis reports on an engineering project in which an interactive process environment called WORKWARE was developed. WORKWARE combines workflow and groupware. Following an incremental development method, multiple versions of the system were designed, implemented and used. In each iteration, usage experience, validation data, and the organisational science literature generated requirements for the next version.
175

Textual information retrieval : An approach based on language modeling and neural networks

Georgakis, Apostolos A. January 2004 (has links)
This thesis covers topics relevant to information organization and retrieval. The main objective of the work is to provide algorithms that can elevate the recall-precision performance of retrieval tasks in a wide range of applications, ranging from document organization and retrieval to web-document pre-fetching and, finally, clustering of documents based on novel encoding techniques. The first part of the thesis deals with document organization and retrieval using unsupervised neural networks, namely the self-organizing map, and statistical encoding methods for representing the available documents as numerical vectors. The objective of this part is to introduce a set of novel variants of the self-organizing map algorithm that address certain shortcomings of the original algorithm. In the second part of the thesis, the latencies perceived by users surfing the Internet are shortened through a novel transparent and speculative pre-fetching algorithm. The proposed algorithm relies on a model of the behaviour of the user browsing the Internet and predicts their future actions. In modeling the user's behaviour, the algorithm relies on the contextual statistics of the web pages visited by the user. Finally, the last chapter of the thesis provides preliminary theoretical results along with a general framework for current and future scientific work. The chapter describes the use of the Zipf distribution for document organization and the use of the AdaBoost algorithm to improve the performance of pre-fetching algorithms.
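As a small, hedged illustration of the kind of browsing statistics a speculative pre-fetcher can exploit (a generic first-order Markov predictor of our own, not the algorithm proposed in the thesis), consider:

```python
# Sketch: predict the next page from observed page-to-page transitions,
# then speculatively pre-fetch the most likely successor.
from collections import Counter, defaultdict

def train(history):
    """Count page-to-page transitions in the observed browsing history."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        counts[prev][nxt] += 1
    return counts

def prefetch_candidate(counts, current_page):
    """Return the most frequent successor of the current page, if any."""
    successors = counts.get(current_page)
    return successors.most_common(1)[0][0] if successors else None

history = ["index", "news", "sports", "index", "news", "weather", "index", "news", "sports"]
model = train(history)
print(prefetch_candidate(model, "news"))  # -> "sports"
```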
176

A Design Rationale for Pervasive Computing : User Experience, Contextual Change and Technical Requirements

Bylund, Markus January 2005 (has links)
The vision of pervasive computing promises a shift from information technology per se to what can be accomplished by using it, thereby fundamentally changing the relationship between people and information technology. In order to realize this vision, a large number of issues concerning user experience, contextual change, and technical requirements should be addressed. We provide a design rationale for pervasive computing that encompasses these issues, in which we argue that a prominent aspect of user experience is to provide user control, primarily founded in human values. As one of the more significant aspects of the user experience, we provide an extended discussion about privacy. With contextual change, we address the fundamental change in previously established relationships between the practices of individuals, social institutions, and physical environments that pervasive computing entails. Finally, issues of technical requirements refer to technology neutrality and openness, factors that we argue are fundamental for realizing pervasive computing.

We describe a number of empirical and technical studies, the results of which have helped to verify aspects of the design rationale as well as shaping new aspects of it. The empirical studies include an ethnographically inspired study focusing on information technology support for everyday activities, a study based on structured interviews concerning relationships between contexts of use and everyday planning activities, and a focus group study of laypeople's interpretations of the concept of privacy in relation to information technology. The first technical study concerns the model of personal service environments as a means for addressing a number of challenges concerning user experience, contextual change, and technical requirements. Two other technical studies relate to a model for device-independent service development and the wearable server as means to address issues of continuous usage experience and technology neutrality, respectively.
177

Discernibility and Rough Sets in Medicine: Tools and Applications

Øhrn, Aleksander January 2000 (has links)
This thesis examines how discernibility-based methods can be equipped to possess several qualities that are needed for analyzing tabular medical data, and how these models can be evaluated according to current standard measures used in the health sciences. To this end, tools have been developed that make this possible, and some novel medical applications have been devised in which the tools are put to use.

Rough set theory provides a framework in which discernibility-based methods can be formulated and interpreted, and also forms an appealing foundation for data mining and knowledge discovery. When the medical domain is targeted, several factors become important. This thesis examines some of these factors, and holds them up to the current state of the art in discernibility-based empirical modelling. Bringing together pertinent techniques, suitable adaptations of relevant theory for model construction and assessment are presented. Rough set classifiers are brought together with ROC analysis, and it is outlined how attribute costs and semantics can enter the modelling process.

ROSETTA, a comprehensive software system for conducting data analyses within the framework of rough set theory, has been developed. Under the hypothesis that the accessibility of such tools lowers the threshold for abstract ideas to migrate into concrete realization, this helps reduce the gap between theoreticians and practitioners, and enables existing problems to be attacked more easily. The ROSETTA system offers a set of flexible and powerful algorithms, and sets these in a user-friendly environment designed to support all phases of the discernibility-based modelling methodology. Researchers worldwide have already put the system to use in a wide variety of domains.

By and large, discernibility-based data analysis can be varied along two main axes: which objects in the universe of discourse we deem it necessary to discern between, and how discernibility among these objects is allowed to take place. Using ROSETTA, this thesis explores various facets of this in three novel and distinctly different medical applications:
- A method is proposed for identifying population subgroups for which expensive tests may be avoided; experiments with a real-world database on a cardiological prognostic problem suggest that significant savings are possible.
- A method is proposed for anonymizing medical databases with sensitive contents via cell suppression, thus helping to preserve patient confidentiality.
- Very simple rule-based classifiers are employed to diagnose acute appendicitis, and their performance is compared to that of a team of experienced surgeons. The added value of certain biochemical tests is also demonstrated.
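For readers unfamiliar with the discernibility notion underlying this work, the following toy sketch (ours alone; ROSETTA's actual algorithms are far richer, covering reducts, rule induction and ROC evaluation) computes a discernibility matrix for a small, made-up decision table, recording which condition attributes separate each pair of objects with different decisions:

```python
# Sketch: discernibility matrix for a toy decision table.
attributes = ["fever", "cough", "age_group"]
table = [  # (condition attribute values, decision)
    ({"fever": "yes", "cough": "yes", "age_group": "old"},   "sick"),
    ({"fever": "no",  "cough": "yes", "age_group": "young"}, "healthy"),
    ({"fever": "yes", "cough": "no",  "age_group": "old"},   "sick"),
    ({"fever": "no",  "cough": "no",  "age_group": "old"},   "healthy"),
]

def discernibility_matrix(table, attributes):
    """Attributes that discern each pair of objects with different decisions."""
    n, matrix = len(table), {}
    for i in range(n):
        for j in range(i + 1, n):
            (xi, di), (xj, dj) = table[i], table[j]
            if di != dj:  # only pairs we actually need to tell apart
                matrix[(i, j)] = {a for a in attributes if xi[a] != xj[a]}
    return matrix

for pair, attrs in discernibility_matrix(table, attributes).items():
    print(pair, sorted(attrs))
```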
179

Webbaserat system för dugga / Web-based quiz system

Larsson, Henrik, Björkegren, Mikael January 2006 (has links)
This is a 10-credit bachelor's thesis (C level) at Karlstad University. The goal of the project was to extend an existing system for web-based course tests so that it can also be used for small graded quizzes ("duggor"). Security was to be improved so that students must log in with a username and password when taking a quiz, and the results of the students taking quizzes were to be stored in a database. A further goal was to let students try the system and fill out a survey about what they think of a web-based quiz system.

The result is a system that handles both web-based course tests and web-based quizzes. We have modified the system so that students taking a quiz must enter a username and password. We also let the students of Datakommunikation 1 in the autumn term of 2005 try the system: they took a quiz using it, and afterwards filled out a survey in which we asked what they thought about taking quizzes online. After the students had tried the system and answered the survey questions, we concluded that most of them felt the system worked well and had a good user interface. The system should, however, be developed further before the university puts it into use.
180

Six Sigma och processförbättring : En fallstudie på Siemens Industrial Turbomachinery AB / Six Sigma and process improvement : A case study at Siemens Industrial Turbomachinery AB

Andersson, Patrik, Norén, Erik January 2009 (has links)
This report examines how Siemens Industrial Turbomachinery AB in Finspång has chosen to work with the process improvement method Six Sigma, a very popular method for carrying out process improvement projects and one that is very attractive to companies planning such work.

The method is built on statistics and on making well-informed decisions. This is done by gathering team members from the process to be improved and by taking measurements within that process. A project starts by defining the problem, continues by measuring the current process, then analyses the collected data, tries to devise solutions, and finally implements the solution judged to be best.

We carried out a qualitative case study at the company and interviewed more than a dozen people with varying degrees of familiarity with Six Sigma, in order to hear from people with different perspectives on the method. From this we learned that those who had taken part in an improvement project or been trained in Six Sigma amounted to barely a tenth of the company's employees, but that those who were familiar with the method knew it fairly well.

We go through a number of factors within Six Sigma and projects based on the method, and set them against related theories in order to draw conclusions.

Finally, we present our conclusions and closing reflections, in which we find that many of the problems with the method arise when it is not applied fully and organisations try to get by without committing the required resources, but that the method otherwise provides a stable framework for process improvement.
