301

Clustering as applied to a general practitioner's record

Lunde, Christina January 2005 (has links)
The electronic patient record is primarily used as a way for clinicians to remember what has happened during the care of a patient. The electronic record also introduces an additional possibility, namely the use of computer-based methods for searching, extracting and interpreting data patterns from the patient data. Potentially, such methods can help to reveal undiscovered medical knowledge from the patient record. This project aims to evaluate the usefulness of applying clustering methods to the patient record. Two clustering tasks are designed and accomplished, one that considers clustering of ICPC codes and one that considers medical certificates. The clusterings are performed by use of hierarchical clustering and k-means clustering. Distance measures used for the experiments are Lift correlation, the Jaccard coefficient and the Euclidean distance. Three indices for clustering validation are implemented and tested, namely the Dunn index, the modified Hubert $\Gamma$ index and the Davies-Bouldin index. The work also points to the importance of dimensionality reduction for high-dimensional data, for which PCA is utilised. The strategies are evaluated according to the degree to which they retrieve well-known medical knowledge, on the reasoning that a strategy that retrieves a high degree of well-known knowledge is more likely to identify unknown medical information than a strategy that retrieves a lower degree of known information. The experiments show that, for some of the methods, clusters are formed that represent interesting medical knowledge, which indicates that clustering of a general practitioner's record can potentially constitute a contribution to further medical research.
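As an illustration of the validation step described above, the following sketch (not taken from the thesis; the class name, helper names and ICPC code sets are invented) computes a Jaccard distance over sets of ICPC codes and a Dunn index for a given clustering:

```java
import java.util.*;

/**
 * Illustrative sketch: Jaccard distance between sets of ICPC codes and a
 * Dunn index over a given clustering. All names and data are invented.
 */
public class ClusterValidationSketch {

    /** Jaccard distance: 1 - |A intersect B| / |A union B|. */
    static double jaccardDistance(Set<String> a, Set<String> b) {
        if (a.isEmpty() && b.isEmpty()) return 0.0;
        Set<String> inter = new HashSet<>(a);
        inter.retainAll(b);
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        return 1.0 - (double) inter.size() / union.size();
    }

    /** Dunn index: smallest between-cluster distance divided by largest cluster diameter. */
    static double dunnIndex(List<List<Set<String>>> clusters) {
        double minBetween = Double.MAX_VALUE;
        double maxDiameter = 0.0;
        for (int i = 0; i < clusters.size(); i++) {
            List<Set<String>> ci = clusters.get(i);
            // Diameter: largest pairwise distance inside cluster i.
            for (int p = 0; p < ci.size(); p++)
                for (int q = p + 1; q < ci.size(); q++)
                    maxDiameter = Math.max(maxDiameter, jaccardDistance(ci.get(p), ci.get(q)));
            // Separation: smallest distance between members of clusters i and j.
            for (int j = i + 1; j < clusters.size(); j++)
                for (Set<String> x : ci)
                    for (Set<String> y : clusters.get(j))
                        minBetween = Math.min(minBetween, jaccardDistance(x, y));
        }
        return maxDiameter == 0.0 ? Double.MAX_VALUE : minBetween / maxDiameter;
    }

    public static void main(String[] args) {
        // Two toy clusters of patient contacts, each described by a set of ICPC codes.
        List<Set<String>> respiratory = List.of(Set.of("R74", "R05"), Set.of("R74", "R96"));
        List<Set<String>> musculoskeletal = List.of(Set.of("L03"), Set.of("L03", "L84"));
        System.out.println(dunnIndex(List.of(respiratory, musculoskeletal)));
    }
}
```

A higher Dunn index indicates compact, well-separated clusters, which is why it is useful for comparing the clustering strategies against each other.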
302

Evaluation of Intelligent Transport System Applications

Berg, Morten January 2005 (has links)
Most people in the developed world depend on transportation, both privately and in business. Overpopulated roads lead to problems like traffic inefficiency, e.g. congestion, and traffic accidents. Intelligent Transport Systems (ITS) deals with the integration of information technology into the transport system. Through this, applications for improving traffic efficiency, traffic safety and the driving experience are introduced. This report looks at ITS in general, explores an international standard under development for communication systems designed for these kinds of applications (CALM), looks at a project that aims to use this standard to create an international system for ITS applications (CVIS), and explores some of the proposed applications for this system. A few applications have been thoroughly described and analysed through the use of use cases. This has resulted in a set of test cases from which the applications can be evaluated. Through the execution of these test cases it would be possible to draw conclusions on whether or not the proposed applications will be viable in a real-world situation.
303

Web Application Security

Foss, Julie-Marie, Ingvaldsen, Nina January 2005 (has links)
As more and more sensitive information is entering web-based applications, and thus is available through a web browser, securing these systems is of increasing importance. A software system accessible through the web is continuously exposed to threats, and is accessible to anyone who would like to attempt a break-in. These systems cannot rely only on external measures like separate network zones and firewalls for security. Symantec's¹ Internet Security Threat Report [34] is published every six months. Main findings in the most recently published report show that there is an increase in threats to confidential information and more attacks aimed at web applications. Almost 48 percent of all vulnerabilities documented in the last six months of 2004 were vulnerabilities in web applications. Security principles that one should pay attention to when developing web applications do exist. This report has taken a look at existing guidelines, and provides an independent guide to developing secure web applications. These guidelines will be published at the homepage of The Centre for Information Security² (SIS), www.norsis.no. The report also describes how a web application has been developed using the provided security guidelines as reference points. Relevant vulnerabilities and threats were identified and described. Misuse cases have related the various threats to specific system functionality, and a risk analysis ranks the threats in order to see which ones are most urgent. During the design phase, the application areas exposed to threats with a high rank from the risk analysis have been at the centre of attention. This is also the case in the implementation phase, where countermeasures to some of these threats are provided on the Java platform. The implemented solutions can be adapted by others developing applications on this platform. The report comes to the conclusion that the use of security guidelines throughout the entire development process is useful when developing a secure system.
¹ Symantec works with information security, providing software, appliances and services designed to secure and manage IT infrastructures [33].
² The Centre for Information Security (SIS) is responsible for coordinating activities related to Information and Communications Technology (ICT) security in Norway. The centre receives reports about security related incidents from companies and departments, and is working on obtaining an overall impression of threats towards Norwegian ICT systems [30].
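The abstract does not name the specific Java-platform countermeasures that were implemented. As an illustration of the kind of countermeasure such guidelines typically recommend, the sketch below uses a parameterized JDBC query rather than string concatenation to defend against SQL injection; the class, table and column names are hypothetical.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LoginDao {

    private final Connection connection;

    public LoginDao(Connection connection) {
        this.connection = connection;
    }

    /**
     * Looks up a user by name. The user-supplied value is bound as a
     * parameter, so it is treated purely as data and never as SQL, which
     * closes the classic injection hole of string-concatenated queries.
     */
    public boolean userExists(String username) throws SQLException {
        String sql = "SELECT 1 FROM users WHERE username = ?";
        try (PreparedStatement stmt = connection.prepareStatement(sql)) {
            stmt.setString(1, username);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next();
            }
        }
    }
}
```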
304

Role-Based Information Ranking and Access Control

Stenbakk, Bjørn-Erik Sæther, Øie, Gunnar René January 2005 (has links)
This thesis presents a formal role model based on a combination of approaches towards role-based access control. This model is used both for access control and for information ranking. Purpose: Healthcare information is required by law to be strictly secured. Thus an access control policy is needed, especially when this information is stored in a computer system. Roles, instead of just users, have been used for enforcing access control in computer systems. When a healthcare employee is granted access to information, only the relevant information should be presented by the system, providing a better overview and highlighting critical information stored among less important data. The purpose of this thesis is to enable efficiency and quality improvements in healthcare by using IT solutions that address both access control and information highlighting. Methods: We developed a formal role model in a previous project. It has been manually tested, and some possible design choices were identified. The project report pointed out that more work was required, in the form of making design choices, implementing a prototype, and extending the model to comply with the Norwegian standard for electronic health records. In preparing this thesis, we reviewed literature about the extensions that we wanted to make to that model. This included deontic logic, delegation and temporal constraints. We made decisions on some of the possible design choices. Some of the topics that were presented in the previous project are also re-introduced in this thesis. The theories are explained through examples, which are later used as a basis for an illustrating scenario. The theory and scenario were used for requirement elicitation for the role model, and for validating the model. Based on these requirements a formal role model was developed. To comply with the Norwegian EHR standard the model includes delegation and context-based access control. An access control list was also added to allow patients to limit or deny access to their record information for any individual. To validate the model, we implemented parts of the model in Prolog and tested it with data from the scenario. Results: The test results show that the model ranks information and controls access to it correctly, thus validating the implemented parts of the model. Other results are a formal model, an executable implementation of parts of the model, recommendations for model design, and the scenario. Conclusions: Using the same role model for access control and information ranking works, and allows flexible ways to define policies and information needs.
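The thesis itself validates a formal model implemented in Prolog; purely as a minimal illustration of the core idea of letting one role definition drive both access control and information ranking, the Java sketch below uses invented role names, record sections and relevance weights:

```java
import java.util.*;
import java.util.stream.Collectors;

/**
 * Illustrative sketch: a single role definition decides both which record
 * sections may be read (access control) and in which order they are shown
 * (information ranking). Roles, sections and weights are invented examples.
 */
public class RoleBasedRanking {

    record Role(String name, Map<String, Integer> sectionRelevance) {
        boolean mayRead(String section) {
            return sectionRelevance.containsKey(section);
        }
    }

    static List<String> visibleSectionsRanked(Role role, Collection<String> recordSections) {
        return recordSections.stream()
                .filter(role::mayRead)   // access control: drop sections the role may not read
                .sorted((a, b) -> Integer.compare(
                        role.sectionRelevance().get(b),
                        role.sectionRelevance().get(a)))   // ranking: most relevant first
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Role nurse = new Role("nurse", Map.of("medication", 3, "nursing-notes", 5));
        List<String> record = List.of("medication", "psychiatric-notes", "nursing-notes");
        System.out.println(visibleSectionsRanked(nurse, record));
        // -> [nursing-notes, medication]; psychiatric-notes is filtered out entirely.
    }
}
```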
305

Making substitution matrices metric

Anfinsen, Jarle January 2005 (has links)
With the emergence and growth of large databases of information, efficient methods for storage and processing are becoming increasingly important. The existence of a metric distance measure between data entities enables efficient index structures to be applied when storing the data. Unfortunately, this is often not the case. Amino acid substitution matrices, which are used to estimate similarities between proteins, do not yield metric distance measures. Finding efficient methods for converting a non-metric matrix into a metric one is therefore highly desirable. In this work, the problem of finding such conversions is approached by embedding the data contained in the non-metric matrix into a metric space. The embedding is optimized according to a quality measure which takes the original data into account, and a distance matrix is then derived using the metric distance function of the space. More specifically, an evolutionary scheme is proposed for constructing such an embedding. The work shows how a coevolutionary algorithm can be used to find a spatial embedding and a metric distance function which try to preserve as much of the proximity structure of the non-metric matrix as possible. The evolutionary scheme is compared to three existing embedding algorithms. Some modifications to the existing algorithms are proposed, with the purpose of handling the data in the non-metric matrix more efficiently. At a higher level, the strategy of deriving a metric distance function from a spatial embedding is compared to an existing algorithm which enforces metricity by manipulating the data in the non-metric matrix directly (the triangle fixing algorithm). The methods presented and compared are general in the sense that they can be applied in any case where a non-metric matrix must be converted into a metric one, regardless of how the data in the non-metric matrix was originally derived. The proposed methods are tested empirically on amino acid substitution matrices, and the derived metric matrices are used to search for similarity in a database of proteins. The results show that the embedding approach outperforms the triangle fixing approach when applied to matrices from the PAM family. Moreover, the evolutionary embedding algorithms perform best among the embedding algorithms. In the case of the PAM250 scoring matrix, a metric distance matrix is found which is more sensitive than the mPAM250 matrix presented in a recent paper. Possible advantages of choosing one method over another are shown to be unclear in the case of matrices from the BLOSUM family.
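As background to what "making a matrix metric" means, the following sketch (not drawn from the thesis; the distance values are placeholders) checks the metric axioms, in particular the triangle inequality, over a symmetric distance matrix such as one derived from a substitution matrix:

```java
/**
 * Illustrative metricity check: a distance matrix is metric only if the
 * diagonal is zero, values are non-negative and symmetric, and no triple
 * violates the triangle inequality d(i,k) <= d(i,j) + d(j,k).
 * The toy matrix below is invented.
 */
public class MetricCheck {

    static boolean isMetric(double[][] d) {
        int n = d.length;
        for (int i = 0; i < n; i++) {
            if (d[i][i] != 0.0) return false;                               // zero diagonal
            for (int j = 0; j < n; j++) {
                if (d[i][j] < 0.0 || d[i][j] != d[j][i]) return false;      // non-negativity, symmetry
                for (int k = 0; k < n; k++) {
                    if (d[i][k] > d[i][j] + d[j][k]) return false;          // triangle inequality
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        double[][] toy = {
            {0, 2, 9},
            {2, 0, 3},
            {9, 3, 0},   // 9 > 2 + 3, so this toy matrix is not metric
        };
        System.out.println(isMetric(toy));   // false
    }
}
```

The approaches compared in the thesis differ in how they repair such violations: either by re-embedding the entities in a metric space, or by adjusting the offending entries directly (triangle fixing).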
306

MOWAHS - Optimised support for XML in mobile environments

Walla, Anders Kristian Harang January 2005 (has links)
This report describes a prototype middleware system for optimising transfer and processing times of XML-based data between mobile, heterogeneous clients, supporting servers and context providers. The system achieves these objectives by compressing or compacting the XML data in different ways and by using different parsing techniques. Two such techniques are examined more thoroughly: tag redundancy reduction and binary compression. These optimisation techniques are implemented in a fully functioning XML data optimising system, and their effectiveness is tested and compared. A long-term goal is discussed and considered in relation to these techniques: to develop a set of heuristic rules that will allow the system to determine dynamically which optimisation methods are most efficient at any given time, based on available context data. The prototype system described is developed in Java, with a client for mobile devices written in Java2ME.
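A minimal sketch, not from the thesis, of the binary-compression idea: gzip-compressing an XML payload before transfer and reporting both sizes. The sample document is invented; a real payload would come from the servers and context providers described above.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

/**
 * Illustrative sketch of the binary-compression technique: gzip an XML
 * payload before sending it to a mobile client and print both sizes.
 */
public class XmlCompressionSketch {

    static byte[] gzip(byte[] raw) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buffer)) {
            gz.write(raw);
        }
        return buffer.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        String xml = "<contacts>"
                + "<contact><name>Example</name><phone>12345678</phone></contact>"
                + "<contact><name>Example</name><phone>12345678</phone></contact>"
                + "</contacts>";
        byte[] raw = xml.getBytes(StandardCharsets.UTF_8);
        byte[] compressed = gzip(raw);
        System.out.printf("raw: %d bytes, gzipped: %d bytes%n", raw.length, compressed.length);
    }
}
```

Tag redundancy reduction, the other technique examined, instead rewrites the document itself (for example by shortening or factoring out repeated element names) before any byte-level compression is applied.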
307

Towards improving an organization's ability to procure software intensive systems : A survey

Engene, Knut Steinar January 2005 (has links)
This report presents a three-step investigation conducted to identify problems and challenges experienced by small and medium-sized organizations procuring software intensive systems. Archival research is carried out to see if the available procurement guidelines are applicable to small and medium-sized organizations. Data has been collected through questionnaires and interviews with the organizations' employees responsible for software procurements. The quantitative data has been analyzed using statistical methods, in an attempt to identify the main weaknesses in the current procurement procedures. In addition, the qualitative data are analyzed to complement the findings made from the quantitative data. Results indicate that the organizations that participated in the survey seldom follow a predefined procedure when they execute software procurements. However, organizations that do have a defined, formalized procurement procedure are significantly more satisfied with their procurements. In addition, risk management is seldom integrated in software procurements, despite the fact that the organizations to some extent consider software procurement a risky activity. Recommendations derived from the survey results are offered to increase the organizations' ability to procure and use software intensive systems.
308

An improved web-based solution for specifying transaction models for CAGISTrans

Bjørge, Thomas Eugen January 2005 (has links)
Transactions have been used for several decades to handle concurrent access to data in databases. These transactions adhere to a strict set of transactional rules that ensure that the correctness of the database is maintained. But transactions are also useful in other settings, such as supporting cooperative work over computer networks like the Internet. However, the original transaction model is too strict for this. To enable cooperation between transactions on shared objects, a framework for specifying and executing transaction models adapted to the environment in which they are running has been developed. Additionally, a web-based user interface for the specification of transaction models for the framework has also been created. In this thesis we look at how the process of specifying transaction models for the framework can be improved. More specifically, we start by carefully reviewing the current web-based solution for specifying transaction models. In our review we focus on usability, design and the technical aspects of the solution. We then continue with a thorough look at Web Services in the context of the transaction model framework. Our main objective at this stage is evaluating the possibility of implementing a new solution for specifying transaction models using Web Services. The last part of our work is the actual implementation of an improved application for specifying transaction models. This implementation is based on the results from our evaluation of the current solution and our evaluation of Web Services. We identified several issues in our review of the current solution. The main problem is that it is difficult for the user to get a good overview of the transaction model she is creating during the specification process. This is due to the lack of a visual representation of the model. The specification process is also very tedious, containing a large number of steps, a number we feel can be reduced. The technical aspects of the solution also have a lot of room for improvement. The overall design can easily be improved, and utilizing different technologies would make the application less error prone and easier to maintain and update. We also reached the conclusion that Web Services is not an ideal technology for a transaction model specification application. The main problem is that the client needs to have a complete overview of the specification process, leading to a lot of duplication of data between the client and the web service. In the end this situation leads to a very complex web service that does not improve the transaction model specification process. Based on our results, we decided to implement a web-based solution for specifying transaction models. Our solution is similar to the original one, but we had a strong focus on improving its shortcomings, both on the usability side and the technical side. This meant focusing on giving the user a good overview of the transaction model during the specification process and also reducing the number of steps in the process. Additionally, we put a lot of effort into developing a solution that is based on technological best practices, leading to a solution that is less error prone than the original solution. It should also be easier to maintain and update.
309

EventSeer: Testing Different Approaches to Topical Crawling for Call for Paper Announcements

Brennhaug, Knut Eivind January 2005 (has links)
The goal of the Eventseer project is to build a digital library of call for paper announcements. Today, call for papers are collected from different mailing lists; the aim of this work is to develop topical crawlers so that Eventseer may also collect call for paper announcements from the Web.
310

Identification of biomedical entities from Medline abstracts using a dictionary-based approach

Skuland, Magnus January 2005 (has links)
The aim of this paper was to develop a system for identification of biomedical entities, such as protein and gene names, from a corpus of Medline abstracts. Another aim was to extract the most relevant terms from the set of identified biomedical terms and make them readily presentable to an end-user. The developed prototype, named iMasterThesis, uses a dictionary-based approach to the problem. A dictionary, consisting of 21K gene names and 425K protein names, was constructed in an automatic fashion. With the realization of the protein name dictionary as a multi-level tree structure of hash tables, the approach tries to facilitate a more flexible and relaxed matching scheme than previous approaches. The system was evaluated against a gold standard consisting of 101 expert-annotated Medline abstracts. It is capable of identifying protein and gene names from these abstracts with 10% recall and 14% precision. It seems clear that, for further improvement of the obtained results, the quality of the dictionary needs to be increased, possibly through manual inspection by domain experts. A graphical user interface, presenting an end-user with the most relevant terms identified, has been developed as well.
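A minimal sketch, making no assumptions about the actual iMasterThesis implementation, of the general idea of a multi-level tree of hash tables for matching multi-word names in tokenised text: each level maps one token to the next level, and a flag marks complete dictionary names. The dictionary entries and example sentence are invented.

```java
import java.util.*;

/**
 * Illustrative dictionary matcher: a trie whose nodes are hash tables keyed
 * by tokens, supporting longest-match lookup of multi-word protein names.
 */
public class DictionaryMatcherSketch {

    static class Node {
        final Map<String, Node> children = new HashMap<>();
        boolean isName;   // true if the path from the root spells a full dictionary name
    }

    private final Node root = new Node();

    void add(String name) {
        Node node = root;
        for (String token : name.toLowerCase().split("\\s+")) {
            node = node.children.computeIfAbsent(token, t -> new Node());
        }
        node.isName = true;
    }

    /** Returns the token length of the longest dictionary name starting at position start, or 0. */
    int longestMatch(List<String> tokens, int start) {
        Node node = root;
        int best = 0;
        for (int i = start; i < tokens.size(); i++) {
            node = node.children.get(tokens.get(i).toLowerCase());
            if (node == null) break;
            if (node.isName) best = i - start + 1;
        }
        return best;
    }

    public static void main(String[] args) {
        DictionaryMatcherSketch dict = new DictionaryMatcherSketch();
        dict.add("tumor necrosis factor");
        dict.add("p53");
        List<String> tokens = List.of("binding", "of", "tumor", "necrosis", "factor", "to", "p53");
        for (int i = 0; i < tokens.size(); i++) {
            int len = dict.longestMatch(tokens, i);
            if (len > 0) System.out.println(String.join(" ", tokens.subList(i, i + len)));
        }
    }
}
```

A more relaxed matching scheme of the kind the abstract describes would additionally normalise tokens (case, hyphenation, Greek letters) before the hash-table lookup at each level.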
