151

On the feasibility of automatic segmentation with active contour models in image databases for shape extraction

Smestad, Ole Marius January 2005 (has links)
In this thesis the image segmentation system EDISON was tested against an automatic version of snake, an algorithm based on active contour models. The two algorithms were tested against each other to see whether an automatic snake algorithm could be feasible for use in an image database for shape extraction. The conducted tests showed that EDISON yielded the best results, and that the snake algorithm requires further work before it can be considered.
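As background, a snake in the classical sense evolves a contour v(s) = (x(s), y(s)) by minimising an energy functional that balances internal smoothness against attraction to image features. The formulation below is the standard textbook one, shown only for orientation; the abstract does not state which variant the thesis implemented.

```latex
E_{\text{snake}} = \int_0^1 \Big[ \tfrac{1}{2}\big(\alpha(s)\,\lvert v'(s)\rvert^2 + \beta(s)\,\lvert v''(s)\rvert^2\big) + E_{\text{image}}\big(v(s)\big) \Big]\,ds,
\qquad E_{\text{image}}\big(v(s)\big) = -\lvert \nabla I\big(v(s)\big)\rvert^2
```

The internal terms penalise stretching and bending of the contour, while the image term pulls the contour towards strong edges.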
152

Preserving privacy in UbiCollab: Extending privacy support in a ubiquitous collaborative environment

Braathen, Anders Magnus, Rasmussen, Hans Steien January 2005 (has links)
UbiCollab is a platform that supports the development of cooperative applications for collaboration in a ubiquitous environment. The platform enables entities of different types and technologies to work together and share a common set of resources. In a collaborative setting, trust is crucial for creating bonds between the different participants and the system. People using these kinds of systems need to feel secure and trust the system enough to give personal information away, and to feel that they can control the use of this gathered information. By personal information we mean name, title, email etc., but also location or the type of task the user is performing within the system. This thesis explores multiple identities in ubiquitous collaboration as a mechanism for improving the privacy of UbiCollab. The thesis also explores the building and displaying of a reputation from past collaborative experiences in connection with the different identities. To realize these mechanisms the system allows anonymous access to services by communicating through a privacy proxy. UbiCollab uses a privacy policy description engine that enables negotiation on how private data is gathered and used by the system. The different identities are supplied with a set of preferences that describes what actions the system is allowed to perform on their personal data. This provides a way to give the user control over the gathering and sharing of personal information. The policy description is based on an adaptation of the P3P standard, designed to suit policy descriptions in a service-based architecture. Privacy extensions to existing or new services can easily be added by including a reference to where the policies can be found. As a counterpart to the P3P policies, the P3P Preference Exchange Language (APPEL) has been incorporated into the platform to give users a way to state their privacy preferences. The adapted API has been redefined to better suit the development of UbiCollab applications. The resulting prototype demonstrates the use of these privacy mechanisms and their value to the UbiCollab platform.
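The abstract does not expose the platform's API, but the core negotiation it describes — checking a service's data-collection policy against the preferences attached to one of the user's identities — can be illustrated with a minimal, hypothetical sketch. Class and method names below are invented for illustration and are not taken from UbiCollab, P3P or APPEL.

```java
import java.util.Map;
import java.util.Set;

/** Hypothetical sketch: does a service's policy stay within what one identity allows? */
public class PolicyMatcher {

    /** Policy: purposes the service requests for each data item,
     *  e.g. "location" -> {"alarm-routing", "statistics"}. */
    public record ServicePolicy(Map<String, Set<String>> requestedPurposes) {}

    /** Preferences of one identity: purposes the user allows per data item. */
    public record IdentityPreferences(Map<String, Set<String>> allowedPurposes) {}

    /** Accept only if every requested (item, purpose) pair is allowed by the identity. */
    public static boolean isAcceptable(ServicePolicy policy, IdentityPreferences prefs) {
        for (var entry : policy.requestedPurposes().entrySet()) {
            Set<String> allowed = prefs.allowedPurposes().get(entry.getKey());
            if (allowed == null || !allowed.containsAll(entry.getValue())) {
                return false; // the privacy proxy would then deny or anonymise the request
            }
        }
        return true;
    }
}
```

In this sketch the decision is a simple subset check; a real P3P/APPEL engine evaluates structured policies and preference rules, which is what the adapted engine in the thesis provides.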
153

Dynamic indexes vs. static hierarchies for substring search

Grimsmo, Nils January 2005 (has links)
This report explores the problem of substring search in a dynamic document set. The operations supported are document inclusion, document removal and queries. This is a well explored field for word indexes, but not for substring indexes. The contribution of this report is the exploration of a multi-document dynamic suffix tree (MDST), which is compared with a hierarchy of static indexes based on suffix arrays. Only memory-resident data structures are explored. The concept of a "generalised suffix tree", indexing a static set of strings, is used in bioinformatics. The implemented data structure adds online document inclusion, update and removal, linear in the size of the single document. Various models for the hierarchy of static indexes are explored, some of which give faster updates and some faster search. For the static suffix arrays, the BPR construction algorithm [SS05] is used, which is the fastest known. This algorithm is about 3-4 times faster than the implemented suffix tree construction. Two tricks for speeding up search and hit reporting in the suffix array are also explored: using a start index for the binary search, and a direct map from global addresses to document IDs and local addresses. The tests show that the MDST is much faster than the hierarchic indexes when the index freshness requirement is absolute and the documents are small. The tree uses about three times as much memory as the suffix arrays. When there is a large number of hits, the suffix arrays are slightly faster at reporting hits, as they have better memory locality. Given enough primary memory, the MDST seems to be the best choice in general.
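To make the suffix-array side of the comparison concrete, the sketch below shows the underlying idea: all occurrences of a pattern correspond to a contiguous range of the sorted suffixes, found by binary search. It is not the thesis's implementation (which uses BPR construction, a start index for the binary search, and a global-to-document address map); the naive construction here is only for illustration.

```java
import java.util.ArrayList;
import java.util.List;

/** Minimal sketch of substring search over a static suffix array. */
public class SuffixArraySearch {

    /** Naive O(n^2 log n) construction, sufficient for a sketch;
     *  BPR or another fast algorithm would be used in practice. */
    static Integer[] buildSuffixArray(String text) {
        Integer[] sa = new Integer[text.length()];
        for (int i = 0; i < sa.length; i++) sa[i] = i;
        java.util.Arrays.sort(sa, (a, b) -> text.substring(a).compareTo(text.substring(b)));
        return sa;
    }

    /** Reports the starting positions of every occurrence of pattern in text. */
    static List<Integer> search(String text, Integer[] sa, String pattern) {
        int lo = 0, hi = sa.length;                       // find first suffix >= pattern
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            if (text.substring(sa[mid]).compareTo(pattern) >= 0) hi = mid;
            else lo = mid + 1;
        }
        List<Integer> hits = new ArrayList<>();           // matching suffixes are contiguous
        for (int i = lo; i < sa.length && text.startsWith(pattern, sa[i]); i++) hits.add(sa[i]);
        return hits;
    }

    public static void main(String[] args) {
        String text = "abracadabra";
        Integer[] sa = buildSuffixArray(text);
        System.out.println(search(text, sa, "abra"));     // prints [7, 0] in suffix-array order
    }
}
```

In a multi-document setting, each reported global position would additionally be mapped to a document ID and local offset, which is the direct-map trick the report evaluates.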
154

Measuring on Large-Scale Read-Intensive Web sites

Ruud, Jørgen, Tveiten, Olav Gisle January 2005 (has links)
In this thesis we have continued the work started in our project, i.e. to explore the practical and economic feasibility of assessing the scalability of a read-intensive large-scale Internet site. To do this we have installed the main components of a news site using open source software. The scalability exploration has been driven by the scaling scenario of increased article size. We have managed to assess the scalability of our system in a good way, but it has been more time consuming and knowledge demanding than expected. This means that such a study is less feasible than we expected, but if the experiences and the method of this thesis are applied, such a study should become more feasible. We have assessed the scalability of a general web architecture, which means that our approach can be applied to all read-intensive web sites and not just the one examined in [prosjekt]. This general focus is one of the strengths of this thesis. One of the objectives of our thesis was to build a resource function workbench (RFW), a framework that aids in measuring and data interpretation. We consider the RFW one of the most important outcomes of this thesis, because it should be easy to reuse, thus saving time for future projects and making such a study more feasible. One of the most important findings is that the impact of increased article size on throughput is bigger than expected. A small increase in article size, especially image size, leads to a clear decrease in throughput. This reduction is larger for small image sizes than for large ones. This has wide implications for news sites, as many of them expect to increase article size and still use the same system. Another major finding is that it is hard to predict the effects a scale-up of one or more components (a non-uniform scaling) will have on throughput. This is because throughput has different levels of dependency on the components at different image/text sizes. As we have seen, the effect of the scale-up on throughput varied between the different image sizes (an increase in throughput by a factor of 4.5 at image size 100 KB, but only by a factor of 3.2 at image size 300 KB). In our case we performed a non-uniform scaling, where we increased the CPU by a factor of 2.4 and the disk by a factor of 1.1. For some image and text sizes, the overall throughput increased by a factor of 10, but for others there was almost no improvement. The implication this has for web sites is that it is hard for them to predict how system alterations will affect overall throughput, since this depends on the current image and article size. It was an open question whether or not a dynamic model of the system could be constructed and solved. We have managed to construct the dynamic model, but the predictions it makes are somewhat crude. However, we feel that creating a dynamic model has been very useful, and we believe it can make valuable predictions if the accuracy of the parameters is improved. This should be feasible, as our measurements should be easy to recreate. This thesis has been very demanding, because scalability requires a wide field of knowledge (statistics, hardware, software, programming, measurements etc.). This has made the work very instructive, as we have gained knowledge in many different aspects of computer science. Ideally, the thesis should have had a larger time span, as there are many time-consuming phases on which it would have been interesting to spend more time.
As a consequence of this short time span there is further work which can be conducted in order to gain further valuable knowledge.
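The size-dependent speedups reported above can be illustrated with a simple bottleneck-style utilisation model. This is an assumption made only for illustration, not the dynamic model actually built in the thesis, and the service demands are invented numbers rather than measurements.

```java
/** Hedged illustration: throughput limited by the busiest resource (bottleneck law). */
public class BottleneckSketch {

    /** Max throughput (requests/s) = 1 / max(per-request service demand over resources). */
    static double maxThroughput(double cpuDemandSec, double diskDemandSec) {
        return 1.0 / Math.max(cpuDemandSec, diskDemandSec);
    }

    public static void main(String[] args) {
        // Small articles: CPU-bound, so scaling CPU by 2.4 and disk by 1.1 helps a lot.
        double before = maxThroughput(0.020, 0.010);
        double after  = maxThroughput(0.020 / 2.4, 0.010 / 1.1);
        System.out.printf("small articles: x%.1f%n", after / before); // ~x2.2

        // Large articles: disk-bound, so the same non-uniform scale-up barely helps.
        before = maxThroughput(0.020, 0.060);
        after  = maxThroughput(0.020 / 2.4, 0.060 / 1.1);
        System.out.printf("large articles: x%.1f%n", after / before); // ~x1.1
    }
}
```

The point of the sketch is only that the same component scale-up yields very different gains depending on which resource dominates at a given article size, which is the behaviour the measurements exhibit.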
155

Integrity checking of operating systems with respect to kernel level malware

Melcher, Tobias January 2005 (has links)
Kernel-mode rootkits represent a considerable threat to any computer system, as they provide an intruder with the ability to hide the presence of his malicious activity. These rootkits make changes to the operating system's kernel, thereby providing particularly stealthy hiding techniques. This thesis addresses the problem of collecting reliable information from a system compromised by kernel-mode rootkits. It looks at the possibility of using virtualization as a means to facilitate kernel-mode rootkit detection through integrity checking. It describes several areas within the Linux kernel that are commonly subverted by kernel-mode rootkits. Further, it introduces the reader to the concept of virtualization, before the kernel-mode rootkit threat is addressed through a description of their hiding methodologies. Some of the existing methods for malware detection are also described and analysed. A number of general requirements, which need to be satisfied by a general model enabling kernel-mode rootkit detection, are identified. A model addressing these requirements is suggested, and a framework implementing the model is set up.
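The integrity-checking idea can be sketched as comparing cryptographic digests of security-critical kernel data against a baseline recorded at a trusted point in time. The sketch below reads dumps from files purely for illustration; the thesis's model instead relies on virtualization to inspect the guest from outside, and the file names, paths and baseline values here are invented.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;
import java.util.Map;

/** Hedged sketch of integrity checking: hash kernel regions and compare to a baseline. */
public class IntegrityChecker {

    static String sha256(Path p) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(Files.readAllBytes(p));
        return HexFormat.of().formatHex(digest);
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical baseline: region dump -> digest recorded while the system was trusted.
        Map<String, String> baseline = Map.of(
                "syscall_table.dump", "placeholder-known-good-digest-1",
                "kernel_text.dump",   "placeholder-known-good-digest-2");

        for (var entry : baseline.entrySet()) {
            String current = sha256(Path.of("/var/lib/ic", entry.getKey())); // invented path
            if (!current.equals(entry.getValue())) {
                System.out.println("possible kernel-mode tampering in " + entry.getKey());
            }
        }
    }
}
```

The value of doing this from a virtual machine monitor, as the thesis proposes, is that the measurement code itself is outside the reach of a rootkit running in the guest kernel.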
156

A study of practices for domain and architecture knowledge management in a Scandinavian software company

Person, Anders January 2005 (has links)
Knowledge management has become increasingly popular in the software industry. Knowledge is one of a software company's main assets, and large amounts of resources are being used to manage and re-use this knowledge. Management of architectural knowledge is also important, especially when dealing with software development, because a team with good architectural understanding has a good chance of efficiently creating re-usable assets. In this thesis I describe how a Scandinavian software company deals with knowledge management. I have also analyzed the management of architectural knowledge. These subjects have been viewed from both the managers' and the employees' points of view, and I have compared the intentions of the managers with how the employees actually perform. The research question, "How is domain- and architecture-knowledge managed in a Scandinavian software company?", is answered by describing and analyzing data gathered through interviews in such a company. The thesis is concluded by summaries of the discussion and the analysis. My findings in the researched areas suggest that knowledge management practices are important but often underestimated. The company in which I have conducted my research does have a QA team and a re-use culture; this culture is described, but the thesis also points out areas in which the company can improve. The case study is based upon qualitative analysis of the results from eight interviews conducted among managers and developers in the company. In the thesis I discuss the findings and report on issues such as company culture, routines and goals in the areas of knowledge management. My findings have been generalized, and hopefully other companies can make use of them to improve their own knowledge management processes and goals.
157

Virtual control rooms assisted by 3D sound

Sjøvoll, Håvard January 2005 (has links)
A large amount of complex and urgent information needs timely attention in an operational environment. This requires specialized systems that provide immediate access to accurate and pertinent information when troubleshooting or controlling abnormal situations. This study is a collaboration between NTNU and the Statoil Research Center. It aims at designing and developing a prototype to improve the operator's awareness of alarms by means of a multi-modal virtual environment. This is achieved by creating an extension to the virtual model SnøhvitSIM, using a spatial auditory display in addition to visual elements. The auditory display provides (1) spatial information about new alarms and (2) information about the overall state of the facility. The system also offers (3) beacons to aid navigation within the virtual environment. To reach these goals, a comprehensive literature study was carried out, investigating similar concepts and various techniques for developing such systems. The development was prioritized in the following order, according to the main objectives: (1) design, (2) functionality and (3) architecture. Within the design process, the main focus has been on the spatial auditory display. The development of the prototype proved successful. The feedback on the prototype reflects its value as a showcase for future development, containing new and potentially very effective solutions for tomorrow's alarm management systems.
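One small ingredient of a spatial auditory display is mapping an alarm's position relative to the listener to a direction and a pair of channel gains. The sketch below uses the standard constant-power pan law and invented class names; it is not the thesis's implementation, which builds on SnøhvitSIM and a full 3D audio pipeline rather than simple stereo panning.

```java
/** Hedged sketch: derive an azimuth and constant-power stereo gains for an alarm sound. */
public class AlarmPanner {

    /** Azimuth in radians of the alarm as seen from the listener (x right, z forward). */
    static double azimuth(double listenerX, double listenerZ, double alarmX, double alarmZ) {
        return Math.atan2(alarmX - listenerX, alarmZ - listenerZ);
    }

    /** Constant-power pan: map azimuth in [-pi/2, pi/2] to left/right gains. */
    static double[] panGains(double azimuthRad) {
        double clamped = Math.max(-Math.PI / 2, Math.min(Math.PI / 2, azimuthRad));
        double angle = (clamped + Math.PI / 2) / 2;             // 0 .. pi/2
        return new double[]{Math.cos(angle), Math.sin(angle)};  // left, right
    }

    public static void main(String[] args) {
        double az = azimuth(0, 0, 5, 5);                        // alarm ahead and to the right
        double[] g = panGains(az);
        System.out.printf("azimuth %.1f deg, gains L=%.2f R=%.2f%n",
                Math.toDegrees(az), g[0], g[1]);                // ~45.0 deg, L=0.38 R=0.92
    }
}
```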
158

Clustering as applied to a general practitioner's record

Lunde, Christina January 2005 (has links)
The electronic patient record is primarily used as a way for clinicians to remember what has happened during the care of a patient. The electronic record also introduces an additional possibility, namely the use of computer-based methods for searching, extracting and interpreting data patterns from the patient data. Potentially, such methods can help to reveal undiscovered medical knowledge from the patient record. This project aims to evaluate the usefulness of applying clustering methods to the patient record. Two clustering tasks are designed and accomplished, one that considers clustering of ICPC codes and one that considers medical certificates. The clusterings are performed by use of hierarchical clustering and k-means clustering. Distance measures used for the experiments are lift correlation, the Jaccard coefficient and the Euclidean distance. Three indices for clustering validation are implemented and tested, namely the Dunn index, the modified Hubert Γ index and the Davies-Bouldin index. The work also points to the importance of dimensionality reduction for high dimensional data, for which PCA is utilised. The strategies are evaluated according to what degree they retrieve well-known medical knowledge, since a strategy that retrieves a high degree of well-known knowledge is more likely to identify unknown medical information than a strategy that retrieves a lower degree of known information. The experiments show that, for some of the methods, clusters are formed that represent interesting medical knowledge, which indicates that clustering of a general practitioner's record can potentially contribute to further medical research.
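Of the methods listed, k-means with the Euclidean distance is the simplest to sketch. The data and initial centroids below are invented two-dimensional points rather than patient records, and the hierarchical clustering, Jaccard and lift measures, PCA and the validation indices used in the project are not shown.

```java
import java.util.Arrays;

/** Minimal k-means sketch with Euclidean distance. */
public class KMeansSketch {

    static double dist(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(s);
    }

    static int[] kMeans(double[][] points, double[][] centroids, int iterations) {
        int[] assign = new int[points.length];
        for (int it = 0; it < iterations; it++) {
            // Assignment step: each point goes to its nearest centroid.
            for (int p = 0; p < points.length; p++) {
                int best = 0;
                for (int c = 1; c < centroids.length; c++)
                    if (dist(points[p], centroids[c]) < dist(points[p], centroids[best])) best = c;
                assign[p] = best;
            }
            // Update step: move each centroid to the mean of its assigned points.
            for (int c = 0; c < centroids.length; c++) {
                double[] sum = new double[points[0].length];
                int n = 0;
                for (int p = 0; p < points.length; p++)
                    if (assign[p] == c) { n++; for (int d = 0; d < sum.length; d++) sum[d] += points[p][d]; }
                if (n > 0) for (int d = 0; d < sum.length; d++) centroids[c][d] = sum[d] / n;
            }
        }
        return assign;
    }

    public static void main(String[] args) {
        double[][] points = {{1, 1}, {1.2, 0.8}, {5, 5}, {5.1, 4.9}};
        double[][] centroids = {{0, 0}, {6, 6}};                      // naive initialisation
        System.out.println(Arrays.toString(kMeans(points, centroids, 10))); // prints [0, 0, 1, 1]
    }
}
```

Validation indices such as Dunn or Davies-Bouldin would then score how compact and well separated the resulting clusters are.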
159

Evaluation of Intelligent Transport System Applications

Berg, Morten January 2005 (has links)
Most people in the developed world depend on transportation, both privately and in business. Overpopulated roads lead to problems like traffic inefficiency, e.g. congestion, and traffic accidents. Intelligent Transport Systems (ITS) deal with the integration of information technology into the transport system. Through this, applications for improving traffic efficiency, traffic safety and the driving experience are introduced. This report looks at ITS systems in general, explores an international standard under development for communication systems designed for these kinds of applications (CALM), looks at a project that aims to use this standard to create an international system for ITS applications (CVIS), and explores some of the proposed applications for this system. A few applications have been thoroughly described and analysed through the use of use cases. This has resulted in a set of test cases from which the applications can be evaluated. Through the execution of these test cases it would be possible to draw conclusions on whether or not the proposed applications will be viable in a real-world situation.
160

Web Application Security

Foss, Julie-Marie, Ingvaldsen, Nina January 2005 (has links)
As more and more sensitive information is entering web-based applications, and thus is available through a web browser, securing these systems is of increasing importance. A software system accessible through the web is continuously exposed to threats, and is accessible to anyone who would like to attempt a break-in. These systems cannot rely only on external measures like separate network zones and firewalls for security. Symantec's1 Internet Security Threat Report [34] is published every six months. Main findings in the last one published show that there is an increase in threats to confidential information and more attacks aimed at web applications. Almost 48 percent of all vulnerabilities documented in the last six months of 2004 were vulnerabilities in web applications. Security principles that one should pay attention to when developing web applications do exist. This report has taken a look at existing guidelines and provides an independent guide to developing secure web applications. These guidelines will be published on the homepage of The Centre for Information Security2 (SIS), www.norsis.no. The report also describes how a web application has been developed using the provided security guidelines as reference points. Relevant vulnerabilities and threats were identified and described. Misuse cases have related the various threats to specific system functionality, and a risk analysis ranks the threats in order to see which ones are most urgent. During the design phase, the application areas exposed to threats with a high rank from the risk analysis have been at the center of attention. This is also the case in the implementation phase, where countermeasures to some of these threats are provided on the Java platform. The implemented solutions can be adapted by others developing applications on this platform. The report comes to the conclusion that the use of security guidelines throughout the entire development process is useful when developing a secure system. 1Symantec works with information security, providing software, appliances and services designed to secure and manage IT infrastructures [33]. 2The Centre for Information Security (SIS) is responsible for coordinating activities related to Information and Communications Technology (ICT) security in Norway. The centre receives reports about security related incidents from companies and departments, and is working on obtaining an overall impression of threats towards Norwegian ICT systems [30].
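The abstract does not list which Java-platform countermeasures were implemented; as one common example of the kind of guideline such a report covers, a parameterised query prevents SQL injection by binding user input instead of concatenating it into the query string. The table and column names below are invented for the sketch.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

/** Illustrative SQL-injection countermeasure on the Java platform. */
public class UserLookup {

    public static boolean userExists(Connection conn, String username) throws SQLException {
        // Vulnerable pattern (do NOT do this):
        //   "SELECT id FROM users WHERE name = '" + username + "'"
        String sql = "SELECT id FROM users WHERE name = ?";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, username);      // input is bound as data, never parsed as SQL
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next();
            }
        }
    }
}
```

Similar input-validation and output-encoding measures address other web-application threats such as cross-site scripting, which is why guideline-driven development pays off across the whole threat ranking.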
