
Software Defect Analysis: An Empirical Study of Causes and Costs in the Information Technology Industry

Kristiansen, Jan Maximilian Winther January 2010
The area of software defects is not thoroughly studied in current research, even though defects are estimated to be one of the most expensive problems in industry. Hence, some researchers characterise the lack of research as a scandal within software engineering. Little research has investigated the root causes of defects, even though classification schemes exist which aim to classify the what, where and why of software defects. We investigate the root causes of software defects through both qualitative and quantitative methods.

We collected defect reports from three different types of projects in the defect tracking system of Company X. The first project developed a general core of functionality which other projects could use. The second project was aimed at the mass-software market, while the third produced software tailored to the needs of a single client. These defect reports were analysed by both qualitative and quantitative methods. The qualitative methods were based on grounded theory and sought to establish a theory of why some defects require extensive effort to correct, through analysis of the discussions in the defect reports. The quantitative methods were used to describe differences between defects which required extensive or little effort to correct.

In the qualitative analysis, we found four main root causes which explain why a group of defects require extensive effort to correct: difficulty in determining the location of the defect, long discussion or clarification of the defect, incorrect corrections introducing new defects, and implementation of missing functionality or re-implementation of existing functionality. A comparison between the four root causes and project types revealed that the root causes were influenced by the project types. The first project had a larger degree of discussion and incorrect corrections than the second and third projects. The second and third projects were more concerned with hard-to-locate defects and implementation of missing functionality or re-implementation of existing functionality. Similarly, a comparison against another organisation showed differences with regard to root causes for extensive effort. This showed how systematic analysis of defect reports can yield software process improvement opportunities.

In the quantitative analysis, we found differences between extensive-effort and little-effort defects across project types. The extensive-effort defects of the first project were due to incorrect algorithms or methods, were injected during the design phase, and carried a high risk of regressions. In the second project, the extensive-effort defects were due to algorithms, methods, functions, classes and objects, concerned the core, platform and user interface layers, were injected during the design phase, and carried lower regression risks. In the third project, the defects which required extensive effort to correct were due to assignment and initialisation of variables, or functions, classes and objects, were related to the core layer, were injected during the coding phase, and carried an average regression risk of medium. The little-effort defects in the core project concerned assignment or initialisation of variables and checking statements, carried a lower regression risk, and were injected during the coding phase. In the second project, easy-to-correct defects concerned checking statements in the code and had a low regression risk. In the third project, defects which required little effort to correct were due to checking statements and interfaces with third-party libraries, carried a lower regression risk, and stemmed from requirements. The quantitative analysis contained high levels of unspecified values for little-effort defects; the levels of unspecified attributes were lower for defects which required extensive effort to correct.

We concluded that there were differences among project types with regard to root causes for defects, and that there were similar differences between different levels of effort required to correct defects. However, the study was not able to measure how these differences arose, as it was performed in a descriptive manner.
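The quantitative comparison described above can be illustrated with a small sketch: group defect reports by correction effort and tabulate attribute frequencies per group. The report tuples and attribute names below are invented for illustration; the thesis does not publish its dataset.

```python
from collections import Counter, defaultdict

# Hypothetical defect reports: (effort, defect_type, injection_phase)
# -- illustrative values only, not the thesis's actual data.
reports = [
    ("extensive", "algorithm", "design"),
    ("extensive", "algorithm", "design"),
    ("extensive", "assignment", "coding"),
    ("little", "checking", "coding"),
    ("little", "checking", "coding"),
    ("little", "assignment", "coding"),
]

def profile_by_effort(reports):
    """Tabulate defect-type and injection-phase frequencies per effort level."""
    profiles = defaultdict(lambda: {"type": Counter(), "phase": Counter()})
    for effort, dtype, phase in reports:
        profiles[effort]["type"][dtype] += 1
        profiles[effort]["phase"][phase] += 1
    return dict(profiles)

profiles = profile_by_effort(reports)
print(profiles["extensive"]["type"].most_common(1))  # [('algorithm', 2)]
```

Comparing the resulting frequency profiles across effort levels is the kind of descriptive contrast the study reports, e.g. design-phase algorithm defects dominating the extensive-effort group.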

Customer Engagement in Agile Software Development

Worren, Marianne January 2010
Agile methods promise an ideal approach to customer involvement. However, their success relies on having a full-time, dedicated, on-site customer representative working in close collaboration with the developers throughout all phases of the project in order to provide the team with ongoing domain expertise. For many projects, providing this form of customer involvement is infeasible, and organizations are therefore left to find a more viable way of practicing customer involvement. Through a case study of a medium-sized, multi-national organization practicing agile software development with off-site customer representatives, I illuminate the challenges emerging from this situation. By providing a framework for practitioners, I present my suggestions on how to decide on the right customer representative, and on what support functions need to be established in order for the customer involvement to be successful.

Forensic analysis of an unknown embedded device

Eide, Jarle, Olsen, Jan Ove Skogheim January 2006
Every year thousands of new digital consumer device models come on the market. These devices include video cameras, photo cameras, computers, mobile phones and a multitude of different combinations. Most of these devices have the ability to store information in one form or another. This is a problem for law enforcement agencies, as they need access to all these new kinds of devices and the information on them in investigations. Forensic analysis of electronic and digital equipment has become much more complex lately because of the sheer number of new devices and their increasing internal technological sophistication. This thesis tries to help the situation by reverse engineering a Qtek S110 device. More specifically, we analyze how the storage system of this device, called the object store, is implemented on the device's operating system, Windows Mobile. We hope to figure out how the device stores user data and what happens to this data when it is "deleted". We further try to define a generalized methodology for such forensic analysis of unknown digital devices. The methodology takes into account that such analysis will have to be performed by teams of reverse engineers rather than by single individuals. Based on prior external research we constructed and tested the methodology successfully. We were able to figure out more or less entirely the object store's internal workings and constructed a software tool called BlobExtractor that can extract data, including "deleted" data, from the device without using the operating system API. The main reverse engineering strategies utilized were black-box testing and disassembly. We believe our results can be the basis for future advanced recovery tools for Windows Mobile devices and that our generalized reverse engineering methodology can be utilized on many kinds of unknown digital devices.
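As an illustration of the kind of raw extraction a tool like BlobExtractor performs, the sketch below scans a byte dump for records marked by a magic header. The marker, the length field, and the record layout are invented for illustration; the real Windows Mobile object store format differs.

```python
import struct

MAGIC = b"\xDE\xAD\xBE\xEF"  # hypothetical record marker, not the real format

def extract_records(dump: bytes):
    """Scan a raw dump for records laid out as MAGIC | u32 length | payload.

    Records are recovered directly from the raw bytes rather than through the
    operating system API, so entries the OS considers deleted are found too.
    """
    records, pos = [], 0
    while True:
        pos = dump.find(MAGIC, pos)
        if pos == -1:
            break
        header_end = pos + len(MAGIC) + 4
        if header_end > len(dump):
            break
        (length,) = struct.unpack_from("<I", dump, pos + len(MAGIC))
        records.append(dump[header_end:header_end + length])
        pos = header_end + length
    return records

dump = (b"junk" + MAGIC + struct.pack("<I", 5) + b"hello"
        + b"xx" + MAGIC + struct.pack("<I", 3) + b"bye")
print(extract_records(dump))  # [b'hello', b'bye']
```

Carving records straight out of the flash image, as here, is why "deleted" data remains recoverable until the storage is actually overwritten.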

Forensic analysis of routing information / Routing information, BGP, and IP hijacking

Andresen, Njaal Brøvig January 2007
In this thesis I evaluate methods and uncertainties related to the use of forensic analysis of routing information for investigative purposes. I have done this by taking a closer look at the field of IP hijacking. I first present some background on routing and IP hijacking, and then give a deeper analysis of the best-known detection techniques for IP hijacking, with a view to their use in investigations. Based on this work, I present a proposal for improving an existing technique, "Reflect Scan", and a new detection method based on the Idlescan technique. I also include a proposal for an implementation of a non-distributed system for detecting IP hijacking. All work was carried out in cooperation with KRIPOS (the Norwegian National Criminal Investigation Service), which commissioned the thesis.
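The Idlescan technique the thesis builds on infers third-party activity from a host's globally incrementing IP ID counter. A minimal sketch of that inference step follows (pure logic only; an actual scan requires raw-socket probes and a suitably idle "zombie" host):

```python
def ipid_delta(before: int, after: int) -> int:
    """IP ID is a 16-bit counter, so account for wrap-around."""
    return (after - before) % 65536

def infer_port_state(ipid_before: int, ipid_after: int) -> str:
    """Classic idle-scan inference after one spoofed SYN probe.

    delta == 1: the zombie only answered our own probe -> target port
                closed or filtered (no SYN/ACK reached the zombie).
    delta == 2: the zombie also sent a RST in response to the target's
                SYN/ACK -> target port open.
    """
    delta = ipid_delta(ipid_before, ipid_after)
    if delta == 2:
        return "open"
    if delta == 1:
        return "closed|filtered"
    return "indeterminate"

print(infer_port_state(1000, 1002))  # open
print(infer_port_state(65535, 0))    # wrap-around, delta 1 -> closed|filtered
```

Any other delta means unrelated traffic touched the zombie's counter, which is exactly the noise source such detection methods must control for.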

LogWheels: A Security Log Visualizer

Egeland, Vegard January 2011
Logging security incidents is a required security measure in every moderately complex computer system. But while most systems produce large quantities of textual logs, these logs are often neglected or infrequently monitored by untrained personnel. One of the reasons for this neglect is the poor usability offered by distributed repositories of plain-text log data using different log formats and contradictory terminology. Security visualization has established itself as a promising research area, aiming to improve the usability of security logs by exploiting the visual perception system's ability to absorb large quantities of data. This thesis examines the state of the art in security log usability, and contributes two ideas to the areas of security log usability and security visualization. First, we introduce LogWheels, an interactive dashboard offering remote monitoring of security incident logs through a user-friendly visualization interface. By offering three levels of granularity, LogWheels provides both an overview of the entire system and the opportunity to request details on demand. Second, we introduce the incident wheel, the core visualization component of LogWheels. The incident wheel presents three key dimensions of security incidents -- 'what', 'when', and 'where' -- all within a single screen. In addition to a specification of LogWheels' architecture and visualization scheme, the thesis is accompanied by a functional proof-of-concept, which allows demonstrations of the system on real or simulated security data.
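The three dimensions of the incident wheel can be illustrated with a small aggregation sketch. The event fields and categories below are hypothetical; LogWheels itself is a visualization dashboard, and this only shows the underlying what/when/where roll-up such a view needs.

```python
from collections import Counter

# Hypothetical normalized security events: (what, hour_of_day, where)
events = [
    ("login-failure", 2, "web-server"),
    ("login-failure", 2, "web-server"),
    ("port-scan", 3, "firewall"),
    ("malware-alert", 2, "workstation"),
]

def wheel_summary(events):
    """Aggregate events along the what/when/where dimensions."""
    what, when, where = Counter(), Counter(), Counter()
    for kind, hour, location in events:
        what[kind] += 1
        when[hour] += 1
        where[location] += 1
    return {"what": what, "when": when, "where": where}

summary = wheel_summary(events)
print(summary["what"].most_common(1))  # [('login-failure', 2)]
```

Each counter corresponds to one ring or sector of the wheel; drilling from the full summary down to a single counter mirrors the dashboard's overview-then-details-on-demand granularity.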

Educational implementation of SSL/TLS

Vinje, Eivind January 2011
.

UbiCollab: A Service Architecture for Supporting Ubiquitous Collaboration

Brustad, Andreas Larsen, Mosveen, Christian Hågensen January 2006
Ubiquitous computing integrates computation into the environment and enables users to move around and interact with computers more naturally than they currently do. This helps to address some of the traditional challenges of computer-supported collaborative work (CSCW), as users are not bound to a desk and a personal computer, and are not forced to stay in a static environment where ad-hoc collaboration is impossible. UbiCollab is a platform for the support of ubiquitous collaboration, providing functionality such as context-awareness and automatic device discovery. The vision of UbiCollab is to be both flexible and extensible, so that it can provide ubiquitous collaboration support for many different existing and future domains and settings. A previous study compiled a set of requirements that need to be fulfilled in order for a platform to reach this vision. This work re-designs the architecture and the platform components of UbiCollab so that they conform to these requirements. OSGi is chosen as the underlying architecture, supporting the requirements of flexibility and extensibility, and a suitable OSGi framework for the platform is chosen. The platform components and their application programming interfaces (APIs) are designed, and a selected number of these are implemented with full or partial functionality. A testbed of applications and external services is used throughout development to test the flexibility and functionality of the platform and the completeness of the APIs.
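The OSGi-style service model the platform adopts can be sketched as a registry mapping service interfaces to implementations, so that components discover each other dynamically instead of being wired together statically. This is a toy illustration, not the actual UbiCollab or OSGi API:

```python
class ServiceRegistry:
    """Toy OSGi-style registry: services are registered under an interface
    name and looked up dynamically by other components at run time."""

    def __init__(self):
        self._services = {}

    def register(self, interface: str, implementation) -> None:
        """Publish an implementation under the given interface name."""
        self._services.setdefault(interface, []).append(implementation)

    def lookup(self, interface: str):
        """Return the first provider of the interface, or None."""
        providers = self._services.get(interface, [])
        return providers[0] if providers else None

registry = ServiceRegistry()
registry.register("DeviceDiscovery", lambda: ["printer", "projector"])
discover = registry.lookup("DeviceDiscovery")
print(discover())  # ['printer', 'projector']
```

Because consumers depend only on the interface name, providers can be swapped or added without changing the rest of the platform, which is the flexibility and extensibility the requirements call for.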

Next generation privacy policy

Lillebo, Ole Kristian January 2011
Privacy policies are commonly used by service providers to notify users of what information is collected, how it will be used and with whom it will be shared. These policies are, however, notoriously long and hard to understand, and several studies have shown that very few users actually read them. Alternative solutions that accurately communicate the most important parts of the policy in a way that is more enjoyable to read are therefore needed to aid users in making informed decisions on whether or not to share information with a provider. By following a design science strategy we first explore current solutions, and based on an initial evaluation we find the Nutrition Label to be the current approach best suited as a basis for further work. Through an assess-and-refine cycle we first evaluate the Nutrition Label against the usability literature and propose a set of design criteria, which is used as a basis for developing an alternative solution, entitled the Privacy Table. Following an iterative design process, we evaluate the Privacy Table in terms of accuracy, time-to-response and likeability through a pre-test, a laboratory experiment with 15 participants, and finally an Internet experiment with 24 participants, where each iteration results in a re-designed version of the Privacy Table. While we do not find clear evidence of any difference between the formats, we find indications that they perform similarly in terms of accuracy and enjoyability. We discover several issues regarding the Nutrition Label, some related to the terminology used, which could indicate that it would need modifications in order to be usable among non-native English speakers. We also suggest that future research on the Nutrition Label should focus on its usability rather than further expansion, and that basing it on a simpler underlying technology than the P3P language should be considered. Finally, we find that a merged version of the Privacy Table and the Nutrition Label could be advantageous in relation to current and future privacy-enhancing technologies, as a top layer communicating the most important privacy practices.
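A tabular privacy format of the kind described can be sketched as a simple rendering of structured policy data. The field names, categories and layout below are invented for illustration and do not reproduce the thesis's actual Privacy Table design:

```python
# Hypothetical structured policy: data category -> (collected?, shared with)
policy = {
    "contact information": (True, "advertisers"),
    "browsing history":    (True, "no one"),
    "location":            (False, "no one"),
}

def render_privacy_table(policy) -> str:
    """Render the structured policy as a fixed-width text table."""
    rows = ["{:<22} {:<10} {:<12}".format("Data", "Collected", "Shared with")]
    for category, (collected, shared) in policy.items():
        rows.append("{:<22} {:<10} {:<12}".format(
            category, "yes" if collected else "no", shared))
    return "\n".join(rows)

print(render_privacy_table(policy))
```

The point of such a format is that the structured data, not prose, is the source of truth, so the same policy can be rendered consistently across providers and compared at a glance.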

Semi-automatic Test Case Generation

Undheim, Olav January 2011
In the European research project CESAR, requirements can be specified using templates called boilerplates. Each statement of a requirement consists of a boilerplate with inserted values for the attributes of the boilerplate. By choosing attribute values from a domain ontology, a consistent language can be achieved. This thesis seeks to use the combination of boilerplates and a domain ontology in a semi-automatic test generation process. There are multiple ways to automate the test generation process, with various degrees of automation involved. One option is to use the boilerplates and the domain ontology to create a test model from which tests can be generated. Another option is to use the information from the domain ontology to assist the user in creating tests. In this thesis, the latter option is investigated and a tool named WikiTest is developed. WikiTest uses Semantic MediaWiki and Semantic Forms to utilize the ontology and assist the user in the test creation process. Using a Cucumber syntax, the tests can be specified in a relatively free format that does not sacrifice the ability to automate test execution. An experiment is conducted whose results show that WikiTest is easier to use and leads to higher test case quality than the alternatives. Being able to inspect the domain ontology while creating tests did not give the same results as when the ontology was integrated directly in the tool.
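The boilerplate-plus-ontology idea can be illustrated by validating attribute values against a small ontology and emitting a Cucumber-style scenario. The template text, ontology entries and step wording are invented for illustration; WikiTest's actual pipeline goes through Semantic MediaWiki and Semantic Forms:

```python
# Hypothetical domain ontology: attribute -> allowed values
ontology = {
    "system": {"door controller"},
    "event": {"card swipe"},
    "response": {"unlock the door"},
}

BOILERPLATE = "The <system> shall, upon <event>, <response>."

def to_scenario(values: dict) -> str:
    """Check attribute values against the ontology, then emit a
    Cucumber-style scenario from the filled-in boilerplate."""
    for attr, value in values.items():
        if value not in ontology.get(attr, set()):
            raise ValueError(f"{value!r} is not a known {attr} in the ontology")
    title = (BOILERPLATE.replace("<system>", values["system"])
                        .replace("<event>", values["event"])
                        .replace("<response>", values["response"]))
    return "\n".join([
        "Scenario: " + title,
        f"  Given the {values['system']} is running",
        f"  When a {values['event']} occurs",
        f"  Then it shall {values['response']}",
    ])

print(to_scenario({"system": "door controller",
                   "event": "card swipe",
                   "response": "unlock the door"}))
```

Rejecting values outside the ontology is what keeps the generated tests in the same consistent vocabulary as the requirements, while the Given/When/Then output stays executable by standard Cucumber tooling.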

Testing of safety mechanisms in software-intensive systems

Bjørgan, Arne January 2011
As software systems are increasingly used to control critical infrastructure, transportation systems and factory equipment, the use of proper testing methods has become more important. Systems that can cause harm to people, equipment or the environment they operate in are called safety-critical systems. The suppliers of safety-critical systems make use of safety analysis methods to investigate possible hazards. The output from the analysis is the set of possible causes and effects of the hazards found. These results form a large part of the basis for writing safety requirements for the system. The safety requirements should be tested thoroughly to avoid accidents, and it is important that the right testing technique is applied to these systems. The consequences of a system failure can be very high, so it is crucial to use a testing technique whose approach fits safety testing best. This thesis presents an experiment that looks into these questions. The experiment also investigates how the barrier model and safety analysis results help in writing test cases for such systems.
