  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
211

Analysis of Software Faults using Safety-techniques with Respect to the Software System DAIM

Dyre-Hansen, Jostein January 2007 (has links)
<p>In this master thesis we have analyzed the software system DAIM, a web-based delivery system used at NTNU in connection with master theses and master students, with respect to software faults. Based on the documentation from the design stage of the DAIM project we have performed a Preliminary Hazard Analysis (PHA), an analysis technique from safety-critical development. The results from this analysis have been compared with existing fault reports containing actual faults discovered in the system. Part of the intention behind our work has been to determine whether hazards identified with PHA can be related to actual faults found in the fault reports. In [17] it is stated that correcting software faults in later phases of software development is much more expensive than in earlier phases, and we have performed the PHA to see if some of the faults could have been avoided. We found connections between some of the faults and the identified hazards, but the results were not entirely as expected. In our previous work we performed a similar analysis of fault reports, and we have compared the results from that work with the results obtained here to see how the distribution of fault types varies between the projects. The comparison showed several differences between the projects, but some similarities were also discovered.</p>
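The core step the abstract describes — relating PHA-identified hazards to faults later found in fault reports — can be sketched as a simple matching over records. This is a minimal illustration, not the thesis's actual procedure; the hazard fields, keyword matching, and all example data are invented.

```python
from dataclasses import dataclass

@dataclass
class Hazard:
    """A hazard identified during Preliminary Hazard Analysis (PHA)."""
    id: str
    description: str
    severity: str          # e.g. "minor", "major", "critical"
    keywords: set          # terms used to match against fault reports

@dataclass
class FaultReport:
    """An actual fault recorded after the system was deployed."""
    id: str
    summary: str

def match_hazards_to_faults(hazards, faults):
    """Relate each PHA hazard to fault reports whose summary shares a keyword."""
    matches = {}
    for hazard in hazards:
        hits = [f.id for f in faults
                if hazard.keywords & set(f.summary.lower().split())]
        matches[hazard.id] = hits
    return matches

hazards = [Hazard("H1", "Thesis upload fails silently", "major",
                  {"upload", "fails"})]
faults = [FaultReport("F42", "PDF upload fails for files over 20 MB")]
print(match_hazards_to_faults(hazards, faults))  # {'H1': ['F42']}
```

In practice the comparison was presumably manual; an automated keyword match like this would only be a first pass for an analyst to review.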
212

Study of the Release Process of Open Source Software : Case Study

Eide, Tor Erik January 2007 (has links)
<p>This report presents the results of a case study focusing on the release process of open source projects initiated with commercial motives. The purpose of the study is to gain an increased understanding of the release process, how a community can be attracted to the project, and how the interaction with the community evolves in commercial open source initiatives. Data has been gathered from four distinct sources to form the basis of this thesis. A thorough review of the open source literature has been performed. To further substantiate the data gathered from the literature study and to gain qualitative insights from companies heavily involved with open source development, four Norwegian companies adopting open source strategies have been interviewed. Data has also been gathered from active participation in the release process of the Keywatch networking software, including the creation of a web site and promotion of the project to build a community. Finally, the web sites of six company-initiated open source projects have been studied to gain further insight into how commercial open source projects are presented. The contributions of this report can be divided into two parts: a description of the open source phenomenon, and theoretical guidelines describing important measures to be taken into consideration when releasing software as open source. The description of the open source phenomenon is derived from reviewing the open source literature and includes a description of the history of open source, its characteristics, licenses, legal issues related to open source, and motivations for adopting open source software. The theoretical guidelines are based on corroboration of data gathered from qualitative interviews, reviews of commercial open source web sites, and findings in the research literature. The guidelines are summarized in the concluding section of the report together with suggestions for future research.
Keywords: Open Source, Qualitative Research, Commercial Open Source Adoption, Open Source Preparations, Open Source Release Process, Open Source Community Management</p>
213

Text Mining in Health Records : Classification of Text to Facilitate Information Flow and Data Overview

Rose, Øystein January 2007 (has links)
<p>This project consists of two parts. In the first part we apply techniques from the field of text mining to classify sentences in encounter notes of the electronic health record (EHR) into classes of subjective, objective, and plan character. This is a simplification of the SOAP standard, and is applied due to the way GPs structure the encounter notes. Structuring the information in a subjective, objective, and plan way may enhance future information flow between the EHR and the personal health record (PHR). In the second part of the project we apply the most adequate classifier from the first part to encounter notes from the histories of patients suffering from diabetes. We believe that the distribution of sentences of a subjective, objective, and plan character changes according to different phases of diseases. In our work we experiment with several preprocessing techniques, classifiers, and amounts of data. Of the classifiers considered, we find that Complement Naive Bayes (CNB) produces the best results, both with and without preprocessing of the data. On the raw dataset CNB yields an accuracy of 81.03%, while on the preprocessed dataset it yields an accuracy of 81.95%. The Support Vector Machines (SVM) classifier yields results comparable to those obtained with CNB, while the J48 classifier performs more poorly. Concerning preprocessing, we find that techniques reducing the dimensionality of the datasets improve the results for smaller attribute sets but worsen them for larger attribute sets. The trend is the opposite for preprocessing techniques that expand the set of attributes. However, finding the ratio between the size of the dataset and the number of attributes at which the preprocessing techniques improve the result is difficult. Hence, preprocessing techniques are not applied in the second part of the project.
From the results of the classification of the patient histories we have extracted graphs that show the sentence class distribution after the first diagnosis of diabetes is made. Although no empirical research has been carried out, we believe that such graphs may, through further research, facilitate the recognition of points of interest in the patient history. From the same results we also create graphs that show the average distribution of sentences of subjective, objective, and plan character for 429 patients after the first diagnosis of diabetes. From these graphs we find evidence of an overrepresentation of subjective sentences in encounter notes where the diagnosis of diabetes is first made. We believe that similar experiments for several diseases may uncover patterns or trends concerning the diseases in focus.</p>
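The sentence-classification task described above can be sketched with a bag-of-words Naive Bayes classifier. For brevity this stand-in uses plain multinomial NB with Laplace smoothing rather than the Complement NB variant the thesis found best (CNB estimates its parameters from each class's complement); all example sentences are invented, not taken from real encounter notes.

```python
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (sentence, label) pairs. Returns a model for classify()."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    class_counts = Counter()             # label -> number of sentences
    vocab = set()
    for text, label in docs:
        tokens = text.lower().split()
        word_counts[label].update(tokens)
        class_counts[label] += 1
        vocab.update(tokens)
    return word_counts, class_counts, vocab

def classify(model, sentence):
    """Pick the label maximizing log prior + smoothed log likelihood."""
    word_counts, class_counts, vocab = model
    total = sum(class_counts.values())
    best, best_score = None, float("-inf")
    for label in class_counts:
        score = math.log(class_counts[label] / total)
        n_tokens = sum(word_counts[label].values())
        for token in sentence.lower().split():
            # Laplace smoothing so unseen words do not zero out the class
            score += math.log((word_counts[label][token] + 1)
                              / (n_tokens + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

docs = [
    ("Patient reports increasing thirst and fatigue", "subjective"),
    ("Blood glucose measured at 11.2 mmol/L", "objective"),
    ("Start metformin and schedule follow-up in 4 weeks", "plan"),
    ("Patient complains of blurred vision", "subjective"),
    ("HbA1c is 8.1 percent", "objective"),
    ("Refer to dietician and review in 3 months", "plan"),
]
model = train(docs)
print(classify(model, "Patient reports dizziness"))  # subjective
```

A real run would of course need hundreds of labeled sentences per class, as in the thesis's dataset.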
214

Improve Expert Estimation Process : Practice Assessment And Proposals For A Consultant Company.

Drange, Knut January 2007 (has links)
<p>This thesis presents the results of an effort estimation improvement study for a major consultant company in Norway. The company has already established an effort estimation process, but wants additional help in improving the estimation process and tools. Two major problems are identified: some estimates have very low accuracy, and multiple estimation tools and methodologies are in use. Part of the main research on the state of practice was to determine the effort estimation models used and the effort estimation accuracy. To better understand how the effort estimation process worked, we compared the effort estimation practice against best practices and looked further into the relation between estimation models and expert judgement. The last part of the state-of-practice research was to check project reports to see if they used a common tool and had a risk checklist. The main part of the work has consisted of researching the state of practice at the consultant company, comparing it against known best practices, and proposing improvements. Based on the available literature, this thesis presents practical improvements for the estimation process. The state of practice was determined by conducting interviews and going through project reports. It showed that the company lacked a tool for early effort estimation, so we conducted a case study of early estimation using use case points. This thesis proposes solutions to issues concerning tools and practices. The main contribution is a powerful effort estimation template.</p>
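The use case points (UCP) method mentioned above can be sketched in a few lines. The weights follow Karner's original scheme (use cases and actors classified as simple, average, or complex, adjusted by technical and environmental factors); the counts, factor values, and hours-per-point rate below are invented for illustration, not taken from the thesis's case study.

```python
# Karner's weights for use cases and actors by complexity class
USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}
ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}

def use_case_points(use_cases, actors, tcf, ecf):
    """use_cases/actors: dicts mapping complexity class -> count.
    tcf/ecf: technical and environmental complexity factors."""
    uucw = sum(USE_CASE_WEIGHTS[c] * n for c, n in use_cases.items())
    uaw = sum(ACTOR_WEIGHTS[c] * n for c, n in actors.items())
    return (uucw + uaw) * tcf * ecf

ucp = use_case_points(
    use_cases={"simple": 4, "average": 6, "complex": 2},  # 4*5 + 6*10 + 2*15 = 110
    actors={"simple": 2, "average": 1, "complex": 1},     # 2*1 + 1*2 + 1*3 = 7
    tcf=1.0, ecf=0.95)
print(round(ucp, 2))        # 111.15 points
print(round(ucp * 20, 1))   # effort at 20 person-hours per point: 2223.0
```

The appeal for early estimation is that use cases and actors are countable from requirements documents, before any design exists.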
215

Going Open : Building the Platform to Reach Out

Schanke, Per Kristian January 2007 (has links)
<p>This report presents the results of the development of a portal for open source software. The work is done in collaboration with Keymind Computing AS in the context of the European ITEA project COSI. The purpose of this project is to develop a portal so that companies with commodity software they want to release as open source can do so without losing control of the development. The portal is built using already existing tools to fulfill as many tasks as possible. The thesis also tries to explain the rationale for building a portal for the release of open source software by looking at the history of open source. Some of the focus here is on the development of Open Source 2.0, which is identified by the growing interest among software companies in releasing their software under an open license.</p>
216

Open Source Software in Software Intensive Industry - A Survey

Hauge, Øyvind January 2007 (has links)
<p>The use of Open Source Software (OSS) has increased in both the industry and the public sector. The software intensive industry integrates OSS into its products, participates in the development of OSS products, and develops its own OSS products. The understanding of how and why the industry is approaching OSS is so far limited. To help fill this gap, this thesis explores how and why the software intensive industry approaches OSS. This is done by performing an extensive literature study and by executing a web-based survey. The survey was distributed to a near-representative sample of companies from the Norwegian software intensive industry and to a convenience sample of participants in the ITEA 2 research program. The research presented here shows that OSS components are widely used in the software intensive industry. Close to 50% of the Norwegian software intensive industry uses OSS in its development. The industry is mainly motivated to use OSS by practical reasons. OSS components provide functionality of high quality, and the industry is satisfied with its use of these components. When using OSS, the industry benefits from the availability of source code, and easy access to components and information about these components. Companies participate in OSS projects because they use the software and because of the learning effect of this participation. This participation is, however, limited. Some companies nevertheless provide commercial services related to the OSS projects they participate in. Releasing a product as OSS attracts more users and customers to a product. These community members may contribute with implemented code, feedback, and requirements. There are, however, some side-effects related to releasing an OSS product, and companies should be aware of these consequences. The main contributions of this thesis are new understanding of how and why companies approach OSS, a reusable research design, and experiences from performing survey research.</p>
217

Using Public Displays for the Presentation of User Statistics

Hansen, Torborg Skjevdal January 2008 (has links)
<p>The aim of the project has been to look at how knowledge about statistics of use might influence the usage of a wireless network. The project has been conducted in cooperation with Wireless Trondheim. A public display was set up at a café where the wireless network is available. It showed different sorts of statistics collected from the network control system, in addition to news and advertisements. No significant increase in use was observed during the period when the screen was up, and the project needs to be conducted on a larger scale to see more obvious results. However, the project has provided Wireless Trondheim with insight into how public displays can be used to increase the awareness, and hence the usage, of the wireless network. Keywords: Digital signage, public displays, wireless networks, awareness, context, XML-feeds</p>
218

Joining in Apache Derby: Removing the Obstacles

Holum, Henrik, Løvland, Svein Erik Reknes January 2008 (has links)
<p>Over the last decade, commercial interest in Open Source has been growing rapidly. This has led to commercially driven Open Source projects. These projects have problems retaining their newcomers and need ways to ease the joining process. We therefore ask the following research questions: RQ1: Which obstacles are encountered by newcomers to Apache Derby when joining? RQ2: What can be done to ease the joining process? There has been very little research on what OSS projects can do in this area. As a consequence it is hard to find reliable theory to cross-reference this research against. If the research is successful, it can contribute to the literature on joining OSS projects. This literature would then contain the obstacles encountered by newcomers to OSS projects and ways to mitigate them. In this master's thesis Canonical Action Research (CAR) was used to study the Open Source project Apache Derby. Canonical Action Research is a qualitative research method in which the researchers enter the environment they are studying to extract the data needed. We make three contributions in this thesis. The first is a list of obstacles in the joining process of Apache Derby. The second is a set of suggestions on how a project can mitigate the contribution barriers we found. The third is a refined version of CAR for use when studying Open Source Software development. The list of obstacles is specific to the Apache Derby project, and it is unlikely that non-Apache projects will benefit from it. Our suggestions on how a project can mitigate contribution barriers are potentially generalizable. Different projects have different structures, and some of the contribution barriers might therefore not apply to all of them. The refined CAR model is general for all research on OSS projects. This is the result we think can have the biggest impact on the research community if proven successful.</p>
219

Security in a Service-Oriented Architecture

Rodem, Magne January 2008 (has links)
<p>In a service-oriented architecture (SOA), parts of software applications are made available as services. These services can be combined across multiple applications, technologies, and organizations. As a result, functionality can be more easily reused, and new business processes can be assembled at a low cost. However, as more functionality is exposed outside of the traditional boundaries of applications, new approaches to security are needed. While SOA shares many of the security threats of traditional systems, the countermeasures to some of these threats may differ. Most notably, eavesdropping, data tampering, and replay attacks must be countered on the message level in a complex SOA environment. In addition, the open and distributed nature of SOA leads to new ways of handling authentication, authorization, logging, and monitoring. Web Services are the most popular way of realizing SOA in practice, and make use of a set of standards such as WS-Security, XML Encryption, XML Signature, and SAML for handling these new security approaches. Guidelines exist for development of secure software systems, and provide recommendations for things to do or to avoid. In this thesis, I use my findings with regard to security challenges, threats, and countermeasures to create a set of security guidelines that should be applied during requirements engineering and design of a SOA. Practical use of these guidelines is demonstrated by applying them during development of a SOA-based system. This system imports personal data into multiple administrative systems managed by UNINETT FAS, and is designed using Web Services and XML-based security standards. Through this practical demonstration, I show that my guidelines can be used as a reference for making appropriate security decisions during development of a SOA.</p>
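The message-level countermeasures the abstract names — integrity against tampering and a defense against replay — are handled in practice by WS-Security, XML Signature, and timestamp/nonce headers. A minimal stand-in for the idea, using an HMAC over the message body plus a nonce cache (the shared key and message format are invented, and real deployments would use the XML-based standards):

```python
import hashlib
import hmac
import os

SHARED_KEY = b"demo-key-not-for-production"
seen_nonces = set()   # replay detection: reject any nonce seen before

def sign(body: bytes) -> dict:
    """Attach a fresh nonce and a MAC covering nonce + body."""
    nonce = os.urandom(16).hex()
    payload = nonce.encode() + b"|" + body
    return {"body": body, "nonce": nonce,
            "mac": hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()}

def verify(msg: dict) -> bool:
    """Reject tampered messages (bad MAC) and replays (reused nonce)."""
    payload = msg["nonce"].encode() + b"|" + msg["body"]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, msg["mac"]):
        return False                       # tampering detected
    if msg["nonce"] in seen_nonces:
        return False                       # replay detected
    seen_nonces.add(msg["nonce"])
    return True

msg = sign(b"<importPerson id='42'/>")
print(verify(msg))          # True  (first delivery)
print(verify(msg))          # False (same message replayed)
```

The point of doing this at the message level, rather than relying on transport security alone, is that a SOA message may pass through intermediaries that terminate the transport connection.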
220

Context-Aware Goods : Combining RFID Tracking and Environment Sensing

Albretsen, Sigve, Larsen, Mikael André January 2008 (has links)
<p>Technology is becoming increasingly important in the effort to ensure safe food and good food quality, especially in the fresh food industry. Examples of such technology are systems for tracking and tracing food products, and the use of sensors to obtain context information about the environment. This technology is becoming more mature, and various standards are starting to emerge, but little work has been done on combining these technologies or their respective standards. This thesis presents an example software architecture combining an RFID tracking system with context information retrieved from sensors. The sensors can be located both on the RFID tag itself and in locations where the items are, or have been, located. Two frameworks are combined in this architecture: the EPC Architecture Framework for item tracking, and Sensor Web Enablement for sensor and context information. A set of scenarios describing potential uses of this technology is also presented. They are grouped by topic, with categories such as quality deterioration, temperature profiles, sensor collaboration and intelligent goods, hierarchies of goods with sensors, and proximity control. Each scenario is independent of the technical solution used, and does not require our architecture. The focus is on what can be achieved when a context-enabled tracking solution is implemented. These scenarios form the basis for the requirements specification of the architecture. The thesis shows that the integration of the two standard frameworks can be achieved with relatively small modifications, and that the technology needed to achieve what is presented in the scenarios is already available. It is, however, necessary to perform pilot implementations and testing in order to find how best to utilize the technology.</p>
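The combination the abstract describes — RFID read events joined with environment readings from sensors at the same location — can be sketched as a simple data-structure join. The EPC codes, locations, and readings below are invented, and the thesis builds on the EPC Architecture Framework and Sensor Web Enablement rather than this simplified model.

```python
from dataclasses import dataclass

@dataclass
class RfidRead:
    epc: str          # Electronic Product Code read from the tag
    location: str
    timestamp: int    # seconds since epoch

@dataclass
class SensorReading:
    location: str
    timestamp: int
    temperature_c: float

def temperature_profile(reads, readings, max_skew=60):
    """For each RFID read, attach the closest-in-time temperature
    recorded at the same location (within max_skew seconds)."""
    profile = []
    for r in reads:
        candidates = [s for s in readings
                      if s.location == r.location
                      and abs(s.timestamp - r.timestamp) <= max_skew]
        if candidates:
            nearest = min(candidates,
                          key=lambda s: abs(s.timestamp - r.timestamp))
            profile.append((r.epc, r.location, nearest.temperature_c))
    return profile

reads = [RfidRead("urn:epc:id:sgtin:0614141.107346.2017", "cold-room", 1000)]
readings = [SensorReading("cold-room", 980, 3.5),
            SensorReading("cold-room", 1030, 4.1)]
print(temperature_profile(reads, readings))
```

A profile like this is what the quality-deterioration and temperature-profile scenarios would consume: a per-item record of the conditions the goods were exposed to at each tracked location.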
