461. Selection of Open Source Components - A Qualitative Survey in Norwegian IT Industry. Gerea, Marina Marinela. January 2007.
Empirical research is performed to verify theories, develop new theories or extend existing ones, and improve practice. This study is mainly intended to build understanding of how OSS components are selected, with the ultimate goal of improving software development practice in industry, and in particular the practice of selecting OSS components. The study cannot be used directly to improve the practice of selecting and evaluating OSS components, because further and larger studies need to be performed to support our results; it is nevertheless a good step toward that goal. We have used the role of integrator of open source, because this is the most appropriate role for the research we have performed. More and more companies integrate open source components into their products because the benefits are large. Improving the practice of selecting OSS components may therefore help software companies reduce the time spent on selection; if this time becomes too large, it can offset the advantages of integrating OSS components. The results of the interviews are presented, and 16 descriptive findings are formulated from them. The literature study was very useful both for understanding the state of the art and for defining the research questions.
462. Use Cases in Practice: A Study in the Norwegian Software Industry. Kjeøy, Margrethe Adde; Stalheim, Gerd Melteig. January 2007.
This Master's thesis investigates how project teams apply Use Cases and what problems they encounter when employing them, by interviewing and surveying a number of Norwegian software companies. The thesis examines what developers and clients find difficult and easy about Use Cases, how well the technique worked in a specific project, and how well it works in discussions with clients. A list of improvement suggestions for the Use Case technique is compiled from the interviews, the survey and the literature study. The key findings of the thesis are summarized as eight improvement suggestions. The three most important are: (1) Use Cases should be supplemented with user interface prototypes when used in discussions with clients, (2) companies should use a tool that makes it easier to get an overview of related Use Cases, and (3) one should avoid writing details about the user interface in Use Cases. Other findings are that Use Cases are most commonly used for requirements specification, estimation, programming and constructing test cases, and that it is difficult to find the right level of detail when writing Use Cases.
463. MOOSES Game Concepts: Game Concepts for the Multiplayer on One Screen Entertainment System. Kvasbø, Audun. January 2007.
Today, video games are mostly played at home, either alone or together with friends. Multiplayer games are played by sharing a single screen or by meeting up online to play against others. Within the MOOSES (Multiplayer On One Screen Entertainment System) project we seek to create a set of games for a new gaming paradigm that allows large numbers of players to share the fun of playing together on a single, large screen. To create games that are playable within the MOOSES context, one has to consider a series of special factors, covering user interface design as well as hardware and software architecture. In this study we describe these factors and how they can be handled to make fun games for a large number of players on a single screen. In the final and most important part of the study we describe five games suited to the special demands of the MOOSES framework: a war game, a football game, a music game, a survival-based game and a quiz game.
464. Social Tagging of Services to Support End User Development in Ubiquitous Collaborative Environments. Laverton, Christian. January 2007.
Tailorability in ubiquitous computing systems is needed at different levels, depending on the targeted end users. For inexperienced end users lacking computer competency, high-level mechanisms for tailoring are needed. Systems such as ASTRA, which use a service-oriented architecture, can provide such high-level tailorability through service composition, where services are combined and configured to form applications. However, service composition introduces new challenges for end users. To find appropriate services, users need mechanisms for searching and browsing services. It is equally important that users are able to understand how services work and what functionality they offer. Service descriptions can ease this task, but the problem with existing approaches is that they are not intended for end users and are hard to understand. This work looks at social tagging, a collaborative process where users attach labels or tags to items, producing user-created metadata as opposed to metadata created by experts. By introducing social tagging in ASTRA to describe services, users are provided with a framework for sharing their understanding of services with fellow users. To create a solution for social tagging of service descriptions, a thorough problem analysis was performed. The analysis considered the design space of tagging systems to find appropriate design choices in the problem context. Providing several tag visibility levels was identified as important, especially community tagging: the quality of tags as seen from the community members' perspective is likely to increase, since members of communities often share similar opinions and understandings. An important difference between existing tagging systems and tagging of services is that services can be embedded in physical devices. Services can thus be discovered and accessed physically, which means that physical access to the services' tags should be supported. A requirements specification for a tagging system was written, focusing on the platform requirements for basic tagging mechanisms, tag-based navigation, and searching. The requirements led to a design of a platform architecture aimed at extending the UbiCollab platform with social tagging functionality. The architecture uses a client/server solution, where the server service is shared among a network of users and handles public- and community-level tags, while the client service is a local service which handles private tags and acts as an intermediary between end user tools and the server service. A prototype of the platform services and an end user tool was implemented, and the implementation is demonstrated through scenarios showing possible uses of the tagging system.
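The abstract does not specify the ASTRA/UbiCollab APIs, but the tagging model it describes (user-created tags with private, community and public visibility attached to services, plus tag-based search) can be sketched roughly as follows. This is an illustrative Python sketch; all class and method names are assumptions rather than the thesis's actual design.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Visibility(Enum):
    PRIVATE = "private"      # visible only to the tagging user
    COMMUNITY = "community"  # shared within one community
    PUBLIC = "public"        # visible to everyone

@dataclass(frozen=True)
class Tag:
    label: str
    user: str
    visibility: Visibility
    community: Optional[str] = None  # set only for community-level tags

@dataclass
class TaggedService:
    service_id: str
    tags: list = field(default_factory=list)

class TagRegistry:
    """Toy in-memory stand-in for the client/server tag services."""

    def __init__(self):
        self.services = {}

    def tag(self, service_id: str, tag: Tag) -> None:
        # Attach a tag to a service, creating the service record on first use.
        self.services.setdefault(service_id, TaggedService(service_id)).tags.append(tag)

    def search(self, label: str, user: str, communities: set) -> list:
        """Return services carrying a matching tag that the user is allowed to see."""
        hits = []
        for svc in self.services.values():
            for t in svc.tags:
                if t.label != label:
                    continue
                visible = (
                    t.visibility is Visibility.PUBLIC
                    or (t.visibility is Visibility.PRIVATE and t.user == user)
                    or (t.visibility is Visibility.COMMUNITY and t.community in communities)
                )
                if visible:
                    hits.append(svc.service_id)
                    break
        return hits

if __name__ == "__main__":
    registry = TagRegistry()
    registry.tag("living-room-lamp", Tag("light", "alice", Visibility.COMMUNITY, "family"))
    print(registry.search("light", user="bob", communities={"family"}))  # ['living-room-lamp']
```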
465. Automated tuning of MapReduce performance in Vespa Document Store. Grythe, Knut Auvor. January 2007.
MapReduce is a programming model for distributed processing, originally designed by Google Inc. to simplify the implementation and deployment of distributed programs. Vespa Document Store (VDS) is a distributed document storage solution developed by Yahoo! Technologies Norway. VDS does not currently have any feature allowing distributed aggregation of data, so a prototype of the MapReduce distributed programming model was previously developed. However, that implementation requires manual tuning of several parameters before each deployment. The goal of this thesis is to allow as many of these parameters as possible to be either automatically configured or set to universally suitable defaults. We have created a working MapReduce implementation based on previous work, and a framework for monitoring of VDS nodes. Various VDS features have been documented in detail, and this documentation has been used to analyse how the performance of these features may be improved. We have also performed various experiments to validate the analysis and gain additional insight. Numerous configuration options for either VDS in general or the MapReduce implementation have been considered, and recommended settings have been proposed, either as default values or as algorithms for computing the most suitable setting. Finally, we provide a list of suggested further work, with suggestions for both general VDS improvements and MapReduce-specific research.
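The abstract does not show the VDS prototype's interface. As a point of reference, the MapReduce programming model itself can be illustrated with the canonical word-count example below. This is a single-process Python sketch of the model's contract (map emits key/value pairs, a shuffle groups values by key, reduce folds each group), not the distributed VDS implementation; the function names are assumptions.

```python
from collections import defaultdict
from typing import Iterable, Iterator, Tuple

def map_fn(doc_id: str, text: str) -> Iterator[Tuple[str, int]]:
    # Emit (word, 1) for every word in the document.
    for word in text.split():
        yield word.lower(), 1

def reduce_fn(key: str, values: Iterable[int]) -> Tuple[str, int]:
    # Fold all counts emitted for one word into a total.
    return key, sum(values)

def run_mapreduce(documents: dict) -> dict:
    # Shuffle phase: group intermediate values by key.
    groups = defaultdict(list)
    for doc_id, text in documents.items():
        for key, value in map_fn(doc_id, text):
            groups[key].append(value)
    # Reduce phase: one reduce call per distinct key.
    return dict(reduce_fn(key, values) for key, values in groups.items())

if __name__ == "__main__":
    docs = {"d1": "vespa stores documents", "d2": "mapreduce processes documents"}
    print(run_mapreduce(docs))  # {'vespa': 1, 'stores': 1, 'documents': 2, ...}
```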
466. Analysis of Software Faults using Safety-techniques with Respect to the Software System DAIM. Dyre-Hansen, Jostein. January 2007.
In this master's thesis we have analyzed the software system DAIM, a web-based delivery system used at NTNU for master's theses and master students, with respect to software faults. Based on the documentation from the design stage of the DAIM project we have performed a Preliminary Hazard Analysis (PHA), an analysis technique from safety-critical development. The results from this analysis have been compared with existing fault reports containing actual faults discovered in the system. Part of the intention behind our work has been to find out whether hazards identified with PHA can be related to actual faults found in the fault reports. In [17] it is stated that correcting software faults in later phases of software development is much more expensive than in earlier phases, and we have performed the PHA to see if some of the faults could have been avoided. We found some connections between faults and identified hazards, but the results were not entirely as expected. In our previous work we performed a similar analysis of fault reports, and we have compared the results from that work with some of the results obtained here to see how the distribution of fault types varies between the projects. The comparison showed several differences between the projects, but some similarities were also discovered.
467. An Application of Image Processing Techniques for Enhancement and Segmentation of Bruises in Hyperspectral Images. Gundersen, Henrik Mogens; Rasmussen, Bjørn Fossan. January 2007.
Hyperspectral images contain vast amounts of data which can provide crucial information to applications within a variety of scientific fields, and increasingly powerful computer hardware has made it possible to treat and process such images efficiently. This thesis is interdisciplinary and applies known image processing algorithms to a new problem domain: bruises on human skin in hyperspectral images. We have found no published research on image-based detection of bruises on human skin; however, several articles have been written on hyperspectral bruise detection on fruits and vegetables, where ratio, difference and principal component analysis (PCA) are commonly applied enhancement algorithms. These three algorithms, in addition to K-means clustering and the watershed segmentation algorithm, have been implemented and tested through a batch application developed in C# and MATLAB. The thesis seeks to determine whether the enhancement algorithms can improve bruise visibility in hyperspectral images for visual inspection, and whether the enhancements provide a better basis for segmentation. Known spectral characteristics form the basis of the experiments, in addition to identification through visual inspection. To this end, a series of experiments were conducted. The tested algorithms provided a better description of the bruises, the extent of the bruising, and the severity of damage. However, the algorithms are not considered robust with respect to consistency of results, and it is therefore recommended that the image acquisition setup be standardised for all future hyperspectral images. A larger, more varied data set would increase the statistical power of the results and improve test conclusion validity. Results indicate that the ratio, difference, and PCA algorithms can enhance bruise visibility for visual analysis. However, images containing weakly visible bruises did not show significant improvements in bruise visibility, and non-visible bruises were not made visible by the enhancement algorithms. Results from the enhancement algorithms were segmented and compared to segmentations of the original reflectance images. The enhancement algorithms gave more accurate bruise regions using K-means clustering and watershed segmentation, and both segmentation algorithms gave the overall best results using principal components as input. Watershed provided less accurate segmentations of the input from the difference and ratio algorithms.
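The thesis's batch application was written in C# and MATLAB; the processing chain the abstract describes (ratio and difference enhancement between two bands, PCA over pixel spectra, then K-means segmentation) can be sketched in Python as follows. Band indices, component counts and library choices here are illustrative assumptions, not the thesis's actual parameters.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def enhance_and_segment(cube: np.ndarray, band_a: int, band_b: int, k: int = 2):
    """cube: reflectance image of shape (rows, cols, bands); band_a/band_b are chosen wavelengths."""
    rows, cols, bands = cube.shape

    # Band-ratio and band-difference enhancement between two chosen bands.
    eps = 1e-6
    ratio = cube[:, :, band_a] / (cube[:, :, band_b] + eps)
    difference = cube[:, :, band_a] - cube[:, :, band_b]

    # PCA enhancement: project every pixel spectrum onto the first few components.
    pixels = cube.reshape(-1, bands)
    pcs = PCA(n_components=3).fit_transform(pixels).reshape(rows, cols, 3)

    # Simple segmentation of the first principal component with K-means.
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(pcs[:, :, 0].reshape(-1, 1))
    segmentation = labels.reshape(rows, cols)

    return ratio, difference, pcs, segmentation

if __name__ == "__main__":
    fake_cube = np.random.rand(64, 64, 20)  # stand-in for a real hyperspectral image
    ratio, diff, pcs, seg = enhance_and_segment(fake_cube, band_a=5, band_b=12)
    print(seg.shape, np.unique(seg))
```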
468. Open Digital Canvas. Mendoza, Nicolas. January 2007.
http://odc.opentheweb.org/
469. Study of the Release Process of Open Source Software: Case Study. Eide, Tor Erik. January 2007.
This report presents the results of a case study focusing on the release process of open source projects initiated with commercial motives. The purpose of the study is to gain an increased understanding of the release process, how a community can be attracted to the project, and how the interaction with the community evolves in commercial open source initiatives. Data has been gathered from four distinct sources to form the basis of this thesis. A thorough review of the open source literature has been performed. To further substantiate the data gathered from the literature study and to gain qualitative insights from companies heavily involved with open source development, four Norwegian companies adopting open source strategies have been interviewed. Data has also been gathered from active participation in the release process of the Keywatch networking software, including the creation of a web site and promotion of the project to build a community. Finally, the web sites of six company-initiated open source projects have been studied to gain further insight into how commercial open source projects are presented. The contributions of this report can be divided into two parts: a description of the open source phenomenon, and theoretical guidelines describing important measures to take into consideration when releasing software as open source. The description of the open source phenomenon is derived from the open source literature and covers the history of open source, its characteristics, licenses, legal issues related to open source, and motivations for adopting open source software. The theoretical guidelines are based on corroboration of data gathered from the qualitative interviews, the review of commercial open source web sites, and findings in the research literature. The guidelines are summarized in the concluding section of the report together with suggestions for future research. Keywords: Open Source, Qualitative Research, Commercial Open Source Adoption, Open Source Preparations, Open Source Release Process, Open Source Community Management
470. Text Mining in Health Records: Classification of Text to Facilitate Information Flow and Data Overview. Rose, Øystein. January 2007.
This project consists of two parts. In the first part we apply techniques from the field of text mining to classify sentences in encounter notes of the electronic health record (EHR) into classes of subjective, objective and plan character. This is a simplification of the SOAP standard, and is applied because of the way GPs structure their encounter notes. Structuring the information in a subjective, objective, and plan way may enhance future information flow between the EHR and the personal health record (PHR). In the second part of the project we apply the most adequate classifier from the first part to encounter notes from the histories of patients suffering from diabetes, since we believe that the distribution of subjective, objective, and plan sentences changes across different phases of a disease. In our work we experiment with several preprocessing techniques, classifiers, and amounts of data. Of the classifiers considered, we find that Complement Naive Bayes (CNB) produces the best result, both with and without preprocessing of the data: on the raw dataset CNB yields an accuracy of 81.03%, while on the preprocessed dataset it yields 81.95%. The Support Vector Machines (SVM) classifier yields results comparable to those obtained with CNB, while the J48 classifier performs poorer. Concerning preprocessing, we find that techniques reducing the dimensionality of the datasets improve the results for smaller attribute sets but worsen them for larger attribute sets, and the trend is opposite for techniques that expand the set of attributes. However, finding the ratio between the size of the dataset and the number of attributes at which the preprocessing techniques improve the result is difficult, and preprocessing techniques are therefore not applied in the second part of the project. From the results of classifying the patient histories we have extracted graphs that show how the sentence class distribution develops after the first diagnosis of diabetes is set. Although no empirical research is carried out on this point, we believe that such graphs may, through further research, facilitate the recognition of points of interest in the patient history. From the same results we also create graphs that show the average distribution of subjective, objective, and plan sentences for 429 patients after the first diagnosis of diabetes is set. From these graphs we find evidence of an overrepresentation of subjective sentences in the encounter notes where the diagnosis of diabetes is first set. We believe that similar experiments for several diseases may uncover patterns or trends concerning the diseases in focus.
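The classifier names (CNB, SVM, J48) suggest the experiments were run in a toolkit such as Weka, which the abstract does not confirm. Purely as an illustration of the first part's approach, here is a scikit-learn sketch of classifying sentences into subjective, objective and plan classes; the example sentences and the feature setup are invented, not taken from the thesis.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import ComplementNB
from sklearn.pipeline import make_pipeline

# Tiny invented training set; real data would be sentences from encounter notes.
sentences = [
    "Patient reports increased thirst and fatigue",          # subjective
    "Blood glucose measured at 11.2 mmol/L",                 # objective
    "Start metformin and schedule follow-up in 3 months",    # plan
    "Complains of blurred vision in the mornings",           # subjective
    "HbA1c is 8.1 percent",                                   # objective
    "Refer to dietician and repeat blood tests",             # plan
]
labels = ["subjective", "objective", "plan",
          "subjective", "objective", "plan"]

# Bag-of-words features feeding a Complement Naive Bayes classifier.
model = make_pipeline(TfidfVectorizer(lowercase=True), ComplementNB())
model.fit(sentences, labels)

print(model.predict(["Patient says the pain is worse at night"]))  # likely 'subjective'
```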