About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
441

Experience transfer in professional networks in Statoil : The use of information technology

Ulven, Mette January 2006
With competition increasing in the market, companies are seeking new ways to sustain and enhance their efficiency and competitiveness. In this regard, companies have recently been focusing on knowledge as a competitive resource, and it has become a challenge for organisations to locate and share their knowledge. Many different approaches to managing knowledge exist, but the two extremes are face-to-face interaction and the use of information technology (IT). When people in an organisation are co-located, they can interact frequently and learn from each other. This is referred to by [REF52] as Communities of Practice (CoP), where knowledge is shared in its natural context, for example through storytelling. In large organisations where people are geographically spread out, however, IT is considered helpful for connecting people and spreading knowledge. In Statoil, professional networks are established to enable experience transfer between network members. Network members may be co-located or geographically spread out, and I have therefore looked at how experiences are transferred between them. Special attention has been paid to the value of IT for connecting network members who are geographically dispersed. Throughout this report I argue that professional networks are similar to CoPs, and the two are compared continuously. To gather information about professional networks, I conducted an empirical study in which network members and leaders from three professional networks were interviewed.
442

Access Control in Heterogenous Health Care Systems : A comparison of Role Based Access Control Versus Decision Based Access Control

Magnussen, Gaute, Stavik, Stig January 2006
Role based access control (RBAC) is widely used in health care systems today. Some of the biggest systems in use at Norwegian hospitals utilize role based integration. The basic concept of RBAC is that users are assigned to roles, permissions are assigned to roles, and users acquire permissions by being members of roles. An alternative approach to role based access distribution is that information should be available only to those who take active part in a patient's treatment. This approach is called decision based access control (DBAC). While some RBAC implementations grant access to groups of people by ward, DBAC ensures that access to the relevant parts of the patient's medical record is given for treatment purposes, regardless of which department the health care worker belongs to. Until now, the granularity described by the legal framework has been difficult to follow; the practical approach has been to grant access to the entire ward or organisational unit in which the patient currently resides. For the protection of personal privacy, it is not acceptable that every medical record is available to every clinician at all times. The most important reason to implement DBAC where RBAC exists today is to obtain an access control model that is more dynamic: users should have the access they need to perform their job at all times, but no more access than needed. With RBAC, practice has shown that it is very hard to define dynamic access rules when properties such as the time and tasks of an employee's work change. This study reveals that nearly all security measures in the RBAC systems can be overridden by the use of emergency access features. These features are used extensively in everyday work at the hospitals and thereby create a security risk, while conformance with the legal framework is not maintained. Two scenarios are simulated in a fictional RBAC and DBAC environment in this report. The results of the simulation show that a complete audit of the logs containing access right enhancements in the RBAC environment is infeasible at a large hospital, and even checking a few percent of the entries is a very large job. Changing from RBAC to DBAC would probably change this situation for the better. Some economic advantages are also pointed out: if a change is made, a considerable amount of the time health care workers currently spend unblocking access to information they need in their everyday work would be saved.
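To make the contrast above concrete, here is a minimal sketch of the two access checks in Python. The class names, the `treatment_team` mapping, and the overall model are illustrative assumptions for this listing only; they are not the access control design or code evaluated in the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    patient_id: str
    ward: str

@dataclass
class User:
    user_id: str
    roles: set = field(default_factory=set)

class RBACPolicy:
    """Role based: access follows from roles mapped to wards, not from who treats the patient."""
    def __init__(self, role_to_wards):
        self.role_to_wards = role_to_wards            # e.g. {"cardiology_nurse": {"cardiology"}}

    def may_read(self, user: User, record: Record) -> bool:
        allowed_wards = set()
        for role in user.roles:
            allowed_wards |= self.role_to_wards.get(role, set())
        return record.ward in allowed_wards

class DBACPolicy:
    """Decision based: access follows from a documented decision to take part in the treatment."""
    def __init__(self, treatment_team):
        self.treatment_team = treatment_team          # patient_id -> set of user_ids on the team

    def may_read(self, user: User, record: Record) -> bool:
        return user.user_id in self.treatment_team.get(record.patient_id, set())
```

Under the RBAC variant, any clinician holding a ward-wide role can open every record in that ward; under the DBAC variant a record opens only for clinicians registered on that patient's treatment team, which is the more dynamic, need-based behaviour the abstract argues for.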
443

Benchmarking significant DBMS costs on Niagara in order to perform a relative performance comparison between the Shared Nothing and the Shared Everything DBMS memory architectures

Bjørk, Lars-Erik, Jørgensen, Truls Rinnan January 2006
This report carries out a relative performance comparison between two DBMS architectures on the Multi Core, Single Die (MCSD) realization Niagara. The two DBMS architectures in question are Shared Nothing (SN) and Shared Everything (SE). The MCSD field is rapidly evolving, and we expect this technology to become increasingly important in the near future. In order to carry out the comparison, the performance of the architectures must be calculated, and this calculation depends on the cost figures associated with each architectural approach. To identify these costs, we present the design solutions made and the results discovered in our previous work. Based on this, the most significant costs are determined and scheduled to be micro-benchmarked. The natural next step is to examine possible techniques for implementing the benchmarks. To do this, we first expand on the Niagara chip and the platform on which the micro-benchmarks will run. Having established a sufficient theoretical platform, we move on to describe the implementation of each micro-benchmark in detail. After benchmarking the most significant costs, we thoroughly discuss the results, some of which are indeed surprising. The costs that are not benchmarked are based on assumptions from our previous work and recalculated to apply to Niagara. For both SN and SE, we evaluate the system for two classes of transactions: transactions touching one tuple (called simple) and transactions touching four tuples (called complex). Each class has two instances, read and update. To support the subsequent analysis, the decomposition of each transaction is presented in detail. When analyzing the outcome of the calculations, interesting results emerge. First, we note that SE is the cheapest alternative for the simple transactions, because the SN approach includes an administrative overhead component that does not pay off when the transaction only touches one tuple. For complex transactions, however, the overhead component results in a parallel gain for SN, which outperforms SE. Based on the most dominant costs of both architectures, we perform a sensitivity analysis: for SN it is based on the cost of message passing, and for SE on the cost of synchronization. The goal of this analysis is twofold. First, it is interesting to see how the results vary, for example what the ratio between the cost of message passing and the cost of synchronization must be for the two approaches to perform equally well. Second, the analysis indicates how sensitive each architecture is to estimation errors. The sensitivity analysis examines the performance of SN and SE when the ratio between the cost of message passing and the cost of synchronization is varied, in both the read and the update cases. In addition to the simple and the complex transactions, we examine general transactions where the number of operations is not predetermined. The analysis of the general read transaction suggests that when the number of operations increases, the message passing and synchronization costs wipe out the impact of the other costs. It also suggests that when the cost of message passing is greater than 4 times the cost of synchronization, SE performs better as the number of read operations increases; conversely, if message passing is cheaper than 4 times the cost of synchronization, SN is preferable. When increasing the number of update operations, the corresponding ratio is 3.33. After concluding the analysis, we suggest a hybrid architecture that might combine the advantages of SN and SE: at the cost of introducing both message passing and synchronization, it introduces parallelism in SE. Lastly, we identify suggestions for future work. We believe that several of these suggestions, if realized and applied to the DBMS model introduced in this report, can reduce some of the costs presented.
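The break-even reasoning above can be illustrated with a toy calculation. The cost model below is a deliberate simplification invented for this sketch (SE pays one synchronization per operation; SN pays a fixed administrative overhead plus one message per operation, but parallelizes the per-tuple work across its nodes); neither the formulas nor the numbers are the figures benchmarked in the thesis.

```python
# Toy break-even comparison between Shared Nothing (SN) and Shared Everything (SE).
# Deliberately simplified model, NOT the thesis's benchmarked cost model.

def se_cost(n_ops: int, c_sync: float, c_work: float) -> float:
    # SE: every operation pays a synchronization cost plus its per-tuple work.
    return n_ops * (c_sync + c_work)

def sn_cost(n_ops: int, c_msg: float, c_work: float, nodes: int, c_admin: float) -> float:
    # SN: fixed administrative overhead, one message per operation,
    # and the per-tuple work spread in parallel across the participating nodes.
    parallel_work = (n_ops * c_work) / min(nodes, n_ops)
    return c_admin + n_ops * c_msg + parallel_work

def breakeven_ratio(n_ops: int, c_sync: float, c_work: float, nodes: int, c_admin: float) -> float:
    """Return the c_msg / c_sync ratio at which SN and SE cost the same for this transaction."""
    parallel_work = (n_ops * c_work) / min(nodes, n_ops)
    c_msg_at_breakeven = (se_cost(n_ops, c_sync, c_work) - c_admin - parallel_work) / n_ops
    return c_msg_at_breakeven / c_sync

if __name__ == "__main__":
    # Arbitrary illustrative parameters; the thesis derives its figures from micro-benchmarks.
    for n in (1, 4, 16, 64):
        r = breakeven_ratio(n, c_sync=1.0, c_work=3.0, nodes=4, c_admin=2.0)
        print(f"{n:3d} operations: SN is preferable while c_msg < {r:.2f} * c_sync")
```

Even with these made-up numbers, the qualitative pattern described in the abstract appears: for a single-tuple transaction the fixed SN overhead makes SE cheaper regardless of message cost, while for larger transactions SN tolerates increasingly expensive messages before SE wins.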
444

Development of a Demand Driven Dom Parser

Alvestad, Gaute Odin, Gausnes, Ole Martin, Kråkenes, Ole-Jakob January 2006
XML is a tremendously popular markup language for internet applications as well as a storage format. XML document access is often done through an API, and perhaps the most important of these is the W3C DOM. The W3C recommendation defines a number of interfaces for a developer to access and manipulate XML documents, but it does not define the implementation-specific approaches used behind the interfaces. A problem with the W3C DOM approach, however, is that documents are often loaded into memory as a node tree of objects representing the structure of the XML document. This tree is memory-consuming and can take up 4-10 times the document size. Lazy processing has been proposed, building the node tree as new parts of the document are accessed, but once the whole document has been accessed, the overhead compared to traditional parsers, both in memory usage and performance, is high. In this thesis a new and alternative approach is introduced, using well-known indexing schemes for XML, basic techniques for reducing memory consumption, and principles for memory handling in operating systems. By using a memory cache repository for DOM nodes and simultaneously applying principles of lazy processing, the proposed implementation has full control over memory consumption. The proposed prototype is called the Demand Driven DOM Parser (D3P). It removes the least recently used nodes from memory when the cache exceeds its memory limit, which enables D3P to process documents with low memory requirements. An advantage of this approach is that the parser is able to process documents that exceed the size of main memory, which is impossible with traditional approaches. The implementation is evaluated and compared with other implementations, both lazy parsers and traditional parsers that build everything in memory on load. The proposed implementation performs well when the bottleneck is memory usage, because the user can set the desired amount of memory to be used by the XML node tree. On the other hand, as the coverage of the document increases, the time spent processing the node tree grows beyond that of traditional approaches.
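As an illustration of the eviction idea described above, here is a minimal sketch of an LRU-bounded node cache in Python. The class, method, and callback names (including `parse_node_from_index`) are invented for illustration and do not correspond to D3P's actual implementation; a real demand-driven parser would also need the XML index and node materialisation logic, which are omitted.

```python
from collections import OrderedDict

class LRUNodeCache:
    """Keep at most `capacity` DOM nodes in memory; evict the least recently used.

    Hypothetical illustration of the eviction policy described in the abstract,
    not D3P's actual data structure.
    """

    def __init__(self, capacity: int, materialize):
        self.capacity = capacity
        self.materialize = materialize   # callable: node_id -> node, re-parses on demand
        self._nodes = OrderedDict()      # node_id -> node, ordered by recency of use

    def get(self, node_id):
        if node_id in self._nodes:
            self._nodes.move_to_end(node_id)        # mark as most recently used
            return self._nodes[node_id]
        node = self.materialize(node_id)            # cache miss: rebuild from the document
        self._nodes[node_id] = node
        if len(self._nodes) > self.capacity:
            self._nodes.popitem(last=False)         # drop the least recently used node
        return node

# Usage sketch: cache = LRUNodeCache(capacity=10_000, materialize=parse_node_from_index)
```

Because the cache size is fixed by the user, memory consumption stays bounded even when the document is larger than main memory, at the cost of re-materialising evicted nodes when they are touched again.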
445

Scenario testing in a real environment : Key card Administration System at the University Hospital in North Norway

Halmø, Yngve, Jenssen, Geir-Arne January 2006
Software is gradually replacing paper-based administration systems. The migration to electronic systems is supposed to make life easier for the users; if this is to be the case, these software systems must be created in such a way that the end users are able to use them effectively. To achieve usable systems, software testing must be utilized. There are many ways to test a program, with or without involving real users. Scenario testing is a somewhat poorly documented discipline within software testing, with ambiguous definitions. It does, however, seem well suited, in combination with users, for testing the external parts of a software system at a late stage of development. This project is based on the work done in the software engineering depth study [12], where we conducted empirical work and internal testing of the software system KAS and laid the foundation for this Master's thesis. In this report we have continued the work with this software and concentrated on its external characteristics and user testing. We have analyzed scenario testing further through a software test of this system involving its future users. The users were given tasks to complete through stories that explain what to do, but not how to do it. We observed the test subjects closely throughout the tests and collected important data. The results have been evaluated in order to assess their usefulness, which in turn reflects the quality of scenario testing as a testing method. The results have also spawned functional requirements which have been implemented in the KAS. Through this project we have gained experience that can be useful to others conducting scenario tests or doing research in software testing in the future.
446

Software Architecture of the Algorithmic Music System ImproSculpt

Semb, Thor Arne Gald, Småge, Audun January 2006
This document investigates how real-time algorithmic music composition software constrains and shapes software architecture. To accomplish this, we have employed a method known as Action Research on the software system ImproSculpt. ImproSculpt is a real-time algorithmic music composition system for use in both live performance and studio contexts, created by Øyvind Brandtsegg. Our role was to improve the software architecture of ImproSculpt while gathering data for our research goal. To get an overview of architecture and architectural tactics we could use to improve the structure of the system, we first conducted a literature study on the subject. A design phase followed, where the old architecture was analyzed and a new system architecture was proposed. After the design phase was completed, we performed four iterations of the action research cyclical process model, implementing our new architecture step by step and evaluating and learning from the process as we went along. This project is a follow-up to our previous research project, "Artistic Software" [3], which investigated how algorithmic composition was influenced by software.
447

Multi-Formalism Modelling of a Submarine Combat System Test Facility: an Application of DEVS

Skogstad, Kjell-Inge January 2006
This thesis aims at applying and exploring DEVS-based theory in order to gain experience with, and form recommendations on, the usefulness of DEVS in the analysis of a submarine combat system and in establishing simulation credibility. First, a study of the DEVS-based literature is performed; then a case study is carried out, targeting a subset of the submarine combat system test bed under construction at Forsvarets forskningsinstitutt (FFI). In doing so, an architectural description of the subset based on DEVS is created, and special requirements regarding legacy components and technologies are discussed. Finally, the usefulness of DEVS is discussed based on the experiences made during the first two tasks. Note that implementation, and aspects regarding an executable framework for the simulator, are not covered. The findings of this study indicate that DEVS, with its formal nature, can be a valuable tool both for analysis and for establishing credibility. Especially with regard to couplings and composability, DEVS might prove helpful: the formal specifications and definitions can ensure that consistency is achieved, and a formal experimental frame ensures an unambiguous foundation for creating the simulation. With this in mind, an issue was discovered regarding how far DEVS should go in covering, within the abstract model, aspects that can be argued to be part of the simulator. A solution involving the use of one or more lumped models is suggested, but it needs further study. Finally, the need for proper tools and a graph notation is emphasised if DEVS is to be practical in complex simulations.
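For readers unfamiliar with the formalism, the following sketch shows the shape of a classic atomic DEVS model: a state, a time-advance function, internal and external transition functions, and an output function. The concrete `Processor` example and its naming are illustrative only and are not taken from the thesis or the FFI test bed.

```python
from abc import ABC, abstractmethod

class AtomicDEVS(ABC):
    """Classic atomic DEVS model: state plus ta, delta_int, delta_ext, and lambda."""

    @abstractmethod
    def time_advance(self) -> float:
        """ta(s): time until the next internal event in the current state."""

    @abstractmethod
    def internal_transition(self) -> None:
        """delta_int(s): state change when the time advance expires."""

    @abstractmethod
    def external_transition(self, elapsed: float, inputs: list) -> None:
        """delta_ext(s, e, x): state change on arrival of external input."""

    @abstractmethod
    def output(self):
        """lambda(s): output emitted just before an internal transition."""


class Processor(AtomicDEVS):
    """Illustrative single-job processor: accepts a job, emits it after `service_time`."""

    def __init__(self, service_time: float):
        self.service_time = service_time
        self.job = None

    def time_advance(self) -> float:
        return self.service_time if self.job is not None else float("inf")

    def internal_transition(self) -> None:
        self.job = None                 # job finished; become passive

    def external_transition(self, elapsed: float, inputs: list) -> None:
        if self.job is None:            # for simplicity, ignore arrivals while busy
            self.job = inputs[0]

    def output(self):
        return self.job
```

Coupled DEVS models are then built by wiring such atomic components together through their input and output ports, which is where the formalism's explicit treatment of couplings and composability, noted in the abstract, comes into play.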
448

BUCS Implementing safety : An approach as to how to implement safety concerns

Vindegg, Ole-Johan Sikkeland January 2005
BUCS Implementing safety: An approach as to how to implement safety concerns.
449

An empirical study of component-based software engineering in Statoil

Ha, Vu, Tran, Kiet Ve January 2006
Our Master's thesis is an extension of the thesis we wrote in the autumn of 2005.
450

Component Based System Development in the Norwegian Software Industry

Sommerseth, Marius January 2006
Today it has become common practice to apply systematic reuse during software development. Through reuse, the gain from creating a piece of software can be multiplied: instead of creating a new component each time, old ones can be reused. This increases productivity (shorter time-to-market, lower cost) and also software quality, as components become well tested through use in different systems. There are, however, many ways of applying reuse. Different types of components can be applied in systematic reuse; the most common ones are internally developed, OSS, COTS, or outsourced components. There are also many different ways to share and access the components among developers, and today all companies that apply reuse have some sort of distributed way of sharing. Using product families is also one way of applying reuse. This can take reuse to another level, as the reused parts can be large, and it can also be used for branding a line of products. The main part of this thesis is a quantitative survey that was executed with a questionnaire, in which 32 Norwegian software companies participated. The questionnaire asked who applied reuse and product families, how they applied them, and what the respondents thought was important when applying them. The collected data are used to answer three research questions and are also discussed against related research. The data are further used to see whether there are any differences in how reuse is applied in companies of different sizes, and internally in departments as opposed to whole companies. The impact of different programming languages and development processes/methods on reuse is also explored. This survey builds upon the pre-study "Reuse through product-families and framework" [MS00], in which subjects from 12 Norwegian software development companies were interviewed about how they utilized reuse and product families. That was a qualitative survey with open questions, used to discover trends in Norwegian software development companies, and these trends are examined in this thesis. Data from another survey, conducted by IKT-Norge, are also used in this thesis, but only the questions added specifically for NTNU; these concerned process improvement as well as reuse. A total of 142 Norwegian companies responded to that survey, and 60 answered the extra questions. The IKT-Norge survey is also compared against the thesis survey.
