  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
401

Workflow testing

Ho, Kam Seng January 2011 (has links)
University of Macau / Faculty of Science and Technology / Department of Computer and Information Science
402

Improving maintainability on modern cross-platform projects

Berglund, Dan January 2013 (has links)
As software systems grow in size they also grow in complexity. If the increased complexity is not managed, the system becomes increasingly difficult to maintain. The effects of unmaintainable software are even more pronounced when using an agile development process. By increasing the maintainability of the system, these problems can be addressed and the system can be extended with sustained efficiency. This thesis evaluates the development process of a modern, agile company in order to find changes that will promote increased maintainability. The result is a modified process that increases maintainability with the smallest possible overhead for the development organisation. It is based on earlier studies of development technologies that have been shown to increase maintainability. The implementation of these technologies is adjusted to fit the development team, and technologies that are not suitable for the team are rejected.
403

Simulation Software as a Service and Service-Oriented Simulation Experiment

Guo, Song 28 July 2012 (has links)
Simulation software is being increasingly used in various domains for system analysis and/or behavior prediction. Traditionally, researchers and field experts need to have access to the computers that host the simulation software to do simulation experiments. With recent advances in cloud computing and Software as a Service (SaaS), a new paradigm is emerging where simulation software is used as services that are composed with others and dynamically influence each other for service-oriented simulation experiment on the Internet. The new service-oriented paradigm brings new research challenges in composing multiple simulation services in a meaningful and correct way for simulation experiments. To systematically support simulation software as a service (SimSaaS) and service-oriented simulation experiment, we propose a layered framework that includes five layers: an infrastructure layer, a simulation execution engine layer, a simulation service layer, a simulation experiment layer and finally a graphical user interface layer. Within this layered framework, we provide a specification for both simulation experiment and the involved individual simulation services. Such a formal specification is useful in order to support systematic compositions of simulation services as well as automatic deployment of composed services for carrying out simulation experiments. Built on this specification, we identify the issue of mismatch of time granularity and event granularity in composing simulation services at the pragmatic level, and develop four types of granularity handling agents to be associated with the couplings between services. The ultimate goal is to achieve standard and automated approaches for simulation service composition in the emerging service-oriented computing environment. 
Finally, to achieve more efficient service-oriented simulation, we develop a profile-based partitioning method that exploits a system’s dynamic behavior and uses it as a profile to guide the spatial partitioning for more efficient parallel simulation. We develop the work in this dissertation within the application context of wildfire spread simulation, and demonstrate the effectiveness of our work based on this application.
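The granularity mismatch described above can be made concrete with a small sketch. The following is a hypothetical time-granularity handling agent: it sits on the coupling between two simulation services and aggregates fine-grained outputs (e.g. per-minute fire-spread updates) into the coarser time step a downstream service expects. The function names and data shapes are illustrative assumptions, not the thesis's actual interfaces.

```python
# Hypothetical time-granularity agent on a coupling between two services:
# aggregates (time, value) events emitted every `fine_step` time units
# into events emitted every `coarse_step` time units.
def granularity_agent(events, fine_step, coarse_step, combine=sum):
    assert coarse_step % fine_step == 0, "steps must align"
    ratio = coarse_step // fine_step
    out = []
    for i in range(0, len(events), ratio):
        window = events[i:i + ratio]
        t_coarse = window[-1][0]              # stamp with the window's end time
        out.append((t_coarse, combine(v for _, v in window)))
    return out

fine = [(t, 1.0) for t in range(1, 7)]        # six per-minute updates
coarse = granularity_agent(fine, fine_step=1, coarse_step=3)
print(coarse)  # [(3, 3.0), (6, 3.0)]
```

An event-granularity agent would be analogous, but would split or merge discrete events rather than resample a time series.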
405

Nej, men vad dålig jag känner mig nu! ("Oh, I feel so stupid!"): A study on designing a user-friendly workflow for electronic signing

Olby, Ida January 2011 (has links)
A continuous increase in the accessibility of the Internet has meant that several errands previously performed outside the home, such as purchasing books, movies and clothes, can now be carried out online as well. Today, several companies also offer services for signing documents electronically. In this thesis I have chosen to study the workflow for such electronic signatures. The study focuses on the solution from the company Comfact, which uses SMS as part of the identification process. The steps users go through to sign a document are not clearly defined. By studying the workflow of Comfact's electronic signing, I hope to provide insight into how to design a flow where users understand the process and feel that signing documents online is simple and straightforward. The preliminary study consisted of a heuristic evaluation of the service, a competitor analysis of similar services, and usability tests. The results showed that the main problem with the current solution is that certain words and sentences are phrased in a way that participants do not understand. Combined with poor information about what is happening, this results in an unclear workflow that causes users to reflect on the process itself more than should be necessary. Using the knowledge gained through the theoretical and preliminary studies, I developed a proposal for a new user interface. I then tested this interface on a new group of users and was able to demonstrate measurable improvements over the previous interface.
The conclusion of the study is that an interface should be designed with the following aspects in mind, in order to better guide users through the signing process: 1) Design for the users, not the underlying technology. 2) Provide clear information about what is happening. 3) The different elements of the interface should be visible. 4) The design should be consistent. 5) Use functions that users recognise. By designing according to these principles, users exclaiming "Oh, I feel so stupid!" can be avoided, and more users will instead say: "Aha, I understand!"
406

A photographic journey through Vietnam

Häggström, Björn January 2005 (has links)
This degree project consists of "A photographic journey through Vietnam". One month was spent in Vietnam, where different aspects of Vietnamese life were documented in images. The journey began in Hanoi and continued south through the country to Ho Chi Minh City. The report describes the compositional elements of photography and makes an attempt to describe what a "good" image is. Furthermore, it explains what equipment is necessary for such a journey and how one can interact with the local population. When the journey came to an end, a photographic book consisting of 200 images was created. The report details the workflow used, step by step. Finally, the author comments on 20 of the selected images with regard to their photographic composition.
407

Enhancing Data Processing on Clouds with Hadoop/HBase

Zhang, Chen January 2011 (has links)
In the current information age, large amounts of data are being generated and accumulated rapidly in various industrial and scientific domains. This imposes important demands on data processing capabilities that can extract sensible and valuable information from the large amount of data in a timely manner. Hadoop, the open source implementation of Google's data processing framework (MapReduce, Google File System and BigTable), is becoming increasingly popular and is being used to solve data processing problems in various application scenarios. However, having been originally designed for handling very large data sets that can be divided easily into parts to be processed independently with limited inter-task communication, Hadoop lacks applicability to a wider range of use cases. As a result, many projects are under way to enhance Hadoop for different application needs, such as data warehouse applications, machine learning and data mining applications, etc. This thesis is one such research effort in this direction. The goal of the thesis research is to design novel tools and techniques to extend and enhance the large-scale data processing capability of Hadoop/HBase on clouds, and to evaluate their effectiveness in performance tests on prototype implementations. Two main research contributions are described. The first contribution is a light-weight computational workflow system called "CloudWF" for Hadoop. The second contribution is a client library called "HBaseSI" supporting transactional snapshot isolation (SI) in HBase, Hadoop's database component. CloudWF addresses the problem of automating the execution of scientific workflows composed of both MapReduce and legacy applications on clouds with Hadoop/HBase. CloudWF is the first computational workflow system built directly using Hadoop/HBase.
It uses novel methods in handling workflow directed acyclic graph decomposition, storing and querying dependencies in HBase sparse tables, transparent file staging, and decentralized workflow execution management relying on the MapReduce framework for task scheduling and fault tolerance. HBaseSI addresses the problem of maintaining strong transactional data consistency in HBase tables. This is the first SI mechanism developed for HBase. HBaseSI uses novel methods in handling distributed transactional management autonomously by individual clients. These methods greatly simplify the design of HBaseSI and can be generalized to other column-oriented stores with architectures similar to HBase's. As a result of the simplicity in design, HBaseSI adds low overhead to HBase performance and directly inherits many desirable properties of HBase. HBaseSI is non-intrusive to existing HBase installations and user data, and is designed to work with a large cloud in terms of data size and the number of nodes in the cloud.
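The idea of storing workflow DAG dependencies in a sparse table can be sketched in a few lines. The dictionary below stands in for an HBase table (one row per workflow block, one sparse column per prerequisite), and the function marks a block as runnable once every prerequisite is done. The table layout and names here are assumptions for illustration, not CloudWF's actual schema.

```python
# Stand-in for an HBase sparse table: row key -> {column -> value}.
# A block becomes runnable once every "dep:<block>" prerequisite is done.
table = {
    "wf1.stage":    {"status": "done"},
    "wf1.simulate": {"dep:wf1.stage": "", "status": "pending"},
    "wf1.reduce":   {"dep:wf1.simulate": "", "status": "pending"},
}

def runnable(table):
    ready = []
    for block, row in table.items():
        if row.get("status") != "pending":
            continue
        deps = [c.split(":", 1)[1] for c in row if c.startswith("dep:")]
        if all(table[d].get("status") == "done" for d in deps):
            ready.append(block)
    return ready

print(runnable(table))  # ['wf1.simulate']
```

Because each worker can scan the table and claim ready blocks independently, execution management needs no central scheduler, which matches the decentralized design the abstract describes.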
408

A Systematic Approach to Offshore Fields Development Using an Integrated Workflow

Alqahtani, Mari H. 2010 August 1900 (has links)
I present a systematic method for the primary development of existing black-oil fields. The method uses an integrated reservoir development workflow (IRDW) that relies on an integrated asset model (IAM). Developing any existing field means providing a plan that serves the development goal(s) specified by management. However, serving the development goal(s) by itself does not guarantee an optimal development plan. Plans that do not rely on an IAM are less accurate. Some plans do not include economics in their evaluation; such plans are technically acceptable but usually impractical or unprofitable. Plans that evaluate the field only under current, or short-term, conditions are potential candidates for bottlenecks, and thus costly re-evaluations. In addition, plans that do not consider all suitable options are misleading and leave no room for optimization. Finally, some plans are based on "rules of thumb," ease of operations, or operators' preference rather than on technical evaluation. These plans mostly lower long-term profitability and cause further production problems. To overcome these problems, project management must form a multidisciplinary team that uses the IRDW. The IRDW guides the team through its phases, stages, and steps to select the optimal development plan. The IAM consists of geological, reservoir, wellbore, facility, and economic models. The IRDW dictates building an IAM for the base (do-nothing) case and for each development plan. The team must evaluate each scenario over the lifetime of the field, or over the timeframe management specifies. The net present value (NPV) and present value ratio (PVR) of all options are compared to the base case and against each other. The optimal development plan is the one that has the highest NPV and the highest PVR. The results of the research showed that forming a multidisciplinary team and using a LDFC saves time and guarantees selecting the optimal development plan if all applicable development options are considered.
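The economic comparison step can be sketched numerically. The snippet below computes NPV by discounting yearly net cash flows, and takes PVR as the present value of net inflows divided by the present value of investment; PVR definitions vary in practice, so treat that and the cash-flow figures as assumptions for illustration, not values from the thesis.

```python
# NPV: discount each year's net cash flow back to today at rate `rate`.
def npv(cash_flows, rate):
    """cash_flows[t] is the net cash flow in year t (t = 0 is today)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# PVR (one common definition, assumed here): PV of inflows / PV of investment.
def pvr(inflows, investments, rate):
    return npv(inflows, rate) / npv(investments, rate)

rate = 0.10
base = npv([0, 50, 45, 40], rate)        # do-nothing case
plan_a = npv([-100, 90, 85, 80], rate)   # a hypothetical development plan
print(round(base, 1), round(plan_a, 1))
```

A plan is worth pursuing only if its NPV and PVR beat both the base case and the competing plans over the field's evaluation horizon, exactly the comparison the workflow prescribes.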
409

Bio-jETI : a framework for semantics-based service composition

Lamprecht, Anna-Lena, Margaria, Tiziana, Steffen, Bernhard January 2009 (has links)
Background: The development of bioinformatics databases, algorithms, and tools throughout the last years has led to a highly distributed world of bioinformatics services. Without adequate management and development support, in silico researchers are hardly able to exploit the potential of building complex, specialized analysis processes from these services. The Semantic Web aims at thoroughly equipping individual data and services with machine-processable meta-information, while workflow systems support the construction of service compositions. However, even in this combination, in silico researchers currently would have to deal manually with the service interfaces, the adequacy of the semantic annotations, type incompatibilities, and the consistency of service compositions. Results: In this paper, we demonstrate by means of two examples how Semantic Web technology together with adequate domain modelling frees in silico researchers from dealing with interfaces, types, and inconsistencies. In Bio-jETI, bioinformatics services can be graphically combined into complex services without worrying about details of their interfaces or about type mismatches in the composition. These issues are taken care of at the semantic level by Bio-jETI's model checking and synthesis features. Whenever possible, they automatically resolve type mismatches in the considered service setting. Otherwise, they graphically indicate impossible or incorrect service combinations. In the latter case, the workflow developer may either modify the service composition using semantically similar services, or ask for help in developing the missing mediator that correctly bridges the detected type gap. Newly developed mediators should then be adequately annotated semantically and added to the service library for later reuse in similar situations. Conclusion: We show the power of semantic annotations in an adequately modelled and semantically enabled domain setting.
Using model checking and synthesis methods, users may orchestrate complex processes from a wealth of heterogeneous services without worrying about interfaces and (type) consistency. The success of this method strongly depends on a careful semantic annotation of the provided services and on its consequent exploitation for analysis, validation, and synthesis. We are convinced that these annotations will become standard, as they will become preconditions for the success and widespread use of (preferred) services in the Semantic Web.
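The type-checking and mediator-insertion behaviour described above can be illustrated with a minimal sketch: each service is annotated with a semantic input and output type, a composition chain is checked pairwise, and a known mediator (converter) is spliced in where the types do not match. The service names, types, and mediator here are invented for illustration; they are not Bio-jETI's actual service library.

```python
# Semantic annotations, assumed for illustration: service -> (input type, output type).
services = {
    "fetch_seq": ("AccessionID", "FASTA"),
    "align":     ("FASTA", "Alignment"),
    "draw_tree": ("Newick", "TreeImage"),
}
# Known mediators bridging a (produced, expected) type gap.
mediators = {("Alignment", "Newick"): "build_tree"}

def check_and_mediate(chain):
    """Return the chain with mediators inserted, or raise on an unbridgeable gap."""
    fixed = [chain[0]]
    for prev, nxt in zip(chain, chain[1:]):
        out_t, in_t = services[prev][1], services[nxt][0]
        if out_t != in_t:
            med = mediators.get((out_t, in_t))
            if med is None:
                raise TypeError(f"no mediator for {out_t} -> {in_t}")
            fixed.append(med)
        fixed.append(nxt)
    return fixed

print(check_and_mediate(["fetch_seq", "align", "draw_tree"]))
```

When no mediator is known, the raised error corresponds to the case where the framework flags the combination and the developer must supply and annotate a new mediator for later reuse.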
410

The Design and Implementation of a Business Process Analyzer

Yu, Chia-ping 21 May 2000 (has links)
Business process reengineering (BPR) has been considered one of the key approaches to increasing the competitive edge of many modern enterprises. Many large enterprises have applied varying degrees of reengineering to their business processes. The importance of understanding the existing business processes and evaluating new business processes before they are actually deployed is commonly recognized; without careful examination of the existing and new processes, the change in business processes may not bring the expected benefits. In this research, we look into business process analysis issues within the scope of BPR. We first examine various models for business processes. As each model is invented with a purpose, e.g., for identifying the critical path in a factory manufacturing environment, or for automating workflow in an office environment, they may not be completely suitable for business process analysis. We try to identify the requirements of business process analysis and propose a model to meet these requirements. We finally design and implement a business process analyzer. This business process analyzer uses our proposed business process model and is able to answer queries from the BPR team expressed in our proposed query language.
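One of the analyses such a business process analyzer could answer, critical-path identification, can be sketched as follows. Activities with durations and precedence links form an acyclic graph, and the query returns the longest-duration path. The model and the example process are illustrative assumptions, not the thesis's actual process model or query language.

```python
# Critical (longest-duration) path through an acyclic precedence graph.
def critical_path(durations, preds):
    finish, best_pred = {}, {}
    def earliest(a):                      # earliest finish time, memoised DFS
        if a not in finish:
            start = max((earliest(p) for p in preds.get(a, [])), default=0)
            if preds.get(a):
                best_pred[a] = max(preds[a], key=earliest)
            finish[a] = start + durations[a]
        return finish[a]
    end = max(durations, key=earliest)    # activity with the latest finish
    path = [end]
    while path[-1] in best_pred:          # walk back along the longest chain
        path.append(best_pred[path[-1]])
    return list(reversed(path)), finish[end]

durations = {"receive": 1, "check": 2, "approve": 3, "ship": 1}
preds = {"check": ["receive"], "approve": ["receive"], "ship": ["check", "approve"]}
print(critical_path(durations, preds))  # (['receive', 'approve', 'ship'], 5)
```

A query language over such a model would let the BPR team ask for bottleneck activities like these before and after a proposed redesign.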
