851

Autonomic Product Development Process Automation

Daley, John E. 12 July 2007 (has links) (PDF)
Market globalization and mass customization requirements are forcing companies towards automation of their product development processes. Many task-specific software solutions provide localized automation. Coordinating these local solutions to automate higher-level processes incurs significant software maintenance costs due to the incompatibility of the software tools and the dynamic nature of the product development environment. Current automation methods do not provide the required level of flexibility to operate in this dynamic environment. An autonomic product development process automation strategy is proposed in order to provide a flexible, standardized approach to product development process automation and to significantly reduce the software maintenance costs associated with traditional automation methods. Key elements of the strategy include a formal approach to decompose product development processes into services, a method to describe functional and quality attributes of services, a process modeling algorithm to configure processes composed of services, a method to evaluate process utility based on quality metrics and user preferences, and an implementation that allows a user to instantiate the optimal process. Because the framework allows a user to rapidly reconfigure and select optimal processes as new services are introduced or as requirements change, the framework should reduce burdensome software maintenance costs associated with traditional automation methods and provide a more flexible approach.
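To make the utility-evaluation step concrete, here is a minimal sketch in Python of scoring candidate processes composed of services by a weighted sum of normalized quality metrics and user preference weights. All names and numbers are hypothetical; the thesis does not publish its scoring code, and its actual utility function may differ.

```python
from dataclasses import dataclass

@dataclass
class CandidateProcess:
    name: str
    metrics: dict  # quality-attribute name -> normalized score in [0, 1]

def utility(process: CandidateProcess, preferences: dict) -> float:
    """Weighted-sum utility over normalized quality metrics."""
    return sum(preferences.get(k, 0.0) * v for k, v in process.metrics.items())

def select_optimal(candidates, preferences):
    """Return the candidate process with the highest utility."""
    return max(candidates, key=lambda p: utility(p, preferences))

candidates = [
    CandidateProcess("CAD->FEA pipeline A", {"speed": 0.9, "accuracy": 0.6, "cost": 0.7}),
    CandidateProcess("CAD->FEA pipeline B", {"speed": 0.5, "accuracy": 0.9, "cost": 0.8}),
]
prefs = {"speed": 0.2, "accuracy": 0.5, "cost": 0.3}  # user preference weights
print(select_optimal(candidates, prefs).name)
```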
852

Performance of frameworks for declarative data fetching: An evaluation of Falcor and Relay+GraphQL

Cederlund, Mattias January 2016 (has links)
With the rise of mobile devices claiming a greater and greater portion of internet traffic, optimizing the performance of data fetching becomes more important. A common technique for communicating between subsystems of online applications is web services using the REpresentational State Transfer (REST) architectural style. However, REST imposes restrictions on flexibility when creating APIs, potentially introducing suboptimal performance and implementation difficulties. One proposed solution for increasing efficiency in data fetching is the use of frameworks for declarative data fetching. During 2015, two open-source frameworks for declarative data fetching, Falcor and Relay+GraphQL, were released. Because of their recency, no information about their performance impact was available. Using the experimental approach, the frameworks were evaluated in terms of latency, data volume, and number of requests, using test cases based on a real-world news application. The test cases were designed to test single requests as well as parallel and sequential data flows. The filtering abilities of the frameworks were also tested. The results showed that Falcor introduced an increase in response time for all test cases and an increased transfer size for all test cases but one, a case where the data was filtered extensively. The results for Relay+GraphQL showed a decrease in response time for parallel and sequential data flows, but an increase for data fetching corresponding to a single REST API access. The results for transfer size were also inconclusive, but the majority showed an increase. Only when extensive data filtering was applied could the transfer size be decreased. Both frameworks could reduce the number of requests to a single request, independent of how many requests the corresponding REST API needed. These results led to the conclusion that, whenever possible, best performance is achieved by creating custom REST endpoints. However, if this is not feasible, or there are other implementation benefits and the alternative is to resort to a "one-size-fits-all" API, Relay+GraphQL can be used to reduce response times for parallel and sequential data flows, but not for single request-response interactions. Data transfer size can only be reduced if the filtering offered by the frameworks reduces the response size by more than the increased request size they introduce.
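For illustration, a sketch of the difference being measured: a sequential REST flow needs one round trip per resource, while a declarative GraphQL query fetches the same fields in a single request. The endpoints and schema below are hypothetical, not those of the evaluated news application.

```python
import requests  # illustrative only; URLs and fields are invented

# REST: a sequential flow -- fetch an article, then its author (two round trips)
article = requests.get("https://api.example.com/articles/42").json()
author = requests.get(f"https://api.example.com/authors/{article['authorId']}").json()

# GraphQL: the same data declared in one request, with field-level filtering
query = """
query {
  article(id: 42) {
    title
    author { name }
  }
}
"""
resp = requests.post("https://api.example.com/graphql", json={"query": query})
print(resp.json()["data"]["article"]["title"])
```

The request/response asymmetry noted in the abstract is visible here: the GraphQL request body is larger than a REST GET, so transfer size only shrinks when field filtering removes more from the response than the query text adds.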
853

Web-based Tidal Toolbox Of Astronomic Tidal Data For The Atlantic Intracoastal Waterway, Esturaries [sic] And Continental Shelf Of The South Atlantic Bight

Ruiz, Alfredo 01 January 2011 (has links)
A high-resolution astronomic tidal model has been developed that includes detailed inshore regions of the Atlantic Intracoastal Waterway and associated estuaries along the South Atlantic Bight. The unique nature of the model’s development ensures that the tidal hydrodynamic interaction between the shelf and estuaries is fully described. Harmonic analysis of the model output results in a database of tidal information that extends from a semi-circular arc (radius ~750 km) enclosing the South Atlantic Bight from the North Carolina coast to the Florida Keys, onto the continental shelf and into the full estuarine system. The need for tidal boundary conditions (elevation and velocity) for driving inland waterway models has motivated the development of a software application to extract results from the tidal database, which is the basis of this thesis. In this tidal toolbox, the astronomic tidal constituents can be resynthesized for any open water point in the domain over any interval of time in the past, present, or future. The application extracts model results interpolated to a user’s exact geographical points of interest, desired time interval, and tidal constituents. Comparison plots of the model results versus historical data are published on the website at 89 tidal gauging stations. All of the aforementioned features work within a zoom-able geospatial interface for enhanced user interaction. In order to make tidal elevation and velocity data available, a web service serves the data to users over the internet. The tidal database of 497,847 nodes and 927,165 elements has been preprocessed and indexed to enable timely access from a typical modern web server. The preprocessing and web services required are detailed in this thesis, as well as the reproducibility of the Tidal Toolbox for new domains.
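A sketch of the resynthesis the toolbox performs for a point of interest: the elevation time series is reconstructed as a sum of cosines from harmonic constituents. The amplitudes and phases below are illustrative, not values from the database, and nodal corrections are omitted for brevity.

```python
import numpy as np

# Illustrative constituents: (amplitude [m], speed [deg/hour], phase lag [deg])
constituents = {
    "M2": (0.95, 28.9841042, 10.0),
    "S2": (0.15, 30.0000000, 35.0),
    "K1": (0.10, 15.0410686, 190.0),
}

def resynthesize(hours: np.ndarray) -> np.ndarray:
    """Tidal elevation h(t) = sum_i A_i * cos(omega_i * t - g_i)."""
    h = np.zeros_like(hours, dtype=float)
    for amp, speed_deg, phase_deg in constituents.values():
        h += amp * np.cos(np.radians(speed_deg * hours - phase_deg))
    return h

t = np.arange(0.0, 48.0, 0.5)  # two days at half-hour resolution
elevation = resynthesize(t)
print(elevation[:4])
```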
854

A service orientated architecture and wireless sensor network approach applied to the measurement and visualisation of a micro injection moulding process. Design, development and testing of an ESB based micro injection moulding platform using Google Gadgets and business processes for the integration of disparate hardware systems on the factory shop floor

Raza, Umar January 2014 (has links)
Factory shop floors of the future will see a significant increase in interconnected devices for monitoring and control. However, if a Service Orientated Architecture (SOA) is implemented on all such devices, this will result in a large number of permutations of services and composite services. These services, combined with other business-level components, can pose a huge management challenge, as it is often difficult to keep an overview of all the devices, equipment and services. This thesis proposes an SOA-based novel assimilation architecture for integrating disparate industrial hardware-based processes and business processes of an enterprise, in particular in the plastics machinery environment. The key benefits of the proposed architecture are the reduction of complexity when integrating disparate hardware platforms, the management of the associated services, and allowing the Micro Injection Moulding (µIM) process to be monitored on the web through service and data integration. An Enterprise Service Bus (ESB) based middleware layer integrates the Wireless Sensor Network (WSN) based environmental and simulated machine process systems with frontend Google Gadgets (GGs) based web visualisation applications. A business process framework is proposed to manage and orchestrate the resulting services from the architecture. Results from the analysis of the WSN kits in terms of their usability and reliability showed that the Jennic WSN was easy to set up and had a reliable communication link in the polymer industrial environment, with the packet error rate (PER) remaining below 0.5%. The prototype Jennic WSN based µIM process monitoring system had limitations when monitoring high-resolution machine data, so a novel hybrid integration architecture was proposed. The assimilation architecture was implemented on a distributed server based test bed. Results from test scenarios showed that the architecture was highly scalable and could potentially allow a large number of disparate sensor-based hardware systems and services to be hosted, managed, visualised and linked to form a cohesive business process.
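A toy sketch of the mediation role the ESB layer plays in such an architecture: WSN readings are published to a bus, which fans them out to the subscribed services (visualisation feed, process monitor). This simplifies away the real middleware entirely; all names are hypothetical.

```python
from collections import defaultdict

class MiniBus:
    """Toy message bus illustrating ESB-style publish/subscribe mediation."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)

bus = MiniBus()
bus.subscribe("wsn/temperature", lambda m: print("gadget feed:", m))
bus.subscribe("wsn/temperature", lambda m: print("process monitor:", m))

# A WSN node reading enters the bus and fans out to both services
bus.publish("wsn/temperature", {"node": 3, "celsius": 41.2})
```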
855

Comparative Study of Open-Source Performance Testing Tools versus OMEXUS

Xia, Ziqi January 2021 (has links)
With the development of service digitalization and the increased adoption of web services, modern large-scale software systems often need to support a large volume of concurrent transactions. Therefore, performance testing focused on evaluating the performance of systems under workload has gained greater attention in current software development. Although many performance testing tools are available to assist with load generation, there is a lack of a systematic evaluation process to provide guidance and parameters for tool selection in a specific domain. Focusing on business operations as the specific domain and the Nasdaq Central Securities Depository (NCSD) system as an example of a large-scale software system, this thesis explores the opportunities and challenges of existing open-source performance testing tools as measured by usability and feasibility metrics. The thesis presents an approach to evaluating performance testing tools with respect to requirements from the business domain and the system under test. This approach consists of a user study conducted with four quality assurance experts discussing general performance metrics and specific analytical needs. The outcome of the user study provided the assessment metrics for a comparative experimental evaluation of three open-source performance testing tools (JMeter, Locust, and Gatling) with a realistic test scenario. These three tools were evaluated in terms of their affordances and limitations in presenting analytical details of performance metrics, the efficiency of load generation, and the ability to implement realistic load models. The research shows that the user study with potential tool users provided a clear direction when evaluating the usability of the three tools. Additionally, the realistic test case was sufficient to reveal each tool's capability to achieve the same scale of performance as Nasdaq's in-house testing tool OMEXUS and to provide additional value through realistic simulation of user population and user behavior during performance testing with regard to the specified requirements.
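As an example of the kind of test case compared, here is a minimal Locust user class (Locust is one of the three tools evaluated). The endpoint paths and payload are hypothetical, not those of the NCSD system.

```python
from locust import HttpUser, task, between

class SettlementUser(HttpUser):
    """One simulated user in the load model."""
    wait_time = between(0.1, 0.5)  # think time between tasks, in seconds

    @task(3)
    def query_instructions(self):
        self.client.get("/instructions")

    @task(1)
    def submit_instruction(self):
        self.client.post("/instructions", json={"isin": "SE0000000001", "quantity": 100})
```

Run with, for example, `locust -f loadtest.py --host https://system-under-test.example.com`, setting the number of concurrent users in the web UI or via `--users`.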
856

IoT-Based Digital Twin Framework for Environmental Monitoring in the Indoor Environment: Design and Implementation

Adnan Abdullah, Ahmad, Alshehada, Essa January 2022 (has links)
Purpose: This thesis aims to describe how to design and implement an IoT-based digital twin framework for environmental monitoring in the indoor environment. To fulfill the purpose of the study, the following research question is answered: How can a digital twin solution be created utilizing AWS to establish interaction and convergence between the physical environment of a classroom and the virtual environment? Method: The study employed design science research (DSR). DSR is a new method and an effective tool for enhancing engineering education research methods. Results: The study describes in detail the steps required to create the framework. The framework enabled interaction and convergence between the physical and virtual environments in a particular location. Implications: The research contributes to broadening the knowledge of using the Internet of Things (IoT), digital twins (DT), and Amazon Web Services (AWS). The study provides future research with reference data and a framework to build upon. Research limitations: Due to time constraints, the study's scope is limited to the technologies provided by the participating company, Knowit. Knowit AB is a Swedish IT consulting company that supports companies and organizations with services in digital transformation and system development. The study aims to create an AWS-based IoT framework, not to improve the digital twin concept. The framework was implemented at Jönköping University. This work is also limited to temperature and light intensity as environmental parameters.
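A sketch of the physical-to-virtual direction of such a framework, assuming AWS credentials and an IoT Core data endpoint are already configured, and using a topic layout of our own invention (the thesis does not publish its message schema):

```python
import json
import boto3

# Assumes AWS credentials and region are configured in the environment
iot = boto3.client("iot-data", region_name="eu-north-1")

reading = {"room": "classroom-1", "temperature_c": 21.4, "lux": 320}

# Publish a sensor reading to a topic the digital twin subscribes to
iot.publish(
    topic="campus/classroom-1/environment",  # hypothetical topic layout
    qos=1,
    payload=json.dumps(reading).encode("utf-8"),
)
```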
857

Authentication Processes in Cloud-Based Computing Services

Göthesson, Richard, Hedman, Gustav January 2016 (has links)
Previous research has shown deficiencies in various forms of authentication processes that lead to authentication attacks. The goal of our study is to present a number of guidelines that businesses and individuals can follow to minimize the risk of authentication attacks. The methods used to arrive at these guidelines were qualitative: a practical observational study, a literature review, and a survey formed the basis of our collected data. The results of the study indicate that Google Cloud Platform, Amazon Web Services, and Microsoft Azure all have a strong authentication process in comparison with the criticism raised in previous research. The survey also showed that alternative forms of authentication, such as Two-Factor Authentication (2FA) and Multi-Factor Authentication (MFA), are recommended for a strong defense against authentication attacks. The thesis's results also indicate that the user's own responsibility in the authentication process is essential to minimizing the risk of authentication attacks. Secure passwords should be constructed and frequently replaced. Alternative authentication and restriction of the user's access to sensitive information should also be applied.
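To illustrate the recommended second factor, here is a minimal time-based one-time password (TOTP) check using the pyotp library. This is a sketch only; cloud providers implement MFA as a managed service rather than application code.

```python
import pyotp

# Enrolment: generate and store a per-user secret (shown to the user as a QR code)
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Login: after the password check, verify the 6-digit one-time code
submitted_code = totp.now()  # in reality, typed by the user from their device
if totp.verify(submitted_code):
    print("second factor accepted")
else:
    print("authentication rejected")
```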
858

EXTENSION OF AN INTEGRATION SYSTEM OF LEARNING OBJECTS REPOSITORIES AIMING AT PERSONALIZING QUERIES WITH FOCUS ON ACCESSIBILITY

RAPHAEL GHELMAN 16 October 2006 (has links)
Nowadays e-learning is becoming more important as it makes possible the dissemination of knowledge and information through the internet in a faster and less costly way. Consequently, in order to filter what is most relevant and/or of interest to the user, personalization architectures and techniques have been proposed. Among the many existing possibilities for personalization, the one that deals with accessibility is becoming essential because it guarantees that a wide variety of users may access information according to their preferences and needs. Accessibility is not just about ensuring that disabled people can access information, although this is important and may be a legal requirement. It is also about ensuring that a wide variety of users and devices can all gain access to information, thereby maximizing the potential audience. This dissertation presents an extension of LORIS, an integration system of learning object repositories, describing the changes to its architecture to make it able to deal with accessibility and to recognize different versions of the same learning object, thus allowing a user to execute a query considering his or her preferences and needs. A prototype of the services described in the architecture was developed using web services and faceted navigation, as well as web, e-learning, and accessibility standards. The use of web services and standards aims at providing flexibility and interoperability, while faceted navigation, as implemented, allows the user to apply multiple filters to the query results without the need to resubmit the query.
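A sketch of the faceted-navigation behavior described above: results are fetched once, and toggling facets (here, accessibility metadata) filters them client-side without resubmitting the query. The field names are illustrative, not LORIS's actual metadata schema.

```python
results = [  # learning objects with accessibility metadata (illustrative fields)
    {"title": "Intro video", "format": "video", "captions": True, "lang": "en"},
    {"title": "Audio lecture", "format": "audio", "captions": False, "lang": "pt"},
    {"title": "Slide deck", "format": "text", "captions": True, "lang": "en"},
]

def apply_facets(items, **facets):
    """Intersect the active facet filters over an already-fetched result set."""
    return [it for it in items if all(it.get(k) == v for k, v in facets.items())]

# The user toggles two facets; no new query is sent to the repositories
print(apply_facets(results, captions=True, lang="en"))
```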
859

SweetDeal: Representing Agent Contracts With Exceptions using XML Rules, Ontologies, and Process Descriptions

GROSOF, BENJAMIN, POON, TERRENCE C. 16 September 2003 (has links)
SweetDeal is a rule-based approach to representation of business contracts that enables software agents to create, evaluate, negotiate, and execute contracts with substantial automation and modularity. It builds upon the situated courteous logic programs knowledge representation in RuleML, the emerging standard for Semantic Web XML rules. Here, we newly extend the SweetDeal approach by also incorporating process knowledge descriptions whose ontologies are represented in DAML+OIL (the close predecessor of W3C's OWL, the emerging standard for Semantic Web ontologies), thereby enabling more complex contracts with behavioral provisions, especially for handling exception conditions (e.g., late delivery or non-payment) that might arise during the execution of the contract. This provides a foundation for representing and automating deals about services – in particular, about Web Services, so as to help search, select, and compose them. We give a detailed application scenario of late delivery in manufacturing supply chain management (SCM). In doing so, we draw upon our new formalization of process ontology knowledge from the MIT Process Handbook, a large, previously-existing repository used by practical industrial process designers. Our system is the first to combine emerging Semantic Web standards for knowledge representation of rules (RuleML) with ontologies (DAML+OIL/OWL) with each other, and moreover for a practical e-business application domain, and further to do so with process knowledge. This also newly fleshes out the evolving concept of Semantic Web Services. A prototype (soon public) i
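A toy rendering of an exception-handling provision such as late delivery, written as plain Python rather than RuleML, just to convey the shape of the rule: when the observed event violates the agreed terms, a remedy obligation is derived. All values are illustrative.

```python
from datetime import date

# Contract provisions and an observed delivery event (illustrative values)
contract = {"promised_by": date(2003, 9, 1), "penalty_per_day": 50.0}
event = {"delivered_on": date(2003, 9, 5)}

def late_delivery_rule(contract, event):
    """IF delivered_on > promised_by THEN derive a penalty obligation."""
    days_late = (event["delivered_on"] - contract["promised_by"]).days
    if days_late > 0:
        return {"exception": "late_delivery",
                "penalty": days_late * contract["penalty_per_day"]}
    return None

print(late_delivery_rule(contract, event))  # 4 days late -> penalty of 200.0
```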
860

Enabling One-Phase Commit (1PC) Protocol for Web Service Atomic Transaction (WS-AT)

Rana, Chirag N. 01 January 2014 (has links)
Business transactions (a.k.a. business conversations) are series of message exchanges that occur between software applications coordinating to achieve a business objective. Web services have proven to be a promising technology for supporting business transactions. A business transaction can be either long-running or short-lived. A transaction, whether in a database or a web-service paradigm, has an "all-or-nothing" property: it either succeeds or fails. Web Service Atomic Transaction (WS-AT) is a specification, developed by OASIS (a standards development organization), that currently supports the Two-Phase Commit (2PC) protocol for short-lived transactions. However, not all business process scenarios require 2PC; in some cases a One-Phase Commit (1PC) is sufficient. Unfortunately, WS-AT does not currently support the 1PC optimization. The ideal scenario where 1PC can be used instead of 2PC is when there is only a single participant. A short-lived transaction involving only one participant can commit without the initial "prepare" phase; there is thus no overhead in checking whether the participant is prepared to commit or roll back. This research focuses on designing a mechanism that adds 1PC support to WS-AT. The technical implementation of this mechanism is developed using the JBoss Transaction API. As part of this thesis, the 1PC mechanism for the single-participant scenario was implemented. This mechanism reduces coordination overhead and improves performance in terms of execution time. The implementation was evaluated using three different business process scenarios in a controlled experiment as a presence-or-absence test. Evaluation results show that the 1PC mechanism has a lower mean execution time and performed significantly better than the 2PC mechanism. Based on the contributions made by this thesis, we recommend that OASIS consider including the 1PC mechanism as part of the WS-AT specification.
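A sketch of the optimization in coordinator form (greatly simplified; the thesis implements it against the JBoss Transaction API): with exactly one participant, the prepare round trip is skipped and the participant commits directly.

```python
class Participant:
    def prepare(self) -> bool:  # vote in 2PC's first phase
        return True

    def commit(self):
        print("committed")

    def rollback(self):
        print("rolled back")

def complete(participants):
    if len(participants) == 1:
        # 1PC: a single participant decides alone -- no prepare round trip
        participants[0].commit()
        return
    # 2PC: collect votes, then commit only if all participants are prepared
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()
    else:
        for p in participants:
            p.rollback()

complete([Participant()])  # takes the one-phase path
```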
