About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
511

Uppe bland molnen : Hur affärssystemleverantörer hanterar de huvudsakliga riskerna med Cloud Computing. / In the clouds : how ERP providers manage the main risks with Cloud Computing.

Johansson, Tina, Bergström, Johannes January 2012 (has links)
The IT world is constantly changing; a solution that was viable ten years ago may now be obsolete. One alternative that has grown within the IT world in recent years is Cloud Computing. Moving part of, or the entire, ERP system to a cloud-based service delivered over the internet is fully possible today. This can lead to cost reductions compared with a traditional ERP system, since the company does not need to spend money on its own storage servers. It is also flexible in the sense that the company can decide for itself how much storage space it wants. At the same time, research has shown that cloud-based services carry a number of risks, for example how performance is affected by using a service located on the internet rather than hosted internally at the company. There is also uncertainty about how confidential information is stored. Although there is much to indicate that Cloud Computing is perceived as risky, it has not been established which risks are the most worrying, and it also remains unclear how ERP providers manage the risks of cloud-based services. The purpose of this thesis is therefore to identify the main risks of Cloud Computing and to let ERP providers describe how they manage them. The starting point has been existing literature and studies, used to build the theoretical frame of reference, in which the main risks of Cloud Computing were also identified. These risks then formed the basis for the interview questions used to collect the empirical material, through interviews with five ERP providers who explained how they handle the main risks of Cloud Computing. The answers were documented and related to each other to create an overall picture of how the risks of Cloud Computing are managed. The results of the study show that Cloud Computing comprises different service models: Software as a Service, Platform as a Service and Infrastructure as a Service. Within Cloud Computing there are security aspects to consider at several levels: the network, host and application levels. Companies can also use a Service Level Agreement when introducing a cloud service, an agreement that regulates the obligations of both the customer and the provider when the cloud service is used. Five main risks of Cloud Computing were identified: confidentiality and integrity, legal requirements, availability, control and storage, and vendor dependence. In the empirical study, five ERP providers answered how these risks are handled; the ways they manage them show both similarities and differences between the providers. / Program: Dataekonomutbildningen
512

Leverans av IT-system : Tentativa faktorer som påverkar valet / Delivery of IT systems : Tentative factors affecting the choice

Berggren, Theresé January 2012 (has links)
Cloud computing is discussed everywhere, yet there is still no shared view of what it actually is. If ten specialists in the field were asked what cloud computing means, ten different answers would be given. A definition of "cloud" on its own does not carry the same meaning as the term cloud computing: the word cloud is a common metaphor for the Internet, but when it is combined with computing the meaning becomes broader and less distinct. In a stricter sense, the term cloud refers to the use of computing resources online rather than across the entire Internet. Specialists in the field agree that cloud computing means that software, data storage and processing power are made available over the Internet. With the new cloud computing trend there is no longer a data center on the company's own premises, as there is in traditional computing. Traditional computing is not a commonly used term, but now that cloud computing is a current topic of discussion, some other form of computing is needed as a point of comparison in order to understand cloud computing better. Traditional computing means that companies own and run their applications on their own infrastructure: the vendor installs the software locally on the customer's machines, and the customer's data is stored on site. Many discussions focus on how users perceive these models rather than on how vendors in the two areas perceive them. Since customers experience many advantages and disadvantages in both areas, it was interesting to study how vendors experience delivering services via cloud computing and via traditional computing. A theoretical study was first carried out to gain an understanding of the two areas and to support the subsequent empirical study. The empirical study was, in addition, a basis for interpreting and understanding which factors systems development companies believe vendors should pay attention to. The empirical material was collected through qualitative interviews and the theoretical material through literature studies. Based on this study, the researcher is able to present the tentative factors an IT vendor should consider when choosing a delivery model. / Program: Dataekonomutbildningen
513

Cloud Computing Adoption in Iran as a Developing Country : A Tentative Framework Based on Experiences from Iran.

Mousavi Shoshtari, Seyed Farid January 2013 (has links)
Employing the right technology in an organisation can provide major competitive advantages. Not only organisations but, at a higher level, governments are looking for new technologies to enhance their services while minimising costs. Although there may be no precise definition of cloud computing, the tremendous advantages and benefits of this new technology have turned cloud computing into one of the hottest topics in Information Technology. The remarkable economic effects of cloud computing have already stimulated developed countries to deploy this technology at a national level. Nonetheless, the adoption of cloud computing can transform the workflow of an organisation. Therefore, to ensure a smooth transition with minimal disruption, preparations need to be made and a clear road map has to be followed. However, the approach to the cloud adoption process in developing countries can be entirely different. While it has been pointed out that cloud computing can bring more advantages to developing countries, its adoption can be profoundly challenging. Consequently, a set of fundamental yet vital preparations is required to facilitate the process of cloud adoption. Moreover, a definite framework formed on the basis of the current state of the country is absolutely necessary. In this research, we focus on the process of cloud adoption in Iran as a developing country. We start by providing a comprehensive background on cloud computing, studying its aspects, features, advantages and disadvantages, and continue by identifying the vital cloud readiness criteria. Next, we conduct an empirical study to assess the state of cloud readiness in Iran through interviews, observations and discussions. Finally, after analysing the data from the empirical study, we present our results in the form of a clear and definitive framework for cloud adoption in Iran. / Program: Masterutbildning i Informatik
514

Design and Optimization of Mobile Cloud Computing Systems with Networked Virtual Platforms

Jung, Young Hoon January 2016 (has links)
A Mobile Cloud Computing (MCC) system is a cloud-based system that is accessed by users through their own mobile devices. MCC systems are emerging as the product of two technology trends: 1) the migration of personal computing from desktop to mobile devices and 2) the growing integration of large-scale computing environments into cloud systems. Designers are developing a variety of new mobile cloud computing systems. Each of these systems is developed with different goals and under the influence of different design constraints, such as high network latency or limited energy supply. Current MCC systems rely heavily on Computation Offloading, which, however, incurs new problems such as scalability of the cloud, privacy concerns due to storing personal information on the cloud, and high energy consumption in cloud data centers. In this dissertation, I address these problems by exploring different options in the distribution of computation across different computing nodes in MCC systems. My thesis is that "the use of design and simulation tools optimized for design space exploration of the MCC systems is the key to optimize the distribution of computation in MCC." For a quantitative analysis of mobile cloud computing systems through design space exploration, I have developed netShip, the first generation of an innovative design and simulation tool that offers high scalability and heterogeneity support. With this tool, system designers and software programmers can efficiently develop, optimize, and validate large-scale, heterogeneous MCC systems. I have enhanced netShip to support the development of ever-evolving MCC applications with a variety of emerging needs, including the fast simulation of new devices, e.g., Internet-of-Things devices, and accelerators, e.g., mobile GPUs. Leveraging netShip, I developed three new MCC systems in which I applied three variations of a new computation-distribution technique called Reverse Offloading. By more actively leveraging the computational power of mobile devices, the MCC systems can reduce total execution times, the burden of concentrated computation on the cloud, and the privacy concerns about storing personal information in the cloud. This approach also creates opportunities for new services by utilizing the information available on the mobile device instead of accessing the cloud. Throughout my research, I have enabled the design optimization of mobile applications and cloud-computing platforms. In particular, my design tool for MCC systems becomes a vehicle to optimize not only performance but also energy dissipation, an aspect of critical importance for any computing system.
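
As a rough illustration of the reverse-offloading idea described above (keep work on the mobile device unless the cloud is clearly faster once network latency and data transfer are counted), a placement decision might look like the sketch below. This is not netShip code and not the dissertation's actual policy; the device and cloud speeds, uplink rate and round-trip time are hypothetical placeholders.

```python
# Illustrative sketch only: decide whether a task should run on the mobile device
# or be offloaded to the cloud by comparing rough completion-time estimates.
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float        # CPU work, in megacycles
    payload_mb: float    # data that must be uploaded if the task is offloaded

def completion_time_local(task: Task, device_mips: float) -> float:
    return task.cycles / device_mips

def completion_time_cloud(task: Task, cloud_mips: float,
                          uplink_mbps: float, rtt_s: float) -> float:
    transfer_s = task.payload_mb * 8 / uplink_mbps      # upload time
    return rtt_s + transfer_s + task.cycles / cloud_mips

def place_task(task: Task) -> str:
    """Reverse offloading in spirit: keep the task on the device unless the
    cloud is clearly faster despite latency and transfer cost."""
    local = completion_time_local(task, device_mips=2_000)
    remote = completion_time_cloud(task, cloud_mips=20_000,
                                   uplink_mbps=10, rtt_s=0.12)
    return "device" if local <= remote else "cloud"

print(place_task(Task(cycles=1_500, payload_mb=8)))   # small job -> "device"
print(place_task(Task(cycles=80_000, payload_mb=2)))  # heavy job -> "cloud"
```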
515

On SIP Server Clusters and the Migration to Cloud Computing Platforms

Kim, Jong Yul January 2016 (has links)
This thesis looks in depth at telephony server clusters, the modern switchboards at the core of a packet-based telephony service. The most widely used de facto standard protocols for telecommunications are the Session Initiation Protocol (SIP) and the Real-time Transport Protocol (RTP). SIP is a signaling protocol used to establish, maintain, and tear down a communication channel between two or more parties. RTP is a media delivery protocol that allows packets to carry digitized voice, video, or text. SIP telephony server clusters that provide communications services, such as an emergency calling service, must be scalable and highly available. We evaluate existing commercial and open source telephony server clusters to see how they differ in scalability and high availability. We also investigate how a scalable SIP server cluster can be built on a cloud computing platform. Elasticity of resources is an attractive property for SIP server clusters because it allows the cluster to grow or shrink organically based on traffic load. However, simply deploying existing clusters on cloud computing platforms is not enough to take full advantage of elasticity. We explore the design and implementation of clusters that scale in real time. The database tier of our cluster was modified to use a scalable key-value store so that the SIP proxy tier and the database tier can scale separately. Load monitoring and reactive threshold-based scaling logic are presented and evaluated. Server clusters also need to reduce processing latency. Otherwise, subscribers experience low quality of service, such as delayed call establishment, dropped calls, and inadequate media quality. Cloud computing platforms do not guarantee latency on virtual machines due to resource contention on the same physical host. These extra latencies from resource contention are temporary in nature. Therefore, we propose and evaluate a mechanism that temporarily distributes more incoming calls to responsive SIP proxies, based on measurements of the processing delay in the proxies. Availability of SIP server clusters is also a challenge on platforms where a node may fail at any time. We investigated how single component failures in a cluster can lead to a complete system outage. We found that for single component failures, simply having redundant components of the same type is enough to mask those failures. However, for client-facing components, smarter clients and DNS resolvers are necessary. Throughout the thesis, a prototype SIP proxy cluster is re-used, with variations in architecture or configuration, to demonstrate and address the issues mentioned above. This allows us to tie all of our approaches for different issues into one coherent system that is dynamically scalable, is responsive despite latency variations of virtual machines, and is tolerant of single component failures in cloud platforms.
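
The reactive threshold-based scaling and the latency-aware call distribution summarized above can be outlined roughly as follows. This is an illustrative sketch only, not the thesis prototype; the thresholds, proxy names and delay figures are assumptions.

```python
# Hedged sketch of two ideas from the abstract: (1) reactive threshold-based
# scaling of a SIP proxy tier and (2) weighting new calls toward proxies with
# lower measured processing delay. All numbers are illustrative.
import random

def scaling_decision(cpu_utilisations, low=0.3, high=0.7):
    """Grow the proxy tier when average load is high, shrink it when low."""
    avg = sum(cpu_utilisations) / len(cpu_utilisations)
    if avg > high:
        return "add_proxy"
    if avg < low and len(cpu_utilisations) > 1:
        return "remove_proxy"
    return "steady"

def pick_proxy(delays_ms):
    """Send the next call preferentially to responsive proxies: each proxy's
    weight is the inverse of its recently measured processing delay."""
    weights = {p: 1.0 / d for p, d in delays_ms.items()}
    total = sum(weights.values())
    r, acc = random.uniform(0, total), 0.0
    for proxy, w in weights.items():
        acc += w
        if r <= acc:
            return proxy
    return proxy  # guard against floating-point rounding at the upper edge

print(scaling_decision([0.82, 0.75, 0.90]))                    # -> add_proxy
print(pick_proxy({"proxy-a": 12.0, "proxy-b": 45.0, "proxy-c": 20.0}))
```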
516

Essays on Cloud Pricing and Causal Inference

Kilcioglu, Cinar January 2016 (has links)
In this thesis, we study the economics and operations of cloud computing, and we propose new matching methods for observational studies that enable us to estimate the effect of green building practices on market rents. In the first part, we study a stylized revenue maximization problem for a provider of cloud computing services, where the service provider (SP) operates an infinite-capacity system in a market with customers who are heterogeneous in their valuation and congestion sensitivity. The SP offers two service options: one with guaranteed service availability, and one where users bid for resource availability and only the "winning" bids at any point in time get access to the service. We show that even though capacity is unlimited, in several settings, depending on the relation between valuation and congestion sensitivity, the revenue-maximizing service provider will choose to make the spot service option stochastically unavailable. This form of intentional service degradation is optimal in settings where user valuation per unit time increases sub-linearly with respect to congestion sensitivity (i.e., the disutility per unit time when the service is unavailable); this is a form of "damaged goods." We provide some data evidence based on the analysis of price traces from the biggest cloud service provider, Amazon Web Services. In the second part, we study competition on price and quality in cloud computing. The public "infrastructure as a service" cloud market possesses unique features that make it difficult to predict long-run economic behavior. On the one hand, major providers buy their hardware from the same manufacturers, operate in similar locations and offer a similar menu of products. On the other hand, the competitors use different proprietary "fabric" to manage virtualization, resource allocation and data transfer. The menus offered by each provider involve a discrete number of choices (virtual machine sizes) and allow providers to locate in different parts of the price-quality space. We document this differentiation empirically by running benchmarking tests, which allows us to calibrate a model of firm technology. Firm technology is an input into our theoretical model of price-quality competition. The monopoly case highlights the importance of competition in blocking a "bad equilibrium" in which performance is intentionally slowed down or options are unduly limited. In duopoly, price competition is fierce, but prices do not converge to the same level because of price-quality differentiation. The model helps explain market trends, such as the healthy operating profit margin recently reported by Amazon Web Services. Our empirically calibrated model helps explain not only price-cutting behavior but also how providers can maintain a profit despite predictions that the market "should be" totally commoditized. The backbone of cloud computing is datacenters, whose energy consumption is enormous. In recent years, there has been an extensive effort to make datacenters more energy efficient. Similarly, buildings are in the process of going "green", as they have a major impact on the environment through excessive use of resources. In the last part of this thesis, we revisit a previous study on the economics of environmentally sustainable buildings and estimate the effect of green building practices on market rents. For this, we use new matching methods that take advantage of the clustered structure of the buildings data.
We propose a general framework for matching in observational studies, and specific matching methods within this framework that simultaneously achieve three goals: (i) maximize the information content of a matched sample (and, in some cases, also minimize the variance of a difference-in-means effect estimator); (ii) form the matches using a flexible matching structure (such as a one-to-many/many-to-one structure); and (iii) directly attain covariate balance as specified by the investigator before matching. To our knowledge, existing matching methods are able to achieve at most two of these goals simultaneously. Also, unlike most matching methods, the proposed methods do not require estimation of the propensity score or other dimensionality reduction techniques, although with the proposed methods these can be used as additional balancing covariates in the context of (iii). Using these matching methods, we find that green buildings have 3.3% higher rental rates per square foot than otherwise similar buildings without green ratings, a moderately larger effect than the one previously found.
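
The spot option in the first essay, in which only the "winning" bids get access to the service at any point in time, can be illustrated with a toy market-clearing sketch. This is not the essay's revenue-maximization model, and it is not how any particular provider actually prices spot capacity; the bids, capacity and pricing rule are invented for illustration.

```python
# Toy spot market: a provider with k spot slots serves the k highest bids, and
# every admitted user pays the highest losing bid (a simple uniform clearing
# price). All numbers are invented.
def clear_spot_market(bids, capacity):
    ranked = sorted(bids, key=lambda b: b[1], reverse=True)   # (user, bid $/hr)
    winners, losers = ranked[:capacity], ranked[capacity:]
    clearing_price = losers[0][1] if losers else 0.0
    return [user for user, _ in winners], clearing_price

bids = [("u1", 0.12), ("u2", 0.07), ("u3", 0.30), ("u4", 0.05), ("u5", 0.11)]
winners, price = clear_spot_market(bids, capacity=3)
print(winners, price)   # ['u3', 'u1', 'u5'] 0.07
```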
517

O conhecimento e a pesquisa nas nuvens: uma pesquisa social sobre a aplicação das práticas de gestão do conhecimento associadas às tecnologias de computação em nuvem nos ambientes de pesquisas. / Knowledge and research in the clouds: a social survey on the implementation of knowledge management practices associated with cloud computing technologies in research environments.

Santos, Domingos Bernardo Gomes 17 March 2016 (has links)
Cloud computing technologies are becoming a trend in the IT industry. They are technologies that seek to make better use of the computing resources employed in the corporate environment and that have come to be adopted by universities and research institutions. Knowledge management is becoming a valuable strategic resource for companies and has been identified by scholars, researchers and scientists as relevant and indispensable for the growth of organizations in a wide range of fields. This study aimed to investigate the extent to which knowledge management practices, combined with cloud computing technologies, can contribute to the production of knowledge in research environments. The study was conducted as a social survey whose research instrument was a questionnaire applied to graduate students, researchers at the master's and doctoral levels, and professors in the computing and engineering areas of public universities in Brazil; the response rate obtained was 37.80%. The survey assessed the impacts of adopting knowledge management practices on the production of scientific knowledge. The knowledge management model of Nonaka and Takeuchi (1997) was used as a reference to identify and classify actions of socialization, externalization, combination and internalization of the scientific knowledge produced in the research environment. The conclusion is that adopting knowledge management practices can establish a knowledge-oriented organizational culture in which all actions that contribute to the production of scientific knowledge are valued. The survey also assessed how cloud computing technologies can support the development of activities related to scientific research. It was found that computing technologies have become indispensable, and most respondents reported that they use or have used cloud computing technologies in activities related to scientific research. The results suggest that adopting knowledge management practices together with cloud computing technologies can bring many benefits and contributions to research environments and, consequently, to the production of scientific knowledge. Finally, it is hoped that managers of research groups can use the information presented in this work to support the adoption of knowledge management practices and to encourage the use of the cloud computing technologies available in scientific research environments.
518

Oferecimento de QoS para computação em nuvens por meio de metaescalonamento / Providing QoS to cloud computing by means metascheduling

Peixoto, Maycon Leone Maciel 13 August 2012 (has links)
This project proposes a metascheduler architecture that takes quality of service (QoS) into account in a cloud computing environment. The metascheduler is capable of allocating resources dynamically while trying to meet timing constraints. In response to this scheduling problem in cloud computing, the project proposes an approach called MACC: Metascheduler Architecture to provide QoS in Cloud Computing. The main function of MACC is to distribute and manage the processing of service requests among the available resources while fulfilling the terms agreed in the SLA (Service Level Agreement). Results obtained with different routing algorithms and virtual machine allocation policies are presented, and they are discussed and analyzed using design-of-experiments techniques.
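
A minimal sketch of the kind of SLA-aware dispatching a metascheduler performs: each request is assigned to the resource expected to finish it earliest, and requests whose deadline cannot be met are flagged. This is not the MACC implementation; the resource capacities, request sizes and deadlines are hypothetical.

```python
# Hedged sketch, not MACC itself: earliest-finish-time dispatch with a simple
# SLA deadline check. Capacities (MIPS), request lengths and deadlines are
# illustrative placeholders.
def dispatch(requests, resources, now=0.0):
    busy_until = {name: now for name in resources}            # when each VM frees up
    plan = []
    for req, length_mi, deadline in requests:                 # length in million instructions
        finish = {name: busy_until[name] + length_mi / mips
                  for name, mips in resources.items()}
        best = min(finish, key=finish.get)                    # earliest finish time
        busy_until[best] = finish[best]
        plan.append((req, best, round(finish[best], 2), finish[best] <= deadline))
    return plan

resources = {"vm-small": 1_000, "vm-large": 4_000}            # MIPS
requests = [("req-1", 8_000, 5.0), ("req-2", 2_000, 2.5), ("req-3", 12_000, 4.0)]
for assignment in dispatch(requests, resources):
    print(assignment)    # (request, resource, finish time, SLA met?)
```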
519

Análise de desempenho de interfaces de rede virtualizadas com NAPI / Performance analysis of virtualized network interfaces with NAPI

Kuroda, Eduardo Hideo 26 November 2013 (has links)
In virtualized environments such as cloud computing, the effective network transmission capacity tends to be lower than in non-virtualized environments when network-intensive applications are executed. A major cause of this difference is the network virtualization architecture, which adds steps that the operating system must perform to transmit and receive a packet. These additional steps result in higher memory and CPU usage. In virtualized environments running the GNU/Linux operating system, the New Application Programming Interface (NAPI) is used to reduce the negative impacts of virtualization through interrupt coalescing. This dissertation studies mechanisms that modify the configuration of NAPI. Experiments show that these mechanisms affect the performance of virtual machines and have direct effects on network-intensive applications executed in environments using the Xen, VMware and VirtualBox virtualization software.
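
The interrupt coalescing that NAPI relies on can also be observed and adjusted from user space on GNU/Linux, which is one practical way to experiment with the trade-off measured in the dissertation. The sketch below is only a generic illustration: it is not the mechanism studied in the dissertation, the interface name is a placeholder, the values are arbitrary, and whether the ethtool coalescing parameters are supported depends on the NIC driver (changing them also requires root).

```python
# Hedged sketch: inspect and adjust coalescing-related knobs on a GNU/Linux host.
# Requires the sysctl and ethtool utilities; values are placeholders, not tuning advice.
import subprocess

IFACE = "eth0"   # hypothetical interface name

def run(cmd):
    print("$", " ".join(cmd))
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# Packet budget for each NAPI polling cycle in the kernel's softirq handler.
print(run(["sysctl", "net.core.netdev_budget"]))

# Current NIC interrupt-coalescing settings (microseconds / frames per interrupt).
print(run(["ethtool", "-c", IFACE]))

# Example change: batch more frames per interrupt, trading latency for throughput.
run(["ethtool", "-C", IFACE, "rx-usecs", "64", "rx-frames", "32"])
```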
520

Cost-efficient resource management for scientific workflows on the cloud

Pietri, Ilia January 2016 (has links)
Scientific workflows are used in many scientific fields to abstract complex computations (tasks) and the data or flow dependencies between them. High performance computing (HPC) systems have been widely used for the execution of scientific workflows. Cloud computing has gained popularity by offering users on-demand provisioning of resources and the ability to choose from a wide range of possible configurations. Resources are made available in the form of virtual machines (VMs), described by a set of resource characteristics, e.g. amount of CPU and memory. The notion of VMs enables the use of different resource combinations, which facilitates the deployment of applications and the management of resources. A problem that arises is determining the configuration, such as the number and type of resources, that leads to efficient resource provisioning. For example, allocating a large amount of resources may reduce application execution time, but at the expense of increased cost. This thesis investigates the challenges that arise in resource provisioning and task scheduling of scientific workflows and explores ways to address them, developing approaches that improve energy efficiency for scientific workflows and meet the user's objectives, e.g. makespan and monetary cost. The motivation stems from the wide range of options available for selecting cost-efficient configurations and improving resource utilisation. The contributions of this thesis are the following. (i) A survey of the issues arising in resource management in cloud computing, focusing on VM management, cost efficiency and the deployment of scientific workflows. (ii) A performance model to estimate the workflow execution time for different numbers of resources based on the workflow structure; the model can be used to estimate the respective user and energy costs in order to determine configurations that lead to efficient resource provisioning and achieve a balance between various conflicting goals. (iii) Two energy-aware scheduling algorithms that maximise the number of completed workflows from an ensemble under energy and budget or deadline constraints; the algorithms address the problem of energy-aware resource provisioning and scheduling for scientific workflow ensembles. (iv) An energy-aware algorithm that selects the frequency to be used for each workflow task in order to achieve energy savings without exceeding the workflow deadline; the algorithm takes into account the different requirements and constraints that arise depending on the workflow and system characteristics. (v) Two cost-based frequency selection algorithms that choose the CPU frequency for each provisioned resource in order to achieve cost-efficient resource configurations for the user and complete the workflow within the deadline; decision making is based on both the workflow characteristics and the pricing model of the provider.
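
Contributions (iv) and (v) revolve around choosing a CPU frequency that saves energy while still meeting a deadline: the runtime of a CPU-bound task roughly scales with 1/f while dynamic power grows super-linearly with f. The small worked sketch below illustrates that trade-off; the cubic power model, the frequency list and the task numbers are simplifying assumptions, not the models used in the thesis.

```python
# Illustrative DVFS-style frequency selection for one workflow task: pick the
# lowest frequency whose estimated runtime still meets the deadline, then compare
# energy against running at full speed. All models and numbers are assumptions.
FREQS_GHZ = [1.2, 1.6, 2.0, 2.4, 2.8]

def runtime_s(work_gcycles, f_ghz):
    return work_gcycles / f_ghz                 # runtime ~ 1/f for a CPU-bound task

def energy_j(work_gcycles, f_ghz, c=3.0):
    power_w = c * f_ghz ** 3                    # assumed dynamic power ~ f^3
    return power_w * runtime_s(work_gcycles, f_ghz)

def pick_frequency(work_gcycles, deadline_s):
    feasible = [f for f in FREQS_GHZ if runtime_s(work_gcycles, f) <= deadline_s]
    return min(feasible) if feasible else None  # slowest frequency that meets the deadline

work, deadline = 240.0, 150.0                   # gigacycles, seconds
f = pick_frequency(work, deadline)
print(f, runtime_s(work, f), energy_j(work, f))          # chosen frequency: 1.6 GHz
print(2.8, runtime_s(work, 2.8), energy_j(work, 2.8))    # full speed, for contrast
```

Under this simplified model energy scales with the square of the frequency, so the slowest frequency that still meets the deadline is also the least energy-hungry, which is the intuition behind the frequency-selection algorithms described above.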
