671

Microphysical Analysis and Modeling of Amazonic Deep Convection / Análise e Modelagem Microfísica da Convecção Profunda Amazônica

Basso, João Luiz Martins 16 July 2018
Atmospheric moist convection is one of the main topics in weather and climate research. The purpose of this study is to understand why different and similar cloud microphysics parameterizations produce different precipitation patterns at the ground, through several numerical sensitivity tests with the WRF model simulating a squall-line case observed over the Amazon region. Four bulk microphysics parameterizations (Lin, WSM6, Morrison, and Milbrandt) were tested, and the main results show that statistical errors do not change significantly among the schemes for any of the four numerical domains (from 27 km down to 1 km grids). The correlations between radar rainfall data and the simulated precipitation fields show that the double-moment Morrison scheme displayed the best overall results: while the Morrison scheme shows a correlation of 0.6 in the western box of the 1 km domain, the WSM6 and Lin schemes show 0.39 and 0.05, respectively. Because this scheme correlates well with the radar rain rates, it also shows a somewhat better system lifecycle, evolution, and propagation when compared to satellite data. Although the complexity with which microphysical variables are treated in the one-moment and double-moment schemes does not strongly affect the simulation results in this case study, three-dimensional vertical cross-sections show that the Purdue Lin and Morrison schemes produce more intense systems than the WSM6 and Milbrandt schemes, which may be associated with their different treatments of ice-phase microphysics. In the specific comparison between the double-moment schemes, the ice quantities generated by the Morrison and Milbrandt schemes strongly affected the system displacement and rainfall intensity. They also affect the intensity of the vertical velocities, which in turn changes the size of the cold pools. Differences in ice quantities were responsible for distinct amounts of total precipitable water, related to the vertically integrated ice mixing ratio generated by Morrison. The system moves faster in the Milbrandt scheme than in Morrison because Milbrandt generated more graupel, which is smaller than hail and therefore evaporates more easily inside the cloud. This also produced more intense, though horizontally smaller, cold pools in the Milbrandt scheme compared to Morrison.
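A minimal sketch of the grid-to-grid comparison behind the correlations quoted above (0.6, 0.39, 0.05): a Pearson correlation between a radar rainfall field and a simulated precipitation field on the same grid. The fields below are synthetic stand-ins; grid size and rain values are hypothetical.

```python
import numpy as np

def rainfall_correlation(radar: np.ndarray, simulated: np.ndarray) -> float:
    """Pearson correlation between a radar rainfall field and a
    simulated precipitation field sampled on the same grid."""
    r, s = radar.ravel(), simulated.ravel()
    # Ignore grid cells where either field is missing (NaN).
    mask = ~(np.isnan(r) | np.isnan(s))
    return float(np.corrcoef(r[mask], s[mask])[0, 1])

# Hypothetical example: two random 1 km fields standing in for the
# western analysis box of the innermost WRF domain.
rng = np.random.default_rng(0)
radar = rng.gamma(2.0, 3.0, size=(120, 120))           # mm/h
simulated = radar + rng.normal(0.0, 4.0, (120, 120))   # noisy "model" field
print(f"correlation: {rainfall_correlation(radar, simulated):.2f}")
```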
672

A comparison of image and object level annotation performance of image recognition cloud services and custom Convolutional Neural Network models

Nilsson, Kristian, Jönsson, Hans-Eric January 2019
Recent advancements in machine learning have contributed to explosive growth in the image recognition field. Simultaneously, multiple Information Technology (IT) service providers such as Google and Amazon have embraced cloud solutions and software as a service. These factors have helped many computer vision tasks mature from scientific curiosities into practical applications. As image recognition is now accessible to the general developer community, a need arises for a comparison of its capabilities and of what can be gained from choosing a cloud service over a custom implementation. This thesis empirically studies the performance of five general image recognition services (Google Cloud Vision, Microsoft Computer Vision, IBM Watson, Clarifai and Amazon Rekognition) and of custom Convolutional Neural Network (CNN) models that we configured and trained ourselves. Image- and object-level annotations of images extracted from different datasets were tested, both in their original state and after being subjected to one of six types of distortion: brightness, color, compression, contrast, blurriness and rotation. The output labels and confidence scores were compared to the ground truth at multiple levels of concepts, such as food, soup and clam chowder. The results show that, of the services tested, there is currently no clear top performer across all categories, and they all have variations and similarities in their output, but on average Google Cloud Vision performs best by a small margin. The services are all adept at identifying high-level concepts such as food and most mid-level ones such as soup. However, for further specifics, such as clam chowder, they start to vary, some performing better than others in different categories. Amazon was found to be the most capable at identifying multiple unique objects within the same image on the chosen dataset. Additionally, using synonyms of the ground-truth labels increased performance, as the semantic gap between our expectations and the actual output from the services was narrowed. The services all showed vulnerability to image distortions, especially compression, blurriness and rotation. The custom models all performed noticeably worse, roughly half as well as the cloud services, possibly due to the difference in training data standards. The best model, configured with three convolutional layers, 128 nodes and a layer density of two, reached an average performance of almost 0.2, or 20%. In conclusion, if one is limited by a lack of machine learning experience, computational resources or time, it is recommended to use one of the cloud services to reach a more acceptable performance level. Which one to choose depends on the intended application, as the services perform differently in certain categories. The services are all vulnerable to multiple image distortions, potentially allowing adversarial attacks. Finally, there is definitely room for improvement in the performance of these services and the computer vision field as a whole.
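A minimal sketch of how the best-performing custom model described above might be configured, assuming "three convolutional layers, 128 nodes and a layer density of two" means three 128-filter convolutional layers followed by two fully connected layers; the input shape and class count are hypothetical placeholders.

```python
import tensorflow as tf

NUM_CLASSES = 10  # hypothetical; depends on the chosen dataset

# Three conv layers with 128 filters each, then two dense layers.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(128, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```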
673

Strategic behavior and revenue management of cloud services with reservation-based preemption of customer instances

Chamberlain, Jonathan Daniel 04 June 2019
Cloud computing is a multi-billion-dollar industry based on outsourcing the provisioning and maintenance of computing resources. In particular, Infrastructure as a Service (IaaS) enables customers to purchase virtual machines in order to run arbitrary software. IaaS customers are given the option to purchase priority access, while providers choose whether customers are preempted based on priority level. The customer decision is based on their tolerance for preemption. However, this decision is a reaction to the provider's choice of preemption policy and the cost of purchasing priority. In this work, a non-cooperative game is developed for an IaaS system offering resource reservations. An unobservable $M|G|1$ queue with priorities is used to model customer arrivals and service. Customers receive a potential priority from the provider and choose between purchasing a reservation for that priority and accepting the lowest priority at no additional cost. Customers select the option that minimizes their total cost of waiting. This decision is based purely on statistics, as customers cannot communicate with each other. This work presents the impact of the provider's choice of preemption policy on the cost customers will pay for a reserved instance. A provider may implement a policy in which no customers are preempted (NP); a policy in which all customers are subject to preemption (PR); or a policy in which only the customers not making reservations are subject to preemption (HPR). It is shown that only the service load impacts the equilibrium possibilities under the NP and PR policies, but that the service variance is also a factor under the HPR policy. These factors affect the equilibrium possibilities associated with a given reservation cost. This work shows that the cost leading to a given equilibrium is greater under the HPR policy than under the NP or PR policies, implying a greater incentive to purchase reservations. From this it is proven that a provider maximizes their potential revenue from customer reservations under an HPR policy. This is shown to hold both in general and under the constraint that the reservation cost must correspond to a unique equilibrium.
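A minimal sketch of the textbook M/G/1 priority-delay formulas underlying such a model, for a two-class system (class 1 = reserved/high priority, class 2 = no reservation) under the NP and PR policies; the thesis's HPR policy mixes the two and is omitted here, and the workload numbers are hypothetical. Note that E[S²], and hence the service variance, enters both formulas through the residual-work term.

```python
def np_sojourn(lam, es, es2):
    """Mean sojourn times under non-preemptive (NP) priority.
    lam, es, es2: per-class arrival rates, E[S], E[S^2], class 1 first."""
    rho = [l * s for l, s in zip(lam, es)]
    r = sum(l * s2 / 2 for l, s2 in zip(lam, es2))  # mean residual work
    sigma, out = 0.0, []
    for k in range(len(lam)):
        w = r / ((1 - sigma) * (1 - sigma - rho[k]))
        sigma += rho[k]
        out.append(w + es[k])
    return out

def pr_sojourn(lam, es, es2):
    """Mean sojourn times under preemptive-resume (PR) priority."""
    rho = [l * s for l, s in zip(lam, es)]
    sigma_prev, r_k, out = 0.0, 0.0, []
    for k in range(len(lam)):
        r_k += lam[k] * es2[k] / 2          # residual work of classes <= k
        sigma_k = sigma_prev + rho[k]
        out.append((es[k] + r_k / (1 - sigma_k)) / (1 - sigma_prev))
        sigma_prev = sigma_k
    return out

# Hypothetical workload: exponential service with E[S]=1 (so E[S^2]=2).
lam, es, es2 = [0.3, 0.4], [1.0, 1.0], [2.0, 2.0]
print("NP:", np_sojourn(lam, es, es2))
print("PR:", pr_sojourn(lam, es, es2))
```

A customer's purchase decision can then be modeled as comparing the class-1 and class-2 delays against the reservation cost, which is how a given cost maps to an equilibrium.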
674

Performance Comparison Study of Clusters on Public Clouds / Prestandajämförelse av cluster på offentliga molnleverantörer

Wahlberg, Martin January 2019
As cloud computing has become an increasingly popular way to host clusters, multiple providers offer their services to the public, such as Amazon Web Services, Google Cloud Platform and Microsoft Azure. Choosing a cluster provider is not only a choice of provider; it is also an indirect choice of cluster infrastructure. This indirect choice of infrastructure makes it important to consider potential differences in cluster performance caused by the infrastructure in combination with the workload type, as well as the cost of the infrastructure across the available public cloud providers. To evaluate whether there are significant differences in cluster cost or performance between public cloud providers, a performance comparison study was conducted. The study consisted of multiple clusters hosted on Amazon Web Services and the Google Cloud Platform. The clusters had access to five different instance types, each corresponding to a specific number of available cores and amount of memory and storage. All clusters executed a CPU-intensive, an I/O-intensive, and a MapReduce workload while having their performance monitored with regard to CPU, memory, and disk usage. The study revealed significant performance differences between clusters hosted on Amazon Web Services and Google Cloud Platform for the chosen workload types. Since there are significant differences, it can be concluded that the choice of provider is crucial, as it impacts cluster performance. Comparing the selected instance types against each other with regard to performance and cost reveals that a subset of them offers both better performance and lower cost. The instance types outside this subset have either better performance or lower cost than their counterparts on the other provider.
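A minimal sketch of the cost/performance screening described above: keep only the instance types that no alternative beats on both hourly price and measured performance (a Pareto front). Instance names and numbers are hypothetical.

```python
def pareto_front(instances):
    """instances: dict name -> (cost_per_hour, performance_score).
    Returns names not dominated by any other instance, where lower
    cost and higher performance dominate."""
    front = []
    for a, (ca, pa) in instances.items():
        dominated = any(cb <= ca and pb >= pa and (cb, pb) != (ca, pa)
                        for b, (cb, pb) in instances.items() if b != a)
        if not dominated:
            front.append(a)
    return front

instances = {  # hypothetical benchmark results
    "aws-m5.xlarge": (0.192, 71.0),
    "gcp-n1-standard-4": (0.190, 69.5),
    "aws-c5.xlarge": (0.170, 74.0),
    "gcp-n1-highcpu-8": (0.284, 88.0),
}
print(pareto_front(instances))  # ['aws-c5.xlarge', 'gcp-n1-highcpu-8']
```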
675

Comparação de diferentes densidades de pontos em perfilamentos LiDAR aerotransportado para ambiente urbano regular. / Comparison of different densities of points in airborne LiDAR profiling for regular urban environment.

Paula, César Francisco de 11 May 2017
The use of the LiDAR system to acquire data on the Earth's surface has become increasingly widespread, owing to its high performance in information acquisition and to the effective use of its products and by-products. Several sectors have adopted LiDAR products as a basic, fundamental input in their work routines and studies. The success of a project involving the acquisition and use of this type of data is directly tied to its definition and planning. The user must be able to define the basic scope and the fundamental technical guidelines that will guarantee that the final demand is met. It is therefore necessary to prepare a plan establishing the best configuration for data acquisition, as well as the types of information to be extracted from these products. The first aspect can be considered the basis of the whole project: it guarantees that products are obtained according to the user's needs (spatial resolution of the products, level of detail of the objects, representation of the topography, and others). Many users who contract LiDAR profiling services lack the technical background to define the best specification to adopt, which leads most of them to acquire a high density of points that is often unnecessary and, even when it meets the final demand, makes the project financially costly. This research shows that, for a regular urban environment, point clouds with low densities (4 pts/m² and 8 pts/m²) are equivalent in geometric quality, for the products and by-products used in certain applications, to high-density clouds, so there is no need to use clouds with a high density (16 pts/m²) in projects that use these data in altimetric studies (generation of the Digital Terrain Model, contour lines, spot heights) or planimetric ones (Digital Elevation Model, Surface and Normalized, and its derivatives: object heights and outlines, intensity image, vegetation cover and others).
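A minimal sketch of the quantities compared above: the mean point density of a cloud, and a crude Digital Terrain Model obtained by gridding the lowest return per cell. The cloud here is synthetic (80,000 points over 100 m × 100 m, i.e. 8 pts/m²); real data would come from a LAS/LAZ reader.

```python
import numpy as np

def point_density(xy, area_m2):
    """Points per square metre over a surveyed area."""
    return len(xy) / area_m2

def grid_dtm(xyz, cell=1.0):
    """Minimum-z gridding: a crude ground-surface estimate."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    dtm = np.full((iy.max() + 1, ix.max() + 1), np.nan)
    for i, j, h in zip(iy, ix, z):
        if np.isnan(dtm[i, j]) or h < dtm[i, j]:
            dtm[i, j] = h
    return dtm

rng = np.random.default_rng(1)
pts = rng.uniform([0, 0, 640], [100, 100, 660], size=(80_000, 3))
print(f"density: {point_density(pts[:, :2], 100 * 100):.1f} pts/m²")
print(f"DTM shape: {grid_dtm(pts).shape}")
```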
676

Data stream mining in fog computing environment with feature selection using ensemble of swarm search algorithms

Ma, Bin Bin January 2018
University of Macau / Faculty of Science and Technology. / Department of Computer and Information Science
677

Investigating Abandonment Processes in the Cloud Forest: An Archaeological and Ethnoarchaeological study of Manteño site abandonment from Manabí, Ecuador.

Unknown Date
This thesis provides an analysis of Manteño site abandonment in the cloud forest of Manabí, Ecuador. First, the types, frequency, and distribution of artifacts at site C4-044 were recorded, mapped, and compared to soil phosphate levels to determine activity areas. The evidence obtained allowed me to make general approximations of the site's pre-abandonment behavior. The archaeological data, together with environmental and bioarchaeological information from the region, were then assessed to propose the mode of departure from site C4-044. Through ethnography and ethnoarchaeology, a recent historical account of abandonment in the cloud forest was also obtained, providing additional insight into adaptive strategies and behavioral choices in response to changing contextual circumstances. Taken together, this evidence shows a gradual mode of abandonment from site C4-044 in the cloud forest, one that was planned and executed accordingly. / Includes bibliography. / Thesis (M.A.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
678

Performance Optimization of a Service in Virtual and Non-Virtual Environment

Tamanampudi, Monica, Sannareddy, Mohith Kumar Reddy January 2019
In recent times, cloud computing has become an accessible technology that makes it possible to provide online services to end users through networks of remote servers. The increase in remote servers, and in the resources allocated to them, leads to degradation of service performance. In such a case, the environment in which a service runs plays a significant role in providing better performance and contributes to Quality of Service (QoS). This paper focuses on bare-metal and Linux container environments, with request response time as the performance metric used to determine QoS. To improve request response time, the platforms are customized using a real-time kernel and compiler optimization flags to optimize the performance of a service. UDP packets are served to the service running in these customized environments. The experiments conclude that bare metal with a real-time kernel and the level-3 compiler optimization flag gives the best performance for the service.
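A minimal sketch of the measurement described above: send UDP packets to a service and record request-response times. The host, port, payload size, and sample count are hypothetical, and the service is assumed to echo a reply.

```python
import socket
import statistics
import time

HOST, PORT = "127.0.0.1", 9000   # hypothetical service endpoint
PAYLOAD = b"x" * 64
SAMPLES = 1000

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(1.0)

rtts = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    sock.sendto(PAYLOAD, (HOST, PORT))
    try:
        sock.recvfrom(2048)  # assumes the service echoes a reply
        rtts.append((time.perf_counter() - start) * 1e3)  # ms
    except socket.timeout:
        pass  # lost packet; a real benchmark would count these separately

if len(rtts) > 1:
    p99 = statistics.quantiles(rtts, n=100)[98]
    print(f"mean {statistics.mean(rtts):.3f} ms, p99 {p99:.3f} ms")
```

Running the same client against the service hosted on bare metal and in a Linux container, with and without the kernel and compiler customizations, yields the response-time comparison the study describes.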
679

MPSF: cloud scheduling framework for distributed workflow execution. / MPSF: um arcabouço para escalonamento em computação em nuvem para execução distribuída de fluxos de trabalho.

Gonzalez, Nelson Mimura 16 December 2016
Cloud computing is a distributed computing paradigm that gained prominence due to its on-demand, elastic, and dynamic resource provisioning. These characteristics are highly desirable for the execution of workflows, in particular scientific workflows that require large amounts of computing resources and handle large-scale data. One of the main questions in this context is how to manage the resources of one or more cloud infrastructures to execute workflows while optimizing resource utilization and minimizing the total duration of task execution (makespan). The more complex the infrastructure and the tasks to be executed, the higher the risk of incorrectly estimating the amount of resources to be assigned to each task, leading to both performance and monetary costs. Inherently more complex scenarios, such as hybrid and multiclouds, are rarely considered by existing resource management solutions. Moreover, a thorough review of relevant related work revealed that most solutions do not address data-intensive workflows, a characteristic that is increasingly evident in modern scientific workflows. In this context, this work presents MPSF, the Multiphase Proactive Scheduling Framework, a cloud resource management solution based on multiple scheduling phases that continuously assess the system to optimize resource utilization and task distribution. MPSF defines models to describe and characterize workflows and resources, supporting both simple and complex scenarios such as hybrid and integrated clouds. MPSF also defines performance and reliability models to improve load distribution among nodes and to mitigate the effects of performance fluctuations and potential failures in the system. Finally, MPSF defines a framework and an architecture that integrate all these components into a solution that can be implemented and tested in real applications. Experimental results show that MPSF predicts the duration of workflows and workflow phases with much better accuracy and provides performance gains compared to greedy approaches, especially for tasks that demand high computational power and large amounts of data.
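A minimal sketch of the greedy baseline that MPSF is compared against: each task is assigned to the node that would complete it earliest, ignoring data transfer and performance fluctuations. Task costs and node speeds are hypothetical.

```python
def greedy_makespan(task_costs, node_speeds):
    """task_costs: work units per task; node_speeds: units per second.
    Returns the makespan of a minimum-completion-time greedy schedule."""
    finish = [0.0] * len(node_speeds)
    for cost in sorted(task_costs, reverse=True):  # largest tasks first
        # Pick the node where this task would finish earliest.
        i = min(range(len(finish)),
                key=lambda j: finish[j] + cost / node_speeds[j])
        finish[i] += cost / node_speeds[i]
    return max(finish)

tasks = [8.0, 5.0, 5.0, 3.0, 2.0, 2.0, 1.0]
nodes = [1.0, 1.0, 2.0]  # the third node is twice as fast
print(f"greedy makespan: {greedy_makespan(tasks, nodes):.2f} s")
```

A multiphase scheduler in the spirit of MPSF would additionally re-estimate node performance and task durations during execution and redistribute load accordingly, which is where the reported gains over this baseline come from.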
680

State-Of-The-Art on eHealth@home System Architectures

Heravi, Benjamin January 2019
With growing life expectancy and falling fertility rates, the demand for healthcare services is increasing day by day, which leads to higher medical care costs. Modern technology can play an important role in reducing these costs. In the new era of IoT, secure, fast, energy-efficient, and reliable connectivity is necessary to meet the demands of health services. New protocols such as IEEE 802.11ax and the fifth generation of mobile broadband have a revolutionary impact on wireless connectivity. At the same time, new technologies such as cloud computing and Closed Loop Medication Management open a new horizon in the medical environment. This thesis studies different eHealth@home architectures in terms of their wireless communication technologies and their data collection and data storage strategies. The functionality, benefits, and gaps of current remote health monitoring architectures are presented and discussed. Additionally, this thesis proposes solutions for integrating new wireless technologies for massive device connectivity, low end-to-end latency, high security, edge-computing mechanisms, Closed Loop Medication Management, and cloud services.
