301 |
Robotic Fabrication Workflows for Environmentally Driven Facades. Cabrera, Pablo Marcelo, 25 July 2019.
Even though computer simulation of environmental factors and manufacturing technologies have developed rapidly, architectural workflows that can take advantage of the possibilities created by these developments have lagged behind, and architectural design processes have not evolved at the same rate. This research presents design-to-fabrication workflows that explore data-driven design to improve facade performance, implementing computational tools to handle the complexity of environmental data and proposing robotic fabrication technologies to facilitate the fabrication of facade components.
During this research, three design experiments were conducted that tested variations on the design-to-fabrication workflow, approaching the flow of information through top-down and bottom-up processes. Independent variables such as material, environmental conditions, and structural behavior form the framework in which workflow instances are generated based on dependent variables such as geometry, orientation, and assembly logic. This research demonstrates the feasibility of a robot-based fabrication method informed by a multi-variable computational framework plus a simulation evaluator integrated into a design-to-fabrication workflow, and puts forward the discussion of a fully automated scenario. / Master of Science
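As a minimal illustration of the data-driven step such a workflow implies, the sketch below maps an environmental input (a hypothetical series of sun elevation angles) to an opening ratio for each facade component. The function name, the linear mapping, and the numbers are illustrative assumptions, not the computational framework developed in the thesis.

```python
import numpy as np

def panel_opening_ratios(sun_elevations_deg, min_open=0.2, max_open=0.9):
    """Map sun elevation angles (degrees) to per-panel opening ratios.

    Illustrative assumption: higher sun elevation -> more shading demand -> smaller opening.
    """
    elev = np.clip(np.asarray(sun_elevations_deg, dtype=float), 0.0, 90.0)
    shading_demand = elev / 90.0                      # 0 (low sun) .. 1 (zenith)
    return max_open - (max_open - min_open) * shading_demand

# Hypothetical hourly sun elevations for one facade bay
elevations = [10, 25, 40, 55, 70, 55, 40, 25, 10]
ratios = panel_opening_ratios(elevations)
print(np.round(ratios, 2))   # one opening ratio per time step, used to drive panel geometry
```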
|
302 |
Benchmarking measurement-based quantum computation on graph states. Qin, Zhangjie, 26 August 2024.
Measurement-based quantum computation is a form of quantum computing that operates on a prepared entangled graph state, typically a cluster state. In this dissertation, we will detail the creation of graph states across various physical platforms using different entangling gates. We will then benchmark the quality of graph states created with error-prone interactions through quantum wire teleportation experiments. By leveraging underlying symmetry, we will design graph states as measurement-based quantum error correction codes to protect against perturbations, such as ZZ crosstalk in quantum wire teleportation. Additionally, we will explore other measurement-based algorithms used for the quantum simulation of time evolution in fermionic systems, using the Kitaev model and the Hubbard model as examples. / Doctor of Philosophy / A quantum computer refers to a device that performs general computational functions relying on logic gates using units dominated by microscopic quantum properties. The fundamental difference between quantum computers and classical computers lies in the distinction between the basic quantum unit, the qubit, and the classical computational unit, the bit. Both qubits and bits can exist in states 0 and 1. However, qubits possess two characteristics that classical computational units do not: superposition and entanglement. Superposition allows a qubit to exist in a combination of both states 0 and 1 simultaneously. Entanglement refers to the phenomenon where qubits interact and form an inseparable unified state. The effective utilization of these unique properties enables quantum computers to exhibit capabilities far surpassing those of classical computers.
Analogous to classical computers, qubits can be interconnected in a circuit-like manner similar to classical bits, forming an architecture known as circuit-based quantum computation (CBQC). However, given the unique properties of quantum systems, particularly entanglement, a novel architecture called measurement-based quantum computing (MBQC) can also be designed. MBQC relies on pre-entangled graph states, usually cluster states, and only requires single-qubit measurements to implement quantum algorithms. The MBQC framework also includes a universal gate set, similar to other quantum computing architectures like CBQC. In this dissertation, we will introduce the creation of graph states and the implementation of measurement-based quantum algorithms.
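As a concrete illustration of the measurement-based primitive this dissertation builds on, the sketch below simulates one step of quantum-wire teleportation on a two-qubit cluster state with plain linear algebra: an input state is entangled with |+> via a CZ gate, qubit 1 is measured in the X basis, and the surviving qubit carries H|psi> up to a known Pauli-X byproduct. This is the standard textbook identity behind MBQC wires, not code from the dissertation.

```python
import numpy as np

# Single-qubit states and gates
ket0, ket1 = np.array([1, 0], complex), np.array([0, 1], complex)
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)
H = np.array([[1, 1], [1, -1]], complex) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], complex)
CZ = np.diag([1, 1, 1, -1]).astype(complex)

rng = np.random.default_rng(7)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                       # arbitrary input state on qubit 1

# Build the two-qubit cluster ("wire") state: CZ (|psi> tensor |+>)
state = CZ @ np.kron(psi, plus)

# Measure qubit 1 in the X basis; outcome m = 0 (|+>) or 1 (|->)
for m, basis in enumerate((plus, minus)):
    out = np.kron(basis.conj(), np.eye(2)) @ state   # project qubit 1, keep qubit 2
    out /= np.linalg.norm(out)
    expected = X @ H @ psi if m else H @ psi         # X^m H |psi> byproduct rule
    # The states agree up to a global phase
    assert abs(abs(np.vdot(expected, out)) - 1) < 1e-10
print("Qubit 2 carries X^m H|psi> after the X-basis measurement, as expected.")
```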
|
303 |
Power Analysis and Prediction for Heterogeneous Computation. Dutta, Bishwajit, 12 February 2018.
Power, performance, and cost dictate the procurement and operation of high-performance computing (HPC) systems. These systems use graphics processing units (GPUs) for a performance boost. In order to identify inexpensive-to-acquire and inexpensive-to-operate systems, it is important to systematically compare such systems with respect to power, performance, and energy characteristics on the end-use applications. Additionally, the chosen systems must often achieve performance objectives without exceeding their respective power budgets, a task that is usually borne by a software-based power management system. Accurately predicting the power consumption of an application at different DVFS (dynamic voltage and frequency scaling) levels, or more generally at different processor configurations, is paramount for the efficient functioning of such a management system.
This thesis applies state-of-the-art green computing research to optimize the total cost of acquisition and ownership of heterogeneous computing systems. To achieve this, we take a two-fold approach. First, we explore the issue of greener device selection by characterizing device power and performance. For this, we explore previously untapped opportunities arising from a special type of graphics processor, the low-power integrated GPU, which is commonly available in commodity systems. We compare the greenness (power, energy, and energy-delay product → EDP) of the integrated GPU against a CPU running at different frequencies for the specific application domain of scientific visualization. Second, we explore the problem of predicting the power consumption of a GPU at different DVFS states via machine-learning techniques. Specifically, we perform statistically rigorous experiments to uncover the strengths and weaknesses of eight different machine-learning techniques (namely, ZeroR, simple linear regression, KNN, bagging, random forest, SMO regression, decision tree, and neural networks) in predicting GPU power consumption at different frequencies. Our study shows that a support vector machine-aided regression model (i.e., SMO regression) achieves the highest accuracy, with a mean absolute error (MAE) of 4.5%. We also observe that the random forest method produces the most consistent results, with a reasonable overall MAE of 7.4%. Our results also show that different models operate best in distinct regions of the application space. We therefore develop a novel ensemble technique drawing on the best characteristics of the various algorithms, which reduces the MAE to 3.5% and the maximum error from 20% (for SMO regression) to 11%. / MS / High-performance computing (HPC) systems, or supercomputers, and data centers consume an immense amount of power. The power consumption of a supercomputer generally exceeds the capacity of a small power plant; in fact, turning on a supercomputer once caused a brown-out in a neighboring town. These systems increasingly use graphics processing units (GPUs) for a performance boost. For example, the November 2017 TOP500 supercomputer ranking has 101 GPU-accelerated systems. Power, performance, and cost dictate the procurement and operation of these systems. In this thesis, we apply state-of-the-art green computing research to optimize the total cost of acquisition and ownership of heterogeneous CPU-GPU computing systems.
In order to identify inexpensive-to-acquire and inexpensive-to-operate systems, it is important to systematically compare such systems with respect to power, performance, and energy characteristics on the end-use applications. Additionally, the chosen systems must often achieve performance objectives without exceeding their respective power budgets, a task that is usually borne by a software-based power management system. Accurately predicting the power consumption of an application at different processor configurations is paramount for the efficient functioning of such a management system.
To achieve this, we take a two-fold approach. First, we explore the issue of greener device selection by characterizing device power and performance. For this, we explore previously untapped opportunities arising from a special type of graphics processor, the low-power integrated GPU, which is commonly available in commodity systems. We compare the greenness (power, energy, and energy-delay product → EDP) of the integrated GPU against a CPU running at different frequencies for the specific application domain of scientific visualization. Second, we explore the problem of predicting the power consumption of a GPU at different configurations via machine-learning techniques. Specifically, we perform statistically rigorous experiments to uncover the strengths and weaknesses of different machine-learning techniques in predicting GPU power consumption at different frequencies. Our study shows that a support vector machine-aided regression model, which fits a complex curve, achieves the highest accuracy, with a mean absolute error of 4.5%. Our results also show that different models operate best in distinct regions of the application space. We therefore develop a novel ensemble technique drawing on the best characteristics of the various algorithms, which reduces the MAE to 3.5% and the maximum error from 20% to 11%.
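A minimal sketch of the kind of model comparison described above, using scikit-learn stand-ins (a random forest and an RBF support-vector regressor in place of Weka's SMO regression) on synthetic data. The feature names, the synthetic power model, and the train/test split are illustrative assumptions, not the thesis's dataset or toolchain.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 500
# Hypothetical per-kernel features: core frequency (MHz), memory utilization, occupancy
freq = rng.uniform(500, 1500, n)
mem_util = rng.uniform(0, 1, n)
occupancy = rng.uniform(0, 1, n)
# Synthetic "measured" GPU power (W): a made-up functional form plus noise
power = 30 + 0.05 * freq + 40 * mem_util * occupancy + rng.normal(0, 3, n)

X = np.column_stack([freq, mem_util, occupancy])
X_tr, X_te, y_tr, y_te = train_test_split(X, power, test_size=0.3, random_state=0)

models = {
    "linear": LinearRegression(),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "svr_rbf": SVR(C=100.0, epsilon=0.5),   # rough analogue of SMO regression
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    mae = mean_absolute_error(y_te, model.predict(X_te))
    print(f"{name:14s} MAE = {mae:.2f} W")
```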
|
304 |
Divergência populacional e expansão demográfica de Dendrocolaptes platyrostris (Aves: Dendrocolaptidae) no final do Quaternário / Population divergence and demographic expansion of Dendrocolaptes platyrostris (Aves: Dendrocolaptidae) in the late Quaternary. Campos Junior, Ricardo Fernandes, 29 October 2012.
Dendrocolaptes platyrostris é uma espécie de ave florestal associada às matas de galeria do corredor de vegetação aberta da América do Sul (D. p. intermedius) e à Floresta Atlântica (D. p. platyrostris). Em um trabalho anterior, foi observada estrutura genética populacional associada às subespécies, além de dois clados dentro da Floresta Atlântica e evidências de expansão na população do sul, o que é compatível com o modelo Carnaval-Moritz. Utilizando approximate Bayesian computation, o presente trabalho avaliou a diversidade genética de dois marcadores nucleares e um marcador mitocondrial dessa espécie com o objetivo de comparar os resultados obtidos anteriormente com os obtidos utilizando uma estratégia multi-locus e considerando variação coalescente. Os resultados obtidos sugerem uma relação de politomia entre as populações que se separaram durante o último período interglacial, mas expandiram após o último máximo glacial. Este resultado é consistente com o modelo de Carnaval-Moritz, o qual sugere que as populações sofreram alterações demográficas devido às alterações climáticas ocorridas nestes períodos. Trabalhos futuros incluindo outros marcadores e modelos que incluam estabilidade em algumas populações e expansão em outras são necessários para avaliar o presente resultado. / Dendrocolaptes platyrostris is a forest specialist bird associated with gallery forests of the open vegetation corridor of South America (D. p. intermedius) and with the Atlantic forest (D. p. platyrostris). A previous study showed population genetic structure associated with the subspecies, two clades within the Atlantic forest, and evidence of population expansion in the south, which is compatible with the Carnaval-Moritz model. The present study evaluated the genetic diversity of two nuclear markers and one mitochondrial marker of this species using approximate Bayesian computation, in order to compare the results previously obtained with those based on a multi-locus strategy and considering the coalescent variation. The results suggest a polytomic relationship among the populations that split during the last interglacial period and expanded after the last glacial maximum. This result is consistent with the Carnaval-Moritz model, which suggests that populations have undergone demographic changes due to the climatic changes that occurred in these periods. Future studies including other markers and models that include stability in some populations and expansion in others are needed to evaluate the present result.
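A toy sketch of the ABC rejection logic used for this kind of demographic inference: simulate a summary statistic under two competing demographic models (constant size vs. post-glacial expansion), keep the simulations closest to the "observed" value, and read an approximate model posterior from the accepted draws. The coalescent is reduced here to a crude Poisson stand-in for the number of segregating sites, so this illustrates only the ABC machinery, not the actual multi-locus analysis.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_segregating_sites(theta, expansion, n_seq=20):
    """Toy stand-in for a coalescent simulator: draws a single summary
    statistic (number of segregating sites). 'expansion' inflates the
    expectation to mimic a larger recent population; purely illustrative."""
    a_n = sum(1.0 / i for i in range(1, n_seq))      # Watterson's constant
    scale = 2.0 if expansion else 1.0
    return rng.poisson(theta * a_n * scale)

s_obs = 35                                           # hypothetical observed value
n_sims, tol = 100_000, 2
accepted_models = []
for _ in range(n_sims):
    model = rng.integers(2)                          # 0: constant, 1: expansion (prior 1/2 each)
    theta = rng.uniform(1.0, 15.0)                   # prior on the mutation parameter
    s_sim = simulate_segregating_sites(theta, expansion=bool(model))
    if abs(s_sim - s_obs) <= tol:                    # rejection step
        accepted_models.append(model)

accepted_models = np.array(accepted_models)
print("approximate P(expansion | data) =", accepted_models.mean().round(3))
```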
|
305 |
Consumo de energia em dispositivos móveis Android: análise das estratégias de comunicação utilizadas em Computation Offloading / Energy consumption on Android mobile devices: an analysis of the communication strategies used in Computation Offloading. Chamas, Carolina Luiza, 14 December 2017.
Os dispositivos móveis passaram por grandes transformações na última década e tornaram-se complexos computadores dotados de grande poder de processamento e memória, além de prover aos usuários diversos recursos como sensores e câmeras de alta resolução. O uso de dispositivos móveis para diversas tarefas aumentou consideravelmente, o que levantou uma grande preocupação com o alto consumo de energia desses dispositivos. Portanto, estudos têm sido realizados no sentido de encontrar soluções para diminuir o custo de energia das aplicações que executam em dispositivos móveis. Uma das alternativas mais utilizadas é o computation offloading, cujo objetivo é transferir a execução de uma tarefa para uma plataforma externa com o intuito de aumentar desempenho e reduzir consumo de recursos, como a bateria, por exemplo. Decidir sobre usar ou não esta técnica implica entender a influência de fatores como a quantidade de dados processados, a quantidade de computação envolvida e o perfil da rede. Muitos estudos têm sido realizados para estudar a influência de diversas opções de rede wireless, como 3G, 4G e Wi-Fi, mas nenhum estudo investigou a influência das escolhas de comunicação no custo de energia. Portanto, o objetivo deste trabalho é apresentar uma investigação sobre a influência da quantidade de dados, da quantidade de computação e dos protocolos de comunicação ou estilo arquitetural no consumo de energia quando a técnica de computation offloading é utilizada. Neste estudo, foram comparados REST, SOAP, Socket e RPC na execução de algoritmos de ordenação de diferentes complexidades aplicados sobre vetores de diversos tamanhos e tipos de dados. Os resultados mostram que a execução local é mais econômica com algoritmos menos complexos, pequeno tamanho de entrada e tipos de dados menos complexos. Quando se trata de execução remota, o REST é a escolha mais econômica, seguida por Socket. Em geral, REST é mais econômico com vetores do tipo Object, independentemente da complexidade do algoritmo e do tamanho do vetor, enquanto Socket é mais econômico com entradas maiores e com vetores de tipos primitivos, como Int e Float. / Mobile devices have changed significantly in the last decade and have become complex computers equipped with large processing power and memory. Moreover, they provide users with several resources such as sensors and high-resolution cameras. The usage of mobile devices has increased significantly in the past years, which raised an important concern regarding their high energy consumption. Therefore, several investigations have been conducted aiming at finding solutions to reduce the energy cost of mobile applications. One of the most widely used strategies is called computation offloading, whose main goal is to transfer the execution of a task to an external platform, aiming at increasing performance and reducing resource consumption, including the battery. Deciding whether to offload certain tasks requires understanding the influence of the amount of data, the amount of computation, and the network profile. Several studies have investigated the influence of different wireless options, such as 3G, 4G, and Wi-Fi, but no study has investigated the influence of the communication choices on the energy cost. Therefore, the purpose of this research project is to present an investigation of the influence of the amount of data, the amount of computation, and the communication protocols and architectural style on energy consumption in the context of the computation offloading technique.
In this study, we compare REST, SOAP, Socket and RPC when executing sorting algorithms of different complexities with different input sizes and types. Results show that local execution is more economical with less complex algorithms, small inputs, and less complex data types. When it comes to remote execution, REST is the most economical choice, followed by Socket. In general, REST is the most economical choice when applied to Object-type arrays, regardless of the complexity of the algorithm and the size of the array, while Socket is the most economical choice with larger inputs and arrays of primitive types such as integers and floats.
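A back-of-the-envelope sketch of the offloading decision the abstract describes: compare an estimate of the energy to execute the task locally against the energy to ship the input over the network and wait for the remote result. The power figures, bandwidth, and payload model are illustrative assumptions, not measurements from the study.

```python
def local_energy_j(cpu_power_w, local_runtime_s):
    """Energy to run the task on the device."""
    return cpu_power_w * local_runtime_s

def offload_energy_j(payload_bytes, bandwidth_bps, radio_power_w,
                     idle_power_w, remote_runtime_s):
    """Energy to serialize/transfer the input (e.g., over REST or a raw socket)
    plus idle energy while waiting for the remote result."""
    transfer_s = 8 * payload_bytes / bandwidth_bps
    return radio_power_w * transfer_s + idle_power_w * remote_runtime_s

# Hypothetical numbers for sorting a large array remotely
e_local = local_energy_j(cpu_power_w=2.5, local_runtime_s=4.0)
e_remote = offload_energy_j(payload_bytes=5_000_000, bandwidth_bps=20_000_000,
                            radio_power_w=1.2, idle_power_w=0.3, remote_runtime_s=1.5)
print(f"local:  {e_local:.2f} J")
print(f"remote: {e_remote:.2f} J")
print("offload" if e_remote < e_local else "run locally")
```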
|
306 |
Aplicações do approximate Bayesian computation a controle de qualidade / Applications of approximate Bayesian computation in quality control. Campos, Thiago Feitosa, 11 June 2015.
Neste trabalho apresentaremos dois problemas do contexto de controle estatístico da qualidade: monitoramento "on-line" de qualidade e environmental stress screening, analisados pela óptica bayesiana. Apresentaremos os problemas dos modelos bayesianos relativos à sua aplicação e os reanalisamos com o auxílio do ABC, o que nos fornece resultados de uma maneira mais rápida e assim possibilita análises diferenciadas e a previsão de novas observações. / In this work we will present two problems from the context of statistical quality control: on-line quality monitoring and environmental stress screening, analyzed from the Bayesian perspective. We will present the problems of the Bayesian models related to their application, and we reanalyze them with the assistance of ABC methods, which provide results faster and thus enable differentiated analyses and the forecasting of new observations.
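A minimal sketch of the ABC rejection idea applied to a quality-control-flavoured toy problem: estimating a process's nonconforming fraction p from the run lengths between defective items. The geometric data model, the prior, and the tolerance are illustrative assumptions, not the monitoring or stress-screening models analysed in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Observed" run lengths between nonconforming items (hypothetical data,
# generated here from a geometric law with p = 0.04)
p_true = 0.04
obs = rng.geometric(p_true, size=50)
obs_mean = obs.mean()

n_sims, tol = 200_000, 1.0
accepted = []
for _ in range(n_sims):
    p = rng.uniform(0.001, 0.2)             # prior on the nonconforming fraction
    sim = rng.geometric(p, size=obs.size)   # simulate run lengths under p
    if abs(sim.mean() - obs_mean) <= tol:   # compare summary statistics
        accepted.append(p)

accepted = np.array(accepted)
print(f"ABC posterior mean for p: {accepted.mean():.3f}")
print("approx. 95% credible interval:", np.percentile(accepted, [2.5, 97.5]).round(3))
```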
|
309 |
Proposta de uma infraestrutura de baixo custo com multiprocessamento e utilizando software aberto / A proposal for a low-cost infrastructure with multiprocessing using open-source software. Silva, Everaldo Lopes, 16 May 2012.
This dissertation aims to identify the technical and theoretical aspects involved in the use of computer clusters, focusing especially on platforms running the Linux operating system. Some Linux cluster models are presented, their advantages and disadvantages are discussed, and the chosen model is indicated with the corresponding justification. As part of this work, we propose a laboratory with a cluster of two machines, each connected through two gigabit network interfaces, plus one computer working stand-alone. Artificial Intelligence and Digital Design programs are run on this cluster, and their performance is compared with that of a single computer running the same programs. The measurements and analysis indicate whether a Linux cluster would be a feasible infrastructure, in technical and financial terms, for AI and Digital Design applications.
The research method is naturally experimental and the reasoning is inductive: through the results of the experiments and the technical analysis, the knowledge obtained can be applied to other, similar environments.
To put the experimental activity in its proper context, the most significant contemporary research theories are used to establish clearly the scientific approach that guides the work as a whole. / Esta dissertação visa identificar os aspectos técnicos e teóricos que envolvem a utilização de cluster de computadores, tratando especialmente de plataformas com o sistema operacional Linux. Serão apresentados alguns modelos de cluster em Linux, reconhecendo suas vantagens e desvantagens e por fim indicando o modelo escolhido com a devida justificativa. Como parte do trabalho, proporemos um laboratório com um agrupamento de dois equipamentos conectados com duas interfaces de rede gigabit ethernet em cada um e um computador trabalhando isoladamente. Executaremos programas de Inteligência Artificial e Design Digital nesse cluster e compararemos o seu desempenho com apenas um computador executando esses mesmos programas. As medições e análise servirão como base para análise para a verificação se um cluster de Linux seria uma infraestrutura viável em termos técnicos e financeiros para aplicações de Inteligência Artificial e Design Digital.
O método de pesquisa será naturalmente a pesquisa experimental e o método de abordagem será indutivo, pois através dos resultados da experimentação e da análise técnica se poderá aplicar o conhecimento obtido em situações semelhantes.
Para contextualizar a atividade experimental, abordaremos as teorias de pesquisa mais significativas e contemporâneas para que se estabeleça de maneira clara a abordagem científica que norteará o trabalho como um todo.
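A minimal sketch of the kind of benchmark the proposed laboratory implies, using mpi4py to split a numerical workload across the two machines and timing it against a single process. The hostfile, process counts, and the Monte Carlo workload are illustrative assumptions, not the AI and Digital Design programs used in the dissertation.

```python
# Hypothetical cluster run: mpirun --hostfile hosts -np 4 python pi_bench.py
# Single-machine baseline:  python pi_bench.py
from mpi4py import MPI
import numpy as np
import time

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_TOTAL = 20_000_000                      # Monte Carlo samples for estimating pi
n_local = N_TOTAL // size

comm.Barrier()
t0 = time.perf_counter()
rng = np.random.default_rng(seed=rank)    # independent random stream per process
xy = rng.random((n_local, 2))
hits_local = np.count_nonzero((xy ** 2).sum(axis=1) <= 1.0)
hits_total = comm.reduce(hits_local, op=MPI.SUM, root=0)
comm.Barrier()
elapsed = time.perf_counter() - t0

if rank == 0:
    print(f"processes={size}  pi_estimate={4 * hits_total / N_TOTAL:.5f}  time={elapsed:.2f}s")
```

Running the same script with 1 process on a single machine and with 4 processes spread over the two nodes gives the kind of speedup and cost comparison the proposed measurements target.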
|
310 |
A novel sequential ABC algorithm with applications to the opioid crisis using compartmental models. Langenfeld, Natalie Rose, 01 May 2018.
The abuse of and dependence on opioids are major public health problems, and have been the focus of intense media coverage and scholarly inquiry. This research explores the problem in Iowa through the lens of infectious disease modeling. We wanted to identify the current state of the crisis, factors affecting the progression of the addiction process, and evaluate interventions as data becomes available. We introduced a novel sequential Approximate Bayesian Computation technique to address shortcomings of existing methods in this complex problem space, after surveying the literature for available Bayesian computation techniques.
A spatial compartmental model was used which allowed forward and backward progression through susceptible, exposed, addicted, and removed disease states. Data for this model were compiled over the years 2006-2016 for Iowa counties, from a variety of sources. Prescription overdose deaths and treatment data were obtained from the Iowa Department of Public Health, possession and distribution arrest data were acquired from the Iowa Department of Public Safety, a measure of total available pain reliever prescriptions was derived from private health insurance claims data, and population totals were obtained from the US Census Bureau.
Inference was conducted in a Bayesian framework. A measure called the empirically adjusted reproductive number, which estimates the expected number of new users generated by a single user, was used to examine the growth of the crisis. Results expose the trend in recruitment of new users and the peak recruitment times. While we identify an overall decrease in the rate of spread during the study period, the scope of the problem remains severe, and interesting outlying trends require further investigation. In addition, an examination of the reproductive numbers estimated for contact within and between counties indicates that medical exposure, rather than spread through social networks, may be the key driver of this crisis.
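A discrete-time sketch of the kind of compartmental structure described above, with susceptible, exposed, addicted, and removed states and both forward and backward transitions (relapse and desistance). The rates, initial conditions, and single-county setting are illustrative assumptions rather than the fitted spatial model.

```python
import numpy as np

def simulate_sear(n_steps=120, N=100_000,
                  beta=0.00002, medical=0.002,    # social-contact and medical exposure rates
                  progress=0.15, recover=0.05,    # E -> A and A -> R
                  relapse=0.02, desist=0.10):     # R -> A (backward) and E -> S (backward)
    """Toy discrete-time S-E-A-R model with forward and backward transitions."""
    S, E, A, R = N - 200, 150, 50, 0
    history = []
    for _ in range(n_steps):
        new_E = min(S, S * (beta * A + medical))   # exposure via current users and prescriptions
        back_S = E * desist
        new_A = E * progress
        new_R = A * recover
        back_A = R * relapse
        S += back_S - new_E
        E += new_E - new_A - back_S
        A += new_A + back_A - new_R
        R += new_R - back_A
        history.append((S, E, A, R))
    return np.array(history)

traj = simulate_sear()
print("final S, E, A, R:", traj[-1].round(1))
```

In an ABC setting like the one described, counts derived from trajectories such as this one would be matched against the observed overdose, treatment, and arrest data to update the transition-rate parameters.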
|