About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
221

Estudos e avaliações de compiladores para arquiteturas reconfiguráveis / A compiler analysis for reconfigurable hardware

Joelmir José Lopes 25 May 2007
With the growing capacity of integrated circuits (ICs) and the consequent complexity of applications, especially embedded ones, one requirement has become fundamental in the development of these systems: development tools that are increasingly accessible to engineers, allowing, for example, a program written in C to be converted directly into hardware. The FPGA (Field Programmable Gate Array), a fundamental element in the characterization of reconfigurable computing, is an example of this growth, both in IC capacity and in tool availability. The objectives of this project were to study tools that convert C, C++, or Java into reconfigurable hardware, to study benchmarks to be executed with those tools in order to measure their performance, and to master the concepts involved in converting high-level languages into reconfigurable hardware. The platform used in the project was the Xilinx XUP V2P.
222

Avaliação de desempenho de controladores preditivos multivariáveis / Performance evaluation of multivariable predictive controllers

Santos, Rodrigo Ribeiro 11 November 2013
In advanced process control, Model Predictive Control (MPC) may be considered the most important innovation of recent years and the standard tool for industrial applications, because it keeps the plant operating within its constraints in the most profitable region. However, like every control algorithm, an MPC that has been in operation for some time rarely works as originally designed. To preserve the benefits of MPC systems over a long period, their performance needs to be monitored and evaluated during operation. This task requires reliable and effective tools to detect when controller performance falls below the desired level, in order to decide whether the system needs recommissioning. The objective of this work is therefore the development of techniques for monitoring and evaluating the performance of multivariable predictive controllers; two new tools are developed: a modified LQG benchmark and an IHMC benchmark. The results obtained from numerical simulations were satisfactory and consistent with the technical literature underlying the evaluators, which were used to monitor the MPC control system of an oil-water-gas three-phase separation process, offering an appropriate solution and providing a basis for implementation in real industrial systems.
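The controller-assessment idea above can be sketched as a variance-ratio index: compare the output variance a benchmark controller could achieve against the variance actually measured in operation. This is a minimal illustration, not the thesis's modified LQG or IHMC evaluators; the benchmark variance and the simulated output record are hypothetical.

```python
import numpy as np

def performance_index(benchmark_var, y):
    """Ratio of the benchmark (ideal) output variance to the measured
    output variance; values near 1 mean the controller still performs
    close to its design target, values near 0 signal degradation."""
    return benchmark_var / np.var(y)

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.5, size=10_000)  # hypothetical controlled-output record
eta = performance_index(1.0, y)        # benchmark variance assumed to be 1.0
print(round(eta, 2))
```

A monitoring system would track this index over time and flag the controller for recommissioning when it drifts below a chosen threshold.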
223

NFV performance benchmarking with OVS and Linux containers

Rang, Tobias January 2017
One recent innovation in the networking industry is the concept of Network Function Virtualization (NFV). NFV is based on a networking paradigm in which network functions, which have typically been implemented in the form of dedicated hardware appliances in the past, are implemented in software and deployed on commodity hardware using modern virtualization techniques. While the most common approach is to place each virtual network function in a virtual machine, using hardware-level virtualization, the growing influence and popularity of Docker and other container-based solutions has naturally led to the idea of containerized deployments. This is a promising concept, as containers (or operating-system-level virtualization) can offer a flexible and lightweight alternative to hardware-level virtualization, with the ability to use the resources of the host directly. The main problem with this concept is that the default behavior of Docker and similar technologies is to rely on the networking stack of the host, which typically is not performant enough to meet the performance requirements associated with NFV. In this dissertation, an attempt is made to evaluate the feasibility of using userspace networking to accelerate the network performance of Docker containers, bypassing the standard Linux networking stack by moving the packet processing into userspace.
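As a rough illustration of the kind of throughput microbenchmark used to compare networking paths, the sketch below measures loopback TCP throughput through the kernel stack. It is a generic probe under stated assumptions, not the dissertation's actual NFV test setup; a userspace networking path would replace the socket calls entirely.

```python
import socket
import threading
import time

def loopback_throughput(total_bytes=50_000_000, chunk=65536):
    """Send total_bytes over a loopback TCP connection and return GB/s.
    Everything here traverses the kernel networking stack."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]

    def sink():
        conn, _ = srv.accept()
        while conn.recv(chunk):  # drain until the sender closes
            pass
        conn.close()

    t = threading.Thread(target=sink)
    t.start()
    cli = socket.create_connection(("127.0.0.1", port))
    payload = b"x" * chunk
    sent, start = 0, time.perf_counter()
    while sent < total_bytes:
        cli.sendall(payload)
        sent += chunk
    cli.close()
    t.join()
    srv.close()
    return sent / (time.perf_counter() - start) / 1e9

gbps = loopback_throughput()
print(f"{gbps:.2f} GB/s")
```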
224

M dwarfs from the SDSS, 2MASS and WISE surveys : identification, characterisation and unresolved ultracool companionship

Cook, Neil James January 2016
The aim of this thesis is to use a cross-match between WISE, 2MASS and SDSS to identify a large sample of M dwarfs. Through the careful characterisation and quality control of these M dwarfs I aim to identify rare systems (i.e. unresolved UCD companions, young M dwarfs, late M dwarfs and M dwarfs with common proper motion companions). Locating ultracool companions to M dwarfs is important for constraining low-mass formation models, the measurement of substellar dynamical masses and radii, and for testing ultracool evolutionary models. This is done by using an optimised method for identifying M dwarfs which may have unresolved ultracool companions. To do this I construct a catalogue of 440 694 M dwarf candidates, from WISE, 2MASS and SDSS, based on optical- and near-infrared colours and reduced proper motion. With strict reddening, photometric and quality constraints I isolate a sub-sample of 36 898 M dwarfs and search for possible mid-infrared M dwarf + ultracool dwarf candidates by comparing M dwarfs which have similar optical/near-infrared colours (chosen for their sensitivity to effective temperature and metallicity). I present 1 082 M dwarf + ultracool dwarf candidates for follow-up. Using simulated ultracool dwarf companions to M dwarfs, I estimate that the occurrence of unresolved ultracool companions amongst my M dwarf + ultracool dwarf candidates should be at least four times the average for my full M dwarf catalogue. I discuss yields of candidates based on my simulations. The possible contamination and bias from misidentified M dwarfs is then discussed, from chance alignments with other M dwarfs and UCDs, from chance alignments with giant stars, from chance alignments with galaxies, and from blended systems (via visual inspection). I then use optical spectra from LAMOST to spectral type a subset of my M dwarf + ultracool dwarf candidates. 
These candidates need confirming as true M dwarf + ultracool dwarf systems; thus I present a new method I developed that uses low-resolution near-infrared spectra and relies on two colour-similar objects (one an excess candidate, one not) having very similar spectra. Differencing the spectra of two colour-similar objects should leave the signature of a UCD in the residual, which I search for using the difference in two spectral bands designed to identify UCD spectral features. I then present the methods used to identify other rare systems from my full M dwarf catalogue. Young M dwarfs were identified by measuring equivalent widths of Hα from the LAMOST spectra, and by measuring rotation periods from Kepler 2 light curves. I identify late M dwarfs photometrically (using reduced proper motion and colour cuts) and spectroscopically (using the LAMOST spectra with spectral indices from the literature). I also present common proper motion analysis aimed at finding Tycho-2 primaries for my M dwarfs, and look for physically separated M dwarf + M dwarf pairs (internally within my full M dwarf catalogue).
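The photometric selection described above (colour cuts plus reduced proper motion) can be sketched as follows. The standard reduced proper motion formula H = m + 5 log10(μ) + 5 is used, but the cut thresholds and the catalogue values are illustrative only, not the thesis's actual selection criteria.

```python
import numpy as np

def reduced_proper_motion(m, mu):
    """H = m + 5*log10(mu) + 5, with proper motion mu in arcsec/yr."""
    return m + 5.0 * np.log10(mu) + 5.0

# hypothetical catalogue entry: SDSS r, i, z magnitudes and proper motion
r, i, z, mu = 19.1, 17.6, 16.8, 0.12
H_r = reduced_proper_motion(r, mu)

# illustrative cuts only: red optical colours plus a dwarf-like (high)
# reduced proper motion to reject distant giants with similar colours
is_candidate = (r - i > 0.5) and (i - z > 0.3) and (H_r > 15.0)
print(round(H_r, 2), is_candidate)
```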
225

On the Performance of the Solaris Operating System under the Xen Security-enabled Hypervisor

Bavelski, Alexei January 2007
This thesis presents an evaluation of the Solaris version of the Xen virtual machine monitor and a comparison of its performance to that of Solaris Containers under similar conditions. Xen is a virtual machine monitor based on the paravirtualization approach, which provides an instruction set different from the native machine environment and therefore requires modifications to the guest operating systems. Solaris Zones is an operating-system-level virtualization technology that is part of the Solaris OS. Furthermore, we provide a basic performance evaluation of the security modules for Xen and Zones, known as sHype and Solaris Trusted Extensions, respectively. We evaluate the control domain (known as Domain-0) and user domain performance as the number of user domains increases. Testing Domain-0 with an increasing number of user domains allows us to evaluate how much overhead virtual operating systems impose in the idle state and how their number influences overall system performance. Testing one user domain while increasing the number of idle domains allows us to evaluate how the number of domains influences operating system performance. By testing increasing numbers of concurrently loaded user domains, we investigate total system efficiency and load balancing as a function of the number of running systems. System performance was limited by CPU, memory, and hard drive characteristics. In CPU-bound tests, Xen exhibited performance close to that of Zones and of native Solaris, losing 2-3% to virtualization overhead. In memory-bound and hard-drive-bound tests, Xen showed 5 to 10 times worse performance.
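Overhead figures like the 2-3% quoted above come from comparing virtualized benchmark scores against a native baseline. A minimal sketch of that arithmetic, with hypothetical scores:

```python
def overhead_pct(native_score, virtualized_score):
    """Relative slowdown (%) of a virtualized benchmark score
    against the native baseline (higher score = better)."""
    return 100.0 * (native_score - virtualized_score) / native_score

# hypothetical CPU-bound scores in operations per second
print(overhead_pct(1000.0, 975.0))  # 2.5, i.e. within the 2-3% range
```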
226

Prestandajämförelse mellan Amazon EC2 och privat datacenter / Performance comparison between Amazon EC2 and private computer center

Johansson, Daniel, Jibing, Gustav, Krantz, Johan January 2013
In recent years, public clouds have become an alternative for companies to use instead of local data centers. What public clouds offer is a service that lets companies and individuals rent computing capacity, so they no longer need to spend money on unused resources. Instead of buying a large amount of hardware and estimating the capacity needed, they can gradually scale up as needed, or scale down if desired. Companies thus avoid spending money on hardware that sits idle, or running with too little computing capacity, which could mean that large batch jobs do not finish on time and potential customers are lost. Potential problems can arise, however, when a cloud virtualizes and tries to distribute computing capacity among several thousand instances, while scalability is supposed to have no limits, according to the cloud providers. In this report we have used various benchmarks to analyze the performance of the largest public cloud provider on the market, Amazon, and its EC2 and S3 services. We ran performance tests on system memory, MPI, and hard disk I/O, since these are some of the factors preventing public clouds from taking over the market, according to the article Above the Clouds: A Berkeley View of Cloud Computing [3]. We then compared the results with the performance of a private cloud in a data center. Our results indicate that the performance of the public cloud is not predictable and needs a substantial boost before large companies have a reason to start using it.
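One simple way to quantify the "predictability" the report examines is the coefficient of variation across repeated benchmark runs. A sketch with hypothetical throughput samples, not the report's measured data:

```python
import statistics

def coefficient_of_variation(samples):
    """stdev/mean of repeated benchmark runs: a crude measure of how
    predictable the measured performance is (higher = less predictable)."""
    return statistics.stdev(samples) / statistics.mean(samples)

# hypothetical disk I/O throughput samples (MB/s) from repeated runs
public_cloud = [92, 150, 78, 131, 64, 143]
private_dc = [118, 121, 119, 122, 120, 117]
print(coefficient_of_variation(public_cloud) > coefficient_of_variation(private_dc))  # True
```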
227

Studie av utvecklingsverktyg med inriktning mot PLC-system / A study of development tools with a focus on PLC systems

Brax, Christoffer January 1999
Computer use in society grows every day, so it is important that software is of high quality, since some software controls critical machines such as aircraft. One way to obtain high-quality software is to use good development tools. In this work, five development tools are evaluated: GNAT (Ada), Microsoft Visual C++, Microsoft J++, Borland Delphi and Active Perl. The evaluation focuses on the development of software for PLC systems. The aspects evaluated are the efficiency of the generated code, compilation time, size of the generated code, code portability, and the range of ready-made components. The study was carried out by means of practical tests.
228

Automated usability analysis and visualisation of eye tracking data

De Bruin, Jhani Adre January 2014
Usability is a critical aspect of the success of any application. It can be the deciding factor in which application is chosen and can have a dramatic effect on the productivity of users. Eye tracking has been successfully utilised as a usability evaluation tool because of the strong link between where a person is looking and their cognitive activity. Currently, eye tracking usability evaluation is a time-intensive process requiring extensive human expert analysis, so it is only feasible for small-scale usability testing. This study developed a method to reduce the time expert analysts spend interpreting eye tracking results by automating part of the analysis process. This was accomplished by comparing the visual strategy of a benchmark user against the visual strategies of the remaining participants. A comparative study demonstrates how the resulting metrics highlight the same tasks with usability issues as those identified by an expert analyst. The method also produces visualisations to assist the expert in identifying problem areas on the user interface. Eye trackers are now available for various mobile devices, providing the opportunity to perform large-scale, remote eye tracking usability studies. The proposed approach makes it feasible to analyse these extensive eye tracking datasets and improve the usability of an application. / Dissertation (MSc)--University of Pretoria, 2014. / Computer Science / unrestricted
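Comparing a benchmark user's visual strategy against other participants is often done by encoding fixations as sequences of areas of interest (AOIs) and computing an edit-distance similarity. The sketch below illustrates that general idea; it is not the specific metric developed in this study.

```python
def levenshtein(a, b):
    """Edit distance between two AOI (area-of-interest) sequences."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def scanpath_similarity(benchmark, participant):
    """1.0 = identical visual strategy, 0.0 = completely different."""
    d = levenshtein(benchmark, participant)
    return 1.0 - d / max(len(benchmark), len(participant))

# hypothetical AOI sequences: each letter is one region of the interface
print(scanpath_similarity("ABCD", "ABXD"))  # 0.75
```

Tasks where many participants score low against the benchmark user are candidates for closer expert inspection.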
229

A sectoral benchmark-and-trade system to improve electricity efficiency in South Africa

Inglesi-Lotz, Roula 13 October 2011
Continuously increasing energy intensity is recognised internationally as one of the greatest dangers humanity faces with regard to future climate change and its detrimental consequences. Improving the intensity of energy consumption is an important step towards decreasing greenhouse gas emissions originating from fossil fuel-based electricity generation and consumption. As a result, South Africa took the bold step in 2010 of committing itself to the Secretariat of the United Nations Framework Convention on Climate Change (UNFCCC) to take all necessary actions to decrease the country's greenhouse gas emissions to 34% below the "business-as-usual" scenario by 2020 (Republic of South Africa, 2010). To do so, the country has to substantially reduce its energy consumption. This should be done without affecting economic output; however, major energy consumers might prefer to decrease their output in order to comply with rules focusing on the reduction of energy use. In South Africa, harmful environmental effects are created mainly by the unprecedented rise in electricity consumption. The bulk of the country's greenhouse gas emissions (more than 60%) originates from the electricity generation sector, which is heavily dependent on coal-fired power stations. The purpose of this study is to promote a benchmark-and-trade system to improve electricity efficiency in South Africa, with the ultimate objective of reducing the country's greenhouse gas emissions. The uniqueness of this study is twofold. On the one hand, South African policy-makers have rarely discussed or proposed the implementation of a cap-and-trade system; on the other, the same mechanism has never been proposed for electricity efficiency. This first requires an in-depth knowledge of the electricity consumption and efficiency of the South African economy, both in its entirety and at a sectoral level.
The key findings of the empirical analysis are as follows. A decreasing effect of electricity prices on electricity consumption existed during the period 1980 to 2005, contrary to the increasing effect of total output on electricity consumption. The results also indicated that the higher the prices, the higher the sensitivity of consumers to price changes (price elasticity), and vice versa. The relationship between electricity consumption and electricity prices differs among sectors. The findings point towards ambiguous results, and even a lack of behavioural response to price changes, in all but the industrial sector, where electricity consumption increased as prices decreased. On the other hand, economic output affected the electricity consumption of two sectors (industrial and commercial), with high and statistically significant coefficients. Based on a decomposition exercise, the change in production was the main factor increasing electricity consumption, while efficiency improvement was a driver of decreases in electricity consumption. In the sectoral analysis, increases in production contributed to rising electricity usage in all sectors, with 'iron and steel', 'transport' and 'non-ferrous metals' the main contributors to the effect. On the decreasing side of consumption, only five of fourteen sectors were influenced by efficiency improvements. The country's electricity intensity more than doubled from 1990 to 2007, and its weighted growth of intensity was higher than that of the majority of OECD countries by a considerable margin. Moreover, nine of the thirteen South African sectors were substantially more intensive than their OECD counterparts. Although the picture presented is rather dismal, there is scope for improvement. This study proposes a sectoral benchmark-and-trade system.
This system aspires to steadily improve participants' efficiency performance by rewarding successful participants with monetary incentives through trading with less successful ones. The benchmark for each sector is set at the average of the OECD members. Depending on a sector's performance against this standard, it is awarded credits or allowances to sell if it does better than the benchmark; if it does worse, it must buy credits in the market created. The price per credit is determined by the interaction of demand and supply in the market. The findings of a comparison with a carbon tax system show that the proposed system benefits the majority of sectors and gives them better incentives to change their behaviour and production methods to more efficient ones. The system also fulfils the desired characteristics of a benchmark-and-trade system: certainty of environmental performance, business certainty, flexibility, administrative ease and transparency. / Thesis (PhD)--University of Pretoria, 2011. / Economics / unrestricted
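The credit mechanics described above can be sketched arithmetically: a sector's position is the gap between the benchmark intensity and its own intensity, scaled by its output. The intensities and output figures below are hypothetical, chosen only to show the sign convention.

```python
def credit_position(sector_intensity, benchmark_intensity, output):
    """Allowances earned (positive) or credits owed (negative):
    (benchmark intensity - actual intensity) scaled by sector output."""
    return (benchmark_intensity - sector_intensity) * output

# hypothetical electricity intensities relative to an OECD-average
# benchmark of 1.0, and hypothetical sector outputs
print(credit_position(0.75, 1.0, 500.0))  # 125.0 -> may sell credits
print(credit_position(1.25, 1.0, 200.0))  # -50.0 -> must buy credits
```

Sectors with negative positions buy from those with positive positions, at a price set by supply and demand in the created market.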
230

Development Of A New Finite-Volume Lattice Boltzmann Formulation And Studies On Benchmark Flows

Vilasrao, Patil Dhiraj 07 1900
This thesis is concerned with a new formulation of the finite-volume lattice Boltzmann equation method and its implementation on unstructured meshes. Finite-volume discretization with a cell-centered tessellation is employed. The new formulation adopts a total variation diminishing (TVD) concept. The formulation is analyzed through its modified partial differential equation and the apparent viscosity of the model. Further, a high-order extension of the present formulation is laid out. Parallel simulations of a variety of two-dimensional benchmark flows are carried out to validate the formulation. In Chapter 1, important notions of kinetic theory and its most celebrated equation, 'the Boltzmann equation', are given. The historical development and theory of the discrete form of the Boltzmann equation are briefly discussed. Various off-lattice schemes are introduced, and the methodologies adopted in the past for solving the lattice Boltzmann equation with finite-volume discretization are reviewed. The basic objectives of this thesis are stated. In Chapter 2, the basic formulation of the lattice Boltzmann equation method, with the rationale behind different boundary-condition implementations, is discussed. Benchmark flows are studied for various flow phenomena with the parallel code developed in-house. In particular, a new benchmark solution is given for the flow induced inside a rectangular, deep cavity. In Chapter 3, the need for off-lattice schemes and a general introduction to the finite-volume approach and unstructured mesh technology are given. A new mathematical formulation of the off-lattice finite-volume lattice Boltzmann equation procedure on a cell-centered, arbitrary triangular tessellation is laid out. This formulation employs the total variation diminishing procedure to treat the advection terms. The implementation of the boundary conditions is given with an outline of the numerical implementation.
The Chapman-Enskog (CE) expansion is performed in Chapter 4 to derive the conservation equations and an expression for the apparent viscosity from the finite-volume lattice Boltzmann equation formulation. Numerical investigations are then performed to analyze how the apparent viscosity varies with grid resolution. In Chapter 5, an extensive validation of the newly formulated finite-volume scheme is presented. The benchmark flows considered are of increasing complexity, namely (1) Poiseuille flow, (2) unsteady Couette flow, (3) lid-driven cavity flow, (4) flow past a backward-facing step and (5) steady flow past a circular cylinder. A sensitivity study of the various limiter functions has also been carried out. The main objective of Chapter 6 is to enhance the order of accuracy of the spatio-temporal calculations in the newly presented finite-volume lattice Boltzmann equation formulation. Further, efficient implementation of the formulation for parallel processing is carried out; an appropriate decomposition of the computational domain is performed using a graph-partitioning tool. The order of accuracy has been verified by simulating flow past a curved surface. The extended formulation is employed to study more complex unsteady flows past circular cylinders. In Chapter 7, the main conclusions of this thesis are summarized, possible issues to be examined for further improvements in the formulation are identified, and potential applications of the present formulation are discussed.
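The total variation diminishing treatment of the advection terms can be illustrated in one dimension with a minmod-limited finite-volume update. This is a structured 1-D sketch of the TVD idea only, not the thesis's cell-centered unstructured lattice Boltzmann formulation.

```python
import numpy as np

def minmod(a, b):
    """Classic TVD slope limiter: zero at extrema, else the smaller slope."""
    return np.where(a * b <= 0.0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

def tvd_advect(f, c):
    """One step of 1-D linear advection (periodic, CFL number 0 < c <= 1)
    with minmod-limited second-order upwind fluxes."""
    slope = minmod(f - np.roll(f, 1), np.roll(f, -1) - f)
    f_face = f + 0.5 * (1.0 - c) * slope  # limited right-face values (u > 0)
    return f - c * (f_face - np.roll(f_face, 1))

f = np.zeros(64)
f[8:16] = 1.0                  # square pulse: a stress test for oscillations
for _ in range(32):
    f = tvd_advect(f, 0.5)
print(round(f.sum(), 6))       # mass is conserved by the flux form
```

The limiter keeps the advected pulse free of the spurious over- and undershoots an unlimited second-order scheme would produce.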
