
Scalability Modeling for Optimal Provisioning of Data Centers in Telenor : A better balance between under- and over-provisioning

Rygg, Knut Helge January 2012 (has links)
The scalability of an information system describes the relationship between system capacity and system size. This report studies the scalability of Microsoft Lync Server 2010 in order to provide guidelines for provisioning hardware resources. Optimal provisioning is required to reduce both deployment and operational costs while keeping an acceptable service quality. All Lync servers in the test setup are virtualized using VMware ESXi 5.0, and the system runs on a Cisco Unified Computing System (UCS) platform. The scenario is a typical hosted Lync deployment with external users only and telephone integration. While several companies offer hosted virtual Lync deployments and the Cisco UCS platform has a significant market share, Microsoft's capacity planning guides do not cover such a deployment scenario or hardware platform. This report consequently fills an information gap. The scalability is determined by empirical measurements with different workloads and system sizes. The workload is determined by the number of Lync end-users, and the system size varies from 1/4 to 4/4 of a Cisco UCS blade server. The results show linear scaling in the range of 1/4 to 4/4 blade servers. The processor is found to be the main bottleneck resource in this deployment. The mean opinion score (MOS) as well as the front-end server utilization are the best metrics for monitoring service quality.
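A minimal sketch of the kind of scalability analysis the abstract describes: fitting a linear model to capacity measured at different system sizes and checking how well the linear assumption holds. The figures and variable names below are hypothetical placeholders, not measurements from the thesis.

```python
# Hypothetical illustration of checking linear scalability:
# capacity (supported Lync users) measured at several system sizes.
import numpy as np

system_size = np.array([0.25, 0.5, 0.75, 1.0])        # fraction of one UCS blade
measured_users = np.array([1200, 2450, 3600, 4900])   # made-up measurements

# Least-squares fit of capacity = a * size + b
a, b = np.polyfit(system_size, measured_users, deg=1)
predicted = a * system_size + b

# Coefficient of determination as a crude linearity check
ss_res = np.sum((measured_users - predicted) ** 2)
ss_tot = np.sum((measured_users - measured_users.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"capacity ~ {a:.0f} * size + {b:.0f}, R^2 = {r_squared:.3f}")
```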

Terrain Rendering Techniques for the HPC-Lab Snow Simulator

Babington, Kjetil January 2012 (has links)
This thesis presents a technique for GPU-based terrain rendering and the changes made to the HPC-lab snow simulator to integrate the new terrain rendering technique into the simulator. Our novel terrain rendering technique combines ideas from existing techniques such as CDLOD and Geometry Clipmaps into a hybrid method. The terrain rendering operates on patches of quads that are tessellated in hardware according to the level of detail needed. The tessellated patches are then displaced using a vertex texture fetch of the heightmap in the tessellation shader. The implemented GPU terrain rendering technique is then added to the HPC-lab snow simulator, and the simulator is changed to accommodate it: all of the old GLSL shaders are updated to the newest standard, the code structure is reorganized, and the collision detection of the snow simulator is updated to reflect the changes made to the terrain. The results from our benchmarks show that the tessellation pipeline can handle terrain meshes of over 16 million triangles while maintaining a stable frame rate of over 1400 FPS. When used in combination with the simulator, the implementation still achieves frame rates that are vastly greater than those of the old implementation in the snow simulator. The visual results achieved by using Perlin noise give the simulator a more realistic feel without degrading the performance of the implementation. Suggestions for further improvements are also included.
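A small sketch of the distance-based level-of-detail selection that CDLOD-style patch tessellation relies on: each terrain patch is assigned a tessellation factor that is halved for each distance band between the patch and the camera. The thresholds, patch size and grid layout are illustrative assumptions, not values from the thesis.

```python
# Hypothetical per-patch LOD selection for a tessellated terrain grid.
import math

MAX_TESS_LEVEL = 64                              # finest tessellation factor assumed
LOD_DISTANCES = [50, 100, 200, 400, 800, 1600]   # made-up LOD band boundaries

def patch_tess_level(patch_center, camera_pos):
    """Pick a tessellation factor from the distance between patch and camera."""
    d = math.dist(patch_center, camera_pos)
    level = MAX_TESS_LEVEL
    for boundary in LOD_DISTANCES:
        if d <= boundary:
            return max(level, 1)
        level //= 2          # halve detail for each LOD band we pass
    return 1                 # coarsest level for very distant patches

# Example: a 4x4 grid of 64-unit patches around the origin
camera = (0.0, 10.0, 0.0)
for i in range(4):
    for j in range(4):
        center = (i * 64.0 + 32.0, 0.0, j * 64.0 + 32.0)
        print(i, j, patch_tess_level(center, camera))
```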

Implementing a Heterogeneous Multi-Core Prototype in an FPGA

Rusten, Leif Tore, Sortland, Gunnar Inge January 2012 (has links)
Since the mid-1980s, processor performance growth has been remarkable, with an annual growth of about 52 %. Methods such as architectural enhancements exploiting ILP and frequency scaling have been effective at increasing performance, but are now limited by their diminishing returns and the power wall. Heterogeneous processors look promising as an alternative source of continued growth, but research on heterogeneous software is made difficult because heterogeneous hardware is in short supply. This thesis covers the design and implementation of a heterogeneous processor called SHMAC and its framework. The flexibility of the delivered system allows rapid exploration of both the hardware and software sides of heterogeneous processor research questions. The system is intended for research at CARD at NTNU. Two processor tiles and a set of additional tiles for extended functionality are provided, yielding a wide range of possible hardware setups in the delivered framework. Using a Xilinx Virtex 6, we were able to implement 40 integer cores or 16 floating-point cores.

Empirically Based Design Guidelines for Gaze Interaction in Windows 7

Raudsandmoen, Håkon, Rødsjø, Børge January 2012 (has links)
The purpose of this study has been to test the use of gaze interaction in common everyday computer tasks, with the intent to suggest design guidelines for gaze interaction in Microsoft Windows 7. This has been done by organizing a user test with fifteen participants, using a purpose-built gaze-interactive application called Discovery and a Tobii X60 eye tracker. Five demo applications have been created within the Discovery software, all utilizing gaze interaction. They are customized to reflect five user test tasks: playing a video game, exploring a picture gallery, doing drag-and-drop operations, browsing a web page and interacting with different Microsoft Windows controls. The four types of controls tested are command buttons, links, check boxes and sliders. Both quantitative and qualitative data were gathered during the user test. Through a discussion of the test results, we were able to suggest ten specific design guidelines for gaze interaction. These cover the tested controls, drag-and-drop operations and automatic scrolling, as well as the use of head gestures. Additional findings indicate that gaze interaction is more suitable for passive tasks, such as reading with automatic scrolling, than for more physical tasks, like doing drag-and-drop operations. To support gaze interaction, we found that current software will either require a major redesign or have to be used in combination with other interaction styles. Eye tracking technology has improved over recent years, becoming increasingly affordable and accurate. Through this study we have seen that gaze interaction has much to offer everyday computing. By recommending fundamental design guidelines, we hope to aid software developers and designers in the development of future gaze interactive systems.
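A sketch of dwell-time activation, a selection mechanism gaze-interactive controls of this kind commonly use: a control is triggered once the gaze has rested on it for a set time. The dwell threshold and the event structure are assumptions made for illustration; the abstract does not specify how Discovery activates its controls.

```python
# Hypothetical dwell-time selection loop for gaze samples from an eye tracker.
from dataclasses import dataclass
from typing import Optional

DWELL_TIME_MS = 600   # assumed activation threshold

@dataclass
class GazeSample:
    timestamp_ms: int
    target: Optional[str]   # control currently under the gaze point, if any

def dwell_activations(samples):
    """Yield (timestamp, target) whenever the gaze dwells long enough on one control."""
    current, since = None, None
    for s in samples:
        if s.target != current:
            current, since = s.target, s.timestamp_ms   # gaze moved to a new control
        elif current is not None and s.timestamp_ms - since >= DWELL_TIME_MS:
            yield s.timestamp_ms, current
            since = s.timestamp_ms                      # avoid re-triggering immediately

samples = [GazeSample(t, "play_button") for t in range(0, 1000, 50)]
print(list(dwell_activations(samples)))   # [(600, 'play_button')]
```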

A Survey of Combining Association Rules for Pre-warning of Oil Production Problems

Helland, Per Kristian January 2007 (has links)
Periods of sub-optimal production rates, or complete shut-downs, add negative numbers to the revenue graph for oil companies. Oil and gas are produced from several reservoirs and through many wells with varying gas/oil proportions, making it a complex process that is difficult to control. As a part of a three-step process for utilizing data in the oil production domain, this thesis derives methods for combining event patterns, called restricted association rules, in time series in order to warn about future anomalies in oil production processes. Two problems have been considered: network learning and network reasoning. The suggested solution consists of building an Association Rules Network (ARN) from the rule set given as input. After transforming the hypergraph-based ARN to a directed acyclic graph, correlations between nodes are found by applying the shortest-path principle. Motivated by the shortcomings of this simple solution, it is shown how a method for learning Bayesian networks with support for representation of temporal dependencies can be derived from the initial ARN. The concept, named Temporal Bayesian Network of Events (TBNE), is a powerful yet complex solution that enjoys the properties of Bayesian network reasoning while at the same time representing temporal information. This thesis has shown that it is theoretically feasible to combine restricted association rules in order to create a network structure for reasoning. It is concluded that the final choice of solution must be based on a careful consideration of the trade-off between complexity and expressiveness, and that a natural continuation is testing the suggested concepts with real data.
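A minimal sketch of the shortest-path step mentioned above: once the rule network has been flattened to a directed acyclic graph with weighted edges, the correlation between two event nodes can be read off the cheapest path between them. The graph, edge weights and event names are invented for illustration and are not taken from the thesis.

```python
# Hypothetical DAG of event nodes derived from restricted association rules,
# with edge weights acting as a "distance" (e.g. inverse rule confidence).
import heapq

edges = {
    "pressure_drop": [("valve_alarm", 0.3), ("flow_anomaly", 0.7)],
    "valve_alarm":   [("shutdown", 0.4)],
    "flow_anomaly":  [("shutdown", 0.2)],
    "shutdown":      [],
}

def shortest_path_cost(graph, source, target):
    """Dijkstra over the rule graph; smaller cost = stronger inferred correlation."""
    dist = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(queue, (nd, neighbor))
    return float("inf")

print(shortest_path_cost(edges, "pressure_drop", "shutdown"))  # 0.7 via valve_alarm
```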

A Framework for discovering Interesting Rules from Event Sequences with the purpose of pre-warning Oil Production Problems

Christiansen, Joacim Lunewski January 2007 (has links)
Periods of sub-optimal production rates, or complete shut-downs, add negative numbers to the revenue graph for oil companies. Oil and gas are produced from several reservoirs and through many wells with varying gas/oil proportions, making it a complex process that is difficult to control. As a part of a three-step process for utilizing data in the oil production domain, this thesis derives methods for discovering event patterns, called restricted association rules, from time series in order to pre-warn about future problems in oil production processes. A restricted rule syntax and semantics is derived to explicitly target rules suited for prediction. Based on the defined rule syntax, a two-step process is derived in which restricted rule mining based on the concept of minimal occurrences is used to discover restricted association rules from a sequence of events. Next, redundant rules are removed during a rule selection phase, based on the concept of minimum improvement and chaining of rules. Information theory is applied in order to identify the most interesting rules, which can be submitted to an expert for validation. Both a simple solution for easy implementation in ConocoPhillips and a more advanced solution appropriate for general prediction cases are derived. This thesis concludes that it is feasible to discover dependencies between events from actual process data. It is also concluded that a large number of rules can be pruned in order to get a manageable set of rules that is believed to have good predictive performance.
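A sketch of the minimum-improvement pruning idea referred to above: a more specific rule is kept only if its confidence beats the best of its generalizations by at least a margin. The rule representation, event names and margin value are assumptions made for the example, not the thesis's actual definitions.

```python
# Hypothetical minimum-improvement pruning of association rules.
# Each rule maps an antecedent event set to a consequent with a confidence.
MIN_IMPROVEMENT = 0.05   # assumed margin

rules = [
    ({"pressure_drop"},                "shutdown", 0.60),
    ({"pressure_drop", "valve_alarm"}, "shutdown", 0.62),  # barely better -> pruned
    ({"pressure_drop", "gas_spike"},   "shutdown", 0.80),  # clear improvement -> kept
]

def prune_by_improvement(rules, margin):
    kept = []
    for antecedent, consequent, conf in rules:
        # Confidence of every strictly more general rule with the same consequent
        general = [c for a, cq, c in rules if cq == consequent and a < antecedent]
        if not general or conf - max(general) >= margin:
            kept.append((antecedent, consequent, conf))
    return kept

for rule in prune_by_improvement(rules, MIN_IMPROVEMENT):
    print(rule)
```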

Rule Engine

Eriksen, Øystein, Leite, Andreas Smogeli January 2007 (has links)
This project is a study of the development of the Rule Engine, a validation system for quality assurance of product data used in the grocery business. The authors were asked by Cogitare AS to develop the Rule Engine: a system in which users without programming skills can build rules and validate product data. The main quality attributes in focus are robustness and user-friendliness. A survey has been used to explore whether our objectives have been achieved and to identify further work. The questionnaire was answered by students and software developers.
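A minimal sketch of what a user-defined validation rule could look like once expressed as data rather than code, which is the core idea behind letting non-programmers build rules. The rule format, field names and operators are invented for illustration; the abstract does not describe the Rule Engine's actual rule language.

```python
# Hypothetical data-driven validation: rules are plain data, not code.
OPERATORS = {
    "not_empty": lambda value, _: bool(value),
    "max_len":   lambda value, arg: len(value) <= arg,
    "in_range":  lambda value, arg: arg[0] <= value <= arg[1],
}

rules = [
    {"field": "gtin",       "op": "max_len",   "arg": 14,         "message": "GTIN too long"},
    {"field": "name",       "op": "not_empty", "arg": None,       "message": "Name is missing"},
    {"field": "net_weight", "op": "in_range",  "arg": (0, 50000), "message": "Weight out of range"},
]

def validate(product, rules):
    """Return the messages of all rules the product record violates."""
    return [r["message"] for r in rules
            if not OPERATORS[r["op"]](product[r["field"]], r["arg"])]

product = {"gtin": "07038010000010", "name": "", "net_weight": 1750}
print(validate(product, rules))   # ['Name is missing']
```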

Self-organized Synaptic Learning of Gaits in Virtual Creatures : A neural simulation study within Connectology

Axelsen, Vebjørn Wærsted January 2007 (has links)
The theory of Connectology sets forth three psychologically founded synaptic learning mechanisms that may describe all aspects of animal learning. Of particular interest to this thesis is the learning of animal motion behavior, or, more specifically, the development of synchronized and repetitive movement patterns: gaits. Computer simulations are performed according to the methodology of computational neuroethology: artificial neural networks are simulated operating in a tight feedback loop with a structurally simple but mechanically realistic body and a physically realistic environment. Neural network learning is purely synaptic and is performed solely within the lifetime of one such ANN-controlled system. Additionally, the configuration parameter space is searched by means of genetic algorithms. Simulation results show examples of synchronized and repetitive movement patterns developing when neuronal and mechanical model parameters are appropriately specified. These simulations thereby provide the first examples known to us of a fully unsupervised and self-organized artificial neural system that synaptically learns synchronized and repetitive motor control. In spite of the limited mechanical model complexity, the most efficient movement patterns to some degree resemble the gaits seen in nature.
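A structural sketch of the experimental setup described above: an outer genetic-algorithm search over configuration parameters, where each candidate is evaluated by simulating one lifetime of closed-loop, within-lifetime learning. The fitness measure, parameter names and selection scheme are placeholders; Connectology's actual learning mechanisms and the simulator's body model are not detailed in the abstract.

```python
# Hypothetical skeleton of the GA-over-lifetime-simulation loop.
import random

def simulate_lifetime(params):
    """Run one ANN-controlled body in its environment; return a fitness score.

    Placeholder: real code would step a physics simulation, feed sensor values
    to the network, apply synaptic (within-lifetime) learning, and measure how
    far and how regularly the creature moves.
    """
    return -abs(params["learning_rate"] - 0.05) - abs(params["coupling"] - 0.8)

def random_params():
    return {"learning_rate": random.uniform(0.0, 0.2),
            "coupling": random.uniform(0.0, 1.0)}

def mutate(params, sigma=0.02):
    return {k: v + random.gauss(0, sigma) for k, v in params.items()}

population = [random_params() for _ in range(20)]
for generation in range(50):
    scored = sorted(population, key=simulate_lifetime, reverse=True)
    parents = scored[:5]                                   # keep the best configurations
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

print(max(population, key=simulate_lifetime))
```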

Case-Based Reasoning in identifying causes of fish death in industrial fish farming

Garaas, Marte, Hiåsen Stevning, Geir Ole January 2011 (has links)
Fish farming is a million-dollar business worldwide, and fish is in fact the third most important export product in Norway, after oil/gas and metal. There are many different aquaculture sites producing fish along our long coastline, and they all have some differences in their production rates and procedures. The fish farmers at these sites hold valuable information about the production, which is almost impossible to derive only from empirical data. In this thesis we introduce Glaucus, a Case-Based Reasoning system which aims to help fish farmers with their decision making when conducting sorting operations at their aquaculture sites. The system is built in Java and uses the jColibri development framework for Case-Based Reasoning. It retrieves cases based on similarity functions from myCBR and jColibri, in addition to custom-made ones. The case base is generated from real-world data, and the case queries are populated by a combination of user input and data from a database with a continuous data flow. Our approach is just the beginning of what we hope will be an even greater journey towards a complete decision support system that will meet the expectations of the fish farmers. Keywords: Case-Based Reasoning, Machine learning, Fish farming, jColibri, myCBR
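A small sketch of the weighted-similarity retrieval step that case-based reasoning frameworks such as jColibri and myCBR build on: the query is compared attribute by attribute with each stored case, and the best-matching case is returned. The attributes, weights and similarity functions here are invented for illustration and are not Glaucus's actual case model.

```python
# Hypothetical weighted nearest-neighbour retrieval over a fish-farming case base.
def numeric_sim(a, b, value_range):
    """Local similarity for numbers: 1.0 when equal, 0.0 at the range extremes."""
    return max(0.0, 1.0 - abs(a - b) / value_range)

WEIGHTS = {"avg_weight_g": 0.5, "sea_temp_c": 0.3, "density_kg_m3": 0.2}
RANGES  = {"avg_weight_g": 5000, "sea_temp_c": 20, "density_kg_m3": 40}

def global_sim(query, case):
    return sum(w * numeric_sim(query[attr], case["features"][attr], RANGES[attr])
               for attr, w in WEIGHTS.items())

case_base = [
    {"features": {"avg_weight_g": 900,  "sea_temp_c": 8,  "density_kg_m3": 18},
     "outcome": "sorting went well"},
    {"features": {"avg_weight_g": 2400, "sea_temp_c": 14, "density_kg_m3": 30},
     "outcome": "elevated mortality after sorting"},
]

query = {"avg_weight_g": 1000, "sea_temp_c": 9, "density_kg_m3": 20}
best = max(case_base, key=lambda c: global_sim(query, c))
print(best["outcome"], round(global_sim(query, best), 3))
```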

OpenVG: Benchmarking and artistic opportunities

Chevillard, Brice January 2007 (has links)
OpenVG is a new open standard for two-dimensional vector graphics on handheld devices. This project, which is both a master's thesis and an internship, aims to study the standard itself in depth before examining the role it can play in the future of artistic content creation. We will see that, under a few conditions, OpenVG has everything needed to fulfil its role in the market and to attract digital artists who would like to enlarge their field of creation. The major aim of the project, however, is to develop a benchmark for both hardware and software implementations. To achieve this goal, a study of the theory of performance evaluation is necessary. After developing the benchmark, it is interesting to run a few tests to illustrate some of the principles set out while studying performance evaluation.
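A bare-bones sketch of the kind of micro-benchmark loop such a performance study relies on: time a fixed rendering workload over many frames and report summary statistics rather than a single run. The render_frame callable is a stand-in for an OpenVG drawing routine; no actual OpenVG binding is assumed.

```python
# Hypothetical benchmark harness: time a rendering workload over many frames.
import statistics
import time

def benchmark(render_frame, frames=500, warmup=50):
    """Return mean, standard deviation and 95th-percentile frame time in ms."""
    for _ in range(warmup):          # let caches, JITs and drivers settle
        render_frame()
    timings = []
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()
        timings.append((time.perf_counter() - start) * 1000.0)
    timings.sort()
    return (statistics.mean(timings),
            statistics.stdev(timings),
            timings[int(0.95 * len(timings)) - 1])

# Stand-in workload; a real benchmark would issue OpenVG path-drawing calls here.
mean_ms, stdev_ms, p95_ms = benchmark(lambda: sum(i * i for i in range(20000)))
print(f"mean {mean_ms:.2f} ms, stdev {stdev_ms:.2f} ms, p95 {p95_ms:.2f} ms")
```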
