About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

The Establishment of Helicopter Subsystem Design-to-Cost Estimates by Use of Parametric Cost Estimating Models

Gilliland, Johnny J. 08 1900 (has links)
The purpose of this research was to develop parametric Design-to-Cost models for selected major subsystems of certain helicopters. This was accomplished by analyzing the relationships between historical production costs and design parameters that are available during the preliminary design phase of the life cycle. Potential contributions are identified for academia, government, and industry. Applying the cost models will benefit the government and DoD by allowing realistic Design-to-Cost estimates to be derived. In addition, companies in the helicopter industry will benefit by using the models for two key purposes: (1) optimizing helicopter design through cost-effective tradeoffs, and (2) justifying a proposal estimate.
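As a minimal illustration of the kind of parametric model the abstract describes, the sketch below fits a power-law cost estimating relationship to historical data. The choice of empty weight as the design parameter, the functional form, and all numbers are illustrative assumptions, not values from the thesis.

    import numpy as np

    # Hypothetical historical data: subsystem empty weight (lb) and production cost (thousand $).
    # Values are placeholders for illustration only.
    weight = np.array([120.0, 250.0, 400.0, 610.0, 900.0])
    cost = np.array([85.0, 160.0, 240.0, 340.0, 470.0])

    # Fit a power-law CER, cost = a * weight**b, by linear regression in log space.
    b, log_a = np.polyfit(np.log(weight), np.log(cost), 1)
    a = np.exp(log_a)

    def estimate_cost(design_weight):
        """Design-to-Cost estimate for a subsystem at the preliminary design phase."""
        return a * design_weight ** b

    print(f"CER: cost = {a:.1f} * weight^{b:.2f}")
    print(f"Estimated cost at 500 lb: {estimate_cost(500.0):.0f} (thousand $)")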
12

EXTRACTION AND PREDICTION OF SYSTEM PROPERTIES USING VARIABLE-N-GRAM MODELING AND COMPRESSIVE HASHING

Muthukumarasamy, Muthulakshmi 01 January 2010 (has links)
In modern computer systems, memory accesses and power management are the two major performance-limiting factors. Accesses to main memory are very slow compared to operations within a processor chip. Hardware write buffers, caches, out-of-order execution, and prefetch logic are commonly used to reduce the time spent waiting for main memory accesses; compiler loop interchange and data layout transformations can also help. Unfortunately, large data structures often have access patterns for which none of the standard approaches are useful. Using smaller data structures can significantly improve performance by allowing the data to reside in higher levels of the memory hierarchy. This dissertation proposes using a lossy data compression technique called "compressive hashing" to create "surrogates" that can augment original large data structures to yield faster typical data access. One way to optimize system performance for power consumption is to provide predictive control of system-level energy use. This dissertation creates a novel instruction-level cost model called the variable-n-gram model, which is closely related to the n-gram analysis commonly used in computational linguistics. This model does not require direct knowledge of complex architectural details, and is capable of determining performance relationships between instructions from an execution trace. Experimental measurements are used to derive a context-sensitive model for the performance of each type of instruction in the context of an n-instruction sequence. Dynamic runtime power prediction mechanisms often suffer from high overhead costs. To reduce this overhead, the dissertation encodes the static instruction-level predictions into a data structure and uses compressive hashing to provide on-demand runtime access to those predictions. Genetic programming is used to evolve compressive hash functions, and performance analysis of applications shows that runtime access overhead can be reduced by a factor of roughly 3x-9x.
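The sketch below illustrates the general idea of a context-sensitive, n-gram-style instruction cost model built from an execution trace. The function names, the simple back-off scheme, and the toy trace are assumptions for illustration, not the thesis's actual variable-n-gram formulation.

    from collections import defaultdict

    def build_ngram_cost_model(trace, max_n=3):
        """Average observed cost of an instruction in the context of up to
        (max_n - 1) preceding instructions.  `trace` is a list of
        (opcode, measured_cost) pairs, e.g. from instrumented runs."""
        totals = defaultdict(lambda: [0.0, 0])   # context tuple -> [sum, count]
        opcodes = [op for op, _ in trace]
        for i, (op, cost) in enumerate(trace):
            for n in range(1, max_n + 1):
                if i - n + 1 < 0:
                    break
                ctx = tuple(opcodes[i - n + 1:i + 1])
                totals[ctx][0] += cost
                totals[ctx][1] += 1
        return {ctx: s / c for ctx, (s, c) in totals.items()}

    def predict_cost(model, recent_opcodes):
        """Back off from the longest known context to shorter ones."""
        for n in range(len(recent_opcodes), 0, -1):
            ctx = tuple(recent_opcodes[-n:])
            if ctx in model:
                return model[ctx]
        return None  # unseen opcode

    # Hypothetical trace: (opcode, cycles)
    trace = [("load", 4), ("add", 1), ("store", 3), ("load", 6), ("add", 1)]
    model = build_ngram_cost_model(trace)
    print(predict_cost(model, ["store", "load", "add"]))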
13

Modelling the water quality in dams within the Umgeni Water operational area with emphasis on algal relations / Philip Mark Graham

Graham, Philip Mark January 2007 (has links)
Based on many years of water quality (including algal) and water treatment cost data available at Umgeni Water, a study was undertaken to better understand the water quality relationships in man-made lakes within the company's operational area, and to investigate how water quality affected the cost of treating water from these lakes. The broad aims of the study were: to identify the key environmental variables affecting algal populations in the lakes; where these were significant, to establish predictive models relating algae to water quality; and to develop models relating the water quality in the lakes to the cost of treating water drawn from them. Semi-quantitative models were developed relating algal abundances to important environmental variables. In most cases, the models concerned algal populations known to adversely affect water treatment. Algae affect treatment processes directly either by producing taste- and odour-forming compounds (requiring advanced water treatment, such as the use of activated carbon) or by clogging sand filters and so reducing filter run times (requiring more frequent backwashing of filters). Lake water quality parameters (covering both physico-chemistry and algae) were then investigated to determine which factors most significantly affected water treatment, and hence treatment costs, at selected water works (WW) within the Umgeni Water operational area. Models were developed relating the raw water quality entering each water works to the costs incurred in treating that water. These models allowed simulations illustrating how changes in water quality might affect water treatment costs. The impact of eutrophication and contamination of rivers and lakes on surface water resources was also quantified. / Thesis (Ph.D. (Botany))--North-West University, Potchefstroom Campus, 2004.
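A rough sketch of the kind of cost relationship the study describes is shown below: a fitted linear model of treatment cost per megalitre as a function of raw-water variables, used to simulate the effect of an algal bloom. The variables, coefficients, and currency units are hypothetical placeholders, not results from the thesis.

    # Hypothetical fitted model: treatment cost per megalitre as a linear
    # function of raw-water variables.  Coefficients are placeholders only.
    COEFFS = {
        "intercept": 120.0,            # R/ML, base cost
        "turbidity_ntu": 0.8,          # R/ML per NTU
        "algal_cells_per_ml": 0.004,   # R/ML per (cells/mL), e.g. taste-and-odour formers
        "colour_hazen": 0.5,           # R/ML per Hazen unit
    }

    def treatment_cost_per_ml(raw_water):
        """Simulated treatment cost (R/ML) for a given raw-water quality scenario."""
        cost = COEFFS["intercept"]
        for var, coef in COEFFS.items():
            if var != "intercept":
                cost += coef * raw_water.get(var, 0.0)
        return cost

    baseline = {"turbidity_ntu": 30, "algal_cells_per_ml": 5000, "colour_hazen": 20}
    bloom = dict(baseline, algal_cells_per_ml=40000)   # simulated algal bloom
    print(treatment_cost_per_ml(baseline), treatment_cost_per_ml(bloom))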
15

Uma proposta para medição de complexidade e estimação de custos de segurança em procedimentos de tecnologia da informação / An approach to measure the complexity and estimate the cost associated to Information Technology Security Procedures

Moura, Giovane Cesar Moreira January 2008 (has links)
IT security has become a major concern for organizations over recent years. However, it does not come without large investments, both in acquiring tools that satisfy particular security requirements and in the often complex procedures needed to deploy and maintain a protected infrastructure. The scientific community has recently proposed models and techniques to estimate the complexity of IT configuration procedures, aware that these procedures represent a significant operational cost, often dominating the total cost of ownership. However, despite the central role played by security in this context, it has not been the subject of such investigation to date. To address this issue, we apply a configuration complexity model proposed in the literature to estimate the impact of security on the complexity of IT procedures. Our proposal was materialized through a prototype complexity analyzer called the Security Complexity Analyzer (SCA). To prove the concept and technical feasibility of our proposal, we used SCA to evaluate real-life security scenarios. In addition, we conducted a study to investigate the relation between the metrics proposed in the complexity model and the time spent by administrators while executing security procedures, using a quantitative model built with regression analysis, in order to predict the costs associated with security.
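The sketch below illustrates the kind of regression study described, relating complexity metrics (such as those produced by a tool like SCA) to the time an administrator needs to execute a procedure. The metric names and the data are hypothetical; only the general least-squares approach is implied by the abstract.

    import numpy as np

    # Hypothetical observations: one row per security procedure,
    # columns = complexity metrics (e.g. number of actions, context switches,
    # memory items); y = measured execution time in minutes.
    X = np.array([
        [ 5, 2, 3],
        [12, 4, 7],
        [ 8, 3, 5],
        [20, 6, 9],
        [15, 5, 8],
    ], dtype=float)
    y = np.array([14.0, 35.0, 22.0, 58.0, 44.0])

    # Ordinary least squares with an intercept term.
    A = np.hstack([np.ones((X.shape[0], 1)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def predict_minutes(metrics):
        """Predicted execution time, a proxy for the labour cost of the procedure."""
        return coef[0] + coef[1:] @ np.asarray(metrics, dtype=float)

    print(predict_minutes([10, 3, 6]))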
18

Distributed frequent subgraph mining in the cloud / Fouille de sous-graphes fréquents dans les nuages

Aridhi, Sabeur 29 November 2013 (has links)
Graph mining approaches have recently become very popular, especially in domains such as bioinformatics, chemoinformatics and social networks. One of the most challenging tasks in this setting is frequent subgraph discovery, a task strongly motivated by the tremendously increasing size of existing graph databases. As a result, there is an urgent need for efficient and scalable approaches to frequent subgraph discovery, especially given the wide availability of cloud computing environments. This thesis deals with distributed frequent subgraph mining in the cloud. First, we provide the material required to understand the basic notions of our two research fields, namely graph mining and cloud computing. Then, we present the contributions of this thesis. In the first axis, we propose a novel approach for large-scale subgraph mining using the MapReduce framework. The proposed approach provides a data partitioning technique that takes data characteristics into account: it uses the densities of the graphs to partition the input data. Such a partitioning technique balances the computational load over the distributed collection of machines and replaces the default arbitrary partitioning technique of MapReduce. We show experimentally that our approach significantly decreases the execution time and scales the subgraph discovery process to large graph databases.
In the second axis, we address the multi-criteria optimization problem of tuning the thresholds related to distributed frequent subgraph mining in cloud computing environments, while optimizing the global monetary cost of storing and querying data in the cloud. We define cost models for managing and mining data with a large-scale subgraph mining framework over a cloud architecture, and we present an experimental validation of the proposed cost models for distributed subgraph mining in the cloud.
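A minimal sketch of the density-aware partitioning idea is shown below: graphs are assigned greedily to workers so that the accumulated density per partition stays balanced, instead of being split arbitrarily. The function names and the greedy heuristic are assumptions for illustration, not the thesis's exact algorithm.

    def density(graph):
        """Density of an undirected graph given as (num_vertices, edge_list)."""
        n, edges = graph
        return 0.0 if n < 2 else 2.0 * len(edges) / (n * (n - 1))

    def density_balanced_partitions(graphs, num_workers):
        """Greedy partitioning: place each graph (densest first) on the worker
        whose partition currently has the smallest accumulated density."""
        parts = [{"graphs": [], "load": 0.0} for _ in range(num_workers)]
        for g in sorted(graphs, key=density, reverse=True):
            target = min(parts, key=lambda p: p["load"])
            target["graphs"].append(g)
            target["load"] += density(g)
        return [p["graphs"] for p in parts]

    # Hypothetical graph database: (num_vertices, edge list)
    db = [
        (4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]),
        (5, [(0, 1), (1, 2)]),
        (3, [(0, 1), (1, 2), (0, 2)]),
        (6, [(0, 1), (2, 3), (4, 5)]),
    ]
    for i, part in enumerate(density_balanced_partitions(db, 2)):
        print(f"worker {i}: {len(part)} graphs")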
19

Alternativas de transporte rodo-marítimo na distribuição de cargas frigoríficas no Brasil / Road-waterway transportation alternatives in the distribution of reefer cargo in Brazil

Rorato, Rafael José 22 August 2003 (has links)
This research investigates transportation alternatives for reefer cargo between factories and distribution centres by evaluating the possible economic advantages of road-waterway intermodal transport of ISO containers and of using a long combination vehicle (LCV) of the 3S3B3 type (a three-axle tractor plus two three-axle semi-trailers linked by a B-train connection, with a GVW of 74 t), relative to the current door-to-door road scenario using a 2S3 combination (a two-axle tractor with a three-axle semi-trailer and a GVW of 41.5 t). Using a Geographic Information System (TransCAD), the fleet size and the operating costs of the transportation network are calculated for different alternative scenarios. Tolls charged by the main Brazilian road concessionaires and current port rates are used in the model, and serve as the basis for the thematic maps, minimum-cost routes and costs per transported tonne. From the analysis of the results, it is concluded that the intermodal road-waterway alternative, associated with the 3S3B3 carrying containers to the ports and from the ports to the distribution centres, offers potential economic gains over the current technology of door-to-door road transport with the 2S3 and a reefer trailer holding 26 PBR-standard pallets. It is also concluded that the 3S3B3 configuration with reefer bodies and a capacity of 40 pallets offers the transportation industry an alternative that outperforms road-waterway intermodal integration on most of the studied routes.
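The sketch below illustrates the kind of cost-per-tonne comparison the model performs between the current 2S3 configuration and the 3S3B3 B-train. All operating costs, payloads, tolls, and distances are illustrative placeholders, not figures from the thesis.

    # Hypothetical per-route comparison of cost per transported tonne.
    # Rates, payloads (assuming 0.8 t per pallet) and tolls are placeholders.
    VEHICLES = {
        "2S3 reefer semi-trailer": {"cost_per_km": 3.2, "payload_t": 26 * 0.8},  # 26 pallets
        "3S3B3 reefer B-train":    {"cost_per_km": 4.1, "payload_t": 40 * 0.8},  # 40 pallets
    }

    def cost_per_tonne(vehicle, distance_km, tolls=0.0, port_charges=0.0):
        """Trip cost divided by payload, the basic figure compared across scenarios."""
        v = VEHICLES[vehicle]
        trip_cost = v["cost_per_km"] * distance_km + tolls + port_charges
        return trip_cost / v["payload_t"]

    route_km = 850  # hypothetical factory-to-distribution-centre distance
    for name in VEHICLES:
        print(f"{name}: {cost_per_tonne(name, route_km, tolls=120.0):.2f} per tonne")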
20

Structured arrows : a type-based framework for structured parallelism

Castro, David January 2018 (has links)
This thesis deals with the important problem of parallelising sequential code. Despite the importance of parallelism in modern computing, writing parallel software still relies on many low-level and often error-prone approaches. These low-level approaches can lead to serious execution problems such as deadlocks and race conditions. Due to the non-deterministic behaviour of most parallel programs, testing parallel software can be both tedious and time-consuming. A way of providing guarantees of correctness for parallel programs would therefore be of significant benefit. Moreover, even if we ignore the problem of correctness, achieving good speedups is not straightforward, since this generally involves rewriting a program to consider a (possibly large) number of alternative parallelisations. This thesis argues that new languages and frameworks are needed. These languages and frameworks must not only support high-level parallel programming constructs, but must also provide predictable cost models for these parallel constructs. Moreover, they need to be built around solid, well-understood theories that ensure that: (a) changes to the source code will not change the functional behaviour of a program, and (b) the speedup obtained by making the necessary changes is predictable. Algorithmic skeletons are parametric implementations of common patterns of parallelism that provide good abstractions for creating new high-level languages, and also support frameworks for parallel computing that satisfy these correctness and predictability requirements. This thesis presents a new type-based framework, based on the connection between structured parallelism and structured patterns of recursion, that provides parallel structures as type abstractions that can be used to statically parallelise a program. Specifically, this thesis exploits hylomorphisms as a single, unifying construct to represent the functional behaviour of parallel programs, and to perform correct code rewritings between alternative parallel implementations, represented as algorithmic skeletons. This thesis also defines a mechanism for deriving cost models for parallel constructs from a queue-based operational semantics. In this way, we can provide strong static guarantees about the correctness of a parallel program, while simultaneously achieving predictable speedups.
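The sketch below is only an illustration of the unifying construct the thesis relies on: a hylomorphism, that is, an unfold (coalgebra) fused with a fold (algebra) so that the intermediate structure is never materialised. It is a toy written in Python for uniformity with the other examples on this page, not the thesis's typed formulation.

    def hylo(algebra, coalgebra, seed):
        """Hylomorphism over a list-like functor: unfold `seed` with `coalgebra`
        and fold the result with `algebra`, without building the intermediate list.
        `coalgebra(x)` returns None (base case) or a pair (head, next_seed)."""
        step = coalgebra(seed)
        if step is None:
            return algebra(None)
        head, next_seed = step
        return algebra((head, hylo(algebra, coalgebra, next_seed)))

    # Example: factorial as a hylomorphism.
    # Unfold n into the virtual list [n, n-1, ..., 1], fold it with multiplication.
    count_down = lambda n: None if n == 0 else (n, n - 1)
    product = lambda step: 1 if step is None else step[0] * step[1]

    print(hylo(product, count_down, 5))   # 120

Each algorithmic skeleton can then be viewed as an alternative, semantics-preserving evaluation strategy for the same hylomorphism, which is what makes rewriting between candidate parallelisations sound.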
