11

Optimization and Execution of Complex Scientific Queries

Fomkin, Ruslan January 2009 (has links)
Large volumes of data produced and shared within scientific communities are analyzed by many researchers to investigate different scientific theories. Currently, the analyses are implemented in traditional programming languages such as C++. This is inefficient for research productivity, since such programs are difficult to write, understand, and modify. Furthermore, the programs should scale over large data volumes and analysis complexity, which further complicates code development. This thesis investigates the use of database technologies to implement scientific applications in which data are complex objects describing measurements of independent events, and the analyses are selections of events made by applying conjunctions of complex numerical filters to each object separately. An example of such an application is the analysis of collision events produced by the ATLAS experiment for the presence of Higgs bosons. For efficient implementation of this ATLAS application, a new data stream management system, SQISLE, is developed. In SQISLE, queries are specified over complex objects which are efficiently streamed from sources through the query engine. This streaming approach is compared with the conventional approach of loading events into a database before querying. Since the queries implementing scientific analyses are large and complex, novel techniques are developed for efficient query processing. To obtain efficient plans for such queries, SQISLE implements runtime query optimization strategies, which during query execution collect runtime statistics for a query, reoptimize the query using the collected statistics, and dynamically switch optimization strategies. The cost-based optimization utilizes a novel cost model for aggregate functions over nested subqueries. To alleviate estimation errors, large queries are decomposed into fragments, conjunctions of subqueries over which runtime statistics are measured. Performance is further improved by query transformation, view materialization, and partial evaluation. ATLAS queries in SQISLE using these query processing techniques perform close to or better than hard-coded C++ implementations of the same analyses. Scientific data are often stored in Grids, which manage both storage and computational resources. This thesis also includes POQSEC, a framework that utilizes Grid resources to scale scientific queries over large data volumes by parallelizing the queries and shipping the data management system itself, e.g. SQISLE, to Grid computational nodes for parallel query execution.
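
The runtime optimization loop described above lends itself to a compact illustration. The following is a minimal Python sketch of inter-batch reoptimization for a conjunction of filters: statistics are collected while events stream through, and the filter order is periodically re-derived from observed costs and selectivities. All names, the rank metric, and the batch-based switching policy are illustrative assumptions, not SQISLE's actual interface.

class Filter:
    """One numerical filter in a conjunctive event-selection query."""
    def __init__(self, name, predicate, cost):
        self.name = name
        self.predicate = predicate   # event -> bool
        self.cost = cost             # estimated evaluation cost per event
        self.seen = 0
        self.passed = 0

    @property
    def selectivity(self):
        # Runtime selectivity estimate; neutral default before statistics exist.
        return self.passed / self.seen if self.seen else 0.5

def reoptimize(filters):
    # Cheap, highly selective filters first: rank by cost / (1 - selectivity).
    return sorted(filters, key=lambda f: f.cost / max(1e-9, 1.0 - f.selectivity))

def run(stream, filters, reopt_every=1000):
    accepted = []
    for i, event in enumerate(stream):
        if i and i % reopt_every == 0:
            filters = reoptimize(filters)      # switch to a re-derived plan
        for f in filters:
            f.seen += 1
            if f.predicate(event):
                f.passed += 1
            else:
                break                          # conjunction fails: drop event
        else:
            accepted.append(event)             # event passed every filter
    return accepted

Ordering by cost over rejection rate is one classic ranking for conjunctive predicates; the thesis's cost model for aggregates over nested subqueries is richer than this sketch suggests.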
12

Srovnání vybraných způsobů ocenění pro nemovitost typu garáž na území města Brna / Comparison of Selected Ways of the Garage Type Real Property Evaluation in Brno

Odehnal, Tomáš January 2010 (has links)
The aim of this master's thesis is to compare methods for appraising garages within the territory of the city of Brno. The comparison is carried out on several properties using multiple appraisal methods, and the resulting data are then evaluated. First, the garage market in Brno is analysed by comparing supply with demand, including the characteristics that could influence property prices; this analysis is based on real-world data on garage sales from an estate agency. For the appraisal itself, ten properties of the garage type were selected and appraised using the cost-based and comparative methods. Their locations across the city of Brno and its quarters were chosen to be as diverse as possible. The acquired data are then compared and statistically evaluated. The resulting information can be used in practice when appraising such properties on the Brno garage market.
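
As an illustration of the comparative method mentioned in the abstract, the following minimal sketch adjusts observed sale prices of comparable garages by difference coefficients and averages the results. All prices and coefficients are invented for illustration; Czech valuation practice prescribes its own coefficient sets.

comparables = [
    # (observed price in CZK, location coeff., condition coeff., size coeff.)
    # Coefficients > 1 adjust an inferior comparable up towards the subject.
    (250_000, 0.95, 1.00, 1.05),
    (210_000, 1.05, 1.10, 1.00),
    (235_000, 1.00, 0.95, 0.98),
]

adjusted = [price * k_loc * k_cond * k_size
            for price, k_loc, k_cond, k_size in comparables]
estimate = sum(adjusted) / len(adjusted)
print(f"Indicated value: {estimate:,.0f} CZK")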
13

Stanovení rozdílu cen garáží u RD v Olomouci / Determination of the Difference of Garage Prices at a House in Olomouc

Páleník, Petr January 2014 (has links)
This master's thesis focuses on determining the differences between valuation methods in the city of Olomouc. Fifteen garages in the city were selected for price estimation. The first part of the thesis covers basic terms and the theoretical definitions of the valuation methods. In accordance with the thesis assignment, the following methods were chosen: the cost method, comparison according to the valuation law, and comparison based on market databases. The following parts describe the selected garages and carry out the valuation. Finally, the differences between the valuation methods, and their causes as indicated by the obtained data, are discussed.
14

Cost-Based Optimization of Integration Flows

Böhm, Matthias 15 March 2011 (has links)
Integration flows are increasingly used to specify and execute data-intensive integration tasks between heterogeneous systems and applications. There are many different application areas, such as real-time ETL and data synchronization between operational systems. Because of increasing data volumes, highly distributed IT infrastructures, and strong requirements for data consistency and up-to-date query results, many instances of integration flows are executed over time. Due to this high load, and because synchronous source systems block while waiting, the performance of the central integration platform is crucial for an IT infrastructure. To meet these high performance requirements, we introduce the concept of cost-based optimization of imperative integration flows, which relies on incremental statistics maintenance and inter-instance plan re-optimization. As a foundation, we introduce the concept of periodical re-optimization, including novel cost-based optimization techniques that are tailor-made for integration flows. Furthermore, we refine periodical re-optimization into on-demand re-optimization in order to overcome the problems of many unnecessary re-optimization steps and of adaptation delays, during which we miss optimization opportunities. This approach ensures low optimization overhead and fast workload adaptation.
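
A minimal sketch of the on-demand re-optimization idea, assuming hypothetical names and a simple drift test: per-operator statistics are maintained incrementally, and re-optimization is triggered only when observed costs drift away from the assumptions behind the current plan, rather than on a fixed period.

class OperatorStats:
    def __init__(self):
        self.n = 0
        self.mean_cost = 0.0

    def update(self, cost):
        # Incremental (online) mean maintenance per executed flow instance.
        self.n += 1
        self.mean_cost += (cost - self.mean_cost) / self.n

class FlowOptimizer:
    def __init__(self, ops, drift_threshold=0.25):
        self.stats = {op: OperatorStats() for op in ops}
        self.plan_basis = {}          # stats snapshot the current plan assumes
        self.drift_threshold = drift_threshold
        self.reoptimize()

    def record(self, op, cost):
        self.stats[op].update(cost)
        if self._drifted(op):
            self.reoptimize()         # on-demand, not periodical

    def _drifted(self, op):
        assumed = self.plan_basis.get(op, 0.0)
        seen = self.stats[op].mean_cost
        if assumed == 0.0:
            return seen > 0.0         # no baseline yet: adopt one
        return abs(seen - assumed) / assumed > self.drift_threshold

    def reoptimize(self):
        # Placeholder for the actual cost-based plan enumeration.
        self.plan_basis = {op: s.mean_cost for op, s in self.stats.items()}

opt = FlowOptimizer(["extract", "transform", "load"])
for cost in (1.0, 1.1, 2.4, 2.5):
    opt.record("transform", cost)     # drift past 25% triggers re-optimization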
15

Cost-Based Vectorization of Instance-Based Integration Processes

Boehm, Matthias, Habich, Dirk, Preissler, Steffen, Lehner, Wolfgang 19 January 2023 (has links)
Integration processes (an abstraction of workflow-based integration tasks) are often inefficient because of low resource utilization and significant waiting times for external systems. To overcome these problems, we proposed the concept of process vectorization, in which instance-based integration processes are transparently executed with the pipes-and-filters execution model. Here, the term vectorization is used in the sense of processing a sequence (vector) of messages by one standing process. Although it has been shown that process vectorization achieves a significant throughput improvement, this concept has two major drawbacks. First, the theoretical performance of a vectorized integration process mainly depends on the performance of the most cost-intensive operator. Second, the practical performance strongly depends on the number of available threads. In this paper, we present an advanced optimization approach that addresses these problems. We generalize the vectorization problem and explain how to vectorize process plans in a cost-based manner. Due to the exponential complexity, we provide a heuristic computation approach and formally analyze its optimality. Our evaluation shows that message throughput can be increased significantly compared with both instance-based execution and rule-based process vectorization.
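
One way to read the cost-based vectorization problem is as partitioning a sequential process plan into a bounded number of standing-process buckets so that the most expensive bucket, which bounds pipeline throughput, is as cheap as possible. The sketch below solves that reading with a standard binary search over a greedy feasibility check; it is an illustrative stand-in, not the paper's algorithm.

def buckets_needed(costs, limit):
    # Greedily pack contiguous operators while the bucket stays under `limit`.
    n, current = 1, 0.0
    for c in costs:
        if current + c > limit:
            n, current = n + 1, c
        else:
            current += c
    return n

def vectorize(costs, k):
    # Binary-search the smallest achievable maximum bucket cost for k threads.
    lo, hi = max(costs), sum(costs)
    while hi - lo > 1e-6:
        mid = (lo + hi) / 2
        if buckets_needed(costs, mid) <= k:
            hi = mid
        else:
            lo = mid
    return hi

op_costs = [4.0, 2.0, 7.0, 1.0, 3.0, 5.0]   # per-message operator costs
print(vectorize(op_costs, k=3))              # best max bucket cost: 8.0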
16

A Framework for Integrating Value and Uncertainty in the Sustainable Options Analysis in Real Estate Investment

Bozorgi, Alireza 10 April 2012 (has links)
Real estate professionals, such as investors, owner-occupants, and lenders who are involved in the investment decision-making process, are increasingly interested in sustainability and energy efficiency investment. However, the tools and techniques, both technical and financial, typically used for assessing sustainability are unable on their own to provide the comprehensive and reliable financial information required for making high-quality investment decisions. Sustainability investment often includes non-cost benefits, value implications, and substantial risk and uncertainty that current methods do not simultaneously incorporate in their assessment process. Through a combined quantitative and qualitative approach, this research creates a new systematic assessment process that considers both cost and non-cost savings, and therefore the true financial performance of a set of sustainable options, in the context of value and risk, while explicitly deriving and including the various uncertainties inherent in the process. The framework integrates the assessment tools of technical decision-makers with those of investment decision-makers into a single platform to improve the quality of financial performance projections, and therefore of investment decisions, concerning sustainable options in real estate. A case study is then conducted to test and demonstrate the numeric application of the proposed framework in the context of a non-green office building. The case study shows how to connect the technical outcomes to financial inputs, present the information, and estimate the true financial performance of a green retrofit option, where incremental value and uncertainty have been modeled and included. Three levels of financial analysis are performed to estimate the distribution of financial outcomes: 1) Cost-based level 1: only energy-related cost savings were considered; 2) Cost-based level 2: non-energy cost savings, including health and productivity, were also considered; and 3) Value-based level: the value implications of the green retrofit option were considered in addition to the items in level 2. As a result of applying the proposed framework when evaluating sustainability investment options, many investment opportunities that were otherwise ignored may be realized, and therefore the breadth and depth of sustainability investment in real estate will increase. / Ph. D.
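
The three analysis levels can be illustrated as a Monte Carlo simulation in which each level adds an uncertain benefit stream and the output is a distribution of net present value rather than a point estimate. All cash flows, distributions, and the discount rate below are invented; only the three-level structure comes from the abstract.

import random

def npv(cashflows, rate=0.07):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def simulate(level, cost=100_000, years=10, trials=10_000):
    results = []
    for _ in range(trials):
        energy = random.gauss(12_000, 2_000)       # level 1: energy savings only
        nonenergy = random.gauss(8_000, 4_000)     # level 2: + health/productivity
        annual = energy + (nonenergy if level >= 2 else 0)
        # Level 3: + incremental asset value realized at the end of the horizon.
        terminal = random.gauss(40_000, 15_000) if level == 3 else 0
        flows = [-cost] + [annual] * (years - 1) + [annual + terminal]
        results.append(npv(flows))
    return sum(results) / trials

for lvl in (1, 2, 3):
    print(f"level {lvl}: mean NPV {simulate(lvl):,.0f}")

With these invented numbers the retrofit looks unattractive at level 1 and attractive at levels 2 and 3, which mirrors the abstract's point that ignoring non-cost and value benefits hides viable investments.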
17

The Transfer Pricing Problem in a Service Firm : A Case Study on a Swedish Multinational Enterprise

Husain, Shakir, Yilmaz, Emre January 2015 (has links)
The purpose of this study is to answer the research question of how a service company (ServiceCo) could achieve a transfer price for its services. This is of particular interest because service firms have grown rapidly and surpassed manufacturing firms, and because the dominant logic has shifted towards services. The problem with this field of study is that transfer pricing in the service industry is a rather unexplored phenomenon: the guidelines and theories are mostly directed towards manufacturing firms. This study uses a single case study approach in which ServiceCo's organizational characteristics were analyzed in order to obtain the information required to understand how ServiceCo could achieve a transfer price for its services. Furthermore, this study uses the Eccles (1983) MAP and the OECD Guidelines, and incorporates Porter's (1985) value chain. The study finds that ServiceCo, in its current state, uses a sub-optimal transfer pricing method for its services; a change in the transfer pricing method was therefore suggested to ServiceCo. Given ServiceCo's organizational characteristics, the results led to the conclusion that ServiceCo could benefit from a residual analysis in the profit split method, in which an actual full cost plus mark-up compensation could be used for its routine functions, and the residual profit could be split between the entities based on the intangible assets employed, functions performed, and risks carried.
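
A minimal worked sketch of the recommended residual analysis in the profit split method: routine functions first earn a full cost plus mark-up, and the residual profit is then split in proportion to contribution weights reflecting intangibles employed, functions performed, and risks carried. All figures, the mark-up, and the weights are invented for illustration.

group_profit = 1_000_000.0

# Step 1: routine compensation, the profit element of full cost plus an
# assumed 5% mark-up on each entity's full costs.
routine_costs = {"EntityA": 300_000.0, "EntityB": 200_000.0}
markup = 0.05
routine_comp = {e: cost * markup for e, cost in routine_costs.items()}

# Step 2: split the residual by relative contribution weights.
residual = group_profit - sum(routine_comp.values())
weights = {"EntityA": 0.7, "EntityB": 0.3}   # intangibles/functions/risks
for entity in routine_costs:
    share = routine_comp[entity] + residual * weights[entity]
    print(f"{entity}: {share:,.0f}")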
18

Inferring Genetic Regulatory Networks Using Cost-based Abduction and Its Relation to Bayesian Inference

Andrews, Emad Abdel-Thalooth 16 July 2014 (has links)
Inferring Genetic Regulatory Networks (GRN) from multiple data sources is a fundamental problem in computational biology. Computational models for GRN range from simple Boolean networks to stochastic differential equations. To successfully model GRN, a computational method has to be scalable and capable of integrating different biological data sources effectively and homogeneously. In this thesis, we introduce a novel method to model GRN using Cost-Based Abduction (CBA) and study the relation between CBA and Bayesian inference. CBA is an important AI formalism for reasoning under uncertainty that can integrate different biological data sources effectively. We use three different yeast genome data sources (protein-DNA, protein-protein, and knock-out data) to build a skeleton (unannotated) graph, which acts as a theory on which to build a CBA system. The Least Cost Proof (LCP) for the CBA system fully annotates the skeleton graph to represent the learned GRN. Our results show that CBA is a promising tool in computational biology in general, and in GRN modeling in particular, because CBA knowledge representation can intrinsically implement the AND/OR logic in GRN while enforcing cis-regulatory logic constraints effectively, allowing the method to operate on a genome-wide scale. Besides allowing us to successfully learn yeast pathways such as the pheromone pathway, our method is scalable enough to analyze the full yeast genome in a single CBA instance, without sub-networking. The scalability of our method comes from the fact that our CBA model size grows in a quadratic, rather than exponential, manner with respect to data size and path length. We also introduce a new algorithm to convert CBA into an equivalent binary linear program that computes the exact LCP for the CBA system, thus reaching the optimal solution. Our work establishes a framework for solving Bayesian networks using integer linear programming and high-order recurrent neural networks, with CBA as an intermediate representation.
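
A minimal sketch of cost-based abduction and its Least Cost Proof: hypotheses carry assumption costs, rules derive facts from conjunctions of antecedents (with alternative rules supplying OR), and the LCP is the cheapest hypothesis set from which the goal becomes derivable. The toy network and brute-force search are illustrative; the thesis solves the same problem exactly with a binary linear program.

from itertools import chain, combinations

hypotheses = {"tf_bound": 3.0, "cofactor": 2.0, "knockout": 5.0}  # assumption costs
rules = [({"tf_bound", "cofactor"}, "gene_on"),     # AND logic: all antecedents needed
         ({"knockout"}, "gene_off"),
         ({"gene_on"}, "target_expressed")]         # OR arises from alternative rules

def derivable(assumed, goal):
    # Forward-chain the rules to a fixpoint from the assumed hypotheses.
    facts, changed = set(assumed), True
    while changed:
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return goal in facts

def least_cost_proof(goal):
    # Enumerate all hypothesis subsets; keep the cheapest one that proves the goal.
    best = None
    subsets = chain.from_iterable(
        combinations(hypotheses, r) for r in range(len(hypotheses) + 1))
    for s in subsets:
        if derivable(s, goal):
            cost = sum(hypotheses[h] for h in s)
            if best is None or cost < best[1]:
                best = (set(s), cost)
    return best

print(least_cost_proof("target_expressed"))   # ({'tf_bound', 'cofactor'}, 5.0)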
19

Denial of service : prevention, modelling and detection

Smith, Jason January 2007 (has links)
This research investigates the denial of service problem, in the context of services provided over a network, and contributes to improved techniques for modelling, detecting, and preventing denial of service attacks against these services. While the majority of currently employed denial of service attacks aim to pre-emptively consume the network bandwidth of victims, a significant amount of research effort is already being directed at this problem. This research is instead concerned with addressing the inevitable migration of denial of service attacks up the protocol stack to the application layer. Of particular interest is the denial of service resistance of key establishment protocols (security protocols that enable an initiator and responder to mutually authenticate and establish cryptographic keys for a secure communications channel), which, owing to the computationally intensive activities they perform, are particularly vulnerable to attack. Given the preponderance of wireless networking technologies, this research has also investigated denial of service and its detection in networks based on the IEEE 802.11 standards. Specific outcomes of this research include:
- investigation of the modelling and application of techniques to improve the denial of service resistance of key establishment protocols;
- a proposal for enhancements to an existing modelling framework to accommodate coordinated attackers;
- design of a new denial of service resistant key establishment protocol for securing signalling messages in next generation, mobile IPv6 networks;
- a comprehensive survey of denial of service attacks in IEEE 802.11 wireless networks;
- discovery of a significant denial of service vulnerability in the clear channel assessment procedure implemented by the medium access control layer of IEEE 802.11 compliant devices; and
- design of a novel, specification-based intrusion detection system for detecting denial of service attacks in IEEE 802.11 wireless networks.
20

Protocol engineering for protection against denial-of-service attacks

Tritilanunt, Suratose January 2009 (has links)
Denial-of-service attacks (DoS) and distributed denial-of-service attacks (DDoS) attempt to temporarily disrupt users or computer resources to cause service unavailability to legitimate users in the internetworking system. The most common type of DoS attack occurs when adversaries flood a large amount of bogus data to interfere with or disrupt the service on the server. The attack can be either a single-source attack, which originates at only one host, or a multi-source attack, in which multiple hosts coordinate to flood a large number of packets to the server. Cryptographic mechanisms in authentication schemes are an example approach to help the server to validate malicious traffic. Since authentication in key establishment protocols requires the verifier to spend some resources before successfully detecting the bogus messages, adversaries might be able to exploit this flaw to mount an attack to overwhelm the server resources. The attacker is able to perform this kind of attack because many key establishment protocols incorporate strong authentication at the beginning phase, before they can identify the attacks. This is an example of DoS threats in most key establishment protocols because they have been implemented to support confidentiality and data integrity, but do not carefully consider other security objectives, such as availability. The main objective of this research is to design denial-of-service resistant mechanisms in key establishment protocols. In particular, we focus on the design of cryptographic protocols related to key establishment protocols that implement client puzzles to protect the server against resource exhaustion attacks. Another objective is to extend formal analysis techniques to include DoS-resistance. Basically, the formal analysis approach is used not only to analyse and verify the security of a cryptographic scheme carefully but also to help in the design stage of new protocols with a high level of security guarantee. In this research, we focus on an analysis technique of Meadows' cost-based framework, and we implement a DoS-resistant model using Coloured Petri Nets. Meadows' cost-based framework is directly proposed to assess denial-of-service vulnerabilities in cryptographic protocols using mathematical proof, while Coloured Petri Nets are used to model and verify the communication protocols using interactive simulations. In addition, Coloured Petri Nets are able to help the protocol designer to clarify and reduce some inconsistency in the protocol specification. Therefore, the second objective of this research is to explore vulnerabilities in existing DoS-resistant protocols, as well as extend a formal analysis approach to our new framework for improving DoS-resistance and evaluating the performance of the new proposed mechanism. In summary, the specific outcomes of this research include the following results:
1. A taxonomy of denial-of-service resistant strategies and techniques used in key establishment protocols;
2. A critical analysis of existing DoS-resistant key exchange and key establishment protocols;
3. An implementation of Meadows' cost-based framework using Coloured Petri Nets for modelling and evaluating DoS-resistant protocols; and
4. A development of new efficient and practical DoS-resistant mechanisms to improve the resistance to denial-of-service attacks in key establishment protocols.
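
A minimal sketch of a hash-based client puzzle of the kind used for DoS-resistant key establishment: the server issues a fresh nonce and a difficulty, the client must brute-force a value whose hash has enough leading zero bits, and the server verifies the solution with a single hash. This is one standard puzzle construction, shown as an illustration rather than the thesis's exact protocol.

import hashlib
import os
from itertools import count

def new_puzzle(difficulty_bits=20):
    return os.urandom(16), difficulty_bits       # fresh server nonce + difficulty

def leading_zero_bits(digest):
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        return bits + (8 - byte.bit_length())
    return bits

def solve(nonce, difficulty_bits):
    # Client: brute-force a counter until the hash has enough leading zeros.
    for x in count():
        digest = hashlib.sha256(nonce + str(x).encode()).digest()
        if leading_zero_bits(digest) >= difficulty_bits:
            return x

def verify(nonce, difficulty_bits, solution):
    # Server: a single hash; verification stays cheap regardless of difficulty.
    digest = hashlib.sha256(nonce + str(solution).encode()).digest()
    return leading_zero_bits(digest) >= difficulty_bits

nonce, k = new_puzzle(difficulty_bits=12)        # low difficulty for the demo
sol = solve(nonce, k)
assert verify(nonce, k, sol)
print(f"solved with counter {sol}")

Because the server can raise the difficulty under load, the client's work grows exponentially in the difficulty while the server's verification cost stays constant, which is the asymmetry client puzzles exploit.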
