211 |
Mass valuation of urban land in Ukraine: from normative to a market-based approach. Kryvobokov, Marko. January 2006.
|
212 |
Cache conscious column organization in in-memory column stores. Schwalb, David; Krüger, Jens; Plattner, Hasso. January 2013.
Cost models are an essential part of database systems, as they are the basis of query performance optimization. Based on the predictions of a cost model, the fastest query execution plan can be chosen and executed, or algorithms can be tuned and optimized. In-memory databases shift the focus to main-memory accesses and CPU costs, whereas in disk-based systems input and output costs dominate the overall costs and other processing costs are often neglected. Modelling memory accesses, however, is fundamentally different, and common models no longer apply.
This work presents a detailed parameter evaluation for the plan operators scan with equality selection, scan with range selection, positional lookup, and insert in in-memory column stores. Based on this evaluation, a cost model based on cache misses is developed for estimating the runtime of the considered plan operators on different data structures. Considered are uncompressed columns, bit-compressed columns, and dictionary-encoded columns with sorted and unsorted dictionaries. Furthermore, tree indices on the columns and dictionaries are discussed. Finally, partitioned columns consisting of one partition with a sorted and one with an unsorted dictionary are investigated. New values are inserted into the unsorted dictionary partition and moved periodically by a merge process to the sorted partition. An efficient attribute merge algorithm is described, supporting the update performance required to run enterprise applications on read-optimized databases. Further, a memory-traffic-based cost model for the merge process is provided.
/ Cost models are an essential part of database systems and form the basis for optimizing execution plans. Cost estimates make it possible to select and execute the fastest operators and algorithms for processing a query. In-memory databases shift the focus from I/O operations to main-memory accesses and CPU costs, in contrast to databases whose primary copy of the data resides on secondary storage and whose cost models are usually restricted to the cost-dominating accesses to that secondary medium. Cost models for main-memory accesses, however, differ fundamentally from cost models for disk-based systems, so the old models no longer apply. This work presents a detailed discussion of parameters as well as a cache-access-based cost model for estimating the runtime of database operators in column-oriented, in-memory databases: selecting the values of a column with an equality or range predicate, looking up the values at individual positions, and inserting new values. Cost functions are derived for these operators on uncompressed columns, dictionary-compressed columns, and bit-compressed columns. Furthermore, tree structures are considered as index structures on columns and dictionaries. Finally, partitioned columns are introduced, which consist of a read-optimized and a write-optimized partition. New values are inserted into the write-optimized partition and periodically merged with the read-optimized partition by an attribute merge process. An efficient implementation of the attribute merge process is described, and a main-memory-bandwidth-based cost model for it is provided.
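As a rough illustration of how a cache-miss-based cost estimate for a column scan can be formed, here is a minimal sketch; the cache-line size, miss latency and the simple bytes-per-line formula are assumptions for illustration, not the cost model developed in this work.

```python
# Hypothetical sketch of a cache-miss-based cost estimate for a full-column
# scan with equality selection; parameter values and formulas are illustrative
# assumptions, not the thesis's model.

import math

CACHE_LINE_BYTES = 64   # typical x86 cache line (assumption)
MISS_LATENCY_NS = 100   # assumed cost of one last-level cache miss

def scan_cache_misses(num_rows: int, bits_per_value: int) -> int:
    """A sequential scan touches the column once; misses ~ bytes read / line size."""
    total_bytes = num_rows * bits_per_value / 8
    return math.ceil(total_bytes / CACHE_LINE_BYTES)

def estimated_scan_runtime_ns(num_rows: int, bits_per_value: int) -> float:
    return scan_cache_misses(num_rows, bits_per_value) * MISS_LATENCY_NS

# Example: 10 million rows, uncompressed 32-bit values vs. bit-packed
# dictionary value IDs needing only 12 bits per value.
print(estimated_scan_runtime_ns(10_000_000, 32))   # uncompressed column
print(estimated_scan_runtime_ns(10_000_000, 12))   # bit-compressed value IDs
```

The comparison shows why bit-packed dictionary value IDs pay off in such a model: fewer bytes scanned means fewer cache lines fetched and a proportionally lower estimated runtime.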
|
213 |
A Need Analysis Study for Faculty Development Programs in METU and Structural Equation Modeling of Faculty Needs. Moeini, Hosein. 01 September 2003.
The purpose of this doctoral thesis was first to investigate the needs for a faculty development program at Middle East Technical University (METU). In the second phase, models explaining the linear structural relationships among factors that might influence faculty's perceived competencies in the skills necessary for instructional practice and for personal, professional and organizational development were proposed and compared.
A questionnaire covering different aspects of faculty development was sent to all academicians at METU. The data collected from faculty members and research assistants were analyzed both descriptively and with principal component factor analysis. Based on the factor analysis results, linear structural relations models fitting the data were generated through LISREL-SIMPLIS runs.
The descriptive results indicated a perceived need to improve faculty self-proficiency in different instructional issues. Both the descriptive results and the LISREL models indicated that faculty members and research assistants show different characteristics with respect to their needs and the factors affecting their self-proficiencies, which suggests preparing separate faculty development programs based on their needs and priorities.
For both faculty members and research assistants, the results showed that instructional self-proficiency cannot be treated as a single absolute parameter. Rather, it should be considered as several interrelated parameters connected to different aspects of faculty proficiency.
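For illustration, the following is a minimal sketch of a principal component factor analysis step of the kind mentioned above, applied to synthetic Likert-scale responses; the data, item count and Kaiser retention criterion are assumptions, not the METU survey or its analysis settings.

```python
# Illustrative principal component factor extraction on synthetic questionnaire
# data; respondents, items and retention rule are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(200, 12)).astype(float)  # 200 respondents, 12 Likert items

# Correlation matrix of the items.
corr = np.corrcoef(responses, rowvar=False)

# Eigendecomposition; components with eigenvalue > 1 are retained (Kaiser criterion).
eigenvalues, eigenvectors = np.linalg.eigh(corr)
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]
n_factors = int(np.sum(eigenvalues > 1.0))

# Unrotated loadings of each item on the retained components.
loadings = eigenvectors[:, :n_factors] * np.sqrt(eigenvalues[:n_factors])
print(n_factors, loadings.shape)
```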
|
214 |
Simplification Techniques for Interactive Applications. González Ballester, Carlos. 09 July 2010.
Interactive applications with 3D graphics are used every day in many different fields, such as games, teaching, learning environments and virtual reality. The scenarios shown in interactive applications tend to present detailed worlds and characters that are as realistic as possible. Detailed 3D models require a lot of geometric complexity, but the available graphics hardware cannot always handle all this geometry while maintaining an interactive frame rate. Simplification methods address this problem by generating simplified versions of the original 3D models that contain less geometry. The simplification has to follow a reasonable criterion so that the appearance of the original model is preserved as far as possible. Geometry, however, is not the only important factor: 3D models also carry additional attributes that influence their final appearance to the viewer. Although a large body of work on simplification has been published, several points still lack an efficient solution. This thesis therefore focuses on simplification techniques for the 3D models typically used in interactive applications.
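As one concrete example of a simplification method, here is a minimal vertex-clustering sketch that reduces geometry by merging vertices that fall into the same grid cell; it is illustrative only and is not the specific technique proposed in this thesis.

```python
# Hedged vertex-clustering simplification sketch: snap vertices to a uniform
# grid, merge vertices per cell, and drop triangles that collapse.

import numpy as np

def cluster_simplify(vertices: np.ndarray, triangles: np.ndarray, cell_size: float):
    cells = np.floor(vertices / cell_size).astype(np.int64)
    cell_to_new = {}
    new_vertices = []
    remap = np.empty(len(vertices), dtype=np.int64)
    for i, key in enumerate(map(tuple, cells)):
        if key not in cell_to_new:
            cell_to_new[key] = len(new_vertices)
            members = np.all(cells == cells[i], axis=1)      # all vertices in this cell
            new_vertices.append(vertices[members].mean(axis=0))
        remap[i] = cell_to_new[key]
    new_tris = remap[triangles]
    # Remove triangles with two or more corners merged into the same vertex.
    keep = ((new_tris[:, 0] != new_tris[:, 1]) &
            (new_tris[:, 1] != new_tris[:, 2]) &
            (new_tris[:, 0] != new_tris[:, 2]))
    return np.array(new_vertices), new_tris[keep]

# Toy usage: a unit quad made of two triangles.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [0.0, 1.0, 0.0]])
tris = np.array([[0, 1, 2], [0, 2, 3]])
print(cluster_simplify(verts, tris, cell_size=0.6))  # fine grid: geometry preserved
print(cluster_simplify(verts, tris, cell_size=2.0))  # coarse grid: everything collapses
```

The grid resolution plays the role of the simplification criterion here: a finer grid preserves appearance, a coarser one trades appearance for fewer primitives.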
|
215 |
Completeness of Fact Extractors and a New Approach to Extraction with Emphasis on the Refers-to Relation. Lin, Yuan. 07 August 2008.
This thesis deals with fact extraction, which analyzes source code (and sometimes related artifacts) to produce extracted facts about the code. These facts may, for example, record where in the code variables are declared and where they are used, as well as related information. These extracted facts are typically used in software reverse engineering to reconstruct the design of the program.
This thesis has two main parts, each of which deals with a formal approach to fact extraction. Part 1 of the thesis deals with the question: How can we demonstrate that a fact extractor actually does its job? That is, does the extractor produce the facts that it is supposed to produce? This thesis builds on the concept of semantic completeness of a fact extractor, as defined by Tom Dean et al., and further defines source, syntax and compiler completeness. One contribution of this thesis is to show that in particularly important cases (when the extractor is deterministic and its front end is idempotent), there is an efficient algorithm to determine whether the extractor is compiler complete. This result is surprising, considering that in general it is undecidable whether two programs are semantically equivalent, and source code and its corresponding extracted facts are each essentially programs that must be proved equivalent, or at least sufficiently similar.
The larger part of the thesis, Part 2, presents Algebraic Refers-to Analysis (ARA), a new approach to fact extraction with emphasis on the Refers-to relation. ARA provides a framework for specifying fact extraction, based on a three-step pipeline: (1) basic (lexical and syntactic) extraction, (2) a normalization step and (3) a binding step.
For practical programming languages, these three steps are repeated, in stages and phases, until the Refers-to relation is computed. During the writing of this thesis, ARA pipelines for C, Java, C++, Fortran, Pascal and Ada have been designed. A prototype fact extractor for the C language has been created.
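To make the three-step pipeline concrete, the following is a minimal, hedged sketch of the flavor of such an extraction-normalization-binding pass; the toy facts, the scope-path encoding and the nearest-enclosing-scope rule are illustrative assumptions, not ARA's actual stages or formulas.

```python
# Hedged sketch of a Refers-to computation in three steps; facts are toy data.

# Step 1: basic extraction (normally produced by a lexer/parser).
declarations = {("x", "main/block1"), ("x", "main"), ("y", "main")}
references = [("x", "main/block1"), ("y", "main/block1")]

# Step 2: normalization -- represent each scope by its chain of enclosing scopes.
def enclosing_scopes(scope: str):
    parts = scope.split("/")
    return ["/".join(parts[:i]) for i in range(len(parts), 0, -1)]

# Step 3: binding -- a reference refers to the declaration in the nearest
# enclosing scope, expressed here as a small relational join plus selection.
refers_to = []
for name, ref_scope in references:
    for scope in enclosing_scopes(ref_scope):
        if (name, scope) in declarations:
            refers_to.append(((name, ref_scope), (name, scope)))
            break

print(refers_to)
# [(('x', 'main/block1'), ('x', 'main/block1')), (('y', 'main/block1'), ('y', 'main'))]
```

Expressing the binding step as a join over declaration and reference relations is also what makes such formulas executable on the relational tools mentioned below.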
Validating ARA means demonstrating that ARA pipelines satisfy programming language standards such as the ISO C++ standard. In other words, we show that the ARA phases (stages and formulas) are correctly transcribed from the rules in the language standard.
Compared with existing approaches such as Attribute Grammars, ARA has the following advantages. First, ARA formulas are concise, elegant and, more importantly, insightful; as a result, we make some interesting discoveries about the programming languages. Second, ARA is validated on the basis of set theory and relational algebra, which is more reliable than exhaustive testing. Finally, ARA formulas are supported by existing software tools such as database management systems and relational calculators.
Overall, the contributions of this thesis include 1) the concept of a hierarchy of completeness and the automatic testing of completeness, 2) the use of the relational data model in fact extraction, 3) the invention of Algebraic Refers-to Analysis (ARA), and 4) the discovery of some interesting facts about programming languages.
|
217 |
Development and Application of Probabilistic Decision Support Framework for Seismic Rehabilitation of Structural Systems. Park, Joonam. 22 November 2004.
Seismic rehabilitation of structural systems is an effective approach to reducing potential seismic losses, both social and economic. However, little or no effort has been made to develop a framework for making decisions on seismic rehabilitation of structural systems that systematically incorporates multiple conflicting criteria and the uncertainties inherent in the seismic hazard and in the systems themselves.
This study develops a decision support framework for seismic rehabilitation of structural systems that incorporates the uncertainties inherent in both the system and the seismic hazard, and demonstrates its application with detailed examples. The framework uses the HAZUS method for a quick and extensive estimation of the seismic losses associated with structural systems. It allows consideration of multiple decision attributes associated with seismic losses and of multiple alternative rehabilitation schemes represented by the objective performance level. Three multi-criteria decision models (MCDMs) known to be effective for decision problems under uncertainty are employed, and their applicability to decision analyses in seismic rehabilitation is investigated: Equivalent Cost Analysis (ECA), Multi-Attribute Utility Theory (MAUT), and Joint Probability Decision Making (JPDM). Guidelines for selecting an MCDM appropriate for a given decision problem are provided to establish a flexible decision support system. The resulting framework is applied to a test-bed system of six hospitals in the Memphis, Tennessee, area to demonstrate its capabilities.
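As an illustration of how a multi-attribute utility comparison of rehabilitation alternatives under hazard uncertainty might look, here is a minimal sketch; the attributes, weights, value ranges and loss distributions are invented for illustration and are not taken from the HAZUS-based framework described above.

```python
# Hedged MAUT sketch: additive multi-attribute utility, Monte Carlo expectation
# over random earthquake scenarios, for two hypothetical alternatives.

import random

random.seed(1)

WEIGHTS = {"repair_cost": 0.5, "casualties": 0.3, "downtime": 0.2}
RANGES = {"repair_cost": (50.0, 0.0), "casualties": (20.0, 0.0), "downtime": (24.0, 0.0)}  # (worst, best)

def single_attribute_utility(value: float, worst: float, best: float) -> float:
    """Linear utility scaled so the worst outcome maps to 0 and the best to 1."""
    u = (value - worst) / (best - worst)
    return max(0.0, min(1.0, u))

def expected_utility(sample_losses, n_draws: int = 10_000) -> float:
    """Monte Carlo estimate of the additive multi-attribute utility."""
    total = 0.0
    for _ in range(n_draws):
        losses = sample_losses()                       # one random earthquake scenario
        total += sum(w * single_attribute_utility(losses[a], *RANGES[a])
                     for a, w in WEIGHTS.items())
    return total / n_draws

# Two alternatives: leave as-is vs. rehabilitate to a higher performance level.
as_is = lambda: {"repair_cost": random.gauss(20, 8),
                 "casualties": random.gauss(6, 3),
                 "downtime":   random.gauss(9, 4)}
rehab = lambda: {"repair_cost": random.gauss(12, 4),
                 "casualties": random.gauss(2, 1),
                 "downtime":   random.gauss(4, 2)}

print(expected_utility(as_is), expected_utility(rehab))  # prefer the larger expected utility
```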
|
218 |
A Decision Analytic Model for Early Stage Breast Cancer Patients: Lumpectomy vs Mastectomy. Elele, Tugba. 01 September 2006.
The purpose of this study was to develop a decision model for early-stage breast cancer patients. The model makes it possible to compare the two main treatment options, mastectomy and lumpectomy, with respect to quality of life by making use of decision-theoretic techniques.
A Markov chain was constructed to project the clinical history of breast carcinoma following surgery. The health states used in the model were characterized by transition probabilities and by utilities for quality of life. A Multi-Attribute Utility Model was developed for outcome evaluation. The study was performed on a sample population of female university students, and utilities were elicited from these healthy volunteers. The results yielded by the Multi-Attribute Utility Model were validated using the von Neumann-Morgenstern standard gamble technique. Finally, Monte Carlo simulation in the TreeAge Pro 2006 Suite software was used to solve the model and calculate the expected utility value generated by each treatment option. The results showed that lumpectomy is more favorable for the people who participated in this study. Sensitivity analysis on the transition probabilities to the local recurrence and salvaged states revealed two threshold values. Additionally, sensitivity analysis on the utilities showed that the model was sensitive to the utility of the no-evidence-of-disease state, but not to the utilities of the local recurrence and salvaged states.
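The following is a minimal sketch of a Markov-chain Monte Carlo cohort simulation of the kind described above; the state names follow the abstract, but the transition probabilities, utilities, cycle structure and time horizon are placeholder assumptions, not the study's estimates.

```python
# Hedged sketch of a Markov Monte Carlo simulation for one treatment arm;
# all numeric values are invented for illustration.

import random

random.seed(0)

# Annual transition probabilities per state (each row sums to 1) -- assumed values.
TRANSITIONS = {
    "NED":             {"NED": 0.92, "LocalRecurrence": 0.05, "Salvaged": 0.00, "Dead": 0.03},
    "LocalRecurrence": {"NED": 0.00, "LocalRecurrence": 0.60, "Salvaged": 0.30, "Dead": 0.10},
    "Salvaged":        {"NED": 0.00, "LocalRecurrence": 0.00, "Salvaged": 0.94, "Dead": 0.06},
    "Dead":            {"NED": 0.00, "LocalRecurrence": 0.00, "Salvaged": 0.00, "Dead": 1.00},
}
UTILITIES = {"NED": 0.95, "LocalRecurrence": 0.70, "Salvaged": 0.80, "Dead": 0.0}  # NED = no evidence of disease

def simulate_patient(years: int = 20) -> float:
    """Return quality-adjusted life years accumulated by one simulated patient."""
    state, qalys = "NED", 0.0
    for _ in range(years):
        qalys += UTILITIES[state]
        nxt = TRANSITIONS[state]
        state = random.choices(list(nxt), weights=list(nxt.values()))[0]
    return qalys

expected_qalys = sum(simulate_patient() for _ in range(20_000)) / 20_000
print(expected_qalys)  # compare this expectation across the two surgical options
```

Running the same simulation with the transition probabilities and utilities of each surgical option and comparing the expected values is, in spirit, what the Monte Carlo step in the decision model does.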
|
219 |
A semantic data model for intellectual database access. Watanabe, Toyohide; Uehara, Yuusuke; Yoshida, Yuuji; Fukumura, Teruo. 03 1900.
No description available.
|
220 |
A framework for simulation-based multi-attribute optimum design with improved conjoint analysis. Ruderman, Alex Michael. 24 August 2009.
Decision making is necessary to provide a synthesis scheme for design activities and to identify the most preferred design alternative. Several methods exist that model designer preferences graphically to aid the decision-making process. Conjoint Analysis, for instance, has proven effective for various multi-attribute design problems by combining a ranking- or rating-based approach with a graphical representation of designer preference. However, rankings and ratings of design alternatives can be inconsistent across users, and it is often difficult to obtain customer responses in a timely fashion. The large number of alternative comparisons required for complex engineering problems can exhaust the decision maker. In addition, many design objectives have interdependencies that increase complexity and uncertainty throughout the decision-making process. The uncertainties inherent in eliciting subjective data, as well as in the system models, can reduce the reliability of decision analysis results. To address these issues, a new technique, the Improved Conjoint Analysis, is proposed to model designer preferences and trade-offs under uncertainty. Specifically, a simulation-based ranking scheme is implemented and incorporated into the traditional Conjoint Analysis process. The proposed ranking scheme can reduce user fatigue and provide a better schematic decision support process. In addition, incorporating uncertainty in the design process makes it possible to produce robust or reliable products. The efficacy and applicability of the proposed framework are demonstrated with the design of a cantilever beam, a power-generating shock absorber, and a mesostructured hydrogen storage tank.
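A minimal sketch of what a simulation-based ranking step of this general kind could look like follows; the attributes, part-worth utilities and uncertainty distributions are invented for illustration and are not those of the Improved Conjoint Analysis itself.

```python
# Hedged sketch: rank design alternatives by expected conjoint utility under
# attribute-level uncertainty, using Monte Carlo sampling. All values are toy data.

import random

random.seed(2)

# Part-worth utility of each attribute level (would normally come from the
# fitted conjoint preference model).
PART_WORTHS = {
    "mass":   {"low": 0.8, "medium": 0.5, "high": 0.1},
    "cost":   {"low": 0.7, "medium": 0.4, "high": 0.0},
    "stress": {"low": 0.9, "medium": 0.5, "high": 0.2},
}
LEVELS = ["low", "medium", "high"]

# Each alternative maps an attribute to the probability of landing in each level,
# reflecting simulation/model uncertainty rather than one deterministic level.
ALTERNATIVES = {
    "design_A": {"mass": [0.6, 0.3, 0.1], "cost": [0.2, 0.5, 0.3], "stress": [0.5, 0.4, 0.1]},
    "design_B": {"mass": [0.2, 0.5, 0.3], "cost": [0.6, 0.3, 0.1], "stress": [0.3, 0.5, 0.2]},
}

def sampled_utility(design: dict) -> float:
    total = 0.0
    for attribute, level_probs in design.items():
        level = random.choices(LEVELS, weights=level_probs)[0]
        total += PART_WORTHS[attribute][level]
    return total

def rank_alternatives(n_draws: int = 5_000):
    means = {name: sum(sampled_utility(d) for _ in range(n_draws)) / n_draws
             for name, d in ALTERNATIVES.items()}
    return sorted(means.items(), key=lambda kv: kv[1], reverse=True)

print(rank_alternatives())  # highest mean utility first
```

Because the ranking is computed from sampled utilities rather than from repeated pairwise judgments, the decision maker is not asked to compare every pair of alternatives by hand, which is the fatigue-reduction idea described above.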
|