181 |
Using data mining to explore the regularity of genetic algorithms in job shop schedule problems. Tsai, Shi-Chi. January 1997 (has links)
No description available.
|
182 |
An Information Processing Perspective on Between-Brand Price Premiums: Antecedents and Consequences of Motivation. Mandrik, Carter A. 21 May 2003 (has links)
This dissertation examines between-brand price premiums from an information processing perspective. A literature review is conducted in which price premiums are shown to depend on consumers' ability, motivation, and opportunity to process information relevant to making between-brand judgments of value. A conceptual model is developed that incorporates these three constraints on brand information processing but focuses on the antecedents of the motivation construct. An experiment is conducted that tests the effects on information processing of four antecedents to motivation: involvement, brand evaluation motive, economic concern, and need for cognition. Results show that involvement interacts with motive in its effect on the amount of information processing, but not on processing style. Need for cognition is positively related to both amount and style of processing, while the results for economic concern were mixed. Finally, implications of the results are discussed and future research directions are suggested. / Ph. D.
|
183 |
A multi-attribute approach to conceptual system design decisions based on Quality Function Deployment (QFD) and the Analytic Hierarchy Process (AHP). Powers, Tipmuny C. 07 November 2008 (has links)
This research integrates a multi-attribute decision-support tool, the Analytic Hierarchy Process (AHP), with a customer-focused design methodology, Quality Function Deployment (QFD). The result is a hybrid methodology more complete than either of the two alone, involving synthesis, analysis, and evaluation activities necessary for completing conceptual system design.
An indicator was developed for the overall performance of an organization's product and its competitors’ products using the information in a QFD matrix. In addition, a methodology was developed to determine if essential customer requirements and design-dependent parameters (DDPs) have been adequately identified in the QFD matrix. A mathematical relationship was developed which relates technical and competitive assessments in the QFD matrix and helps test for inconsistencies. Finally, an indicator was developed to assess a new product concept for viability in the marketplace and to be used for accomplishing trade-off analyses. Examples are presented throughout this document to further illustrate the concepts.
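As a concrete illustration of the AHP component of this hybrid methodology, the short sketch below (with hypothetical criteria and judgment values, not taken from this thesis) derives priority weights from a pairwise comparison matrix via its principal eigenvector and computes the consistency ratio; it shows only the standard AHP computation, not the indicators developed here.

import numpy as np

# Hypothetical pairwise comparison matrix for three design criteria,
# expressed on Saaty's 1-9 scale (reciprocal matrix); values are illustrative only.
A = np.array([
    [1.0,   3.0,   5.0],
    [1/3.0, 1.0,   2.0],
    [1/5.0, 1/2.0, 1.0],
])

# Priority weights = normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency index CI = (lambda_max - n) / (n - 1); random index RI = 0.58 for n = 3.
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.58
print("weights:", weights.round(3), "consistency ratio:", round(cr, 3))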
This research is unique in its application. It adds to the body of knowledge for decision-making in the conceptual design phase of the systems engineering process. / Master of Science
|
184 |
Mapping Genotype to Phenotype using Attribute Grammar. Adam, Laura. 20 September 2013 (has links)
Over the past 10 years, several synthetic biology research groups have proposed tools and domain-specific languages to help with the design of artificial DNA molecules. Community standards for exchanging data between these tools, such as the Synthetic Biology Open Language (SBOL), have been developed. It is increasingly important to be able to perform in silico simulation before the time- and cost-consuming wet-lab realization of the constructs, which, as technology advances, also become more complex in themselves. By extending the concept of describing genetic expression as a language, we propose to model relations between genotype and phenotype using formal language theory.
We use attribute grammars (AGs) to extract context-dependent information from genetic constructs and compile them into mathematical models, possibly giving clues about their phenotypes. AGs may be used as a backbone for biological Domain-Specific Languages (DSLs), and we developed a methodology to design these AG-based DSLs. We give examples of languages in the field of synthetic biology that model genetic regulatory networks with Ordinary Differential Equations (ODEs) based on various rate laws or with discrete Boolean network models.
We implemented a demonstration of these concepts in GenoCAD, a Computer Assisted Design (CAD) software tool for synthetic biology. GenoCAD guides users from design to simulation. Users can either design constructs with the attribute grammars provided or define their own project-specific languages. The mathematical model of a genetic construct is produced by DNA compilation based on the specified attribute grammar; letting users design new languages required generating such attribute-grammar-based DNA compilers on the fly.
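As a rough, hypothetical illustration of the DNA-compilation idea described above (not GenoCAD's actual compiler), the sketch below walks a linear construct and synthesizes an ODE right-hand side for each coding sequence, with promoter and RBS attributes flowing to the downstream CDS; all part names, attributes, and rate expressions are invented for this example.

# Toy "DNA compiler" in the spirit of an attribute grammar: attributes set by
# upstream parts (promoter strength, RBS efficiency) are consumed by the CDS,
# and a terminator closes the transcriptional context.
construct = [
    ("promoter",   {"strength": "k_tx"}),
    ("rbs",        {"efficiency": "k_tl"}),
    ("cds",        {"product": "GFP"}),
    ("terminator", {}),
]

def compile_to_odes(parts):
    odes = {}
    promoter, rbs = None, None
    for kind, attrs in parts:
        if kind == "promoter":
            promoter = attrs["strength"]        # inherited by the downstream CDS
        elif kind == "rbs":
            rbs = attrs["efficiency"]
        elif kind == "cds" and promoter and rbs:
            p = attrs["product"]
            # mass-action style production minus first-order degradation
            odes[f"d[{p}]/dt"] = f"{promoter} * {rbs} - d_{p} * [{p}]"
        elif kind == "terminator":
            promoter, rbs = None, None          # context ends at the terminator
    return odes

print(compile_to_odes(construct))
# {'d[GFP]/dt': 'k_tx * k_tl - d_GFP * [GFP]'}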
We also considered the impact of our research and its potential dual-use issues. Indeed, after the design exploration is performed in silico, the next logical step is to synthesize the designed construct's DNA molecule to build the construct in vivo. We implemented an algorithm to identify sequences of concern of any length that are specific to Select Agents and Toxins, helping to ensure safer use of our methods. / Ph. D.
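The screening step mentioned above can be pictured with a generic k-mer screen that flags every window of a query sequence appearing in a reference set of sequences of concern; the sketch below is a simplified, hypothetical stand-in with made-up sequences, not the thesis's select-agent screening algorithm.

def kmer_index(references, k):
    # Build the set of all length-k substrings found in the reference sequences.
    index = set()
    for ref in references:
        for i in range(len(ref) - k + 1):
            index.add(ref[i:i + k])
    return index

def flag_hits(query, index, k):
    # Return (position, subsequence) pairs for query windows present in the index.
    return [(i, query[i:i + k]) for i in range(len(query) - k + 1)
            if query[i:i + k] in index]

references = ["ATGCGTACGTTAGC"]          # illustrative "sequence of concern"
query = "GGGATGCGTACGTTAGCAAA"
print(flag_hits(query, kmer_index(references, 8), 8))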
|
185 |
Holistic Building Technology Selection for Sustainability: A Market Analysis and Multi-Attribute Decision Making Approach for Residential Water Heaters in U.S. Doshi, Pratik. 31 August 2015 (has links)
Water heating has the largest energy consumption of any residential end use in the United States, using more energy than all other home appliances combined. Water heaters have also been implicated as a source of waterborne disease outbreaks. With such high stakes, it is recommended that a Decision Support Tool (DST) be used prior to selecting a water heater for new construction or replacement. Although numerous tools are available, it is challenging to find one that takes into account all factors critical to the selection of water heaters, addresses gaps and barriers, provides adequate information to all stakeholders, and assists in rational decision making toward more sustainable choices.
The purpose of this research is threefold: (a) to inventory, organize, and characterize existing web-based water heater Decision Support Tools (eDSTs) to highlight gaps and/or shortcomings; (b) to develop a Decision Support Tool Skeleton (DSTS) containing a comprehensive list of sustainability capitals, criteria, and indicators based on a Multi-Attribute Decision Making (MADM) approach; (c) to create a stakeholder map comprising the supply chain, the stakeholder system, the decision-making process during water heater selection, and other market factors, using a metasynthesis of collected documents.
The findings of this research indicate that considerable gaps and shortcomings exist in the current pool of water heater DSTs. To address these barriers, information was captured from various documents through coding, a process of qualitative data analysis. The coding process generated attributes that were used to build a comprehensive set of capitals, criteria, subcriteria, and indicators following the MADM approach. This organizing structure, developed along the lines of sustainability assessment, is intended as a starting point toward sustainability in practice. Importantly, the information asymmetry between stakeholders indicates that existing tools do not serve all stakeholders equitably. This study will help characterize the stakeholder system and the decision-making process for selecting water heaters in the residential sector, so that newly created tools can be implemented effectively. / Master of Science
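To make the MADM scoring idea concrete, here is a minimal weighted-sum sketch for ranking water heater alternatives; the criteria, weights, and normalized scores are hypothetical placeholders and are not drawn from the DSTS developed in this thesis.

# Weighted-sum multi-attribute scoring over hypothetical criteria (weights sum to 1).
criteria_weights = {"energy_use": 0.4, "life_cycle_cost": 0.3,
                    "water_quality_risk": 0.2, "install_complexity": 0.1}

# Scores normalized to [0, 1], higher is better; all values are illustrative.
alternatives = {
    "tank_electric": {"energy_use": 0.4, "life_cycle_cost": 0.6,
                      "water_quality_risk": 0.5, "install_complexity": 0.9},
    "tankless_gas":  {"energy_use": 0.7, "life_cycle_cost": 0.5,
                      "water_quality_risk": 0.8, "install_complexity": 0.4},
    "heat_pump":     {"energy_use": 0.9, "life_cycle_cost": 0.4,
                      "water_quality_risk": 0.6, "install_complexity": 0.5},
}

def score(name):
    return sum(criteria_weights[c] * alternatives[name][c] for c in criteria_weights)

for name in sorted(alternatives, key=score, reverse=True):
    print(f"{name}: {score(name):.2f}")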
|
186 |
Primary/Soft Biometrics: Performance Evaluation and Novel Real-Time Classifiers. Alorf, Abdulaziz Abdullah. 19 February 2020 (has links)
The relevance of faces in our daily lives is indisputable. We learn to recognize faces as newborns, and faces play a major role in interpersonal communication. The spectrum of computer vision research about face analysis includes, but is not limited to, face detection and facial attribute classification, which are the focus of this dissertation. The face is a primary biometric because by itself it reveals the subject's identity, while facial attributes (such as hair color and eye state) are soft biometrics because by themselves they do not reveal the subject's identity.
In this dissertation, we proposed a real-time model for classifying 40 facial attributes, which preprocesses faces and then extracts 7 types of classical and deep features. These features were fused together to train 3 different classifiers. Our proposed model achieved an average accuracy of 91.93%, outperforming 7 state-of-the-art models. We also developed a real-time model for classifying the states of human eyes and mouth (open/closed), and the presence/absence of eyeglasses in the wild. Our method begins by preprocessing a face by cropping the regions of interest (ROIs) and then describing them using RootSIFT features. These features were used to train a nonlinear support vector machine for each attribute. Our eye-state classifier achieved the top performance, while our mouth-state and glasses classifiers were tied as the top performers with deep learning classifiers.
We also introduced a new facial attribute related to Middle Eastern headwear (called igal) along with its detector. Our proposed idea was to detect the igal using a linear multiscale SVM classifier with a HOG descriptor. Thereafter, false positives were discarded using dense SIFT filtering, bag-of-visual-words decomposition, and nonlinear SVM classification. Because the real-life applications are similar, we compared the igal detector with state-of-the-art face detectors; the igal detector significantly outperformed the face detectors while producing the lowest number of false positives. We also fused the igal detector with a face detector to improve the detection performance.
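As a hedged sketch of the first stage of such a detector, the following trains a linear SVM on HOG features computed from positive and negative image patches (using scikit-image and scikit-learn); the directory layout, patch size, and SVM parameter are assumptions, and the multiscale sliding-window scan and the dense-SIFT/bag-of-visual-words false-positive filtering are not reproduced here.

import glob
import numpy as np
from skimage.io import imread
from skimage.transform import resize
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(path, size=(64, 64)):
    # Grayscale, resize to a fixed patch size, and describe with HOG.
    img = resize(imread(path, as_gray=True), size)
    return hog(img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

# Assumed directory layout with cropped training patches.
pos = [hog_features(p) for p in glob.glob("igal_patches/*.png")]
neg = [hog_features(p) for p in glob.glob("background_patches/*.png")]

X = np.vstack(pos + neg)
y = np.array([1] * len(pos) + [0] * len(neg))

# Linear SVM stage; sliding windows would later be scored with clf.decision_function.
clf = LinearSVC(C=0.01).fit(X, y)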
Face detection is the first process in any facial attribute classification pipeline. As a result, we reported a novel study that evaluates the robustness of current face detectors based on: (1) diffraction blur, (2) image scale, and (3) the IoU classification threshold. This study would enable users to pick a robust face detector for their intended applications. / Doctor of Philosophy / The relevance of faces in our daily lives is indisputable. We learn to recognize faces as newborns, and faces play a major role in interpersonal communication. Faces probably represent the most accurate biometric trait in our daily interactions. It is therefore not surprising that so much effort from computer vision researchers has been invested in the analysis of faces, and the automatic detection and analysis of faces within images has received much attention in recent years. The spectrum of computer vision research about face analysis includes, but is not limited to, face detection and facial attribute classification, which are the focus of this dissertation.
The face is a primary biometric because by itself it reveals the subject's identity, while facial attributes (such as hair color and eye state) are soft biometrics because by themselves they do not reveal the subject's identity. Soft biometrics have many uses in the field of biometrics: (1) they can be utilized in a fusion framework to strengthen the performance of a primary biometric system; for example, fusing a face with voice accent information can boost the performance of face recognition. (2) They can also be used to create qualitative descriptions of a person, such as an "old bald male wearing a necktie and eyeglasses."
Face detection and facial attribute classification are not easy problems because of many factors, such as image orientation, pose variation, clutter, facial expressions, occlusion, and illumination, among others. In this dissertation, we introduced novel techniques to classify more than 40 facial attributes in real time. Our techniques followed the general facial attribute classification pipeline, which begins by detecting a face and ends by classifying facial attributes. We also introduced a new facial attribute related to Middle Eastern headwear along with its detector. The new detector was fused with a face detector to improve the detection performance. In addition, we proposed a new method to evaluate the robustness of face detection, which is the first process in the facial attribute classification pipeline.
Detecting the states of human facial attributes in real time is highly desired by many applications. For example, the real-time detection of a driver's eye state (open/closed) can prevent severe accidents; such systems are usually called driver drowsiness detection systems. For classifying 40 facial attributes, we proposed a real-time model that preprocesses faces by localizing facial landmarks to normalize them, and then crops them based on the intended attribute. The face was cropped only if the intended attribute lies inside the face region. After that, 7 types of classical and deep features were extracted from the preprocessed faces. Lastly, these 7 feature sets were fused together to train three different classifiers. Our proposed model achieved an average accuracy of 91.93%, outperforming 7 state-of-the-art models. It also achieved state-of-the-art performance in classifying 14 out of 40 attributes.
We also developed a real-time model that classifies the states of three human facial attributes: (1) eyes (open/closed), (2) mouth (open/closed), and (3) eyeglasses (present/absent). Our proposed method consisted of six main steps: (1) In the beginning, we detected the human face. (2) Then we extracted the facial landmarks. (3) Thereafter, we normalized the face, based on the eye location, to the full frontal view. (4) We then extracted the regions of interest (i.e., the regions of the mouth, left eye, right eye, and eyeglasses). (5) We extracted low-level features from each region and then described them. (6) Finally, we learned a binary classifier for each attribute to classify it using the extracted features. Our developed model achieved 30 FPS with a CPU-only implementation, and our eye-state classifier achieved the top performance, while our mouth-state and glasses classifiers were tied as the top performers with deep learning classifiers.
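A minimal sketch of the ROI-extraction steps (1), (2), and (4) above is given below, assuming OpenCV and dlib with the common 68-point facial landmark model; the model file name and landmark index ranges are assumptions, and the normalization step, the low-level feature description, and the per-attribute classifiers are omitted.

import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model file

def crop(gray, pts, pad=5):
    # Bounding box around a group of landmarks, with a small margin.
    xs, ys = [p.x for p in pts], [p.y for p in pts]
    return gray[max(min(ys) - pad, 0):max(ys) + pad,
                max(min(xs) - pad, 0):max(xs) + pad]

def extract_rois(image_path):
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    rois = []
    for face in detector(gray, 1):                  # step (1): detect the face
        shape = predictor(gray, face)               # step (2): locate landmarks
        pts = [shape.part(i) for i in range(68)]    # 68-point landmark convention
        rois.append({                               # step (4): crop regions of interest
            "left_eye": crop(gray, pts[36:42]),
            "right_eye": crop(gray, pts[42:48]),
            "mouth": crop(gray, pts[48:68]),
        })
    return rois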
We also introduced a new facial attribute related to Middle Eastern headwear along with its detector. After that, we fused it with a face detector to improve the detection performance. The traditional Middle Eastern headwear that men usually wear consists of two parts: (1) the shemagh or keffiyeh, which is a scarf that covers the head and usually has checkered or pure white patterns, and (2) the igal, which is a band or cord worn on top of the shemagh to hold it in place. The shemagh causes many unwanted effects on the face; for example, it usually occludes some parts of the face and adds dark shadows, especially near the eyes. These effects substantially degrade the performance of face detection. To improve the detection of people who wear the traditional Middle Eastern headwear, we developed a model that can be used as a head detector or combined with current face detectors to improve their performance. Our igal detector consists of two main steps: (1) learning a binary classifier to detect the igal and (2) refining the classifier by removing false positives. Because the real-life applications are similar, we compared the igal detector with state-of-the-art face detectors; the igal detector significantly outperformed the face detectors while producing the lowest number of false positives. We also fused the igal detector with a face detector to improve the detection performance.
Face detection is the first process in any facial attribute classification pipeline. As a result, we reported a novel study that evaluates the robustness of current face detectors based on: (1) diffraction blur, (2) image scale, and (3) the IoU classification threshold. This study would enable users to pick a robust face detector for their intended applications. Biometric systems that use face detection suffer from large performance fluctuations. For example, users of biometric surveillance systems that utilize face detection sometimes notice that state-of-the-art face detectors do not perform better than outdated detectors. Although state-of-the-art face detectors are designed to work in the wild (i.e., with no need to retrain, revalidate, and retest), they still heavily depend on the datasets they were originally trained on. This condition in turn leads to variation in the detectors' performance when they are applied to a different dataset or environment. To overcome this problem, we developed a novel optics-based blur simulator that automatically introduces diffraction blur at different image scales/magnifications. We then evaluated different face detectors on the output images using different IoU thresholds. Users first choose their own values for these three settings and then run our model to identify the most efficient face detector under the selected settings. This means our proposed model would enable users of biometric systems to pick the most efficient face detector based on their system setup. Our results showed that under certain settings outdated face detectors sometimes outperform state-of-the-art ones, and vice versa. / Doctor of Philosophy
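To make the IoU-threshold setting concrete, the sketch below shows the standard intersection-over-union computation and a simple greedy matching of detections to ground-truth boxes at a chosen threshold; it is a generic illustration, not the dissertation's evaluation code, and the optics-based blur simulator and scale sweep are not reproduced.

def iou(a, b):
    # Boxes are (x1, y1, x2, y2).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def match_detections(detections, ground_truth, threshold=0.5):
    # Count true positives, false positives, and false negatives for one image.
    unmatched, tp = list(ground_truth), 0
    for det in detections:
        best = max(unmatched, key=lambda gt: iou(det, gt), default=None)
        if best is not None and iou(det, best) >= threshold:
            tp += 1
            unmatched.remove(best)
    return tp, len(detections) - tp, len(unmatched)

print(match_detections([(10, 10, 50, 50)], [(12, 12, 52, 52)], threshold=0.5))  # (1, 0, 0)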
|
187 |
The Media's Description of Nato Membership: A qualitative content analysis of Swedish newspapers' description of a Swedish Nato membership and its possible impact on public opinion. Römo Mella, Magnus. January 2024 (has links)
The role of media in society is significant, and what the media reports influences people. Agenda-setting theory suggests that the issues covered by the media become important topics in society, and through descriptions and attributes the media can shape public opinion. Sweden's military non-alignment ended when its membership in Nato became a reality in 2024. Until 2012, public opinion in Sweden strongly opposed joining Nato. However, a shift occurred, and the gap between supporters and opponents narrowed. Through a qualitative content analysis, this study examined how the editorial pages of three Swedish newspapers described Swedish Nato membership from 2008 to 2015, and how the relationship between these descriptions and public opinion evolved. The results clearly show that, over time, the editorials increasingly portrayed Swedish Nato membership positively rather than negatively. There is a correlation between these descriptions and public opinion. However, it is not possible to conclude whether the editorials influenced public opinion or whether the reverse is true.
|
188 |
Round-trip Engineering of Template-based Code Generation in SkAT. Nett, Tobias. 04 August 2015 (has links) (PDF)
In recent years, the development of multi-core CPUs and GPUs with many cores has taken precedence over increases in clock frequency. Therefore, writing parallel programs for multi-core and many-core systems becomes increasingly important. Due to the lack of inherently parallel language features in most programming languages, many programs today are written sequentially and then enhanced with special pragmas or framework calls that mark parallelizable parts of the code. These hints are then used to modify and extend the code with parallel constructs in a preprocessing step. If it is crucial to optimize the run time of a program, the code generated by this step has to be inspected and manually tuned. To keep the original and the transformed code artifacts synchronized, an editor with a round-trip engineering (RTE) system can be used. RTE propagates changes made in the source artifacts to the generated artifacts and vice versa.
One tool that can be used to expand pragmas to parallelized source code is the invasive software composition framework SkAT. SkAT-based tools use reference attribute grammars (RAGs) to compose code fragments according to a composition program written in Java. To facilitate the creation of SkAT-based tools, a minimal composition system framework, SkAT/Minimal, built on top of the SkAT core provides mechanisms for incrementally building such tools. The principle of island parsing is employed so that only as much of a language as is necessary for composition needs to be expressed.
In this work, composition systems based on SkAT/Minimal are targeted. The task is split into two parts: first, approaches for RTE are analyzed and a concept for an RTE system is created. The focus lies on the analysis of features and requirements of existing RTE approaches and a thorough investigation of all relevant steps required to implement such a system for SkAT/Minimal. The second part of the task is the creation and evaluation of a prototypical implementation of the system.
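To illustrate the core round-trip idea analyzed in this work, the toy sketch below (in Python rather than SkAT's Java/RAG setting) records which span of the generated output each composed fragment produced and writes an edit made in the output back to its originating fragment; the fragment names and contents are invented.

# Toy round-trip bookkeeping: compose named source fragments into one output
# string while recording origin spans, then propagate an output edit back.
fragments = {"header": "n = 4\n", "body": "for i in range(n): work(i)\n"}

def compose(order):
    out, spans, pos = "", [], 0
    for name in order:
        text = fragments[name]
        spans.append((pos, pos + len(text), name))   # origin of this output span
        out += text
        pos += len(text)
    return out, spans

def propagate_edit(spans, start, new_text, old_len):
    # Write an edit made at output offset `start` back to its source fragment.
    for s, e, name in spans:
        if s <= start < e:
            local = start - s
            frag = fragments[name]
            fragments[name] = frag[:local] + new_text + frag[local + old_len:]
            return name

out, spans = compose(["header", "body"])
changed = propagate_edit(spans, out.index("4"), "8", 1)   # edit "4" -> "8" in the output
print(changed, fragments["header"])                       # header n = 8

A real system additionally has to classify the kind of modification and infer origins for inserted or deleted elements, which is the kind of question the RTE concept developed in this work addresses.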
|
189 |
User Attribute Inference via Mining User-Generated Data. Ding, Shichang. 01 December 2020 (has links)
No description available.
|
190 |
Round-trip Engineering of Template-based Code Generation in SkAT. Nett, Tobias. 13 March 2015 (has links)
In recent years, the development of multi-core CPUs and GPUs with many cores has taken precedence over increases in clock frequency. Therefore, writing parallel programs for multi-core and many-core systems becomes increasingly important. Due to the lack of inherently parallel language features in most programming languages, many programs today are written sequentially and then enhanced with special pragmas or framework calls that mark parallelizable parts of the code. These hints are then used to modify and extend the code with parallel constructs in a preprocessing step. If it is crucial to optimize the run time of a program, the code generated by this step has to be inspected and manually tuned. To keep the original and the transformed code artifacts synchronized, an editor with a round-trip engineering (RTE) system can be used. RTE propagates changes made in the source artifacts to the generated artifacts and vice versa.
One tool that can be used to expand pragmas to parallelized source code is the invasive software composition framework SkAT. SkAT-based tools use reference attribute grammars (RAGs) to compose code fragments according to a composition program written in Java. To facilitate the creation of SkAT-based tools, a minimal composition system framework, SkAT/Minimal, built on top of the SkAT core provides mechanisms for incrementally building such tools. The principle of island parsing is employed so that only as much of a language as is necessary for composition needs to be expressed.
In this work, composition systems based on SkAT/Minimal are targeted. The task is split into two parts: first, approaches for RTE are analyzed and a concept for an RTE system is created. The focus lies on the analysis of features and requirements of existing RTE approaches and a thorough investigation of all relevant steps required to implement such a system for SkAT/Minimal. The second part of the task is the creation and evaluation of a prototypical implementation of the system.
1 Introduction 1
1.1 Motivation 1
1.2 Scope 2
1.3 Contributions 2
1.4 Organization 2
2 Background 5
2.1 Fundamentals 6
2.1.1 Syntax Trees 6
2.1.2 Parsing and Unparsing 6
2.2 Attribute Grammars 9
2.2.1 Reference Attribute Grammars 10
2.2.2 Reference Attribute Grammars in SkAT 10
2.3 Composition Systems 12
2.3.1 Software Composition Systems 13
2.3.2 Invasive Software Composition 13
2.3.3 SkAT 15
2.3.4 Template-based Code Generation 16
2.4 Round-trip Engineering 17
2.4.1 Motivation For Round-trip Engineering 17
2.4.2 Concepts of RTE 18
3 Analysis of RTE Approaches 19
3.1 Automatic Round-trip Engineering 19
3.2 RTE In Aspect Weaving Systems 21
3.2.1 CST Graftings 21
3.2.2 Update Propagation in Aspect Weaving Systems 22
3.3 RTE in Invasive Software Composition Systems 23
3.3.1 Tracing Composition Program Execution 23
3.3.2 Backpropagation of Changes 24
3.3.3 Implementation in the Reuseware Framework 26
3.4 Managing Fragments in RTE 27
3.5 Evaluation of RTE Approaches 28
4 Tracing in SkAT 31
4.1 Requirements 31
4.1.1 Objectives 32
4.1.2 Functional and Nonfunctional Requirements 32
4.2 Concept 33
4.3 Implementation 34
5 Building an RTE-editor Prototype 37
5.1 Prerequisites 37
5.2 Requirements 39
5.3 Concept 40
5.3.1 AST Interface 41
5.3.2 Composer Interface 41
5.3.3 Generating the Output 41
5.3.4 The Prototype Skeleton 42
5.4 Implementation 43
6 Designing an RTE-editor 49
6.1 Replay 50
6.2 AST Modifications 50
6.2.1 Modification Types 51
6.2.2 Detecting Modification Types 52
6.3 Origin Inference 53
6.3.1 Inference for Updated Elements 53
6.3.2 Inference for Deleted Elements 54
6.3.3 Inference for Inserted Elements 54
6.4 Gap Edit Problem 54
6.4.1 Inference in SkAT 57
6.4.2 Multiple Source Fragments 57
6.5 Applying Modifications 58
6.5.1 Propagating Terminal Updates 60
6.5.2 Propagating Non-terminal Updates 61
6.5.3 Propagating Deletions 62
6.5.4 Propagating Insertions 62
6.5.5 Propagating Composed Modifications 62
6.6 Adapting SkAT Composition Programs 63
7 Evaluation and Outlook on Future Works 65
7.1 Fragment Versioning 65
7.2 Composition Program DSL 66
7.3 Structured Editors 68
7.4 SkAT RTE System 68
Appendices 71
List of Figures 73
List of Listings 75
List of Abbreviations 77
Bibliography 79
CD Content 83
|