321 |
Chopin's 24 Préludes, Opus 28: A Cycle Unified by Motion between the Fifth and Sixth Scale Degrees. Boelcke, Andreas Maximilian, January 2008.
No description available.
|
322 |
A Generalized Residuals Model for the Unified Matrix Polynomial Approach to Frequency Domain Modal Parameter Estimation. Fladung, Jr., William A., 11 October 2001.
No description available.
|
323 |
Synchronization of information in multiple heterogeneous manufacturing databases. Dhamija, Dinesh, January 1999.
No description available.
|
324 |
Phenomenology of SO(10) Grand Unified Theories. Pernow, Marcus, January 2019.
Although the Standard Model (SM) of particle physics describes observations well, it has several shortcomings. The most crucial of these are that it cannot explain the origin of neutrino masses or the existence of dark matter. Furthermore, several of its aspects are seemingly ad hoc, such as the choice of gauge group and the cancellation of gauge anomalies. These shortcomings point to a theory beyond the SM. Although there are many proposed models for physics beyond the SM, this thesis focuses on grand unified theories based on the SO(10) gauge group, which predict that the three gauge groups of the SM unify at a higher energy into a single group that contains the SM gauge group as a subgroup. We focus on the Yukawa sector of these models and investigate the extent to which observables such as fermion masses and mixing parameters can be accommodated in different models based on the SO(10) gauge group. Neutrino masses and leptonic mixing parameters are particularly interesting, since SO(10) models naturally embed the seesaw mechanism. The electroweak scale and the unification scale differ by around 14 orders of magnitude, so the parameters of the SO(10) model must be related to those of the SM through renormalization group equations. We investigate this for several different models by performing fits of SO(10) models to fermion masses and mixing parameters, taking into account the thresholds at which heavy right-handed neutrinos are integrated out of the theory. Although the results depend in general on the particular model under consideration, some general results appear to hold. The observables of the Yukawa sector can in general be accommodated in SO(10) models only if the neutrino masses are normally ordered; inverted ordering is strongly disfavored. We find that the observable providing the most tension in the fits is the leptonic mixing angle θ23, whose fitted value is consistently lower than the measured value. Furthermore, we find that numerical fits to the data favor type-I seesaw over type-II seesaw for the generation of neutrino masses. / Examiner: Professor Mark Pearce, Physics, KTH
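Two textbook formulas behind the abstract's argument can be sketched as follows: the type-I seesaw relation that SO(10) models embed, and the one-loop renormalization group running that bridges the 14 orders of magnitude between the electroweak and unification scales (generic forms, not the thesis's fitted equations):

```latex
% Type-I seesaw: light neutrino masses generated by heavy right-handed
% neutrinos (m_D: Dirac mass matrix, M_R: heavy Majorana mass matrix).
\[
  m_\nu \simeq -\, m_D \, M_R^{-1} \, m_D^{T}, \qquad M_R \gg m_D .
\]
% One-loop running of the gauge couplings g_i between the electroweak
% scale and the unification scale M_X (b_i: beta-function coefficients).
\[
  \mu \frac{\mathrm{d} g_i}{\mathrm{d} \mu} = \frac{b_i}{16\pi^{2}}\, g_i^{3}
  \;\;\Longrightarrow\;\;
  \frac{1}{g_i^{2}(\mu)} = \frac{1}{g_i^{2}(M_X)}
    + \frac{b_i}{8\pi^{2}} \ln \frac{M_X}{\mu} .
\]
```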
|
325 |
Connecting the usability and software engineering life cycles through a communication-fostering software development framework and cross-pollinated computer science courses. Pyla, Pardha S., 17 September 2007.
Interactive software systems have both functional and user interface components. User interface design and development requires specialized usability engineering (UE) knowledge, training, and experience in topics such as psychology, cognition, specialized design guidelines, and task analysis. The design and development of a functional core requires specialized software engineering (SE) knowledge, training, and experience in topics such as algorithms, data structures, software architectures, calling structures, and database management.
Given that the user interface and the functional core are two closely coupled components of an interactive software system, with each constraining the design of the other, there is a need for the SE and UE life cycles to be connected to support communication among roles between the two development life cycles. Additionally, there is a corresponding need for appropriate computer science curricula to train the SE and UE roles about the connections between the two processes.
In this dissertation, we connected the SE and UE life cycles by creating the Ripple project development environment, which fosters communication between the SE and UE roles, and by creating a graduate-level, cross-pollinated joint SE-UE course offering, with student teams spanning the two classes, to educate students about the intricacies of interactive-software development. Using this joint course, we simulated different conditions of interactive-software development (i.e., different types of project constraints and role playing) and assigned different teams to these conditions. As part of semester-long class projects, these teams developed prototype systems for a real client under their assigned development conditions. Two of the eight teams in this study used the Ripple framework.
As part of this experimental course offering, various instruments were employed throughout the semester to assess the effectiveness of a framework like Ripple and to investigate candidate factors that affect the quality of the product and process of interactive-software development. The study highlighted the importance of communication between the SE and UE roles and exemplified the need for the two roles to respect each other and to be willing to work with one another. Also, there appears to be an inherent conflict of interest when the same people play both UE and SE roles: they tend to choose user interface features that are easy to implement rather than easy for the system's target users to use. Regarding pedagogy, students in this study indicated that the joint SE-UE course was more useful for learning about interactive-software development, and provided a better learning experience, than traditional SE-only or UE-only courses. / Ph. D.
|
326 |
Iterative Computing over a Unified Relationship Matrix for Information Integration. Xi, Wensi, 06 September 2006.
In this dissertation I use a Unified Relationship Matrix (URM) to represent a set of heterogeneous data objects and their inter-relationships. I argue that integrated and iterative computations over the Unified Relationship Matrix can help overcome the data sparseness problem (a common situation in various information application scenarios), and detect latent relationships (such as latent term associations discovered by LSI) among heterogeneous data objects. Thus, this kind of computation can be used to improve the quality of various information applications that require combining information from heterogeneous data sources.
To support the argument, I further develop a unified link analysis algorithm, the Link Fusion algorithm, and a unified similarity-calculating algorithm, the SimFusion algorithm. Both algorithms attempt to better integrate information from heterogeneous sources by iteratively computing over the Unified Relationship Matrix to calculate some specific property of the data objects, such as the importance of a data object (as in the Link Fusion algorithm) or the similarity between a pair of data objects (as in the SimFusion algorithm).
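A minimal sketch of a SimFusion-style iteration, assuming (as in published descriptions of SimFusion, though the matrix contents here are illustrative placeholders) that the URM is row-normalized and the similarity matrix is repeatedly updated as S ← L S Lᵀ with self-similarities pinned to 1:

```c
#include <stdio.h>

#define N     4    /* number of data objects represented in the URM     */
#define ITERS 10   /* fixed iteration count instead of a convergence test */

/* Row-normalize the unified relationship matrix so each row sums to 1. */
static void row_normalize(double m[N][N]) {
    for (int i = 0; i < N; i++) {
        double sum = 0.0;
        for (int j = 0; j < N; j++) sum += m[i][j];
        if (sum > 0.0)
            for (int j = 0; j < N; j++) m[i][j] /= sum;
    }
}

int main(void) {
    /* Hypothetical URM over 4 heterogeneous objects (say, two queries and
       two web pages); entries encode intra- and inter-type relationships. */
    double L[N][N] = {
        {0, 1, 1, 0},
        {1, 0, 0, 1},
        {1, 0, 0, 1},
        {0, 1, 1, 0},
    };
    double S[N][N] = {{0}};              /* similarity matrix, S0 = I */
    for (int i = 0; i < N; i++) S[i][i] = 1.0;

    row_normalize(L);

    for (int it = 0; it < ITERS; it++) {
        double T[N][N] = {{0}}, Snew[N][N] = {{0}};
        /* T = L * S */
        for (int i = 0; i < N; i++)
            for (int k = 0; k < N; k++)
                for (int j = 0; j < N; j++)
                    T[i][j] += L[i][k] * S[k][j];
        /* Snew = T * L^T, i.e. Snew[i][j] = sum_k T[i][k] * L[j][k] */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                for (int k = 0; k < N; k++)
                    Snew[i][j] += T[i][k] * L[j][k];
        /* Copy back, pinning self-similarity to 1. */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                S[i][j] = (i == j) ? 1.0 : Snew[i][j];
    }

    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) printf("%6.3f ", S[i][j]);
        printf("\n");
    }
    return 0;
}
```

A Link Fusion-style importance computation would follow the same pattern with a vector in place of S, iterated against the same row-normalized URM.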
Then, I develop two sets of experiments on real-world datasets to investigate whether the algorithms proposed in this dissertation can better integrate information from multiple sources. The performance of the algorithms is compared to that of traditional link analysis and similarity-calculating algorithms. Experimental results show that the algorithms developed can significantly outperform the traditional link analysis and similarity-calculating algorithms.
I further investigate various pruning techniques aimed at improving efficiency, and I examine the scalability of the algorithms. Experimental results show that pruning can effectively improve the efficiency of the algorithms. / Ph. D.
|
327 |
Unified Net Willans Line Model for Estimating the Energy Consumption of Battery Electric Vehicles. Li, Candy Yuan, 09 September 2022.
Due to increased urgency regarding environmental concerns within the transportation industry, sustainable solutions for combating climate change are in high demand. One solution is a widespread transition from internal combustion engine vehicles (ICEVs) to battery electric vehicles (BEVs). To facilitate this transition, reliable energy consumption modeling is desired for providing quick, high-level estimations for a BEV without requiring extensive vehicle and computational resources. Therefore, the goal of this paper is to create a simple yet reliable vehicle model that can estimate the energy consumption of most, if not all, electric vehicles on the market by using parameter normalization techniques. The normalizing parameters include vehicle test weight and performance, used to obtain a unified net Willans line that describes input/output power through a linear relationship. A base model and three normalized models are developed by fitting the UDDS and HWFET energy consumption test data published by the EPA for all BEVs in the U.S. market. Of the models analyzed, the normalization by weight performs best, with the lowest RMSE values of 0.384 kW, 0.747 kW, and 0.988 kW for predicting the UDDS, HWFET, and US06 data points, respectively, and 0.653 kW for all three data sets combined. Consideration of accessory loads at 0.5 kW improves the model normalized by weight and performance, reducing RMSE by over 20% for predictions with all data sets combined. Removing outliers in addition to considering accessory loads improves the model normalized by weight and performance by over 36% in RMSE for predictions with all data sets combined. Overall, results suggest that a unified net Willans line is largely achievable with accessible energy consumption data on U.S. regulatory cycles. / Master of Science / Due to increased urgency regarding environmental concerns within the transportation industry, sustainable solutions for combating climate change are in high demand. One solution is a widespread transition from conventional internal combustion engine vehicles (ICEVs) to battery electric vehicles (BEVs). To facilitate this transition, reliable energy consumption modeling is desired to support quick, high-level analyses for BEVs without requiring expensive resources. Therefore, the goal of this paper is to create a simple vehicle model that can estimate the energy consumption of most, if not all, electric vehicles by scaling the data using vehicle parameters. These parameters include vehicle test weight and performance, used to obtain a unified net Willans line model describing input/output power through a linear relationship. The UDDS (city) and HWFET (highway) energy consumption data points used to develop the model are easily accessible from published EPA data. Of the models analyzed, the normalization by test weight performs best, with the lowest error values of 0.384 kW, 0.747 kW, and 0.988 kW for predicting the UDDS, HWFET, and US06 (aggressive city/highway cycle) data points, respectively, and 0.653 kW for all three data sets combined. Consideration of accessory loads at 0.5 kW improves the model normalized by weight and performance, reducing error by over 20% for predictions with all data sets combined. Removing outliers in addition to considering accessory loads improves the model normalized by weight and performance by over 36% in error for predictions with all data sets combined. Overall, results suggest that a unified net Willans line is largely achievable with accessible energy consumption data on U.S. regulatory cycles.
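As a sketch of the model form: a net Willans line is a straight line P_in = a · P_out + b relating cycle-average input and output power, recoverable by ordinary least squares. The following fits one to hypothetical operating points; the numbers are placeholders, not the EPA data used in the thesis:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical cycle-average (P_out, P_in) points in kW for several
       vehicle/cycle combinations -- illustrative placeholders only. */
    double p_out[] = { 3.1, 5.8,  9.4, 12.0, 16.5 };
    double p_in[]  = { 4.0, 7.1, 11.2, 14.1, 19.3 };
    int n = (int)(sizeof p_out / sizeof p_out[0]);

    /* Ordinary least squares for P_in = a * P_out + b, where a is the
       inverse marginal efficiency and b the constant loss term (kW). */
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        sx  += p_out[i];
        sy  += p_in[i];
        sxx += p_out[i] * p_out[i];
        sxy += p_out[i] * p_in[i];
    }
    double a = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    double b = (sy - a * sx) / n;

    /* RMSE in kW, the figure of merit quoted in the abstract. */
    double se = 0;
    for (int i = 0; i < n; i++) {
        double r = p_in[i] - (a * p_out[i] + b);
        se += r * r;
    }
    printf("P_in = %.3f * P_out + %.3f kW  (RMSE %.3f kW)\n",
           a, b, sqrt(se / n));
    return 0;
}
```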
|
328 |
Directive-Based Data Partitioning and Pipelining and Auto-Tuning for High-Performance GPU Computing. Cui, Xuewen, 15 December 2020.
The computer science community needs simpler mechanisms to achieve the performance potential of accelerators, such as graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and co-processors (e.g., Intel Xeon Phi), due to their increasing use in state-of-the-art supercomputers. Over the past 10 years, we have seen significant improvement in both the computing power and the memory bandwidth of accelerators. However, computing power has grown significantly faster than the interconnect bandwidth between the central processing unit (CPU) and the accelerator.
Given that accelerators generally have their own discrete memory space, data must be copied from the CPU (host) memory to the accelerator (device) memory before computation starts on the accelerator. Programming models like CUDA, OpenMP, OpenACC, and OpenCL can efficiently offload compute-intensive workloads to these accelerators, but achieving the overlap of data transfers with kernel computation in these models is neither simple nor straightforward. Instead, code typically copies data to or from the device with no overlap at all, or achieves overlap only through explicit user design and refactoring.
Achieving performance can require extensive refactoring and hand-tuning to apply data transfer optimizations, and users must manually partition their dataset whenever it is larger than device memory, which can be highly difficult when the device memory size is not exposed to the user. As systems become increasingly heterogeneous, the CPU is responsible for handling many tasks related to the accelerators: computation and data movement tasks, task dependency checking, and task callbacks. Leaving all control logic to the CPU not only incurs extra communication delay over the PCIe bus but also consumes CPU resources, which may affect the performance of other CPU tasks. This thesis work aims to provide efficient directive-based data pipelining approaches for GPUs that tackle these issues and improve performance, programmability, and memory management. / Doctor of Philosophy / Over the past decade, parallel accelerators have become increasingly prominent in this emerging era of "big data, big compute, and artificial intelligence." In more recent supercomputers and datacenter clusters, we find multi-core central processing units (CPUs), many-core graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and co-processors (e.g., Intel Xeon Phi) being used to accelerate many kinds of computation tasks.
While many new programming models have been proposed to support these accelerators, scientists and developers without domain knowledge usually find existing programming models inefficient for porting their code to accelerators. Because accelerator on-chip memory is limited, data arrays are often too large to fit in it, especially in deep learning tasks; the data must be partitioned and managed properly, which requires additional hand-tuning effort. Moreover, without domain knowledge, it is difficult for developers to tune specific applications for high performance. To handle these problems, this dissertation proposes a general approach that provides better programmability, performance, and data management for accelerators. Accelerator users often prefer to keep their existing, verified C, C++, or Fortran code rather than grapple with unfamiliar code.
Since 2013, OpenMP has provided a straightforward way to adapt existing programs to accelerated systems. We propose multiple associated clauses to help developers easily partition and pipeline accelerated code. Specifically, the proposed extension can efficiently overlap kernel computation with data transfer between host and device. The extension supports memory over-subscription, meaning that the memory required by the tasks can exceed the GPU memory size; the internal scheduler guarantees that data is swapped out correctly and efficiently. Machine learning methods are also leveraged to auto-tune accelerator performance.
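The dissertation's proposed clauses are not reproduced here; as an illustration, the following standard OpenMP 4.5-style sketch shows the manual chunked pipeline, built from depend/nowait tasking, that such an extension aims to generate automatically:

```c
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N      (1 << 20)
#define CHUNKS 8

/* Manual chunked pipeline: split the array into CHUNKS pieces and issue
   asynchronous copy/compute/copy stages per chunk, so the transfer of
   chunk c+1 can overlap the kernel of chunk c. */
static void scale_pipelined(float *a)
{
    int chunk = N / CHUNKS;

    #pragma omp target enter data map(alloc: a[0:N])
    for (int c = 0; c < CHUNKS; c++) {
        int off = c * chunk;

        /* Stage 1: asynchronous host-to-device copy of this chunk. */
        #pragma omp target update to(a[off:chunk]) depend(out: a[off]) nowait

        /* Stage 2: compute on the chunk once its copy has finished. */
        #pragma omp target teams distribute parallel for \
                depend(inout: a[off]) nowait
        for (int i = off; i < off + chunk; i++)
            a[i] *= 2.0f;

        /* Stage 3: asynchronous device-to-host copy of the result. */
        #pragma omp target update from(a[off:chunk]) depend(in: a[off]) nowait
    }
    #pragma omp taskwait
    #pragma omp target exit data map(delete: a[0:N])
}

int main(void)
{
    float *a = malloc(N * sizeof *a);
    for (int i = 0; i < N; i++) a[i] = 1.0f;
    scale_pipelined(a);
    printf("a[0] = %.1f, a[N-1] = %.1f\n", a[0], a[N - 1]);
    free(a);
    return 0;
}
```

Each chunk's copy-in, kernel, and copy-out form a dependency chain, while independent chunks remain free to overlap; directive extensions of the kind the dissertation proposes would derive this structure from a few clauses instead of hand-written task dependencies.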
|
329 |
Reverse Software Engineering Large Object Oriented Software Systems using the UML Notation. Ramasubbu, Surendranath, 30 April 2001.
A problem traditionally experienced by the software engineering community has been that of understanding legacy code. A decade ago, "legacy code" referred to programs written in COBOL, typically for large mainframe systems. However, current software developers predominantly use object-oriented languages like C++ and Java. The belief, prevalent among software developers and object philosophers, that object-oriented software would be comparatively easy to comprehend has turned out to be a myth. Tomorrow's legacy code is being written today: object-oriented programs are even more complex and, unless rigorously documented, difficult to comprehend. Reverse engineering is a methodology that greatly reduces the time, effort, and complexity involved in solving the program comprehension problem.
This thesis deals with reverse engineering complex object-oriented software and reports the experiences of a sample case study. An extensive survey of the literature and of contemporary research on reverse engineering and program comprehension was undertaken as part of this thesis work. An Energy Information System (EIS) application, created by a leading energy service provider and used extensively in the real world, was chosen as the case study. Reverse engineering this industry-strength Java application necessitated the definition of a formal process: an intuitive Reverse Engineering Process (REP) was defined and used for the reverse engineering effort. The learning experiences gained from this case study are discussed in this thesis. / Master of Science
|
330 |
Unified Control for the Permanent Magnet Generator and Rectifier System. Xu, Zhuxian, 11 June 2010.
The structure of a permanent magnet generator (PMG) connected to an active front-end rectifier is very popular in AC-DC architectures. Power density and efficiency are critical, especially in applications such as aircraft and vehicles. Since the generator and the rectifier can be controlled simultaneously, it is very desirable to develop a unified control. With this unified control, the boost inductors between the PMG and the rectifier are eliminated, which significantly reduces the volume and weight of the whole system and improves its power density. System efficiency can also be improved with an appropriate control strategy.
In this thesis, a unified control for the permanent magnet generator and rectifier system is presented. Firstly, the unified model of the PMG and rectifier system is given as the basis for designing the control system. Secondly, a unified control method for the PMG and rectifier system is introduced, and the design procedure for each control loop is presented in detail, including the current control loop, the voltage control loop, the reactive power control loop, and the speed and rotor position estimator loop. Thirdly, the hardware is developed and experiments are conducted to verify the control strategy. Fourthly, a method to optimize overall system efficiency through appropriate reactive power distribution is proposed, considering two cases: a flexible DC-link voltage and a fixed DC-link voltage. / Master of Science
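For context, unified PMG-rectifier controllers are typically designed on the standard dq-frame model of the permanent magnet machine, whose own inductances can take over the filtering role of the eliminated boost inductors; a textbook sketch follows (motor sign convention, not the thesis's specific derivation):

```latex
% Stator voltage equations of a permanent magnet machine in the rotor
% dq frame (R_s: stator resistance, L_d, L_q: dq-axis inductances,
% \omega_e: electrical speed, \psi_m: magnet flux linkage).
\[
  v_d = R_s i_d + L_d \frac{\mathrm{d} i_d}{\mathrm{d} t} - \omega_e L_q i_q,
  \qquad
  v_q = R_s i_q + L_q \frac{\mathrm{d} i_q}{\mathrm{d} t}
        + \omega_e \left( L_d i_d + \psi_m \right).
\]
% Electromagnetic torque for a machine with p pole pairs.
\[
  T_e = \tfrac{3}{2}\, p \left[ \psi_m i_q + (L_d - L_q)\, i_d i_q \right].
\]
```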
|