691 |
Optimized Composition of Parallel Components on a Linux Cluster. Al-Trad, Anas. January 2012 (has links)
We develop a novel framework for optimized composition of explicitly parallel software components with different implementation variants, given the problem size, data distribution scheme and processor group size on a Linux cluster. We consider two approaches (two cases of the framework). In the first approach, dispatch tables are built using measurement data obtained offline by executions for some (sample) points in the ranges of the context properties. Inter-/extrapolation is then used to perform the actual variant selection for a given execution context at run time. In the second approach, a cost function for each component variant is provided by the component writer for variant selection. These cost functions can internally look up measurement tables, built either offline or at deployment time, for computation- and communication-specific primitives. In both approaches, the call to an explicitly parallel software component (with different implementation variants) is made via a dispatcher instead of calling a variant directly. As a case study, we apply both approaches to a parallel component for matrix multiplication with multiple implementation variants, implemented using the Message Passing Interface (MPI). The results show a reduction in execution time for the optimally composed applications compared to applications with hard-coded composition. In addition, the results compare estimated and measured times for each variant across different data distributions, processor group sizes and problem sizes.
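As a rough illustration of the first approach, the sketch below builds a dispatch table from offline timing samples and inter-/extrapolates to select a variant at run time. The variant names and timings are invented for illustration; in the framework the table would be keyed by the full execution context (problem size, data distribution, processor group size), not problem size alone.

```python
import bisect

# Hypothetical offline dispatch table: variant -> {problem size: runtime (s)}.
MEASUREMENTS = {
    "variant_a": {256: 0.10, 512: 0.55, 1024: 3.20},
    "variant_b": {256: 0.18, 512: 0.48, 1024: 2.10},
}

def estimate(table, n):
    """Linearly inter-/extrapolate a runtime for problem size n."""
    sizes = sorted(table)
    i = bisect.bisect_left(sizes, n)
    if i == 0:                      # extrapolate below the sampled range
        lo, hi = sizes[0], sizes[1]
    elif i == len(sizes):           # extrapolate above the sampled range
        lo, hi = sizes[-2], sizes[-1]
    else:                           # interpolate between bracketing samples
        lo, hi = sizes[i - 1], sizes[i]
    return table[lo] + (table[hi] - table[lo]) * (n - lo) / (hi - lo)

def dispatch(n):
    """Select the variant with the smallest estimated runtime."""
    return min(MEASUREMENTS, key=lambda v: estimate(MEASUREMENTS[v], n))
```

With these sample timings the dispatcher picks `variant_a` for small sizes and switches to `variant_b` past the crossover, which is exactly the decision a hard-coded composition cannot make.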
|
692 |
Inverse Discrete Cosine Transform by Bit Parallel Implementation and Power Comparison. Bhardwaj, Divya Anshu. January 2003 (has links)
The goal of this project was to implement and compare the Inverse Discrete Cosine Transform using three methods: bit parallel, digit serial and bit serial. This work describes a one-dimensional Discrete Cosine Transform implemented by the bit-parallel method in a 0.35 µm technology. When implementing the design, several considerations such as word length were taken into account. The code was written in VHDL and some of the calculations were done in MATLAB. The VHDL code was then synthesized using the Design Analyzer tool of Synopsys; power was calculated and the results were compared.
|
693 |
Metamodel-Based Probabilistic Design for Dynamic Systems with Degrading Components. Seecharan, Turuna Saraswati. January 2012 (has links)
The probabilistic design of dynamic systems with degrading components is difficult. Design of dynamic systems typically involves the optimization of a time-invariant performance measure, such as Energy, that is estimated using a dynamic response, such as angular speed. The mechanistic models developed to approximate this performance measure are too complicated to be used with simple design calculations and lead to lengthy simulations. When degradation of the components is assumed, in order to determine suitable service times, estimation of the failure probability over the product lifetime is required. Again, complex mechanistic models lead to lengthy lifetime simulations when the Monte Carlo method is used to evaluate probability.
Based on these problems, an efficient methodology is presented for the probabilistic design of dynamic systems and for estimating the cumulative distribution function of the time to failure of a performance measure when degradation of the components is assumed. The four main steps are: 1) transforming the dynamic response into a set of static responses at discrete cycle-time steps and using Singular Value Decomposition to efficiently estimate a time-invariant performance measure that is based upon a dynamic response; 2) replacing the mechanistic model with an approximating function, known as a “metamodel”; 3) searching for the best design parameters using fast integration methods such as the First Order Reliability Method; and 4) building the cumulative distribution function as the sum of the incremental failure probabilities, estimated using the set-theory method, over the planned lifetime.
The first step of the methodology uses design of experiments or sampling techniques to select a sample of training sets of the design variables. These training sets are then input to the computer-based simulation of the mechanistic model to produce a matrix of corresponding responses at discrete cycle-times. Although metamodels can be built at each time-specific column of this matrix, this method is slow especially if the number of time steps is large. An efficient alternative uses Singular Value Decomposition to split the response matrix into two matrices containing only design-variable-specific and time-specific information. The second step of the methodology fits metamodels only for the significant columns of the matrix containing the design variable-specific information. Using the time-specific matrix, a metamodel is quickly developed at any cycle-time step or for any time-invariant performance measure such as energy consumed over the cycle-lifetime. In the third step, design variables are treated as random variables and the First Order Reliability Method is used to search for the best design parameters. Finally, the components most likely to degrade are modelled using either a degradation path or a marginal distribution model and, using the First Order Reliability Method or a Monte Carlo Simulation to estimate probability, the cumulative failure probability is plotted. The speed and accuracy of the methodology using three metamodels, the Regression model, Kriging and the Radial Basis Function, is investigated.
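The SVD split described above can be sketched as follows. The response matrix, design variable and rank are invented for illustration; the thesis obtains the matrix from mechanistic-model simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical response matrix: rows = training designs, columns = discrete
# cycle-time steps (the design variable and responses are invented).
n_designs, n_steps = 8, 200
t = np.linspace(0.0, 1.0, n_steps)
x = rng.uniform(0.5, 1.5, n_designs)            # one design variable per row
R = np.outer(x, np.sin(2 * np.pi * t)) + np.outer(x**2, t)

# SVD splits R into design-variable-specific and time-specific information
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = int(np.sum(s > 1e-8 * s[0]))                # keep significant components
A = U[:, :k] * s[:k]                            # design-variable-specific scores
B = Vt[:k, :]                                   # time-specific basis

# Metamodels need only be fit to the k columns of A, with k << n_steps;
# the response at any cycle-time step is then recovered from A @ B.
R_approx = A @ B
```

Here only k columns need metamodels instead of 200 time-specific ones, which is the efficiency gain the methodology relies on.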
This thesis shows that the metamodel offers a significantly faster yet accurate alternative to the mechanistic model, both for probabilistic design optimization and for estimating the cumulative distribution function. For design using the First Order Reliability Method to estimate probability, the Regression model is the fastest and the Radial Basis Function is the slowest; Kriging is accurate and faster than the Radial Basis Function, but still slower than the Regression model. When estimating the cumulative distribution function, metamodels are more than 100 times faster than the mechanistic model, with an error of less than ten percent relative to it. Kriging and the Radial Basis Function are more accurate than the Regression model, and computation is faster when the Monte Carlo Simulation, rather than the First Order Reliability Method, is used to estimate probability.
|
694 |
Modelling of energy requirements by a narrow tillage tool. Ashrafi Zadeh, Seyed Reza. 04 July 2006
The amount of energy consumed during a tillage operation depends on three categories of parameters: (1) soil parameters, (2) tool parameters and (3) operating parameters. Although many studies have reported on the effects of these parameters on tillage energy, the exact number of relevant parameters and the contribution of each to the total energy requirement have not been specified. A study with the objectives of identifying the energy-consuming components and determining the magnitude of each component for a vertical narrow tool, particularly at high operating speeds, was conducted in the soil bin facilities of the Department of Agricultural and Bioresource Engineering, University of Saskatchewan.

Based on studies by Blumel (1986) and Kushwaha and Linke (1996), four main energy-consuming components were assumed:

(1) energy requirements associated with soil-tool interactions;
(2) energy requirements associated with interactions between tilled and fixed soil masses;
(3) energy requirements associated with soil deformation; and
(4) energy requirements associated with the acceleration of the tilled soil.

The energy requirement of the vertical narrow tool was calculated from its draft requirement measured in the soil bin. The effects of three variables, moisture content, operating depth and forward speed, were studied at different levels: (1) moisture content at 14% and 20%; (2) depth at 40, 80, 120 and 160 mm; and (3) speed at 1, 8, 16 and 24 km/h. The total energy requirement was divided into the four components using the procedure developed in the research.

Regression equations for the different energy components were developed from experimental data of two replicates and then validated by additional soil bin experiments conducted with the same soil and tool but different operating conditions.
The developed models showed good correlation with the available experimental data for all four components. The coefficients of all regression equations showed that a first-order energy-moisture content relationship was best applicable. For the acceleration component, the energy-depth relationship across all speed levels resulted in an equation containing both first- and second-order terms of depth; in contrast, when only the two higher speed levels were used in the regression model, the relationship between acceleration energy and depth was second order in depth. When the experimental data of acceleration energy at the 8, 16 and 24 km/h speeds were used in the regression equation, the acceleration energy-speed relationship contained both linear and quadratic terms. It was concluded that, for the tool and soil conditions used in the experiments, the 8 km/h speed produced only a linear relationship, whereas the 16 and 24 km/h speeds produced a quadratic relationship; hence, over all three speeds, both linear and quadratic terms were obtained. Considering that the tool was operated at high speeds, this research is expected to contribute valuable experimental data to researchers working in the field of soil dynamics.
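A regression of the form described above, with linear and quadratic terms in depth and speed plus a first-order moisture term, can be fit by ordinary least squares. All coefficients and data below are synthetic stand-ins, not the thesis measurements.

```python
import numpy as np

# Synthetic data mimicking the reported model form (all numbers invented):
#   E = b0 + b1*m + b2*d + b3*d**2 + b4*v + b5*v**2
d, v, m = np.meshgrid([40.0, 80.0, 120.0, 160.0],   # depth, mm
                      [1.0, 8.0, 16.0, 24.0],       # speed, km/h
                      [14.0, 20.0])                  # moisture content, %
d, v, m = d.ravel(), v.ravel(), m.ravel()
true_b = np.array([2.0, 0.1, 0.05, 0.0004, 0.3, 0.02])

X = np.column_stack([np.ones_like(d), m, d, d**2, v, v**2])
energy = X @ true_b                                  # noise-free for the sketch

# Ordinary least squares recovers the first- and second-order coefficients
b_hat, *_ = np.linalg.lstsq(X, energy, rcond=None)
```

Dropping the quadratic speed column and refitting on low-speed data only would reproduce the purely linear relationship reported for 8 km/h.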
|
695 |
Automated Construction of Macromodels from Frequency Data for Simulation of Distributed Interconnect Networks. Min, Sung-Hwan. 12 April 2004
As the complexity of interconnects and packages increases and the rise and fall time of the signal decreases, the electromagnetic effects of distributed passive devices are becoming an important factor in determining the performance of gigahertz systems. The electromagnetic behavior extracted using an electromagnetic simulation or from measurements is available as frequency dependent data. This information can be represented as a black box called a macromodel, which captures the behavior of the passive structure at the input/output ports. In this dissertation, the macromodels have been categorized as scalable, passive and broadband macromodels. The scalable macromodels for building design libraries of passive devices have been constructed using multidimensional rational functions, orthogonal polynomials and selective sampling. The passive macromodels for time-domain simulation have been constructed using filter theory and multiport passivity formulae. The broadband macromodels for high-speed simulation have been constructed using band division, selector, subband reordering, subband dilation and pole replacement. An automated construction method has been developed. The construction time of the multiport macromodel has been reduced. A method for reducing the order of the macromodel has been developed. The efficiency of the methods was demonstrated through embedded passive devices, known transfer functions and distributed interconnect networks.
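A minimal sketch of fitting a rational macromodel to frequency-domain data: a Levy-style linearization for a single-pole transfer function, solved as a real least-squares problem. The transfer function and frequency grid are illustrative; the dissertation's methods address multiport, high-order, passive and broadband cases.

```python
import numpy as np

# Sample a known one-pole transfer function H(s) = 3 / (s + 2) (illustrative)
w = np.linspace(0.1, 10.0, 50)                 # angular frequencies, rad/s
s = 1j * w
H = 3.0 / (s + 2.0)

# Levy-style linearization: H*(s + a) = b  =>  H*a - b = -H*s.
# Unknowns [a, b]; stack real and imaginary parts for a real LS problem.
A = np.column_stack([H, -np.ones_like(H)])
rhs = -H * s
A_ri = np.vstack([A.real, A.imag])
rhs_ri = np.concatenate([rhs.real, rhs.imag])
(a, b), *_ = np.linalg.lstsq(A_ri, rhs_ri, rcond=None)
```

With noise-free data the fit recovers the pole at -2 and residue 3 exactly; production macromodeling uses better-conditioned basis functions and then enforces passivity, as the abstract notes.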
|
696 |
The location and significance of the O-acetyl groups in a glucomannan from Parana pine. Katz, Gerald. 01 January 1964
see pdf
|
697 |
The Construction and Application of Hybrid Factor Model. Tao, Yun-jhen. 28 July 2010
A multifactor model is used to explain asset return and risk, and its explanatory power depends on the common factors the model uses. Researchers strive to find reasonable factors to enhance a multifactor model's efficiency; however, there are still unknown factors to be discovered. Miller (2006) presents a general concept and structure for hybrid factor models. This study follows the idea of Miller (2006) and aims to build a complete flow for constructing a hybrid factor model based on a fundamental factor model and statistical factor models. We also apply the hybrid factor model to the Taiwan stock market.
We assume that a fundamental factor model is already developed; this study therefore focuses on building the second stage, the statistical factor model. Principal Component Analysis is used to form the statistical factors, and spectral decomposition is used to prepare the data for Principal Component Analysis. These methods are applied to stocks on the Taiwan Stock Exchange over the period January 1, 2000 to December 31, 2009. This study presents a complete construction flow for hybrid factor models and further confirms that a hybrid factor model is able to find missing factors in a developing market such as Taiwan's stock market. The study also finds that the missing factors might be a market factor and an extensive electronics industry factor.
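A minimal sketch of the statistical stage, assuming hypothetical residual returns left over after the fundamental model: the spectral decomposition of their covariance matrix yields the statistical factors, and a dominant eigenvalue signals a missing common factor.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical residual returns (T days x N stocks) left unexplained by a
# fundamental factor model; one hidden common factor remains by construction.
T, N = 500, 10
hidden = rng.normal(size=T)
loadings = rng.uniform(0.5, 1.5, N)
resid = np.outer(hidden, loadings) + 0.1 * rng.normal(size=(T, N))

# Spectral decomposition of the residual covariance matrix
cov = np.cov(resid, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)          # eigh returns ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals[0] / eigvals.sum()          # variance share of 1st PC
factor = resid @ eigvecs[:, 0]                  # statistical factor scores
```

In this toy setup the first principal component absorbs most of the residual variance, which is how a hybrid model would flag a missing factor such as the market or industry factor mentioned above.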
|
698 |
Synergy of Management Buy-Out: The Case Study of Yageo Financial Analysis and Corporation Valuation. Fang, Chi-Wen. 09 September 2012
ABSTRACT
Globalization drives ever more intensive competition and leaves companies worldwide with only thin profit margins. To keep key customers, pursue corporate expansion or growth, and earn more profit, a company should keep improving its cost structure and key technology and strengthen its competitive advantage through both internal and external strategies.
"Leveraged Buyout" and "Management Buyout" are two typical operating models of private equity funds, popular and mature in the West. Recently, Asia's emerging and booming markets have attracted the interest of private equity funds. Moreover, because Taiwanese enterprises have sounder institutions, more reasonable market prices and lower interest rates, private equity funds target them as a gateway to the China market. On the other hand, Taiwanese domestic enterprises also look for investment from global private equity funds to enhance their global competitiveness.
This study examines the Yageo management buyout triggered by Oriont Investment Co., Ltd. in 2011. Through analysis of Yageo's financial reports, I derive a corporate valuation and a reasonable share price at the moment of the MBO announcement, using Porter's (1980) Five Forces Analysis to summarize the key competition and key success factors of the passive components industry. Comparison with the actual share price makes it apparent that Yageo's share price had long been undervalued, because Taiwan's minority shareholders pursue short-term returns on investment. In 2011, Yageo's management team intended to privatize the company so that it could concentrate on long-term performance and competitive improvement. After such improvement, Yageo is expected to re-list its shares on the global market so that a reasonable share price reflects its actual corporate value.
The case study highlights Taiwan's unique investment environment with respect to minority shareholders, for the reference of other global private equity funds. Furthermore, the study concludes that, in addition to protecting shareholders' rights and developing Taiwan's local market, the government should adopt a more aggressive policy to help domestic companies become global ones, because globalization is both a necessity and a survival strategy for an enterprise.
|
699 |
Historical Background Of Libyan Mosque Architecture: Assessment And Criticism Of Mosques In Ajdabiya City. Buhlfaia, Saeid Ali. 01 December 2008 (has links) (PDF)
The study attempts to trace the development of mosque architecture in Islamic history and in modern architecture worldwide, focusing specifically on the history of the mosque in Libya. It investigates whether the conformity of mosque components and finishes is due to historic, current and local Islamic prescriptions; due to functional purposes and necessities; or merely an imitation of stereotyped styles out of loyalty to mental and habitual traditions, regardless of function.
The main objective of this thesis is to study 'the lack of innovation' in mosque architecture, especially in the Libyan case. The thesis investigates the factors which may have caused this phenomenon and explores whether there is a possibility to innovate in mosque design. To this end, the thesis analyzes and evaluates the urban, spatial, architectural and performance properties of the existing mosques in the city of Ajdabiya in Libya, the results of which are expected to help architects develop innovations in mosque design.
Finally, the study asserts that acceptance of innovation is possible under the given circumstances: there are differences from one mosque to another, and there are variations of mosque elements in type and form, already varying from region to region. The main reasons for the absence of innovation are found to be unqualified designers who lack creativity, fear for the vulnerability of the heritage, and some misconceptions and misinterpretations of religious prescriptions.
|
700 |
Design And Analysis Of Fixturing In Assembly Of Sheet Metal Components Of Helicopters. Bayar, Fatih Mehmet. 01 February 2007 (has links) (PDF)
Assembling the compliant parts used in the aviation industry is a challenging process. Assembly fixtures are important tools in this effort and are widely used in industry. In fixturing easily deformable sheet metal parts, besides restraining the rigid-body motion of the parts, the possible deformations that may occur during the assembly process and the spring-back effect on the final product need to be taken into consideration. To guarantee a successful assembly, in other words, to obtain the final product within specified tolerances, a systematic approach to the fixture design problem is required. The designer should predict the correlation between the input variations and the final assembly variation, especially for complex assemblies.
This study proposes a design and analysis approach for fixturing sheet metal assemblies of helicopter components. The design of an assembly fixture for a particular tail cone has been completed in accordance with existing locating principles. Finite Element Analysis (FEA) has been used to simulate the assembly process in order to predict the possible variation of the feature of interest on a complex assembly due to deformations.
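A deliberately simplified spring model, not the thesis FEA, can illustrate why clamping force and spring-back matter in compliant assembly. All stiffness and gap values below are invented.

```python
# Deliberately simplified 1D sketch (not the thesis FEA model): a compliant
# sheet segment and its fixture idealized as two springs in series.
k_part, k_fixture = 2000.0, 10000.0   # N/mm, illustrative stiffnesses
gap = 1.5                             # mm, mismatch closed by the clamp

# Equivalent stiffness of two springs in series, and the clamping force
# required to close the locating gap
k_eq = 1.0 / (1.0 / k_part + 1.0 / k_fixture)
clamp_force = k_eq * gap              # N

# After joining, the clamp is released; the joint (stiffness k_joint)
# resists the part's elastic recovery, leaving a spring-back deflection
k_joint = 5000.0                      # N/mm, illustrative joint stiffness
springback = clamp_force / (k_part + k_joint)   # mm, smaller than the gap
```

An FEA replaces each scalar stiffness here with the full stiffness matrix of the meshed sheet, which is why it can predict the variation of a specific feature rather than a single deflection.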
|