351.
An Experimental Investigation of the Fire Characteristics of the University of Waterloo Burn House Structure. Klinck, Amanda. January 2006.
This thesis reports on the procedure, results, and analysis of four full-scale fire tests performed at the University of Waterloo's Live Fire Research Facility. The purpose of these tests was to investigate the thermal characteristics of one room of the Burn House structure. The Burn House experimental data were compared to previous residential fire studies undertaken by researchers from the University of Waterloo. This analysis showed similarities in growth-rate characteristics, illustrating that fire behaviour in the Burn House is typical of residential structure fires. The Burn House experimental data were also compared to predictions from the CFAST fire model. Recommendations were made for future work on further investigation of the fire characteristics of the Burn House.
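Growth-rate comparisons of this kind are conventionally made against the t-squared design-fire curve; as a hedged illustration (the specific growth coefficients used in the thesis are not reproduced here), the standard form is:

```latex
% t-squared design fire: heat release rate grows quadratically in time
\dot{Q}(t) = \alpha t^{2}
% conventional growth coefficients \alpha (kW/s^2):
% slow 0.00293, medium 0.01172, fast 0.0469, ultra-fast 0.1876
```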
352.
Thermal Characterization of a Pool Fire in Crosswind With and Without a Large Downwind Blocking Object. Lam, Cecilia. January 2009.
Experiments were conducted to investigate the macroscopic thermal behaviour of 2 m diameter Jet A fires in crosswinds of 3 m/s to 13 m/s. Two scenarios were considered: with and without a 2.7 m diameter, 10.8 m long blocking object situated 3.4 m downwind of the fire. These scenarios simulated transportation accidents, with the fire representing a burning pool of aviation fuel and the object simulating an aircraft fuselage. To date, the limited number of experiments examining wind effects on fire behaviour have been performed either at small scale, which does not fully simulate the physics of large fires, or in outdoor facilities with poorly controlled wind conditions. This thesis presents the first systematic characterization of the thermal environment in a large, turbulent fire under controlled wind conditions, with and without a large downwind blocking object. In experiments without the object, flame geometry was measured using temperature contour plots and video images, and the results were compared to values predicted using published correlations. Results were greatly affected by the method used to measure flame geometry and by differences in boundary conditions between experiments. Although the presence of the blocking object prevented direct measurement of flame geometry due to interaction between the fire plume and the object, temperature and heat flux measurements were analyzed to describe the overall effects of the object on fire plume development. The fire impinged on the blocking object at wind speeds below 7 m/s and interacted with the low-pressure wake region behind the object.
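One example of the kind of published correlation used for such comparisons (Thomas's correlation for the length of wind-blown flames, quoted here as background, not as the thesis's own result) is:

```latex
% Thomas's wind-blown flame length correlation
\frac{L}{D} = 55 \left( \frac{\dot{m}''}{\rho_a \sqrt{g D}} \right)^{0.67} (u^*)^{-0.21},
\qquad
u^* = \frac{u_w}{\left( g \, \dot{m}'' D / \rho_a \right)^{1/3}}
```

where L is the flame length, D the pool diameter, ṁ″ the fuel mass burning rate per unit area, ρ_a the ambient air density, and u_w the wind speed.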
Laboratory-scale experiments were also conducted to examine the responses of different heat flux gauges to controlled heating conditions simulating those found in wind-blown fires. Schmidt-Boelter, Gardon and Hemispherical Heat Flux gauges and a Directional Flame Thermometer were exposed to a convective flow and to radiation from a cone calorimeter heater. Measurements were influenced by differences between the calibration and measurement environments, differences in sensor surface temperature, and unaccounted thermal losses from the sensor plate. Heat flux results from the fires were consistent with those from the cone calorimeter, but were additionally affected by differences in location relative to the hot central core of the fire.
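As background for the gauge comparisons (an idealized balance with the loss terms only sketched; the thesis's actual correction procedure is not reproduced here), the net signal of a heat flux sensor can be written as:

```latex
% idealized energy balance at the sensor face of a heat flux gauge
q''_{meas} \approx \alpha_s \, q''_{rad}
  + h \left( T_{gas} - T_s \right)
  - \varepsilon_s \sigma T_s^{4}
  - q''_{cond}
```

where α_s and ε_s are the sensor surface absorptivity and emissivity, h the convective coefficient, T_s the sensor surface temperature, and q″_cond the conduction loss into the gauge body. The dependence on T_s and h is why differences between the calibration and measurement environments shift the readings.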
353.
Nonparametric Bayesian Methods for Multiple Imputation of Large Scale Incomplete Categorical Data in Panel Studies. Si, Yajuan. January 2012.
This thesis develops nonparametric Bayesian models to handle incomplete categorical variables in high-dimensional data sets using the framework of multiple imputation. It presents methods for ignorable missing data in cross-sectional studies, and for potentially non-ignorable missing data in panel studies with refreshment samples.

The first contribution is a fully Bayesian, joint modeling approach to multiple imputation for categorical data based on Dirichlet process mixtures of multinomial distributions. The approach automatically models complex dependencies while being computationally expedient. I illustrate the repeated-sampling properties of the approach using simulated data; it offers better performance than the default chained-equations methods often used in such settings. I apply the methodology to impute missing background data in the 2007 Trends in International Mathematics and Science Study.

For the second contribution, I extend the nonparametric Bayesian imputation engine to handle a mix of potentially non-ignorable attrition and ignorable item nonresponse in multiple-wave panel studies. Ignoring attrition in models for panel data can result in biased inference if the reason for attrition is systematic and related to the missing values. Panel data alone cannot estimate the attrition effect without untestable assumptions about the missing data mechanism. Refreshment samples offer an extra data source that can be used to estimate the attrition effect while reducing reliance on strong assumptions about the missing data mechanism.

I consider two novel Bayesian approaches for handling attrition and item nonresponse simultaneously under multiple imputation in a two-wave panel with one refreshment sample, when the variables involved are categorical and high dimensional. First, I present a semi-parametric selection model that includes an additive non-ignorable attrition model with main effects of all variables, including demographic variables and outcome measures in wave 1 and wave 2. The survey variables are modeled jointly using a Bayesian mixture of multinomial distributions. I develop posterior computation algorithms for the semi-parametric selection model under different prior choices for the regression coefficients in the attrition model. Second, I propose two Bayesian pattern mixture models for this scenario that use latent classes to model the dependency among the variables and the attrition: a dependent Bayesian latent pattern mixture model, in which variables are modeled via latent classes and attrition is treated as a covariate in the class allocation weights, and a joint Bayesian latent pattern mixture model, in which attrition and variables are modeled jointly via latent classes.

I show via simulation studies that the pattern mixture models can recover true parameter estimates even when inferences based on the panel alone are biased by attrition. I apply both the selection and pattern mixture models to data from the 2007-2008 Associated Press/Yahoo News election panel study.
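As a concrete illustration of the imputation engine's first building block, the following is a minimal truncated-stick-breaking Gibbs sketch of a Dirichlet process mixture of multinomials for categorical imputation. This is a hedged sketch written for this summary (the function name, truncation level K, and priors are assumptions), not the author's code:

```python
# Truncated DP mixture of multinomials for categorical imputation (sketch).
import numpy as np

rng = np.random.default_rng(0)

def dpmm_impute(X, n_levels, K=20, alpha=1.0, n_iter=200):
    """X: (n, p) int array with -1 marking missing cells."""
    n, p = X.shape
    miss = X == -1
    Ximp = X.copy()
    # initialize missing values uniformly at random
    for j in range(p):
        Ximp[miss[:, j], j] = rng.integers(n_levels[j], size=miss[:, j].sum())
    z = rng.integers(K, size=n)          # latent class labels
    for _ in range(n_iter):
        # 1. class-specific multinomial probabilities (Dirichlet(1) prior)
        phi = [np.stack([rng.dirichlet(1.0 + np.bincount(Ximp[z == k, j],
                         minlength=n_levels[j])) for k in range(K)])
               for j in range(p)]
        # 2. stick-breaking weights given class counts (truncation approximated)
        counts = np.bincount(z, minlength=K)
        v = rng.beta(1.0 + counts, alpha + counts[::-1].cumsum()[::-1] - counts)
        w = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
        # 3. resample class labels from their full conditional
        logp = np.log(w + 1e-300) + sum(np.log(phi[j][:, Ximp[:, j]] + 1e-300)
                                        for j in range(p)).T
        logp -= logp.max(axis=1, keepdims=True)
        prob = np.exp(logp)
        prob /= prob.sum(axis=1, keepdims=True)
        z = np.array([rng.choice(K, p=prob[i]) for i in range(n)])
        # 4. impute missing cells from their class-conditional multinomials
        for j in range(p):
            for i in np.where(miss[:, j])[0]:
                Ximp[i, j] = rng.choice(n_levels[j], p=phi[j][z[i]])
    return Ximp
```

In a multiple-imputation workflow one would retain several completed datasets drawn from well-separated iterations, rather than only the final draw returned here.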
354.
Sleepy Stack: A New Approach to Low Power VLSI and Memory. Park, Jun Cheol. 19 July 2005.
New low power solutions for Very Large Scale Integration (VLSI) are proposed, with a particular focus on leakage power reduction. Although negligible at the 0.18 µm technology node and above, leakage power becomes nearly equal to dynamic power consumption in nanoscale technologies, e.g., at 0.07 µm.
We present a novel circuit structure, called the sleepy stack, which is a combination of two well-known low-leakage techniques: the forced stack and sleep transistor techniques. Unlike the forced stack technique, the sleepy stack technique can utilize high-Vth transistors without incurring a large delay increase. Also, unlike the sleep transistor technique, the sleepy stack technique retains exact logic state while achieving similar leakage power savings. In short, the sleepy stack structure achieves ultra-low leakage power consumption while retaining logic state.
We apply the sleepy stack technique to both generic logic circuits and SRAM. At 0.07 µm technology, sleepy stack logic circuits achieve up to 200X leakage reduction compared to the forced stack technique, with small (under 7%) delay variations and 51-118% area overheads. The sleepy stack SRAM cell with 1.5×Vth achieves 5X leakage reduction with a 32% delay increase, or 2.49X leakage reduction with no delay increase, compared to the high-Vth SRAM cell. As such, the sleepy stack technique is applicable to designs that require ultra-low leakage power with quick response time, at some area and delay cost.
We also propose a new low power architectural technique named Low-Power Pipelined Cache (LPPC). Although a conventional pipelined cache is mainly used to reduce cache access time, we lower the supply voltage of the cache using LPPC to save dynamic power. We achieve 20.43% processor dynamic energy savings with a 4.14% execution cycle increase using a 2-stage low-Vdd LPPC. Furthermore, we apply LPPC to the sleepy stack SRAM. The sleepy stack pipelined SRAM achieves 17X leakage power reduction while increasing execution time by 4% on average. Although this combined technique increases active power consumption by 33%, it is well suited to systems that spend most of their time in sleep mode.
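For context (standard first-order CMOS power relations, not taken from the thesis itself), the two power components these techniques target scale as:

```latex
% dynamic switching power and subthreshold leakage current
P_{dyn} = \alpha \, C \, V_{dd}^{2} \, f, \qquad
I_{leak} \propto e^{(V_{gs}-V_{th})/(n V_{T})}
```

Lowering Vdd (as in LPPC) attacks the quadratic dynamic term, while raising Vth in the stacked sleep transistors suppresses leakage exponentially.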
355.
Physical Design Automation for System-on-Packages and 3D-Integrated Circuits. Minz, Jacob Rajkumar. 3 August 2006.
The focus of this research was to develop interconnect-centric physical design tools for 3D technologies. A new routing model for the SOP structure was developed which incorporated the 3D structure and formalized the resource structure, facilitating the development of the global routing tool. The challenge of this work was to intelligently convert the 3D SOP routing problem into a set of 2D problems which could be solved efficiently. Along the lines of MCM routing, the global routing problem was divided into a number of phases, namely coarse pin distribution, net distribution, detailed pin distribution, topology generation, layer assignment, channel assignment, and local routing. The novelty in this paradigm is due to the feed-through vias needed by nets which traverse multiple placement layers. To gain further improvements in performance, optical routing was proposed and a cost analysis study was done. The areas for the placement of waveguides were efficiently determined, which reduced delays and maximized utilization.

The global router developed was integrated into a simulated-annealing based floorplanner to investigate trade-offs among various objectives. Since power-supply noise suppression is of paramount importance in SOP, a model was developed for the SOP power-supply network, and decap allocation and insertion were integrated into the framework. The challenge in this work was to integrate computationally intensive analysis tools with a floorplanner that performs best only when evaluation of the cost functions is rapid. Trajectory-based approaches were used to sample representative data points for congestion analysis and to interpolate the congestion metric during the optimization schedule. Efficient algorithms were also proposed for 3D clock routing, which achieved equal skews under uniform and worst-case thermal profiles. Other objectives such as wirelength, through-vias, and power were also handled.
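The trade-off exploration described above rests on a standard annealing loop over a weighted multi-objective cost. The following is a hedged generic sketch (the move set `perturb`, the metric tuple, and the weights are invented for illustration, not taken from the dissertation):

```python
# Generic simulated-annealing loop over a weighted multi-objective cost.
import math
import random

def anneal(floorplan, metrics, w=(1.0, 0.5, 0.3), T=1000.0, cool=0.995, steps=20000):
    """metrics(fp) -> (wirelength, feedthrough_vias, power_noise_estimate)."""
    def cost(fp):
        return sum(wi * mi for wi, mi in zip(w, metrics(fp)))
    cur, cur_cost = floorplan, cost(floorplan)
    for _ in range(steps):
        cand = cur.perturb()          # assumed move set: swap/rotate/move a block
        cand_cost = cost(cand)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if cand_cost < cur_cost or random.random() < math.exp((cur_cost - cand_cost) / T):
            cur, cur_cost = cand, cand_cost
        T *= cool                     # geometric cooling schedule
    return cur
```

The point of the trajectory-based sampling mentioned above is precisely to make `metrics` cheap inside this inner loop, since a full congestion analysis per move would be prohibitive.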
356.
Designing High-Performance Microprocessors in 3-Dimensional Integration Technology. Puttaswamy, Kiran. 8 November 2007.
The main contribution of this dissertation is the demonstration of the impact of a new emerging technology called 3D-integration technology on conventional high-performance microprocessors. 3D-integration technology stacks active devices in the vertical dimension in addition to the conventional horizontal dimension. The additional degree of connectivity in the vertical dimension enables circuit designers to replace long horizontal wires with short vertical interconnects, thus reducing delay, power consumption, and area.
To adapt planar microarchitectures to 3D-integrated designs, we study several building blocks that together comprise a substantial portion of a processor's total transistor count. In particular, we focus our attention on three basic circuit classes: static random access memory (SRAM) circuits, associative/CAM logic circuits, and data-processing circuits in conventional high-performance processors. We propose 2-die-stacked and 4-die-stacked 3D-integrated circuits to deal with the constraints of conventional planar technology. We propose high-performance 3D-integrated microprocessors and evaluate the impact on performance, power, and temperature. We demonstrate two different approaches to improving performance: clock speed (3D-integrated processors with microarchitectural configurations identical to the corresponding planar processor run at a higher clock frequency) and IPC (3D-integrated processors accommodate larger modules than planar processors at the same frequency). We demonstrate the simultaneous benefits of 3D-integration and highlight the power density and thermal issues related to 3D-integration technology. Next, we propose microarchitectural techniques based on significance partitioning and data-width locality to effectively address the challenges of power density and temperature. We demonstrate that our microarchitecture-level techniques can effectively control the power density issues in 3D-integrated processors. The 3D-integrated processors provide a significant performance benefit over planar processors while simultaneously reducing total power. The simultaneous benefits in multiple objectives make 3D-integration a highly desirable technology for building future microprocessors. One of the key contributions of this dissertation is the temperature analysis showing that worst-case temperatures in 3D-integrated processors can be effectively controlled using microarchitecture-level techniques. The 3D-integration technology may extend the applicability of Moore's law for a few more technology generations.
357.
Design of Decentralized Block Backstepping Controllers for Large-Scale Systems to Achieve Asymptotic Stability. Wu, Min-Yan. 17 February 2011.
Based on the Lyapunov stability theorem, a design methodology for adaptive block backstepping decentralized controllers is proposed in this thesis for a class of large-scale systems with interconnections, to solve regulation problems. Each subsystem contains m blocks of state variables, and m - 1 virtual input controllers are designed from the first block to the (m - 1)th block. The proposed robust controller is then designed in accordance with the last block. Adaptive mechanisms are embedded in the backstepping controllers as well as in the virtual input controllers of each subsystem, so that the upper bounds of the interconnections and perturbations are not required. Furthermore, the dynamic equations of each subsystem need not strictly satisfy the block strict feedback form, and the resultant controlled system achieves asymptotic stability. Finally, a numerical and a practical example are given to demonstrate the feasibility of the proposed control scheme.
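For background, the core backstepping recursion (shown here for a generic two-block chain with dynamics x˙1 = x2, x˙2 = u; a textbook sketch, not the thesis's controller) proceeds as:

```latex
z_1 = x_1, \qquad \alpha_1 = -c_1 z_1, \qquad z_2 = x_2 - \alpha_1,
\\
u = -c_2 z_2 - z_1 + \dot{\alpha}_1, \qquad
V = \tfrac{1}{2} z_1^{2} + \tfrac{1}{2} z_2^{2}
\;\Rightarrow\;
\dot{V} = -c_1 z_1^{2} - c_2 z_2^{2} \le 0
```

Each virtual controller α_i stabilizes its block and the actual input is designed from the last block; this is the pattern the thesis extends to m blocks, with adaptive mechanisms replacing known interconnection and perturbation bounds.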
358.
Distributed Algorithms for SVD-based Least Squares Estimation. Peng, Yu-Ting. 19 July 2011.
Singular value decomposition (SVD) is a popular decomposition method for solving least-squares estimation problems. However, for large datasets, SVD is very time consuming and memory demanding. In this thesis, we propose a least squares estimator based on an iterative divide-and-merge scheme for large-scale estimation problems. The estimator consists of several levels. At each level, the input matrices are subdivided into submatrices; the submatrices are decomposed by SVD, and the results are merged into smaller matrices which become the input of the next level. The process is iterated until the resulting matrices are small enough to be solved directly and efficiently by the SVD algorithm. However, the iterative divide-and-merge algorithm executed on a single machine is still time demanding on large-scale datasets. We propose two distributed algorithms that overcome this shortcoming by allowing several machines to perform the decomposition and merging of the submatrices at each level in parallel. The first is implemented in MapReduce on the Hadoop distributed platform, which runs the tasks in parallel on a collection of computers. The second is implemented in CUDA, which runs the tasks in parallel on Nvidia GPUs. Experimental results demonstrate that the proposed distributed algorithms can greatly reduce the time required to solve large least-squares problems.
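The single-machine core of the scheme can be sketched as follows (a hedged illustration; the block size, function names, and the sanity check are assumptions, and the Hadoop/CUDA distribution layers are omitted). The key fact is that for a row block with thin SVD A_i = U_i S_i V_i^T, minimizing ||A_i x - b_i|| is equivalent, up to a constant, to minimizing ||S_i V_i^T x - U_i^T b_i||, so each block can be reduced independently and the reduced blocks stacked:

```python
# Iterative divide-and-merge least squares via per-block thin SVD (sketch).
import numpy as np

def svd_reduce(A, b):
    """Replace (A, b) by an equivalent small system via thin SVD.
    Since U has orthonormal columns, ||A x - b||^2 equals
    ||S V^T x - U^T b||^2 plus a constant independent of x."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return s[:, None] * Vt, U.T @ b

def divide_and_merge_lstsq(A, b, block=1024):
    rows_A, rows_b = [A], [b]
    while sum(m.shape[0] for m in rows_A) > block:
        A = np.vstack(rows_A)
        b = np.concatenate(rows_b)
        rows_A, rows_b = [], []
        # decompose each row block independently (the parallelizable step),
        # then stack the reduced blocks to form the next level's input
        for i in range(0, A.shape[0], block):
            Ri, ci = svd_reduce(A[i:i + block], b[i:i + block])
            rows_A.append(Ri)
            rows_b.append(ci)
    A = np.vstack(rows_A)
    b = np.concatenate(rows_b)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# sanity check against a direct solve
rng = np.random.default_rng(1)
A = rng.standard_normal((5000, 8))
b = A @ rng.standard_normal(8) + 0.01 * rng.standard_normal(5000)
assert np.allclose(divide_and_merge_lstsq(A, b),
                   np.linalg.lstsq(A, b, rcond=None)[0], atol=1e-6)
```

In the distributed variants, each block's `svd_reduce` would run as a map task (or a GPU kernel launch), with the stacking acting as the merge step.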
359.
Improvement to the Storage Space Allocation Plan and Assignment Mechanism for Large-Scale Objects Using System Simulation Techniques: Stainless Steel Processing Industry as an Example. Wu, Jia-Jiun. 24 August 2011.
Research into appropriate planning of storage space is not a new issue. Several journals include studies of storage space allocation planning as a main topic, but most of them focus on distribution centers rather than on warehouses for large-scale objects, and few discuss storage space allocation from the warehouse attendant's viewpoint. The cost of crane operation for large-scale objects is high; hence, inappropriate storage planning for large-scale objects may increase the turning frequency during order picking. As a result, storage operations slow down, costs rise, and competitiveness declines. Warehouse attendants usually allot storage based on their own experience and rarely transform that experience into principles. If the principles of inbound operation can be documented, and if the implicit knowledge of warehouse attendants can be expressed explicitly, the storage department will have an efficient reference for inbound operation.
Using a storage allocation strategy, this study analyzes the information needed for storage planning. It draws up several principles of storage assignment; according to different production conditions, the applicable principles are assigned to assist storage space assignment and stacking. In this way, the frequency of crane use for large-scale objects is decreased, which simultaneously speeds up shipping. Finally, this study performs a simulated analysis of inbound operation via simulation software and compares the different experimental results. It is shown that this approach can help warehouse attendants choose an appropriate mode of inbound operation according to different orders.
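As an illustration of turning an attendant's implicit rules into documented principles, the following sketch scores candidate slots for an incoming item; the slot attributes, weights, and rule set are invented for illustration and are not the study's actual principles:

```python
# Rule-based storage slot assignment for large objects (illustrative sketch).
from dataclasses import dataclass

@dataclass
class Slot:
    slot_id: str
    distance_to_dock: float   # crane travel distance, in meters
    stack_height: int         # items already piled in this slot
    max_stack: int

def score(slot: Slot, ship_soon: bool) -> float:
    """Lower is better: items shipping soon go near the dock and onto
    shallow stacks, so the crane avoids re-handling buried items."""
    if slot.stack_height >= slot.max_stack:
        return float("inf")               # slot is full
    s = slot.stack_height * 10.0          # penalize burying
    s += slot.distance_to_dock * (3.0 if ship_soon else 1.0)
    return s

def assign(slots, ship_soon: bool) -> Slot:
    return min(slots, key=lambda s: score(s, ship_soon))

slots = [Slot("A1", 5.0, 2, 3), Slot("B4", 20.0, 0, 3), Slot("C2", 12.0, 1, 3)]
print(assign(slots, ship_soon=True).slot_id)  # "A1": dock proximity outweighs its deeper stack
```

Once the rules are explicit like this, a discrete-event simulation can replay historical orders under each rule set and compare re-handling counts, which is the kind of comparison the study performs.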
360.
Design of Decentralized Adaptive Backstepping Tracking Controllers for Large-Scale Uncertain Systems. Chang, Yu-Yi. 1 February 2012.
Based on the Lyapunov stability theorem, a decentralized adaptive backstepping tracking control scheme is presented in this thesis for a class of perturbed large-scale systems in non-strict feedback form, to solve tracking problems. First, the dynamic equations of the plant to be controlled are transformed into a semi-strict feedback form. A decentralized tracking controller is then designed, based on the backstepping control methodology, so that the outputs of the controlled system track the desired signals generated by a reference model. In addition, by utilizing adaptive mechanisms embedded in the backstepping controller, the upper bounds of the perturbations and interconnections need not be known in advance. The resultant control scheme guarantees the stability of the whole large-scale system, and the tracking precision can be adjusted through the design parameters. Finally, one numerical and one practical example are presented to demonstrate the applicability of the proposed design technique.
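For background (a generic illustration of the adaptive mechanism, not the thesis's specific law), tracking is set up in error coordinates and an adaptive gain stands in for the unknown perturbation bound:

```latex
% tracking error coordinate and a typical adaptive bound estimator
z_1 = y - y_r, \qquad
\dot{\hat{\theta}} = \gamma \, |z_1|, \qquad
V = \tfrac{1}{2} z_1^{2} + \tfrac{1}{2\gamma} \tilde{\theta}^{2},
\quad \tilde{\theta} = \theta - \hat{\theta}
```

where θ is the unknown bound on the perturbation; choosing the control so that V̇ ≤ -c_1 z_1² then yields tracking without knowing θ in advance, which is the sense in which the upper bounds above are not required.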