281

Automatic design and optimisation of thermoformed thin-walled structures

Ugail, Hassan, Wilson, M.J. January 2004 (has links)
Yes / Here the design and functional optimisation of thermoformed thin-walled structures made from plastics is considered. Such objects are created in great numbers, especially in the food packaging industry. In fact, these objects are produced in such vast numbers each year that one important task in their design is the minimisation of the amount of plastic used, subject to functional constraints. In this paper a procedure for achieving this is described, which involves the automatic optimisation of the mold shape taking into account the strength of the final object and its thickness distribution, thus reducing the need to perform inefficient and expensive 'trial and error' experimentation using physical prototypes. An efficient technique for parameterising geometry is utilised here, making it possible to create a wide variety of possible mold shapes on which appropriate analysis can be performed. The results of the analysis are used within an automatic optimisation routine that finds a design satisfying user requirements. Thus, the paper describes a rational means for the automatic optimal design of composite thermoformed thin-walled structures.
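The optimisation loop described above can be sketched as a small constrained-minimisation program: search a parameterised mold shape for minimum plastic volume subject to a strength constraint. The sketch below is a hypothetical stand-in, assuming a three-parameter shape; `plastic_volume` and `min_strength` are invented surrogates, not the paper's geometry parameterisation or structural analysis.

```python
import numpy as np
from scipy.optimize import minimize

def plastic_volume(shape):
    """Stand-in for the plastic volume of the parameterised part."""
    depth, radius, wall = shape
    return np.pi * radius**2 * wall + 2 * np.pi * radius * depth * wall

def min_strength(shape):
    """Stand-in for a structural analysis returning worst-case strength."""
    depth, radius, wall = shape
    return 1000.0 * wall / (1.0 + 0.5 * depth / radius)

required_strength = 1.0                       # functional constraint
x0 = np.array([0.05, 0.03, 0.002])            # depth, radius, wall (m)
bounds = [(0.02, 0.10), (0.02, 0.05), (0.0005, 0.003)]
cons = {"type": "ineq", "fun": lambda s: min_strength(s) - required_strength}

res = minimize(plastic_volume, x0, method="SLSQP",
               bounds=bounds, constraints=cons)
print("optimal (depth, radius, wall):", res.x.round(4))
```

In the paper's setting the analysis step would be a full evaluation of the thermoformed part's strength and thickness distribution; only the outer optimisation structure is reflected here.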
282

Designing Power Converter-Based Energy Management Systems with a Hierarchical Optimization Method

Li, Qian 10 June 2024 (has links)
This dissertation introduces a hierarchical optimization framework for power converter-based energy management systems, with a primary focus on weight minimization. Emphasizing modularity and scalability, the research systematically tackles the challenges in optimizing these systems, addressing complex design variables, couplings, and the integration of heterogeneous models. The study begins with a comparative evaluation of various metaheuristic optimization methods applied to power inductors and converters, including genetic algorithm, particle swarm optimization, and simulated annealing. This is complemented by a global sensitivity analysis using the Morris method to understand the impact of different design variables on the design objectives and constraints in power electronics. Additionally, a thorough evaluation of different modeling methods for key components is conducted, leading to the validation of selected analytical models at the component level through extensive experiments. Further, the research progresses to studies at the converter level, focusing on a weight-optimized design for the thermal management systems for silicon carbide (SiC) MOSFET-based modular converters and the development of a hierarchical digital control system. This stage includes a thorough assessment of the accuracy of small-signal models for modular converters. At this point, the research methodically examines various design constraints, notably thermal considerations and transient responses. This examination is critical in understanding and addressing the specific challenges associated with converter-level design and the implications on system performance. The dissertation then presents a systematic approach where design variables and constraints are intricately managed across different hierarchies. This strategy facilitates the decoupling of subsystem designs within the same hierarchy, simplifying future enhancements to the optimization process. For example, component databases can be expanded effortlessly, and diverse topologies for converters and subsystems can be incorporated without the need to reconfigure the optimization framework. Another notable aspect of this research is the exploration of the scalability of the optimization architecture, demonstrated through design examples. This scalability is pivotal to the framework's effectiveness, enabling it to adapt and evolve alongside technological advancements and changing design requirements. Furthermore, this dissertation delves into the data transmission architecture within the hierarchical optimization framework. This architecture is not only critical for identifying optimal performance measures, but also for conveying detailed design information across all hierarchy levels, from individual components to entire systems. The interrelation between design specifications, constraints, and performance measures is illustrated through practical design examples, showcasing the framework's comprehensive approach. In summary, this dissertation contributes a novel, modular, and scalable hierarchical optimization architecture for the design of power converter-based energy management systems. It offers a comprehensive approach to managing complex design variables and constraints, paving the way for more efficient, adaptable, and cost-effective power system designs. / Doctor of Philosophy / This dissertation introduces an innovative approach to designing energy control systems, inspired by the creativity and adaptability of a Lego game. 
Central to this concept is a layered design methodology. The journey begins with power components, the fundamental 'Lego bricks'. Each piece is meticulously optimized for compactness, forming the robust foundation of the system. Like connecting individual Lego bricks into a module, these power components come together to form standardized power converters. These converters offer flexibility and scalability, similar to how numerous structures can be built from the same set of Lego pieces. The final layer involves assembling these power converters in order to construct comprehensive energy control systems. This mirrors the process of using Lego subassemblies to build larger, more intricate structures. At this system-level design, the standardized converters are integrated to optimize overall system performance. Key to this dissertation's methodology is an emphasis on modularity and scalability. It enables the creation of diverse energy control systems of varying sizes and functionalities from these fundamental units. The research delves into the intricacies of design variables and constraints, ensuring that each 'Lego piece' contributes optimally to the bigger picture. This includes exploring the scalability of the architecture, allowing it to evolve with technological advancements and design requirements, as well as examining data transmission within the system to ensure efficient data communication across all levels. In essence, this dissertation is about recognizing the potential in the smallest components and understanding their role in the grand scheme of the system. It is akin to playing a masterful game of Lego, where building something greater from small, well-designed parts leads to more efficient, adaptable, and cost-effective energy control system designs. This approach is particularly relevant for applications in transportation systems and renewable energy in remote locations, showcasing the universal applicability of this 'Lego game' to energy management.
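As a rough illustration of the layered methodology, the sketch below works bottom-up: component-level candidates feed a converter-level selection, which in turn feeds a system-level weight minimisation. All component data are invented placeholders, and the real framework manages far richer design variables, couplings, and data transmission between levels.

```python
from itertools import product

# Component level: (weight in kg, current rating in A) candidates.
inductors = [(0.8, 20), (1.1, 30), (1.6, 45)]
heatsinks = [(0.5, 25), (0.9, 40)]

def converter_candidates(required_current):
    """Converter level: all component combinations meeting the spec."""
    designs = []
    for (wl, il), (wh, ih) in product(inductors, heatsinks):
        if min(il, ih) >= required_current:
            designs.append({"weight": wl + wh, "rating": min(il, ih)})
    return sorted(designs, key=lambda d: d["weight"])

def system_weight(channel_currents):
    """System level: one converter per channel, minimising total weight."""
    return sum(converter_candidates(amps)[0]["weight"]
               for amps in channel_currents)

print("total weight (kg):", system_weight([18, 28]))
```

Because each level exchanges only candidate sets and specifications, a new component database or converter topology slots in without touching the levels above, which is the modularity argument the abstract makes.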
283

Representation Learning Based Causal Inference in Observational Studies

Lu, Danni 22 February 2021 (has links)
This dissertation investigates novel statistical approaches for causal effect estimation in observational settings, where controlled experimentation is infeasible and confounding is the main hurdle in estimating causal effects. As such, deconfounding constitutes the main subject of this dissertation: (i) restoring the covariate balance between treatment groups, and (ii) attenuating spurious correlations in training data to derive valid causal conclusions that generalize. By incorporating ideas from representation learning, adversarial matching, generative causal estimation, and invariant risk modeling, this dissertation establishes a causal framework that balances the covariate distribution in latent representation space to yield individualized estimations, and further contributes novel perspectives on causal effect estimation based on invariance principles. The dissertation begins with a systematic review and examination of classical propensity score based balancing schemes for population-level causal effect estimation, presented in Chapter 2. Three causal estimands that target different foci in the population are considered: average treatment effect on the whole population (ATE), average treatment effect on the treated population (ATT), and average treatment effect on the overlap population (ATO). The procedure is demonstrated in a naturalistic driving study (NDS) to evaluate the causal effect of cellphone distraction on crash risk. While highlighting the importance of adopting causal perspectives in analyzing risk factors, discussions on the limitations in balance efficiency, robustness against high-dimensional data and complex interactions, and the need for individualization are provided to motivate subsequent developments. Chapter 3 presents a novel generative Bayesian causal estimation framework named Balancing Variational Neural Inference of Causal Effects (BV-NICE). By appealing to the Robinson factorization and a latent Bayesian model, a novel variational bound on the likelihood is derived, explicitly characterized by the causal effect and propensity score. Notably, by treating observed variables as noisy proxies of unmeasurable latent confounders, the variational posterior approximation is re-purposed as a stochastic feature encoder that fully acknowledges representation uncertainties. To resolve the imbalance in representations, BV-NICE enforces KL-regularization on the respective representation marginals using Fenchel mini-max learning, justified by a new generalization bound on the counterfactual prediction accuracy. The robustness and effectiveness of this framework are demonstrated through an extensive set of tests against competing solutions on semi-synthetic and real-world datasets. In recognition of the reliability issue when extending causal conclusions beyond training distributions, Chapter 4 argues that ascertaining causal stability is the key, and introduces a novel procedure called Risk Invariant Causal Estimation (RICE). By carefully re-examining the relationship between statistical invariance and causality, RICE leverages the observed data disparities to enable the identification of stable causal effects. Concretely, the causal inference objective is reformulated under the framework of invariant risk modeling (IRM), where a population-optimality penalty is enforced to filter out un-generalizable effects across heterogeneous populations.
Importantly, RICE allows settings where counterfactual reasoning with unobserved confounding or biased sampling designs becomes feasible. The effectiveness of this new proposal is verified with respect to a variety of study designs on real and synthetic data. In summary, this dissertation presents a flexible causal inference framework that acknowledges representation uncertainties and data heterogeneities. It enjoys three merits: improved balance for complex covariate interactions, enhanced robustness to unobservable latent confounders, and better generalizability to novel populations. / Doctor of Philosophy / Reasoning about cause and effect is an innate human ability. While the drive to understand cause and effect is instinctive, the rigorous reasoning process is usually trained through the observation of countless trials and failures. In this dissertation, we embark on a journey to explore various principles and novel statistical approaches for causal inference in observational studies. Throughout the dissertation, we focus on causal effect estimation, which answers questions like "what if" and "what could have happened". The causal effect of a treatment is measured by comparing the outcomes corresponding to different treatment levels of the same unit, e.g., "what if the unit is treated instead of not treated?". The challenge lies in the fact that (i) a unit only receives one treatment at a time, and therefore it is impossible to directly compare outcomes of different treatment levels; and (ii) comparing the outcomes across different units may involve bias due to confounding, as the treatment assignment potentially follows a systematic mechanism. Therefore, deconfounding is the main hurdle in estimating causal effects. This dissertation presents two parallel principles of deconfounding: (i) balancing, i.e., comparing differences under similar conditions; and (ii) contrasting, i.e., extracting invariance under heterogeneous conditions. Chapter 2 and Chapter 3 explore causal effects through balancing, with the former systematically reviewing a classical propensity score weighting approach in a conventional data setting and the latter presenting a novel generative Bayesian framework named Balancing Variational Neural Inference of Causal Effects (BV-NICE) for high-dimensional, complex, and noisy observational data. It incorporates advanced deep learning techniques from representation learning, adversarial learning, and variational inference. The robustness and effectiveness of the proposed framework are demonstrated through an extensive set of experiments. Chapter 4 extracts causal effects through contrasting, emphasizing that ascertaining stability is key to causality. A novel causal effect estimation procedure called Risk Invariant Causal Estimation (RICE) is proposed that leverages observed data disparities to enable the identification of stable causal effects. The improved generalizability of RICE is demonstrated through synthetic data with different structures, compared with state-of-the-art models. In summary, this dissertation presents a flexible causal inference framework that acknowledges data uncertainties and heterogeneities. By promoting two different aspects of causal principles and integrating advanced deep learning techniques, the proposed framework shows improved balance for complex covariate interactions, enhanced robustness to unobservable latent confounders, and better generalizability to novel populations.
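The Chapter 2 balancing schemes rest on standard propensity-score weights: ATE weights treated and control units by 1/e(x) and 1/(1-e(x)), ATT leaves treated units unweighted, and ATO uses the overlap weights 1-e(x) and e(x). A minimal sketch on synthetic data (not the NDS data analyzed in the dissertation) might look as follows:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 2))
e_true = 1 / (1 + np.exp(-(0.8 * x[:, 0] - 0.5 * x[:, 1])))
t = rng.binomial(1, e_true)                  # confounded treatment
y = 2.0 * t + x[:, 0] + rng.normal(size=n)   # true causal effect = 2

e = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]

def weighted_effect(w_treated, w_control):
    return (np.average(y[t == 1], weights=w_treated[t == 1])
            - np.average(y[t == 0], weights=w_control[t == 0]))

ate = weighted_effect(1 / e, 1 / (1 - e))
att = weighted_effect(np.ones(n), e / (1 - e))
ato = weighted_effect(1 - e, e)              # overlap weights
print(f"ATE {ate:.2f}  ATT {att:.2f}  ATO {ato:.2f}")
```

With a homogeneous effect, as here, all three estimands agree near 2; they diverge when effects vary across the population, which is why the target population must be chosen deliberately.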
284

Efficiency of Logic Minimization Techniques for Cryptographic Hardware Implementation

Raghuraman, Shashank 15 July 2019 (has links)
With significant research effort being directed towards designing lightweight cryptographic primitives, logical metrics such as gate count are extensively used in estimating their hardware quality. Specialized logic minimization tools have been built to make use of gate count as the primary optimization cost function. The first part of this thesis investigates the effectiveness of such logical metrics in predicting the hardware efficiency of the corresponding circuits. Mapping a logical representation onto hardware depends on the standard cell technology used, and is driven by trade-offs between area, performance, and power. This work evaluates the aforementioned parameters for circuits optimized for gate count, and compares them with a set of benchmark designs. Extensive analysis is performed over a wide range of frequencies at multiple levels of abstraction and system integration, to understand the different regions in the solution space where such logic minimization techniques are effective. A prototype System-on-Chip (SoC) is designed to benchmark the performance of these circuits on actual hardware. This SoC is built with the aim of including multiple other cryptographic blocks for analysis of their hardware efficiency. The second part of this thesis analyzes the overhead involved in integrating selected authenticated encryption ciphers onto an SoC, and explores different design alternatives for the same. Overall, this thesis is intended to serve as a comprehensive guideline on hardware factors that are easily overlooked but must be considered during logical-to-physical mapping and during the integration of standalone cryptographic blocks onto a complete system. / Master of Science / The proliferation of embedded smart devices for the Internet-of-Things necessitates a constant search for smaller and power-efficient hardware. The need to ensure security of such devices has been driving extensive research on lightweight cryptography, which focuses on minimizing the logic footprint of cryptographic hardware primitives. Different designs are optimized, evaluated, and compared based on the number of gates required to express them at a logical level of abstraction. The expectation is that circuits requiring fewer gates to represent their logic will be smaller and more efficient on hardware. However, converting a logical representation into a hardware circuit, known as “synthesis”, is not trivial. The logic is mapped to a “library” of hardware cells, and one of many possible solutions for a function is selected - a process driven by trade-offs between area, speed, and power consumption on hardware. Our work studies the impact of synthesis on logical circuits with minimized gate count. We evaluate the hardware quality of such circuits by comparing them with that of benchmark designs over a range of speeds. We wish to answer questions such as “At what speeds do logical metrics rightly predict area- and power-efficiency?” and “What impact does this have after integrating cryptographic primitives onto a complete system?”. As part of this effort, we build a System-on-Chip in order to observe the efficiency of these circuits on actual hardware. This chip also includes recently developed ciphers for authenticated encryption. The second part of this thesis explores different ways of integrating these ciphers onto a system, to understand their effect on the ciphers’ compactness and performance.
Our overarching aim is to provide a suitable reference on how synthesis and system integration affect the hardware quality of cryptographic blocks, for future research in this area.
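A small numeric illustration of the gap this thesis probes: raw gate count can rank two designs as equal even though their mapped areas differ, because library cells occupy different silicon areas. Gate equivalents (GE) normalize total cell area to that of a NAND2. The cell areas below are invented placeholders assuming a hypothetical library; real values come from the standard-cell library used in synthesis.

```python
NAND2_AREA = 1.0  # reference cell: GE = total cell area / NAND2 area
cell_area = {"NAND2": 1.0, "NOR2": 1.0, "XOR2": 2.5, "AOI22": 1.8, "DFF": 4.5}

def gate_equivalents(netlist):
    """netlist: dict mapping cell type to instance count."""
    return sum(cell_area[c] * n for c, n in netlist.items()) / NAND2_AREA

# Same raw instance count (1000 cells each), different area after mapping.
design_a = {"NAND2": 700, "XOR2": 200, "DFF": 100}
design_b = {"NAND2": 400, "NOR2": 400, "AOI22": 100, "DFF": 100}
for name, d in [("A", design_a), ("B", design_b)]:
    print(name, sum(d.values()), "cells ->", gate_equivalents(d), "GE")
```

Even this static view ignores the frequency-dependent cell sizing and buffering that synthesis performs, which is precisely why the thesis evaluates circuits across a range of target speeds.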
285

An Interactive Chemical Equilibrium Solver for the Personal Computer

Negus, Charles H. 20 February 1997 (has links)
The Virginia Tech Equilibrium Chemistry (VTEC) code is a keyboard-interactive, user-friendly chemical equilibrium solver for use on a personal computer. The code is particularly suitable for a teaching/learning environment. For a set of reactants at a user-defined thermodynamic state, the program selects all species in the JANAF thermochemical database which could exist in the products. The program then calculates the equilibrium composition, flame temperature, and other thermodynamic properties for many common cases. Examples in this thesis show VTEC's ability to predict chemical equilibrium compositions and flame temperatures for selected reactions, demonstrate how VTEC can substitute for and aid in the design of lab experiments, and identify trends in parametric studies. The 1976 NASA Lewis Chemical Equilibrium Code (CEC76), from which VTEC has been adapted, uses Lagrange multipliers to minimize free energy. CEC76 was written for mainframe computer use. Later versions of CEC76, adapted for personal computer use, are available for a fee and have a very minimal user interface. / Master of Science
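The free-energy minimisation that CEC76 handles with Lagrange multipliers can be sketched directly as a constrained optimisation: minimise total Gibbs energy over species mole numbers subject to element balance. The toy hydrogen-oxygen example below uses an illustrative dimensionless g°/RT value rather than JANAF data, and a generic SLSQP solver in place of CEC76's iteration scheme.

```python
import numpy as np
from scipy.optimize import minimize

species = ["H2", "O2", "H2O"]
g0_RT = np.array([0.0, 0.0, -23.0])      # illustrative g°/RT, not JANAF data
A = np.array([[2, 0, 2],                 # H atoms per molecule
              [0, 2, 1]])                # O atoms per molecule
b = A @ np.array([2.0, 1.0, 0.0])        # element totals for 2 H2 + 1 O2

def gibbs(n):
    n = np.maximum(n, 1e-12)             # keep logarithms finite
    return np.sum(n * (g0_RT + np.log(n / n.sum())))

cons = {"type": "eq", "fun": lambda n: A @ n - b}
res = minimize(gibbs, np.array([1.0, 0.5, 1.0]), method="SLSQP",
               bounds=[(0.0, None)] * 3, constraints=cons)
print(dict(zip(species, res.x.round(4))))
```

The strongly negative g°/RT for water drives the equilibrium toward roughly two moles of H2O, as expected; VTEC's contribution lies in doing this over the full JANAF species set at a user-specified thermodynamic state.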
286

Development and Testing Of The iCACC Intersection Controller For Automated Vehicles

Zohdy, Ismail Hisham 28 October 2013 (has links)
Assuming that vehicle connectivity technology matures and connected vehicles hit the market, many vehicles on the road will be equipped with highly sophisticated sensors and communication hardware. Along with the goal of eliminating distracted driving and increasing vehicle automation, it is necessary to develop novel intersection control strategies. Accordingly, the research presented in this dissertation develops an innovative system that controls the movement of vehicles using cooperative adaptive cruise control (CACC) capabilities, entitled iCACC (intersection management using CACC). In the iCACC system, the main assumption is that the intersection controller receives requests from vehicles and advises each vehicle on the optimum course of action, ensuring that no crashes occur while at the same time minimizing intersection delay. In addition, an innovative framework (the APP framework) has been developed using the iCACC platform to prioritize the movements of vehicles based on the number of passengers in the vehicle. Using CACC and vehicle-to-infrastructure connectivity, the system was also applied to a single-lane roundabout; in general terms, this application is quite similar to the concept of metering single-lane entrance ramps. The proposed iCACC system was tested and compared to three other intersection control strategies, namely traffic signal control, all-way stop control (AWSC), and a roundabout, considering different traffic demand levels ranging from low to high levels of congestion (volume-to-capacity ratio from 0.2 to 0.9). The simulation results showed savings in delay and fuel consumption on the order of 90% and 45%, respectively, compared to AWSC and traffic signal control. Delays for the roundabout and the iCACC controller were comparable. The simulation results showed that fuel consumption for the iCACC controller was, on average, 33%, 45% and 11% lower than the fuel consumption for the traffic signal, AWSC and roundabout control strategies, respectively. In summary, the developed iCACC system is innovative because of its ability to optimize/model different levels of vehicle automation market penetration, weather conditions, vehicle classes/models, shared movements, roundabouts, and passenger priority. In addition, iCACC is capable of capturing the heterogeneity of roadway users (cyclists, pedestrians, etc.) using a video detection technique developed in this dissertation effort. It is anticipated that the research findings will contribute to the application of automated systems, connected vehicle technology, and the future of driverless vehicle management. Finally, the public acceptability of new advanced in-vehicle technologies is a challenging issue, and this research will provide valuable feedback for researchers, automobile manufacturers, and decision makers in making the case to introduce such systems. / Ph. D.
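The core advisory idea, granting each vehicle the earliest conflict-free crossing slot, can be sketched as a first-come-first-served reservation scheme. This is a deliberate simplification with assumed values: the actual iCACC controller optimises trajectories and handles passenger priority, weather, and heterogeneous road users, none of which appear below.

```python
SAFETY_GAP = 2.0  # seconds between conflicting movements (illustrative)

class IntersectionController:
    def __init__(self):
        self.zone_free_at = {}           # conflict zone -> time it frees up

    def request(self, zone, earliest_arrival, crossing_time):
        """Grant the earliest crossing start that avoids the conflict zone."""
        start = max(earliest_arrival, self.zone_free_at.get(zone, 0.0))
        self.zone_free_at[zone] = start + crossing_time + SAFETY_GAP
        return start, start - earliest_arrival   # (start time, delay)

ctrl = IntersectionController()
for veh, zone, eta in [("A", "NS-EW", 10.0), ("B", "NS-EW", 10.5),
                       ("C", "WB-right", 11.0)]:
    start, delay = ctrl.request(zone, eta, crossing_time=3.0)
    print(f"vehicle {veh}: enter at t={start:.1f}s (delay {delay:.1f}s)")
```

Vehicle B, arriving half a second behind A on a conflicting movement, is held until A clears plus the safety gap, while C proceeds unimpeded through an independent conflict zone.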
287

Exploring the community waste sector: Are sustainable development and social capital useful concepts for project-level research?

Luckin, D., Sharp, Liz January 2005 (has links)
Yes / The concept of sustainable development implies that social, economic and environmental objectives should be delivered together, and that they can be achieved through enhanced community participation. The concept of social capital indicates how these objectives interrelate, implying that community involvement enhances trust and reciprocity, thus promoting better governance and greater prosperity. This paper draws on a survey of Community Waste Projects to explore how these concepts can inform investigations of community projects. It argues that the concepts provide useful guides to research and debate, but highlights the resource requirements of empirically confirming the claims of the social capital perspective.
288

Low Power Test Methodology For SoCs : Solutions For Peak Power Minimization

Tudu, Jaynarayan Thakurdas 07 1900 (has links) (PDF)
Power dissipated during scan testing is becoming increasingly important for today’s very complex sequential circuits. It has been shown that the power dissipated during test mode operation is in general higher than the power dissipated during functional mode operation: the test mode average power may go up to 3x, and the peak power up to 30x, of normal mode operation. The power dissipated during the scan operation is primarily due to the switching activity that arises in scan cells during the shift and capture operations. The switching in scan cells propagates to the combinational block of the circuit during scan operation, which in turn creates many transitions in the circuit and hence causes higher dynamic power dissipation. The excessive average power dissipated during scan operation causes circuit damage due to higher temperature, and the excessive peak power causes yield loss due to IR-drop and crosstalk. The higher peak power also causes thermal issues if it lasts for a sufficiently large number of cycles. Hence, to avoid all these issues it is very important to reduce the peak power during scan testing. Further, in the case of multi-module SoC testing, the reduction in peak power facilitates reducing the test application time by scheduling many test sessions in parallel. In this dissertation we have addressed all the issues stated above. We have proposed three different techniques to deal with the excessive peak power dissipation problem during test. The first solution proposes an efficient graph-theoretic methodology for test vector reordering to achieve the minimum peak power supported by the given test vector set. Three graph-theoretic problems are formulated and corresponding algorithms to solve them are proposed. The proposed methodology also minimizes average power for the given minimum peak power. Further, a lower bound on the minimum achievable peak power for a given test set is defined. The results on several benchmarks show that the proposed methodology is able to reduce peak power significantly. To address the peak power problem during the scan test cycle (the cycle between the launch and capture pulses), we have proposed a scan chain reordering technique, with a new formulation of scan chain reordering as a Traveling Salesperson Problem (TSP) and a corresponding solution. The experimental results show that the proposed methodology minimizes a considerable amount of peak power compared to earlier proposals. The capture power (power dissipated during the capture cycle) problem in testing multi-chip modules (MCM) is also addressed. We have proposed a methodology to schedule the test set to reduce capture power. The scheduling algorithm consists of reordering test vectors and inserting idle cycles to prevent capture cycle coincidence across the scheduled cores. The experimental results show a significant reduction in capture power without an increase in test application time.
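The flavour of the graph-based reordering can be shown with a toy heuristic: treat test vectors as nodes, take the Hamming distance between consecutive vectors as a proxy for shift switching activity, and build a low-cost order greedily. The thesis's actual formulations, cost model, and optimality bounds are more refined than this nearest-neighbour sketch.

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def reorder(vectors):
    """Greedy nearest-neighbour tour over the Hamming-distance graph."""
    remaining = list(vectors)
    tour = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda v: hamming(tour[-1], v))
        remaining.remove(nxt)
        tour.append(nxt)
    return tour

def peak_transitions(order):
    """Worst-case bit flips between consecutive vectors (peak-power proxy)."""
    return max(hamming(a, b) for a, b in zip(order, order[1:]))

tests = ["0000", "1111", "0011", "1100", "0110"]
print("original peak:", peak_transitions(tests))            # 4
print("reordered peak:", peak_transitions(reorder(tests)))  # 2
```

Reordering changes no test content, only its schedule, which is what makes it attractive: fault coverage is preserved while worst-case switching drops.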
289

Algorithms for Homogeneous Quadratic Minimization And Applications in Wireless Networks

Gaurav, Dinesh Dileep January 2016 (has links) (PDF)
The massive proliferation of wireless devices throughout the world in the past decade comes with a host of tough and demanding design problems. Noise at receivers and wireless interference are the two major issues which severely limit the received signal quality and the number of users that can be simultaneously served. Traditional approaches to these problems are known as Power Control (PC), SINR Balancing (SINRB), and User Selection (US) in wireless networks, respectively. Interestingly, for a large class of wireless system models, both of these problems have a generic form; thus any approach to this generic optimization problem benefits the transceiver design of all the underlying wireless models. In this thesis, we propose an eigen-based approach built on the Joint Numerical Range (JNR) of Hermitian matrices for the PC, SINRB, and US problems for a class of wireless models. In the beginning of the thesis, we address the PC and SINRB problems. PC problems can be expressed as Homogeneous Quadratically Constrained Quadratic Optimization Problems (HQCQP), which are known to be NP-hard in general. Leveraging their connection to the JNR, we show that when the constraints are few, HQCQP problems admit iterative schemes which are considerably fast compared to the state of the art and have guarantees of global convergence. In the general case of any number of constraints, we show that the true solution can be bounded above and below by two convex optimization problems. Our numerical simulations suggest that the bounds are tight in almost all scenarios, indicating achievement of the true solution. Further, the SINRB problems are shown to be intimately related to PC problems, and thus share the same approach. We then proceed to comment on the convexity of PC and SINRB problems in the general case of any number of constraints. We show that they are intimately related to the convexity of the joint numerical range. Based on this connection, we derive results on the attainability of the solution and comment likewise on the state-of-the-art technique of Semi-Definite Relaxation (SDR). In the subsequent part of the thesis, we address the US problem. We show that the US problem can be formulated as a combinatorial problem of selecting a feasible subset of quadratic constraints. We propose two approaches to the US problem. The first approach is based on the JNR viewpoint, which allows us to propose a heuristic approach; the heuristic is then shown to be equivalent to a convex optimization problem. In the second approach, we show that US is equivalent to another non-convex optimization problem, for which we propose a convex approximation approach. Both approaches are shown to have near-optimal performance in simulations. We conclude the thesis with a discussion on applicability and extensions to other classes of optimization problems, and some open problems that have come out of this work.
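The semi-definite relaxation (SDR) technique discussed above lifts the homogeneous QCQP, minimise x'Ax subject to x'B_i x >= 1, to a semidefinite program in X = xx' and drops the rank-one constraint. A minimal sketch with random stand-in matrices (not an actual wireless channel model), using cvxpy:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, m = 4, 2
A = np.eye(n)                                  # e.g. total transmit power x'x
Bs = []
for _ in range(m):
    h = rng.normal(size=(n, 1))
    Bs.append(h @ h.T)                         # rank-one stand-in constraints

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0] + [cp.trace(B @ X) >= 1 for B in Bs]
prob = cp.Problem(cp.Minimize(cp.trace(A @ X)), constraints)
prob.solve()

# If the optimal X is (numerically) rank one, the relaxation is tight and
# its leading eigenvector recovers a solution of the original QCQP.
eigvals = np.linalg.eigvalsh(X.value)
print("objective:", round(prob.value, 4), " top-2 eigenvalues:", eigvals[-2:])
```

When the recovered X is not rank one, randomization or projection steps are needed; tightness is exactly the attainability question the thesis connects to the convexity of the joint numerical range.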
290

Improving Fuel Efficiency of Commercial Vehicles through Optimal Control of Energy Buffers

Khodabakhshian, Mohammad January 2016 (has links)
Fuel consumption reduction is one of the main challenges in the automotive industry due to its economic and environmental impacts as well as legal regulations. While fuel consumption reduction is important for all vehicles, it has larger benefits for commercial ones due to their long operational times and much higher fuel consumption. Optimal control of multiple energy buffers within the vehicle proves an effective approach for reducing energy consumption. Energy is temporarily stored in a buffer when its cost is small and released when it is relatively expensive. An example of an energy buffer is the vehicle body. Before going up a hill, the vehicle can accelerate to increase its kinetic energy, which can then be consumed on the uphill stretch to reduce the engine load. This simple strategy proves effective for reducing fuel consumption. The thesis generalizes the energy buffer concept to various vehicular components with distinct physical disciplines so that they share the same model structure reflecting energy flow. The thesis furthermore improves widely applied control methods and applies them to new applications. The contribution of the thesis can be summarized as follows:
• Developing a new function to make the equivalent consumption minimization strategy (ECMS) controller (one of the well-known optimal energy management methods in hybrid electric vehicles (HEVs)) more robust.
• Developing an integrated controller to optimize torque split and gear number simultaneously, both reducing fuel consumption and improving drivability of HEVs.
• Developing a one-step prediction control method for improving the gear-changing decision.
• Studying the potential fuel efficiency improvement of using an electromechanical brake (EMB) on a hybrid electric city bus.
• Evaluating the potential improvement in fuel economy of the electrically actuated engine cooling system through an off-line global optimization method.
• Developing a linear time-variant model predictive controller (LTV-MPC) for the real-time control of the electric engine cooling system of heavy trucks and implementing it on a real truck.
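The ECMS rule that the first contribution robustifies can be sketched in a few lines: at each instant, choose the torque split minimising fuel power plus an equivalence factor times battery electrical power. The efficiencies, battery limit, and factor below are illustrative placeholders; the thesis's adaptive handling of the factor is not captured here.

```python
import numpy as np

LAMBDA = 2.5            # equivalence factor: fuel energy per battery energy
ENG_EFF, MOT_EFF = 0.35, 0.90
P_BATT_MAX = 15000.0    # battery discharge limit in W (illustrative)

def ecms_split(torque_demand, speed):
    """Return the motor share u in [0, 1] minimising equivalent fuel power."""
    best_u, best_cost = 0.0, float("inf")
    for u in np.linspace(0.0, 1.0, 21):
        p_batt = u * torque_demand * speed / MOT_EFF        # electrical power
        p_fuel = (1 - u) * torque_demand * speed / ENG_EFF  # chemical power
        if p_batt > P_BATT_MAX:
            continue                                        # infeasible split
        cost = p_fuel + LAMBDA * p_batt
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# 120 Nm at 150 rad/s: the battery limit caps the motor share at 0.75.
print("motor share:", ecms_split(torque_demand=120.0, speed=150.0))
```

With constant efficiencies the optimum is bang-bang, so the active constraint decides the split; keeping the equivalence factor well-tuned under varying conditions is the robustness issue the thesis targets.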
