541.
Evaluation of Passive Force on Skewed Bridge Abutments with Large-Scale Tests. Marsh, Aaron Kirt, 18 March 2013.
Accounting for seismic forces and thermal expansion in bridge design requires an accurate passive force versus backwall deflection relationship. Current design codes make no allowance for skew effects on the development of the passive force. However, small-scale experimental results and available numerical models indicate that there is a significant reduction in peak passive force as skew angle increases for plane-strain cases. To further explore this issue, large-scale field tests were conducted at skew angles of 0°, 15°, and 30° with unconfined backfill geometry. The abutment backwall was 11 feet (3.35 m) wide by 5.5 feet (1.68 m) high, and the backfill material consisted of dense compacted sand. The peak passive force for the 15° and 30° tests was found to be 73% and 58%, respectively, of the peak passive force for the 0° test, which is in good agreement with the small-scale laboratory tests and numerical model results. The small differences that remain suggest that backfill properties (e.g., geometry and density) may have a slight effect on the reduction in peak passive force with respect to skew angle. Longitudinal displacement of the backfill at the peak passive force was approximately 3% of the backfill height for all field tests, consistent with previously reported values for large-scale passive force-deflection tests, though skew angle may slightly reduce the deflection necessary to reach backfill failure. The backfill failure mechanism appears to transition from a log spiral mechanism in which Prandtl and Rankine failure zones develop at low skew angles to a mechanism in which a Prandtl failure zone does not develop as the skew angle increases.
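The measured reductions above can be expressed as a simple interpolated reduction factor. The sketch below is an illustration built only from the measured ratios reported in this abstract (0°: 1.00, 15°: 0.73, 30°: 0.58), not a design method from the thesis; function names and the force units are my assumptions.

```python
def skew_reduction_factor(skew_deg):
    """Linearly interpolate the peak-passive-force reduction factor
    R = P_skew / P_0 between the tested skew angles."""
    # Measured points from the field tests (assumed usable for interpolation).
    points = [(0.0, 1.00), (15.0, 0.73), (30.0, 0.58)]
    if skew_deg <= points[0][0]:
        return points[0][1]
    if skew_deg >= points[-1][0]:
        return points[-1][1]
    for (x0, r0), (x1, r1) in zip(points, points[1:]):
        if x0 <= skew_deg <= x1:
            return r0 + (r1 - r0) * (skew_deg - x0) / (x1 - x0)

def peak_passive_force(p0, skew_deg):
    """Scale the zero-skew peak passive force p0 (any consistent units)
    by the interpolated reduction factor."""
    return p0 * skew_reduction_factor(skew_deg)
```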
542.
Asynchronous Algorithms for Large-Scale Optimization: Analysis and Implementation. Aytekin, Arda, January 2017.
This thesis proposes and analyzes several first-order methods for convex optimization, designed for parallel implementation in shared and distributed memory architectures. The theoretical focus is on designing algorithms that can run asynchronously, allowing computing nodes to execute their tasks with stale information without jeopardizing convergence to the optimal solution. The first part of the thesis focuses on shared memory architectures. We propose and analyze a family of algorithms to solve an unconstrained, smooth optimization problem consisting of a large number of component functions. Specifically, we investigate the effect of information delay, inherent in asynchronous implementations, on the convergence properties of the incremental prox-gradient descent method. Contrary to related proposals in the literature, we establish delay-insensitive convergence results: the proposed algorithms converge under any bounded information delay, and their constant step-size can be selected independently of the delay bound. We then shift focus to solving constrained, possibly non-smooth, optimization problems in a distributed memory architecture. Here, we propose and analyze two important families of gradient descent algorithms: asynchronous mini-batching and incremental aggregated gradient descent. In particular, for asynchronous mini-batching we show that, by suitably choosing the algorithm parameters, one can recover the best-known convergence rates established for delay-free implementations and expect a near-linear speedup with the number of computing nodes. Similarly, for incremental aggregated gradient descent, we establish global linear convergence rates for any bounded information delay. Extensive simulations and actual implementations of the algorithms on different platforms and representative real-world problems validate our theoretical results.
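The delay-insensitive behavior described above can be illustrated with a toy serial simulation of stale-gradient updates. This is a minimal sketch, not the thesis's algorithms: the function name, the bounded random delay model, and the quadratic example are all my assumptions. Each update applies one component gradient evaluated at an iterate up to `max_delay` steps old, yet the iterates still approach the minimizer.

```python
import random

def delayed_incremental_gradient(grads, x0, step, n_iters, max_delay, seed=0):
    """Simulate asynchronous incremental gradient descent in which each
    update may read an iterate that is up to `max_delay` steps stale."""
    rng = random.Random(seed)
    history = [x0]   # past iterates, so stale reads can be simulated
    x = x0
    for k in range(n_iters):
        delay = rng.randint(0, min(max_delay, k))  # staleness of this read
        stale_x = history[-1 - delay]              # possibly outdated iterate
        g = rng.choice(grads)                      # one component gradient
        x = x - step * g(stale_x)
        history.append(x)
    return x

# Example: minimize f(x) = sum_i (x - a_i)^2 / 2 for a = [1, 2, 3];
# the minimizer is the mean, 2.0.
grads = [lambda x, a=a: x - a for a in [1.0, 2.0, 3.0]]
```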
543.
Analysis and Comparison of Distributed Training Techniques for Deep Neural Networks in a Dynamic Environment. Gebremeskel, Ermias, January 2018.
Deep learning models' prediction accuracy tends to improve with the size of the model, which means the amount of computational power needed to train models is continuously increasing. Distributed deep learning training addresses this issue by spreading the computational load across several devices. In theory, distributing computation onto N devices should give an N-fold performance improvement; in practice the speedup is rarely N-fold, due to communication and other overheads. This thesis studies the communication overhead incurred when distributing deep learning training. Hopsworks is a platform designed for data science. The purpose of this work is to explore a feasible way of deploying distributed deep learning training on a shared cluster and to analyze the performance of different distributed deep learning algorithms for use on this platform. The findings of this study show that bandwidth-optimal communication algorithms like ring all-reduce scale better than many-to-one communication algorithms like parameter server, but are less fault-tolerant. Furthermore, collected system-usage statistics revealed a network bottleneck when training is distributed across multiple machines. This work also shows that it is possible to run MPI on a Hadoop cluster, via a prototype that orchestrates resource allocation, deployment, and monitoring of MPI-based training jobs. Even though the experiments did not cover different cluster configurations, the results are still relevant in showing what considerations need to be made when distributing deep learning training.
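The bandwidth-optimal ring all-reduce pattern discussed above can be sketched as a small simulation. The function below is an illustrative toy, not Hopsworks or any MPI library code: n nodes first reduce-scatter chunks around a ring, then all-gather them, so each node transmits roughly 2x its data regardless of n, whereas a parameter server must receive all n copies.

```python
def ring_allreduce(vectors):
    """Simulate ring all-reduce (reduce-scatter, then all-gather) on n nodes.
    Each node holds one vector, split into n chunks for simplicity; every
    node ends up with the element-wise sum of all vectors."""
    n = len(vectors)
    assert n >= 2 and all(len(v) == n for v in vectors)
    data = [list(v) for v in vectors]  # data[node][chunk]
    # Reduce-scatter: in step s, node i sends chunk (i - s) % n to node i+1,
    # which accumulates it. After n-1 steps, node i holds the full sum of
    # chunk (i + 1) % n.
    for s in range(n - 1):
        sent = [data[i][(i - s) % n] for i in range(n)]  # simultaneous sends
        for i in range(n):
            c = (i - 1 - s) % n          # chunk arriving at node i from node i-1
            data[i][c] += sent[(i - 1) % n]
    # All-gather: circulate the fully reduced chunks so every node has all of them.
    for s in range(n - 1):
        sent = [data[i][(i + 1 - s) % n] for i in range(n)]
        for i in range(n):
            c = (i - s) % n              # reduced chunk arriving at node i
            data[i][c] = sent[(i - 1) % n]
    return data
```

Per step, every node sends exactly one chunk (1/n of the data) over 2(n-1) steps, which is the bandwidth-optimality argument behind the ring algorithm.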
544.
Reactivate/Recreate/Reconnect. Wang, Xiaotong, January 2019.
Large-scale green space in a city can sometimes bring isolation and disconnection, and over time that isolation and disconnection can turn the space into a negative element, even a barrier, in the urban fabric.
545.
Planning and Scheduling for Large-Scale Distributed Systems. Yu, Han, 01 January 2005.
Many applications require computing resources well beyond those available on any single system. Simulations of atomic and subatomic systems with application to materials science, computations related to the study of the natural sciences, and computer-aided design are examples of applications that can benefit from the resource-rich environment provided by a large collection of autonomous systems interconnected by high-speed networks. To transform such a collection of systems into a user's virtual machine, we have to develop new algorithms for coordination, planning, scheduling, resource discovery, and other functions that can be automated. We can then build societal services upon these algorithms that hide the complexity of the computing system from users. In this dissertation, we address the problem of planning and scheduling for large-scale distributed systems. We discuss a model of the system; analyze the need for planning, scheduling, and plan switching to cope with a dynamically changing environment; present algorithms for the three functions; report simulation results on the performance of the algorithms; and introduce an architecture for an intelligent large-scale distributed system.
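The scheduling function described above can be illustrated with a small greedy list scheduler. This is my illustration of the general idea, not an algorithm from the dissertation: precedence-constrained tasks are assigned to heterogeneous sites by earliest finish time, and re-running it on the remaining tasks would model the "plan switching" a dynamic environment requires.

```python
def list_schedule(tasks, deps, speeds):
    """Greedy earliest-finish-time scheduling sketch.
    tasks:  {name: work units}
    deps:   {name: set of prerequisite task names}
    speeds: {site: work units processed per second}
    Returns {task: finish time}."""
    site_free = {s: 0.0 for s in speeds}  # when each site becomes idle
    finish = {}
    remaining = set(tasks)
    while remaining:
        # Tasks whose prerequisites are already scheduled.
        ready = [t for t in remaining
                 if all(d in finish for d in deps.get(t, ()))]
        # Pick the (task, site) pair with the earliest finish time.
        best = None
        for t in ready:
            earliest_start = max([0.0] + [finish[d] for d in deps.get(t, ())])
            for s, speed in speeds.items():
                f = max(site_free[s], earliest_start) + tasks[t] / speed
                if best is None or f < best[0]:
                    best = (f, t, s)
        f, t, s = best
        finish[t] = f
        site_free[s] = f
        remaining.remove(t)
    return finish
```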
546.
Analysis of Complexity and Coupling Metrics of Subsystems in Large-Scale Software Systems. Ramakrishnan, Harish, 01 January 2006.
Dealing with the complexity of large-scale systems can be a challenge for even the most experienced software architects and developers. Large-scale software systems can contain millions of elements that interact to achieve the system functionality, and managing and representing the complexity involved in the interaction of these elements is a difficult task. We propose an approach for analyzing the reusability, maintainability, and complexity of such a complex large-scale software system. Reducing the dependencies between subsystems increases reusability and decreases the effort needed to maintain the system, thereby reducing its complexity. Coupling is an attribute that summarizes the degree of interdependence or connectivity among and within subsystems. When used in conjunction with measures of other attributes, coupling can contribute to an assessment or prediction of software quality. As part of this work, we developed a set of metrics for measuring coupling at the subsystem level in a large-scale software system. These metrics do not take into account the complexity internal to a subsystem; each subsystem is treated as a single entity. Such a dependency metric makes it possible to predict the cost and effort needed to maintain the system as well as the reusability of its parts: the greater the dependency, the higher the cost of maintaining and reusing the software, and the higher the overall complexity and cost of the system. We built a large-scale system, implemented these research ideas, and analyzed how these measures help in minimizing complexity and system cost. We also showed that these coupling measures help in refactoring the system design.
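Subsystem-level coupling of the kind described above can be illustrated with a small dependency-graph metric. The sketch below is a hypothetical example of such a metric (fan-in and fan-out counts between subsystems, ignoring internal structure, exactly as the abstract's "single entity" view suggests); it is not the thesis's actual metric definitions.

```python
from collections import defaultdict

def subsystem_coupling(deps):
    """Compute simple subsystem-level coupling counts.
    deps: list of (from_subsystem, to_subsystem) dependency pairs.
    Internal complexity is ignored; each subsystem is a single entity.
    Returns {subsystem: {'fan_out': ..., 'fan_in': ..., 'coupling': ...}}."""
    out_edges = defaultdict(set)
    in_edges = defaultdict(set)
    names = set()
    for a, b in deps:
        names.update((a, b))
        if a != b:                 # self-dependencies count as internal detail
            out_edges[a].add(b)
            in_edges[b].add(a)
    return {s: {"fan_out": len(out_edges[s]),
                "fan_in": len(in_edges[s]),
                "coupling": len(out_edges[s] | in_edges[s])}
            for s in names}
```

A subsystem with high `coupling` is, per the reasoning above, the costliest to maintain or reuse and a natural candidate for refactoring.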
547.
Analysis of Aircraft Arrival Delay and Airport On-Time Performance. Bai, Yuqiong, 01 January 2006.
While existing grid environments cater to the specific needs of a particular user community, we need to go beyond them and consider general-purpose large-scale distributed systems consisting of large collections of heterogeneous computers and communication systems shared by a large user population with very diverse requirements. Coordination, matchmaking, and resource allocation are among the essential functions of large-scale distributed systems. Although deterministic approaches for coordination, matchmaking, and resource allocation have been well studied, they are not suitable for large-scale distributed systems because of the scale, autonomy, and dynamics of such systems; we must instead seek nondeterministic solutions. In this dissertation we describe our work on a coordination service, a matchmaking service, and a macro-economic resource allocation model for large-scale distributed systems. The coordination service coordinates the execution of complex tasks in a dynamic environment, the matchmaking service supports finding the appropriate resources for users, and the macro-economic resource allocation model allows a broker to mediate between resource providers, who want to maximize their revenues, and resource consumers, who want to get the best resources at the lowest possible price, subject to global objectives such as maximizing the resource utilization of the system.
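The broker mediation described above can be illustrated with a toy matcher. The sketch below is a hypothetical example only (the names, greedy pairing rule, and midpoint pricing are my assumptions, not the dissertation's model): the broker pairs the highest-bidding consumer with the cheapest provider whose asking price it can meet.

```python
def broker_match(asks, bids):
    """Greedy broker sketch.
    asks: {provider: minimum acceptable price per resource unit}
    bids: {consumer: maximum price the consumer will pay}
    Pairs highest bids with cheapest compatible asks, settling each trade
    at the midpoint between ask and bid. Returns (consumer, provider, price)."""
    matches = []
    open_asks = sorted(asks.items(), key=lambda kv: kv[1])            # cheapest first
    for consumer, bid in sorted(bids.items(), key=lambda kv: -kv[1]):  # highest first
        for i, (provider, ask) in enumerate(open_asks):
            if ask <= bid:
                matches.append((consumer, provider, (ask + bid) / 2))
                open_asks.pop(i)   # each provider serves one consumer here
                break
    return matches
```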
548.
Parallel Fabrication and Transport Properties of Carbon Nanotube Single Electron Transistors. Islam, Muhammad, 01 January 2015.
Single electron transistors (SETs) have attracted significant attention as a potential building block for post-CMOS nanoelectronic devices. However, the lack of a reproducible, parallel fabrication approach and of room-temperature operation are the two major bottlenecks for practical realization of SET-based devices. In this thesis, I demonstrate large-scale single electron transistor fabrication techniques using solution-processed single wall carbon nanotubes (SWNTs) and study their electron transport properties. The approach is based on the assembly of individual SWNTs via dielectrophoresis (DEP) at selected positions of the circuit and the formation of tunnel barriers on the SWNT. Two different techniques were used for tunnel barrier creation: i) metal-SWNT Schottky contacts, and ii) mechanical templating of SWNTs. Low-temperature (4.2 K) transport measurements of 100 nm long metal-SWNT Schottky contact devices show that 93% of the devices with contact resistance (RT) > 100 kΩ exhibit SET behavior. The majority (90%) of the devices with 100 kΩ < RT < 1 MΩ show periodic, well-defined Coulomb diamonds with a charging energy of ~15 meV, representing single electron tunneling through a single quantum dot (QD) defined by the top contact. Devices with high RT (> 1 MΩ) show multiple-QD behavior, while no QD was formed in low-RT (< 100 kΩ) devices. From the transport study of 50 SWNT devices, a total of 38 devices show SET behavior, giving a yield of 76%. I also demonstrate a room-temperature-operating SET using the mechanical template technique, in which an individual SWNT is placed on top of an Al/Al2O3 local gate that bends the SWNT at its edges, creating tunnel barriers. SET devices fabricated with a template width of ~20 nm show room-temperature operation with a charging energy of ~150 meV. I also discuss the detailed transport spectroscopy of the devices.
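The charging energies quoted above follow from the standard single-electron relation E_C = e^2/2C. The helper below is an illustrative sketch, not code from the thesis; the 10 k_B T operating margin is a common rule of thumb, assumed here rather than taken from the source.

```python
E = 1.602176634e-19   # elementary charge, C
KB = 1.380649e-23     # Boltzmann constant, J/K

def charging_energy_meV(capacitance_aF):
    """Charging energy E_C = e^2 / (2C), in meV, for an island capacitance in aF."""
    c_farads = capacitance_aF * 1e-18
    # E_C in joules is e^2/(2C); dividing by e gives eV, times 1000 gives meV.
    return 1000.0 * E / (2.0 * c_farads)

def max_operating_temp_K(ec_meV, margin=10.0):
    """Rule-of-thumb maximum operating temperature: E_C > margin * k_B * T."""
    ec_joules = ec_meV * 1e-3 * E
    return ec_joules / (margin * KB)
```

An island capacitance of roughly 5.3 aF gives the ~15 meV charging energy seen in the Schottky-contact devices; the templated devices' ~150 meV implies a capacitance an order of magnitude smaller, which is why only they operate near room temperature (k_B T at 300 K is about 26 meV).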
549.
Scheduling and Resource Management for Complex Systems: From Large-Scale Distributed Systems to Very Large Sensor Networks. Yu, Chen, 01 January 2009.
In this dissertation, we focus on multiple levels of optimized resource management techniques. We first consider a classic resource management problem, the scheduling of data-intensive applications. We define the Divisible Load Scheduling (DLS) problem, outline a system model based on the assumption that data staging and all communication with the sites can be done in parallel, and introduce a set of optimal divisible load scheduling algorithms and the related fault-tolerant coordination algorithm. The DLS algorithms introduced in this dissertation exploit parallel communication, consider realistic scenarios regarding when heterogeneous computing systems are available, and generate optimal schedules. Performance studies show that these algorithms outperform divisible load scheduling algorithms based upon sequential communication. We have also developed a self-organization model for resource management in distributed systems consisting of a very large number of sites with excess computing capacity. This model is inspired by biological metaphors and uses the concept of varying energy levels to express activity and goal satisfaction. The model is applied to Pleiades, a service-oriented architecture based on resource virtualization, and then to Very Large Sensor Networks (VLSNs). Two algorithms for the self-organization of anonymous sensor nodes are introduced: SFSN (Scale-Free Sensor Networks) and SWAS (Small-Worlds of Anonymous Sensors), the latter utilizing the Small-worlds principle. The SFSN algorithm is designed for VLSNs consisting of a fairly large number of inexpensive sensors with limited resources. An important feature of the algorithm is its ability to interconnect sensors without the identities, or physical addresses, used by traditional communication and coordination protocols.
During the self-organization phase, collision-free communication channels are established that allow a sensor to synchronously forward information to the members of its proximity set; this communication pattern is then followed during the activity phases. A simulation study shows that SFSN ensures scalability and limits both the amount of communication and the complexity of coordination. The SWAS algorithm improves on SFSN by applying the Small-worlds principle; it is unique in its ability to create a sensor network with a topology approximating small-world networks. Rather than creating shortcuts between pairs of diametrically positioned nodes in a logical ring, we end up with something resembling a double-stranded DNA. By exploiting the Small-worlds principle we combine two desirable network features: high clustering and small path length.
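The divisible load model above admits a simple closed form when all sites receive their data in parallel: an optimal schedule makes every site finish at the same instant. The sketch below illustrates that principle under a simplified model (per-unit communication and computation times, computation starting after a site's data arrives); it is my illustration, not the dissertation's algorithms.

```python
def divisible_load_split(rates):
    """Optimal split of a unit divisible load across sites, assuming data is
    staged to all sites in parallel. rates[i] = (c_i, w_i): communication and
    computation time per unit load at site i, so site i finishes its share
    a_i at time a_i * (c_i + w_i). Equal finish times imply
    a_i proportional to 1 / (c_i + w_i)."""
    inv = [1.0 / (c + w) for c, w in rates]
    total = sum(inv)
    return [x / total for x in inv]

def makespan(rates, shares, load=1.0):
    """Completion time of each site under a given split."""
    return [load * a * (c + w) for a, (c, w) in zip(shares, rates)]
```

With sequential communication the shares would instead depend on the staging order, which is why the parallel-communication algorithms above can do strictly better.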
550.
Downtown Revitalization: Consumers' and City Planners' Perceived Barriers to Integrating Large-Scale Retail into the Downtown. Donofrio, Jennifer M., 01 December 2008.
Statement of Problem
Revitalization of downtowns across America continues to be challenged by the shift to the suburbs. This study examined the barriers to integrating large-scale retail into the downtowns of a small, a medium, and a large city.
Sources of Data
The System View planning theory (Taylor, 1998) guided the study of city planners' and consumers' perceived barriers to integrating large-scale retail into the downtown. To ascertain these barriers, intercept surveys with consumers (n = 30 in each city) and interviews with city planners were conducted.
Conclusions Reached
Some significant differences were found between the perceived barriers to integrating large-scale retail into small-city and large-city downtowns. Although most consumers reported a positive attitude toward large-scale retail, most consumers in Tucson and San Diego indicated that the cost of shopping downtown outweighed the benefits. Traffic, parking, a pedestrian-friendly street-oriented environment, and local character are among the major barriers the study cities identified to integrating large-scale retail into the downtown. Over half of the consumers surveyed agreed that they would shop at large-scale retail on weekdays if it were available, but fewer than half of the consumers in Tucson and San Diego would do so on weekends.
Recommendations
Three recommendations were suggested to successfully establish and sustain large-scale retail in the downtown: 1. Continue to find creative solutions to parking and traffic barriers. 2. Create a multifunctional, walkable downtown with amenities that meet most consumers' needs. 3. Establish retail stores in the downtown that enhance the local character and cater to residents' needs rather than mostly to tourists' needs.