1.
A Rhetorical Analysis of Campaign Songs in Modern Elections
Peterson, Lottie Elizabeth, 01 March 2018
Since the U.S. presidential election of 1800, candidates have selected campaign songs to underscore their political platforms. The literature on politics and music suggests that in modern campaigns, the significance of music rests not in the song itself but in the artist behind the song and the image associated with that artist. This analysis sought to show that the very process of selecting a campaign song is a profound rhetorical act, and that songs chosen even in modern elections carry a specific meaning and purpose tied to the political contexts in which they are embedded. Using an adaptation of Sellnow and Sellnow's "Illusion of Life" rhetorical perspective, which analyzes whether the musical score and lyrics of a single song form a congruent or incongruent relationship, this study analyzed the official campaign songs of both Republican and Democratic candidates for the 1972-2016 elections. The adaptation provided the opportunity to examine the intersection of music, rhetoric, and politics, and to explore evolving patterns and trends in campaign music.

The primary findings of this research indicated that both Republican and Democratic candidates have predominantly made use of congruity in their campaign songs, with that congruity only increasing over time, a surprising result considering congruity can often diminish listener appeal. The song analyses also indicated that, in general, Republican candidates tend to utilize songs that are positive and patriotic in nature, while their Democratic opponents incorporate songs that offer a critique of the nation. Findings also revealed a transition that began in the 1970s and hit full stride in the 21st century, as campaign songs shifted from being direct endorsements of candidates to focusing on universal themes that could appeal to both sides of the political spectrum.
2.
Large Scale Computer Investigations of Non-Equilibrium Surface Growth for Surfaces from Parallel Discrete Event Simulations
Verma, Poonam Santosh, 08 May 2004
The asymptotic scaling properties of conservative algorithms for parallel discrete-event simulations (e.g., spatially distributed parallel dynamic Monte Carlo simulations of spin systems) of one-dimensional systems of size $L$ are studied. The particular cases studied here are those of one or two lattice elements assigned to each processing element. The previously studied case of one element per processor is reviewed, and the case of two elements per processor is presented. The key concept is the simulated time horizon, an evolving non-equilibrium surface specific to the particular algorithm. It is shown that the flat-substrate initial condition is responsible for the existence of an initial non-scaling regime. Various methods of dealing with this non-scaling regime are documented, both the final successful method and the unsuccessful attempts. The width of the time horizon measures the desynchronization of the system of processors. Universal properties of the conservative time horizon are derived by constructing the distribution of the interface width at saturation.
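The update rule behind this time-horizon picture is simple enough to sketch. The following Python fragment (not the code used in the thesis; the function and parameter names are ours) evolves the one-site-per-processor horizon on a ring: a processing element may advance its local simulated time only when it holds a local minimum of the horizon, and the standard deviation of the horizon is the desynchronization width.

```python
import numpy as np

def simulate_time_horizon(L=1000, steps=20000, seed=0):
    """Evolve the virtual time horizon of a conservative PDES with one
    lattice site per processing element (PE) on a ring.

    A PE may advance its local simulated time tau[i] only when it is a
    local minimum of the horizon, i.e. its next update cannot depend on
    state its neighbors have not yet computed.
    """
    rng = np.random.default_rng(seed)
    tau = np.zeros(L)                      # flat-substrate initial condition
    widths = np.empty(steps)
    for t in range(steps):
        left, right = np.roll(tau, 1), np.roll(tau, -1)
        active = tau <= np.minimum(left, right)   # PEs allowed to update now
        tau[active] += rng.exponential(1.0, size=int(active.sum()))
        widths[t] = tau.std()              # interface width, i.e. desynchronization
    return widths
```

Starting from the flat substrate, the width first passes through the non-scaling regime noted above before growing as a power law and saturating at an $L$-dependent value; the distribution of the saturated width is the object from which the universal properties are constructed.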
3.
The Distributed Open Network Emulator: Applying Relativistic Time
Bergstrom, Craig Casey, 11 September 2006
The increasing scale and complexity of network applications and protocols motivate the need for tools that aid in understanding network dynamics at similarly large scales. While current network simulation tools achieve large-scale modeling, they do so by ignoring much of the intra-program state that plays an important role in the overall system's behavior. This work presents the Distributed Open Network Emulator, a scalable distributed network model that incorporates application program state to achieve high-fidelity modeling.
The Distributed Open Network Emulator, or DONE for short, is a parallel and distributed network simulation-emulation hybrid that achieves both scalability and the capability to run existing application code with minimal modification. These goals are accomplished through the use of a protocol stack extracted from the Linux kernel, a new programming model based on C, and a scaled real-time method for distributed synchronization.
One of the primary challenges in the development of DONE was reconciling the opposing requirements of emulation and simulation. Emulated code executes directly in real time, which progresses autonomously. In contrast, simulation models are driven forward by the execution of events, an explicitly controlled mechanism. Relativistic time is used to integrate these two paradigms into a single model while providing efficient distributed synchronization.
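DONE itself is built around a protocol stack extracted from the Linux kernel and a C-based programming model, but the scaled real-time idea can be illustrated in a few lines. The Python sketch below (all names are ours, and it stands in for a much more involved mechanism) maps wall-clock time onto virtual time by a constant factor, so that autonomously progressing emulated code and explicitly scheduled simulation events can be reconciled on one timeline.

```python
import heapq
import time

class ScaledClock:
    """Map wall-clock time onto virtual time with a constant scale factor,
    so that autonomously progressing emulated code and explicitly
    scheduled simulation events share one timeline."""
    def __init__(self, scale=10.0):        # 1 wall-clock second = 10 virtual seconds
        self.scale = scale
        self.start = time.monotonic()

    def now(self):
        return (time.monotonic() - self.start) * self.scale

def run_events(clock, events):
    """Fire (virtual_time, seq, action) events once the scaled clock
    reaches their timestamps; between events, real time (and hence any
    emulated code) keeps progressing on its own."""
    heapq.heapify(events)
    while events:
        t_event, _, action = heapq.heappop(events)
        lag = (t_event - clock.now()) / clock.scale
        if lag > 0:
            time.sleep(lag)                # wait in wall-clock units
        action(t_event)

# Example: two simulated packet sends, served in virtual-time order.
clock = ScaledClock(scale=5.0)
run_events(clock, [(2.0, 0, lambda t: print("send at virtual", t)),
                   (1.0, 1, lambda t: print("send at virtual", t))])
```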
To demonstrate that the model provides the desired traits, a series of experiments is described. They show that DONE can provide super-linear speedup on small clusters, nearly linear speedup on moderate-sized clusters, and accurate results when tuned appropriately.
4.
Rollback Reduction Techniques Through Load Balancing in Optimistic Parallel Discrete Event Simulation
Sarkar, Falguni, 05 1900
Discrete event simulation is an important tool for modeling and analysis. Some simulation applications, such as telecommunication network performance, VLSI logic circuit design, and battlefield simulation, require an enormous amount of computing resources. One way to satisfy this demand for computing power is to decompose the simulation system into several logical processes (lps) and run them concurrently. In any parallel discrete event simulation (PDES) system, events are ordered according to their time of occurrence, and for the simulation to be correct, this ordering has to be preserved. There are three approaches to maintaining this ordering. In a conservative system, no lp executes an event unless it is certain that all events with earlier time-stamps have been executed; such systems are prone to deadlock. In an optimistic system, on the other hand, simulation progresses without regard to this ordering, and system states are saved regularly. Whenever a causality violation is detected, the system rolls back to a state saved earlier and restarts processing after correcting the error. In a third approach, all the lps participate in the computation of a safe time-window, and all events with time-stamps within this window are processed concurrently.
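For concreteness, here is a minimal Python skeleton of the optimistic (rollback-based) approach described above. It is illustrative only: the Event and OptimisticLP names are ours, and anti-messages to other lps are omitted.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass(order=True)
class Event:
    ts: float                               # time-stamp
    apply: Callable = field(compare=False)  # state transition for this event

class OptimisticLP:
    """Skeleton of one logical process (lp): process events greedily,
    checkpoint state after each one, and roll back on a straggler."""
    def __init__(self, state):
        self.state = dict(state)
        self.lvt = 0.0                           # local virtual time
        self.checkpoints = [(0.0, dict(state))]  # (time-stamp, saved state)
        self.processed = []

    def execute(self, event, pending):
        if event.ts < self.lvt:                  # straggler: causality violated
            self.rollback(event.ts, pending)
        self.lvt = event.ts
        event.apply(self.state)
        self.processed.append(event)
        self.checkpoints.append((self.lvt, dict(self.state)))

    def rollback(self, ts, pending):
        # restore the most recent state saved at or before the straggler
        while self.checkpoints[-1][0] > ts:
            self.checkpoints.pop()
        self.lvt, saved = self.checkpoints[-1]
        self.state = dict(saved)
        # rolled-back events go back on the pending queue for re-execution
        # (anti-messages to other lps are omitted from this sketch)
        redo = [e for e in self.processed if e.ts > ts]
        self.processed = [e for e in self.processed if e.ts <= ts]
        pending.extend(redo)
```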
In optimistic simulation systems, there is a global virtual time (GVT), which is the minimum of the time-stamps of all the events existing in the system. The system cannot roll back to a state prior to GVT, and hence all such states can be discarded. GVT is used for memory management, load balancing, termination detection, and committing events. However, GVT computation introduces additional overhead.
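Continuing the skeleton above, GVT and the fossil collection it enables can be sketched as follows. We assume each lp exposes a pending event queue (lp.pending) and that in-transit messages carry a time-stamp; these are our names, not the thesis's.

```python
from itertools import chain

def compute_gvt(lps, in_transit):
    """GVT: the minimum time-stamp over all events still existing in the
    system, i.e. every lp's pending events and every message in transit.
    No lp can ever be forced to roll back earlier than this value."""
    candidates = chain((e.ts for lp in lps for e in lp.pending),
                       (m.ts for m in in_transit))
    return min(candidates, default=float("inf"))

def fossil_collect(lp, gvt):
    """Discard states the lp can never roll back to: keep the newest
    checkpoint at or before GVT plus everything after it."""
    older = [c for c in lp.checkpoints if c[0] <= gvt]
    lp.checkpoints = older[-1:] + [c for c in lp.checkpoints if c[0] > gvt]
    lp.processed = [e for e in lp.processed if e.ts > gvt]  # commit the rest
```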
In optimistic systems, a large number of rollbacks can degrade performance considerably. We have studied the effect of load balancing in reducing the number of rollbacks in such systems. We have designed three load balancing algorithms and implemented two of them on a network of workstations; the third has been analyzed probabilistically. The reasons for choosing a network of workstations are its low cost and the availability of efficient message-passing software such as PVM and MPI. All of these load balancing algorithms piggyback on the existing GVT computation algorithms and try to balance the speed of simulation across the lps. We have also designed an optimal GVT computation algorithm for hypercubes and studied its performance relative to other GVT computation algorithms by simulating a hypercube on our network cluster.
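The flavor of piggybacked load balancing can be conveyed with a toy fragment in the same style. Here each lp is assumed to report, along with its GVT contribution, how far its local clock advanced since the last round (lvt_advance), and migrate is a hypothetical runtime routine; the thesis's three algorithms differ in their details.

```python
def rebalance(lps, migrate, threshold=2.0):
    """Toy GVT-piggybacked load balancing: when the fastest lp's clock
    outruns the slowest's by more than a threshold factor, simulation
    work migrates toward the fast (underloaded) processor so that all
    lps advance their simulated time at roughly the same speed.

    `migrate` is a hypothetical routine supplied by the runtime;
    `lvt_advance` is our name for the per-round clock advance report.
    """
    by_rate = sorted(lps, key=lambda lp: lp.lvt_advance)
    slowest, fastest = by_rate[0], by_rate[-1]
    if slowest.lvt_advance > 0 and \
       fastest.lvt_advance / slowest.lvt_advance > threshold:
        migrate(src=slowest, dst=fastest)
```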
We use the topological properties of a star network to design an algorithm for computing a safe time-window for parallel discrete event simulation. We have analyzed and simulated the behavior of an open queuing network resembling such an architecture. Our algorithm also extends to hierarchical stars and to recursive window computation.
5.
The virtual time function and rate-based schedulers for real-time communications over packet networks
Devadason, Tarith Navendran, January 2007
[Truncated abstract] The accelerating convergence of communications from disparate application types onto common packet networks has made quality of service an increasingly important and problematic issue. Applications of different classes have diverse service requirements at distinct levels of importance, and they offer traffic to the network with widely varying characteristics. Yet a common network is expected at all times to meet the individual communication requirements of each flow from all of these application types. One group of applications with particularly critical service requirements is the class of real-time applications, such as packet telephony. They require both the reproduction of a specified timing sequence at the destination and nearly instantaneous interaction between the users at the endpoints. The associated delay limits (in terms of upper bound and variation) must be consistently met; at every point where they are violated, the network transfer becomes worthless, as the data cannot be used at all. In contrast, other types of applications may suffer appreciable deterioration in quality of service as a result of slower transfer, but the goal of the transfer can still largely be met.

The goal of this thesis is to evaluate the potential effectiveness of a class of packet scheduling algorithms in meeting the specific service requirements of real-time applications in a converged network environment. Since the proposal of Weighted Fair Queueing, several schedulers have been suggested to be capable of meeting the divergent service requirements of both real-time and other data applications. ... This simulation study also sheds light on false assumptions that can be made, on the basis of the deterministic bounds obtained, about the isolation produced by start-time and finish-time schedulers.

The key contributions of this work are as follows. We show clearly how the definition of the virtual time function affects both delay bounds and delay distributions for a real-time flow in a converged network, and how optimality is achieved. Despite apparent indications to the contrary from delay bounds, the simulation analysis demonstrates that start-time rate-based schedulers possess useful characteristics for real-time flows that the traditional finish-time schedulers do not. Finally, it is shown that all the virtual time rate-based schedulers considered can produce isolation problems over multiple hops in networks with high loading. It becomes apparent that the benchmark First-Come-First-Served scheduler, with spacing and call admission control at the network ingresses, is a preferred arrangement for real-time flows (although lower priority levels would also need to be implemented for dealing with other data flows).
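As a concrete reference point for the schedulers under study, the Python sketch below shows the tag computation common to the WFQ family, with a switch between finish-time and start-time service order. It uses the self-clocked (SCFQ-style) definition of the virtual time function V, only one of the definitions the thesis compares, and all names are ours.

```python
import heapq

class VirtualTimeScheduler:
    """Sketch of a rate-based scheduler in the WFQ family. Packet k of
    flow i is stamped with
        start_k  = max(finish_{k-1}, V(arrival)),
        finish_k = start_k + length / weight_i,
    and the server picks the packet with the smallest tag: finish tags
    give a WFQ-like finish-time scheduler, start tags a start-time
    scheduler."""
    def __init__(self, use_start_tags=False):
        self.use_start_tags = use_start_tags
        self.v = 0.0               # current virtual time V
        self.last_finish = {}      # flow -> finish tag of its previous packet
        self.heap = []
        self.seq = 0               # FIFO tie-break for equal tags

    def enqueue(self, flow, length, weight):
        start = max(self.last_finish.get(flow, 0.0), self.v)
        finish = start + length / weight
        self.last_finish[flow] = finish
        tag = start if self.use_start_tags else finish
        heapq.heappush(self.heap, (tag, self.seq, flow, length, finish))
        self.seq += 1

    def dequeue(self):
        _, _, flow, length, finish = heapq.heappop(self.heap)
        self.v = finish            # self-clocked (SCFQ-style) virtual time update
        return flow, length
```

Switching use_start_tags changes only which tag orders service, which is exactly the start-time versus finish-time distinction whose delay consequences the thesis examines.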