401 |
To the boundary of the zero: postmodernism as whimsical tragedy in Tristram Shandy: a cock and bull story and 24 hour party people / Prothro, Travis 12 1900 (has links)
This paper attempts to solve the problem of postmodern tragedy by examining two films by British director Michael Winterbottom, Tristram Shandy: A Cock and Bull Story and 24 Hour Party People. Admittedly, the notion of postmodern tragedy may be less a problem than a departure from classical critical definitions of tragedy. The films suggest that the postmodern protagonist faces the same trials as the protagonist of any other era but responds to them differently, if at all. My thesis is that the protagonist's failure to respond adequately to the consequences of his choices, indeed his failure to learn from his own repeated failures, is the basis for the tragedy presented in the films, as well as the basis for tragedy in the postmodern era. Choice has always been central to the tragic fall of characters, from Oedipus to Willy Loman and beyond. The particular circumstances of these films, however, suggest that the fall is not here the ultimate tragedy. Rather, the films portray their respective protagonists as incapable of falling in the tragic sense because, whether consciously or unconsciously, they tend to reach not for greatness but for failure. As each failure mounts, they descend a little lower, as though the true glory of endeavor were to dig as deep a hole as possible by piling failure on failure. In a sense, as the paper makes clear, the protagonists of these films attempt to define success by failure, or, to use a mathematical metaphor, to define themselves by their proximity to zero. / Thesis (M.A.)--Wichita State University, College of Liberal Arts and Sciences, Dept. of English.
|
402 |
Demonstration of the optimal control modification for general aviation: design and simulation / Reed, Scott 12 1900 (has links)
This work presents the design and simulation of a model reference adaptive flight control system for general aviation. The controller is based on previous adaptive control research conducted at Wichita State University (WSU) and the National Aeronautics and Space Administration (NASA) Ames Research Center. The control system is designed for longitudinal control of a Beech Bonanza given pitch-rate and airspeed commands.
The structure of the controller includes a first-order model follower, a proportional-integral (PI) controller, an inverse controller, and an adaptation element. Two adaptation methods were considered: the WSU-developed Adaptive Bias Corrector (ABC) and the Optimal Control Modification (OCM). The ABC is used with two error schemes, adapting to the modeling error and to the tracking error. Three variations of the OCM are presented, which differ in the parameterization of the adaptive signal. The first, OCM-Linear (OCM-L), relates the adaptive signal linearly to the states. The second, OCM-Bias (OCM-B), includes only a bias term. The third, OCM-Linear and Bias (OCM-LB), combines the previous two variations.
To design the controllers, varied values of the PI gains and adaptive gains were evaluated based on time-response tracking of a pitch doublet and on time delay margin. The time delay margin is based on error metrics developed at NASA Ames.
Of the five controllers presented, the OCM-L and the ABC with tracking-error adaptation performed best. The ABC with modeling-error adaptation did not track the pitch doublet. The OCM-B and OCM-LB are sound controllers but performed worse than the OCM-L in tracking and time delay margin, respectively. / Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Aerospace Engineering.
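The first-order model follower with a PI controller described above can be illustrated with a minimal discrete-time simulation. This is a generic sketch, not the thesis's actual controller: the reference-model time constant, the notional plant dynamics, and the doublet timing are all assumed values chosen for illustration, and the adaptation element is omitted.

```python
import numpy as np

def simulate_pi_model_follower(kp=2.0, ki=1.0, dt=0.01, t_end=10.0):
    """Drive a first-order reference model with a pitch-rate doublet and
    steer a simple notional plant toward the model output with a PI law.
    Returns the mean absolute tracking error (all parameters illustrative)."""
    n = int(t_end / dt)
    tau_m = 0.5            # reference-model time constant (assumed)
    a_p, b_p = -1.2, 1.0   # notional plant dynamics: x' = a_p*x + b_p*u
    x_m = x_p = integ = 0.0
    err_hist = []
    for k in range(n):
        t = k * dt
        # doublet command: +1 deg/s, then -1 deg/s, else 0
        cmd = 1.0 if 1.0 <= t < 3.0 else (-1.0 if 3.0 <= t < 5.0 else 0.0)
        x_m += dt * (cmd - x_m) / tau_m          # first-order model follower
        e = x_m - x_p                            # tracking error
        integ += e * dt
        u = kp * e + ki * integ                  # PI control law
        x_p += dt * (a_p * x_p + b_p * u)        # plant update (Euler step)
        err_hist.append(abs(e))
    return float(np.mean(err_hist))
```

Evaluating candidate PI gains against a pitch doublet, as the abstract describes, amounts to sweeping `kp` and `ki` and comparing the returned tracking-error metric (the time delay margin evaluation is not reproduced here).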
|
403 |
The effects of No Child Left Behind policy on gifted education in the United States, Kansas, and a midwestern suburban school / Rolett-Boone, Sherry Kay 12 1900 (has links)
While its intent is commendable, the No Child Left Behind Act (NCLB) has created a less than desirable learning environment for our nation's gifted children. NCLB mandates that all students achieve a level of proficiency by 2014, thereby creating an environment in which states, school administrators, and classroom teachers are compelled to focus on students functioning below a proficient level.
This research assessed the effects NCLB has had on gifted education at the national, state, and local levels. Literature and data were collected and analyzed in order to summarize the effects NCLB has had on gifted students and programs including those in Kansas and one midwestern suburban school district. The most prevalent topics included funding, low prioritization of gifted education, inconsistency in gifted standards and programs, the need for differentiated curriculum, testing, and the underrepresentation of ethnic minority groups in gifted programs.
The evidence supports the conclusion that NCLB has had negative effects on gifted education. The conclusions and recommendations focus on encouraging, facilitating, and monitoring individual growth to ensure the advancement of all students to their highest potential. / Thesis (M.Ed.)--Wichita State University, College of Education, Dept. of Curriculum and Instruction.
|
404 |
Estimation of driver fatality ratio using computational modeling and objective measures based on vehicle intrusion ratio in head-on collisions / Setpally, Rajarshi 12 1900 (has links)
In the last decade, increased use of light trucks and vans (LTVs) has resulted in an increase in fatal injuries to occupants of passenger cars in truck-car accidents, because of the aggressive nature of LTVs. To study the aggressive behavior of LTVs, the National Highway Traffic Safety Administration (NHTSA) developed an aggressivity metric (AM) for different vehicles in a specific impact configuration. This AM, however, does not produce consistent estimates when specific vehicle-to-vehicle impact categories are studied. Hence, NHTSA introduced a Driver Fatality Ratio (DFR), based on Fatality Analysis Reporting System (FARS) and General Estimates System (GES) crash involvement statistics, which has produced good estimates of the aggressive behavior of vehicles in crashes.
The DFR proposed by NHTSA is based on statistical data, which makes it difficult to evaluate the DFR for vehicle categories (e.g., crossovers) that are relatively new in the market and therefore lack sufficient crash statistics. This research proposes a new methodology, based on computational reconstruction of impact crashes and on objective measures, to predict the DFR for any vehicle. The objective measures considered include the ratios of maximum intrusion, peak acceleration, and weight for the two vehicles in head-on collisions. Factors that directly influence fatal injuries to the occupants are identified and studied in order to relate these objective measures to the DFR. The proposed method is validated for a range of LTVs against a passenger car and is then used to predict the DFR for a crossover-category vehicle, a light pickup truck, and a full-size car. Factors that influence these objective measures in predicting the DFR are discussed. Results from this study indicate that the ratio of intrusions produces a better estimate of the DFR and can be utilized in predicting fatality ratios for head-on collisions. / Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Mechanical Engineering.
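Relating an objective measure such as the intrusion ratio to the DFR, as described above, could be done with a simple regression fit. The sketch below uses entirely hypothetical (ratio, DFR) pairs, since the thesis's reconstruction data are not given here; only the mechanics of fitting and predicting are illustrated.

```python
import numpy as np

# Hypothetical reconstruction data: each row is (max-intrusion ratio,
# observed DFR) for an LTV-vs-passenger-car head-on pairing.
# These values are illustrative only, not the thesis's results.
data = np.array([
    [1.1, 1.3],
    [1.4, 1.9],
    [1.8, 2.6],
    [2.2, 3.2],
])

# Least-squares fit of DFR against the intrusion ratio
ratio, dfr = data[:, 0], data[:, 1]
slope, intercept = np.polyfit(ratio, dfr, 1)

def predict_dfr(intrusion_ratio):
    """Predict driver fatality ratio from the vehicles' intrusion ratio."""
    return slope * intrusion_ratio + intercept
```

A vehicle with no crash history (e.g., a new crossover) would get its intrusion ratio from a simulated head-on impact and then be mapped to a predicted DFR through `predict_dfr`.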
|
405 |
Risk-based worker allocation and line balancing / Solaimuthu Pachaimuthu, Sathya Madhan 12 1900 (has links)
In general, manufacturing systems can be classified into machine-intensive and labor-intensive manufacturing. Previous studies show that worker allocation plays an important role in determining the efficiency of a labor-intensive manufacturing system. Most research in the literature, however, has been performed in a deterministic setting, whereas time study data obtained from a local aircraft company show a high degree of variability in worker processing times.
This research therefore presents a worker allocation approach that takes the uncertainty in worker processing times into account. A risk-based worker allocation approach is developed for three scenarios. The first is a single-task-per-station balanced production line, where workers are allocated to processes by minimizing the overall risk of delay due to workers. In the second scenario, in addition to allocation by minimizing overall risk, multiple workers are allocated to processes to make the flow of products uniform in a single-task-per-station unbalanced production line. Prior to implementing the final approach, a method for line balancing under variability is studied and compared to the ranked positional-weight method. The final scenario is a simultaneous approach to balancing and allocating workers in a multiple-task-per-station production line. Case studies were simulated using QUEST software, and the results indicate that risk-based allocation increases throughput and efficiency compared to deterministic worker allocation. / Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Industrial and Manufacturing Engineering.
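The idea of allocating workers by minimizing the overall risk of delay can be sketched as follows. This is a simplified stand-in for the thesis's approach: it assumes normally distributed processing times, defines a worker's risk at a station as the probability of exceeding the takt time, and searches assignments exhaustively (fine for small lines; the actual method and data are not reproduced here).

```python
import math
from itertools import permutations

def delay_risk(mean, std, takt):
    """P(processing time exceeds the takt time), assuming normal times."""
    z = (takt - mean) / std
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def allocate(workers, takt):
    """Assign one worker per station so the summed risk of delay is minimal.
    workers[w][s] = (mean, std) processing time of worker w at station s."""
    n = len(workers)
    best, best_perm = float("inf"), None
    for perm in permutations(range(n)):        # exhaustive: fine for small n
        risk = sum(delay_risk(*workers[w][s], takt)
                   for s, w in enumerate(perm))
        if risk < best:
            best, best_perm = risk, perm
    return best_perm, best
```

With two workers whose mean times differ by station, the search picks the pairing that keeps each worker's mean below the takt time, which is exactly the intuition behind risk-based (rather than deterministic mean-time) allocation.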
|
406 |
Study of energy efficiency in portable devices using cloud computing: case of multimedia applications / Solur Virupakshaiah, Girish 12 1900 (has links)
Energy efficiency is an important consideration when designing information and communication technology solutions. Today, portable devices are widely used to receive valuable services on the move. Advances in software and hardware technology need to be matched by corresponding improvements in battery engineering, yet even with further progress in device miniaturization, striking improvements in battery technology cannot be foreseen. The limited energy available on portable devices has motivated shifting computation from the user end to a remote server, providing the opportunity for the evolution of cloud computing.
With on-demand self-service from cloud computing, there has been substantial growth in the number of users of portable devices such as laptops, netbooks, smartphones, and tablet PCs, which has resulted in a significant increase in energy consumption. The main objective of this research is to study and analyze the energy patterns of client-computing and cloud-computing-based applications that provide multimedia services.
This study offers potential guidance for cloud users and cloud service providers in choosing applications based on their requirements, and it gives end users an alternative for optimal use of cloud services through battery-powered portable devices. / Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical and Computer Engineering.
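The client-versus-cloud energy comparison above can be framed with a standard back-of-the-envelope offloading model. This is a generic sketch, not the thesis's measurement methodology: all power, bandwidth, and workload figures below are assumed values for illustration.

```python
def local_energy(cycles, power_w, freq_hz):
    """Energy (J) to run a task locally: active power times execution time."""
    return power_w * cycles / freq_hz

def cloud_energy(data_bits, tx_power_w, bandwidth_bps,
                 idle_power_w, server_time_s):
    """Energy (J) to offload: transmit the input data, then idle while the
    cloud computes. All parameters here are illustrative assumptions."""
    tx_time = data_bits / bandwidth_bps
    return tx_power_w * tx_time + idle_power_w * server_time_s

# Offloading tends to pay off when a task is compute-heavy but data-light:
heavy_local = local_energy(5e9, 1.2, 1e9)        # 5 Gcycles at 1.2 W, 1 GHz
offloaded = cloud_energy(2e6, 0.8, 5e6, 0.1, 0.5)  # 2 Mb upload, short wait
```

Under these assumed numbers the offloaded path consumes far less device energy than local execution; with a large upload or a slow link the comparison reverses, which is the trade-off the study's multimedia measurements explore.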
|
407 |
Relay network under broadband jamming / Sonawane, Jayesh Murlidhar 12 1900 (has links)
In this thesis, the bit error rate (BER) performance of a relay network using binary phase-shift keying (BPSK) and quadrature phase-shift keying (QPSK) modulation was compared under broadband jamming with a normalized maximum ratio combining (MRC) scheme at various jamming locations. Results show that, among the jamming locations in the relay network, jamming at the destination degrades BER more than jamming at the relay. Results also show that increasing the jamming power causes a loss in the diversity order of the relay network. / Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical Engineering and Computer Science.
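A BER comparison of the kind described can be reproduced in miniature with a Monte Carlo simulation. The sketch below is a simplified stand-in, not the thesis's relay model: it simulates two-branch MRC of BPSK over Rayleigh fading and models broadband jamming as an elevated Gaussian noise floor on both branches, rather than distinguishing relay and destination jamming locations.

```python
import numpy as np

def ber_mrc_bpsk(snr_db, jam_power, n_bits=200_000, rng_seed=1):
    """Monte Carlo BER of 2-branch MRC BPSK over Rayleigh fading, with
    broadband jamming modeled as extra Gaussian noise (illustrative model)."""
    rng = np.random.default_rng(rng_seed)
    snr = 10 ** (snr_db / 10)
    bits = rng.integers(0, 2, n_bits)
    s = 2.0 * bits - 1.0                       # BPSK mapping: 0 -> -1, 1 -> +1
    h = (rng.standard_normal((2, n_bits)) +
         1j * rng.standard_normal((2, n_bits))) / np.sqrt(2)   # Rayleigh
    noise_var = (1.0 + jam_power) / snr        # jamming raises the noise floor
    n = (rng.standard_normal((2, n_bits)) +
         1j * rng.standard_normal((2, n_bits))) * np.sqrt(noise_var / 2)
    r = h * s + n                              # received signal per branch
    combined = np.sum(np.conj(h) * r, axis=0).real   # MRC combining
    return float(np.mean((combined > 0) != (bits == 1)))
```

Running this at a fixed SNR with and without jamming power shows the BER degradation the abstract reports; sweeping SNR would expose the slope (diversity-order) loss under strong jamming.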
|
408 |
Evaluation of mechanical properties of laminated composites using Multi-Continuum Theory in ABAQUS / Subba Rao, Vinay Sagar 12 1900 (has links)
A thorough knowledge of the mechanical properties of composites is essential for proper design and optimization of composite structures. Experimental determination of properties such as strength and modulus is prohibitively expensive, since the possible combinations of matrix type, fiber type, and stacking sequence are practically unlimited. In this thesis, progressive damage modeling is used to simulate notched and un-notched tension and compression tests to obtain the strength and stiffness properties of composites. A quasi-isotropic layup (25/50/25) of Toray T700GC-12K-31E/#2510 unidirectional tape was used for the simulations, and previously obtained experimental data were used to validate the model. The commercially available software ABAQUS was used for the simulations. A commercially available plug-in for ABAQUS, Helius:MCT, which implements Multi-Continuum Theory (MCT), was employed to determine its suitability for accurately predicting failure loads and stiffnesses. The suitability of Helius:MCT for certification, by analysis, of various laminates under various types of loading was also evaluated. It is shown that progressive damage modeling of the notched and un-notched tension and compression tests using Helius:MCT accurately predicts the failure loads and stiffnesses of laminates. / Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Mechanical Engineering.
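The core idea of progressive damage modeling, loading incrementally, checking a failure criterion, and degrading stiffness where failure occurs, can be shown in a toy form. This is a one-ply, max-stress illustration with made-up numbers; it is not the Multi-Continuum Theory decomposition that Helius:MCT performs.

```python
def progressive_damage(strain_steps, e0=50e3, strength=800.0, knockdown=0.1):
    """Toy progressive damage loop: apply strain increments, check a
    max-stress failure criterion, and knock down the modulus on first
    failure. Units: modulus and stress in MPa; all values illustrative."""
    e = e0                      # current modulus
    history = []
    failed = False
    for strain in strain_steps:
        stress = e * strain
        if not failed and stress >= strength:
            e *= knockdown      # stiffness knockdown after failure
            stress = e * strain # recompute stress with degraded modulus
            failed = True
        history.append(stress)
    return history
```

In a real laminate analysis this check runs per ply and per constituent (fiber and matrix separately, in MCT), and the load redistribution across surviving plies is what produces the predicted laminate failure load.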
|
409 |
Credit scheduling and prefetching in hypervisors using Hidden Markov Models / Suryanarayana, Vidya Rao 12 1900 (has links)
Advances in storage technologies such as storage area networking and the virtualization of servers and storage have revolutionized how the explosive data volumes of modern times are stored. With such technologies, resource consolidation has become an increasingly easy task, which has in turn simplified access to remote data. Recent advances in hardware have boosted drive capacities, and hard disks have become far less expensive than before. With these advances, however, come bottlenecks in performance and interoperability. In server virtualization especially, many guest operating systems run on the same hardware, so it is important to ensure that each guest is scheduled at the right time and that the latency of data access is kept low. Hardware advances have made prefetching data into the cache easy and efficient, but interoperability between vendors must be assured, and more efficient algorithms need to be developed for these purposes.
In virtualized environments, where hundreds of virtual machines may be running, good scheduling algorithms are needed to reduce the latency and wait time of virtual machines in the run queue. Current algorithms are oriented toward providing fair access to the virtual machines and are less concerned with reducing latency. This can be a major bottleneck in time-critical applications, such as scientific applications, that have begun deploying SAN technologies to store explosive volumes of data. When data must be extracted from these storage arrays for analysis and processing, the latency of a read operation has to be reduced in order to improve performance.
The research in this thesis aims to reduce the scheduling delay in a Xen hypervisor and to reduce the latency of reading data from disk using Hidden Markov Models (HMMs). The scheduling and prefetching scenarios are modeled using a Gaussian and a discrete HMM, and the latency involved is evaluated. The HMM is a statistical technique used to classify and predict data that has a repetitive pattern over time. The results show that using an HMM decreases the scheduling and access latencies involved. The proposed technique is mainly intended for virtualization scenarios involving hypervisors and storage arrays.
Various patterns of data access involving different ratios of reads and writes are considered, and a discrete HMM (DHMM) is used to prefetch the next most probable block of data that a guest might read. A Gaussian HMM (GHMM) is used to classify the arrival times of requests in a Xen hypervisor, and the GHMM is incorporated into the credit scheduler to reduce scheduling latency. Numerical evaluation of the results shows that scheduling the virtual machines (domains) at the correct time indeed decreases their waiting times in the run queue. / Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical Engineering and Computer Science.
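The discrete-HMM prefetching idea, predicting the next most probable block from the observed access sequence, can be sketched with the standard forward algorithm. The transition matrix, emission matrix, and two-state/three-symbol sizes below are toy assumptions, not the parameters trained in the thesis.

```python
import numpy as np

def predict_next_block(A, B, pi, observations):
    """Forward-algorithm prediction for a discrete HMM: filter the hidden
    state from the observation history, propagate one step, and return the
    most probable next emitted symbol (here, a disk-block class)."""
    alpha = pi * B[:, observations[0]]
    alpha /= alpha.sum()
    for o in observations[1:]:
        alpha = (alpha @ A) * B[:, o]          # forward recursion (filtering)
        alpha /= alpha.sum()                   # normalize to avoid underflow
    next_state = alpha @ A                     # one-step state prediction
    next_obs_dist = next_state @ B             # distribution over next symbol
    return int(np.argmax(next_obs_dist))

# Toy model: 2 hidden access modes, 3 disk-block classes (assumed values)
A = np.array([[0.9, 0.1], [0.2, 0.8]])             # state transitions
B = np.array([[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]])   # emission probabilities
pi = np.array([0.5, 0.5])                          # initial state belief
```

A prefetcher built on this would fetch the predicted block class into cache before the guest's next read; in the thesis the same machinery (with a Gaussian HMM over request arrival times) informs the credit scheduler's dispatch decisions.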
|
410 |
Numerical study of the occurrence of adiabatic shear banding without material failure / Swaminathan, Palanivel 12 1900 (has links)
Segmented chips arising from the formation of adiabatic shear bands are common in high-speed machining of harder alloys. Analytical models describing the mechanics of segmented chip formation have been developed, and these models do not assume or require material failure. Most finite element (FE) studies of segmented chip formation in the literature, by contrast, include failure in the material. This thesis presents a numerical study of the occurrence of adiabatic shear banding without material failure. The mechanical properties of hardened 4340 steel were used as a baseline, and the Johnson-Cook material model for this material was modified to make the material more prone to shear banding. Simulations were performed in two commercial FE packages: ABAQUS/Explicit and LS-DYNA. The arbitrary Lagrangian-Eulerian (ALE) approach in ABAQUS/Explicit showed shear banding when the mass scaling factor was 100; the same simulation did not show shear banding once mass scaling was removed. Simulations with accelerated thermal softening also kept crashing, likely because the onset of shear banding caused too much mesh distortion. Since steady state was not achieved in any of these simulations, it is difficult to say exactly how the intensity of shear banding changes with material properties.
Finite element studies in the recent literature, in which researchers have shown shear banding through strain softening without material failure using the updated Lagrangian approach, were repeated with ALE in ABAQUS. By reducing the parameter controlling curvature-based mesh refinement, shear banding was successfully simulated in ABAQUS. The ALE approach in ABAQUS did not reproduce the chip geometry shown in the literature because the simulations failed: the mesh could not deform to a large enough extent to produce the concave shapes seen in serrated chips. Mass scaling with a factor of 100 did not affect the result. It should be noted that the strain appears to diffuse as element size increases, so smaller elements are needed to better capture the strains along the PSZ.
Lagrangian analysis with a parting-layer approach in LS-DYNA showed shear banding for some plastic properties of the material and not for others. Observations of the effect of different material and mesh parameters on shear banding intensity are made from the simulations, and future research is proposed. / Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Industrial and Manufacturing Engineering.
|