471
Application of Adaptive Algorithm on Analysis of Spatial Energy of Ocean Ambient Noise / Cheng, Ni-hung, 23 July 2009
Ocean ambient noise is one of the factors that affect the performance of sonar and underwater communication systems: it degrades passive listening and active detection, and it reduces the quality of underwater communication links. Variations of temperature and density in the ocean give ambient noise a directional character, and beamforming can be used to analyze the directionality of the noise energy. Conventional beamforming assumes a plane-wave sound field, so the energy arriving from each angle is obtained by a linear accumulation over every array element. The plane-wave assumption, however, may not hold because of boundary interactions during sound propagation and energy attenuation in the water column, so conventional beamforming can suffer from poor beam resolution and low SNR in practice. This research studies the influence of the spatial coherence of ambient noise on beam resolution, and improves the beam resolution by applying an adaptive algorithm drawn from communication system theory. First, simulations were performed to compare the spatial coherence of plane-wave and non-plane-wave ambient noise, and the results were related to beam resolution. The influence of different noise spatial-coherence conditions on beamforming was also analyzed with ASIAEX data. The results show that ambient noise has lower spatial coherence at high frequencies, and that this lower coherence leads to poorer beam resolution. Adaptive beamforming was therefore applied to improve the beam resolution and was compared with conventional beamforming. In simulation, the largest improvement in beam resolution is 42.9 %, with a 6 dB increase in SNR; applied to the ASIAEX data, the largest improvement in beam resolution is 40.0 %, with an 8 dB increase in SNR. With the improved beam resolution the noise notch of the ambient noise becomes more pronounced, which also improves the accuracy of the noise-directionality analysis.
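As a rough, hedged illustration of the "linear accumulation over every array element" that conventional beamforming performs under the plane-wave assumption, the sketch below computes frequency-domain delay-and-sum beam power for a uniform line array. The element count, spacing, frequency band, sound speed, and noise snapshots are invented placeholders, not the array configuration or ASIAEX data analyzed in this thesis, and the adaptive algorithm itself is not shown.

```python
# Hedged sketch: conventional (delay-and-sum) beamforming for a uniform line
# array under a plane-wave sound-field assumption. All numbers are illustrative.
import numpy as np

def delay_and_sum(snapshots, spacing, freqs, c, angles_deg):
    """Beam power versus steering angle for frequency-domain snapshots.

    snapshots  : (n_elements, n_freqs) complex spectra of one data block
    spacing    : element spacing in metres
    freqs      : (n_freqs,) bin frequencies in Hz
    c          : assumed sound speed in m/s (plane-wave model)
    angles_deg : steering angles in degrees (0 = broadside)
    """
    n_el = snapshots.shape[0]
    positions = np.arange(n_el) * spacing
    power = np.empty(len(angles_deg))
    for i, theta in enumerate(np.deg2rad(angles_deg)):
        delays = positions * np.sin(theta) / c                  # per-element delay, s
        steer = np.exp(-2j * np.pi * np.outer(delays, freqs))   # (n_el, n_freqs) phase terms
        beam = np.sum(steer.conj() * snapshots, axis=0) / n_el  # linear sum over elements
        power[i] = np.mean(np.abs(beam) ** 2)
    return power

# Usage with synthetic white-noise snapshots on a 16-element, 1 m spaced array
rng = np.random.default_rng(0)
snaps = rng.standard_normal((16, 64)) + 1j * rng.standard_normal((16, 64))
p = delay_and_sum(snaps, spacing=1.0, freqs=np.linspace(100, 400, 64),
                  c=1500.0, angles_deg=np.arange(-90, 91, 2))
```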

472
Real-time terrain rendering with large geometric deformations / Dahlbom, Anders, January 2003
Computer gamers demand more realistic effects with each release of a new game. This final year project is concerned with deforming the geometry in a terrain rendering environment. The intention is to increase the resolution where the original resolution of the terrain is not enough to cater for all the details associated with a deformation, such as an explosion.

An algorithm for extending the maximum available resolution was found, the DEXTER algorithm, but calculations have shown that its memory consumption is too high for it to be feasible in a game environment. In this project, an algorithm has been implemented based on the DEXTER algorithm, but with some structural changes. The implemented algorithm increases the resolution, if needed, where a deformation occurs. The increased resolution is described by b-spline surfaces, whereas the original resolution is given by a height map. Further, graphics primitives are only allocated to a high-resolution region when they are needed by the refinement process.

It has been found that by using dynamic blocks of graphics primitives, the amount of RAM consumed can be lowered without a severe decrease in rendering speed. However, the implemented algorithm has been found to suffer from frame-rate drops if too many high-resolution cells need to be attached to the refinement process during a single frame.

It has been concluded that the algorithm resulting from this final year project is not suitable for a game environment, as the memory consumption is still too high. The amount of time spent on refining the terrain can also be considered too great, as no time is left for other aspects of a game environment.

The algorithm is, however, considered a good choice for handling deformations, as the updates needed in association with a deformation can be kept small and localized, in accordance with the DEXTER structure. Also, the b-spline surfaces offer more freedom over the deformation than a height map does.
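As a hedged sketch of how a b-spline surface can supply detail beyond a height map's native resolution, the snippet below evaluates a uniform bicubic b-spline patch over a single deformed cell. The 4x4 control grid and sample density are illustrative assumptions; this is not the DEXTER-based data structure or the dynamic block allocation the project actually implements.

```python
# Illustrative only: refining heights inside one terrain cell with a uniform
# bicubic b-spline patch. The control heights are a hypothetical 4x4
# neighbourhood, e.g. taken from the surrounding height map.
import numpy as np

def cubic_bspline_basis(t):
    """Uniform cubic b-spline blending weights for parameter t in [0, 1]."""
    return np.array([
        (1 - t) ** 3,
        3 * t ** 3 - 6 * t ** 2 + 4,
        -3 * t ** 3 + 3 * t ** 2 + 3 * t + 1,
        t ** 3,
    ]) / 6.0

def refine_cell(control_heights, n=8):
    """Evaluate an n x n grid of refined heights from a 4x4 control-height patch."""
    ts = np.linspace(0.0, 1.0, n)
    out = np.empty((n, n))
    for i, u in enumerate(ts):
        bu = cubic_bspline_basis(u)
        for j, v in enumerate(ts):
            bv = cubic_bspline_basis(v)
            out[i, j] = bu @ control_heights @ bv   # tensor-product evaluation
    return out

# Usage: a flat 4x4 patch with a crater-like dip in the inner control points
patch = np.full((4, 4), 10.0)
patch[1:3, 1:3] -= 4.0
refined = refine_cell(patch)
```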

473
Institutional limitations on hegemonic influence in international organizations: conflict resolution in the Organization of American States, 1948-1989 / Shaw, Carolyn Michelle, January 2000
Thesis (Ph. D.)--University of Texas at Austin, 2000. / Vita. Includes bibliographical references (leaves 230-239). Available also in a digital version from Dissertation Abstracts.

474
A game theoretic analysis of verifiability and dispute resolution / Bull, Jesse L., January 2001
Thesis (Ph. D.)--University of California, San Diego, 2001. / Vita. Includes bibliographical references.

475
Reconciliation in Mandrills (Mandrillus sphinx) / Otovic, Pete, 21 May 2007
This study aimed to examine whether mandrills (Mandrillus sphinx) reconcile their conflicts. The data were collected from a captive group of nine mandrills (5 males and 4 females) at the Lowry Park Zoo, ranging in age from 3 to 16 years at the time of the study. After a conflict was observed, the behavior of one of the two former opponents was documented for a period of ten minutes using continuous recording methods. On the next possible observation day, at the same time as the previous conflict, the behavior of the same individual was recorded for an additional ten minutes. Former opponents exchanged peaceful or affiliative signals sooner after a conflict than during control periods. These post-conflict signals were selectively directed towards former opponents, and were most likely to be exchanged in the first two minutes after a conflict's termination. The silent bared-teeth face comprised 62.5% of the first peaceful interactions between former opponents. The best predictor of the likelihood of reconciliation was the dyad's baseline rate of silent bared-teeth face exchange. Mandrill dyads with higher baseline rates of silent bared-teeth face exchange had higher conflict rates and spent less time in non-aggressive proximity than dyads with lower rates. These results are consistent with the Insecure Relationship Hypothesis, which posits that individuals with insecure relationships are more likely to reconcile because their relationships are more likely to be damaged by a conflict than those with secure relationships. The exchange of peaceful post-conflict signals did not appear to have an effect on the behavior of the former opponents.

476
The "resolution" of verb meaning in context / Gaylord, Nicholas L., 24 September 2013
It is well known that the meaning of a word often changes depending on the context in which the word is used. Determining the appropriate interpretation for a word occurrence requires knowledge of the range of possible meanings for that word, and consideration of those possibilities given the available contextual evidence. However, there is still much to be learned about the nature of our lexical knowledge, as well as how we make use of that knowledge in the course of language comprehension. I report on a series of three experiments that explore these issues.

I begin with the question of how precise our perceptions of word meaning in context really are. In Experiment 1, I present a Magnitude Estimation study in which I obtain judgments of meaning-in-context similarity over pairs of intransitive verb occurrences, such as The kid runs / The cat runs, or The cat runs / The lane runs. I find that participants supply a large range of very specific similarity judgments, that judgments are quite consistent across participants, and that these judgments can be at least partially predicted even by simple measures of contextual properties, such as subject-noun animacy and human similarity ratings over pairs of subject nouns. However, I also find that while some participants supply a great variety of ratings, many supply only a few unique values during the task. This suggests that some individuals are making more fine-grained judgments than others. These differences in response granularity could stem from a variety of sources, but the offline nature of Experiment 1 does not enable direct examination of the comprehension process; it focuses on its end result.

In Experiment 2, I present a Speed-Accuracy Tradeoff study that explores the earliest stages of meaning-in-context resolution to better understand the dynamics of the comprehension process itself. In particular, I focus on the timecourse of meaning resolution and the question of whether verbs carry context-independent default interpretations that are activated prior to semantic integration. I find, consistent with what has previously been shown for nouns, that verbs do in fact carry such a default meaning, as can be seen in early false alarms to stimuli such as The dawn broke -- Something shattered. These default meanings appear to reflect the most frequent interpretation of the verb. While they are likely an emergent effect of repeated exposure to frequent interpretations of a verb, I hypothesize that they additionally support a shallow semantic processing strategy. Recently, a growing body of work has begun to demonstrate that our language comprehension is often less than exhaustive and less than maximally accurate -- people often vary the depth of their processing.

In Experiment 3, I explore changes in depth of semantic processing by making an explicit connection to research on human decision making, particularly as regards questions of strategy selection and effort-accuracy tradeoffs. I present a semantic judgment task similar to that used in Experiment 2, but incorporating design principles common in studies on decision making, such as response-contingent financial payoffs and trial-by-trial feedback on response accuracy. I show that participants' preferences for deep and shallow semantic processing strategies are predictably influenced by factors known to affect decision making in other, non-linguistic domains. In lower-risk situations, participants are more likely to accept default meanings even when they are not contextually supported, for example responding "True" to stimuli such as The dawn broke -- Something shattered, even without time pressure. In Experiment 3, I additionally show that participants can adjust not only their processing strategies but also their stimulus acceptance thresholds. Stimuli were normed for truthfulness, i.e. how strongly implied (or entailed) a probe sentence was given its context sentence. Some stimuli in the task possessed an intermediate degree of truthfulness, akin to implicature, as in The log burned -- Something was dangerous (truthfulness 4.55/7). Across 3 conditions, the threshold separating "true" from "false" stimuli was moved such that stimuli like the example just given would be evaluated differently in different conditions. Participants rapidly learned these threshold placements via feedback, indicating that their perceptions of meaning-in-context, as expressed via the range of possible conclusions that could be drawn from the verb, can vary dynamically in response to situational constraints. This learning was additionally found to occur both faster and more accurately under increased levels of risk.

This thesis makes two primary contributions to the literature. First, I present evidence that our knowledge of verb meanings is at least two-layered -- we have access to a very information-rich base of event knowledge, but we also have a more schematic level of representation that is easier to access. Second, I show that these different sources of information enable different semantic processing strategies, and that the choice between these strategies depends on situational characteristics. I additionally argue for the more general relevance of the decision-making literature to the study of language processing, and suggest future applications of this approach for work in experimental semantics and pragmatics.

477
Interactions of single and few organic molecules with SERS hot spots investigated by orientational imaging and super-resolution optical imaging / Stranahan, Sarah Marie, 18 November 2013
Dynamics between organic molecules and surface enhanced Raman scattering (SERS) hot spots are extracted from far-field optical images by two experimental methods presented in this thesis: orientational imaging and super-resolution optical imaging. We introduce SERS orientational imaging as an all-optical technique able to determine the three-dimensional orientations of SERS-active Ag nanoparticle dimers. This is accomplished by observing lobe positions in SERS emission patterns formed by the directional polarization of SERS emission along the longitudinal axis of the dimer. We further extend this technique to discriminate nanoparticle dimers from higher order aggregates by observing the wavelength-dependence of SERS emission patterns, which are unchanged in nanoparticle dimers, but show differences in higher order aggregates involving two or more nanoparticle junctions. Dynamic fluctuations in the SERS emission pattern lobes are observed in aggregates labeled with low dye concentrations, as molecules diffuse into regions of higher electromagnetic enhancement in multiple nanoparticle junctions. In order to investigate these dynamic interactions between single organic molecules and nanoparticle hot spots we present the first super-resolution optical images of single-molecule SERS (SM-SERS), introducing super-resolution imaging as a powerful new tool for SM-SERS studies. Mapping the dynamic movement of SM-SERS centroid positions with ±5 nm resolution reveals the position-dependent SERS intensity as the centroid samples different positions in space. We have proposed that the diffusion of the SERS centroid is due to diffusion of a single molecule on the surface of the nanoparticle, which leads to changes in coupling between the scattering dipole and the optical near field of the nanoparticle. Finally, we combine an isotope-edited bi-analyte SERS spectral approach with super-resolution optical imaging and atomic force microscopy (AFM) structural analysis for a more complete picture of molecular dynamics in SERS hot spots. We demonstrate the ability to observe multiple molecule dynamics in a single hot spot and show that in addition to the single-molecule regime, a "few" molecule regime is able to report on position-dependent SERS intensities in a hot spot. Furthermore, we are able to identify multiple local hot spots in single nanoparticle aggregates.
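For readers unfamiliar with the general principle behind super-resolution centroid mapping, the sketch below fits a 2D Gaussian to a simulated diffraction-limited spot and recovers its centre at sub-pixel precision. The pixel grid, spot width, and Poisson noise level are invented for illustration and are not the emission patterns or fitting pipeline used in these experiments.

```python
# Hedged sketch: sub-pixel centroid localization by 2D Gaussian fitting,
# the generic idea behind super-resolution centroid mapping. Toy data only.
import numpy as np
from scipy.optimize import curve_fit

def gaussian2d(coords, amp, x0, y0, sigma, offset):
    """Isotropic 2D Gaussian, returned flattened for curve_fit."""
    x, y = coords
    return (amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
            + offset).ravel()

# Simulate a noisy diffraction-limited spot on a 15x15 pixel region
rng = np.random.default_rng(1)
yy, xx = np.mgrid[0:15, 0:15]
true_x, true_y = 7.3, 6.8
image = gaussian2d((xx, yy), 500, true_x, true_y, 2.0, 50).reshape(15, 15)
image = rng.poisson(image).astype(float)          # shot-noise model

# Fit and report the recovered centroid (close to the true sub-pixel position)
p0 = (image.max() - image.min(), 7.0, 7.0, 2.0, image.min())
popt, _ = curve_fit(gaussian2d, (xx, yy), image.ravel(), p0=p0)
print(f"fitted centroid: ({popt[1]:.2f}, {popt[2]:.2f}) px")
```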

478
Application of e-TDR to achieve precise time synchronization and controlled asynchronization of remotely located signals / Sripada, Aparna, 14 January 2014
A Time Domain Reflectometer (TDR) measures the electrical length of a cable from the applied end to the location of an impedance change. An impedance change causes a portion of the applied signal to reflect back, according to the value of its reflection coefficient. The time of flight (TOF) between the applied and reflected waves is computed and multiplied by the previously determined signal propagation velocity to determine the location of the impedance change. To measure the cable's electrical length, we intentionally open-terminate its output end, which makes the reflection coefficient maximum (= 1). Conventional TDRs designed for testing the integrity of long cables use various closed pulse-shaped test signals, such as the half sine wave and the Gaussian pulse, that disperse (change shape) and change velocity while propagating along the cable. Quoting Dr. Leon Brillouin's comments on electromagnetic energy propagation [10], "in a vacuum, all waves (e.g. frequencies) propagate at the same velocity, hence without distortion, whereas in a dispersive lossy media, except for an infinitely long sinusoidal waveform, distortion will occur due to frequency dependent velocity." This signal distortion generally degrades the accuracy of the measurement of the signal's TOF.

We discuss here an Enhanced Resolution Time Domain Reflectometer (e-TDR). The enhanced resolution is due to a newly discovered signal called SPEEDY DELIVERY (SD) by Dr. Robert Flake at The University of Texas at Austin (US Patent 6,441,695 B1, issued August 27, 2002). The SD signal has a propagation velocity that is a programmable constant, and it preserves its shape during propagation through dispersive lossy media (DLM). This behavior allows us to use the e-TDR in applications where remotely located signals need to be synchronized or asynchronized precisely. Potential applications include signal-based synchronization of devices such as sensors connected in a network. Since the cables carrying data from sensors at discrete and remote locations to a collecting center have different electrical lengths, it is necessary to precisely offset the timestamps of the incoming signals from these sensors to allow accurate data fusion. Our prototype is capable of synchronizing signals 1,200 ft (~400 m) apart with sub-nanosecond resolution.
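A minimal worked example of the basic TDR relation described above: the distance to the impedance change is the propagation velocity times the time of flight, divided by two because the signal travels out and back. The velocity factor and the measured TOF below are assumed values for illustration, not figures from the e-TDR prototype.

```python
# Hedged sketch of the open-terminated TDR length calculation. Assumed values only.
C_VACUUM = 299_792_458.0          # speed of light in vacuum, m/s

def cable_length(tof_s, velocity_factor=0.66):
    """Electrical length from the applied end to the impedance change (open end)."""
    v = velocity_factor * C_VACUUM          # signal propagation velocity in the cable
    return v * tof_s / 2.0                  # halve: the round trip covers the length twice

# A reflection observed 1.25 microseconds after launch on a typical coax cable:
print(f"{cable_length(1.25e-6):.1f} m")     # ~123.7 m
```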

479
Persuasion strategies for litigators and negotiators: what’s the difference? / Ahmed, Jessica Amber, 17 March 2014
Persuasion scholars have documented the use of compliance-gaining messages in both negotiation and litigation. The extant research offers suggestions for litigators and negotiators, but fails to compare the methods of persuasion in the two circumstances in order to advise attorneys and clients which communication messages to employ in the different contexts. The present study explores differences in the use of 7 common compliance-gaining message strategies (“It's Up To You”, “This Is The Way Things Are”, “Equity”, “Benefit (Other)”, “Bargaining”, and “Cooperation”; Kellerman, 2004) in separate negotiation and litigation cases. Findings indicate that “This Is The Way Things Are” messages were more frequent in litigation than in negotiation, whereas “Cooperation” messages were more common in negotiation than in litigation. No other significant differences in strategy frequency across the contexts were found. These results indicate that some differences exist between the messages used in negotiation and litigation, and that future research should investigate what other messages may be used differently in the two contexts.

480
Supervised language models for temporal resolution of text in absence of explicit temporal cues / Kumar, Abhimanu, 18 March 2014
This thesis explores the temporal analysis of text using the implicit temporal cues present in a document. We consider the case where all explicit temporal expressions, such as specific dates or years, are removed from the text and a bag-of-words approach is used to predict a timestamp for the text. A set of gold-standard text documents with timestamps is used as the training set. We also predict time spans for Wikipedia biographies based on their text. We have training texts from 3800 BC to the present day. We partition this timeline into equal-sized chronons and build a probability histogram for a test document over this chronon sequence. The document is assigned to the chronon with the highest probability.

We use 2 approaches: 1) a generative language model with Bayesian priors, and 2) a KL divergence based model. To counter the sparsity in the documents and chronons we use 3 different smoothing techniques across models. We use 3 diverse datasets to test our models: 1) Wikipedia Biographies, 2) Gutenberg Short Stories, and 3) a Wikipedia Years dataset.

Our models are trained on a subset of Wikipedia biographies. We concentrate on two prediction tasks: 1) timestamp prediction for a generic text, or mid-span prediction for a Wikipedia biography, and 2) life-span prediction for a Wikipedia biography. We achieve an f-score of 81.1% for the life-span prediction task and a mean error of around 36 years for mid-span prediction for biographies ranging from the present day back to 3800 BC. The best model gives a mean error of 18 years for publication-date prediction for short stories that are uniformly distributed in the range 1700 AD to 2010 AD. Our models exploit the temporal distribution of text for associating time. Our error analysis reveals interesting properties of the models and datasets used.

We try to combine explicit temporal cues extracted from the document with its implicit cues to obtain a combined prediction model. We show that a combination of the date-based predictions and language-model divergence predictions is highly effective for this task: our best model obtains an f-score of 81.1%, and the median error between actual and predicted life-span midpoints is 6 years. This will be one of the emphases of our future work.

The above analyses demonstrate that there are strong temporal cues within texts that can be exploited statistically for temporal predictions. Along the way we also create good benchmark datasets for the research community to further explore this problem.
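As a hedged sketch of a KL-divergence-style chronon assignment in the spirit of the second approach above, the snippet below scores a bag-of-words document against add-alpha-smoothed chronon language models and picks the closest one. The chronons, vocabulary, smoothing constant, and toy documents are placeholders, not the Wikipedia-trained models or smoothing schemes evaluated in the thesis.

```python
# Hedged sketch: assign a document to the chronon whose unigram language model
# minimizes KL(doc || chronon). Toy vocabulary and counts for illustration only.
import math
from collections import Counter

def smoothed_lm(counts, vocab, alpha=0.01):
    """Add-alpha smoothed unigram language model over a fixed vocabulary."""
    total = sum(counts.values()) + alpha * len(vocab)
    return {w: (counts.get(w, 0) + alpha) / total for w in vocab}

def neg_kl_score(doc_tokens, chronon_lm):
    """-KL(doc || chronon) up to a constant (the document's own entropy);
    maximizing this score is equivalent to minimizing the KL divergence."""
    doc_counts = Counter(doc_tokens)
    n = sum(doc_counts.values())
    return sum((c / n) * math.log(chronon_lm[w])
               for w, c in doc_counts.items() if w in chronon_lm)

# Toy example: two chronons with different characteristic vocabulary
vocab = {"telegraph", "locomotive", "internet", "smartphone", "war", "city"}
lm_1850s = smoothed_lm(Counter({"telegraph": 5, "locomotive": 4, "war": 2, "city": 3}), vocab)
lm_2000s = smoothed_lm(Counter({"internet": 6, "smartphone": 4, "war": 1, "city": 3}), vocab)

doc = ["telegraph", "city", "locomotive", "war"]
best = max([("1850s", lm_1850s), ("2000s", lm_2000s)],
           key=lambda kv: neg_kl_score(doc, kv[1]))
print("predicted chronon:", best[0])   # -> 1850s for this toy document
```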