141 |
Three explanations for the link between language style matching and liking
Ireland, Molly Elizabeth, 1984-, 27 February 2014 (has links)
People who match each other's language styles in dialogue tend to have more positive interactions. A person's language style is defined by his or her use of function words (e.g., pronouns, articles), a class of short, commonly used words that make up the grammatical structure of language. The language style matching (LSM) metric indexes the degree of similarity between two individuals' patterns of function word usage.
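The sketch below illustrates one common way such a score can be computed from per-category function-word usage rates; the category list, the smoothing constant, and the example speakers are illustrative assumptions rather than the exact specification used in these studies.

```python
# Illustrative sketch of a language style matching (LSM) score.
# Each speaker is summarized by the percentage of their words falling in
# each function-word category; categories and the 0.0001 smoothing term
# are assumptions for illustration only.

FUNCTION_WORD_CATEGORIES = [
    "personal_pronouns", "impersonal_pronouns", "articles",
    "conjunctions", "prepositions", "auxiliary_verbs",
    "negations", "quantifiers",
]

def lsm_score(rates_a: dict, rates_b: dict) -> float:
    """Average per-category similarity between two speakers' usage rates."""
    similarities = []
    for cat in FUNCTION_WORD_CATEGORIES:
        a, b = rates_a.get(cat, 0.0), rates_b.get(cat, 0.0)
        # 1 = identical usage; the value shrinks toward 0 as usage diverges.
        similarities.append(1.0 - abs(a - b) / (a + b + 0.0001))
    return sum(similarities) / len(similarities)

# Hypothetical speakers' function-word percentages.
speaker_1 = {"articles": 6.8, "personal_pronouns": 10.2, "prepositions": 12.1}
speaker_2 = {"articles": 7.4, "personal_pronouns": 9.1, "prepositions": 13.0}
print(round(lsm_score(speaker_1, speaker_2), 3))
```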
Previous research assumes that function word similarity and its positive social correlates, such as liking, result from convergence that occurs within an interaction. However, the link between language style similarity and liking may alternatively be explained by two kinds of preexisting similarity. First, people tend to like each other more to the degree that they are similar in terms of attitudes, backgrounds, and personality, and these kinds of interpersonal similarity tend to manifest themselves in similar function word use. Second, processing fluency research suggests that people will process typical language styles—which are by definition similar to most other language styles in a normal population—more fluently and thus will like typical speakers more than less typical speakers.
Two studies compared the relationship between liking and three measures of function word similarity (convergence, baseline similarity, and typicality) during brief conversations. Each language similarity variable was hypothesized to positively predict measures of liking individually. However, consistent with the behavior coordination literature, only LSM, a measure of within-conversation language convergence, was expected to predict liking above and beyond the other predictors. Study 1 revealed that both men and women in mixed-sex dyads were more interested in contacting their partners the more that their language styles converged during 4-minute face-to-face conversations. Men were also more interested in contacting their female partners to the degree that women's baseline language styles matched their own. Study 2 found that men, but not women, were more interested in contacting their partners the more that they matched each other's language styles during 8-minute online chats. Results support the hypothesis that language convergence, theoretically an index of interpersonal engagement, positively predicts quasi-behavioral measures of liking.
|
142 |
Essays in Microeconomic Theory
Merrill, Lauren, 26 July 2012 (has links)
If the number of individuals is odd, Campbell and Kelly (2003) show that majority rule is the only non-dictatorial strategy-proof social choice rule on the domain of linear orders that admit a Condorcet winner, an alternative that is preferred to every other by a majority of individuals in pairwise majority voting. The first chapter shows that the claim is false when the number of individuals is even, and provides a characterization of non-dictatorial strategy-proof social choice rules on this domain. Two examples illustrate the primary reason that the result does not translate to the even case: when the number of individuals is even, no single individual can change her reported preference ordering in a manner that changes the Condorcet winner while remaining within the preference domain. Introducing two new definitions to account for this partitioning of the preference domain, the chapter concludes with a counterpart to the characterization of Campbell and Kelly (2003) for the even case.

Adapting the models of Laibson (1994) and O'Donoghue and Rabin (2001), the second chapter presents a learning-naïve agent who is endowed with beliefs about the value of the quasi-hyperbolic discount factor that enters into the utility calculations of her future selves. Facing an infinite-horizon decision problem in which the payoff to a particular action varies stochastically, the agent updates her beliefs over time. Conditions are given under which the behavior of a learning-naïve agent is eventually indistinguishable from that of a sophisticated agent, contributing to the efforts of Ali (2011) to justify the use of sophistication as a modeling assumption.

Building upon the literature on one-to-one matching pioneered by Gale and Shapley (1962), the third chapter introduces a social network to the standard marriage model, embodying informational limitations of the agents. Motivated by the restrictive nature of stability in large markets, two new network-stability concepts are introduced that reflect these limitations; in particular, two agents cannot form a blocking pair if they are not acquainted. Following Roth and Sotomayor (1990), key properties of the sets of network-stable matchings are derived, and the chapter concludes by introducing a network-formation game whose complete-information Nash equilibria correspond to the set of stable matchings.
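As a concrete illustration of the pairwise-majority notion used in the first chapter, the minimal sketch below (with a hypothetical preference profile) checks whether a profile of linear orders admits a Condorcet winner; it is not the chapter's formal construction.

```python
def condorcet_winner(profile, alternatives):
    """Return the Condorcet winner of a profile of linear orders, or None.

    profile: one ranking per voter, each a list of alternatives ordered
    from most to least preferred. An alternative is the Condorcet winner
    if it beats every other alternative in pairwise majority voting,
    i.e. strictly more voters rank it higher.
    """
    def beats(x, y):
        x_over_y = sum(r.index(x) < r.index(y) for r in profile)
        return x_over_y > len(profile) - x_over_y

    for x in alternatives:
        if all(beats(x, y) for y in alternatives if y != x):
            return x
    return None

# Hypothetical profile with an odd number of voters: 'a' beats 'b' and
# 'c' in pairwise majority comparisons, so it is the Condorcet winner.
profile = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
print(condorcet_winner(profile, ["a", "b", "c"]))  # -> a
```

With an even number of voters, pairwise ties become possible and the function may return None, one way of seeing why the odd-case argument does not carry over directly.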
|
143 |
Essays in Microeconomics
Monteiro de Azevedo, Eduardo, January 2012 (has links)
This dissertation consists of three essays on microeconomics. The first essay considers matching markets, markets where buyers and sellers are concerned about whom they interact with. It proposes a model for analyzing these markets that is akin to the standard supply and demand framework. The second essay considers mechanism design, the problem of designing rules for making collective decisions in the presence of private information. It proposes the concept of strategyproofness in the large, which requires that an agent without overly fine information has negligible gains from misreporting her type in a large market. It argues that, for all practical purposes, this concept correctly separates mechanisms where behavior akin to price-taking is observed from those where participants rampantly manipulate their stated preferences. A theorem is proven that gives a precise sense in which strategyproofness in the large is not a very restrictive property. The third essay considers the evolutionary origins of the endowment effect, a bias whereby the willingness to pay for a good is smaller than the willingness to accept. It gives evidence that this bias is not present in a modern hunter-gatherer population, questioning standard evolutionary accounts. It shows that cultural shocks in a subpopulation did give rise to the bias.
|
144 |
Essays in Market Design
Leshno, Jacob, January 2012 (has links)
This dissertation consists of three essays in market design. The first essay studies a dynamic allocation problem. The second presents a new model for many-to-one matching markets where colleges are matched to a large number of students. The third analyzes the effect of the minimum wage on training in internships.

In many assignment problems items arrive stochastically over time, and must therefore be assigned dynamically. The first essay studies the social planner's ability to dynamically match agents with heterogeneous preferences to their preferred items. Impatient agents may misreport their preferences to receive an earlier assignment, causing welfare loss. The essay presents a tractable model of the problem and mechanisms that minimize the welfare loss.

The second essay, which is joint work with Eduardo Azevedo, considers the classical many-to-one matching problem when many students are assigned to a few large colleges. We show that stable matchings have a simple characterization: any stable matching is equivalent to a set of market-clearing cutoffs, admission thresholds for each college. The essay presents a model where a continuum of students is to be matched to a finite number of schools. Using the cutoff representation we show that under broad conditions there is a unique stable matching, and that it varies continuously with respect to the underlying economy.

The third essay, which is joint work with Michael Schwarz, looks at on-the-job training in firms. The firm recovers the cost of training by gradually training the worker over time, paying a wage below the worker's marginal product and providing the remaining compensation in the form of training. When the worker's productivity is close to the minimum wage, the firm finds it profitable to front-load training, making the worker more productive and the training faster. A decrease in the minimum wage reduces the firm's incentive to front-load training, and can make training less efficient.
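The cutoff characterization in the second essay lends itself to a very small demand computation: given admission cutoffs, each student simply attends her most-preferred college whose cutoff her score clears. The sketch below, with hypothetical students and scores, is meant only to illustrate that representation, not the authors' continuum model.

```python
def assign_students(students, cutoffs):
    """Assign each student to her best college whose cutoff she clears.

    students: dict mapping a student to (preference list over colleges,
    dict of her scores at each college); cutoffs: dict mapping each
    college to its admission threshold. Illustrates the cutoff
    representation of a stable matching; None means unmatched.
    """
    matching = {}
    for name, (prefs, scores) in students.items():
        matching[name] = next(
            (college for college in prefs if scores[college] >= cutoffs[college]),
            None,
        )
    return matching

# Hypothetical two-college example.
students = {
    "s1": (["A", "B"], {"A": 0.90, "B": 0.70}),
    "s2": (["A", "B"], {"A": 0.50, "B": 0.80}),
    "s3": (["B", "A"], {"A": 0.85, "B": 0.40}),
}
cutoffs = {"A": 0.80, "B": 0.60}
print(assign_students(students, cutoffs))
# -> {'s1': 'A', 's2': 'B', 's3': 'A'} under these cutoffs
```

Market-clearing cutoffs are then the thresholds at which the demand computed this way just fills each college's capacity.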
|
145 |
The approximation of Cartesian coordinate data by parametric orthogonal distance regression
Turner, David Andrew, January 1999 (has links)
This thesis is concerned with the approximation of Cartesian coordinate data by parametric curves and surfaces, with an emphasis upon a technique known as parametric orthogonal distance regression (parametric ODR). The technique has become increasingly popular in the literature over the past decade and has applications in a wide range of fields, including metrology (the science of measurement) and computer aided design (CAD) modelling. Typically, the data are obtained by recording points measured on the surface of some physical artefact, such as a manufactured part. Parametric ODR involves minimizing the shortest distances from the data to the curve or surface in some norm. Under moderate assumptions, these shortest distances are orthogonal projections from the data onto the approximant, hence the nomenclature ODR. The motivation behind this type of approximation is that, by using a distance-based measure, the resulting best-fit curve or surface is independent of the position or orientation of the physical artefact from which the data are obtained.

The thesis predominantly concerns itself with parametric ODR in a least squares setting, although it is indicated how the techniques described can be extended to other error measures in a fairly straightforward manner. The parametric ODR problem is formulated mathematically, and a detailed survey of the existing algorithms for solving it is given. These algorithms are then used as the basis for developing new techniques, with an emphasis placed upon their efficiency and reliability. The algorithms (old and new) detailed in this thesis are illustrated by problems involving well-known geometric elements such as lines, circles, ellipses and ellipsoids, as well as spline curves and surfaces. Numerical considerations specific to these individual elements, including ones not previously reported in the literature, are addressed. We also consider a sub-problem of parametric ODR known as template matching, which involves mapping in an optimal way a set of data into the same frame of reference as a fixed curve or surface.
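As a toy illustration of the orthogonal-distance idea in the least squares setting (not one of the algorithms developed in the thesis), a circle can be fitted by minimizing each point's distance to the curve, which for a circle reduces to the absolute difference between the distance to the centre and the radius.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_circle_odr(x, y):
    """Least-squares orthogonal distance fit of a circle to 2-D data.

    For a circle the orthogonal distance from a point to the curve is
    |distance to the centre - radius|, so no foot-point computation is
    needed; general parametric curves require the orthogonal projections
    described in the thesis.
    """
    def residuals(params):
        cx, cy, r = params
        return np.hypot(x - cx, y - cy) - r

    # Crude starting values: the centroid and the mean distance to it.
    cx0, cy0 = x.mean(), y.mean()
    r0 = np.hypot(x - cx0, y - cy0).mean()
    return least_squares(residuals, x0=[cx0, cy0, r0]).x

# Noisy points sampled from a circle of radius 2 centred at (1, -1).
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 50)
x = 1 + 2 * np.cos(t) + rng.normal(scale=0.05, size=t.size)
y = -1 + 2 * np.sin(t) + rng.normal(scale=0.05, size=t.size)
print(fit_circle_odr(x, y))  # approximately [1, -1, 2]
```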
|
146 |
ECG processing with the Matching Pursuit algorithm
Βαβατσιούλα, Μαρία, 26 January 2009 (has links)
This thesis examines the effectiveness of the Matching Pursuit algorithm in the field of signal processing. Its general capabilities in processing basic biomedical signals are described, and its advantages in processing the electrocardiogram are examined in particular, since the work forms part of the e-Herofilus telecardiology program.
First, a detailed description is given of how the cardiographic signal is acquired and recorded. This is followed by a description of the Matching Pursuit algorithm and of the results of its application to well-known biosignals, chiefly electroencephalograms and electrocardiograms, from which the advantages of this method emerge over the other methods previously used for transforming and processing signals in the time-frequency domain.
Finally, the Matching Pursuit algorithm is implemented and applied to real electrocardiograms, both healthy and pathological, obtained from MIT's PhysioNet database. Analysis of the results leads to conclusions about the value and effectiveness of the algorithm for denoising the cardiographic signal and for locating the useful information it carries. In addition, by comparing healthy and pathological cardiograms, an attempt is made to identify features that appear in the time-frequency domain and to relate them to the corresponding pathologies, which may in the future serve as a springboard for developing an algorithm for automatic diagnosis of cardiograms. / The following project, which is part of the e-Herofilus telemedicine program, examines the effectiveness of the Matching Pursuit method. Its ability to process biomedical signals is described in general terms, and its advantages in processing the electrocardiogram (ECG) are examined in more detail.
Firstly, there is a detailed description of how the ECG is collected and recorded. Afterwards, the Matching Pursuit algorithm and its results in biosignal processing are described. The algorithm is applied to the electrocardiogram (ECG) and the electroencephalogram (EEG). The results of these applications show the advantages of the Matching Pursuit method over the other methods used in the past for signal processing in the time-frequency plane.
Finally, the implementation of the Matching Pursuit algorithm on real ECGs follows. These ECGs are taken from both healthy and pathological subjects; the source of these recordings is the PhysioNet bank of MIT.
The conclusions of this project underline the value and efficiency of the Matching Pursuit method in denoising the ECG and in detecting the useful signal information in the time-frequency plane.
Additionally, comparing the results of processing healthy and pathological ECGs could lead in the future to the development of an automated diagnosis algorithm, which would be an innovation in both engineering and medical science.
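To make the greedy time-frequency decomposition concrete, the minimal sketch below implements plain matching pursuit over an arbitrary dictionary of unit-norm atoms; the Gabor dictionaries and the ECG-specific processing used in the thesis are not reproduced here.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy matching pursuit decomposition.

    dictionary: 2-D array whose rows are unit-norm atoms. At each step
    the atom most correlated with the current residual is selected and
    its contribution subtracted; returns (coefficients, atom indices,
    residual). A minimal sketch, not the thesis implementation.
    """
    residual = signal.astype(float)
    coeffs, chosen = [], []
    for _ in range(n_iter):
        correlations = dictionary @ residual
        best = int(np.argmax(np.abs(correlations)))
        coeffs.append(correlations[best])
        chosen.append(best)
        residual = residual - correlations[best] * dictionary[best]
    return np.array(coeffs), chosen, residual

# Toy example: a signal built from two atoms of a random unit-norm
# dictionary is decomposed again in a few iterations.
rng = np.random.default_rng(1)
atoms = rng.normal(size=(64, 256))
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)
signal = 3.0 * atoms[5] + 1.5 * atoms[20]
coeffs, chosen, residual = matching_pursuit(signal, atoms, n_iter=5)
# Typically picks atoms 5 and 20 first; the residual norm becomes small.
print(chosen[:2], round(float(np.linalg.norm(residual)), 3))
```

In a denoising setting, one would typically keep only the first few, largest-coefficient atoms and discard the residual, which is dominated by noise.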
|
147 |
Algorithms acceleration of pattern-matching in multi-core architectures
Rodenas Pico, David, 08 July 2011 (has links)
The objective of this thesis is to create or adapt programming models in order to make multi-core processors accessible to the majority of programmers. This objective includes the possibility of reusing existing algorithms, debuggability, and the capacity to introduce changes incrementally. We test the proposed solutions on several kinds of multi-core, including homogeneous and heterogeneous systems, and shared-memory and distributed-memory systems. We further contribute by presenting real algorithms and programs and showing how they can be used for quasi-real-time applications. / The aim of this thesis is to create or adapt a programming model in order to make multi-core processors accessible to almost every programmer. This objective includes the reuse of existing codes and algorithms, debuggability, and the capacity to introduce changes incrementally. We consider multi-cores with many architectures, including homogeneous versus heterogeneous and shared-memory versus distributed-memory designs. We also contribute by presenting real algorithms and programs and showing how some of them can be used for quasi-real-time applications.
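As a minimal illustration of the kind of data-parallel pattern matching targeted here (written in plain Python with the standard multiprocessing module rather than in the programming model proposed by the thesis), the input can be split into overlapping chunks that are scanned on separate cores.

```python
import re
from multiprocessing import Pool

def _scan_chunk(args):
    """Find pattern occurrences inside one chunk; offsets are global."""
    text, pattern, offset = args
    return [offset + m.start() for m in re.finditer(re.escape(pattern), text)]

def parallel_find(text, pattern, n_workers=4):
    """Split the text into chunks overlapping by len(pattern) - 1 so that
    matches straddling a chunk boundary are not lost, scan the chunks in
    parallel, and remove the duplicates produced by the overlap."""
    chunk = max(1, len(text) // n_workers)
    overlap = len(pattern) - 1
    jobs = [
        (text[i:i + chunk + overlap], pattern, i)
        for i in range(0, len(text), chunk)
    ]
    with Pool(n_workers) as pool:
        results = pool.map(_scan_chunk, jobs)
    return sorted({pos for hits in results for pos in hits})

if __name__ == "__main__":
    haystack = "abracadabra " * 1000
    print(len(parallel_find(haystack, "abra")))  # 2000 occurrences
```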
|
148 |
A polyhedral approach to combinatorial complementarity programming problems
de Farias, Ismael, Jr., 12 1900 (has links)
No description available.
|
149 |
Application of the Ensemble Kalman Filter to Estimate Fracture Parameters in Unconventional Horizontal Wells by Downhole Temperature Measurements
Gonzales, Sergio Eduardo, 16 December 2013 (has links)
The increase in energy demand throughout the world has forced the oil industry to develop and expand on current technologies to optimize well productivity. Distributed temperature sensing has become a current and fairly inexpensive way to monitor performance in hydraulically fractured wells in real time with the aid of fiber optics. However, no applications have yet been attempted to describe or estimate the fracture parameters using distributed temperature sensing as the observation parameter. The Ensemble Kalman Filter, a recursive filter, has proved to be an effective tool in inverse problems that determine the parameters of non-linear models. Even though large amounts of data are acquired as input to the estimation, the Ensemble Kalman Filter keeps the computation time manageable by using only "snapshots" of the ensembles collected from multiple simulations, updating the estimate continuously and calibrating it against a reference model.
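To make the filter's role concrete, the sketch below shows a single stochastic Ensemble Kalman Filter analysis step, updating an ensemble of parameter vectors toward observed data; the names, dimensions, and the linear toy "simulator" are illustrative assumptions standing in for the ECLIPSE forward model used in the study.

```python
import numpy as np

def enkf_update(ensemble, observations, obs_operator, obs_std, rng):
    """One stochastic Ensemble Kalman Filter analysis step.

    ensemble: (n_params, n_members) array of parameter vectors.
    observations: (n_obs,) measured data, e.g. downhole temperatures.
    obs_operator: function mapping a parameter vector to predicted data
    (here standing in for the reservoir simulator's forward model).
    """
    n_obs = observations.size
    n_members = ensemble.shape[1]
    predicted = np.column_stack([obs_operator(m) for m in ensemble.T])

    # Ensemble anomalies (deviations from the ensemble mean).
    A = ensemble - ensemble.mean(axis=1, keepdims=True)
    D = predicted - predicted.mean(axis=1, keepdims=True)

    # Kalman gain built from the ensemble covariances.
    P_xd = A @ D.T / (n_members - 1)
    P_dd = D @ D.T / (n_members - 1) + obs_std**2 * np.eye(n_obs)
    gain = P_xd @ np.linalg.inv(P_dd)

    # Perturb the observations per member (stochastic EnKF) and update.
    perturbed = observations[:, None] + rng.normal(
        scale=obs_std, size=(n_obs, n_members)
    )
    return ensemble + gain @ (perturbed - predicted)

# Toy usage with a hypothetical linear "simulator" of two parameters.
rng = np.random.default_rng(0)
H = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
truth = np.array([2.0, -1.0])
obs = H @ truth + rng.normal(scale=0.05, size=3)
prior = rng.normal(size=(2, 100))
posterior = enkf_update(prior, obs, lambda m: H @ m, 0.05, rng)
print(posterior.mean(axis=1))  # drawn toward the truth, roughly [2, -1]
```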
A reservoir model that measures temperature throughout the wellbore is constructed in ECLIPSE. This model is a hybrid representation of what distributed temperature sensing measures in real time throughout the wellbore. Reservoir and fracture parameters are selected in this model with properties and values similar to those of an unconventional well. However, certain parameters, such as fracture width, are manipulated to significantly reduce the computation time.
A sensitivity study is performed for all the reservoir and fracture parameters in order to understand which parameters require more or less data for the Ensemble Kalman Filter to arrive at an acceptable estimate. Two fracture parameters are selected, based on their low sensitivity and their importance in fracture design, for estimation with the Ensemble Kalman Filter across various simulations.
Fracture permeability has very low sensitivity; nevertheless, the Ensemble Kalman Filter arrives at an acceptable estimate of it. Similarly, fracture half-length, with medium sensitivity, arrives at an acceptable estimate after around the same number of integration steps. The true effectiveness of the Ensemble Kalman Filter is demonstrated when both parameters are estimated jointly and still arrive at an acceptable estimate without being computationally expensive. The effectiveness of the Ensemble Kalman Filter is directly connected to the quantity of data acquired: the more data available to run simulations, the better and faster the filter performs.
|
150 |
ClsEqMatcher: An Ontology Matching Approach
Zand-Moghaddam, Yassaman, 09 January 2012 (has links)
No description available.
|