About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Decision-Making with Big Information: The Relationship between Decision Context, Stopping Rules, and Decision Performance

Gerhart, Natalie 08 1900 (has links)
Ubiquitous computing provides access to vast amounts of data, which is changing the way humans interact with each other, with computers, and with their environments. Information is literally at our fingertips with touchscreen technology, but it is not valuable until it is understood. As a result, selecting which information to use in a decision process is a challenge in the current information environment (Lu & Yuan, 2011). The purpose of this dissertation was to investigate how individual decision makers, in different decision contexts, determine when to stop collecting information given the availability of virtually unlimited information. Decision makers must not only make the ultimate decision, but must also decide when they have enough information to make it (Browne, Pitts, & Wetherbe, 2007). In determining how much information to collect, researchers have found that people engage in 'satisficing' in order to make decisions, particularly when there is more information than they can manage (Simon, 1957). A more recent elucidation of information use relies on the idea of stopping rules and identifies five common stopping rules information seekers use: mental list, representational stability, difference threshold, magnitude threshold, and single criterion (Browne et al., 2007). Prior research indicates a lack of understanding of information use (Prabha, Connaway, Olszewski, & Jenkins, 2007) and of information overload (Eppler & Mengis, 2004) in the Information Systems literature. Moreover, research indicates a lack of clarity about what information should be used in different decision contexts (Kowalczyk & Buxmann, 2014). The increasing availability of information further complicates and necessitates research in this area. This dissertation seeks to fill these gaps in the literature by determining how information use changes across decision contexts and what the relationships between stopping rules are.

Two distinct methodologies were used to test the hypotheses in the conceptual model, both of which contribute to research on information stopping rules: one tracks the participant during an online search, and the second asks follow-up survey questions on a Likert scale. One of four search tasks (a professional or personal context crossed with a big-data-analytics understanding task or a restaurant location search) was randomly assigned to each participant. Results show that different stopping rules are more useful in different decision contexts. Specifically, professional tasks are more likely to involve stopping rules with an a priori decision on how much information to collect, while personal tasks encourage users to determine how much information to collect during the search process. The analysis also shows that different stopping rules place different emphases on the quality and quantity of information. Specifically, representational stability requires both high quality and high quantity of information, while other stopping rules indicate a preference for one of the two. Finally, information quality and quantity ultimately have a positive relationship with decision confidence, satisfaction, and efficiency.

The findings of this research are useful to practitioners and academics tackling issues with the availability of ever more information. As systems are designed for information search, understanding information stopping rules becomes increasingly important.
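As a rough illustration of how two of these stopping rules differ, the sketch below simulates an information seeker scanning a stream of items: a magnitude-threshold rule fixes the amount of information a priori, while a representational-stability rule stops once new items no longer shift the running estimate. The item model, tolerance, and window size are invented for the example and are not from the dissertation.

```python
import random

def magnitude_threshold(items, threshold=10):
    """Stop after a pre-decided number of items (set before the search)."""
    return items[:threshold]

def representational_stability(items, tol=0.05, window=3):
    """Stop once the running mean relevance stays stable over a window."""
    collected, means = [], []
    for item in items:
        collected.append(item)
        means.append(sum(collected) / len(collected))
        if len(means) > window and max(means[-window:]) - min(means[-window:]) < tol:
            break  # the seeker's representation has stabilized
    return collected

random.seed(1)
stream = [random.random() for _ in range(50)]    # relevance of each item found
print(len(magnitude_threshold(stream)))          # always 10: decided a priori
print(len(representational_stability(stream)))   # varies: decided during search
```

This mirrors the reported contrast between professional tasks (a priori amounts) and personal tasks (amounts determined during the search).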
2

TAAF Stopping Rules for Maximizing the Utility of One-Shot Systems

Maillart, Lisa M. 25 April 1997 (has links)
Test-analyze-and-fix (TAAF) is the most commonly recognized method of improving system reliability. The work presented here addresses the question of when to stop testing during TAAF programs involving one-shot systems when the number of systems to be produced is predetermined and the probabilities of identifying and successfully correcting each failure mode are less than one. The goal is to determine when to cease testing so as to maximize utility, where utility is defined as the number of systems expected to perform successfully in the field after deployment of the lot. Two TAAF stopping rules are presented. Simulation is used to model TAAF execution under different reliability growth conditions. Four discrete reliability growth models (DRGMs) are used to generate "real world" reliability growth and to estimate reliability growth using hypothetical observed success/failure data. Ranges for the following parameters are considered: starting reliability, growth rate, maximum achievable reliability, number of systems to be produced, probability of incorrectly identifying a failure mode, and probability of an unsuccessful design modification. Conclusions are drawn regarding stopping rule performance in terms of stopping rule signal location, utility loss, achieved reliability, and fraction tested. Both rules perform well and are implementable from a practical standpoint. Specific recommendations for stopping rule implementation are given based on the controllable factors: estimation methodology and lot size. / Master of Science
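The core trade-off can be sketched numerically: each test consumes one one-shot system from the lot, while successful fixes raise the reliability of whatever remains, so expected field successes first rise and then fall. The exponential growth curve and all parameter values below are hypothetical stand-ins for the thesis's discrete reliability growth models, not its actual rules.

```python
import math

def reliability(k, r0=0.5, rmax=0.9, growth=0.15):
    """Hypothetical DRGM: system reliability after k TAAF tests."""
    return rmax - (rmax - r0) * math.exp(-growth * k)

def utility(k, lot_size=100):
    """Expected field successes if testing stops after k tests."""
    return (lot_size - k) * reliability(k)

def stop_index(lot_size=100):
    """Stop once one more test no longer increases expected utility."""
    for k in range(lot_size):
        if utility(k + 1, lot_size) <= utility(k, lot_size):
            return k
    return lot_size

k = stop_index()
print(f"stop after {k} tests; expected field successes ~ {utility(k):.1f}")
```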
3

Algorithmic Developments in Monte Carlo Sampling-Based Methods for Stochastic Programming

Pierre-Louis, Péguy January 2012 (has links)
Monte Carlo sampling-based methods are frequently used in stochastic programming when an exact solution is not possible. In this dissertation, we develop two sets of Monte Carlo sampling-based algorithms to solve classes of two-stage stochastic programs. These algorithms follow a sequential framework in which a candidate solution is generated and evaluated at each step. If the solution is of the desired quality, the algorithm stops and outputs the candidate solution along with an approximate (1 − α) confidence interval on its optimality gap. The first set of algorithms, which we refer to as fixed-width sequential sampling methods, generate a candidate solution by solving a sampling approximation of the original problem. Using an independent sample, a confidence interval is built on the optimality gap of the candidate solution. The procedures stop when the confidence interval width plus an inflation factor falls below a pre-specified tolerance ε. We present two variants: the fully sequential procedures use deterministic, non-decreasing sample size schedules, whereas in the other variant, the sample size at the next iteration is determined using current statistical estimates. We establish the desired asymptotic properties and present computational results. In the second set of sequential algorithms, we combine deterministically valid and sampling-based bounds. These algorithms, labeled sampling-based sequential approximation methods, take advantage of certain characteristics of the models, such as convexity, to generate candidate solutions and deterministic lower bounds through Jensen's inequality. A point estimate of the optimality gap is calculated by generating an upper bound through sampling. The procedure stops when the point estimate of the optimality gap falls below a fraction of its sample standard deviation. We show asymptotically that this algorithm finds a solution of the desired quality tolerance. We present variance reduction techniques and show their effectiveness through an empirical study.
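A skeleton of the fixed-width idea, instantiated on a toy newsvendor problem (where the sampling approximation is solvable as a sample quantile): grow the sample on a deterministic schedule, build a gap confidence interval from an independent sample, and stop when its width plus an inflation factor drops below the tolerance. The profit model, critical ratio, and the 1/n inflation term are illustrative choices, not the dissertation's specification.

```python
import random, statistics

def solve_saa(sample, q=0.75):
    """Toy newsvendor SAA: the optimal order is the critical-ratio quantile."""
    return sorted(sample)[int(q * (len(sample) - 1))]

def profit(x, demand, price=4.0, cost=1.0):   # critical ratio (4-1)/4 = 0.75
    return price * min(x, demand) - cost * x

def gap_ci_width(x, sample, z=1.645):
    """Point estimate plus one-sided CI term for the optimality gap of x."""
    best = solve_saa(sample)
    gaps = [profit(best, d) - profit(x, d) for d in sample]
    return statistics.mean(gaps) + z * statistics.stdev(gaps) / len(gaps) ** 0.5

random.seed(7)
demand = lambda: random.gauss(50.0, 10.0)
n, eps = 100, 0.25
while True:
    candidate = solve_saa([demand() for _ in range(n)])
    width = gap_ci_width(candidate, [demand() for _ in range(n)])  # independent sample
    if width + 1.0 / n <= eps:          # CI width + inflation below tolerance
        break
    n *= 2                              # deterministic, non-decreasing schedule
print(f"stopped at n={n}, order quantity={candidate:.1f}, gap bound={width:.3f}")
```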
4

The Use of Landweber Algorithm in Image Reconstruction

Nikazad, Touraj January 2007 (has links)
Ill-posed sets of linear equations typically arise when discretizing certain types of integral transforms. A well-known example is image reconstruction, which can be modelled using the Radon transform. After expanding the solution into a finite series of basis functions, a large, sparse and ill-conditioned linear system arises. We consider the solution of such systems. In particular, we study a new class of iteration methods named DROP (for Diagonal Relaxed Orthogonal Projections), constructed for solving both linear equations and linear inequalities. This class can also be viewed, when applied to linear equations, as a generalized Landweber iteration. The method is compared with other iteration methods using test data from a medical application and from electron microscopy. Our theoretical analysis includes convergence proofs of the fully simultaneous DROP algorithm for linear equations without consistency assumptions, and of block-iterative algorithms, both for linear equations and linear inequalities, in the consistent case.

When applying an iterative solver to an ill-posed set of linear equations, the error typically decreases at first, but after some iterations (depending on the amount of noise in the data and the degree of ill-posedness) it starts to increase. This phenomenon is called semi-convergence. It is therefore vital to find good stopping rules for the iteration.

We describe a class of stopping rules for Landweber-type iterations for solving linear inverse problems. The class includes, e.g., the well-known discrepancy principle and the monotone error rule. We also unify the error analysis of these two methods. The stopping rules depend critically on a certain parameter whose value needs to be specified. A training procedure is therefore introduced to secure robustness. The advantages of using trained rules are demonstrated on examples taken from image reconstruction from projections. / We consider the solution of the linear systems of equations that arise when inverse problems are discretized. These problems are characterized by the fact that the sought information cannot be measured directly. A well-known example is computed tomography, where one measures how much radiation passes through an object illuminated by a radiation source placed at different angles relative to the object. The aim, of course, is to generate images of the object's interior (in medical applications, of the interior of the body). We study a class of iterative methods for solving the resulting systems of equations. The methods are applied to test data from image reconstruction and compared with other proposed iteration methods. We also carry out a convergence analysis for different choices of method parameters. When an iterative method is used, one starts from an initial approximation that is then gradually improved. However, inverse problems are sensitive even to relatively small errors in the measured data, which shows up in the iterates first improving and then deteriorating. This phenomenon, so-called semi-convergence, is well known and well explained, but it means that it is important to construct good stopping rules: if the iteration is stopped too early the resolution is poor, and if it is stopped too late the image becomes blurred and noisy. The thesis studies a class of stopping rules that are analyzed theoretically and tested on measured data. In particular, a training procedure is proposed in which the stopping rule is presented with data for which the correct value of the stopping index is known; these data are used to determine an important parameter of the rule. The rule is then applied to new, unseen data. Such a trained stopping rule turns out to work well on test data from the image reconstruction field.
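The discrepancy principle described above is easy to exhibit on a small synthetic system: run Landweber, x_{k+1} = x_k + ω Aᵀ(b − A x_k), and stop as soon as the residual falls to the noise level, before semi-convergence sets in. The test matrix, noise level, and τ = 1.1 are arbitrary choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(0.9 ** np.arange(n)) @ V.T       # rapidly decaying spectrum
x_true = V[:, 0] + 0.5 * V[:, 1]
delta = 1e-3
b = A @ x_true + delta * rng.standard_normal(n)  # noise norm ~ delta * sqrt(n)

omega = 1.0 / np.linalg.norm(A, 2) ** 2          # safe Landweber step size
tau = 1.1                                        # discrepancy parameter > 1
x = np.zeros(n)
for k in range(100_000):
    r = b - A @ x
    if np.linalg.norm(r) <= tau * delta * np.sqrt(n):   # discrepancy principle
        break                                           # stop at the noise level
    x = x + omega * A.T @ r
print(f"stopped at iteration {k}, error {np.linalg.norm(x - x_true):.3e}")
```

Training, as described in the abstract, would tune a parameter such as τ on problems where the best stopping index is known.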
5

Modified iterative Runge-Kutta-type methods for nonlinear ill-posed problems

Pornsawad, Pornsarp, Böckmann, Christine January 2014 (has links)
This work is devoted to the convergence analysis of a modified Runge-Kutta-type iterative regularization method for solving nonlinear ill-posed problems under a priori and a posteriori stopping rules. Convergence rate results for the proposed method can be obtained under a Hölder-type source-wise condition if the Fréchet derivative is properly scaled and locally Lipschitz continuous. Numerical results are obtained using the Levenberg-Marquardt and Radau methods.
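To show where an a posteriori rule sits in such a scheme, here is a hedged sketch using a Levenberg-Marquardt step on a toy nonlinear problem; the paper's actual Runge-Kutta-type update differs, and the forward map, noise level, and τ below are invented for the illustration.

```python
import numpy as np

def F(x):                        # toy nonlinear forward operator
    return np.array([x[0] ** 2 + x[1], np.sin(x[0]) + x[1] ** 2])

def J(x):                        # its Jacobian (Frechet derivative)
    return np.array([[2 * x[0], 1.0], [np.cos(x[0]), 2 * x[1]]])

x_true = np.array([0.8, 0.3])
delta = 1e-4
y = F(x_true) + delta * np.array([1.0, -1.0])    # noisy data, known noise level

x, tau, mu = np.array([0.5, 0.5]), 1.5, 1e-2
for k in range(200):
    r = y - F(x)
    if np.linalg.norm(r) <= tau * delta * np.sqrt(2):   # a posteriori stop
        break
    Jk = J(x)
    x = x + np.linalg.solve(Jk.T @ Jk + mu * np.eye(2), Jk.T @ r)  # LM step
print(f"stopped at k={k}, x={x}, error={np.linalg.norm(x - x_true):.2e}")
```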
6

Effective, Efficient Retrieval in a Network of Digital Information Objects

France, Robert Karl 27 November 2001 (has links)
Although different authors mean different things by the term "digital libraries," one common thread is that they include or are built around collections of digital objects. Digital libraries also provide services to large communities, one of which is almost always search. Digital library collections, however, have several characteristic features that make search difficult. They are typically very large. They typically involve many different kinds of objects, including but not limited to books, e-published documents, images, and hypertexts, and often including items as esoteric as subtitled videos, simulations, and entire scientific databases. Even within a category, these objects may have widely different formats and internal structure. Furthermore, they typically stand in complex relationships with each other and with such non-library objects as persons, institutions, and events. Relationships are a common feature of traditional libraries in the form of "See / See also" pointers, hierarchical relationships among categories, and relations between bibliographic and non-bibliographic objects such as having an author or being on a subject. Binary relations (typically in the form of directed links) are a common representational tool in computer science for structures from trees and graphs to semantic networks. And in recent years the World-Wide Web has made the construct of linked information objects commonplace for millions. Despite this, relationships have rarely been given "first-class" treatment in digital library collections or software. MARIAN is a digital library system designed and built to store, search over, and retrieve large numbers of diverse objects in a network of relationships. It is designed to run efficiently over large collections of digital library objects. It addresses the problem of object diversity through a system of classes unified by common abilities, including searching and presentation. Divergent internal structure is exposed and interpreted using a simple and powerful graphical representation, and varied formats are handled through a unified system of presentation. Most importantly, MARIAN collections are designed specifically to include relations, in the form of an extensible collection of different sorts of links. This thesis presents MARIAN and argues that it is both effective and efficient. MARIAN is effective in that it provides new and useful functionality to digital library end-users, and in that it makes constructing, modifying, and combining collections easy for library builders and maintainers. MARIAN is efficient in that it works from an abstract presentation of search over networked collections to define, on the one hand, the common operations required to implement a broad class of search engines and, on the other, performance standards for those operations. Although some operations involve a high minimum cost under the most general assumptions, lower costs can be achieved when additional constraints are present. In particular, it is argued that the statistics of digital library collections can be exploited to obtain significant savings. MARIAN is designed to do exactly that, and evidence from early versions suggests that it succeeds. In conclusion, MARIAN presents a powerful and flexible platform for retrieval over large, diverse collections of networked information, significantly extending the representation and search capabilities of digital libraries. / Ph. D.
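As a loose illustration of giving relationships first-class status in retrieval (not MARIAN's actual model), one can score objects by combining a direct text match with evidence propagated along weighted, typed links; all node names, link types, and weights below are invented.

```python
from collections import defaultdict

links = {                                     # (source, link_type, target)
    ("doc1", "has_author", "smith"),
    ("doc2", "has_author", "smith"),
    ("doc2", "cites", "doc1"),
}
link_weight = {"has_author": 0.5, "cites": 0.3}
text_score = {"doc1": 0.9, "doc2": 0.2, "smith": 0.4}  # direct query match

def network_score(damping=0.5):
    """One round of propagating evidence from link targets back to sources."""
    score = defaultdict(float, text_score)
    for src, ltype, dst in links:
        # an object linked to a well-matching object gains some evidence
        score[src] += damping * link_weight[ltype] * text_score.get(dst, 0.0)
    return sorted(score.items(), key=lambda kv: -kv[1])

for obj, s in network_score():
    print(f"{obj}: {s:.2f}")
```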
7

Algebraic Reconstruction Methods

Nikazad, Touraj January 2008 (has links)
Ill-posed sets of linear equations typically arise when discretizing certain types of integral transforms. A well-known example is image reconstruction, which can be modeled using the Radon transform. After expanding the solution into a finite series of basis functions, a large, sparse and ill-conditioned linear system occurs. We consider the solution of such systems. In particular, we study a new class of iteration methods named DROP (for Diagonal Relaxed Orthogonal Projections), constructed for solving both linear equations and linear inequalities. This class can also be viewed, when applied to linear equations, as a generalized Landweber iteration. The method is compared with other iteration methods using test data from a medical application and from electron microscopy. Our theoretical analysis includes convergence proofs of the fully simultaneous DROP algorithm for linear equations without consistency assumptions, and of block-iterative algorithms, both for linear equations and linear inequalities, in the consistent case.

When applying an iterative solver to an ill-posed set of linear equations, the error usually decreases at first, but after some iterations (depending on the amount of noise in the data and the degree of ill-posedness) it starts to increase. This phenomenon is called semi-convergence. We study the semi-convergence performance of Landweber-type iteration and propose new ways to specify the relaxation parameters. These are computed so as to control the propagated error. We also describe a class of stopping rules for Landweber-type iteration for solving linear inverse problems. The class includes the well-known discrepancy principle and the monotone error rule. We unify the error analysis of these two methods. The stopping rules depend critically on a certain parameter whose value needs to be specified. A training procedure is therefore introduced to secure robustness. The advantages of using trained rules are demonstrated on examples taken from image reconstruction from projections.

Kaczmarz's method, also called ART (Algebraic Reconstruction Technique), is often used for solving the linear systems that appear in image reconstruction; it is a fully sequential method. We examine and compare ART and its symmetric version. It is shown that the cycles of symmetric ART, unlike those of ART, converge to a weighted least squares solution if and only if the relaxation parameter lies between zero and two. Further, we show that ART has a faster asymptotic rate of convergence than symmetric ART. A stopping criterion is also proposed and evaluated for symmetric ART.

We further investigate a class of block-iterative methods used in image reconstruction. The cycles of the iterative sequences are characterized in terms of the original linear system. We define symmetric block-iteration and compare the behavior of symmetric and non-symmetric block-iteration. The results are illustrated using some well-known methods. A stopping criterion is offered and assessed for symmetric block-iteration.
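The fully sequential ART (Kaczmarz) iteration discussed above projects the current iterate onto one equation's hyperplane at a time, scaled by a relaxation parameter λ ∈ (0, 2); the sketch below runs it on an arbitrary consistent random system.

```python
import numpy as np

def art_sweeps(A, b, sweeps=50, lam=1.0):
    """Fully sequential ART: one row projection at a time, cycled repeatedly."""
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            residual = b[i] - A[i] @ x
            x += lam * residual / row_norms[i] * A[i]   # project onto row i
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
x = art_sweeps(A, A @ x_true)                 # consistent system: b = A x_true
print(f"relative error after 50 sweeps: "
      f"{np.linalg.norm(x - x_true) / np.linalg.norm(x_true):.2e}")
```

Symmetric ART, as studied in the thesis, would traverse the rows forward and then backward within each cycle.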
8

Development of stopping rule methods for the MLEM and OSEM algorithms used in PET image reconstruction

Γαϊτάνης, Αναστάσιος 11 January 2011 (has links)
The aim of this thesis is the development of stopping rule methods for the MLEM and OSEM algorithms used in image reconstruction for positron emission tomography (PET). The development of the stopping rules is based on a study of the properties of both algorithms. Analyzing their mathematical expressions, it can be observed that the pixel updating coefficients (PUC) play a key role in updating the reconstructed image from iteration k to k+1. For the analysis of the properties of the PUC, a PET scanner geometry was simulated using Monte Carlo methods. For image reconstruction using iterative techniques, the calculation of the transition matrix is essential, and it depends fully on the geometrical characteristics of the PET scanner. The MLEM and OSEM algorithms were used to reconstruct the projection data. To compare the reconstructed and true images, two figures of merit (FOM) were used: a) the normalized root mean square deviation (NRMSD) and b) the chi-square (χ2). The behaviour of the PUC values for zero and non-zero pixels in the phantom image was analyzed, and the two were found to behave differently. Based on this observation, the vector of PUC values over all non-zero pixels of the reconstructed image was analyzed, and it was found that the histogram of PUC values has two components: one around C(i) = 1.0 and a tail component with values C(i) < 1.0. In this way a variable Cmin(k) was defined: the minimum value of the vector of pixel updating coefficients among the non-zero pixels of the reconstructed image at iteration k, where the minimum is taken over the I pixels of the image.

Further work was performed to establish the dependence of Cmin on the image characteristics, image topology, and activity level. The analysis shows that the parameterization of Cmin is reliable and allows the establishment of a robust stopping rule for the MLEM algorithm. Furthermore, following a different approach, a new stopping rule using the log-likelihood properties of the MLEM algorithm was developed. The two rules were evaluated using the independent Digimouse phantom. The study revealed that both stopping rules produce reconstructed images with similar properties. The same study was performed for the OSEM algorithm, and a stopping rule dedicated to each number of subsets was developed for OSEM. / The aim of this dissertation is the development of stopping criteria for the iterative algorithms (MLEM and OSEM) used in medical image reconstruction in positron emission tomography (PET) scanners. The development of the stopping criteria was based on a study of the properties of the MLEM and OSEM algorithms. From the mathematical expression of the two algorithms it follows that the updating coefficients of the image pixels play an important role in the reconstruction from iteration to iteration. For the analysis, a PET scanner was simulated using Monte Carlo methods. For image reconstruction with the MLEM and OSEM algorithms, the transition matrix was computed; it depends on the geometric characteristics of the PET scanner, and Monte Carlo methods were used for its computation as well. The Hoffman brain phantom and the 4D MOBY phantom were used as digital phantoms, and projection data were generated for each phantom at different activity levels. For the comparison of the reconstructed and original images, two separate quality indices were used, the NRMSD and the chi-square. The analysis showed that the updating coefficients of the non-zero pixels of the image tend toward the value 1.0 as the iterations increase, whereas for the zero pixels this does not happen. Analyzing the vector of updating coefficients of the non-zero pixels of the reconstructed image further, it was found to have two parts: a) a peak at coefficient values of 1.0 and b) a tail with coefficient values below 1.0. As the iterations increased, the number of pixels with coefficient 1.0 grew while, at the same time, the minimum value of the coefficient vector moved toward 1.0. In this way a variable was defined, where N is the number of pixels in the image, k the iteration, and Cmin the minimum value of the coefficient vector. The analysis showed that the variable Cmin correlates only with the activity of the image and not with its type or size. Parameterizing this relationship led to the stopping criterion for the MLEM algorithm. Another approach, based on the likelihood properties of the MLEM algorithm, led to a different stopping criterion for MLEM. The two criteria were evaluated using the Digimouse phantom and were found to produce similar images. The same study was carried out for the OSEM algorithm, and a stopping criterion was developed for different numbers of subsets.
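A compact sketch of the mechanism the thesis builds on: in MLEM every pixel is multiplied per iteration by its pixel updating coefficient (PUC), and a Cmin-style rule stops once the smallest PUC among non-zero pixels is close enough to 1.0. The tiny random system matrix, phantom, and the 0.999 threshold are illustrative choices, not the calibrated criterion developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(5)
a = rng.uniform(0.0, 1.0, size=(40, 16))     # toy transition matrix
x_true = np.zeros(16)
x_true[[2, 5, 6, 9]] = [4.0, 1.0, 3.0, 2.0]  # toy phantom
y = rng.poisson(a @ x_true)                  # noisy projection counts

x = np.ones(16)
sens = a.sum(axis=0)                         # per-pixel sensitivity
for k in range(1, 10_000):
    ratio = y / np.maximum(a @ x, 1e-12)
    puc = (a.T @ ratio) / sens               # pixel updating coefficients
    x *= puc                                 # MLEM multiplicative update
    c_min = puc[x > 1e-6].min()              # min PUC over non-zero pixels
    if c_min > 0.999:                        # Cmin-style stopping rule
        break
print(f"stopped at iteration {k}, Cmin = {c_min:.4f}")
```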
9

Visual multistability: influencing factors and analogies to auditory streaming

Wegner, Thomas 03 May 2023 (has links)
Sensory inputs can be ambiguous. A physically constant stimulus that induces several perceptual alternatives is called multistable. Many factors can influence perception. In this thesis I investigate factors that affect visual multistability. All presented studies use a pattern-component rivalry stimulus consisting of two gratings drifting in opposite directions (the plaid stimulus). This induces either an "integrated" percept of a moving plaid (the pattern) or a "segregated" percept of overlaid gratings (the components). One study (chapter 2) investigates how the parameters of the plaid stimulus shape perception, with particular emphasis on the first percept. Specifically, it addresses how the enclosed angle (opening angle) affects perception at stimulus onset and during prolonged viewing. The effects shown persist even if the stimulus is rotated. On a more abstract level, it is shown that percepts can influence each other over time (chapter 3), which emphasizes the importance of instructions and report mode: in particular, the decision of which percepts participants are instructed to report at all, which percepts can be reported as separate entities, and which are pooled into the same response option. A further abstract level (predictability of a stimulus change, chapter 5) shows that transferring effects from one modality to another (specifically from audition to vision) requires careful choice of stimulus parameters. In this context, we consider the proposal for wider use of sequential stopping rules (SSRs; chapter 4), especially in studies where effect sizes are hard to estimate a priori. This thesis contributes to the field of visual multistability by providing novel experimental insights into pattern-component rivalry and by linking these findings to data on sequential dependencies, to the optimization of experimental designs, and to models and results from another sensory modality.

Table of contents (chapter level): 1. Introduction; 2. Parameter dependence in visual pattern-component rivalry at onset and during prolonged viewing; 3. Perceptual history; 4. Sequential stopping rules; 5. Predictability in visual multistability; 6. General discussion; References.
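A generic sequential-stopping-rule loop of the kind weighed in chapter 4 can be sketched as follows: test after every batch of participants and stop early when the result is clearly significant or clearly futile, otherwise continue to a maximum sample size. The criterion values here are hypothetical placeholders; the variable criteria SSR (vcSSR) instead draws its criteria from simulations calibrated to hold the overall Type I error rate.

```python
import random
from scipy.stats import ttest_1samp

def sequential_experiment(effect=0.4, batch=8, n_max=64,
                          p_hit=0.01, p_futile=0.36, seed=11):
    rng = random.Random(seed)
    data = []
    while len(data) < n_max:
        data += [rng.gauss(effect, 1.0) for _ in range(batch)]  # next batch
        _, p = ttest_1samp(data, 0.0)        # interim one-sample t-test
        if p < p_hit:                        # stop: effect established
            return "effect", len(data), p
        if p > p_futile:                     # stop: continuing looks futile
            return "futile", len(data), p
    return "inconclusive", len(data), p

print(sequential_experiment())
```

The appeal noted in the thesis is that, with properly calibrated criteria, such rules reach a decision with fewer participants on average than a fixed-n design.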
