501 |
Choice set formulation for discrete choice models. Pitschke, Steven B. January 1980
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Civil Engineering, 1980. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING. / Bibliography: leaves 100-102. / by Steven B. Pitschke. / M.S.
|
502 |
Distributions of some random volumes and their connection to multivariate analysis. Jairu, Desiderio N. January 1987
No description available.
|
503 |
Decision Theory Classification of High-Dimensional Vectors Based on Small Samples. Bradshaw, David. 01 January 2005
In this paper, we review existing classification techniques and suggest an entirely new procedure for the classification of high-dimensional vectors on the basis of a few training samples. The proposed method is based on the Bayesian paradigm and provides posterior probabilities that a new vector belongs to each of the classes; it therefore adapts naturally to any number of classes. Our classification technique is based on a small vector which is related to the projection of the observation onto the space spanned by the training samples. This is achieved by employing matrix-variate distributions in classification, which is an entirely new idea. In addition, our method mimics time-tested classification techniques based on the assumption of normally distributed samples. By assuming that the samples have a matrix-variate normal distribution, we are able to replace classification on the basis of a large covariance matrix with classification on the basis of a smaller matrix that describes the relationship of the sample vectors to each other.
|
504 |
Möbius operators and non-additive quantum probabilities in the Birkhoff-von Neumann lattice. Vourdas, Apostolos. 08 December 2015
The properties of quantum probabilities are linked to the geometry of quantum mechanics, described by the Birkhoff-von Neumann lattice. Quantum probabilities violate the additivity property of Kolmogorov probabilities and are interpreted as Dempster-Shafer probabilities. Deviations from the additivity property are quantified with the Möbius (or non-additivity) operators, which are defined through Möbius transforms and are shown to be intimately related to commutators. The lack of distributivity in the Birkhoff-von Neumann lattice Λd causes deviations from the law of total probability (which is central in Kolmogorov's probability theory). Projectors that quantify the lack of distributivity in Λd, and also the deviations from the law of total probability, are introduced. All these operators are observables and can be measured experimentally. Constraints on the Möbius operators, based on the properties of the Birkhoff-von Neumann lattice (which in the case of finite quantum systems is a modular lattice), are derived. Applying this formalism in the context of coherent states generalizes coherence to multi-dimensional structures.
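Schematically (in standard lattice notation; the paper's precise conventions may differ), the lowest-order non-additivity operator for two subspaces H₁, H₂ with projectors Π(H₁), Π(H₂) has the Möbius (inclusion-exclusion) form:

```latex
\[
\mathfrak{D}(H_1,H_2)
  = \Pi(H_1 \vee H_2) - \Pi(H_1) - \Pi(H_2) + \Pi(H_1 \wedge H_2).
\]
```

For a state ρ with p(H) = Tr[ρΠ(H)], the expectation Tr[ρ𝔇(H₁,H₂)] then measures the failure of the Kolmogorov additivity relation p(H₁∨H₂) + p(H₁∧H₂) = p(H₁) + p(H₂); when Π(H₁) and Π(H₂) commute, the lattice operations reduce to products and sums of projectors and 𝔇 vanishes, which is the sense in which non-additivity is tied to commutators.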
|
505 |
An Analysis of Equally Weighted and Inverse Probability Weighted Observations in the Expanded Program on Immunization (EPI) Sampling Method. Reyes, Maria. 11 1900
Performing health surveys in developing countries and humanitarian emergencies can be challenging work because the resources in these settings are often quite limited and information needs to be gathered quickly. The Expanded Program on Immunization (EPI) sampling method provides one way of selecting subjects for a survey. It involves having field workers proceed on a random walk guided by a path of nearest household neighbours until they have met their quota for interviews. Due to its simplicity, the EPI sampling method has been utilized by many surveys. However, some concerns have been raised over the quality of estimates resulting from such samples because of possible selection bias inherent in the sampling procedure. We present an algorithm for obtaining the probability of selecting a household from a cluster under several variations of the EPI sampling plan. These probabilities are used to assess the sampling plans and compute estimator properties. In addition to the typical estimator for a proportion, we also investigate the Horvitz-Thompson (HT) estimator, an estimator that assigns weights to individual responses. We conduct our study on computer-generated populations having different settlement types, different prevalence rates for the characteristic of interest and different spatial distributions of the characteristic of interest. Our results indicate that within a cluster, selection probabilities can vary widely from household to household. The largest probability was over 10 times greater than the smallest probability in 78% of the scenarios that were tested. Despite this, the properties of the estimator with equally weighted observations (EQW) were similar to what would be expected from simple random sampling (SRS) given that cases of the characteristic of interest were evenly distributed throughout the cluster area. When this was not true, we found absolute biases as large as 0.20.
While the HT estimator was always unbiased, the trade-off was a substantial increase in the variability of the estimator, with the design effect relative to SRS reaching a high of 92. Overall, the HT estimator did not perform better than the EQW estimator under EPI sampling, and it involves calculations that may be difficult to perform in actual surveys. Although we recommend continuing to use the EQW estimator, caution should be taken when cases of the characteristic of interest are potentially concentrated in certain regions of the cluster. In these situations, alternative sampling methods should be sought. / Thesis / Master of Science (MSc)
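The contrast between the two estimators discussed above can be sketched on synthetic data; the population, the selection probabilities, and the simple probability-proportional draw standing in for the EPI random walk are all made-up assumptions, not the thesis's simulation design:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cluster of 200 households; cases of the characteristic
# are three times as likely to be selected as non-cases, mimicking
# unequal selection probabilities from a walk-based procedure.
N = 200
y = rng.binomial(1, 0.3, N)              # characteristic of interest
pi = np.where(y == 1, 0.15, 0.05)        # per-household selection prob.

def one_survey(n=30):
    """One simulated survey: draw n households with probability
    proportional to pi, then form both estimators."""
    idx = rng.choice(N, size=n, replace=False, p=pi / pi.sum())
    eqw = y[idx].mean()                          # equally weighted
    w = 1.0 / pi[idx]                            # inverse-probability
    return eqw, np.sum(w * y[idx]) / np.sum(w)   # HT (Hajek ratio form)

est = np.array([one_survey() for _ in range(2000)])
print("true:", y.mean(),
      "EQW:", est[:, 0].mean(), "HT:", est[:, 1].mean())
```

Because cases are oversampled, the equally weighted estimator overstates prevalence, while the inverse-probability weights pull the estimate back toward the true proportion, at the cost of higher variance across simulated surveys.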
|
506 |
Characterization of isomeric states in neutron-rich nuclei approaching N = 28. Ogunbeku, Timilehin Hezekiah. 08 December 2023
The investigation of isomeric states in neutron-rich nuclei provides useful insights into the underlying nuclear configurations, and understanding their occurrence along an isotopic chain can inform about shell evolution. Recent studies on neutron-rich Si isotopes near the magic number N = 20 and approaching N = 28 have revealed the presence of low-lying states with intruder configurations, resulting from multiple-particle, multiple-hole excitations across closed shell gaps. The characterization of these states involves measuring their half-lives and transition probabilities.
In this study, a new low-energy (7/2−1) isomer at 68 keV in 37Si was accessed via beta decay and characterized. To achieve this, radioactive 37Al and 38Al ions were produced through the projectile fragmentation reaction of a 48Ca beam and implanted into a CeBr3 detector, leading to the population of states in 37Si. The 68-keV isomer was directly populated in the beta-delayed one neutron emission decay of implanted 38Al ions. Ancillary detector arrays comprising HPGe and LaBr3(Ce) detectors were employed for the detection of beta-delayed gamma rays. The choice of detectors was driven by their excellent energy and timing resolutions, respectively.
The beta-gamma timing method was utilized to measure the half-life of the new isomeric state in 37Si. This dissertation also discusses other timing techniques employed to search for and characterize isomeric states following beta decay of implanted ions. Notably, the half-life of the newly observed (7/2−1) isomeric state in 37Si was measured to be 9.1(7) ns. The half-life of the previously observed closely-lying (3/2−1) state at 156 keV was determined to be 3.20(4) ns, consistent with previously reported values. Reduced ground-state transition probabilities associated with the gamma-ray decay from these excited states were in agreement with results obtained from shell model calculations.
In addition to the investigation of isomeric states in 37Si, isomeric 0+ states in 34Si and 32Mg nuclei belonging to the N = 20 “island of inversion” were characterized and searched for, respectively. The isomeric 0+ state in 34Si was populated following the beta decay of implanted 34Mg ions and its 34Al daughter nucleus. Similarly, the 0+ state in 32Mg was searched for via the beta-delayed one neutron emission decay of implanted 33Na ions.
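As an illustrative aside (not the beta-gamma timing analysis itself, which must also account for the detector time response), the maximum-likelihood half-life estimate from a set of measured decay times can be sketched with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical decay times (ns) for an isomer with true half-life
# 9.1 ns, drawn from the corresponding exponential distribution.
t_half_true = 9.1
tau = t_half_true / np.log(2)            # mean lifetime from half-life
times = rng.exponential(tau, 5000)

# Maximum-likelihood estimate: tau_hat is simply the sample mean
t_half_hat = np.log(2) * times.mean()
err = np.log(2) * times.std(ddof=1) / np.sqrt(len(times))
print(f"T1/2 = {t_half_hat:.2f} +/- {err:.2f} ns")
```

The uncertainty shrinks as 1/√N with the number of recorded decays, which is why high-statistics implantation-decay data sets are needed to quote half-lives at the sub-nanosecond precision reported above.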
|
507 |
"It's Not Probabilities, It's Possibilities": Lay Views of Disclosure Regarding Emerging Health Issues. Moreau, Geneviève. 08 1900
Products and technologies provide us with significant lifestyle benefits but they can
also evolve into hazards and bring about concern for human health. A history of poor
regulatory performances has resulted in a public displeased with and skeptical of the
actors responsible for protecting the public against the unintended effects of progress.
It is within this historical and social context that the study explores the following
objectives: to understand people's responses to emerging health issues, about which
there is considerable knowledge uncertainty and little public awareness; to identify the
information needs regarding these issues; and to explore the role of government
disclosure in personal decision-making around these issues. Seven focus groups
were conducted in Hamilton, Ontario with community members from a range of
backgrounds: youth, faith, allophone immigrants, environmental, health, recreational,
and mixed. Two scenarios about potential hazards, i.e. a persistent pollutant and
extreme heatwaves from climate change, were used to generate discussion about
people's experiences with risk and knowledge. Results indicate that emerging health
issues are framed by lay individuals as a chronic societal phenomenon. Their
concerns about health and well-being, resiliency, and issue comprehension point to an
overarching preoccupation about social vulnerability, irrespective of the presence of
confirmed hazards. The analysis further revealed several roles for disclosure that
would allow for greater capacity in personal decision-making and more transparent and
accountable regulatory processes, and that could lead to more trustworthy relations
between citizens and government. / Thesis / Master of Arts (MA)
|
508 |
Experimental Knowledge in Cognitive Neuroscience: Evidence, Errors, and Inference. Aktunc, Mahir Emrah. 06 September 2011
This is a work in the epistemology of functional neuroimaging (fNI); it applies the error-statistical (ES) philosophy to formulate and address inferential problems in fNI. This gives us a clearer, more accurate, and more complete understanding of what we can learn from fNI and how we can learn it.
I review the works in the epistemology of fNI, which I group into two categories: the first consists of discussions of the theoretical significance of fNI findings, and the second discusses the methodological difficulties of fNI. Both types of works have shortcomings: the first category has been too theory-centered in its approach, and the second has implicitly or explicitly adopted the assumption that the methodological difficulties of fNI cannot be satisfactorily addressed. In this dissertation, I address these shortcomings and show how, and what kind of, experimental knowledge fNI can reliably produce that would be theoretically significant.
I take fMRI as a representative fNI procedure and discuss the history of its development. Two independent trajectories of research, in physics and in physiology, eventually converged to give rise to fMRI. Thus, fMRI findings are laden with the theories of physics and physiology, and I propose that this creates a kind of useful theory-ladenness which allows for the representation of and intervention in the constructs of cognitive neuroscience.
Duhemian challenges and problems of underdetermination are often raised to argue that fNI is of little, if any, epistemic value for psychology. I show how the ES notions of severe tests and error probabilities can be applied in epistemological analyses of fMRI. The result is that hemodynamic hypotheses can be severely tested in fMRI experiments and I demonstrate how these hypotheses are theoretically significant and fuel the growth of experimental knowledge in cognitive neuroscience.
Throughout this dissertation, I put the emphasis on the experimental knowledge we obtain from fNI and argue that this is the fruitful approach that enables us to see how fNI can contribute to psychology. In doing so, I offer an error-statistical epistemology of fNI, which hopefully will be a significant contribution to the philosophy of psychology. / Ph. D.
|
509 |
Joint probability distribution of rainfall intensity and duration. Patron, Glenda G. 23 June 2009
Intensity-duration-frequency (IDF) curves are widely used for peak discharge estimation in designing hydraulic structures. The traditional Gumbel probability method entails selecting annual maximum rainfall depths (intensities) conditioned on a fixed time window width (which in general will not coincide with the rainfall event duration) from a continuous record and performing a frequency analysis in terms of the marginal distribution. The digitized database contains annual maximum intensities for selected discrete durations. This method presents problems when intensities are required for arbitrary durations that are not among the selected durations: accurate interpolated, and especially extrapolated, intensity values are hard to obtain. The present study offers two methods, both involving a joint probability approach, to overcome the deficiencies inherent in the traditional method of IDF analysis. The first joint probability approach employs Box-Cox and modulus transformations to transform the original data to near bivariate normality. The second method does not require such a transformation; instead, it uses the closed-form bivariate Burr III cumulative distribution to fit the data. Another advantage of the joint probability approach is that it allows one to gauge the rarity of certain extreme events, such as probable maximum precipitation, in terms of the joint occurrence of extremely high intensity and sufficiently long duration (e.g., 24 hours). The joint probability approach is applied to three data sets. The resulting conditional probability intensity estimates are quite close to those obtained by traditional Gumbel IDF analysis. In addition, reliable interpolated and extrapolated intensities are available because the approach essentially fits a flexible surface to the discrete data, with the capability of providing a complete probabilistic structure. / Master of Science
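A simplified sketch of the first approach, with synthetic data in place of rainfall records (the modulus transformation and the Burr III alternative are omitted): Box-Cox-transform each margin toward normality, then fit a bivariate normal to the transformed pairs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical positively skewed rainfall data: event durations (h)
# and mean intensities (mm/h).
dur = rng.gamma(2.0, 3.0, 1000)
inten = rng.lognormal(1.0, 0.6, 1000)

# Box-Cox transform each margin toward normality (lambda fit by MLE)
dur_t, lam_d = stats.boxcox(dur)
int_t, lam_i = stats.boxcox(inten)

# Fit a bivariate normal to the transformed pairs; joint and
# conditional probabilities then follow from this closed-form model.
mu = np.array([dur_t.mean(), int_t.mean()])
cov = np.cov(dur_t, int_t)
print("lambdas:", round(lam_d, 2), round(lam_i, 2))
print("skewness after transform:",
      round(float(stats.skew(dur_t)), 2),
      round(float(stats.skew(int_t)), 2))
```

Once the bivariate normal is in hand, conditional intensity distributions for an arbitrary duration come directly from the standard conditional-normal formulas, which is what makes interpolation and extrapolation across durations tractable.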
|
510 |
Data-dependent Regret Bounds for Adversarial Multi-Armed Bandits and Online Portfolio Selection. Putta, Sudeep Raja. January 2024
This dissertation studies data-dependent regret bounds for two online learning problems. As opposed to worst-case regret bounds, data-dependent bounds are able to adapt to the particular sequence of losses seen by the player. Thus, they offer a more fine-grained performance guarantee than worst-case bounds.
We start off with the Adversarial 𝑛-Armed Bandit problem. In the prior literature it was standard practice to assume that the loss vectors belonged to a known domain, typically [0,1]ⁿ or [-1,1]ⁿ. We make no such assumption on the loss vectors; they may be completely arbitrary. We term this problem the Scale-Free Adversarial Multi-Armed Bandit. At the beginning of the game, the player only knows the number of arms 𝑛. It knows neither the scale and magnitude of the losses chosen by the adversary nor the number of rounds 𝑇. In each round, it sees bandit feedback about the loss vectors 𝑙₁, . . . , 𝑙_𝑇 ⋲ ℝⁿ. Our goal is to bound its regret as a function of 𝑛 and the norms of 𝑙₁, . . . , 𝑙_𝑇. We design a bandit Follow The Regularized Leader (FTRL) algorithm that uses a log-barrier regularizer along with an adaptive learning rate tuned via the AdaFTRL technique. We give two different regret bounds, based on the exploration parameter used. With non-adaptive exploration, our algorithm has a regret of 𝑂̃(√(𝑛𝐿₂) + 𝐿_∞√(𝑛𝑇)) and with adaptive exploration, it has a regret of 𝑂(√(𝑛𝐿₂) + 𝐿_∞√(𝑛𝐿₁)). Here 𝐿_∞ = sup_𝑡 ∥𝑙_𝑡∥_∞, 𝐿₂ = Σ_{𝑡=1}^𝑇 ∥𝑙_𝑡∥₂², 𝐿₁ = Σ_{𝑡=1}^𝑇 ∥𝑙_𝑡∥₁, and the 𝑂̃ notation suppresses logarithmic factors. These are the first MAB bounds that adapt to the ∥・∥₂ and ∥・∥₁ norms of the losses. The second bound is also the first data-dependent scale-free MAB bound, as 𝑇 does not appear in it directly. We also develop a new technique for obtaining a rich class of local-norm lower bounds for Bregman divergences. This technique plays a crucial role in our analysis, controlling the regret when using importance-weighted estimators of unbounded losses.
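The importance-weighted loss estimator at the heart of such bandit analyses can be sketched in a few lines; the probabilities and losses below are made up, and this is a check of the estimator's unbiasedness, not the dissertation's FTRL algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)
n, T = 4, 200_000
p = np.array([0.4, 0.3, 0.2, 0.1])      # player's sampling distribution
loss = np.array([2.5, -1.0, 7.0, 0.3])  # arbitrary, unbounded-scale losses

# In each round only the played arm's loss is observed; the estimator
# puts loss[a] / p[a] on arm a and zero on every other arm.
arms = rng.choice(n, size=T, p=p)
counts = np.bincount(arms, minlength=n)
avg_estimate = (counts / T) * (loss / p)  # average of the IW estimates

print(np.round(avg_estimate, 3))  # close to the true loss vector
```

Note how the variance of the arm-3 estimate scales with loss[3]² / p[3]: with unbounded losses these estimators can blow up, which is exactly why the local-norm lower bounds for Bregman divergences are needed to keep the regret under control.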
Next, we consider the Online Portfolio Selection (OPS) problem over 𝑛 assets and 𝑇 time periods. This problem was first studied by Cover [1], who proposed the Universal Portfolio (UP) algorithm. UP is a computationally expensive algorithm with minimax optimal regret of 𝑂(𝑛 log 𝑇). There has been renewed interest in OPS due to a recently posed open problem of Van Erven 𝑒𝑡 𝑎𝑙. [2], which asks for a computationally efficient algorithm that also has minimax optimal regret. We study data-dependent regret bounds for the OPS problem that adapt to the sequence of returns seen by the investor. Our proposed algorithm, called AdaCurv ONS, modifies the Online Newton Step (ONS) algorithm of [3] using a new adaptive curvature surrogate function for the log losses −log(𝑟_𝑡ᵀ𝑤). We show that the AdaCurv ONS algorithm has 𝑂(𝑅𝑛 log 𝑇) regret, where 𝑅 is a data-dependent quantity. For sequences where 𝑅 = 𝑂(1), the regret of AdaCurv ONS matches the optimal regret. However, for some sequences 𝑅 could be unbounded, making the regret bound vacuous. To overcome this issue, we propose the LB-AdaCurv ONS algorithm, which adds a log-barrier regularizer along with an adaptive learning rate tuned via the AdaFTRL technique. LB-AdaCurv ONS has an adaptive regret of the form 𝑂(min(𝑅𝑛 log 𝑇, √𝑛𝑇 log 𝑇)). Thus, LB-AdaCurv ONS has a worst-case regret of 𝑂(√𝑛𝑇 log 𝑇) while also having a data-dependent regret of 𝑂(𝑛𝑅 log 𝑇) when 𝑅 = 𝑂(1). Additionally, we show logarithmic first-order and second-order regret bounds for AdaCurv ONS and LB-AdaCurv ONS.
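As a rough illustration of the ONS template that AdaCurv ONS builds on, here is a minimal sketch; the returns are synthetic, and for brevity the generalized (A_t-norm) projection of the full algorithm is replaced by a plain Euclidean projection onto the simplex, so this is not the dissertation's algorithm:

```python
import numpy as np

def proj_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    j = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - css / j > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def ons_portfolio(returns, beta=1.0, eps=1.0):
    """Online Newton Step on the log losses -log(r_t^T w). Proper ONS
    projects in the A_t-norm; the Euclidean projection used here is a
    simplification."""
    T, n = returns.shape
    w = np.full(n, 1.0 / n)
    A = eps * np.eye(n)
    wealth = 1.0
    for r in returns:
        wealth *= float(r @ w)           # realize this period's return
        g = -r / (r @ w)                 # gradient of -log(r^T w) at w
        A += np.outer(g, g)              # accumulate curvature
        w = proj_simplex(w - (1.0 / beta) * np.linalg.solve(A, g))
    return wealth

# Hypothetical price relatives for 3 assets over 300 periods
rng = np.random.default_rng(5)
rets = 1.0 + 0.02 * rng.standard_normal((300, 3))
print("final wealth:", ons_portfolio(rets))
```

The accumulated outer-product matrix A plays the role of a local curvature estimate for the log loss; the curvature surrogate of AdaCurv ONS replaces this fixed construction with an adaptive one.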
Finally, we consider the problem of Online Portfolio Selection with predicted returns. We are the first to extend the paradigm of online learning with predictions to the portfolio selection problem. In this setting, the investor has access to noisy predictions of the returns of the 𝑛 assets that can be incorporated into the portfolio selection process. We propose the Optimistic Expected Utility LB-FTRL (OUE-LB-FTRL) algorithm, which incorporates the predictions into the LB-FTRL algorithm via a utility function. We explore the consistency-robustness properties of our algorithm. If the predictions are accurate, OUE-LB-FTRL's regret is 𝑂(𝑛 log 𝑇), providing a consistency guarantee. Even if the predictions are arbitrary, OUE-LB-FTRL's regret is always bounded by 𝑂(√𝑛𝑇 log 𝑇), providing a robustness guarantee. Our algorithm also recovers a gradual-variation regret bound for OPS. In the presence of predictions, we argue that the benchmark of static regret becomes less meaningful, so we consider the regret with respect to an investor who only uses predictions to select their portfolio (i.e., an expected utility investor). We provide a meta-algorithm called Best-of-Both-Worlds for OPS (BoB-OPS), which combines the portfolios of an expected utility investor and a purely regret-minimizing investor using a higher-level portfolio selection algorithm. By instantiating the meta-algorithm and the purely regret-minimizing investor with Cover's Universal Portfolio, we show that the regret of BoB-OPS with respect to the expected utility investor is 𝑂(log 𝑇). Simultaneously, BoB-OPS's static regret is 𝑂(𝑛 log 𝑇). This achieves a stronger form of consistency-robustness guarantee for OPS with predicted returns.
|