About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
381

The Impact of Stockouts on Customer Loyalty to Lean Retailers

Turk, Jeffery I. 01 January 2011 (has links)
The lean inventory concept has been shown to streamline operations and improve efficiency in a retail environment. The negative side of the practice is that limited inventories increase the risk of stockouts, in which a routinely available product is missing and the retailer is unable to meet customer demand. The purpose of this exploratory case study was to examine stockouts as an event and to document their effects on customer attitudes and behaviors. Guided by a constructivist conceptual framework, the research question explored how stockout experiences affected customers' purchasing behaviors and loyalty to brand and retailer. A survey containing both open-ended and categorical response elements was validated through a pilot study and used to collect data from 40 randomly selected participants shopping at a retail mall in eastern Pennsylvania. Qualitative data were coded in 3 sequential stages of open, axial, and selective coding into a priori themes. Categorical responses were employed in downward analyses that revealed patterns in the qualitative data. The results indicated that repeated stockout experiences decreased customers' loyalty and caused customers to abandon both retailers and brands. Respondents indicated that stockout impacts can be buffered through improved inventory management and better customer service; specific recommendations included monetary incentives, personal contacts, coupons, and item discounts. The results of this study will enable retailers to gain a deeper understanding of how stockouts affect customers' shopping experiences and loyalty, and offer mitigation measures to improve both. The results point to a positive change for both consumers and retailers: shoppers will enjoy more pleasant shopping experiences, and retailers will maintain their competitive advantage through the loyalty of their customer base.
382

Apprentissage statistique pour séquences d’évènements à l’aide de processus ponctuels / Learning from Sequences with Point Processes

Achab, Massil 09 October 2017 (has links)
The guiding principle of this thesis is to show how the arsenal of recent optimization methods can help solve challenging estimation problems on event models. While the classical framework of supervised learning treats observations as a collection of independent feature-label pairs, event models focus only on arrival timestamps and seek to extract information about the data source from them. These timestamped events are chronologically ordered and therefore cannot be regarded as independent. This simple fact motivates the use of a particular mathematical object called a point process to learn structure from events. Two examples of point processes are treated in this thesis. The first is the point process behind the Cox proportional hazards model: its conditional intensity function defines the hazard ratio, a fundamental quantity in the survival analysis literature. The Cox regression model relates the duration before an event, called a failure, to the covariates of an individual, and this model can be reformulated in the framework of point processes. The second is the Hawkes process, which models how past events increase the probability of future events; its multivariate version encodes a notion of causality between the different dimensions considered. The thesis is divided into three parts. The first focuses on a new optimization algorithm we developed to estimate the parameter vector of the Cox regression in the large-scale setting. Our algorithm is based on the stochastic variance reduced gradient (SVRG) algorithm and uses Markov chain Monte Carlo (MCMC) to approximate one costly term in the descent direction. We proved convergence rates for the algorithm and showed its numerical performance on both simulated and real-world datasets. The second part shows how Hawkes causality can be retrieved nonparametrically from the integrated cumulants of the multivariate point process. We designed two methods to estimate the integrals of the Hawkes kernels without any assumption on the shape of the kernel functions. Our methods are faster and more robust to the shape of the kernels than the state of the art. We proved the statistical consistency of the first method and showed that the second can be cast as a convex optimization problem. The last part provides new insights into order book dynamics using the first nonparametric estimation method developed in the second part. We used data from the EUREX exchange, designed new order book models (based on previous works of Bacry et al.), and ran the estimation method on these point processes. The results are very insightful and consistent with an econometric analysis. This work is a proof of concept that our estimation method can extract structure from data as complex as high-frequency financial data.
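For reference, the two point processes named in this abstract have the following standard textbook definitions (generic notation, which may differ from the notation used in the thesis). The Cox proportional hazards model specifies the conditional intensity of the failure process for an individual with covariate vector x as

\lambda(t \mid x) = \lambda_0(t)\,\exp(x^\top \beta),

where \lambda_0 is a baseline hazard and \beta the parameter vector to be estimated. A d-dimensional Hawkes process has, for each node i, the intensity

\lambda_i(t) = \mu_i + \sum_{j=1}^{d} \int_0^t \phi_{ij}(t - s)\, dN_j(s),

where \mu_i is a baseline rate and the kernel \phi_{ij} encodes how past events of node j excite node i; the integrated kernels \int_0^\infty \phi_{ij}(t)\, dt are the quantities estimated nonparametrically in the second part of the thesis.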
383

Campus Climate and Non-Faculty Employees with Disabilities: A Quantitative Analysis of Perceptions

Heider, Mark Alan 05 May 2023 (has links)
No description available.
384

IRT in SPSS: The development of a new software tool to conduct item response models

DiTrapani, John B. 29 September 2016 (has links)
No description available.
385

Sheet Forming and Forging of Zn-Al Alloys

Porster, Allam James 06 1900 (has links)
A brief introduction to superplasticity is presented.

The forming of superplastic sheet into a rectangular trough is examined and the thickness distribution theoretically determined. Figures, dependent on the height to width ratio of the trough, are presented of the thickness variation against a suitable geometric parameter.

Experiments have been performed on the forming of superplastic Zn-Al into a flat bottomed cylindrical cavity. A semi-empirical analysis based on the theoretical work of Cornfield and Johnson is presented and the theoretical and experimental thickness distributions compared.

The closed die forging of superplastic and conventional Zn-Al eutectoid alloys is examined. The results of experiments are presented and a two phase forging cycle, suitable for rate dependent materials, is presented. / Master of Engineering (ME)
386

Planar maximal covering location problem under different block norms

Younies, Hassan 06 1900 (has links)
This study introduces a new model for the planar maximal covering location problem (PMCLP) under different block norms. The problem involves locating p facilities anywhere on the plane in order to cover the maximum number of n given demand points. The generalization we introduce is that the distance measures assigned to facilities are block norms of different types, with different proximity measures. The problem is handled in three phases. First, a simple model based on the geometric properties of the block norms' unit ball contours is formulated as a mixed integer program (MIP). The MIP formulation is more general than previous PMCLP formulations and can handle facilities with different coverage measures under block norm distances, as well as different setup costs and capacities. Second, an exact solution approach is presented, based on (1) an exact algorithm that handles a single facility efficiently, and (2) an algorithm for an equivalent graph problem, the maximum clique problem (MCP). Finally, the PMCLP under different block norms is formulated as an equivalent graph problem. This graph problem is then modeled as an unconstrained binary quadratic problem (UQP) and solved by a genetic algorithm. Computational examples are provided for the MIP, the exact algorithm, and the genetic algorithm approaches. / Doctor of Philosophy (PhD)
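As background for the MIP described above, the classical (single-norm) maximal covering location problem can be written as the following integer program; this is the standard Church-and-ReVelle form, not the block-norm generalization developed in the thesis, in which the coverage set N_i would depend on the block norm and coverage measure assigned to each facility:

\max \sum_{i=1}^{n} w_i y_i
\quad \text{s.t.} \quad y_i \le \sum_{j \in N_i} x_j \;\; (i = 1, \dots, n), \quad \sum_{j} x_j = p, \quad x_j, y_i \in \{0, 1\},

where N_i = \{\, j : d(i, j) \le r \,\} is the set of candidate sites that cover demand point i, w_i is the demand weight, x_j indicates whether a facility is opened at site j, and y_i indicates whether demand point i is covered.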
387

Interactive computer graphical approaches to some maximin and minimax location problems

Buchanan, John David 03 1900 (has links)
This study describes algorithms for the solution of several single facility location problems with maximin or minimax objective functions. Interactive computer graphical algorithms are presented for maximizing the minimum rectilinear travel distance and for minimizing the maximum rectilinear travel distance to a number of point demands when there exist several right-angled polygonal barriers to travel. For the special case of unweighted rectilinear distances with barriers, a purely numerical algorithm for the maximin location problem is described. An interactive computer graphical algorithm for maximizing the minimum Euclidean, rectilinear, or general ℓp distance to a number of polygonal areas is described. A modified version of this algorithm for location problems with the objective of minimizing the maximum cost when the costs are non-linear monotonically decreasing functions of distance is presented. Extension of this algorithm to problems involving the minimization of the maximum cost when the costs are functions of both distance and direction is discussed using asymmetric distances. / Doctor of Philosophy (PhD)
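In generic form, the two objectives treated throughout this work are, for a new facility location x, demand points a_1, ..., a_n with weights w_i, and a distance d (rectilinear, Euclidean, or ℓp; with barriers, d is the shortest barrier-avoiding travel distance):

\text{minimax:} \quad \min_{x} \max_{1 \le i \le n} w_i\, d(x, a_i)
\qquad \text{maximin:} \quad \max_{x} \min_{1 \le i \le n} w_i\, d(x, a_i)

The minimax form places a desirable facility as close as possible to its worst-served demand, while the maximin form pushes an undesirable facility as far as possible from its nearest demand; this is the standard statement of these problems rather than the exact notation of the thesis.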
388

Properties of distance functions and minisum location models

Brimberg, Jack 03 1900 (has links)
This study is divided into two main parts. The first section deals with mathematical properties of distance functions. The ℓp norm is analyzed as a function of its parameter p, leading to useful insights for fitting this distance measure to a transportation network. Properties of round norms are derived, which allow us later to generalize some well-known results. The properties of a norm raised to a power are also investigated, and these prove useful in our subsequent analysis of location problems with economies or diseconomies of scale. A positive linear combination of the Euclidean and the rectangular distance measures, which we term the weighted one-two norm, is introduced. This distance function provides a linear regression model with interesting implications for the characterization of transportation networks. A directional bias function is defined and examined in detail for the ℓp and weighted one-two norms. In the second part of this study, several properties are derived for various forms of the continuous minisum location model. The Weiszfeld iterative solution procedure for the standard Weber problem with ℓp distances is also examined, and global and local convergence results are obtained. These results are extended to the mixed-norm problem. In addition, optimality criteria are derived at non-differentiable points of the objective function. / Doctor of Philosophy (PhD)
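For context on the Weiszfeld procedure mentioned above, a minimal sketch of the classical iteration for the weighted Euclidean (ℓ2) Weber problem follows. It is the textbook algorithm rather than the generalized ℓp or mixed-norm variants analyzed in the thesis, and the function and variable names are illustrative only.

import numpy as np

def weiszfeld(points, weights, tol=1e-8, max_iter=1000):
    # Weighted Euclidean minisum (Weber) problem: find x minimizing
    # sum_i w_i * ||x - a_i||_2 via the classical Weiszfeld iteration.
    pts = np.asarray(points, dtype=float)
    w = np.asarray(weights, dtype=float)
    x = pts.mean(axis=0)                      # simple starting point
    for _ in range(max_iter):
        d = np.linalg.norm(pts - x, axis=1)
        if np.any(d < 1e-12):                 # iterate coincides with a demand point
            return pts[np.argmin(d)]
        coef = w / d
        x_new = (coef[:, None] * pts).sum(axis=0) / coef.sum()
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example with three unit-weight demand points
print(weiszfeld([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]], [1.0, 1.0, 1.0]))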
389

Integrative analysis of Transcriptome-wide and Proteome-wide association study for non-Mendelian disorders

Sudhanshu Shekhar (18430305) 25 April 2024 (has links)
Genome-wide association studies (GWAS) have uncovered numerous variants linked to a wide range of complex traits. However, understanding the mechanisms underlying these associations remains a challenge. To determine genetically regulated mechanisms, additional layers of gene regulation, such as the transcriptome and proteome, need to be assayed. Transcriptome-wide association studies (TWAS) and Proteome-wide association studies (PWAS) offer a gene-centered approach to illuminate these mechanisms by examining how variants influence transcript expression and protein expression, thereby inferring their impact on complex traits. In the introductory chapter of this dissertation, I discuss the methodology of TWAS and PWAS, exploring the assumptions they make in estimating SNP-gene effect sizes, their applications, and their limitations. In Chapter 2, I undertake an integrative analysis of TWAS and PWAS using the largest cohort of individuals affected with Tourette’s Syndrome within the Psychiatric Genomics Consortium (PGC) – Tourette’s Syndrome working group. I identified genomic regions containing multiple TWAS and PWAS signals and integrated these results using a computational colocalization method to gain insights into genetically regulated genes implicated in the disorder. In Chapter 3, I conduct an extensive TWAS of the Myasthenia Gravis phenotype, uncovering novel genes associated with the disorder. Utilizing two distinct methodologies, I performed individual tissue-based and cross-tissue-based imputation to assess the genetic influence on transcript expression. A secondary TWAS analysis was conducted after removing SNPs from the major histocompatibility complex (MHC) region to identify significant genes outside this region. Finally, in Chapter 4, I present the conclusions drawn from both studies, offering a comprehensive understanding of the genetic architecture underlying these traits. I also discuss future directions aimed at advancing the mechanistic understanding of complex non-Mendelian disorders.
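As a pointer to the underlying statistic, the widely used summary-statistic form of the TWAS association test (the standard FUSION-style construction, given here as generic background and not necessarily the exact software or weights used in the dissertation) combines per-SNP GWAS z-scores z with the trained expression weights w of a gene as

z_{TWAS} = \frac{w^\top z}{\sqrt{w^\top V w}},

where V is the SNP-SNP correlation (LD) matrix from a reference panel; the same construction applies to PWAS with protein-expression weights. Significant z_{TWAS} values identify genes whose genetically regulated expression is associated with the trait.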
390

Quantitative Anisotropy Imaging based on Spectral Interferometry

Li, Chengshuai 01 February 2019 (has links)
Spectral interferometry, also known as spectral-domain white light or low coherence interferometry, has seen numerous applications in sensing and metrology of physical parameters. It can provide the phase or optical path information of interest in single-shot measurements with exquisite sensitivity and large dynamic range. As fast spectrometers became more widely available in the 21st century, spectral interferometric techniques began to dominate over time-domain interferometry, thanks to their speed and sensitivity advantages. In this work, a dual-modality phase/birefringence imaging system is proposed to offer a quantitative approach to characterizing the phase, polarization, and spectroscopic properties of a variety of samples. An interferometric spectral multiplexing method is first introduced by generating polarization mixing with a specially aligned polarizer and birefringence crystal. The retardation and orientation of sample birefringence can then be measured simultaneously from a single interference spectrum. Furthermore, with the addition of a Nomarski prism, the same setup can be used for quantitative differential interference contrast (DIC) imaging. The highly integrated system demonstrates its capability for noninvasive, label-free, highly sensitive birefringence, DIC, and phase imaging of anisotropic materials and biological specimens, where multiple intrinsic contrasts are desired. Besides using different intrinsic contrast regimes to quantitatively measure different biological samples, the spectral multiplexing interferometry technique also finds an exquisite match in imaging single anisotropic nanoparticles, even when their size is well below the diffraction limit. Quantitative birefringence spectroscopy measurements of gold nanorod particles on a glass substrate demonstrate that the proposed system can simultaneously determine the polarizability-induced birefringence orientation, as well as the scattering intensity and the phase differences between the major and minor axes of single nanoparticles. With the anisotropic nanoparticles' spectroscopic polarizability defined prior to the measurement by calculation or simulation, the system can further be used to reveal the size, aspect ratio, and orientation of the detected anisotropic nanoparticle. Alongside the development of optical anisotropy imaging systems, the other part of this research describes our effort to investigate the sensitivity limit of general spectral-interferometry-based systems. A complete, realistic multi-parameter interference model is proposed, corrupted by a combination of shot noise, dark noise, and readout noise. With these multiple noise sources in the detected spectrum following different statistical behaviors, Cramer-Rao bounds are derived for multiple unknown parameters, including the optical pathlength, the system-specific initial phase, the spectrum intensity, and the fringe visibility. The significance of this work is to establish criteria to evaluate whether an interferometry-based optical measurement system has been optimized to the full potential of its hardware. An algorithm based on maximum likelihood estimation is also developed to achieve absolute optical pathlength demodulation with high sensitivity. In particular, it attains the Cramer-Rao bound and offers noise resistance that can potentially suppress the occurrence of demodulation jumps.
Through simulations and experimental validations, the proposed algorithm demonstrates its capability of achieving the Cramer-Rao bound over a large dynamic range of optical pathlengths, initial phases, and signal-to-noise ratios. / PHD / Optical imaging is unique for its ability to use light to provide both structural and functional information from microscopic to macroscopic scales. In microscopy, how to create contrast for better visualization of the detected objects is one of the most important topics. In this work, we aim at developing a noninvasive, label-free, and quantitative imaging technique based on multiple intrinsic contrast regimes, such as intensity, phase, and birefringence. A spectral multiplexing interferometry method is first introduced by generating spectral interference with polarization mixing. Multiple parameters can thus be demodulated from a single-shot interference spectrum. With Jones matrix analysis, the retardation and orientation of sample birefringence can be measured simultaneously. A dual-modality phase/birefringence imaging system is proposed to offer a quantitative approach to characterizing the phase, polarization, and spectroscopic properties of a variety of samples. The highly integrated system can not only deliver label-free, highly sensitive birefringence, DIC, and phase imaging of anisotropic materials and biological specimens, but also reveal the size, aspect ratio, and orientation of anisotropic nanoparticles whose size is well below the diffraction limit. Alongside developing optical imaging systems based on spectral interferometry, the other part of this research describes our effort to investigate the sensitivity limit of general spectral-interferometry-based systems. The significance of this work is using Cramer-Rao bounds to establish criteria to evaluate whether an optical measurement system has been optimized to the full potential of its hardware. An algorithm based on maximum likelihood estimation is also developed to achieve absolute optical pathlength demodulation with high sensitivity. In particular, it attains the Cramer-Rao bound and offers noise resistance that can potentially suppress the occurrence of demodulation jumps.
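For orientation, a generic single-interface spectral-domain interference signal consistent with the description above can be written as

I(k) = S(k)\,\bigl[ 1 + V \cos(k\,\Delta + \phi_0) \bigr],

where k is the wavenumber sampled by the spectrometer, S(k) the source spectral envelope, V the fringe visibility, \Delta the optical pathlength difference to be demodulated, and \phi_0 the system-specific initial phase. This is a standard textbook form rather than the full multi-parameter, multi-noise model derived in the thesis, in which Cramer-Rao bounds are computed jointly for \Delta, \phi_0, the spectrum intensity, and the visibility.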
