  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
421

It’s Personal and Not Just Business: The Effects of Admitting Transgressions on the Perception of Transgressors

Blandina, Alexander 01 January 2013 (has links)
Three experiments examined how a transgressor's response, once he is accused of wrongdoing, alters others' perceptions of him. Study 1 investigated how a baseball player's response to steroid-use accusations affected fans' perceptions of him. Participants thought of the athlete more positively when he apologized for his drug use than when he denied it or provided no comment. Study 2 examined whether the effects of a transgressor's response are moderated by the transgressor's reputation. Participants were predicted to prefer apologies over denials if they had a pre-existing positive view of the transgressor (i.e., the person was a friend rather than a stranger or someone known for being lazy). Similar to Study 1, participants respected the transgressor more and thought he handled the situation better when he apologized rather than denied the transgression; contrary to predictions, however, the transgressor's reputation had no effect on participants' reactions to his responses. Study 3 examined whether feelings of schadenfreude (i.e., positive affect resulting from another's misfortune) mitigated negative feelings toward a transgressor who denied the transgression. After participants witnessed a transgression, they had to work with the transgressor on a task. When the transgressor performed the task incompetently, participants were predicted to feel schadenfreude and therefore to find it less important to hear the transgressor admit his wrongdoing. Results indicated that participants felt more negatively toward an incompetent transgressor than toward one who contributed equally to the task, regardless of whether he denied or apologized for the transgression. Furthermore, contrary to the results of Studies 1 and 2, participants did not feel more positively toward transgressors who apologized.
Overall, these studies provide evidence that apologizing and taking ownership of a transgression is the most effective response for facilitating relationship repair across a range of situations.
422

Exponential weighted aggregation : oracle inequalities and algorithms / Agrégation à poids exponentiels : inégalités oracles et algorithmes

Luu, Duy tung 23 November 2017 (has links)
In many areas of statistics, including signal and image processing, high-dimensional estimation is an important task in recovering an object of interest. In the overwhelming majority of cases, however, the recovery problem is ill-posed. Fortunately, even if the ambient dimension of the object to be restored (signal, image, video) is very large, its intrinsic "complexity" is generally small. This prior information can be exploited through two approaches: (i) penalization (very popular) and (ii) exponential weighted aggregation (EWA). The penalized approach seeks an estimator that minimizes a data-fidelity term penalized by a term promoting objects of low complexity (simple objects). EWA combines a family of pre-estimators, each associated with an exponential weight, which together favor the same low-complexity objects. This manuscript consists of two parts: a theoretical part and an algorithmic part. In the theoretical part, we first propose EWA with a new family of priors promoting analysis-group sparse signals, whose performance is guaranteed by oracle inequalities. Next, we analyze the penalized estimator and EWA, with general priors promoting simple objects, in a unified framework for establishing theoretical guarantees. Two types of guarantees are established: (i) prediction oracle inequalities and (ii) estimation bounds. We then instantiate them for particular cases, some of which have been studied in the literature. In the algorithmic part, we propose an implementation of these estimators that combines Monte Carlo simulation (Langevin diffusion processes) with proximal splitting algorithms, and we prove their convergence guarantees. Several numerical experiments illustrate our theoretical guarantees and our algorithms.
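As a concrete illustration of the aggregation step, here is a minimal sketch of exponential weighted aggregation: pre-estimators are combined with weights proportional to exp(-risk/beta). The pre-estimates, risk values, and temperature below are toy numbers for illustration, not quantities from the thesis.

```python
import numpy as np

def exponential_weighted_aggregation(pre_estimates, risks, beta=1.0):
    """Combine pre-estimators with weights proportional to exp(-risk/beta).

    pre_estimates: (m, d) array, one pre-estimator per row.
    risks: (m,) array of (penalized) empirical risks.
    beta: temperature; a smaller beta concentrates weight on low-risk estimators.
    """
    # Subtract the minimum risk before exponentiating for numerical stability.
    w = np.exp(-(risks - risks.min()) / beta)
    w /= w.sum()
    # The aggregate is the weighted average of the pre-estimators.
    return w @ pre_estimates, w

# Toy example: three pre-estimators of a 2-d parameter; the third has high risk
# and therefore receives an exponentially small weight.
ests = np.array([[1.0, 0.0], [0.9, 0.1], [5.0, 5.0]])
risks = np.array([0.10, 0.12, 3.00])
agg, weights = exponential_weighted_aggregation(ests, risks, beta=0.5)
```

The aggregate ends up close to the two low-risk pre-estimators, which is the behavior the oracle inequalities formalize.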
423

Maintien en conditions opérationnelles pour une flotte de véhicules : étude de la non stabilité des flux de rechange dans le temps / Maintenance, repair and operations for a fleet of vehicles : study of the non-stability of the flow of spares over time

Ducros, Florence 26 June 2018 (has links)
This thesis gathers methodological contributions for simulating the need for replacement equipment for a vehicle fleet. Systems degrade with age or use, and fail when they no longer fulfill their mission. The user therefore needs assurance that the system will be operational during its useful life. A support contract obliges the manufacturer to remedy failures and to keep the system in operational condition for the duration of the contract. In recent years, globalization and the rapid evolution of technology have forced manufacturers to offer maintenance contracts extending well beyond the useful life of the equipment. Managing a support contract or its extension requires knowledge of the equipment's lifetime, but also of the vehicles' usage conditions, which depend on the customer. The analysis of customer returns (RetEx) is thus an important decision-support tool for the manufacturer. In reliability and warranty analysis, however, engineers must often deal with lifetime data that are non-homogeneous. Most of the time this variability is unobserved, yet it must be taken into account to avoid decision errors. A further difficulty is that the data are heavily censored, which makes estimation harder. We propose to model the heterogeneity of lifetimes by a mixture-and-competition model of two Weibull laws. Because the performance of classical estimation methods (maximum likelihood via EM, Bayesian approaches via MCMC) is jeopardized by the large number of parameters and the heavy censoring, we propose a Bayesian bootstrap method, called Bayesian Restoration Maximization. We then use an unsupervised clustering method to identify vehicle-usage profiles. This allows us to simulate the spare-parts needs of a vehicle fleet for the duration of the contract or for a contract extension.
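To make the data-generating model concrete, the sketch below simulates lifetimes from a mixture of two Weibull laws under heavy right-censoring, then fits the five parameters by directly maximizing the censored mixture likelihood. This generic maximum-likelihood fit is a stand-in for the thesis's Bayesian Restoration Maximization method, and all numeric values (mixture weight, shapes, scales, censoring window) are illustrative assumptions.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)

# Simulate the model described in the abstract: lifetimes drawn from a
# mixture of two Weibull laws, observed under right-censoring.
n = 2000
comp = rng.random(n) < 0.6                      # mixture indicator (60% / 40%)
life = np.where(comp,
                stats.weibull_min.rvs(1.5, scale=100, size=n, random_state=rng),
                stats.weibull_min.rvs(3.0, scale=300, size=n, random_state=rng))
cens = rng.uniform(0, 250, size=n)              # administrative censoring times
t = np.minimum(life, cens)                      # observed times
delta = (life <= cens).astype(float)            # 1 = failure observed, 0 = censored

def negloglik(params):
    """Negative log-likelihood of a two-Weibull mixture under right-censoring:
    observed failures contribute the mixture density, censored observations
    contribute the mixture survival function."""
    p, k1, s1, k2, s2 = params
    f = p * stats.weibull_min.pdf(t, k1, scale=s1) + \
        (1 - p) * stats.weibull_min.pdf(t, k2, scale=s2)
    S = p * stats.weibull_min.sf(t, k1, scale=s1) + \
        (1 - p) * stats.weibull_min.sf(t, k2, scale=s2)
    return -np.sum(delta * np.log(f + 1e-300) + (1 - delta) * np.log(S + 1e-300))

res = optimize.minimize(negloglik, x0=[0.5, 1.0, 150, 2.0, 250],
                        bounds=[(0.01, 0.99), (0.2, 10), (10, 1000),
                                (0.2, 10), (10, 1000)], method="L-BFGS-B")
```

With this censoring window a large share of the second component is censored, which illustrates why the thesis treats heavy censoring as the central estimation difficulty.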
424

Autonomní jednokanálový deinterleaving / Autonomous Single-Channel Deinterleaving

Tomešová, Tereza January 2021 (has links)
This thesis deals with autonomous single-channel deinterleaving, i.e., separating a received sequence of pulses originating from several emitters into per-emitter pulse sequences without human assistance. Deinterleaving methods can be divided into single-parameter and multiple-parameter methods, according to the number of parameters used for the separation; this thesis primarily deals with multiple-parameter methods. DBSCAN and variational Bayes methods were chosen as suitable for autonomous single-channel deinterleaving, adapted for the task, and implemented in Python. Their performance is examined on simulated and real data.
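A minimal sketch of the DBSCAN route to deinterleaving, on synthetic pulse data: each received pulse is described by measured parameters (here carrier frequency and pulse width, an assumed feature choice), and density-based clustering recovers the emitters without knowing their number in advance. The emitter parameters are invented for illustration.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Simulated pulse descriptors from three emitters, interleaved in time.
# Features: carrier frequency (MHz) and pulse width (us) -- illustrative only.
emitters = [(9400, 1.0), (9600, 2.5), (9100, 0.5)]
pulses = np.vstack([
    np.column_stack([rng.normal(f, 5, 200), rng.normal(pw, 0.05, 200)])
    for f, pw in emitters
])
rng.shuffle(pulses)  # interleave the received sequence

# Scale features so frequency (MHz) and width (us) contribute comparably,
# then let DBSCAN find the emitter clusters; label -1 marks noise pulses.
X = StandardScaler().fit_transform(pulses)
labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(X)
n_emitters = len(set(labels) - {-1})
```

Each non-noise label then indexes one deinterleaved pulse train, which is the per-emitter output the thesis describes.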
425

A Tool for Administration of the Company Product Portfolio

Koreň, Miroslav January 2011 (has links)
This thesis concerns a key business process in production companies: new product development. The objective was to create a tool for estimating the risk of new product development. To reach this goal, the tools currently used for assessing such risk were first surveyed. The Bayesian network proved to be the most appropriate tool for assessing the risk of new product development. The thesis explains the construction of the Bayesian network and shows how to generate the probabilities in the network so that the risk estimates are accurate. Based on this theoretical groundwork, an information system was built that estimates the risk of new products and administers those risks.
426

Estimating the Ratio of Two Poisson Rates

Price, Robert M., Bonett, Douglas G. 01 September 2000 (has links)
Classical and Bayesian methods for interval estimation of the ratio of two independent Poisson rates are examined and compared in terms of their exact coverage properties. Two methods to determine sampling effort requirements are derived.
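One classical exact approach of the kind compared here conditions on the total count: given X1 + X2, X1 is binomial with success probability determined by the rate ratio, so a Clopper-Pearson interval for that probability transforms into an interval for lambda1/lambda2. A sketch (the specific counts and exposures below are illustrative, not from the paper):

```python
from scipy.stats import beta

def poisson_ratio_ci(x1, t1, x2, t2, conf=0.95):
    """Exact (conditional) CI for lambda1/lambda2 from counts x1, x2 observed
    over exposures t1, t2. Conditional on the total count, x1 is binomial with
    success probability p = lambda1*t1 / (lambda1*t1 + lambda2*t2); a
    Clopper-Pearson interval for p is mapped back to the rate ratio."""
    a = 1 - conf
    # Clopper-Pearson limits for p via beta quantiles.
    p_lo = beta.ppf(a / 2, x1, x2 + 1) if x1 > 0 else 0.0
    p_hi = beta.ppf(1 - a / 2, x1 + 1, x2) if x2 > 0 else 1.0
    # Invert p = r*t1 / (r*t1 + t2)  =>  r = (p / (1 - p)) * (t2 / t1).
    to_ratio = lambda p: (p / (1 - p)) * (t2 / t1) if p < 1 else float("inf")
    return to_ratio(p_lo), to_ratio(p_hi)

# Example: 10 events in 100 units of exposure vs 20 events in 100 units.
lo, hi = poisson_ratio_ci(x1=10, t1=100.0, x2=20, t2=100.0)
```

The interval covers the point estimate 0.5 here; the exact coverage behavior of such conditional intervals is what the paper evaluates.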
427

Planification et analyse de données spatio-temporelles / Design and analysis of spatio-temporal data

Faye, Papa Abdoulaye 08 December 2015 (has links)
Spatio-temporal modeling allows the prediction of a regionalized variable at unobserved sites of a study domain, based on observations of this variable at some sites of the domain at various times t. In this thesis, the proposed approach couples numerical and statistical models. Adopting a Bayesian framework, we combine the different sources of information: spatial information provided by the observations, temporal information provided by the black-box numerical model, and prior information on the phenomenon. This yields better prediction and a sound quantification of the prediction uncertainty. We also propose a new optimality criterion for experimental designs that incorporates both the control of the uncertainty at each point of the domain and the expected value of the phenomenon.
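The coupling idea can be sketched as follows: the numerical model supplies the prior mean, and a Gaussian-process residual conditioned on the observations both corrects that prediction and quantifies the remaining uncertainty at every site. Everything below (the black-box function, the kernel, the noise level, the site locations) is an illustrative assumption, not the thesis's actual model.

```python
import numpy as np

rng = np.random.default_rng(2)

def blackbox(x):
    """Stand-in for the numerical (black-box) model."""
    return np.sin(3 * x)

def kernel(a, b, ell=0.3, var=0.25):
    """Squared-exponential covariance for the residual process."""
    return var * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

# Noisy field observations at a few sites of the domain [0, 1].
x_obs = np.array([0.1, 0.4, 0.6, 0.9])
y_obs = blackbox(x_obs) + rng.normal(0, 0.05, x_obs.size)

x_new = np.linspace(0, 1, 50)
noise = 0.05**2
K = kernel(x_obs, x_obs) + noise * np.eye(x_obs.size)
Ks = kernel(x_new, x_obs)

# Posterior mean = black-box prediction corrected by the conditioned residual;
# posterior variance quantifies the remaining uncertainty at each site.
resid = y_obs - blackbox(x_obs)
mean = blackbox(x_new) + Ks @ np.linalg.solve(K, resid)
var = kernel(x_new, x_new).diagonal() - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
```

The pointwise posterior variance is exactly the quantity a design criterion of the kind proposed here would seek to control when choosing new observation sites.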
428

Multimodal Performance Evaluation of Urban Traffic Control: A Microscopic Simulation Study

Sautter, Natalie, Kessler, Lisa, Belikhov, Danil, Bogenberger, Klaus 23 June 2023 (has links)
Multimodality is a main requirement for future Urban Traffic Control (UTC). For cities and traffic engineers to implement multimodal UTC, a holistic, multimodal assessment of UTC measures is needed. This paper proposes a Multimodal Performance Index (MPI), which considers the delays and numbers of stops of the different transport modes, weighted against each other. To determine suitable mode-specific weights, a case study for the German city of Ingolstadt is conducted using the microscopic simulation tool SUMO. In the case study, different UTC measures (bus priority, coordination for cyclists, coordination for private vehicle traffic) are implemented to varying extents and evaluated under different weight settings. The MPI is calculated both network-wide and per intersection. The results indicate that weighting modes by their occupancy level, as mainly proposed in the literature so far, is not sufficient. This applies particularly to cycling, which should be weighted according to its positive environmental impact rather than its occupancy. Moreover, the mode-specific weights have to correspond to the traffic-related impact of the mode-specific UTC measures. For Ingolstadt, the results are promising for a weighting according to the current modal split and a weighting with incentives for sustainable modes.
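An MPI of the kind described can be sketched as a weighted sum over modes of delay plus a stop penalty. The weight values, stop-penalty factor, and per-mode figures below are illustrative assumptions, not the paper's calibrated settings.

```python
# Minimal sketch of a Multimodal Performance Index: per-mode delays and stops
# are combined with mode-specific weights. All numbers are assumed values.

def mpi(per_mode, weights, stop_penalty_s=10.0):
    """per_mode: {mode: (avg_delay_s, avg_stops)} per traveller.
    weights: {mode: weight}; stop_penalty_s converts stops into seconds."""
    return sum(weights[m] * (delay + stop_penalty_s * stops)
               for m, (delay, stops) in per_mode.items())

per_mode = {"bus":     (45.0, 1.2),
            "bicycle": (30.0, 0.8),
            "car":     (60.0, 2.0)}

# An incentive weighting favoring sustainable modes (assumed values).
weights = {"bus": 0.5, "bicycle": 0.3, "car": 0.2}
index = mpi(per_mode, weights)  # lower is better
```

Comparing `index` across weight settings and UTC measures mirrors the network-wide evaluation carried out in the case study.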
429

Chemical Analysis, Databasing, and Statistical Analysis of Smokeless Powders for Forensic Application

Dennis, Dana-Marie 01 January 2015 (has links)
Smokeless powders are a set of energetic materials, known as low explosives, which are typically utilized for reloading ammunition. There are three types, which differ in their primary energetic materials: single base powders contain nitrocellulose as their sole energetic material; double base powders contain nitroglycerin in addition to nitrocellulose; and triple base powders additionally contain nitroguanidine. Additional organic compounds, while not proprietary to specific manufacturers, are added to the powders in varied ratios during the manufacturing process to optimize the ballistic performance of the powders. These compounds function as stabilizers, plasticizers, flash suppressants, deterrents, and opacifiers. Of the three smokeless powder types, single and double base powders are commercially available and have been heavily utilized in the manufacture of improvised explosive devices. Forensic smokeless powder samples are currently analyzed using multiple analytical techniques. Combined microscopic, macroscopic, and instrumental techniques are used to evaluate a sample, and the information obtained is used to generate a list of potential distributors. Gas chromatography-mass spectrometry (GC-MS) is arguably the most useful of the instrumental techniques, since it distinguishes single and double base powders and provides additional information about the relative ratios of all the analytes present in the sample. However, forensic smokeless powder samples are still limited to being classified as either single or double base powders, based on the absence or presence of nitroglycerin, respectively. In this work, the goal was to develop statistically valid classes, beyond the single and double base designations, based on multiple organic compounds which are commonly encountered in commercial smokeless powders.
Several chemometric techniques were applied to smokeless powder GC-MS data for determination of the classes, and for assignment of test samples to these novel classes. The total ion spectrum (TIS), which is calculated from the GC-MS data for each sample, is obtained by summing the intensities for each mass-to-charge (m/z) ratio across the entire chromatographic profile. A TIS matrix comprising data for 726 smokeless powder samples was subject to agglomerative hierarchical cluster (AHC) analysis, and six distinct classes were identified. Within each class, a single m/z ratio had the highest intensity for the majority of samples, though the m/z ratio was not always unique to the specific class. Based on these observations, a new classification method known as the Intense Ion Rule (IIR) was developed and used for the assignment of test samples to the AHC designated classes. Discriminant models were developed for assignment of test samples to the AHC designated classes using k-Nearest Neighbors (kNN) and linear and quadratic discriminant analyses (LDA and QDA, respectively). Each of the models were optimized using leave-one-out (LOO) and leave-group-out (LGO) cross-validation, and the performance of the models was evaluated by calculating correct classification rates for assignment of the cross-validation (CV) samples to the AHC designated classes. The optimized models were utilized to assign test samples to the AHC designated classes. Overall, the QDA LGO model achieved the highest correct classification rates for assignment of both the CV samples and the test samples to the AHC designated classes. In forensic application, the goal of an explosives analyst is to ascertain the manufacturer of a smokeless powder sample. In addition, knowledge about the probability of a forensic sample being produced by a specific manufacturer could potentially decrease the time invested by an analyst during investigation by providing a shorter list of potential manufacturers. 
In this work, Bayes' Theorem and Bayesian networks were investigated as an additional tool to be utilized in forensic casework. Bayesian networks were generated and used to calculate posterior probabilities of a test sample belonging to specific manufacturers. The networks were designed to include manufacturer-controlled powder characteristics such as shape, color, and dimension, as well as the relative intensities of the class-associated ions determined from cluster analysis. Samples were predicted to belong to the manufacturer with the highest posterior probability. Overall percent correct rates were determined by calculating the percentage of correct predictions, that is, those where the known and predicted manufacturer were the same. The initial overall percent correct rate was 66%. The dimensions of the smokeless powders were then added to the network as average diameter and average length nodes, which raised the overall prediction rate to 70%.
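The clustering-then-classification workflow described above can be sketched on synthetic data: hierarchical (agglomerative) clustering defines the classes from a TIS-like intensity matrix, and a QDA model is then trained to assign held-out samples to those classes. The data below are simulated stand-ins with one dominant "ion" per class, not real GC-MS measurements.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Stand-in for a TIS matrix: rows are samples, columns are summed intensities
# per m/z ratio. Three synthetic classes, each dominated by one ion.
n_per, n_mz = 60, 8
blocks = []
for dominant in (0, 3, 6):
    X = rng.gamma(2.0, 1.0, size=(n_per, n_mz))   # background intensities
    X[:, dominant] += 20.0                        # the class's most intense ion
    blocks.append(X)
X = np.vstack(blocks)

# Step 1: agglomerative hierarchical clustering defines the classes.
labels = AgglomerativeClustering(n_clusters=3, linkage="ward").fit_predict(X)

# Step 2: a QDA model is trained to assign held-out samples to those classes.
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25,
                                          random_state=0)
qda = QuadraticDiscriminantAnalysis().fit(X_tr, y_tr)
rate = qda.score(X_te, y_te)   # correct classification rate on the test split
```

The correct classification rate computed on the held-out split corresponds to the model-evaluation step the abstract describes, here without the leave-one-out and leave-group-out variants.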
430

Site-Specific Point Positioning and GPS Code Multipath Parameterization and Prediction

Edwards, Karla Roberta Lisa 25 October 2011 (has links)
No description available.
