41

Exploring Leader-Employee Work Relationship Agreement and Constructiveness of Feedback

Lindsay Mechem Rosokha (13151073) 26 July 2022 (has links)
In recent years, there has been a performance management revolution, making it especially critical that researchers study the informal exchange of feedback outside of the formal review. In this dissertation, I conduct two studies that focus on informal, constructive feedback. In study 1, I validate a measure that captures constructiveness of feedback and another that captures the degree to which work relies on virtual interactions. In study 2, I draw on interpersonal attraction theory to develop a dyadic model that tests three sets of hypotheses using polynomial regression and response surface methodology. First, I test the direct effects of leader-employee (L-E) relational attribute agreement on constructive feedback. Second, I contextualize this dyadic interaction by testing two moderators: gender similarity and virtuality of work. Finally, I examine constructive feedback as a mediating mechanism between L-E relational attribute agreement and three sets of beneficial (job performance and work engagement), consequential (turnover intentions and stress), and interpersonal (prosocial behavior and relationship conflict) outcomes. Overall, my hypotheses received mixed support. In L-E dyads with agreement at high levels of relational attributes, employees experienced more constructive feedback compared to those in L-E dyads that agreed their relational attributes were at low levels. Surprisingly, agreement between leaders and employees on their relational attributes (whether at high or low levels) was not associated with more constructive feedback than disagreement. The strength of the relationship between L-E relational attribute agreement and constructive feedback was marginally influenced by gender similarity, but not by virtuality of work. Finally, constructiveness of feedback mediated the relationship between L-E relational attribute agreement and work engagement. Overall, the results show that positive L-E work relationships are important for constructive feedback and motivating employees, especially when the leader and employee both view the relationship positively.
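As a rough illustration of the analytical approach this abstract describes, the sketch below fits a second-order polynomial regression on synthetic dyadic data and computes the standard response-surface quantities along the congruence and incongruence lines. The variable names, sample size, and coefficients are invented for illustration and are not drawn from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic dyadic data (illustrative only): leader (L) and employee (E)
# ratings of a relational attribute, mean-centered, and a constructive-
# feedback outcome Z.
L = rng.normal(size=n)
E = 0.6 * L + 0.8 * rng.normal(size=n)
Z = 0.4 * L + 0.3 * E + 0.2 * L * E + rng.normal(scale=0.5, size=n)

# Second-order polynomial regression:
# Z = b0 + b1*L + b2*E + b3*L^2 + b4*L*E + b5*E^2 + error
X = np.column_stack([np.ones(n), L, E, L**2, L * E, E**2])
b, *_ = np.linalg.lstsq(X, Z, rcond=None)
b0, b1, b2, b3, b4, b5 = b

# Response-surface quantities commonly used in agreement research:
# slope/curvature along the congruence line (L = E) and the
# incongruence line (L = -E).
a1 = b1 + b2          # slope along L = E
a2 = b3 + b4 + b5     # curvature along L = E
a3 = b1 - b2          # slope along L = -E
a4 = b3 - b4 + b5     # curvature along L = -E
print(f"a1={a1:.2f}, a2={a2:.2f}, a3={a3:.2f}, a4={a4:.2f}")
```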
42

The COMPASS Paradigm For The Systematic Evaluation Of U.S. Army Command And Control Systems Using Neural Network And Discrete Event Computer Simulation

Middlebrooks, Sam E. 15 April 2003 (has links)
In today's technology-based society the rapid proliferation of new machines and systems that would have been undreamed of only a few short years ago has become a way of life. Developments and advances, especially in the areas of digital electronics and micro-circuitry, have spawned subsequent technology-based improvements in transportation, communications, entertainment, automation, the armed forces, and many other areas that would not have been possible otherwise. This rapid "explosion" of new capabilities and ways of performing tasks has been motivated as often as not by the philosophy that if it is possible to make something better, work faster, be more cost effective, or operate over greater distances, then it must inherently be good for the human operator. Taken further, these improvements typically are envisioned to consequently produce a more efficient operating system in which the human operator is an integral component. The formal concept of human-system interface design has only emerged this century as a recognized academic discipline; however, the practice of developing ideas and concepts for systems containing human operators has been in existence since humans started experiencing cognitive thought. An example of a human-system interface technology for communication and dissemination of written information that has evolved over centuries of trial-and-error development is the book. It is no accident that the form and shape of the book of today is as it is. This is because it is a shape and form readily usable by human physiology, whose optimal configuration was determined by centuries of effort and revision. This slow evolution was mirrored by a rate of technical evolution in printing and elsewhere that allowed new advances to be experimented with as part of the overall use requirement and need for the existence of the printed word and some way to contain it. Today, however, technology is advancing at such a rapid rate that evolutionary use requirements have no chance to develop alongside the fast pace of technical progress. One result of this recognition is the establishment of disciplines like human factors engineering that have the stated purpose of systematically determining good and bad human-system interface designs. However, other results of this phenomenon are systems that get developed and placed into public use simply because new technology allowed them to be made. This development can proceed without a full appreciation of how the system might be used and, perhaps even more significantly, what impact the use of this new system might have on the operator within it. The U.S. Army has a term for this type of activity. It is called "stove-piped development". The implication of this term is that a system gets developed in isolation, where the developers are only looking "up" and not "around". They are thus concerned only with how this system may work or be used for its own singular purposes, as opposed to how it might be used in the larger community of existing systems and interfaces or, even more importantly, in the larger community of other new systems in concurrent development. Some of the impacts for the Army from this mode of system development are communication systems that work exactly as designed but are unable to interface with other communications systems in other domains for battlefield-wide communications capabilities. Having communications systems that cannot communicate with each other is a distinct problem in its own right.
However, when developments in one industry produce products that humans use or attempt to use with products from totally separate developments or industries, the Army concept of product development resulting from stove-piped design visions can have significant implications for the operation of each system and the human operator attempting to use it. There are many examples that would illustrate the above concept; however, one that will be explored here is the Army effort to study, understand, and optimize its command and control (C2) operations. This effort is at the heart of a change in the operational paradigm in C2 Tactical Operations Centers (TOCs) that the Army is now undergoing. For the 50 years since World War II the nature, organization, and mode of operation of command organizations within the Army have remained virtually unchanged. Staffs have been organized on a basic four-section structure, and TOCs generally only operate in a totally static mode, with the time required to move them to keep up with a mobile battlefield going up almost exponentially from lower to higher command levels. However, current initiatives are changing all that, and while new vehicles and hardware systems address individual components of the command structures to improve their operations, these initiatives do not necessarily provide the environment in which the human operator component of the overall system can function in a more effective manner. This dissertation examines C2 from a system-level viewpoint using a new paradigm for systematically examining the way TOCs operate and then translating those observations into validated computer simulations using a methodological framework. This paradigm is called COmputer Modeling Paradigm And Simulation of Systems (COMPASS). COMPASS provides the ability to model TOC operations in a way that not only includes the individuals, work groups, and teams in it, but also all of the other hardware and software systems, subsystems, and human-system interfaces that comprise it, as well as the facilities and environmental conditions that surround it. Most of the current literature and research in this area focuses on the concept of C2 itself and its follow-on activities of command, control, and communications (C3); command, control, communications, and computers (C4); and command, control, communications, computers, and intelligence (C4I). This focus tends to address the activities involved with the human processes within the overall system, such as individual and team performance and the commander's decision-making process. While the literature acknowledges the existence of the command and control system (C2S), little effort has been expended to quantify and analyze C2Ss from a systemic viewpoint. A C2S is defined as the facilities, equipment, communications, procedures, and personnel necessary to support the commander (i.e., the primary decision maker within the system) in conducting the activities of planning, directing, and controlling the battlefield within the sector of operations applicable to the system. The research in this dissertation is in two phases. The overall project incorporates sequential experimentation procedures that build on successive TOC observation events to generate an evolving data store that supports the two phases of the project. Phase I consists of the observation of heavy maneuver battalion and brigade TOCs during peacetime exercises.
The term "heavy maneuver" is used to connotate main battle forces such as armored and mechanized infantry units supported by artillery, air defense, close air, engineer, and other so called combat support elements. This type of unit comprises the main battle forces on the battlefield. It is used to refer to what is called the conventional force structure. These observations are conducted using naturalistic observation techniques of the visible functioning of activities within the TOC and are augmented by automatic data collection of such things as analog and digital message traffic, combat reports generated by the computer simulations supporting the wargame exercise, and video and audio recordings where appropriate and available. Visible activities within the TOC include primarily the human operator functions such as message handling activities, decision-making processes and timing, coordination activities, and span of control over the battlefield. They also include environmental conditions, functional status of computer and communications systems, and levels of message traffic flows. These observations are further augmented by observer estimations of such indicators as perceived level of stress, excitement, and level of attention to the mission of the TOC personnel. In other words, every visible and available component of the C2S within the TOC is recorded for analysis. No a priori attempt is made to evaluate the potential significance of each of the activities as their contribution may be so subtle as to only be ascertainable through statistical analysis. Each of these performance activities becomes an independent variable (IV) within the data that is compared against dependent variables (DV) identified according to the mission functions of the TOC. The DVs for the C2S are performance measures that are critical combat tasks performed by the system. Examples of critical combat tasks are "attacking to seize an objective", "seizure of key terrain", and "river crossings'. A list of expected critical combat tasks has been prepared from the literature and subject matter expert (SME) input. After the exercise is over, the success of these critical tasks attempted by the C2S during the wargame are established through evaluator assessments, if available, and/or TOC staff self analysis and reporting as presented during after action reviews. The second part of Phase I includes datamining procedures, including neural networks, used in a constrained format to analyze the data. The term constrained means that the identification of the outputs/DV is known. The process was to identify those IV that significantly contribute to the constrained DV. A neural network is then constructed where each IV forms an input node and each DV forms an output node. One layer of hidden nodes is used to complete the network. The number of hidden nodes and layers is determined through iterative analysis of the network. The completed network is then trained to replicate the output conditions through iterative epoch executions. The network is then pruned to remove input nodes that do not contribute significantly to the output condition. Once the neural network tree is pruned through iterative executions of the neural network, the resulting branches are used to develop algorithmic descriptors of the system in the form of regression like expressions. For Phase II these algorithmic expressions are incorporated into the CoHOST discrete event computer simulation model of the C2S. 
The programming environment is the commercial programming language Micro Saint™ running on a PC microcomputer. An interrogation approach was developed to query these algorithms within the computer simulation to determine if they allow the simulation to reflect the activities observed in the real TOC to within an acceptable degree of accuracy. The purpose of this dissertation is to introduce the COMPASS concept, a paradigm for developing techniques and procedures to translate as much of the performance of the entire TOC system as possible to an existing computer simulation suitable for analyses of future system configurations. The approach consists of the following steps:
• Naturalistic observation of the real system using ethnographic techniques.
• Data analysis using data-mining techniques such as neural networks.
• Development of mathematical models of TOC performance activities.
• Integration of the mathematical models into the CoHOST computer simulation.
• Interrogation of the computer simulation.
• Assessment of the level of accuracy of the computer simulation.
• Validation of the process as a viable system simulation approach. / Ph. D.
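For a flavor of the discrete-event side of Phase II, the sketch below runs a toy single-operator message-handling queue and reports waiting times. The arrival and service rates are invented, and plain Python stands in for CoHOST/Micro Saint; this is an illustration of the simulation style, not the dissertation's model.

```python
import random

random.seed(0)

ARRIVAL_MEAN, SERVICE_MEAN, N_MSG = 4.0, 3.0, 500  # minutes; illustrative only

# Generate exponential inter-arrival times for N_MSG incoming messages.
arrivals, t = [], 0.0
for _ in range(N_MSG):
    t += random.expovariate(1.0 / ARRIVAL_MEAN)
    arrivals.append(t)

# Single-operator FIFO service: each message starts processing once it has
# arrived and the operator has finished the previous one.
free_at, waits = 0.0, []
for a in arrivals:
    start = max(a, free_at)
    waits.append(start - a)
    free_at = start + random.expovariate(1.0 / SERVICE_MEAN)

print(f"mean wait {sum(waits)/len(waits):.1f} min, max wait {max(waits):.1f} min")
```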
43

On specification and inference in the econometrics of public procurement

Sundström, David January 2016 (has links)
In Paper [I] we use data on Swedish public procurement auctions for internal regular cleaning service contracts to provide novel empirical evidence regarding green public procurement (GPP) and its effect on the potential suppliers' decision to submit a bid and their probability of being qualified for supplier selection. We find only a weak effect on supplier behavior, which suggests that GPP does not live up to its political expectations. However, several environmental criteria appear to be associated with increased complexity, as indicated by the reduced probability of a bid being qualified in the post-qualification process. As such, GPP appears to have limited or no potential to function as an environmental policy instrument. In Paper [II] the observation is made that empirical evaluations of the effect of policies transmitted through public procurements on bid sizes are made using linear regressions or by more involved non-linear structural models. The aspiration is typically to determine a marginal effect. Here, I compare marginal effects generated under both types of specifications. I study how a political initiative to make firms less environmentally damaging, implemented through public procurement, influences Swedish firms' behavior. The collected evidence brings about a statistically as well as economically significant effect on firms' bids and costs. Paper [III] embarks by noting that auction theory suggests that as the number of bidders (competition) increases, the sizes of the participants' bids decrease. An issue in the empirical literature on auctions is which measurement(s) of competition to use. Utilizing a dataset on public procurements containing measurements on both the actual and potential number of bidders, I find that a workhorse model of public procurements is best fitted to data using only actual bidders as the measurement of competition. Acknowledging that all measurements of competition may be erroneous, I propose an instrumental variable estimator that (given my data) brings about a competition effect bounded by those generated by specifications using the actual and potential number of bidders, respectively. Also, some asymptotic results are provided for non-linear least squares estimators obtained from a dependent variable transformation model. Paper [IV] introduces a novel method to measure bidders' costs (valuations) in descending (ascending) auctions. Based on two bounded rationality constraints, bidders' costs (valuations) are given an imperfect-measurements interpretation robust to behavioral deviations from traditional rationality assumptions. Theory provides no guidance as to the shape of the cost (valuation) distributions, while empirical evidence suggests they are positively skewed. Consequently, a flexible distribution is employed in an imperfect-measurements framework. An illustration of the proposed method on Swedish public procurement data is provided, along with a comparison to a traditional Bayesian Nash equilibrium approach.
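A hedged sketch of the instrumental-variable idea behind Paper [III]: when the competition measure is observed with error, OLS attenuates the effect, while two-stage least squares using the potential number of bidders as an instrument can recover it. The data-generating process, coefficient values, and linear specification below are all invented for illustration and are not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

# Synthetic procurement data: log bids fall in the level of competition,
# but the "actual bidders" measure is observed with noise; the potential
# number of bidders serves as the instrument.
potential = rng.poisson(6, size=n) + 1
actual = np.clip(potential - rng.poisson(2, size=n), 1, None)
log_bid = 5.0 - 0.08 * actual + rng.normal(scale=0.3, size=n)
actual_obs = actual + rng.normal(scale=1.0, size=n)   # mismeasured competition

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)

# Naive OLS on the mismeasured regressor (attenuated toward zero).
b_ols = ols(np.column_stack([ones, actual_obs]), log_bid)

# 2SLS: project the mismeasured regressor on the instrument, then regress
# log bids on the fitted values.
stage1 = ols(np.column_stack([ones, potential]), actual_obs)
fitted = np.column_stack([ones, potential]) @ stage1
b_iv = ols(np.column_stack([ones, fitted]), log_bid)

print(f"OLS competition effect {b_ols[1]:.3f}, IV effect {b_iv[1]:.3f}")
```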
44

Optimizing the Nuclear Waste Fund's Profit / Optimering av Kärnavfallsfondens avkastning

Kazi-tani, Zakaria, Ramirez Alvarez, André January 2018 (has links)
The Nuclear Waste Fund constitutes a financial system that finances future costs of the management of spent nuclear fuel as well as decommissioning of nuclear power plants. The fund invests its capital under strict rules which are stipulated in the investment policy established by the board. The policy stipulates that the fund can only invest according to certain allocation limits, and restricts it to investing solely in nominal and inflation-linked bonds issued by the Swedish state as well as treasury securities. A norm portfolio is built to compare the performance of the NWF's investments. On average, the NWF has outperformed the norm portfolio in recent years, but it may not always have been optimal. Recent studies suggest that allocation limits should be revised over time as the return and risk parameters may change over time. This study focused on simulating three different portfolios where the allocation limits and investment options were extended to see if these extensions would outperform the norm portfolio while maintaining a set risk limit. Portfolio A consisted of the OMRX REAL and OMRX TBOND indexes, Portfolio B consisted of the OMRX REAL, OMRX TBOND and S&P Sweden 1+ Year Investment Grade Corporate Bond indexes, and Portfolio C consisted of the OMRX REAL, OMRX TBOND and OMXSPI indexes. The return of each portfolio for different weight distributions of the assets was simulated in MATLAB, and polynomial regression models were built in order to optimize the return as a function of the assets' weights using a Lagrange multiplier approach for each portfolio. The results showed that the maximal returns of Portfolios A, B and C were 4.00%, 4.13% and 7.93% respectively, outperforming the norm portfolio's average return of 3.69% over the time period 2009-2016.
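A rough illustration of the constrained optimization described above, with a simple mean-variance stand-in for the thesis's fitted polynomial return surface: maximize expected return over three asset weights subject to a full-investment constraint and a risk cap. All numbers are invented, and scipy's SLSQP solver handles the constraint conditions in place of an explicit Lagrange-multiplier derivation.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in expected returns and covariance for three assets,
# w = [w_real, w_tbond, w_equity]; values are illustrative only.
mu = np.array([0.025, 0.030, 0.075])
Q = np.array([[0.002, 0.001, 0.000],
              [0.001, 0.003, 0.001],
              [0.000, 0.001, 0.020]])
RISK_LIMIT = 0.006  # arbitrary variance cap

def neg_return(w):
    return -(w @ mu)

constraints = [
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},            # fully invested
    {"type": "ineq", "fun": lambda w: RISK_LIMIT - w @ Q @ w}, # risk budget
]
res = minimize(neg_return, x0=np.full(3, 1 / 3), bounds=[(0, 1)] * 3,
               constraints=constraints, method="SLSQP")
print("weights:", np.round(res.x, 3), "expected return:", round(-res.fun, 4))
```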
45

Application of Java on Statistics Education

Tsay, Yuh-Chyuan 24 July 2000 (has links)
With the prevalence of the internet, using the network as a tool for computer-aided education is gradually becoming a trend. However, computer-aided education has typically been presented as static text, which is merely convenient for the user to read and differs little from a traditional textbook. With the growth of the WWW and the development of Java, interactive computer-aided education is becoming a trend for the future, and this new medium can improve the teaching of basic statistics. An instructor can take advantage of HTML combined with Java Applets to deliver interactive education through the WWW. In this paper, we use six examples of Java Applets for statistical computer-aided education to help students learn and understand some abstract statistical concepts. The key methods for reaching this goal are visualization and simulation through graphics or games. Finally, we discuss how to use the Applets and how to add them to a homepage easily.
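As an example of the kind of simulation such applets implement (written here in Python rather than Java, purely for illustration), the snippet below simulates the sampling distribution of the mean to visualize the central limit theorem, one of the abstract concepts this sort of teaching tool typically targets.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)

# Draw many samples from a skewed population and plot the distribution of
# their means; as the sample size grows, the histogram approaches a normal.
population = rng.exponential(scale=2.0, size=100_000)
for k, n in enumerate([2, 10, 50], start=1):
    means = rng.choice(population, size=(5_000, n)).mean(axis=1)
    plt.subplot(1, 3, k)
    plt.hist(means, bins=40)
    plt.title(f"n = {n}")
plt.tight_layout()
plt.show()
```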
46

Optimum Savitzky-Golay Filtering for Signal Estimation

Krishnan, Sunder Ram January 2013 (has links) (PDF)
Motivated by the classic works of Charles M. Stein, we focus on developing risk-estimation frameworks for denoising problems in both one and two dimensions. We assume a standard additive noise model, and formulate the denoising problem as one of estimating the underlying clean signal from noisy measurements by minimizing a risk corresponding to a chosen loss function. Our goal is to incorporate perceptually-motivated loss functions wherever applicable, as in the case of speech enhancement, with the squared error loss being considered for the other scenarios. Since the true risks are observed to depend on the unknown parameter of interest, we circumvent the roadblock by deriving finite-sample unbiased estimators of the corresponding risks based on Stein's lemma. We establish the link between the multivariate parameter estimation problem addressed by Stein and our denoising problem, and derive estimators of the oracle risks. In all cases, optimum values of the parameters characterizing the denoising algorithm are determined by minimizing Stein's unbiased risk estimator (SURE). The key contribution of this thesis is the development of a risk-estimation approach for choosing the two critical parameters affecting the quality of nonparametric regression, namely, the order and bandwidth/smoothing parameters. This is a classic problem in statistics, and certain algorithms relying on derivation of suitable finite-sample risk estimators for minimization have been reported in the literature (note that all these works consider the mean squared error (MSE) objective). We show that a SURE-based formalism is well-suited to the regression parameter selection problem, and that the optimum solution guarantees near-minimum MSE (MMSE) performance. We develop algorithms for both globally and locally choosing the two parameters, the latter referred to as spatially-adaptive regression. We observe that the parameters are so chosen as to trade off the squared bias and variance quantities that constitute the MSE. We also indicate the advantages accruing out of incorporating a regularization term in the cost function in addition to the data error term. In the more general case of kernel regression, which uses a weighted least-squares (LS) optimization, we consider the applications of image restoration from very few random measurements, in addition to denoising of uniformly sampled data. We show that local polynomial regression (LPR) becomes a special case of kernel regression, and extend our results for LPR on uniform data to non-uniformly sampled data also. The denoising algorithms are compared with other standard, performant methods available in the literature, both in terms of estimation error and computational complexity. A major perspective provided in this thesis is that the problem of optimum parameter choice in nonparametric regression can be viewed as the selection of optimum parameters of a linear, shift-invariant filter. This interpretation is provided by deriving motivation from the hallmark paper of Savitzky and Golay and from Schafer's recent article in IEEE Signal Processing Magazine. It is worth noting that Savitzky and Golay had shown in their original Analytical Chemistry journal article that LS fitting of a fixed-order polynomial over a neighborhood of fixed size is equivalent to convolution with an impulse response that is fixed and can be pre-computed.
They had provided tables of impulse response coefficients for computing the smoothed function and smoothed derivatives for different orders and neighborhood sizes, the resulting filters being referred to as Savitzky-Golay (S-G) filters. Thus, we provide the new perspective that the regression parameter choice is equivalent to optimizing for the filter impulse response length / 3-dB bandwidth, which are inversely related. We observe that the MMSE solution is such that the S-G filter chosen is of longer impulse response length (equivalently, smaller cutoff frequency) at relatively flat portions of the noisy signal so as to smooth noise, and vice versa at locally fast-varying portions of the signal so as to capture the signal patterns. Also, we provide a generalized S-G filtering viewpoint in the case of kernel regression. Building on the S-G filtering perspective, we turn to the problem of dynamic feature computation in speech recognition. We observe that the methodology employed for computing dynamic features from the trajectories of static features is in fact derivative S-G filtering. With this perspective, we note that the filter coefficients can be pre-computed, and that the whole problem of delta feature computation becomes efficient. Indeed, we observe an advantage by a factor of 10⁴ on making use of S-G filtering over actual LS polynomial fitting and evaluation. Thereafter, we study the properties of first- and second-order derivative S-G filters of certain orders and lengths experimentally. The derivative filters are bandpass due to the combined effects of LPR and derivative computation, which are lowpass and highpass operations, respectively. The first- and second-order S-G derivative filters are also observed to exhibit an approximately constant-Q property. We perform a TIMIT phoneme recognition experiment comparing the recognition accuracies obtained using S-G filters and the conventional approach followed in HTK, where Furui's regression formula is made use of. The recognition accuracies for both cases are almost identical, with S-G filters of certain bandwidths and orders registering a marginal improvement. The accuracies are also observed to improve with longer filter lengths, for a particular order. In terms of computation latency, we note that S-G filtering achieves delta and delta-delta feature computation in parallel by linear filtering, whereas they need to be obtained sequentially in the case of the standard regression formulas used in the literature. Finally, we turn to the problem of speech enhancement, where we are interested in denoising using perceptually-motivated loss functions such as Itakura-Saito (IS). We propose to perform enhancement in the discrete cosine transform domain using risk minimization. The cost functions considered are non-quadratic, and derivation of the unbiased estimator of the risk corresponding to the IS distortion is achieved using an approximate Taylor-series analysis under a high signal-to-noise-ratio assumption. The exposition is general since we focus on an additive noise model with the noise density assumed to fall within the exponential class of density functions, which comprises most of the common densities. The denoising function is assumed to be pointwise linear (a modified James-Stein (MJS) estimator), and parallels between Wiener filtering and the optimum MJS estimator are discussed.
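A rough sketch of the SURE-driven parameter choice described above: pick a Savitzky-Golay window length by minimizing an unbiased risk estimate on synthetic data. The signal, noise level, and polynomial order are invented, scipy's savgol_filter/savgol_coeffs stand in for the thesis's own implementation, and the SURE expression below ignores edge handling.

```python
import numpy as np
from scipy.signal import savgol_filter, savgol_coeffs

rng = np.random.default_rng(3)
n, sigma = 512, 0.25
t = np.linspace(0, 1, n)
clean = np.sin(4 * np.pi * t) + 0.5 * np.sin(12 * np.pi * t)
noisy = clean + sigma * rng.normal(size=n)

order = 3
best = None
for win in range(5, 101, 2):                     # odd window lengths
    est = savgol_filter(noisy, win, order)
    # SURE for a linear smoother y_hat = H y with known sigma:
    #   SURE = ||y - y_hat||^2 / n + 2*sigma^2*tr(H)/n - sigma^2,
    # and for an S-G filter tr(H)/n is (away from the edges) the center
    # convolution coefficient.
    c0 = savgol_coeffs(win, order)[win // 2]
    sure = np.mean((noisy - est) ** 2) + 2 * sigma**2 * c0 - sigma**2
    mse = np.mean((clean - est) ** 2)            # oracle, for comparison only
    if best is None or sure < best[0]:
        best = (sure, win, mse)

print(f"SURE-selected window = {best[1]}, oracle MSE at that window = {best[2]:.4f}")
```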
48

Unpacking Emotional Dissonance: Examining the Effects of Event-Level Emotional Dissonance on Well-Being Using Polynomial Regression

Harris, Mary Margaret 10 September 2014 (has links)
No description available.
49

[pt] MODELO SUBSTITUTO PARA FLUXO NÃO SATURADO VIA REGRESSÃO POLINOMIAL EVOLUCIONÁRIA: CALIBRAÇÃO COM O ENSAIO DE INFILTRAÇÃO MONITORADA / [en] SURROGATE MODEL FOR UNSATURATED FLOW THROUGH EVOLUTIONARY POLYNOMIAL REGRESSION: CALIBRATION WITH THE MONITORED INFILTRATION TEST

RUAN GONCALVES DE SOUZA GOMES 26 February 2021 (has links)
[en] Water flow analyses under transient soil hydraulic conditions require knowledge of the soil hydraulic properties. These constitutive relationships, named the soil-water characteristic curve (SWCC) and the hydraulic conductivity function (HCF), are described through empirical models which generally have several parameters that must be calibrated against collected data. Many of the parameters in SWCC and HCF models cannot be directly measured in the field or laboratory but can only be meaningfully inferred from collected data and inverse modeling. In order to obtain the soil parameters with the inverse process, a local or global optimization algorithm may be applied. Global optimizations are more capable of finding optimum parameters; however, the direct solution through numerical modeling is time consuming. Therefore, analytical solutions (surrogate models) may overcome this shortcoming by accelerating the optimization process. In this work we introduce Evolutionary Polynomial Regression (EPR) as a tool to develop surrogate models of the physically-based unsaturated flow. A rich dataset of soil hydraulic parameters is used to calibrate our surrogate model, and real-world data are then utilized to validate our methodology. Our results demonstrate that the EPR model accurately predicts the observed pressure head data. The model simulations are shown to be in good agreement with the Hydrus software package.
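A minimal sketch of the surrogate-modeling idea, with ordinary polynomial regression (via scikit-learn) standing in for EPR, which additionally searches exponent structures with an evolutionary algorithm. The "numerical model", parameter ranges, and polynomial degree below are invented placeholders, not the thesis's calibration data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(5)

# Stand-in "numerical model" of pressure head h as a function of time t and
# two van Genuchten-like soil parameters (alpha, n); purely illustrative.
def simulate_head(t, alpha, n):
    return -1.0 / alpha * (t ** (1.0 / n)) / (1.0 + t)

t = rng.uniform(0.1, 10.0, 2000)
alpha = rng.uniform(0.5, 3.0, 2000)
n_par = rng.uniform(1.2, 2.5, 2000)
h = simulate_head(t, alpha, n_par) + rng.normal(scale=0.01, size=2000)

# Fit a cheap polynomial surrogate to the expensive simulator's outputs.
X = np.column_stack([t, alpha, n_par])
surrogate = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
surrogate.fit(X, h)
print("surrogate R^2:", round(surrogate.score(X, h), 3))
```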
50

Colorimetric and spectral analysis of rock art by means of the characterization of digital sensors

Molada Tebar, Adolfo 01 February 2021 (has links)
Thesis by compendium / [EN] Cultural heritage documentation and preservation is an arduous and delicate task in which color plays a fundamental role. The correct determination of color provides vital information on a descriptive, technical and quantitative level. Classical color documentation methods in archaeology were usually restricted to strictly subjective procedures. However, this methodology has practical and technical limitations, affecting the results obtained in the determination of color. Nowadays, it is frequent to support classical methods with geomatics techniques, such as photogrammetry or laser scanning, together with digital image processing. Although digital images allow color to be captured quickly, easily, and in a non-invasive way, the RGB data provided by the camera do not themselves have a rigorous colorimetric sense. Therefore, a rigorous transformation process is required to obtain reliable color data from digital images. This thesis proposes a novel technical solution, in which the integration of spectrophotometric and colorimetric analysis is intended as a complement to photogrammetric techniques that allow an improvement in color identification and representation of pigments with maximum reliability in 3D surveys, models and reconstructions. The proposed methodology is based on the colorimetric characterization of digital sensors, which is of novel application in cave paintings. The characterization aims to obtain the transformation equations between the device-dependent color data recorded by the camera and independent, physically-based color spaces, such as those established by the Commission Internationale de l'Éclairage (CIE). The rigorous processing of color and spectral data requires software packages with specific colorimetric functionalities. Although there are different commercial software options, they do not integrate digital image processing and colorimetric computations together; more importantly, they do not allow the camera characterization to be carried out. A key aspect of this thesis is therefore our in-house pyColourimetry software, which was developed and tested taking into account the recommendations published by the CIE. pyColourimetry is open-source and free of commercial ties; it allows the treatment of colorimetric and spectral data and digital image processing, and gives the user full control of the characterization process and the management of the obtained data.
This study also presents a further analysis of the main factors affecting the characterization, such as the camera built-in sensor, the camera parameters, the illuminant, the regression model, and the data set used for model training. For computing the transformation equations, the literature recommends the use of polynomial equations as a regression model, so polynomial models are considered as a starting point in this thesis. Additionally, a regression model based on Gaussian processes has been applied and compared with the results obtained by means of polynomials. Also, a new working scheme is reported which allows the automatic selection of color samples, adapted to the chromatic range of the scene. This scheme, called P-ASK, is based on the K-means classification algorithm. The results achieved in this thesis show that the proposed framework for camera characterization is highly applicable in documentation and conservation tasks in cultural heritage in general, and particularly in rock art painting. It is a low-cost and non-invasive methodology that allows for the colorimetric recording of complete image scenes. Once characterized, a conventional digital camera can be used for rigorous color determination, simulating a colorimeter. Thus, it is possible to work in a physical color space, independent of the device used, and comparable with data obtained from other cameras that are also characterized. / Thanks to the Universitat Politècnica de València for the FPI scholarship / Molada Tebar, A. (2020). Colorimetric and spectral analysis of rock art by means of the characterization of digital sensors [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/160386 / Compendio
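A hedged sketch of the polynomial characterization step described above: fit an RGB-to-XYZ transformation by least squares on chart patches and apply it to new camera pixels. The patch values, polynomial terms, and matrix below are synthetic placeholders, not pyColourimetry code or real chart measurements.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical training chart: camera RGB values and spectrophotometer XYZ
# measurements for 24 patches (numbers are synthetic, not real chart data).
rgb = rng.uniform(0.05, 0.95, size=(24, 3))
true_M = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
xyz = rgb @ true_M.T + 0.01 * rng.normal(size=(24, 3))

# Second-order polynomial characterization: expand RGB into polynomial terms
# and solve the RGB -> XYZ transformation by least squares.
def poly_terms(c):
    r, g, b = c.T
    return np.column_stack([np.ones_like(r), r, g, b,
                            r * g, r * b, g * b, r**2, g**2, b**2])

A = poly_terms(rgb)
coeffs, *_ = np.linalg.lstsq(A, xyz, rcond=None)   # shape (10, 3)

# Apply to a new camera pixel (device RGB) to estimate device-independent XYZ.
pixel = np.array([[0.30, 0.55, 0.20]])
print(poly_terms(pixel) @ coeffs)
```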
