1 |
Tillräckligt kvalificerad? : Ett intersektionellt perspektiv på arbetsgivares kvalifikationskrav i kunskapssamhället [Sufficiently qualified? An intersectional perspective on employers' qualification requirements in the knowledge society]
Hallqvist, Linn, January 2016
This thesis aims to highlight the problems with the statutory employment protection available to workers when an employer imposes new qualification requirements in connection with a reorganization. A further purpose is to examine, from an intersectional perspective, the societal implications of employers' new skill requirements in the knowledge society. The method used to fulfill the purpose of the essay is legal dogmatics, applied in order to determine the current state of the law regarding new qualification requirements in business reorganizations. In addition, a sociological analysis with an intersectional power perspective has been applied to study the social implications that employers' new qualification requirements may have. The conclusions that emerged through the essay indicate that today's law primarily protects workers with formal qualifications, such as a university education or vocational training. Informal qualifications, such as experience and length of employment, are not valued as highly. It is further concluded that the strongest protection a worker can have in today's labor market is to take an active part in acquiring the skills and knowledge that the current job appears to require. From an intersectional perspective, the societal impact of new formal proficiency requirements may be that they shape new classes, as those who lack the required qualifications tend to be marginalized from the labor market. Hardest hit appear to be workers who established themselves in the labor market at a time when traditional production jobs and other less skilled occupations did not require formal training. Employers' new qualification requirements may thus have negative effects on many older workers, but also on other workers who lack formal education and on workers of different ethnic backgrounds. Changed qualification requirements may thereby serve as a justification for structural discrimination: partly because the requirements in themselves mean that some people cannot meet them, and partly because today's legislation is formally fair and neutral, which means it does not take into account substantive injustice or people's differing abilities to adapt to the new labor market's qualification requirements.
|
2 |
Concept possession and incorrect understanding
Nordby, Halvor, January 2000
No description available.
|
3 |
Linear Feature Extraction with Emphasis on Face Recognition
Mahanta, Mohammad Shahin, 15 February 2010
Feature extraction is an important step in the classification of high-dimensional data such as face images. Linear feature extractors are especially prevalent due to their computational efficiency and their preservation of Gaussianity.
This research proposes a simple and fast linear feature extractor that approximates the sufficient statistic for Gaussian distributions. The method preserves the discriminatory information in both the first and second moments of the data and yields linear discriminant analysis as a special case.
Additionally, an accurate upper bound on the error probability of a plug-in classifier can be used to approximate the number of features that minimizes the error probability. Tighter error bounds are therefore derived in this work, based on the Bayes error or on the classification error under the trained distributions. These bounds can also be used as performance guarantees and to determine the number of training samples required to approach the performance of the Bayes classifier.
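As a rough illustration of the kind of linear extractor discussed above, the following sketch computes classical LDA directions from class means and scatter matrices; the abstract names LDA only as a special case of the proposed method, so this is not the thesis's extractor, and all names and toy data are our own.

```python
import numpy as np

def lda_projection(X, y, n_components):
    # Classical Fisher/LDA projection for Gaussian classes with a shared
    # covariance; a generic sketch, not the extractor proposed in the thesis.
    classes = np.unique(y)
    grand_mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - grand_mean).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    # Directions solving the generalized eigenproblem Sb v = lambda Sw v.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:n_components]].real

# Toy usage: two Gaussian classes in 10 dimensions reduced to 1 feature.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 10)), rng.normal(1, 1, (50, 10))])
y = np.array([0] * 50 + [1] * 50)
W = lda_projection(X, y, 1)
Z = X @ W  # reduced features for a downstream plug-in classifier
```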
|
4 |
Electronic structure and optical properties of ZnO : bulk and surface
Yan, Caihua, 23 February 1994
Graduation date: 1994
|
5 |
Spinoza's Version of the PSR
Schaeffer, Erich, 31 March 2014
Michael Della Rocca has provided an influential interpretation of Spinoza relying heavily on the principle of sufficient reason (PSR). In order to challenge this interpretation, I identify three assumptions Della Rocca makes about the PSR and demonstrate that it is not clear Spinoza shares them. First, Della Rocca contends that the PSR is unlimited in scope. I show that the scope of Spinoza's version of the PSR is ambiguous: while it is clear that substances and modes are included, it is unclear just how widely the scope extends. Second, Della Rocca argues that the PSR demands that there be no illegitimate bifurcations. I argue that Della Rocca's account of illegitimate bifurcations is too strong, and I show that Spinoza offers a distinction in explanatory types that should be considered illegitimate and inexplicable according to Della Rocca's definition of illegitimate bifurcations. Third, Della Rocca argues that explanations which satisfy the demands of the PSR must be in terms of the concepts involved. I show that Spinoza does not use conceptual explanations; instead, in almost all cases, the explanations Spinoza relies on to satisfy the demands of the PSR are in terms of a thing's cause.
Thesis (Master, Philosophy), Queen's University, 2014.
|
6 |
DEEP LEARNING FOR STATISTICAL DATA ANALYSIS: DIMENSION REDUCTION AND CAUSAL STRUCTURE INFERENCE
Siqi Liang (11799653), 19 December 2021
During the past decades, deep learning has been proven to be an important tool for statistical data analysis. Motivated by the promise of deep learning in tackling the curse of dimensionality, we propose three innovative methods that apply deep learning techniques to high-dimensional data analysis in this dissertation.

Firstly, we propose a nonlinear sufficient dimension reduction (SDR) method, the so-called split-and-merge deep neural networks (SM-DNN), which employs the split-and-merge technique on deep neural networks to obtain a nonlinear sufficient dimension reduction of the input data and then learns a deep neural network on the dimension-reduced data. We show that the DNN-based dimension reduction is sufficient for data drawn from the exponential family, retaining all the information on the response contained in the explanatory data. Our numerical experiments indicate that the SM-DNN method can lead to significant improvement in phenotype prediction for a variety of real data examples. In particular, with only rare variants, we achieved a remarkable prediction accuracy of over 74% for the Early-Onset Myocardial Infarction (EOMI) exome sequence data.

Secondly, we propose another nonlinear SDR method based on a new type of stochastic neural network under a rigorous probabilistic framework and show that it can be used for sufficient dimension reduction of high-dimensional data. The proposed stochastic neural network can be trained using an adaptive stochastic gradient Markov chain Monte Carlo algorithm. Through extensive experiments on real-world classification and regression problems, we show that the proposed method compares favorably with existing state-of-the-art sufficient dimension reduction methods and is computationally more efficient for large-scale data.

Finally, we propose a structure learning method for learning the causal structure hidden in high-dimensional data, which consists of two stages: we first conduct Bayesian sparse learning for variable screening to build a primary graph, and then we perform conditional independence tests to refine the primary graph. Extensive numerical experiments and quantitative tests confirm the generality, effectiveness and power of the proposed methods.
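The SM-DNN architecture and the stochastic neural network are not reproduced here. As a generic, heavily simplified illustration of DNN-based dimension reduction, the sketch below trains a network with a narrow bottleneck layer and treats the bottleneck activations as the reduced features; the architecture, names, and toy data are our own assumptions, not the dissertation's methods.

```python
import torch
import torch.nn as nn

class BottleneckNet(nn.Module):
    """Generic DNN with a low-dimensional bottleneck; the bottleneck
    output serves as a nonlinear dimension reduction of the input."""
    def __init__(self, in_dim, bottleneck_dim, out_dim):
        super().__init__()
        self.reduce = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, bottleneck_dim),  # reduced representation
        )
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(bottleneck_dim, out_dim))

    def forward(self, x):
        z = self.reduce(x)  # low-dimensional features
        return self.head(z), z

# Toy regression: 100-dimensional input reduced to 3 features.
torch.manual_seed(0)
X = torch.randn(256, 100)
y = X[:, :2].pow(2).sum(dim=1, keepdim=True)  # truly depends on 2 coordinates
model = BottleneckNet(100, 3, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    pred, _ = model(X)
    loss = nn.functional.mse_loss(pred, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
with torch.no_grad():
    _, Z = model(X)  # Z: 256 x 3 learned low-dimensional features
```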
|
7 |
The Problems of Generating Sufficient Revenue in a Rapidly Growing Small Community
Memmott, Jeffery L., 01 May 1979
The purpose of this study is to analyze the impact of rapid growth on the various revenue and expenditure categories at the local level. A model is presented to project local revenues and expenditures based upon projected per capita income, population, and average daily attendance. The model builds on previous models from a review of the literature, with modifications and additions by the author. Revenues and expenditures are estimated simultaneously, reflecting the budgetary process at the local level.
Coefficients in the model are estimated using sample data from Duchesne and Uintah counties. These coefficients are then used to project the effects of oil shale development in the region.
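As a stylized, single-equation illustration of this kind of projection model (the thesis estimates revenues and expenditures simultaneously, which this sketch does not attempt), the following fits a revenue equation to invented county-level data and projects it under an assumed growth scenario; every number and name here is hypothetical.

```python
import numpy as np

# Hypothetical historical data: per capita income, population,
# average daily attendance (ADA), and observed local revenue.
income  = np.array([5.1, 5.4, 5.9, 6.3, 6.8])      # $1000s per capita
pop     = np.array([10.2, 10.9, 11.8, 13.0, 14.5])  # thousands of residents
ada     = np.array([2.1, 2.3, 2.5, 2.8, 3.2])      # thousands of students
revenue = np.array([1.9, 2.1, 2.4, 2.8, 3.3])      # $ millions

# Fit revenue = b0 + b1*income + b2*pop + b3*ada by least squares.
X = np.column_stack([np.ones_like(income), income, pop, ada])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)

# Project revenue for an assumed boom-driven future year.
future = np.array([1.0, 7.5, 17.0, 3.9])  # intercept, income, pop, ADA
print("projected revenue ($M):", future @ coef)
```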
Different policies and changes in current policies are presented to lessen or alleviate the adverse impacts of rapid growth on local cities and school districts.
|
8 |
Necessary and sufficient conditions for deadlock in a manufacturing system
Deering, Paul E., January 2000
No description available.
|
9 |
INFORMATIONAL INDEX AND ITS APPLICATIONS IN HIGH DIMENSIONAL DATA
Yuan, Qingcong, 01 January 2017
We introduce a new class of measures for testing independence between two random vectors, based on the expected difference between conditional and marginal characteristic functions. By choosing a particular weight function in the class, we propose a new index for measuring independence and study its properties. Two empirical versions are developed; their properties, asymptotics, connections with existing measures, and applications are discussed. Implementation and Monte Carlo results are also presented.
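The thesis's particular index is not reproduced here. As a well-known relative in the same characteristic-function family, the sketch below computes the (biased) sample distance covariance, which aggregates the squared difference between the joint characteristic function and the product of the marginals under a specific weight; treat it as a stand-in for, not a statement of, the proposed index.

```python
import numpy as np

def dist_cov_sq(X, Y):
    """Biased sample distance covariance (squared) between samples X and Y.
    A standard characteristic-function-based dependence measure, shown as
    a relative of the index described in the abstract."""
    X = np.atleast_2d(X.T).T  # ensure shape (n, p)
    Y = np.atleast_2d(Y.T).T
    def centered_dist(Z):
        # Pairwise Euclidean distances, double-centered.
        D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
        return D - D.mean(axis=0) - D.mean(axis=1)[:, None] + D.mean()
    A, B = centered_dist(X), centered_dist(Y)
    return (A * B).mean()

rng = np.random.default_rng(1)
x = rng.normal(size=(200, 2))
y_dep = x[:, :1] ** 2 + 0.1 * rng.normal(size=(200, 1))  # dependent on x
y_ind = rng.normal(size=(200, 1))                        # independent of x
print(dist_cov_sq(x, y_dep), dist_cov_sq(x, y_ind))  # first value is larger
```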
We propose a two-stage sufficient variable selection method based on the new index to deal with large-p, small-n data (a generic sketch of the screening stage follows below). The method does not require model specification and focuses especially on categorical responses. Our approach consistently improves on typical screening approaches, which use only marginal relations. Numerical studies are provided to demonstrate the advantages of the method.
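The two-stage procedure itself is not shown here; this sketch illustrates only the generic shape of a marginal screening stage for a categorical response, with a one-way ANOVA F-statistic standing in for the informational index (any dependence measure, including the one sketched above, could be substituted). Names and data are invented.

```python
import numpy as np

def f_score(xj, y):
    """One-way ANOVA F-statistic of one predictor against a categorical
    response; a simple stand-in for the informational index."""
    groups = [xj[y == c] for c in np.unique(y)]
    grand = xj.mean()
    k, n = len(groups), len(xj)
    between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups) / (k - 1)
    within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n - k)
    return between / within

def screen(X, y, keep):
    """Rank predictors by marginal dependence with y; keep the top `keep`."""
    scores = np.array([f_score(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:keep]

rng = np.random.default_rng(2)
n, p = 60, 500                  # large p, small n
y = rng.integers(0, 2, size=n)  # binary categorical response
X = rng.normal(size=(n, p))
X[:, 0] += 2 * y                # only predictor 0 is informative
print(screen(X, y, keep=10))    # predictor 0 should rank first
```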
We introduce a novel approach to sufficient dimension reduction problems using the new measure. The proposed method requires very mild conditions on the predictors, estimates the central subspace effectively, and is especially useful when the response is categorical. It keeps the model-free advantage without estimating the link function. Under regularity conditions, root-n consistency and asymptotic normality are established. Simulation results show the proposed method to be very competitive with, and robust compared to, existing dimension reduction methods.
|