71

Vers des modes de scrutin moins manipulables / Toward less manipulable voting systems

Durand, François 24 September 2015 (has links)
We study the coalitional manipulation of voting systems: can a subset of voters, by voting strategically, elect a candidate they all prefer to the one who would have won had all voters voted sincerely? From a theoretical point of view, we develop a framework that allows us to study all kinds of voting systems: ballots can be linear orders of preference over the candidates (ordinal systems), grades or approval values (cardinal systems), or even more general objects. We prove that for almost all voting systems from the literature and real life, manipulability can be strictly reduced by adding a preliminary test that elects the Condorcet winner if one exists; for the remaining voting systems, we define a generalized condorcification that yields similar results. We then define the notion of a decomposable culture, an assumption of which probabilistic independence of the voters is a special case, and prove that under this assumption, for any voting system, there exists an ordinal voting system that shares certain properties with the original and is at most as manipulable. As a consequence of these theoretical results, the search for a voting system of minimal manipulability (within a class of reasonable systems) can be restricted to those that are ordinal and meet the Condorcet criterion. To let anyone examine these phenomena in practice, we present SVVAMP, a Python package of our own design for studying voting systems and their manipulability. We use it to compare the coalitional manipulability of various voting systems under several types of cultures, i.e. probabilistic models that generate populations of voters with random preferences, and we then complement the analysis with elections from real experiments. Lastly, we determine the voting systems of minimal manipulability for very small numbers of voters and candidates and compare them with classical voting systems. Generally speaking, we establish that the Borda count, Range voting and Approval voting are especially vulnerable to manipulation. In contrast, we find excellent resistance to manipulation for the voting system called IRV (also known as STV) and its variant Condorcet-IRV.
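The preliminary Condorcet test ("condorcification") described above is easy to state in code. Below is a minimal illustrative sketch, not SVVAMP's actual API: a generic wrapper that elects the Condorcet winner when one exists and otherwise defers to any base rule (a toy plurality rule here); the example profile is made up.

```python
import numpy as np

def condorcet_winner(preferences):
    """Return the index of the Condorcet winner, or None if there is none.

    `preferences` is an (n_voters, n_candidates) array where
    preferences[v, c] is the rank voter v gives candidate c (lower = preferred).
    """
    n_voters, n_cand = preferences.shape
    for c in range(n_cand):
        beats_all = all(
            np.sum(preferences[:, c] < preferences[:, d]) > n_voters / 2
            for d in range(n_cand) if d != c
        )
        if beats_all:
            return c
    return None

def condorcify(base_rule):
    """Elect the Condorcet winner if one exists, else fall back to the base
    rule. Per the abstract above, this strictly reduces manipulability for
    almost all classical voting systems."""
    def rule(preferences):
        w = condorcet_winner(preferences)
        return w if w is not None else base_rule(preferences)
    return rule

def plurality(preferences):
    """Toy base rule: the candidate ranked first by the most voters wins."""
    firsts = np.argmin(preferences, axis=1)
    return np.bincount(firsts, minlength=preferences.shape[1]).argmax()

# Five voters, three candidates; candidate 0 is the Condorcet winner here.
profile = np.array([[0, 1, 2], [0, 1, 2], [2, 0, 1], [1, 2, 0], [1, 0, 2]])
print(condorcify(plurality)(profile))  # -> 0
```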
72

Understanding Noise and Structure behind Metric Spaces

Wang, Dingkang 20 October 2021 (has links)
No description available.
73

New Technique for Imputing Missing Item Responses for an Ordinal Variable: Using Tennessee Youth Risk Behavior Survey as an Example.

Ahmed, Andaleeb Abrar 15 December 2007 (has links) (PDF)
Surveys ordinarily ask questions on an ordinal scale and often result in missing data. We suggest a regression-based technique for imputing missing ordinal data. A multilevel cumulative logit model was used, with the assumption that observed responses on certain key variables can serve as covariates for predicting the missing item responses of an ordinal variable. Individual predicted probabilities at each response level were obtained, and the average predicted probabilities for each response level were then used to randomly impute the missing responses using a uniform distribution. Finally, a likelihood-ratio chi-square statistic was used to compare the imputed and observed distributions. Two other multiple-imputation algorithms were run for comparison, and the performance of our technique was comparable to both established algorithms. Being simpler, our method involves no complex algorithms and, with further research, can potentially serve as an imputation technique for missing ordinal variables.
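The imputation step itself is simple to express. Below is a minimal sketch, with made-up average predicted probabilities, of drawing imputed responses from level probabilities by inverting the cumulative distribution with uniform draws, as the technique describes.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical average predicted probabilities for a 4-level ordinal item,
# as would come from a multilevel cumulative logit model (illustrative values).
avg_probs = np.array([0.10, 0.25, 0.40, 0.25])

def impute_ordinal(n_missing, probs, rng):
    """Impute missing ordinal responses by inverting the CDF with uniform draws."""
    cdf = np.cumsum(probs)
    u = rng.uniform(size=n_missing)      # one uniform draw per missing response
    return np.searchsorted(cdf, u) + 1   # response levels coded 1..K

imputed = impute_ordinal(50, avg_probs, rng)
print(np.bincount(imputed)[1:])          # distribution of the imputed levels
```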
74

Factor Retention Strategies with Ordinal Variables in Exploratory Factor Analysis: A Simulation

Fagan, Marcus A. 08 1900 (has links)
Previous research has assessed parallel analysis and minimum average partial individually for factor retention in exploratory factor analysis with ordinal variables. The current study is a comprehensive simulation manipulating eight conditions (type of correlation matrix, sample size, number of variables per factor, number of factors, factor correlation, skewness, factor loadings, and number of response categories) and three retention methods (minimum average partial, parallel analysis, and empirical Kaiser criterion), resulting in a 2 × 2 × 2 × 2 × 2 × 3 × 3 × 4 × 5 design totaling 5,760 condition combinations, each tested over 1,000 replications. Results show that every retention method performed worse when utilizing polychoric correlation matrices. Moreover, minimum average partial is quite sensitive to factor loadings and overall performs poorly compared to parallel analysis and the empirical Kaiser criterion. The empirical Kaiser criterion performed almost identically to parallel analysis on normally distributed data, but much worse under highly skewed conditions. Based on these findings, it is recommended to use parallel analysis with principal components analysis and a Pearson correlation matrix to determine the number of factors to retain when dealing with ordinal data.
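As an illustration of the recommended procedure, here is a minimal sketch of Horn's parallel analysis using PCA eigenvalues of a Pearson correlation matrix; the retention threshold (95th percentile) and the toy data generator are assumptions, not the study's design.

```python
import numpy as np

def parallel_analysis(data, n_iter=100, percentile=95, seed=0):
    """Horn's parallel analysis with PCA eigenvalues of a Pearson correlation
    matrix: retain components whose observed eigenvalue exceeds the chosen
    percentile of eigenvalues from random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand_eigs = np.empty((n_iter, p))
    for i in range(n_iter):
        noise = rng.standard_normal((n, p))
        rand_eigs[i] = np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))[::-1]
    threshold = np.percentile(rand_eigs, percentile, axis=0)
    return int(np.sum(obs_eigs > threshold))

# Toy example: two factors, three ordinal-looking indicators each.
rng = np.random.default_rng(1)
f = rng.standard_normal((500, 2))
loadings = np.kron(np.eye(2), np.ones((3, 1)))     # 6 variables, 3 per factor
x = f @ loadings.T + 0.6 * rng.standard_normal((500, 6))
x = np.digitize(x, bins=[-1.0, 0.0, 1.0])          # discretize into 4 categories
print(parallel_analysis(x))                         # expected: 2
```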
75

The Nature of Cognitive Chunking Processes in Rat Serial Pattern Learning

Doyle, Karen Elizabeth 04 December 2013 (has links)
No description available.
76

Multifactor Models of Ordinal Data: Comparing Four Factor Analytical Methods

Sanders, Margaret 02 June 2014 (has links)
No description available.
77

Essays on Mechanism Design and Positive Political Theory: Voting Rules and Behavior

Kim, Semin 06 June 2014 (has links)
No description available.
78

Confirmatory factor analysis with ordinal variables: A comparison of different estimation methods

Jing, Jiazhen January 2024 (has links)
In social science research, data are often collected using questionnaires with Likert scales, resulting in ordinal data. Confirmatory factor analysis (CFA) is the most common type of analysis; it assumes continuous data and multivariate normality, assumptions that ordinal data violate. Simulation studies have shown that Robust Maximum Likelihood (RML) works well when the normality assumption is violated, Diagonally Weighted Least Squares (DWLS) estimation is especially recommended for categorical data, and Bayesian estimation (BE) methods are also potentially effective for ordinal data. The current study employs a CFA model and Monte Carlo simulation to evaluate the performance of the three estimation methods on ordinal data under various conditions: levels of asymmetry, sample sizes, and numbers of categories. The results indicate that, for ordinal data, DWLS outperforms RML and BE; RML is effective when the number of categories is sufficiently large; Bayesian methods show no significant advantage across different factor-loading values; and category distributions had minimal impact on the estimation results.
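To ground the design, here is a minimal sketch, under assumed parameter values, of how such Monte Carlo studies typically generate ordinal indicators: continuous responses from a one-factor CFA model are discretized at thresholds, and shifting the thresholds induces asymmetry.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ordinal_cfa(n=300, loadings=(0.7, 0.7, 0.7, 0.7), thresholds=(-0.5, 0.5)):
    """Generate ordinal indicators from a one-factor CFA model.

    Continuous responses y = loading * factor + error are discretized at the
    given thresholds; skewed thresholds would yield asymmetric categories.
    All parameter values here are illustrative, not those of the thesis.
    """
    lam = np.asarray(loadings)
    eta = rng.standard_normal(n)                       # latent factor scores
    eps = rng.standard_normal((n, lam.size)) * np.sqrt(1 - lam**2)
    y_cont = np.outer(eta, lam) + eps                  # continuous responses
    return np.digitize(y_cont, bins=thresholds)        # categories 0..K-1

data = simulate_ordinal_cfa()
print(np.unique(data, return_counts=True))             # category frequencies
```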
79

Learning Algorithms Using Chance-Constrained Programs

Jagarlapudi, Saketha Nath 07 1900 (has links)
This thesis explores Chance-Constrained Programming (CCP) in the context of learning. It is shown that chance-constrained approaches lead to improved algorithms for three important learning problems: classification with specified error rates, large-dataset classification, and Ordinal Regression (OR). Using moments of the training data, the CCPs are posed as Second Order Cone Programs (SOCPs), and novel iterative algorithms for solving the resulting SOCPs are derived. Borrowing ideas from robust optimization theory, the proposed formulations are made robust to moment-estimation errors. First, a maximum-margin classifier with specified false-positive and false-negative rates is derived; the key idea is to employ a chance constraint for each class implying that the actual misclassification rates do not exceed those specified. The formulation is applied to the case of biased classification. The problems of large-dataset classification and ordinal regression are then addressed by deriving formulations that employ chance constraints for clusters in the training data rather than for each data point. Since the number of clusters can be substantially smaller than the number of data points, the resulting formulations involve far fewer inequalities and hence scale well to large datasets. The scalable classification and OR formulations are extended to feature spaces, and the kernelized duals turn out to be instances of SOCPs with a single cone constraint; exploiting this special structure, fast iterative solvers that outperform generic SOCP solvers are proposed. Compared to state-of-the-art learners, the proposed algorithms achieve speed-ups as high as 10,000 times when the specialized SOCP solvers are employed. Since the formulations involve second-order moments of the data, they are susceptible to moment-estimation errors; a generic way of making them robust to such errors is illustrated. Two novel confidence sets for the moments are derived, and it is shown that with either confidence set the robust formulations still yield SOCPs.
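To illustrate the key reduction, here is a minimal sketch (not the thesis's exact formulation) of a moment-based maximum-margin classifier: each class's chance constraint is converted to a second-order cone constraint via the multivariate Chebyshev (Marshall-Olkin) bound, and the SOCP is solved with cvxpy; the class moments and target rates below are made up.

```python
import cvxpy as cp
import numpy as np

# Illustrative per-class moments (means mu, covariances S) and target rates eta.
mu1, mu2 = np.array([2.0, 2.0]), np.array([-2.0, -2.0])
S1 = S2 = np.eye(2)
eta1 = eta2 = 0.8
k1 = np.sqrt(eta1 / (1 - eta1))   # Chebyshev multiplier: P(correct) >= eta
k2 = np.sqrt(eta2 / (1 - eta2))

w = cp.Variable(2)
b = cp.Variable()

# Chance constraints P(w'x + b >= 1) >= eta (and the mirror for class 2)
# become second-order cone constraints via the Marshall-Olkin bound:
# w'mu + b >= 1 + k * ||S^{1/2} w||.
constraints = [
    w @ mu1 + b >= 1 + k1 * cp.norm(np.linalg.cholesky(S1).T @ w, 2),
    -(w @ mu2 + b) >= 1 + k2 * cp.norm(np.linalg.cholesky(S2).T @ w, 2),
]
prob = cp.Problem(cp.Minimize(cp.norm(w, 2)), constraints)  # max-margin objective
prob.solve()
print(w.value, b.value)
```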
80

Proposta metodológica para identificar fatores contribuintes de acidentes viários por meio de geotecnologias / Methodological proposal to identify contributing factors of road accidents through geotechnologies

Batistão, Mariana Dias Chaves [UNESP] 02 February 2018 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / This research studies the contributing factors of road accidents in order to provide evidence for analysing how those factors behave, specifically on critical road sections. The aim is to identify the relationship between the factors and the degree of severity of an accident (material damage only, no fatalities, and fatalities) and the impact of each factor class on the occurrence of an accident, grounding a geospatial analysis in statistical and cartographic techniques and helping to improve the quality of road-safety information in a country whose current situation is critical. The methodology consists of three main steps: (I) identification and determination of critical section segments, (II) mapping of the "road" contributing factors of the accidents, and (III) investigation and study of the contributing factors. Four stretches of highway in the west of São Paulo state were chosen as the study area. In Step I, a spatial interpolation method was proposed for selecting critical section segments under the premise that accidents are geographically dependent; in total, eight critical section segments were identified in the study area. Step II focused on mapping the contributing factors of these segments; this step brought the technological character of the research, through the integration of geotechnologies, and the contribution of the Cartographic Sciences to road-safety studies, by generating information from the mapped locations of contributing factors. Of the four factor classes (human, environment, vehicle, and road), road characteristics were chosen for mapping, since no data for this class was found in either the accident database or the police reports. The relationship with the other three factor classes was treated in Step III, whose results yielded a ranking of the six road contributing factors most frequent in the critical section segments. In addition, three ordinal logistic regression models were built to investigate the impact of each of the four factor classes on the degree of accident severity (three degrees), with severity treated as the dependent variable. Four independent variables (contributing factors) were found significant and chosen for the models: drug consumption (human factor class), tire condition (vehicle factor class), vegetation (road factor class), and signage (road factor class). Finally, the models were analysed through odds ratios to complement the information and synthesize the results as contributions of the research.
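To make the modelling step concrete, here is a minimal sketch of an ordinal logistic regression with odds ratios, in the spirit of Step III; it uses statsmodels' OrderedModel on simulated data, so the variable names, coefficients and cut points are illustrative, not the thesis's.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 500

# Hypothetical binary predictors standing in for the four significant factors.
X = pd.DataFrame({
    "drug_consumption": rng.integers(0, 2, n),
    "tire_condition":   rng.integers(0, 2, n),
    "vegetation":       rng.integers(0, 2, n),
    "signage":          rng.integers(0, 2, n),
})

# Simulated ordinal severity (0 = damage only, 1 = no fatalities, 2 = fatalities).
latent = X @ np.array([1.2, 0.8, 0.5, 0.6]) + rng.logistic(size=n)
severity = pd.cut(latent, bins=[-np.inf, 1.0, 2.5, np.inf], labels=False)

model = OrderedModel(severity, X, distr="logit")
result = model.fit(method="bfgs", disp=False)

# Odds ratios for the factor coefficients (threshold parameters excluded).
print(np.exp(result.params[: X.shape[1]]))
```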
