81 |
The end of ‘Welcome Culture’? How the Cologne assaults reframed Germany’s immigration discourse
Wigger, Iris; Yendell, Alexander; Herbert, David, 25 April 2023 (has links)
Controversy over immigration and integration intensified in German news media following Chancellor Merkel’s response to the refugee crisis of 2015. Using multidimensional scaling of word associations in reporting across four national news publications in conjunction with key event, moral panic and framing theories, we argue that reporting of events at Cologne station on New Year’s Eve 2015–2016 reframed debate away from terror-related concerns and towards anxieties about the sexual predation of dark-skinned males, thus racializing immigration coverage and resonating with a long history of Orientalist stereotyping. We further identify an increased clustering of ‘race’, gender, religion, crowd-threat and national belonging terms in reporting on sexual harassment incidents following Cologne, suggesting an increased criminalization of immigration discourse. The article provides new empirically based insights into the dynamics of news media reporting on migrants in Germany and contributes to scholarly debates on media framing of migrants, sexuality and crime.
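A minimal sketch of the kind of word-association scaling described here, assuming a symmetric term co-occurrence matrix as input; the vocabulary, counts, and pipeline below are invented for illustration and are not the authors' data or method:

```python
# Sketch: embed terms by their co-occurrence profile, as in the word-association
# analysis described above. Illustrative only; not the authors' actual pipeline.
import numpy as np
from sklearn.manifold import MDS

terms = ["refugee", "terror", "crime", "women", "crowd"]  # hypothetical vocabulary
rng = np.random.default_rng(0)
cooc = rng.integers(1, 50, size=(5, 5))                   # stand-in co-occurrence counts
cooc = (cooc + cooc.T) // 2                               # symmetrize

# Convert association strength to dissimilarity: frequently co-occurring
# terms should end up close together in the configuration.
dissim = 1.0 - cooc / cooc.max()
np.fill_diagonal(dissim, 0.0)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)
for term, (x, y) in zip(terms, coords):
    print(f"{term:>8s}: ({x:+.2f}, {y:+.2f})")
```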
|
82 |
Bycatch associated with a horseshoe crab (Limulus polyphemus) trawl survey: identifying species composition and distribution
Graham, Larissa Joy, 04 September 2007 (has links)
Horseshoe crabs (Limulus polyphemus) have been harvested along the east coast of the United States since the 1800s; however, a Fishery Management Plan (FMP) was only recently created for this species. To date, no studies have attempted to identify or quantify bycatch in the horseshoe crab trawl fishery. A horseshoe crab trawl survey was started in 2001 to collect data on the relative abundance, distribution, and population demographics of horseshoe crabs along the Atlantic coast of the United States. In the present study, species composition data were collected at sites sampled by the horseshoe crab trawl survey in 2005 and 2006. Seventy-six different taxa were identified as potential bycatch in the horseshoe crab trawl fishery. Non-metric multidimensional scaling (NMS) was used to cluster sites and identify the spatial distribution of taxa. Sites clustered strongly into distinct groups, suggesting that species composition changes spatially and seasonally. Species composition shifted between northern and southern sites, and location and bottom water temperature explained most of the variation in species composition. These results provide a list of species that are susceptible to this specific trawl gear and describe their distribution during fall months throughout the study area. Identifying these species and describing their distribution is a first step toward understanding the ecosystem-level effects of the horseshoe crab trawl fishery. / Master of Science
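The ordination step can be sketched as follows, assuming a sites-by-taxa abundance matrix and Bray-Curtis dissimilarities (a common choice for species-composition data; the study does not specify its distance measure). The counts below are synthetic:

```python
# Sketch of the NMS step described above: ordinate trawl sites by species
# composition. The data are invented; 76 taxa matches the study's count.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
counts = rng.poisson(5, size=(20, 76))        # 20 sites x 76 taxa (hypothetical)

# Bray-Curtis dissimilarity is a usual choice for species-abundance data.
dissim = squareform(pdist(counts, metric="braycurtis"))

# metric=False gives non-metric MDS (NMS): only the rank order of the
# dissimilarities is preserved in the 2-D ordination.
nms = MDS(n_components=2, metric=False, dissimilarity="precomputed",
          random_state=0)
sites = nms.fit_transform(dissim)
print("final stress:", nms.stress_)           # lower stress = better ordination
```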
|
83 |
Continuous Approximations of Discrete Phylogenetic Migration Models
Huss, Simon; Mosetti Björk, Theodor, January 2024 (has links)
Phylogenetics explores the evolutionary relationships among species, and one of the main approaches is to construct phylogenetic trees through inference-based methods. Beyond the evolutionary insights these trees provide, the underlying tree structure can also be used to study the geographical migration of species. These geographical models, reminiscent of models of DNA sequence evolution, have predominantly been discrete in nature. However, this poses a multitude of challenges, especially with high-dimensional state-spaces. Previous work has explored continuous diffusion models for geographical migration, but these did not aim to model non-local migration or large state-spaces. This paper presents and evaluates a scalable continuous phylogenetic migration model that aims to approximate conventional discrete migration models in the case of both local and non-local migration.
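The contrast between the two model families can be sketched as follows: along a branch of length t, a discrete model propagates state probabilities through the matrix exponential of a migration rate matrix, while a diffusion model uses a Gaussian transition density. The three-region geography and parameter values below are invented for illustration:

```python
# Sketch of the two model families compared above. Both the 3-state geography
# and the diffusion variance are assumptions made for illustration.
import numpy as np
from scipy.linalg import expm
from scipy.stats import norm

# Discrete model: 3 regions on a line, nearest-neighbour migration rates.
# Rows of the rate matrix Q sum to zero.
Q = np.array([[-1.0,  1.0,  0.0],
              [ 0.5, -1.0,  0.5],
              [ 0.0,  1.0, -1.0]])
t = 0.8                                   # branch length
P = expm(Q * t)                           # transition probabilities P(j | i, t)
print("P(end region | start in region 0):", P[0])

# Continuous analogue: Brownian motion on the line with variance sigma^2 * t.
sigma2 = 1.0
midpoints = np.array([0.0, 1.0, 2.0])     # region midpoints
density = norm.pdf(midpoints, loc=midpoints[0], scale=np.sqrt(sigma2 * t))
# Crude discretization of the diffusion density back onto the 3 regions:
print("diffusion mass at region midpoints:", density / density.sum())
```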
|
84 |
COPS: Cluster optimized proximity scaling
Rusch, Thomas; Mair, Patrick; Hornik, Kurt, January 2015 (has links) (PDF)
Proximity scaling (i.e., multidimensional scaling and related methods) is a versatile statistical method whose general idea is to reduce the multivariate complexity in a data set by employing suitable proximities between the data points and finding low-dimensional configurations where the fitted distances optimally approximate these proximities. The ultimate goal, however, is often not only to find the optimal configuration but to infer statements about the similarity of objects in the high-dimensional space based on the similarity in the configuration. Since these two goals are somewhat at odds, it can happen that the resulting optimal configuration makes inferring similarities rather difficult. In that case the solution lacks "clusteredness" in the configuration (which we call "c-clusteredness"). We present a version of proximity scaling, coined cluster optimized proximity scaling (COPS), which solves the conundrum by introducing a more clustered appearance into the configuration while adhering to the general idea of multidimensional scaling. In COPS, an arbitrary MDS loss function is parametrized by monotonic transformations and combined with an index that quantifies the c-clusteredness of the solution. This index, the OPTICS cordillera, has intuitively appealing properties with respect to measuring c-clusteredness. This combination of MDS loss and index is called "cluster optimized loss" (coploss) and is minimized to push any configuration towards a more clustered appearance. The effect of the method is illustrated with various examples: assessing similarities of countries based on the history of banking crises in the last 200 years, scaling Californian counties with respect to the projected effects of climate change and their social vulnerability, and preprocessing a data set of handwritten digits for subsequent classification by nonlinear dimension reduction. (authors' abstract) / Series: Discussion Paper Series / Center for Empirical Research Methods
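A rough sketch of the shape of this objective follows; the c-clusteredness index here is a simple stand-in (mean silhouette over a k-means labelling) rather than the OPTICS cordillera, and the trade-off weight lambda is illustrative:

```python
# Sketch of a cluster-optimized loss in the spirit of coploss: fit to the
# proximities (stress) traded off against a clusteredness term. The
# clusteredness index is a crude stand-in, NOT the OPTICS cordillera.
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def stress(config, target_condensed):
    """Normalized Kruskal-type stress between fitted and target distances."""
    d_fit = pdist(config)
    return np.sqrt(((d_fit - target_condensed) ** 2).sum()
                   / (target_condensed ** 2).sum())

def clusteredness(config, k=3):
    """Stand-in c-clusteredness index: silhouette of a k-means labelling."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(config)
    return silhouette_score(config, labels)

def coploss(config, target_condensed, lam=0.5):
    # Lower is better: good fit to the proximities AND a clustered appearance.
    return stress(config, target_condensed) - lam * clusteredness(config)

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 5))              # high-dimensional data
target = pdist(X)                         # proximities to approximate
config = rng.normal(size=(30, 2))         # a candidate 2-D configuration
print("coploss:", coploss(config, target))
```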
|
85 |
Mathematics for history's sake: a new approach to Ptolemy's Geography
Mintz, Daniel V., January 2011 (has links)
Almost two thousand years ago, Claudius Ptolemy created a guide to drawing maps of the world, identifying the names and coordinates of over 8,000 settlements and geographical features. Using the coordinates of those cities and landmarks which have been identified with modern locations, a series of best-fit transformations has been applied to several of Ptolemy’s regional maps, those of Britain, Spain, and Italy. The transformations relate Ptolemy’s coordinates to their modern equivalents by rotation and skewed scaling. These reflect the types of error that appear in Ptolemy’s data, namely those of distance and orientation. The mathematical techniques involved in this process are all modern. However, these techniques have been altered in order to deal with the historical difficulties of Ptolemy’s maps. To think of Ptolemy’s data as similar to that collected from a modern random sampling of a population and to apply unbiased statistical methods to it would be erroneous. Ptolemy’s data is biased, and the nature of that bias is informed by the history of the data. Using such methods as cluster analysis, Procrustes analysis, and multidimensional scaling, we aimed to assess numerically the accuracy of Ptolemy’s maps. We also investigated the nature of the errors in the data and whether or not these could be linked to historical developments in the areas mapped.
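The alignment step can be sketched with ordinary Procrustes analysis, which removes translation, rotation, and uniform scaling before measuring the residual misfit; note the thesis uses skewed scaling, which standard Procrustes does not model, so the coordinates and the fit below are purely illustrative:

```python
# Sketch of a Procrustes fit between ancient and modern coordinates.
# The four city coordinates are invented placeholders, not Ptolemy's data.
import numpy as np
from scipy.spatial import procrustes

ptolemy = np.array([[20.0, 54.0], [18.5, 53.2], [22.1, 55.0], [19.4, 52.1]])
modern  = np.array([[-0.1, 51.5], [-2.6, 51.4], [ 1.3, 52.6], [-1.9, 50.7]])

# procrustes() removes translation, uniform scaling and rotation, then
# reports the leftover (normalized) squared error between the two shapes.
mtx_modern, mtx_ptolemy, disparity = procrustes(modern, ptolemy)
print("Procrustes disparity:", disparity)   # 0 would mean a perfect match
```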
|
86 |
Emergence prostorových geometrií z kvantového entanglementu / Emergence of space geometries from quantum entanglement
Lukeš, Petr, January 2019 (has links)
Connecting the fields of Quantum Physics and General Relativity is one of the main interests of contemporary Theoretical Physics. This work attempts to find a solution to a simplified version of this problem. First, entropy is shown to be a good meeting point between the two different theories. Then some of entropy's less intuitive properties are shown, namely its dependence on area rather than volume. This relation is studied from both the relativistic and the quantum viewpoint. Afterwards there is a short description of a quantum model interpretable as a geometry based on the information between its subsystems. Lastly, results of computations within this model are presented.
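The entropy computation alluded to can be illustrated in a few lines: the von Neumann entropy of a subsystem of a random pure state, obtained from the reduced density matrix. This is a generic textbook calculation, not the specific model of the thesis:

```python
# Sketch: entanglement entropy of a subsystem of a random pure state.
import numpy as np

n_qubits, cut = 4, 2                       # 4 qubits; trace out the last 2
dim_a, dim_b = 2 ** cut, 2 ** (n_qubits - cut)

rng = np.random.default_rng(3)
psi = rng.normal(size=dim_a * dim_b) + 1j * rng.normal(size=dim_a * dim_b)
psi /= np.linalg.norm(psi)                 # random pure state on A x B

# Reduced density matrix of subsystem A: partial trace over B.
M = psi.reshape(dim_a, dim_b)
rho_a = M @ M.conj().T

evals = np.linalg.eigvalsh(rho_a)
evals = evals[evals > 1e-12]               # drop numerical zeros
entropy = -(evals * np.log2(evals)).sum()  # von Neumann entropy in bits
print(f"S(rho_A) = {entropy:.3f} bits (max {cut} bits)")
```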
|
87 |
Perfil dos grupos estratégicos bancários no Brasil / A segmentation model for the Brazilian banking system
Gonzalez, Rodrigo Barbone, 15 August 2005 (has links)
O balanço de uma instituição financeira reflete suas principais decisões estratégicas, a saber, suas decisões de aplicação e captação que determinam os seus resultados. O objetivo desse trabalho é sugerir e testar uma composição para os segmentos do sistema bancário brasileiro baseado nessas decisões estratégicas e, assim, desenhar um perfil de atuação para os bancos no país. Esse trabalho utiliza dados de balancetes públicos padronizados pelo Plano Contábil das Instituições Financeiras (COSIF) e disponibilizados pelo Banco Central do Brasil. Os dados são transversais e a data base escolhida para esse estudo é dezembro de 2004, dez anos após a implantação do Plano Real e a publicação do primeiro artigo do gênero no Brasil por Savoia e Weiss (1995). Muitas transformações aconteceram nesses dez anos, em que pese à redução do sistema bancário de 263 para 140 instituições bancárias operantes. As técnicas multivariadas usadas são: análise de cluster, análise discriminante e escalonamento multidimensional. Os procedimentos hierárquico e não-hierárquico de análise de clusters foram utilizados em seqüência para formar segmentos internamente homogêneos e heterogêneos entre si. A solução escolhida subdivide o sistema bancário brasileiro em cinco grupos: varejo, crédito, tesouraria, intermediação bancária e transição ou outros repasses. Essa solução foi testada por meio de uma análise discriminante com bons resultados do ponto de vista da sua significância prática. O escalonamento multidimensional foi utilizado para propiciar uma solução gráfica que facilitasse a análise dos dados. Os resultados sugeriram que o sistema bancário era bem explicado por esses cinco segmentos. Três deles, os segmentos de varejo, crédito e tesouraria estavam voltados para a atividade-fim do sistema bancário, a intermediação financeira. Dois deles, os segmentos de intermediação bancária e transição ou repasses, foram caracterizados como intermediação da intermediação. Grupos com menor foco na intermediação financeira completa, entre credores e devedores primários, realizada pelos três segmentos anteriores. Levanta-se a hipótese de que o grupo de transição ou repasse representa os novos entrantes do mercado ou bancos com dificuldade de adaptação ao sistema bancário. O fato de mais de 30% dos bancos terem essas características de intermediação da atividade de crédito, ou estarem em busca de novos nichos de atuação sugere que o processo de reestruturação do sistema bancário iniciado em 1994 ainda não está concluído / The balance sheet of a financial institution reflects its main strategic decisions, namely its investment and funding decisions, which determine its results. The aim of the present study is to propose and test a segmentation of the Brazilian banking system based on these strategic decisions and thus to draw a profile of how banks operate in the country. The study relies on public balance sheets standardized by the Accounting Chart for Institutions of the National Financial System (COSIF) and made available by the Brazilian Central Bank. The data are cross-sectional, with December 2004 as the base date: ten years after the implementation of the Real plan and the publication of the first article of this kind in Brazil by Savoia and Weiss (1995). The Brazilian banking system underwent deep transformations during those ten years, not least the reduction from 263 to 140 operating banking institutions.
The multivariate methods applied in this study are cluster analysis, discriminant analysis, and multidimensional scaling. Hierarchical and non-hierarchical cluster procedures were carried out in sequence to form segments that are internally homogeneous and heterogeneous among themselves. The chosen solution divides the Brazilian banking system into five groups: retail, credit, treasury, interbanking, and transition or distribution banks. This solution was tested by a discriminant analysis, with good results in terms of practical significance. Multidimensional scaling provided a graphical representation that simplified further analysis. The results suggest that the banking system is well explained by these five segments. Three of them, the retail, credit, and treasury banks, are focused on the core activity of the banking system, financial intermediation, whereas the other two, the interbanking and transition or distribution banks, operate as intermediaries within the banking system (i.e., an intermediation of the intermediation), less focused on the complete financial intermediation between primary lenders and borrowers carried out by the first three groups. We hypothesize that the transition or distribution group represents new entrants to the market or banks facing difficulties in adapting to the banking system. The fact that over 30% of the banks either act merely as intermediaries of the credit activity or are still searching for new market niches suggests that the restructuring of the Brazilian banking system begun in 1994 is not yet complete.
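The sequence hierarchical clustering → non-hierarchical refinement → discriminant check can be sketched as follows; the number of banks matches the study (140), but the features and data are invented stand-ins for the COSIF balance-sheet variables:

```python
# Sketch of the multivariate pipeline described above. Feature names and
# data are hypothetical; COSIF balances are not reproduced here.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
# 140 banks x 4 assumed ratios: loans, securities, interbank, deposits.
X = rng.normal(size=(140, 4))

# Hierarchical step (Ward) cut at five groups.
labels_h = fcluster(linkage(X, method="ward"), t=5, criterion="maxclust")

# Non-hierarchical refinement seeded with the hierarchical centroids.
seeds = np.vstack([X[labels_h == g].mean(axis=0) for g in range(1, 6)])
labels = KMeans(n_clusters=5, init=seeds, n_init=1).fit_predict(X)

# Discriminant analysis as a check on how well the ratios separate the groups.
lda = LinearDiscriminantAnalysis().fit(X, labels)
print("resubstitution accuracy:", lda.score(X, labels))
```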
|
88 |
Mapeamento de dados genômicos usando escalonamento multidimensional / Representation of genomic data with multidimensional scaling
Espezúa Llerena, Soledad, 04 June 2008 (has links)
Neste trabalho são exploradas diversas técnicas de escalonamento multidimensional (MDS), com o objetivo de estudar sua aplicabilidade no mapeamento de dados genômicos resultantes da técnica RFLP-PCR, sendo esse mapeamento realizado em espaços de baixa dimensionalidade (2D ou 3D) com o fim de aproveitar a habilidade de análise e interpretação visual que possuem os seres humanos. Foi realizada uma análise comparativa de diversos algoritmos MDS, visando sua aptidão para mapear dados genômicos. Esta análise compreendeu o estudo de alguns índices de desempenho como a precisão no mapeamento, o custo computacional e a capacidade de induzir bons agrupamentos. Para a realização dessa análise foi desenvolvida a ferramenta "MDSExplorer", a qual integra os algoritmos estudados e várias opções que permitem comparar os algoritmos e visualizar os mapeamentos. A análise realizada sobre diversos bancos de dados citados na literatura sugere que o algoritmo LANDMARK possui o menor tempo computacional, uma precisão de mapeamento similar aos demais algoritmos, e uma boa capacidade de manter as estruturas existentes nos dados. Finalmente, o MDSExplorer foi usado para mapear um banco de dados genômicos: o banco de estirpes de bactérias fixadoras de nitrogênio, pertencentes ao gênero Bradyrhizobium, com o objetivo de ajudar o especialista a inferir visualmente alguma taxonomia nessas estirpes. Os resultados na redução dimensional desse banco de dados sugeriram que a informação relevante (acima dos 60% da variância acumulada) para as regiões 16S, 23S e IGS estaria nas primeiras 5, 4 e 9 dimensões respectivamente. / In this work, various multidimensional scaling (MDS) techniques were explored with the aim of applying them to the mapping of genomic data obtained with the RFLP-PCR technique. The mapping is done in a low-dimensional space (2D or 3D) in order to exploit the human capability for visual analysis and interpretation. A comparative analysis of several MDS algorithms was carried out to assess their suitability for representing genomic data. This analysis covered performance indices such as mapping precision, computational cost, and the capacity to induce good groupings. To support this analysis, a software tool called "MDSExplorer" was developed, which integrates the studied algorithms and offers options for comparing them and visualizing the mappings. The analysis, carried out over several datasets cited in the literature, suggests that the LANDMARK algorithm has the lowest computational time, mapping precision similar to the other algorithms, and a good capacity to preserve the structures present in the data. Finally, MDSExplorer was used to map a real genomic dataset: the RFLP-PCR profiles of a Brazilian collection of bacterial strains belonging to the genus Bradyrhizobium (known for their capability to fix atmospheric nitrogen into compounds useful to their host plants), with the objective of helping the specialist visually infer a taxonomy for these strains. The dimensionality-reduction results for this database suggest that the relevant information (above 60% of the accumulated variance) for the 16S, 23S, and IGS regions lies in the first 5, 4, and 9 dimensions, respectively.
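The landmark idea behind the algorithm that performed best above can be sketched as follows (classical MDS on a small landmark set, then distance-based triangulation of the remaining points, in the spirit of de Silva and Tenenbaum's Landmark MDS); this is an illustrative reimplementation, not the MDSExplorer code:

```python
# Sketch of landmark MDS: classical MDS on a few landmarks, then place all
# other points from their distances to those landmarks.
import numpy as np
from scipy.spatial.distance import cdist

def landmark_mds(X, n_landmarks=10, k=2, seed=0):
    rng = np.random.default_rng(seed)
    land = rng.choice(len(X), n_landmarks, replace=False)
    D = cdist(X[land], X[land])                 # landmark-landmark distances

    # Classical MDS on the landmarks: double-centre the squared distances.
    m = n_landmarks
    J = np.eye(m) - np.ones((m, m)) / m
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:k]             # top-k eigenpairs
    w, V = w[order], V[:, order]

    # Triangulate every point from its squared distances to the landmarks.
    D2 = cdist(X, X[land]) ** 2
    Y = -0.5 * (D2 - (D ** 2).mean(axis=0)) @ (V / np.sqrt(w))
    return Y

X = np.random.default_rng(5).normal(size=(200, 8))
print(landmark_mds(X).shape)                    # (200, 2)
```

The saving is that the full n-by-n distance matrix is never formed; only distances to the landmarks are needed, which is what makes the approach fast on large datasets.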
|
89 |
Interactive Visualization of Statistical Data using Multidimensional Scaling Techniques
Jansson, Mattias; Johansson, Jimmy, January 2003 (has links)
This study has been carried out in cooperation with Unilever and partly with the EC-funded project Smartdoc IST-2000-28137.

In areas of statistics and image processing, both the amount of data and the number of dimensions are increasing rapidly, and an interactive visualization tool that lets the user perform real-time analysis can save valuable time. Real-time cropping and drill-down considerably facilitate the analysis process and yield more accurate decisions.

In the Smartdoc project, there has been a request for a component for smart filtering in multidimensional data sets. As the Smartdoc project aims to develop smart, interactive components to be used on low-end systems, the implemented self-organizing map algorithm proposes which dimensions to visualize.

Together with Dr. Robert Treloar at Unilever, the SOM Visualizer - an application for interactive visualization and analysis of multidimensional data - has been developed. The analytical part of the application is based on Kohonen's self-organizing map algorithm. In cooperation with the Smartdoc project, a component has been developed that is used for smart filtering in multidimensional data sets. Microsoft Visual Basic and components from the graphics library AVS OpenViz were used as development tools.
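The analytical core can be sketched with MiniSom, an open-source Python implementation of Kohonen's self-organizing map used here as a stand-in for the thesis's Visual Basic / AVS OpenViz tooling (`pip install minisom` is assumed); grid size and data are illustrative:

```python
# Sketch: train a self-organizing map and count hits per grid cell, the kind
# of 2-D overview a user could then filter interactively.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(6)
data = rng.normal(size=(500, 12))                # 500 records, 12 dimensions

som = MiniSom(8, 8, input_len=12, sigma=1.5, learning_rate=0.5, random_seed=0)
som.random_weights_init(data)
som.train_random(data, num_iteration=2000)

# Each record maps to its best-matching unit on the 8x8 grid.
hits = np.zeros((8, 8), dtype=int)
for x in data:
    i, j = som.winner(x)
    hits[i, j] += 1
print(hits)
```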
|
90 |
Χρήση τυχαίων χρονικών διαστημάτων για έλεγχο βιομετρικών χαρακτηριστικών / Use of random time intervals for the verification of biometric characteristics
Σταμούλη, Αλεξία, 30 April 2014 (has links)
Η μέθοδος αναγνώρισης μέσω του τρόπου πληκτρολόγησης αποτελεί μία μέθοδο αναγνώρισης βιομετρικών χαρακτηριστικών με στόχο να ελαχιστοποιηθεί ο κίνδυνος κλοπής των προσωπικών κωδικών των πελατών ενός συστήματος. Το παρόν βιομετρικό σύστημα βασίζεται στο σενάριο ότι ο ρυθμός με τον οποίο ένα πρόσωπο πληκτρολογεί είναι ξεχωριστός.
Το βιομετρικό σύστημα έχει δύο λειτουργίες, την εγγραφή των πελατών στο σύστημα και τη σύγκριση. Για την εγγραφή απαραίτητη είναι η εξαγωγή των προτύπων των πελατών, τα οποία αποθηκεύονται στη βάση δεδομένων του συστήματος, ενώ για τη σύγκριση το πρότυπο του χρήστη συγκρίνεται με το πρότυπο του πελάτη που ισχυρίζεται ότι είναι.
Στην παρούσα εργασία η εξαγωγή των προτύπων πραγματοποιείται μέσω μιας σειράς αλγοριθμικών διαδικασιών. Αρχικά η μονοδιάστατη χαρακτηριστική χρονοσειρά του χρήστη μετατρέπεται μέσω της μεθόδου Method of Delays σε ένα πολυδιάστατο διάνυσμα που λειτουργεί ως χαρακτηριστικό της ακολουθίας. Στη συνέχεια χρησιμοποιούμε δύο διαφορετικές μεθόδους για να υπολογίσουμε τις ανομοιότητες μεταξύ των πολυδιάστατων διανυσμάτων που προέκυψαν. Οι δύο αυτές μέθοδοι είναι οι Wald-Wolfowitz test και Mutual Nearest Point Distance. Οι τιμές αυτές τοποθετούνται σε έναν πίνακα, κάθε στοιχείο του οποίου αναπαριστά την ανομοιότητα μεταξύ δύο χρονοσειρών. Ο πίνακας αυτός μπορεί είτε να αποτελέσει το σύνολο των προτύπων των χρηστών είτε να χρησιμοποιηθεί ως είσοδος στη μέθοδο Multidimensional Scaling που χρησιμοποιείται για μετατροπή του πίνακα ανομοιοτήτων σε διανύσματα και εξαγωγή νέων προτύπων. Τέλος, προτείνουμε ως επέκταση της εργασίας την εκπαίδευση του βιομετρικού συστήματος με χρήση των τεχνικών Support Vector Machines.
Για τη λειτουργία της σύγκρισης εξάγουμε πάλι το πρότυπο του χρήστη με την ίδια διαδικασία και το συγκρίνουμε με μία τιμή κατωφλίου. Τέλος, ο έλεγχος της αξιοπιστίας του συστήματος πραγματοποιείται μέσω της χρήσης τριών δεικτών απόδοσης, Equal Error Rate, False Rejection Rate και False Acceptance Rate. / Keystroke-dynamics recognition is a biometric identification method whose goal is to minimize the risk of theft of the personal codes of a system's customers. The present biometric system is based on the premise that the rhythm with which a person types is distinctive.
The biometric system has two functions: the enrollment of customers in the system and verification. For enrollment, the customers' templates must be extracted and stored in the system database; for verification, the user's template is compared with the template of the customer he or she claims to be.
In the present thesis, template extraction is performed through a series of algorithmic steps. First, the user's one-dimensional characteristic time series is converted, by the Method of Delays, into multidimensional vectors that act as features of the sequence. Two different methods are then used to compute the dissimilarities between the resulting multidimensional vectors: the Wald-Wolfowitz test and the Mutual Nearest Point Distance. These values are placed in a matrix whose entries represent the dissimilarity between two time series. This matrix can either serve directly as the set of user templates or be fed into the Multidimensional Scaling method, which converts the dissimilarities into vectors from which new templates are produced. Finally, as an extension of this work, we propose training the biometric system with Support Vector Machine techniques.
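A sketch of this pipeline, under stated assumptions: the dissimilarity below is a simple symmetric mean nearest-point distance standing in for the thesis's Wald-Wolfowitz and Mutual Nearest Point Distance statistics, and the keystroke timing data are synthetic:

```python
# Sketch: delay-embed each timing series, build a dissimilarity matrix
# between users, and embed it with MDS to obtain template coordinates.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.manifold import MDS

def delay_embed(x, dim=3, tau=1):
    """Method of Delays: map a 1-D series to points in R^dim."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def mnpd(A, B):
    """Assumed stand-in dissimilarity: symmetric mean nearest-point distance."""
    D = cdist(A, B)
    return 0.5 * (D.min(axis=1).mean() + D.min(axis=0).mean())

rng = np.random.default_rng(7)
series = [rng.gamma(2.0, 0.1, size=100) for _ in range(10)]  # 10 users' timings
clouds = [delay_embed(s) for s in series]

n = len(clouds)
dissim = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dissim[i, j] = dissim[j, i] = mnpd(clouds[i], clouds[j])

templates = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(dissim)
print(templates.shape)                                       # (10, 2)
```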
For verification, the user's template is again extracted with the same procedure and the resulting dissimilarity is compared with a threshold value. Finally, the reliability of the system is assessed through three performance indicators: Equal Error Rate, False Rejection Rate, and False Acceptance Rate.
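The evaluation can be sketched by sweeping a decision threshold over genuine and impostor dissimilarity scores and reading off the Equal Error Rate where the FAR and FRR curves cross; the score distributions below are synthetic:

```python
# Sketch: threshold sweep over genuine/impostor scores to estimate FAR, FRR
# and the EER. Scores are synthetic, not measured keystroke data.
import numpy as np

rng = np.random.default_rng(8)
genuine  = rng.normal(0.3, 0.1, 500)    # low dissimilarity: same user
impostor = rng.normal(0.7, 0.1, 500)    # high dissimilarity: different user

thresholds = np.linspace(0.0, 1.0, 501)
# Accept a claim when the dissimilarity falls below the threshold.
frr = np.array([(genuine  >= t).mean() for t in thresholds])  # false rejections
far = np.array([(impostor <  t).mean() for t in thresholds])  # false acceptances

i = np.argmin(np.abs(far - frr))        # crossing point of the two curves
print(f"EER ~ {0.5 * (far[i] + frr[i]):.3f} at threshold {thresholds[i]:.2f}")
```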
|