11 |
A scalable visual metaphor for tabular data and its application on clustering analysis. Evinton Antonio Cordoba Mosquera, 19 September 2017
The rapid evolution of computing resources has enabled large datasets to be stored and retrieved. However, exploring, understanding and extracting useful information from them is still a challenge. Among the computational tools that address this problem, information visualization enables the analysis of datasets through graphical representations that leverage human visual abilities, while data mining provides automatic processes for discovering and interpreting patterns. Despite the recent popularity of information visualization methods, a recurring problem is their low visual scalability when analyzing large datasets, resulting in loss of context and visual clutter. To represent large datasets while reducing the loss of relevant information, visual data aggregation has been employed. Aggregation decreases the amount of data to be represented while preserving the distribution and trends of the original dataset. Regarding data mining, information visualization has become an essential tool for interpreting computational models and their results, especially for unsupervised techniques such as clustering, because in these techniques the only way the user can interact with the mining process is through parameterization, which limits the insertion of domain knowledge into the analysis.
In this thesis, we propose and develop a new visual metaphor based on TableLens that employs aggregation to create more scalable representations of tabular data. As an application, we use the developed metaphor to analyze the results of clustering techniques. The resulting framework not only supports the analysis of large datasets with reduced loss of context but also provides insights into how data attributes contribute to cluster formation in terms of the cohesion and separation of the resulting groups.
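The central mechanism here is aggregation that preserves distributions while shrinking the number of rendered marks. Below is a minimal sketch of that idea, assuming pandas and numpy: rows are sorted by a focus attribute and binned, and each bin is summarized by its mean. It is my illustration of row binning for a TableLens-style view, not the dissertation's actual implementation.

```python
import numpy as np
import pandas as pd

def aggregate_rows(df: pd.DataFrame, sort_by: str, n_bins: int = 100) -> pd.DataFrame:
    """Collapse df into at most n_bins aggregate rows ordered by `sort_by`.

    Each aggregate row is the mean of a contiguous run of sorted rows, so a
    bar-per-row rendering of the result keeps the original column trends.
    """
    ordered = df.sort_values(sort_by).reset_index(drop=True)
    # Integer bin labels 0..n_bins-1 spread evenly over the sorted rows.
    labels = (np.arange(len(ordered)) * n_bins) // max(len(ordered), 1)
    return ordered.groupby(labels).mean(numeric_only=True)

# A million-row table becomes 100 rows that can be drawn without clutter.
rng = np.random.default_rng(0)
big = pd.DataFrame({"price": rng.lognormal(3, 1, 1_000_000),
                    "rating": rng.uniform(0, 5, 1_000_000)})
print(aggregate_rows(big, sort_by="price").head())
```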
12 |
Synthesis of tabular data: a systematic literature review of tools for creating synthetic datasets. Allergren, Erik, Hildebrand, Clara, January 2023
In recent years, the demand for large amounts of data to train machine learning algorithms has increased. These algorithms can be used to solve real-world problems and challenges. One way to meet the demand is to generate synthetic data that preserves the statistical properties and characteristics of real data. Synthetic data makes it possible to obtain large amounts of data, and it also minimizes the risk of disclosing personal information, so that data can be made accessible for research without revealing identities. The overall aim of this study was to examine and compile which tools for synthesizing tabular data are described in scientific publications written in English. The study was conducted by following the eight steps of a systematic literature review, with clearly defined criteria for which articles to include or exclude. The primary requirements were that the described tools exist as accessible code or programs, not only in theory, and that they are general and applicable to different tabular datasets; a tool could not be built only for one specific dataset or situation. The tools described in the remaining articles after the search, and therefore represented in the results, were (a) Synthpop, a tool developed within the UK Longitudinal Studies project to handle sensitive data containing personal information; (b) Gretel, a commercial and open-source tool created to meet the growing demand for training data; (c) UniformGAN, a new Generative Adversarial Network (GAN) variant that generates synthetic tabular datasets while ensuring privacy; and (d) Synthia, an open-source Python package for generating synthetic univariate and multivariate data. The results showed that the tools use different methods and models to produce synthetic data and offer different degrees of accessibility. Gretel stood out among the tools, being more commercial, offering more services, and making it possible to generate synthetic data without strong programming skills.
13 |
A Comparison of AutoML Hyperparameter Optimization Tools for Tabular Data. Pokhrel, Prativa, 02 May 2023
No description available.
14 |
Algorithms for Table Structure Recognition. Yosveni Escalona Escalona, 26 June 2020
Tables are a widely adopted way to organize and publish data. For example, the Web has an enormous number of tables, published in HTML, embedded in PDF documents, or simply available for download from Web pages. However, tables are not always easy to interpret because of the variety of features and formats used; indeed, a large number of methods and tools have been developed to interpret them. This dissertation presents the implementation of an algorithm, based on Conditional Random Fields (CRFs), to classify the rows of a table as header rows, data rows or metadata rows. The implementation is complemented by two algorithms for table recognition in spreadsheets, based respectively on rules and on region detection. Finally, the dissertation describes the results and benefits obtained by applying the implemented algorithms to HTML tables obtained from the Web and to spreadsheet tables downloaded from the website of the Brazilian National Petroleum Agency.
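The row-classification step lends itself to a compact illustration. The sketch below uses sklearn-crfsuite (an assumption; the dissertation does not name its CRF library) with invented layout features to label each row of a table as HEADER, DATA or METADATA.

```python
import sklearn_crfsuite  # pip install sklearn-crfsuite

def row_features(row_cells):
    """Cheap layout features for one table row (invented for illustration)."""
    n = max(len(row_cells), 1)
    numeric = sum(c.replace(".", "", 1).isdigit() for c in row_cells)
    return {
        "frac_numeric": numeric / n,                      # data rows are number-heavy
        "frac_empty": sum(c == "" for c in row_cells) / n,
        "n_cells": len(row_cells),                        # metadata rows are often short
    }

# Each table is a sequence of rows; each row gets one of three labels.
tables = [[["Name", "Age"], ["Ann", "34"], ["Bo", "27"], ["Source: census"]]]
labels = [["HEADER", "DATA", "DATA", "METADATA"]]

X = [[row_features(row) for row in table] for table in tables]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X, labels)
print(crf.predict(X))  # e.g. [['HEADER', 'DATA', 'DATA', 'METADATA']]
```

The linear-chain structure is what makes a CRF a good fit here: the label of a row depends not only on its own features but on its neighbors, so a header is unlikely to follow a long run of data rows.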
15 |
Benchmarking AutoML for regression tasks on small tabular data in materials design. Conrad, Felix, Mälzer, Mauritz, Schwarzenberger, Michael, Wiemer, Hajo, Ihlenfeldt, Steffen, 05 March 2024
Machine learning has become more important for materials engineering in the last decade. Globally, automated machine learning (AutoML) is growing in popularity with the increasing demand for data analysis solutions, yet it is not frequently used for small tabular data. Comparisons and benchmarks already exist to assess the qualities of AutoML tools in general, but none of them addresses the conditions under which materials engineers work with experimental data: small datasets with fewer than 1000 samples. This benchmark addresses these conditions and draws special attention to overall competitiveness with manual data analysis. Four representative AutoML frameworks are used to evaluate twelve domain-specific datasets in order to provide orientation on the promise of AutoML in the field of materials engineering. Performance, robustness and usability are discussed in particular. The results lead to two main conclusions: first, AutoML is highly competitive with manual model optimization, even with little training time; second, the sampling of training and test data is of crucial importance for reliable results.
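As a concrete picture of the kind of experiment benchmarked here, the sketch below runs one AutoML framework on a small regression dataset. The abstract does not name the four frameworks, so AutoGluon, the file names and the label column "target" are all my assumptions.

```python
from autogluon.tabular import TabularDataset, TabularPredictor

# Small experimental dataset with fewer than 1000 rows, as in the benchmark setting.
train = TabularDataset("train.csv")   # assumed file; "target" is an assumed label column
test = TabularDataset("test.csv")

# A short time budget is the point: the benchmark found AutoML competitive
# with manual optimization even with little training time.
predictor = TabularPredictor(label="target", eval_metric="r2").fit(
    train, time_limit=300
)
print(predictor.evaluate(test))
print(predictor.leaderboard(test))
```

On datasets this small, the second conclusion above matters most: a single lucky train/test split can swing the reported score, so repeated splits or cross-validation are needed for reliable comparisons.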
16 |
Variational AutoEncoders and Differential Privacy: balancing data synthesis and privacy constraints. Bremond, Baptiste, January 2024
This thesis investigates the effectiveness of Tabular Variational AutoEncoders (TVAEs) in generating high-quality synthetic tabular data and assesses their compliance with differential privacy principles. The study shows that while TVAEs are better than VAEs at generating synthetic data that faithfully reproduces the distribution of real data, as measured by the Synthetic Data Vault (SDV) metrics, high metric scores do not guarantee that the synthetic data is up to the task in practical industrial applications. In particular, models trained on TVAE-generated data from the Creditcards dataset are ineffective. The author also explores various optimisation methods for TVAE, such as the Gumbel Max Trick, dropout and batch normalization, while pointing out that techniques frequently used to improve two-dimensional TVAEs, such as Kullback-Leibler warm-up and β-disentanglement, are not directly transferable to the one-dimensional context. Differential privacy was not implemented for TVAE, however, due to time constraints and inconclusive results. The study nevertheless highlights the benefits of stabilising training with Differentially Private Stochastic Gradient Descent (DP-SGD), much as with dropout, and the existence of an optimal equilibrium point between the constraints of differential privacy and the number of training epochs of the model.
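The TVAE workflow described above can be sketched with the SDV library, which also provides the quality metrics mentioned. This is a hedged illustration: the input file and hyperparameters are assumptions, and the exact API shown is the SDV 1.x style, which may differ across versions.

```python
import pandas as pd
from sdv.metadata import SingleTableMetadata
from sdv.single_table import TVAESynthesizer
from sdv.evaluation.single_table import evaluate_quality

real = pd.read_csv("creditcard.csv")            # assumed input file
metadata = SingleTableMetadata()
metadata.detect_from_dataframe(real)

# Epochs and batch size are illustrative, not the thesis's settings.
synthesizer = TVAESynthesizer(metadata, epochs=300, batch_size=500)
synthesizer.fit(real)
synthetic = synthesizer.sample(num_rows=len(real))

# SDV's quality report compares column shapes and pairwise trends -- the
# kind of distribution-fidelity metric on which TVAE scored well here.
report = evaluate_quality(real_data=real, synthetic_data=synthetic, metadata=metadata)
print(report.get_score())
```

Note that a high score from this report is exactly the kind of evidence the thesis warns about: it measures distributional fidelity, not downstream usefulness, which is why models trained on the synthetic Creditcards data could still be ineffective.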
17 |
Differential neural architecture search for tabular data: efficient neural network design for tabular datasets. Medhage, Marcus, January 2024
Artificial neural networks are some of the most powerful machine learning models and have gained interest in the telecommunications domain, as well as in other fields and applications, due to their strong performance and flexibility. Creating these models typically requires manually choosing their architecture along with other hyperparameters that are crucial for their performance. Neural Architecture Search (NAS) seeks to automate architecture choice and has gained increasing interest in recent years. In this thesis, we propose a new NAS method based on differentiable architecture search (DARTS) to find architectures of fully connected feedforward networks on tabular datasets. We train a gating mechanism on a validation dataset and compare four candidate gate functions as a tool to determine the number of hidden units per hidden layer in our neural networks for different tasks. Our findings show that our new method can reliably find architectures that are more compact than and outperform manually chosen architectures. Interestingly, we also found that extracting weights learned during the search process could produce models that achieve significantly higher and more stable performance than identical architectures retrained from scratch. Our method achieved performance equal to that of another NAS method while requiring only half an hour of training compared to 280 hours. The trained models also demonstrated competitive performance when benchmarked against other state-of-the-art machine learning models. The primary benefit of our method stems from the extraction and fine-tuning of certain weights. Our results indicate that the improvements from extracted weights may relate to the lottery ticket hypothesis of neural networks, which invites further study for a fuller understanding.
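The gating mechanism over hidden-layer widths can be illustrated in a few lines of PyTorch. This is a from-scratch sketch of a DARTS-style soft choice among candidate widths, not the thesis's code; after the search, the width with the largest architecture parameter would be kept.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedWidthLayer(nn.Module):
    """A hidden layer whose effective width is chosen by a learned gate."""
    def __init__(self, in_dim: int, candidate_widths=(16, 64, 256)):
        super().__init__()
        self.max_w = max(candidate_widths)
        self.linear = nn.Linear(in_dim, self.max_w)
        # One architecture parameter per candidate width (alpha in DARTS),
        # trained on validation data separately from the layer weights.
        self.alpha = nn.Parameter(torch.zeros(len(candidate_widths)))
        # Binary masks that keep only the first w units of the widest layer.
        masks = torch.zeros(len(candidate_widths), self.max_w)
        for i, w in enumerate(candidate_widths):
            masks[i, :w] = 1.0
        self.register_buffer("masks", masks)

    def forward(self, x):
        h = F.relu(self.linear(x))
        gate = F.softmax(self.alpha, dim=0)   # soft mixture over candidate widths
        mask = gate @ self.masks              # per-unit weighting
        return h * mask

layer = GatedWidthLayer(in_dim=10)
out = layer(torch.randn(4, 10))
print(out.shape)  # torch.Size([4, 256]); argmax(layer.alpha) picks the final width
```

Because the gate is differentiable, the width choice is optimized with ordinary gradient descent rather than by training one network per candidate architecture, which is where the half-hour-versus-280-hours gap comes from.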
18 |
Visualization of tabular data on mobile devices. Caspár, Sophia, January 2018
This thesis evaluates various ways of displaying tabular data on mobile devices using different responsive table solutions. It also presents a tool to help web developers and designers choose and implement a suitable table approach. The proposed solution is a web system called The Visualizing Wizard, which allows the user to answer some questions about the intended table and then receive a recommended responsive table solution generated from the answers. The system uses a rule-based approach via Prolog to match the answers against a set of rules and provide an appropriate result. In order to determine which table solutions are more appropriate for which types of data, a statistical analysis and user tests were performed. The statistical analysis identifies the most common table approaches and data types used on various websites. The results indicate that solutions such as "squish", "collapse by rows", "click" and "scroll" are most common, and that the most common table categories are product comparison, product offerings, sports and stock market/statistics. This information was used to design user tests that collected feedback and opinions. The data and statistics gathered from the user tests were mapped into sets of rules that answer the question of which responsive table solution is more appropriate for which type of data. This serves as the foundation for The Visualizing Wizard.
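The matching step of such a wizard is easy to picture. The thesis implements it in Prolog; the sketch below re-expresses the idea in Python with an invented rule base (the questions and conditions are my assumptions) that maps questionnaire answers to the table solutions named above.

```python
# Each rule pairs a set of answer conditions with a recommended solution.
# The rules here are illustrative, not the thesis's actual rule base.
RULES = [
    ({"columns": "many", "priority": "compare_rows"}, "scroll"),
    ({"columns": "many", "priority": "overview"}, "collapse by rows"),
    ({"columns": "few", "numeric_heavy": True}, "squish"),
    ({"detail_on_demand": True}, "click"),
]

def recommend(answers: dict) -> str:
    """Return the first responsive-table solution whose conditions all hold."""
    for conditions, solution in RULES:
        if all(answers.get(key) == value for key, value in conditions.items()):
            return solution
    return "scroll"  # fallback when no rule matches

print(recommend({"columns": "many", "priority": "overview"}))  # collapse by rows
```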
19 |
Synthesis of Tabular Financial Data using Generative Adversarial Networks. Karlsson, Anton, Sjöberg, Torbjörn, January 2020
Digitalization has led to vast amounts of available customer data and possibilities for data-driven innovation. However, the data needs to be handled carefully to protect the privacy of the customers. Generative Adversarial Networks (GANs) are a promising recent development in generative modeling. They can be used to create synthetic data that facilitates analysis while ensuring that customer privacy is maintained. Prior research on GANs has shown impressive results on image data. In this thesis, we investigate the viability of using GANs within the financial industry. We investigate two state-of-the-art GAN models for synthesizing tabular data, TGAN and CTGAN, along with a simpler GAN model that we call WGAN. A comprehensive evaluation framework is developed to facilitate comparison of the synthetic datasets. The results indicate that GANs are able to generate quality synthetic datasets that preserve the statistical properties of the underlying data and enable a viable and reproducible subsequent analysis. It was, however, found that all of the investigated models had problems reproducing numerical data.
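The simpler baseline referred to as WGAN can be illustrated by its defining training signal. Below is a from-scratch PyTorch sketch of one Wasserstein critic update with weight clipping; the network sizes and clipping value follow the original WGAN paper's conventions and are assumptions, not the thesis's configuration.

```python
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
generator = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 10))
opt_c = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

real = torch.randn(128, 10)          # stand-in for a batch of real table rows
noise = torch.randn(128, 8)
fake = generator(noise).detach()     # generator is frozen during the critic step

# Critic step: maximize E[critic(real)] - E[critic(fake)],
# i.e. minimize the negated gap.
loss_c = -(critic(real).mean() - critic(fake).mean())
opt_c.zero_grad()
loss_c.backward()
opt_c.step()

# Weight clipping enforces the Lipschitz constraint of the original WGAN.
with torch.no_grad():
    for p in critic.parameters():
        p.clamp_(-0.01, 0.01)
```

TGAN and CTGAN build on the same adversarial signal but add tabular-specific machinery, such as CTGAN's mode-specific normalization of numerical columns, which is precisely the part the reported difficulties with numerical data concern.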
20 |
Generation of Synthetic Clinical Trial Subject Data Using Generative Adversarial Networks. Lindell, Linus, January 2024
The development of new solutions incorporating artificial intelligence (AI) within the medical field is an area of great interest. However, access to comprehensive and diverse datasets is restricted due to the sensitive nature of the data. A potential solution is to generate synthetic datasets based on real medical data. Synthetic data could protect the integrity of the subjects while preserving the information necessary for training AI models, and it can be generated in greater quantity than is otherwise available. This thesis project aims to generate reliable clinical trial subject data using a generative adversarial network (GAN). The main dataset used is a mock clinical trial dataset consisting of multiple subject visits; an additional dataset containing authentic medical data is also used for better insight into the model's ability to learn underlying relationships. The thesis also investigates training strategies for simulating the temporal dimension and the missing values in the data. The GAN model used is an altered version of the Conditional Tabular GAN (CTGAN), made compatible with the preprocessed clinical trial mock data, and multiple model architectures and numbers of training epochs are examined. The results show great potential for GAN models on clinical trial datasets, especially for real-life data. One model, trained on the authentic dataset, generates near-perfect synthetic data with respect to column distributions and correlations between columns. The results also show that classification models trained on synthetic data and tested on real data have the potential to match the performance of classification models trained on real data. While the synthetic data replicates the missing values, no definitive conclusion can be drawn regarding the temporal characteristics due to the sparsity of the mock dataset and the lack of real correlations in it. Although the results are promising, further experiments on authentic, less sparse datasets are required.
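The classifier comparison described above is the standard "train on synthetic, test on real" (TSTR) check. The sketch below shows one way to run it with scikit-learn; the inputs are pandas DataFrames, and the classifier choice and label column name are placeholders, not the thesis's setup.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def tstr_scores(real_train, real_test, synthetic, label="outcome"):
    """AUC on real test rows for models trained on real vs. synthetic data.

    All three inputs are pandas DataFrames with the same columns; `label`
    is a placeholder name for a binary outcome column.
    """
    scores = {}
    for name, train in [("trained_on_real", real_train),
                        ("trained_on_synthetic", synthetic)]:
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(train.drop(columns=[label]), train[label])
        proba = clf.predict_proba(real_test.drop(columns=[label]))[:, 1]
        scores[name] = roc_auc_score(real_test[label], proba)
    # Comparable scores suggest the synthetic data preserved the real signal.
    return scores
```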