About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
291

TOWARDS SMARTER CITIES: STRATEGIES TO INTEGRATE QUANTITATIVE AND QUALITATIVE DATA BY PARTICIPATORY DESIGN PROCESS

RAQUEL CORREA CORDEIRO 28 May 2024 (has links)
The concept of smart cities is often associated with technological advancement, but it also encompasses aspects of citizen well-being and sustainability. The growing availability of digital data results in an excessive focus on technology, neglecting citizen participation and consequently underutilizing the potential of this information. Our hypothesis is that design can facilitate access to complex urban data through data storytelling and participatory processes. Therefore, we tested a co-design process using mixed methods to analyze mobility behavior. Structured in two phases, the study initially explored mobility projects by analyzing reports from the Civitas initiative and interviewing professionals in the field. The identified challenges and solutions were then tested in the second phase, employing data collection methods such as city open data analysis, diary studies, and sentiment analysis on social media. Finally, a co-design workshop was conducted incorporating data visualization tools to co-analyze the weather effects on urban mobility. The results highlight the significant potential of the designer as a facilitator, with participants reporting ease in analyzing substantial data volumes and considering the proposal innovative and enjoyable. Future research may evaluate participants' understanding of the data. The contribution of this thesis lies in a co-design process that can involve various stakeholders, including government, private enterprises, and citizens, using data storytelling tools applicable to any project dealing with large data volumes.
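The workshop described above rests on joining municipal open data with weather records so that participants can co-analyze them. As a hedged illustration of that kind of preparation (the file names, column names, and bike-share example are assumptions, not the thesis's actual data), a few lines of pandas suffice to merge the two sources and surface the correlations a workshop group would discuss:

    import pandas as pd

    # Daily trip counts from a municipal open-data portal (hypothetical file).
    trips = pd.read_csv("bike_trips.csv", parse_dates=["date"])
    daily = trips.groupby(trips["date"].dt.date).size().rename("trips")

    # Daily weather observations (hypothetical columns: temp_c, rain_mm).
    weather = pd.read_csv("weather.csv", parse_dates=["date"])
    weather = weather.set_index(weather["date"].dt.date)[["temp_c", "rain_mm"]]

    # One table participants can read together: trips alongside weather,
    # plus simple correlations as a conversation starter.
    merged = weather.join(daily, how="inner")
    print(merged.corr(numeric_only=True)["trips"])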
292

The utilization of BDA in digital marketing strategies of international B2B organizations from a dynamic capabilities perspective : A qualitative case study

Jonsdottir, Hugrun Dis January 2024 (has links)
In B2B organizations, the adoption of digital marketing strategies has increased, leading to the collection of large amounts of data, known as big data. This has enabled the use of big data analytics (BDA) to uncover valuable insights for digital marketing purposes. Yet there is limited research on how B2B organizations integrate and utilize BDA in their digital marketing strategies, especially in the international context. This study aimed to address this research gap by examining how international B2B organizations integrate and utilize BDA in their digital marketing strategy, employing a dynamic capabilities perspective. A qualitative case study methodology was applied, focusing on two established Swedish B2B organizations with an international presence. Empirical data was collected through semi-structured interviews and complemented with document analysis. Through an abductive approach and hermeneutic interpretation, the findings show that despite the need for internal structural improvements, international B2B organizations are actively integrating BDA into their digital marketing strategies. By developing new routines and skills, these organizations can navigate the challenges posed by BDA while harnessing its benefits. Additionally, a framework comprising ten practices through which international B2B organizations leverage BDA is proposed.
293

Revealing the Non-technical Side of Big Data Analytics : Evidence from Born analyticals and Big intelligent firms

Denadija, Feda, Löfgren, David January 2016 (has links)
This study aspired to gain a more nuanced understanding of emerging analytics technologies and the vital capabilities that ultimately drive evidence-based decision making. Big data technology is widely discussed by varying groups in society and believed to revolutionize corporate decision making. In spite of big data's promising possibilities, only a small fraction of firms deploying big data analytics (BDA) have gained significant benefits from their initiatives. To explain this inability we drew on prior IT literature suggesting that IT resources can only be successfully deployed when combined with organizational capabilities. We identified key theoretical components at the organizational, relational, and human levels. The data collection included 20 interviews with decision makers and data scientists from four analytical leaders. Early on we distinguished the companies into two categories based on their empirical characteristics, coining the terms "Born analyticals" and "Big intelligent firms". The analysis concluded that social, non-technical elements play a crucial role in building BDA abilities. These capabilities differ among companies but can still enable BDA in different ways, indicating that an organization's history and context seem to influence how firms deploy capabilities. Some capabilities proved more important than others: the individual mindset towards data is seemingly the most determining capability in building BDA ability. Varying mindsets foster different BDA environments in which other capabilities behave accordingly. Born analyticals seemed to display an environment benefiting evidence-based decisions.
294

Benefits, business considerations and risks of big data

Smeda, Jorina 04 1900 (has links)
Thesis (MComm)--Stellenbosch University, 2015. / Big data is an emerging technology and its use holds great potential and benefits for organisations. The governance of this technology remains a significant concern, however, and guidance for organisations wanting to use it is still lacking. In this study an extensive literature review was conducted to identify and define the business imperatives distinctive of an organisation that will benefit from the use of big data. The business imperatives were identified and defined based on the characteristics and benefits of big data: if these are clearly understood, the relevant technology will be better understood as well. Furthermore, the business imperatives provide business managers with guidance as to whether their organisation will benefit from the use of this technology. The strategic and operational risks related to the use of big data were also identified and are discussed in this assignment, based on a literature review. The risks specific to big data are highlighted, and guidance is given to business managers as to which risks should be addressed when using big data. The risks are then mapped against COBIT 5 (Control Objectives for Information and Related Technology) to highlight the processes most affected when implementing and using big data, providing business managers with guidance when governing this technology.
295

Social Media and Banks – Facebook Users' Reactions to Meta Data Based Credit Analysis

Thießen, Friedrich, Brenger, Jan Justus, Kühn, Annemarie, Gliem, Georg, Nake, Marianne, Neuber, Markus, Wulf, Daniel 14 March 2017 (has links)
The trend to analyze all conceivable data sets for commercial purposes is unstoppable. Banks and fintechs try to use social media data to assess the creditworthiness of potential customers. The research question is how social media users react when they realize that their bank evaluates personal social media profiles. A survey of 271 test persons was performed to analyze this problem. The results are as follows: respondents are able to rationally assess the reasons for the development and the logic behind big data analyses. They recognize the advantages, but also see risks. Opening social media profiles to banks should not lead to individual disadvantages; instead, people expect an advantage from opening their profiles voluntarily. This is a moral attitude. An important minority of 20 to 30 % argues strictly against the commercial use of social media data. When people realize that they cannot prevent the commercial use of private data, they start to manipulate it. Manipulation becomes more extensive when test persons learn about critical details of big data analyses. Those who realize that their private data are used commercially consider it fair to answer in the same style, and so the whole society moves in a commercial direction. To sum up, banks should be reluctant and careful in analyzing private client big data. Instead, banks should let fintechs take the lead, as fintechs have lower opportunity costs: they do not depend on good customer relations for related products.
296

User Adoption of Big Data Analytics in the Public Sector

Akintola, Abayomi Rasheed January 2019 (has links)
The goal of this thesis was to investigate the factors that influence the adoption of big data analytics by public sector employees, based on an adapted Unified Theory of Acceptance and Use of Technology (UTAUT) model. A mixed-methods design of surveys and interviews was used to collect data from employees of a Canadian provincial government ministry. The results show that performance expectancy and facilitating conditions have significant positive effects on the intention to adopt big data analytics, while effort expectancy has a significant negative effect; social influence has no significant effect on adoption intention. In terms of moderating variables, gender moderates the effects of effort expectancy, social influence and facilitating conditions; data experience moderates the effects of performance expectancy, effort expectancy and facilitating conditions; and leadership moderates the effect of social influence. The moderation effects of age on performance expectancy and effort expectancy are significant only for employees in the 40 to 49 age group, while the moderation effect of age on social influence is significant for employees aged 40 and over. Based on the results, implications for public sector organizations planning to implement big data analytics are discussed and suggestions for further research are made. This research contributes to existing studies on user adoption of big data analytics.
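For readers unfamiliar with how such moderation effects are tested, a common approach is an interaction term in a regression on adoption intention. The sketch below is a hedged illustration under assumed variable names (EE for effort expectancy, gender as the moderator, intention as the outcome), not the study's actual analysis:

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("utaut_survey.csv")   # hypothetical survey export

    # "intention ~ EE * C(gender)" expands to both main effects plus the
    # EE:gender interaction; a significant interaction indicates moderation.
    model = smf.ols("intention ~ EE * C(gender)", data=df).fit()
    print(model.summary())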
297

Ballstering : a clustering algorithm for large datasets

Courjault-Rade, Vincent 17 April 2018 (has links)
Ballstering belongs to the family of machine learning methods that aim to group the objects of a dataset into classes without any prior knowledge of the true classes it contains. Such methods, of which k-means is the best-known representative, are called clustering methods. Recently, a clustering algorithm, Fast Density Peak Clustering (FDPC), published in the journal Science, aroused great interest in the scientific community for its innovative approach and its efficiency on non-concentric distributions. However, the algorithm's complexity prevents it from being easily applied to large datasets. Moreover, we identified several weaknesses that can strongly degrade the quality of its results, in particular a global parameter dc that is difficult to choose yet has a significant impact on the results. In view of these limitations, we reworked the main idea of FDPC from a new angle and modified it successively, finally arriving at a distinct algorithm that we named Ballstering. The work carried out during these three years of the thesis amounts principally to the design of this algorithm, a clustering method derived from FDPC and especially designed to be efficient on large volumes of data. Like its precursor, Ballstering works in two phases: a density estimation phase followed by a clustering phase. Its design rests mainly on a subprocedure that performs the first phase of FDPC with a much lower complexity while avoiding the choice of dc, which becomes dynamic, determined according to the local density. We call this subprocedure ICMDW, and it represents a substantial part of our contributions. We also reworked some of the core definitions of FDPC and entirely redesigned the second phase, relying on the tree structure of ICMDW's intermediate results, to finally produce an algorithm that overcomes all the limitations we identified in FDPC.
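To make the two-phase idea concrete, here is a minimal, naive sketch of the density-peak computation at the heart of FDPC, the algorithm Ballstering refines. It is an assumption-laden illustration, not Ballstering itself: it retains exactly the O(n²) cost and the fixed dc cutoff that ICMDW is designed to eliminate.

    import numpy as np

    def density_peaks(X, dc):
        """Naive FDPC quantities: local density rho and distance delta to the
        nearest denser point, for every row of X. O(n^2) time and memory."""
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        rho = (D < dc).sum(axis=1) - 1            # neighbors within the cutoff dc
        delta = np.full(len(X), D.max())          # densest point keeps the max
        parent = np.full(len(X), -1)              # nearest denser neighbor
        for i in range(len(X)):
            denser = np.where(rho > rho[i])[0]
            if denser.size:
                j = denser[np.argmin(D[i, denser])]
                delta[i], parent[i] = D[i, j], j
        return rho, delta, parent

Cluster centers are then the points with simultaneously high rho and high delta; every other point is assigned to the cluster of its parent. Per the abstract, Ballstering's contribution is to obtain these quantities without the full distance matrix and without a global dc.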
298

The impact of digitalization in the Swedish energy sector

Oscarsson, David, Palmenäs, Johan January 2018 (has links)
The ongoing digitalization affects all sectors and changes the competitive landscape. A sector often regarded as traditional, with low digital maturity, is the energy sector. Existing literature has focused on overcoming the technical difficulties associated with digitalization and lacks reasoning about the implications for existing business models. The purpose of this study is therefore to investigate how digitalization affects companies in the Swedish energy sector when it comes to innovations in the business model: how companies create, deliver and capture value. This purpose is addressed through an exploratory multiple case study including some of the most prominent actors on the Swedish energy market. The results show that digitalization has had multiple implications in all of the business model's building blocks, but it is still associated with many uncertainties, and the most radical changes are expected to happen in the future. Theoretical implications of this study are an increased understanding of how digitalization drives business model innovation and how the application of new technologies can lead to increased business value. Practical implications are deepened knowledge for business managers of how digitalization can be utilized to gain increased value in an industry with an overall low digital maturity.
299

Use of filters in mobile photo-sharing applications and services

Azevedo, Telma Luiza de 26 April 2017 (has links)
The image concentrates ideological information encompassing complex structures that permeate the lives of millions of users and that constitute and build society in our time. Starting from a look at the current landscape of photographic practices in society, this dissertation examines the use of filters: tools a photographer can use to apply various effects to an image, such as highlighting colors, changing the contrast of the scene, modifying focus, applying graphic effects, or absorbing part of the light that reaches the lens; in other words, superimposing layers of information on the photographs shared on social networks. Filters also refer to the act of classifying and selecting networked data flows of public or private information from users around the globe, interfering with the activities of millions of individuals. Promoting scientific knowledge of this sphere of shared, creative and experimental photographic language, popularized by today's technology, is thus essential to reveal the scope of the phenomenon and to provoke reflection on the financial determinants that permeate everyday behavior, so as to act on established standards rather than merely reproduce them.
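As a concrete illustration of the first sense of "filter" discussed above (the lens- and app-based effects), the sketch below layers color, contrast, and focus adjustments over a photograph. The Pillow library and file names are assumptions standing in for the filters built into photo-sharing apps:

    from PIL import Image, ImageEnhance, ImageFilter

    img = Image.open("photo.jpg")                    # hypothetical input
    img = ImageEnhance.Color(img).enhance(1.4)       # boost color saturation
    img = ImageEnhance.Contrast(img).enhance(1.2)    # raise scene contrast
    img = img.filter(ImageFilter.GaussianBlur(1))    # soften focus slightly
    img.save("photo_filtered.jpg")                   # layered "filter" result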
300

Effective and unsupervised fractal-based feature selection for very large datasets: removing linear and non-linear attribute correlations

Fraideinberze, Antonio Canabrava 04 September 2017 (has links)
Given a very large dataset of moderate-to-high dimensionality, how can useful patterns be mined from it? In such cases, dimensionality reduction is essential to overcome the well-known curse of dimensionality. Although algorithms exist to reduce the dimensionality of Big Data, they all fail to identify and eliminate non-linear correlations that may occur between attributes. This MSc work tackles the problem by exploring concepts from Fractal Theory and massively parallel processing to present Curl-Remover, a novel dimensionality reduction technique for very large datasets. Our contributions are: (a) Curl-Remover eliminates linear and non-linear attribute correlations as well as irrelevant attributes; (b) it is unsupervised and suits analytical tasks in general, not only classification; (c) it presents linear scale-up in both the data size and the number of machines used; (d) it does not require the user to guess the number of attributes to be removed; and (e) it preserves the attributes' semantics by performing feature selection, not feature extraction. We executed experiments on synthetic and real data spanning up to 1.1 billion points, and report that Curl-Remover outperformed two state-of-the-art PCA-based algorithms, being on average up to 8% more accurate.
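The fractal intuition behind Curl-Remover can be sketched in a few lines: estimate the dataset's correlation fractal dimension D2 by box-counting, and observe that removing a redundant (linearly or non-linearly correlated) attribute barely changes D2, while removing an informative one lowers it. The toy estimator below is an assumption-laden illustration of that idea, not the thesis algorithm, which adds massive parallelism and a principled stopping criterion:

    import numpy as np

    def correlation_dimension(X, n_scales=8):
        """Estimate D2 as the slope of log(sum of squared cell counts) vs log(r)."""
        mins, maxs = X.min(axis=0), X.max(axis=0)
        X = (X - mins) / (maxs - mins + 1e-12)          # normalize to the unit cube
        logs_r, logs_s = [], []
        for k in range(1, n_scales + 1):
            r = 2.0 ** -k                               # grid cell side at this scale
            cells = np.floor(X / r).astype(np.int64)    # cell coordinates per point
            _, counts = np.unique(cells, axis=0, return_counts=True)
            logs_r.append(np.log(r))
            logs_s.append(np.log((counts.astype(float) ** 2).sum()))
        return np.polyfit(logs_r, logs_s, 1)[0]         # slope of log-log fit ~ D2

    # Selection idea: drop an attribute and re-estimate D2; if it is (nearly)
    # unchanged, the attribute carried no independent information.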
