261

Využití data miningu v řízení podniku / Using data mining to manage an enterprise.

Prášil, Zdeněk January 2010
The thesis focuses on data mining and its use in the management of an enterprise. It is structured into a theoretical and a practical part. The aim of the theoretical part was to identify: 1/ the most widely used data mining methods, 2/ typical application areas, and 3/ the typical problems solved in those areas. The aim of the practical part was: 1/ to demonstrate the use of data mining in a small Czech e-shop to understand the structure of its sales data, and 2/ to demonstrate how data mining analysis can help improve marketing results. In my analysis of the literature I found that decision trees, linear and logistic regression, neural networks, segmentation methods, and association rules are the most widely used data mining methods. CRM and marketing, financial institutions, insurance and telecommunication companies, retail trade, and production are the application areas that use data mining the most. The typical data mining tasks focus on the relationships between sales and customers in order to improve business. In the analysis of the e-shop data I identified the types of goods that are bought together, and based on this I proposed that a strategy supporting this type of shopping is crucial for business success. In conclusion, I showed that data mining methods are appropriate even for a small e-shop and can improve its marketing strategy.
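For readers unfamiliar with the technique, the co-purchase patterns this thesis mines are typically expressed as association rules scored by support and confidence. The following minimal sketch (item names and thresholds are invented, not the thesis's actual data or code) counts pairwise co-occurrences in toy transactions:

```python
from collections import Counter
from itertools import combinations

# Toy e-shop transactions; item names are invented for illustration.
transactions = [
    {"phone", "case", "charger"},
    {"phone", "case"},
    {"laptop", "mouse"},
    {"phone", "charger"},
    {"laptop", "mouse", "bag"},
]

n = len(transactions)
item_counts = Counter(item for t in transactions for item in t)
pair_counts = Counter(pair for t in transactions
                      for pair in combinations(sorted(t), 2))

# Emit rules A -> B that clear minimal support/confidence thresholds.
MIN_SUPPORT, MIN_CONFIDENCE = 0.2, 0.5
for (a, b), count in pair_counts.items():
    support = count / n
    for antecedent, consequent in ((a, b), (b, a)):
        confidence = count / item_counts[antecedent]
        if support >= MIN_SUPPORT and confidence >= MIN_CONFIDENCE:
            print(f"{antecedent} -> {consequent}  "
                  f"support={support:.2f} confidence={confidence:.2f}")
```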
262

A new framework for a technological perspective of knowledge management

Botha, Antonie Christoffel 26 June 2008
Rapid change is a defining characteristic of our modern society. This has a huge impact on society, governments, and businesses. Businesses are forced to transform themselves fundamentally to survive in a challenging economy. Transformation implies change in the way business is conducted, in the way people contribute to the organisation, and in the way the organisation perceives and manages its vital assets, which are increasingly built around the key assets of intellectual capital and knowledge. The latest management tool, and realisation of how to respond to the challenges of the economy in the new millennium, is the idea of "knowledge management" (KM). In this study we have focused on synthesising the many confusing points of view about the subject area, such as: a. different focus points or perspectives; b. different definitions and positioning of the subject; and c. a bewildering number of definitions of what knowledge is and what KM entails. Popular sources blur the distinctions between concepts such as knowledge versus information versus data; the difference between information management and knowledge management; the tools available to tackle the issues in this field of study and practice; and the role technology actually plays versus the hype from some journalists and the vendor community. There appears to be a lack of a coherent set of frameworks to abstract, comprehend, and explain this subject area, let alone to build successful systems and technologies with which to apply KM. The study comprises two major parts: 1. The first part investigates the concepts, elements, drivers, and challenges related to KM. A set of models for comprehending these issues and notions is contributed, considering intellectual capital, organisational learning, communities of practice, and best practices. 2. The second part focuses on the technology perspective of KM. Although KM is primarily concerned with non-technical issues, this study concentrates on the technical issues and challenges. A new technology framework for KM is proposed to position and relate the different KM technologies as well as the two key applications of KM, namely knowledge portals and knowledge discovery (including text mining). It is concluded that KM and related concepts need to be firmly understood and effectively positioned and employed to support the modern business organisation in its quest to survive and grow. The main thesis is that KM technology is a necessary but insufficient prerequisite and a key enabler for successful KM in a rapidly changing business environment. / Thesis (PhD (Computer Science))--University of Pretoria, 2010. / Computer Science / unrestricted
263

Inteligentní emailová schránka / Intelligent Mailbox

Pohlídal, Antonín January 2012
This master's thesis deals with the use of text classification for sorting incoming emails. First, it describes Knowledge Discovery in Databases and analyses text classification and selected methods in detail. Further, it describes email communication and the SMTP, POP3, and IMAP protocols. The next part contains the design of a system that classifies incoming emails and describes the related technologies, i.e., Apache James Server, PostgreSQL, and RapidMiner. The implementation of all necessary components is then described. The last part contains experiments with the email server using the Enron Dataset.
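To make the classification step concrete, here is a minimal bag-of-words sketch using scikit-learn. The thesis itself built on RapidMiner and Apache James, so this only illustrates the general idea; the mails and labels below are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training mails; the thesis's experiments used the Enron Dataset.
mails = [
    "meeting scheduled for monday at ten",
    "quarterly report attached please review",
    "win a free prize click this link now",
    "cheap pills limited offer buy today",
]
labels = ["work", "work", "spam", "spam"]

# TF-IDF features + naive Bayes, a common text classification baseline.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(mails, labels)
print(model.predict(["please review the meeting report"]))  # typically ['work']
```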
264

Získávání znalostí z časoprostorových dat / Knowledge Discovery in Spatio-Temporal Data

Pešek, Martin January 2011
This thesis deals with knowledge discovery in spatio-temporal data, currently a rapidly evolving area of research in information technology. First, it describes the general principles of knowledge discovery; then, after a brief introduction to mining in temporal and spatial data, it gives an overview and description of existing methods for mining spatio-temporal data. It concentrates in particular on moving-object data in the form of trajectories, with an emphasis on methods for trajectory outlier detection. The next part of the thesis deals with the implementation of the trajectory outlier detection algorithm TOP-EYE. To test and validate the algorithm and explore its use, an application for trajectory outlier detection was designed and implemented. The algorithm is experimentally evaluated on two different data sets.
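For intuition, the sketch below scores a trajectory by how much each step's heading deviates from an exponentially decayed mean direction. This is only a toy, direction-based score loosely inspired by the idea of an evolving moving direction, not the published TOP-EYE algorithm:

```python
import math

def direction_outlier_score(trajectory, decay=0.8):
    """Accumulate each step's deviation from an exponentially decayed
    mean heading; higher scores suggest a more erratic trajectory."""
    score, mean_dx, mean_dy = 0.0, 0.0, 0.0
    for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:]):
        dx, dy = x1 - x0, y1 - y0
        norm = math.hypot(dx, dy) or 1.0               # avoid division by zero
        dx, dy = dx / norm, dy / norm
        score += 1.0 - (dx * mean_dx + dy * mean_dy)   # 1 - cosine similarity
        mean_dx = decay * mean_dx + (1 - decay) * dx
        mean_dy = decay * mean_dy + (1 - decay) * dy
    return score

straight = [(i, 0) for i in range(10)]
zigzag = [(i, i % 2) for i in range(10)]
print(direction_outlier_score(straight) < direction_outlier_score(zigzag))  # True
```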
265

Získávání znalostí z datových skladů / Knowledge Discovery over Data Warehouses

Pumprla, Ondřej January 2009
This Master's thesis deals with the principles of the data mining process, especially the mining of association rules. It establishes the theoretical apparatus for the general description and principles of data warehouse creation. On the basis of this theoretical knowledge, an application for association rule mining is implemented. The application requires data in transactional form or multidimensional data organised in a star schema. The implemented algorithms for finding frequent patterns are Apriori and FP-tree. The system allows various parameter settings for the mining process. Validation and performance tests were also carried out. In terms of support for association rule searching, the resulting application is more applicable and robust than the existing systems it was compared with, SAS Miner and Oracle Data Miner.
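For reference, the level-wise search that characterizes Apriori can be sketched in a few lines. This toy version works on plain transactional input (the thesis system additionally reads star-schema data) and is not the thesis's implementation:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Toy level-wise Apriori over transactions given as frozensets."""
    n = len(transactions)
    items = sorted({item for t in transactions for item in t})
    frequent = []
    candidates = [frozenset([item]) for item in items]
    while candidates:
        # Count support of each candidate and keep the frequent ones.
        counts = {c: sum(c <= t for t in transactions) for c in candidates}
        survivors = [c for c in candidates if counts[c] / n >= min_support]
        frequent += [(sorted(c), counts[c] / n) for c in survivors]
        # Join step: build candidates one item larger from the survivors.
        candidates = list({a | b for a, b in combinations(survivors, 2)
                           if len(a | b) == len(a) + 1})
    return frequent

tx = [frozenset(t) for t in ({"a", "b"}, {"a", "b", "c"}, {"a", "c"})]
print(apriori(tx, min_support=0.6))
```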
266

Multimodal Data Management in Open-world Environment

K M A Solaiman (16678431) 02 August 2023 (has links)
The availability of abundant multimodal data, including textual, visual, and sensor-based information, holds the potential to improve decision-making in diverse domains. Extracting data-driven decision-making information from heterogeneous and changing datasets in real-world data-centric applications requires achieving complementary functionalities of multimodal data integration, knowledge extraction and mining, situationally-aware data recommendation to different users, and uncertainty management in the open-world setting. To achieve a system that encompasses all of these functionalities, several challenges need to be effectively addressed: (1) How to represent and analyze heterogeneous source contents and application context for multimodal data recommendation? (2) How to predict and fulfill current and future needs as new information streams in, without user intervention? (3) How to integrate disconnected data sources and learn information relevant to specific mission needs? (4) How to scale from processing petabytes of data to exabytes? (5) How to deal with uncertainties in the open world that stem from changes in data sources and user requirements?

This dissertation tackles these challenges by proposing novel frameworks, learning-based data integration and retrieval models, and algorithms to empower decision-makers to extract valuable insights from diverse multimodal data sources. The contributions of this dissertation can be summarized as follows: (1) We developed SKOD, a novel multimodal knowledge querying framework that overcomes the data representation, scalability, and data completeness issues while utilizing streaming brokers and RDBMS capabilities with entity-centric semantic features as an effective representation of content and context. Additionally, as part of the framework, a novel text attribute recognition model called HART was developed, which leveraged language models and syntactic properties of large unstructured texts. (2) Within the SKOD framework, we incrementally proposed three different approaches for data integration of the disconnected sources from their semantic features to build a common knowledge base serving the user information need: (i) EARS, a mediator approach using schema mapping of the semantic features and SQL joins, proposed to address scalability challenges in data integration (a toy sketch follows this abstract); (ii) FemmIR, a data integration approach for more susceptible and flexible applications that utilizes neural-network-based graph matching techniques to learn coordinated graph representations of the data, introducing a novel graph creation approach from the features and a novel similarity metric among data sources; (iii) WeSJem, an approach that allows zero-shot similarity matching and data discovery by using contrastive learning to embed data samples and query examples in a high-dimensional space, using features as a novel source of supervision instead of relevance labels. (3) Finally, to manage uncertainties in multimodal data management for open-world environments, we characterized novelties in multimodal information retrieval based on data drift and proposed a novelty detection and adaptation technique as an augmentation to WeSJem.

The effectiveness of the proposed frameworks, models, and algorithms was demonstrated through real-world system prototypes that solved open problems requiring large-scale human endeavors and computational resources. Specifically, these prototypes assisted law enforcement officers in automating investigations and finding missing persons.
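As a toy illustration of the EARS mediator idea referenced above: map the source-specific feature names of two disconnected sources onto a shared schema, then join them in SQL. All table, column, and value names below are invented, not SKOD's actual schema:

```python
import sqlite3

# Two disconnected sources with different column names for the same
# semantic features (person, place); data is invented.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tweets(person_name TEXT, location TEXT);
    CREATE TABLE cameras(subject TEXT, seen_at TEXT);
    INSERT INTO tweets VALUES ('J. Doe', 'Main St');
    INSERT INTO cameras VALUES ('J. Doe', 'Main St');
""")

# Schema mapping: mediated feature name -> source-specific column name.
mapping = {"tweets":  {"person": "person_name", "place": "location"},
           "cameras": {"person": "subject",     "place": "seen_at"}}

# Mediated query: join the sources on their mapped semantic features.
query = f"""
    SELECT t.{mapping['tweets']['person']}, t.{mapping['tweets']['place']}
    FROM tweets t JOIN cameras c
      ON t.{mapping['tweets']['person']} = c.{mapping['cameras']['person']}
     AND t.{mapping['tweets']['place']}  = c.{mapping['cameras']['place']}
"""
print(con.execute(query).fetchall())  # [('J. Doe', 'Main St')]
```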
267

Dynamic Network Modeling from Temporal Motifs and Attributed Node Activity

Giselle Zeno (16675878) 26 July 2023 (has links)
The most important networks from different domains, such as Computing, Organization, Economic, Social, Academic, and Biology, are networks that change over time. For example, in an organization there are email and collaboration networks (e.g., different people or teams working on a document). Apart from the connectivity of the networks changing over time, they can contain attributes such as the topic of an email or message, the contents of a document, or the interests of a person in an academic citation or a social network. Analyzing these dynamic networks can be critical in decision-making processes. For instance, in an organization, insight into how people from different teams collaborate provides important information that can be used to optimize workflows.

Network generative models provide a way to study and analyze networks. For example, benchmarking model performance and generalization in tasks like node classification can be done by evaluating models on synthetic networks generated with varying structure and attribute correlation. In this work, we begin by presenting our systematic study of the impact of graph structure and attribute auto-correlation on the task of node classification using collective inference. This is the first time such an extensive study has been done. We take advantage of a recently developed method that samples attributed networks (although static) with varying network structure jointly with correlated attributes. We find that the graph connectivity that contributes to the network auto-correlation (i.e., the local relationships of nodes) and density have the highest impact on the performance of collective inference methods.

Most of the literature to date has focused on static representations of networks, partially due to the difficulty of finding readily available datasets of dynamic networks. Dynamic network generative models can bridge this gap by generating synthetic graphs similar to observed real-world networks. Given that motifs have been established as building blocks for the structure of real-world networks, modeling them can help to generate the graph structure seen and capture correlations in node connections and activity. Therefore, we continue with a study of motif evolution in dynamic temporal graphs. Our key insight is that motifs rarely change configurations in fast-changing dynamic networks (e.g., wedges into triangles, and vice versa), but rather keep reappearing at different times while keeping the same configuration. This finding motivates the generative process of our proposed models, using temporal motifs as building blocks, which generates dynamic graphs with links that appear and disappear over time.

Our first proposed model generates dynamic networks based on motif activity and the roles that nodes play in a motif. For example, a wedge is sampled based on the likelihood of one node having the role of hub with the two other nodes being the spokes (see the sketch after this abstract). Our model learns all parameters from observed data, with the goal of producing synthetic graphs with similar graph structure and node behavior. We find that using motifs and node roles helps our model generate the more complex structures and the temporal node behavior seen in real-world dynamic networks.

After observing that using motif node roles helps to capture the changing local structure and behavior of nodes, we extend our work to also consider the attributes generated by nodes' activities. We propose a second generative model for attributed dynamic networks that (i) captures network structure dynamics through temporal motifs, and (ii) extends the structural roles of nodes in motifs to roles that generate content embeddings. Our new proposed model is the first to generate synthetic dynamic networks and sample content embeddings based on motif node roles. To the best of our knowledge, it is the only attributed dynamic network model that can generate new content embeddings, not observed in the input graph but still similar to those of the input graph. Our results show that modeling the network attributes with higher-order structures (e.g., motifs) improves the quality of the networks generated.

The generative models proposed address the difficulty of finding readily available datasets of dynamic networks, attributed or not. This work will also allow others to: (i) generate networks that they can share without divulging individuals' private data, (ii) benchmark model performance, and (iii) explore model generalization on a broader range of conditions, among other uses. Finally, the evaluation measures proposed will elucidate models, allowing fellow researchers to push forward in these domains.
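The following toy sketch illustrates the role-based wedge sampling mentioned above. All node names and probabilities are invented stand-ins for parameters such a model would learn from data:

```python
import random

nodes = ["a", "b", "c", "d"]
hub_prob = {"a": 0.6, "b": 0.2, "c": 0.1, "d": 0.1}    # hypothetical role probabilities
spoke_prob = {"a": 0.1, "b": 0.3, "c": 0.3, "d": 0.3}

def weighted_pick(pool, prob):
    """Draw one node from pool, weighted by its role probability."""
    return random.choices(pool, weights=[prob[v] for v in pool], k=1)[0]

def sample_wedge():
    # Draw a hub, then two distinct spokes; the wedge contributes the
    # edges (hub, s1) and (hub, s2) at the sampled timestep.
    hub = weighted_pick(nodes, hub_prob)
    rest = [v for v in nodes if v != hub]
    s1 = weighted_pick(rest, spoke_prob)
    s2 = weighted_pick([v for v in rest if v != s1], spoke_prob)
    return hub, (s1, s2)

random.seed(7)
print([sample_wedge() for _ in range(3)])
```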
268

PREDICTIVE MODELS TRANSFER FOR IMPROVED HYPERSPECTRAL PHENOTYPING IN GREENHOUSE AND FIELD CONDITIONS

Tanzeel U Rehman (13132704) 21 July 2022 (has links)
Hyperspectral imaging is one of the most popular technologies in plant phenotyping due to its ability to predict plant physiological features such as yield, biomass, leaf moisture, and nitrogen content accurately, non-destructively, and efficiently. Various kinds of hyperspectral imaging systems have been developed in past years for both greenhouse and field phenotyping activities. Developing a plant physiological prediction model, such as one for relative water content (RWC), from hyperspectral imaging data requires machine-learning-based calibration techniques. Convolutional neural networks (CNNs) are known to automatically extract features from raw data, which can lead to highly accurate physiological prediction models. Once a reliable prediction model has been developed, sharing that model across multiple hyperspectral imaging systems is very desirable, since collecting a large number of ground truth labels for predictive model development is expensive and tedious. However, there are always significant differences in imaging sensors and in imaging and environmental conditions between hyperspectral imaging facilities, which makes it difficult to share plant feature prediction models. Calibration transfer between imaging systems is therefore critically important. In this thesis, two approaches were taken to address calibration transfer from the greenhouse to the field. First, direct standardization (DS), piecewise direct standardization (PDS), double window piecewise direct standardization (DPDS), and spectral space transfer (SST) were used to standardize spectral reflectance and minimize artifacts and spectral differences between greenhouse imaging systems. A linear transformation matrix estimated with SST from a small set of plant samples imaged in two facilities reduced the root mean square error (RMSE) of maize physiological feature prediction significantly, from 10.64% to 2.42% for RWC and from 1.84% to 0.11% for nitrogen content. Second, common latent-space features between two greenhouses, or between a greenhouse and a field imaging system, were extracted in an unsupervised fashion. Two models based on deep adversarial domain adaptation were trained, evaluated, and tested. In contrast to the linear standardization approaches developed using the same plant samples imaged in two greenhouse facilities, domain adaptation extracted non-linear features common to the spectra of different imaging systems. Results showed that transferred RWC models reduced the RMSE by up to 45.9% for greenhouse-to-greenhouse calibration transfer and 12.4% for greenhouse-to-field transfer. Plot-scale evaluation of the transferred RWC model showed no significant difference between measurements and predictions. The methods developed and reported in this study can be used to recover performance lost to the spectral differences introduced by a new phenotyping system and to share knowledge among plant phenotyping researchers and scientists.
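To make the SST step concrete, the toy sketch below estimates a linear spectral transformation between two imaging systems from paired samples via least squares. The shapes and data are synthetic, not the thesis's:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_bands = 60, 20
source = rng.random((n_samples, n_bands))            # reflectance on system A
true_T = np.eye(n_bands) + 0.01 * rng.random((n_bands, n_bands))
target = source @ true_T                             # same samples on system B

# Least-squares estimate of T such that source @ T ~= target.
T, *_ = np.linalg.lstsq(source, target, rcond=None)
print(np.allclose(source @ T, target))               # True on this toy data
```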
269

Deep Learning Based Models for Cognitive Autonomy and Cybersecurity Intelligence in Autonomous Systems

Ganapathy Mani (8840606) 21 June 2022 (has links)
Cognitive autonomy of an autonomous system depends on its cyber module's ability to comprehend the actions and intent of the applications and services running on that system. The autonomous system should be able to accomplish this with limited or no human intervention. These mission-critical autonomous systems are often deployed in unpredictable and dynamic environments and are vulnerable to evasive cyberattacks. In particular, some of these cyberattacks are Advanced Persistent Threats, where an attacker conducts reconnaissance for a long period of time to ascertain system features, learn system defenses, and adapt to successfully execute the attack while evading detection. Thus an autonomous system's cognitive autonomy and cybersecurity intelligence depend on its capability to learn, classify applications (good and bad), predict the attacker's next steps, and remain operational to carry out mission-critical tasks even under cyberattack. In this dissertation, we propose novel learning and prediction models for enhancing cognitive autonomy and cybersecurity in autonomous systems. We develop (1) a model using deep learning, along with a model selection framework, that can classify benign and malicious operating contexts of a system based on performance counters; (2) a deep-learning-based natural language processing model that uses instruction sequences extracted from memory to learn and profile the behavior of evasive malware (a toy sketch of this idea follows this abstract); (3) a scalable deep-learning-based object detection model with data pre-processing assisted by fuzzy-based clustering; (4) fundamental guiding principles for cognitive autonomy using Artificial Intelligence (AI); (5) a model for privacy-preserving autonomous data analytics; and finally (6) a model for backup and replication based on combinatorial balanced incomplete block design, in order to provide continuous availability in mission-critical systems. This research provides effective and computationally efficient deep-learning-based solutions for detecting evasive cyberattacks and increasing the autonomy of a system from the application level to the hardware level.
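As a toy illustration of contribution (2), instruction sequences can be treated as text and classified with a bag-of-n-grams model. The traces, labels, and model below are invented and far simpler than the dissertation's deep learning approach:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented instruction traces; real ones would be extracted from memory.
traces = [
    "push mov call ret",
    "mov add mov ret",
    "xor jmp xor jmp xor jmp",   # evasive-looking obfuscation loop
    "nop nop jmp xor jmp",
]
labels = ["benign", "benign", "malicious", "malicious"]

# Unigram + bigram counts over opcodes, then a linear classifier.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(traces, labels)
print(model.predict(["xor jmp xor jmp"]))  # typically ['malicious']
```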
270

Dolování z dat v prostředí informačního systému K2 / Data Mining in K2 Information System

Figura, Petr Unknown Date
This project originated with the K2 atmitec Brno s.r.o. company. The result is a data mining module in the K2 information system environment. The engineered module implements association analysis over data from the K2 information system data warehouse. The analyzed data contain information about sales recorded in the K2 information system. The module implements consumer (market) basket analysis.
