81

The importance of frequent flyer programmes in airline selection : a survey of corporate travel management in large-sized corporations in South Africa

Wieme, Lesley Liliane Patricia 13 May 2011 (has links)
In many organisations, air travel is an essential part of conducting business in order to meet company objectives and goals. The selection of a preferred airline is a complex undertaking. Corporations may obtain discounts based on expenditure commitments; the airline may have a frequent flyer programme; or a low-cost carrier may offer a better alternative to full-service carriers. The literature on corporate travel is fairly limited, and the determinants of airline selection have mostly been studied from other perspectives, such as that of the business traveller; this study should therefore make a significant contribution to the field by generating new information on corporate travel and, in particular, corporate air travel decisions. The literature review provides a demarcation of the broad concepts of the buying approach towards corporate air travel. Also discussed is the relationship between the key role players in airline selection: the corporate traveller, the travel management company and the airline. Various determinants of airline selection by corporations are identified, and the role of frequent flyer programmes is analysed. Furthermore, the move towards low-cost airlines as a preferred choice for corporate travel is investigated. The empirical phase of the research study focused on identifying the determinants of airline selection by large-sized corporations in South Africa, the relative importance of frequent flyer programmes, and the move towards low-cost airlines as a preferred choice for corporate travel. The target population was sampled using a non-probability convenience sampling approach, with a newly designed quantitative, ex post facto web-based questionnaire distributed via e-mail. Exploratory factor analysis was performed to determine whether an underlying structure of airline selection determinants exists from which the relative importance of frequent flyer programmes could be assessed. From the results, a model of corporate airline selection determinants was derived and compared to the conceptual model formulated from the literature survey. A number of important selection determinants were identified, and it became evident that frequent flyer programmes are generally not considered a decisive determinant in the selection of a preferred airline by corporations. However, the influence of low-cost airlines was shown to be considerable, in line with the endeavour to save on air travel expenses within a corporate air travel management programme. The findings should assist corporations and airlines with the design of their air travel buying approaches and marketing strategies respectively. / Dissertation (MCom)--University of Pretoria, 2010. / Tourism Management / Unrestricted
82

"Välkänd patient" - mångbesökares upplevelse av bemötande inom akutsjukvård : en litteraturöversikt / "Well known patient" - frequent attenders´experiences of reception within emergency care : a literature review

Binback, Jenny January 2016 (has links)
The number of visits to emergency care increases every year. This places higher demands on emergency care staff's competence and their ability to receive and treat patients well. Such reception tends to be a unique, subjective experience that can be of great significance to the individual. It encompasses written language, tone of voice, face-to-face interaction, body language, and body language combined with touch, each with different conditions, advantages and limitations. Six percent of Stockholm County Council's approximately 550,000 emergency visits per year are made by a patient group that has made four or more emergency visits within one year. This patient group is referred to as frequent attenders and accounts for one fifth of all visits to emergency departments, statistics that agree well with international research. Similar behaviour is found in primary care and psychiatry. It is a vulnerable patient group that evokes divided opinions among healthcare staff, although all agree that frequent attenders require more time, resources and effort. The aim was to describe frequent attenders' experiences of reception within emergency care. A literature review was chosen as the method, with relevant articles collected from the databases PubMed, CINAHL and PsycINFO. The results of the literature review described frequent attenders' conditions and expectations of emergency care, reception in emergency care, and the relationship between patient and healthcare staff. In summary, frequent attenders tend to be vulnerable individuals who have difficulty managing anxious thoughts and are easily overwhelmed by emotional stress. When they choose to seek care, their perception is characterised by fear and worry. Emergency care is experienced as a safe place, where frequent attenders want to feel respected and valued. Poor reception resulted in a feeling of being unworthy of care, continued mistrust, and a subconscious strategy in which patients avoided taking an active part in their own healthcare process. The results also show a discrepancy between patients and healthcare staff regarding perceptions and expectations of the care and the reception.
83

[pt] MINERAÇÃO DE ITENS FREQUENTES EM SEQUÊNCIAS DE DADOS: UMA IMPLEMENTAÇÃO EFICIENTE USANDO VETORES DE BITS / [en] MINING FREQUENT ITEMSETS IN DATA STREAMS: AN EFFICIENT IMPLEMENTATION USING BIT VECTORS

FRANKLIN ANDERSON DE AMORIM 11 February 2016 (has links)
[pt] A mineração de conjuntos de itens frequentes em sequências de dados possui diversas aplicações práticas como, por exemplo, análise de comportamento de usuários, teste de software e pesquisa de mercado. Contudo, a grande quantidade de dados gerada pode representar um obstáculo para o processamento dos mesmos em tempo real e, consequentemente, na sua análise e tomada de decisão. Sendo assim, melhorias na eficiência dos algoritmos usados para estes fins podem trazer grandes benefícios para os sistemas que deles dependem. Esta dissertação apresenta o algoritmo MFI-TransSWmais, uma versão otimizada do algoritmo MFI-TransSW, que utiliza vetores de bits para processar sequências de dados em tempo real. Além disso, a dissertação descreve a implementação de um sistema de recomendação de matérias jornalísticas, chamado ClickRec, baseado no MFI-TransSWmais, para demonstrar o uso da nova versão do algoritmo. Por último, a dissertação descreve experimentos com dados reais e apresenta resultados da comparação de performance dos dois algoritmos e dos acertos do sistema de recomendações ClickRec. / [en] The mining of frequent itemsets in data streams has several practical applications, such as user behavior analysis, software testing and market research. Nevertheless, the massive amount of data generated may pose an obstacle to processing them in real time and, consequently, to their analysis and decision making. Thus, improvements in the efficiency of the algorithms used for these purposes may bring great benefits to the systems that depend on them. This thesis presents the MFI-TransSWplus algorithm, an optimized version of the MFI-TransSW algorithm, which uses bit vectors to process data streams in real time. In addition, the thesis describes the implementation of a news article recommendation system, called ClickRec, based on MFI-TransSWplus, to demonstrate the use of the new version of the algorithm. Finally, the thesis describes experiments with real data and presents a performance comparison of the two algorithms as well as the hit rate of the ClickRec recommendation system.
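The bit-vector representation mentioned in both abstracts can be illustrated with a minimal sketch, assuming a sliding window over the most recent transactions: each item keeps one bit per window slot, and the support of an itemset is the popcount of the bitwise AND of its items' vectors. This is an illustration of the general technique, not the MFI-TransSWplus implementation described in the dissertation.

```python
# Minimal sketch of bit-vector support counting over a sliding window.
# Illustrative only -- not the MFI-TransSWplus implementation described above.

class SlidingWindowIndex:
    def __init__(self, window_size):
        self.window_size = window_size      # number of transactions kept
        self.bitvectors = {}                # item -> int used as a bit vector
        self.count = 0                      # transactions seen so far

    def add_transaction(self, items):
        """Shift every bit vector left by one and set bit 0 for present items."""
        mask = (1 << self.window_size) - 1
        for item in self.bitvectors:
            self.bitvectors[item] = (self.bitvectors[item] << 1) & mask
        for item in items:
            self.bitvectors[item] = self.bitvectors.get(item, 0) | 1
        self.count += 1

    def support(self, itemset):
        """Support of an itemset = popcount of the AND of its bit vectors."""
        acc = (1 << self.window_size) - 1
        for item in itemset:
            acc &= self.bitvectors.get(item, 0)
        return bin(acc).count("1")

# Usage: a window holding the 3 most recent transactions
idx = SlidingWindowIndex(window_size=3)
for t in [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}]:
    idx.add_transaction(t)
print(idx.support({"a", "b"}))   # 2: {a, b} occurs in the 1st and 3rd transactions
```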
84

Frequent Subgraph Analysis and its Software Engineering Applications

Henderson, Tim A. D. 06 September 2017 (has links)
No description available.
85

New techniques for efficiently discovering frequent patterns

Jin, Ruoming 01 August 2005 (has links)
No description available.
86

ACTION : Adaptive Cache Block Migration in Distributed Cache Architectures

Mummidi, Chandra Sekhar 20 October 2021 (has links)
An increasing number of cores in chip multiprocessors (CMPs) results in increasing traffic to the last-level cache (LLC). Without a commensurate increase in LLC bandwidth, such traffic cannot be sustained, resulting in a loss of performance. Further, as the number of cores increases, it is necessary to scale up the LLC size; otherwise, the LLC miss rate will rise, resulting in a loss of performance. Unfortunately, for a unified LLC with uniform cache access time, access latency increases with cache size, resulting in performance loss. Previously, researchers have proposed partitioning the cache into multiple smaller caches interconnected by a communication network, which increases aggregate cache bandwidth but causes non-uniform access latency. Such a cache architecture is called non-uniform cache architecture (NUCA). While NUCA addresses the LLC bandwidth issue, partitioning by itself does not address the access latency problem. Consequently, researchers have previously considered data placement techniques to improve access latency. However, earlier data placement work did not account for the frequency with which specific memory references are accessed. A major reason is that the access frequency of all memory references is difficult to track. In this research, we present a hardware-assisted solution called ACTION (Adaptive Cache Block Migration) to track the access frequency of individual memory references and prioritize their placement closer to the affine core. The ACTION mechanism migrates cache blocks when there is a detectable change in access frequencies due to a change in program phase. To keep the hardware overhead low, ACTION counts access references in the LLC stream using a simple and approximate method, and uses simple algorithms for placement and migration. We tested ACTION on a 4-core CMP with a 5x5 mesh LLC network implementing a partitioned D-NUCA against workloads exhibiting distinct asymmetry in cache block access frequency. Our simulation results indicate that ACTION can improve CMP performance by as much as 8% over state-of-the-art (SOTA) solutions.
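The abstract does not spell out the counting hardware, so the following is only a software analogy of frequency-guided block migration; the counter table size, threshold, and bank-mapping policy are hypothetical assumptions, not the ACTION design itself.

```python
# Software analogy of frequency-guided block migration. The real ACTION
# mechanism is hardware; the table size, threshold and hashing here are
# illustrative assumptions, not values from the dissertation.

NUM_COUNTERS = 1024          # small, fixed-size counter table (assumption)
COUNTER_MAX = 255            # saturating 8-bit counters (assumption)
MIGRATE_THRESHOLD = 64       # promote "hot" blocks past this count (assumption)

counters = [0] * NUM_COUNTERS
home_bank = {}               # block address -> LLC bank currently holding it

def bank_near(core_id):
    """Hypothetical mapping from a core to its closest LLC bank."""
    return core_id            # placeholder policy

def on_llc_access(block_addr, core_id):
    """Approximately count accesses; migrate blocks that become hot."""
    slot = hash(block_addr) % NUM_COUNTERS        # approximate: collisions allowed
    counters[slot] = min(counters[slot] + 1, COUNTER_MAX)
    if counters[slot] >= MIGRATE_THRESHOLD:
        target = bank_near(core_id)
        if home_bank.get(block_addr) != target:
            home_bank[block_addr] = target        # model a block migration
            counters[slot] = 0                    # restart counting after a move

# Usage: core 2 hammers one block until it migrates to core 2's nearest bank
for _ in range(70):
    on_llc_access(0xDEADBEE0, core_id=2)
print(home_bank[0xDEADBEE0])  # 2
```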
87

Multiple Uses of Frequent Episodes in Temporal Process Modeling

Patnaik, Debprakash 19 August 2011 (has links)
This dissertation investigates algorithmic techniques for temporal process discovery in many domains. Many different formalisms have been proposed for modeling temporal processes, such as motifs, dynamic Bayesian networks and partial orders, but the direct inference of such models from data has been computationally intensive or even intractable. In this work, we propose the mining of frequent episodes as a bridge to inferring more formal models of temporal processes. This enables us to combine the advantages of frequent episode mining, which conducts a level-wise search over constrained spaces, with the formal basis of process representations such as probabilistic graphical models and partial orders. We also investigate the mining of frequent episodes in infinite data streams, which further expands their applicability into many modern data mining contexts. To demonstrate the usefulness of our methods, we apply them in different problem contexts such as sensor networks in data centers, multi-neuronal spike train analysis in neuroscience, and electronic medical records in medical informatics. / Ph. D.
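As a point of reference for the episode-mining terminology used above, the sketch below counts non-overlapped occurrences of a serial episode in an event sequence, one common frequency measure in frequent episode mining. It is illustrative only and not taken from the dissertation's algorithms.

```python
# Minimal sketch of counting non-overlapped occurrences of a serial episode
# (e.g. A -> B -> C) in a timestamped event sequence. Illustrative only.

def non_overlapped_count(events, episode):
    """events: iterable of (timestamp, event_type); episode: list of event types."""
    count = 0
    pos = 0                       # index of the next episode element we wait for
    for _, etype in sorted(events):
        if etype == episode[pos]:
            pos += 1
            if pos == len(episode):
                count += 1        # one complete occurrence; start looking again
                pos = 0
    return count

stream = [(1, "A"), (2, "C"), (3, "B"), (4, "A"), (5, "B"), (6, "B")]
print(non_overlapped_count(stream, ["A", "B"]))   # 2 occurrences of A -> B
```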
88

高效率常見超集合探勘演算法之研究 / Efficient Algorithms for the Discovery of Frequent Superset

廖忠訓, Liao, Zhung-Xun Unknown Date (has links)
過去對於探勘常見項目集的研究僅限於找出資料庫中交易紀錄的子集合，在這篇論文中，我們提出一個新的探勘主題：常見超集合探勘。常見超集合意指它包含資料庫中各筆紀錄的筆數多於最小門檻值，而原本用來探勘常見子集合的演算法並無法直接套用，因此我們以補集合的角度，提出了三個快速的演算法來解決這個新的問題。首先為Apriori-C：此為使用先廣後深搜尋的演算法，並且以掃描資料庫的方式來決定具有相同長度之候選超集合的支持度，第二個方法是Eclat-C：此為採用先深後廣搜尋的演算法，並且搭配交集法來計算倏選超集合的支持度，最後是DCT：此方法可利用過去常見子集合探勘的演算法來進行探勘，如此可以省下開發新系統的成本。 常見超集合的探勘可以應用在電子化的遠距學習系統，生物資訊及工作排程的問題上。尤其在線上學習系統，我們可以利用常見超集合來代表一群學生的學習行為，並且藉以預測學生的學習成就，使得老師可以及時發現學生的學習迷失等行為；此外，透過常見超集合的探勘，我們也可以為學生推薦個人化的課程，以達到因材施教的教學目標。 在實驗的部份，我們比較了各演算法的效率，並且分別改變實驗資料庫的下列四種變因：1) 交易資料的筆數、2) 每筆交易資料的平均長度、3) 資料庫中項目的總數和4) 最小門檻值。在最後的分析當中，可以清楚地看出我們提出的各種方法皆十分有效率並且具有可延伸性。 / The algorithms for the discovery of frequent itemsets have been investigated widely. These frequent itemsets are subsets of the database. In this thesis, we propose a novel mining task: mining frequent supersets from a database of itemsets, which is useful in bioinformatics, e-learning systems, job-shop scheduling, and so on. A frequent superset is one for which the number of transactions contained in it is not less than the minimum support threshold. Intuitively, according to the Apriori algorithm, the level-wise discovery starts from 1-itemsets, 2-itemsets, and so forth. However, such steps cannot utilize the Apriori property to reduce the search space, because if an itemset is not frequent, its supersets may still be frequent. In order to solve this problem, we propose three methods. The first is an Apriori-based approach, called Apriori-C. The second is an Eclat-based approach, called Eclat-C, which is a depth-first approach. The last is the proposed data complement technique (DCT), which lets us apply existing frequent itemset mining approaches to discover frequent supersets. The experimental studies compare the performance of the proposed three methods by considering the effect of the number of transactions, the average length of transactions, the number of different items, and the minimum support. The analysis shows that the proposed algorithms are time-efficient and scalable.
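The complement perspective described above can be made concrete: a transaction T is contained in a candidate superset X exactly when I \ X is contained in I \ T (with I the full item universe), so frequent supersets of the original database correspond to frequent itemsets of the complemented database. The sketch below illustrates this reading of the DCT idea with a brute-force miner standing in for Apriori or Eclat; it is not the authors' code.

```python
# Sketch of the data-complement idea for frequent superset mining: a transaction
# T is contained in a superset X exactly when I \ X is contained in I \ T, so an
# ordinary frequent itemset miner can be reused on the complemented database.
# Illustrative only; the brute-force miner stands in for Apriori/Eclat.

from itertools import combinations

def frequent_itemsets(db, items, minsup):
    """Brute-force frequent (sub)itemset mining used as a stand-in miner."""
    result = {}
    for k in range(len(items) + 1):
        for cand in combinations(sorted(items), k):
            support = sum(1 for t in db if set(cand) <= t)
            if support >= minsup:
                result[frozenset(cand)] = support
    return result

def frequent_supersets(db, items, minsup):
    """Frequent supersets of db via frequent itemsets of the complemented db."""
    complemented = [items - t for t in db]
    mined = frequent_itemsets(complemented, items, minsup)
    # Map each frequent itemset Y of the complemented db back to X = I \ Y.
    return {items - y: s for y, s in mined.items()}

items = frozenset("abcd")
db = [{"a"}, {"a", "b"}, {"c"}]
for superset, support in sorted(frequent_supersets(db, items, 2).items(),
                                key=lambda p: sorted(p[0])):
    print(sorted(superset), support)
# ['a', 'b'] 2, ['a', 'b', 'c'] 3, ['a', 'b', 'c', 'd'] 3, ['a', 'b', 'd'] 2, ...
```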
89

Datenzentrierte Bestimmung von Assoziationsregeln in parallelen Datenbankarchitekturen

Legler, Thomas 15 August 2009 (has links) (PDF)
Die folgende Arbeit befasst sich mit der Alltagstauglichkeit moderner Massendatenverarbeitung, insbesondere mit dem Problem der Assoziationsregelanalyse. Vorhandene Datenmengen wachsen stark an, aber deren Auswertung ist für ungeübte Anwender schwierig. Daher verzichten Unternehmen auf Informationen, welche prinzipiell vorhanden sind. Assoziationsregeln zeigen in diesen Daten Abhängigkeiten zwischen den Elementen eines Datenbestandes, beispielsweise zwischen verkauften Produkten. Diese Regeln können mit Interessantheitsmaßen versehen werden, welche dem Anwender das Erkennen wichtiger Zusammenhänge ermöglichen. Es werden Ansätze gezeigt, dem Nutzer die Auswertung der Daten zu erleichtern. Das betrifft sowohl die robuste Arbeitsweise der Verfahren als auch die einfache Auswertung der Regeln. Die vorgestellten Algorithmen passen sich dabei an die zu verarbeitenden Daten an, was sie von anderen Verfahren unterscheidet. Assoziationsregelsuchen benötigen die Extraktion häufiger Kombinationen (EHK). Hierfür werden Möglichkeiten gezeigt, Lösungsansätze auf die Eigenschaften moderne System anzupassen. Als Ansatz werden Verfahren zur Berechnung der häufigsten $N$ Kombinationen erläutert, welche anders als bekannte Ansätze leicht konfigurierbar sind. Moderne Systeme rechnen zudem oft verteilt. Diese Rechnerverbünde können große Datenmengen parallel verarbeiten, benötigen jedoch die Vereinigung lokaler Ergebnisse. Für verteilte Top-N-EHK auf realistischen Partitionierungen werden hierfür Ansätze mit verschiedenen Eigenschaften präsentiert. Aus den häufigen Kombinationen werden Assoziationsregeln gebildet, deren Aufbereitung ebenfalls einfach durchführbar sein soll. In der Literatur wurden viele Maße vorgestellt. Je nach den Anforderungen entsprechen sie je einer subjektiven Bewertung, allerdings nicht zwingend der des Anwenders. Hierfür wird untersucht, wie mehrere Interessantheitsmaßen zu einem globalen Maß vereinigt werden können. Dies findet Regeln, welche mehrfach wichtig erschienen. Der Nutzer kann mit den Vorschlägen sein Suchziel eingrenzen. Ein zweiter Ansatz gruppiert Regeln. Dies erfolgt über die Häufigkeiten der Regelelemente, welche die Grundlage von Interessantheitsmaßen bilden. Die Regeln einer solchen Gruppe sind daher bezüglich vieler Interessantheitsmaßen ähnlich und können gemeinsam ausgewertet werden. Dies reduziert den manuellen Aufwand des Nutzers. Diese Arbeit zeigt Möglichkeiten, Assoziationsregelsuchen auf einen breiten Benutzerkreis zu erweitern und neue Anwender zu erreichen. Die Assoziationsregelsuche wird dabei derart vereinfacht, dass sie statt als Spezialanwendung als leicht nutzbares Werkzeug zur Datenanalyse verwendet werden kann. / The importance of data mining is widely acknowledged today. Mining for association rules and frequent patterns is a central activity in data mining. Three main strategies are available for such mining: APRIORI , FP-tree-based approaches like FP-GROWTH, and algorithms based on vertical data structures and depth-first mining strategies like ECLAT and CHARM. Unfortunately, most of these algorithms are only moderately suitable for many “real-world” scenarios because their usability and the special characteristics of the data are two aspects of practical association rule mining that require further work. All mining strategies for frequent patterns use a parameter called minimum support to define a minimum occurrence frequency for searched patterns. This parameter cuts down the number of patterns searched to improve the relevance of the results. 
In complex business scenarios, it can be difficult and expensive to define a suitable value for the minimum support because it depends strongly on the particular datasets. Users are often unable to set this parameter for unknown datasets, and unsuitable minimum-support values can extract millions of frequent patterns and generate enormous runtimes. For this reason, it is not feasible to permit ad-hoc data mining by unskilled users. Such users do not have the knowledge and time to define suitable parameters by trial-and-error procedures. Discussions with users of SAP software have revealed great interest in the results of association-rule mining techniques, but most of these users are unable or unwilling to set very technical parameters. Given such user constraints, several studies have addressed the problem of replacing the minimum-support parameter with more intuitive top-n strategies. We have developed an adaptive mining algorithm to give untrained SAP users a tool to analyze their data easily without the need for elaborate data preparation and parameter determination. Previously implemented approaches of distributed frequent-pattern mining were expensive and time-consuming tasks for specialists. In contrast, we propose a method to accelerate and simplify the mining process by using top-n strategies and relaxing some requirements on the results, such as completeness. Unlike such data approximation techniques as sampling, our algorithm always returns exact frequency counts. The only drawback is that the result set may fail to include some of the patterns up to a specific frequency threshold. Another aspect of real-world datasets is the fact that they are often partitioned for shared-nothing architectures, following business-specific parameters like location, fiscal year, or branch office. Users may also want to conduct mining operations spanning data from different partners, even if the local data from the respective partners cannot be integrated at a single location for data security reasons or due to their large volume. Almost every data mining solution is constrained by the need to hide complexity. As far as possible, the solution should offer a simple user interface that hides technical aspects like data distribution and data preparation. Given that BW Accelerator users have such simplicity and distribution requirements, we have developed an adaptive mining algorithm to give unskilled users a tool to analyze their data easily, without the need for complex data preparation or consolidation. For example, Business Intelligence scenarios often partition large data volumes by fiscal year to enable efficient optimizations for the data used in actual workloads. For most mining queries, more than one data partition is of interest, and therefore, distribution handling that leaves the data unaffected is necessary. The algorithms presented in this paper have been developed to work with data stored in SAP BW. A salient feature of SAP BW Accelerator is that it is implemented as a distributed landscape that sits on top of a large number of shared-nothing blade servers. Its main task is to execute OLAP queries that require fast aggregation of many millions of rows of data. Therefore, the distribution of data over the dedicated storage is optimized for such workloads. Data mining scenarios use the same data from storage, but reporting takes precedence over data mining, and hence, the data cannot be redistributed without massive costs. 
Distribution by special data semantics or user-defined selections can produce many partitions with very different partition sizes. The handling of such real-world distributions for frequent-pattern mining is an important task, but it conflicts with the requirement of balanced partitions.
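To make the distributed top-N setting discussed above concrete, the sketch below shows the naive baseline, assuming each partition can ship its exact local counts to a coordinator that keeps the N globally most frequent items; the thesis's algorithms instead relax completeness precisely to avoid this kind of full exchange.

```python
# Hedged sketch of top-N frequent item counting over shared-nothing partitions:
# each partition counts locally, a coordinator merges the exact local counts and
# keeps the N globally most frequent items. This naive merge illustrates the
# setting only; it is not the adaptive algorithm described in the thesis.

from collections import Counter
from heapq import nlargest

def local_counts(partition):
    """Exact item counts for one partition (a list of transactions)."""
    counts = Counter()
    for transaction in partition:
        counts.update(set(transaction))
    return counts

def global_top_n(partitions, n):
    merged = Counter()
    for part in partitions:
        merged.update(local_counts(part))     # in reality: shipped over the network
    return nlargest(n, merged.items(), key=lambda kv: kv[1])

partitions = [
    [{"a", "b"}, {"a", "c"}],                 # e.g. fiscal year 1
    [{"a"}, {"b", "c"}, {"b"}],               # e.g. fiscal year 2
]
print(global_top_n(partitions, 2))            # [('a', 3), ('b', 3)]
```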
90

Získávání frekventovaných vzorů z proudu dat / Frequent Pattern Discovery in a Data Stream

Dvořák, Michal January 2012 (has links)
Frequent-pattern mining from databases has been widely studied and is frequently applied. Unfortunately, these algorithms are not suitable for data stream processing. When mining frequent patterns from data streams, it is important to manage the sets of items and also their history. There are several reasons for this: it is not just the history of frequent itemsets that matters, but also the history of potentially frequent sets that can become frequent later. This requires more memory and computational power. This thesis describes two algorithms: Lossy Counting and FP-stream. An effective implementation of these algorithms in C# is an integral part of this thesis. In addition, the two algorithms have been compared.
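Of the two algorithms, Lossy Counting admits a compact sketch. The following is a simplification for single items, after Manku and Motwani; the thesis's C# implementation and the FP-stream comparison additionally handle itemsets and their history, which this sketch does not.

```python
# Minimal sketch of Lossy Counting for single items (after Manku & Motwani).
# A simplification for illustration; it does not handle itemsets or time-
# sensitive histories the way the thesis's implementations do.

from math import ceil

class LossyCounter:
    def __init__(self, epsilon):
        self.epsilon = epsilon
        self.width = ceil(1 / epsilon)     # bucket width
        self.n = 0                         # stream length so far
        self.entries = {}                  # item -> (frequency, max_error)

    def add(self, item):
        self.n += 1
        bucket = ceil(self.n / self.width)
        freq, delta = self.entries.get(item, (0, bucket - 1))
        self.entries[item] = (freq + 1, delta)
        if self.n % self.width == 0:       # bucket boundary: prune weak entries
            self.entries = {i: (f, d) for i, (f, d) in self.entries.items()
                            if f + d > bucket}

    def frequent(self, support):
        """Items whose true frequency may exceed support * n."""
        threshold = (support - self.epsilon) * self.n
        return [i for i, (f, _) in self.entries.items() if f >= threshold]

lc = LossyCounter(epsilon=0.1)
for item in "abababacccccccccc":
    lc.add(item)
print(lc.frequent(support=0.3))            # ['a', 'c']
```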
