  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

運用充分資料縮減法於基因組分析 / Application of the Sufficient Dimension Reduction to Gene Set Analysis

蔡志旻, Tsai, Chih Min Unknown Date (has links)
生物現象多是由許多基因共同作用產生的結果,以基因組分析方法探討外顯特徵變數與基因組的相關性將更能幫助研究人員了解生物體的作用機制。目前已發展的基因組分析方法大多是針對離散型態的外顯特徵變數,在臨床醫學上,很多疾病的外顯特徵為連續型變數。本研究之目的即為發展運用在連續型外顯特徵變數的基因組分析方法。本文將考慮切片平均變異數估計法進行充分維度縮減的方法,原先被用來決定原始資料被縮減的程度之邊際維度檢定法將被運用於基因組分析方法。除了原有的邊際維度檢定法之外,我們另提出一改良的邊際維度檢定法,並以排列重抽法獲得這兩種檢定方法之排列顯著值。本文將透過電腦模擬以及實例分析來評估兩種邊際維度檢定法,同時也將列入Dinu等學者(2013)所發展的線性組合檢定法之結果以作為比較。 / Most biological phenomena result from the joint action of many genes, so gene set analysis methods that relate a phenotype variable to gene sets help researchers understand the underlying mechanisms. Most existing gene set analysis methods target discrete phenotype variables, yet in clinical medicine many disease phenotypes are continuous. The aim of this study is to develop gene set analysis methods for continuous phenotype variables. We consider sufficient dimension reduction via sliced average variance estimation, and adapt its marginal dimension test, originally used to decide how far the original data can be reduced, to gene set analysis. Besides the original marginal dimension test, we propose a modified marginal dimension test, and obtain permutation p-values for both tests by permutation resampling. Both tests are evaluated through computer simulations and a real-data analysis, with the linear combination test developed by Dinu et al. (2013) included for comparison.
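A rough sketch of the machinery this abstract describes — sliced average variance estimation (SAVE) with a permutation p-value — might look as follows. This is an illustrative simplification (a trace statistic and a naive permutation loop), not the thesis's actual marginal dimension test or its modified version:

```python
import numpy as np

def save_statistic(X, y, n_slices=5):
    """SAVE-style statistic: slice observations by the continuous response y,
    average (I - Cov(Z | slice))^2 over slices, and return its trace."""
    n, p = X.shape
    L = np.linalg.cholesky(np.cov(X.T))        # Sigma = L L^T
    Z = (X - X.mean(0)) @ np.linalg.inv(L).T   # standardized predictors, Cov(Z) = I
    M = np.zeros((p, p))
    for s in np.array_split(np.argsort(y), n_slices):
        D = np.eye(p) - np.cov(Z[s].T)
        M += (len(s) / n) * D @ D
    return np.trace(M)

def permutation_pvalue(X, y, n_perm=200, seed=0):
    """Permutation p-value: permuting y breaks any X-y association,
    giving a null reference distribution for the statistic."""
    rng = np.random.default_rng(seed)
    t_obs = save_statistic(X, y)
    t_null = [save_statistic(X, rng.permutation(y)) for _ in range(n_perm)]
    return (1 + sum(t >= t_obs for t in t_null)) / (1 + n_perm)
```

Under the null, slicing by a permuted response leaves the conditional covariances near the identity, so the statistic stays small; a genuine X-y association inflates it.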
12

維度縮減應用於蛋白質質譜儀資料 / Dimension Reduction on Protein Mass Spectrometry Data

黃靜文, Huang, Ching-Wen Unknown Date (has links)
本文應用攝護腺癌症蛋白質資料庫,是經由表面強化雷射解吸電離飛行質譜技術的血清蛋白質強度資料,藉此資料判斷受測者是否罹患癌症。此資料庫之受測者包含正常、良腫、癌初和癌末四種類別,其中包括兩筆資料,一筆為包含約48000個區間資料(變數)之原始資料,另一筆為經由人工變數篩選後,僅剩餘779區間資料(變數)之人工處理資料,此兩筆皆為高維度資料,皆約有650個觀察值。高維度資料因變數過多,除了分析不易外,亦造成運算時間較長。故本研究目的即探討在有效的維度縮減方式下,找出最小化分錯率的方法。 本研究先比較分類方法-支持向量機、類神經網路和分類迴歸樹之優劣,再將較優的分類方法:支持向量機和類神經網路,應用於維度縮減資料之分類。本研究採用之維度縮減方法,包含離散小波分析、主成份分析和主成份分析網路。根據分析結果,離散小波分析和主成份分析表現較佳,而主成份分析網路差強人意。 本研究除探討以上維度縮減方法對此病例資料庫分類之成效外,亦結合線性維度縮減-主成份分析,非線性維度縮減-主成份分析網路,希望能藉重疊法再改善僅做單一維度縮減方法之病例篩檢分錯率,根據分析結果,重疊法對原始資料改善效果不明顯,但對人工處理資料卻有明顯的改善效果。 / In this paper, we study a serum protein data set for prostate cancer acquired by the Surface-Enhanced Laser Desorption/Ionization Time-of-Flight Mass Spectrometry (SELDI-TOF-MS) technique, and use it to decide whether a subject has cancer. The subjects fall into four populations: normal, benign tumor, early-stage cancer, and late-stage cancer. The data set comes in two versions: the raw data with around 48000 interval variables, and a preprocessed version with 779 variables left after manual variable screening; both are high-dimensional, each with around 650 observations. So many variables make the data hard to analyze and slow to compute on, so the goal of this study is to find dimension reduction methods that minimize the classification error rate. We first compare three classification methods: support vector machine, artificial neural network, and classification and regression tree; the two stronger performers, support vector machine and artificial neural network, are then applied to classify the reduced data. For dimension reduction we use the discrete wavelet transform, principal component analysis, and principal component analysis networks; the first two perform well, while the networks are only barely satisfactory. Beyond evaluating these individual methods, we also propose an overlap method that combines linear reduction (principal component analysis) with nonlinear reduction (principal component analysis networks) to further lower the screening error rate. The improvement from the overlap method is clear on the preprocessed data but not on the raw data.
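The reduce-then-classify pipeline discussed above can be sketched minimally. PCA is computed via SVD, but the nearest-centroid classifier below is a stand-in of mine for the SVM and neural-network classifiers the thesis actually compares:

```python
import numpy as np

def pca_reduce(X, k):
    """Project X onto its first k principal components (via SVD of the
    centered data); returns the scores and the component directions."""
    Xc = X - X.mean(0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]

def nearest_centroid(train_X, train_y, test_X):
    """Toy classifier: assign each test point to the closest class mean."""
    labels = np.unique(train_y)
    cents = np.array([train_X[train_y == c].mean(0) for c in labels])
    d = ((test_X[:, None, :] - cents[None]) ** 2).sum(-1)
    return labels[d.argmin(1)]
```

On mass-spectrometry-scale data the point of the reduction step is exactly what the abstract says: with tens of thousands of intensity variables, classifying in a handful of principal-component coordinates is both faster and less prone to overfitting.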
13

Mining Multi-Dimension Rules in Multiple Database Segmentation-on Examples of Cross Selling

吳家齊, Wu,Chia-Chi Unknown Date (has links)
在今日以客戶為導向的市場中,“給較好的客戶較好的服務”的概念已經逐漸轉變為“給每一位客戶適當的服務”。藉由跨域行銷(cross-selling)的方式,企業可以為不同的客戶提供適當的服務及商品組合。臺灣的金融業近年來在金融整合中陸續成立了多家金融控股公司,希望藉由銀行、保險與證券等領域統籌資源與資本集中,以整合旗下子公司達成跨領域的共同行銷。這種新的行銷方式需要具有表達資料項目間關係的資訊技術,而關聯規則(association rule)是一種支援共同行銷所需之資料倉儲中的極重要元件。 傳統關聯規則的挖掘可以用來找出交易資料庫中客戶潛在的消費傾向。如果得以進一步的鎖定是那些客戶在什麼時間、什麼地點具有這種消費傾向,我們可藉此制定更精確、更具獲利能力的行銷策略。然而,大部分的相關研究都假設挖掘出的規則在資料庫的每一個區間都是一樣有效的,但這顯然不符合大多數的現實狀況。 本研究主要著眼於如何有效率的在不同維度、不同大小的資料庫區域中挖掘關聯規則,藉此發展出可以自動在資料庫中產生分割的機制。就此,本研究提出一個方法找出在各個分割中成立的關聯規則,此一方法具有以下幾個優點: 1. 對於找出的關聯規則,可以進一步界定此規則在資料庫的那些區域成立。 2. 對於使用者知識以及資料庫重覆掃瞄次數的要求低於先前的方法。 3. 藉由保留中間結果,此一方法可以做到增量模式的規則挖掘。 本研究舉了兩個例子來驗證所提出的方法,結果顯示本方法在效率及可規模化方面均較以往之方法為優。 / In today's customer-oriented market, the vision of "better service for better customers" has become "appropriate service for every customer". Through cross-selling, companies can offer each customer a suitable combination of services and products. In Taiwan's financial sector, many financial holding companies have been founded in recent years; by pooling the resources and capital of their banking, insurance, and securities arms, they aim to integrate information resources across subsidiaries for cross-selling. This style of marketing requires information technology that can express relationships between items, and the association rule is a key element of a data warehouse that supports cross-selling. Traditional association rule mining can uncover latent purchase tendencies in a transaction database. If we can further pinpoint when, where, and which customers exhibit such a tendency, we can devise more precise and more profitable marketing strategies. Most related work, however, assumes that a mined rule holds uniformly throughout the database, which is clearly untrue in most real situations. The aim of this paper is to discover the rules that hold in different zones of the database. We develop a mechanism that automatically produces segmentations of different granularities along each dimension, and propose an algorithm that discovers association rules in all the segmentations. The advantages of our method are: 1. Rules that hold only in some segmentations of the database are still picked up, and the algorithm identifies in which zones each rule holds. 2. It requires less user prior knowledge and fewer redundant database scans than previous methods. 3. By keeping intermediate results, it supports incremental mining. Two examples are used to evaluate the method, and the results show that it outperforms previous methods in both efficiency and scalability.
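A toy version of mining rules that hold only in certain zones of the database might look like this. It is drastically simplified relative to the thesis's multi-granularity algorithm, and the `region` dimension and banking items are invented for illustration:

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_sup):
    """Toy Apriori: count all 1- and 2-itemsets in one pass and keep
    those whose support meets min_sup (illustration only)."""
    counts = Counter()
    for t in transactions:
        for r in (1, 2):
            for combo in combinations(sorted(t), r):
                counts[combo] += 1
    n = len(transactions)
    return {i: c / n for i, c in counts.items() if c / n >= min_sup}

def segment_rules(db, dim, min_sup=0.5):
    """Partition the database along one dimension and mine each segment
    separately, so itemsets frequent only in one zone are not lost."""
    segments = {}
    for rec in db:
        segments.setdefault(rec[dim], []).append(rec["items"])
    return {seg: frequent_itemsets(ts, min_sup) for seg, ts in segments.items()}
```

A rule like card ⇒ loan can thus be reported together with the zone (here, a region) in which it holds, which is the first advantage the abstract claims.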
14

以商業智慧概念建構電子商務觀測站 / Constructing an E-Commerce observation stand by business intelligence approach

蕭勝隆, Hsiao, Sheng Lung Unknown Date (has links)
商業智慧(Business Intelligence; BI)可以協助企業進行資料分析與決策支援,將相關的資料整合,轉化成對企業有用的知識,對內可以提升管理績效,對外可以創造競爭優勢。這樣的概念,已經被大企業導入並普及化,但對小型企業而言,仍然是可望而不可及。網際網路的興起,帶動了電子商務的崛起,電子商務商家不論大小,面對的是全球化的挑戰,而大部分的小型商家,在資源不足的情況下,只能依照自行編製的報表來進行分析與決策,不易獲得即時和整合的管理決策支援資訊。本研究以商業智慧的概念,結合平衡計分卡模型,建構電子商務觀測站,透過Web的介面,可以即時查詢。本研究所建立的平台可用來提供電子商務經營管理諮詢服務,吸引小型電子商務業者加入,進行資料整合與決策分析,了解本身體質,並透過定期收集外部資料,提供平台使用者了解產業環境與市場趨勢,幫助小型電子商店擬定營運策略與目標,提升全球競爭力。 / Business Intelligence (BI) helps enterprises integrate heterogeneous data for analysis and decision support; the resulting knowledge helps managers improve internal performance and external competitive advantage. The concept is well established in large enterprises, but not yet within reach of the booming small e-commerce (EC) businesses. Facing globalized competition with scarce resources, most small EC stores manage the company and make business decisions from scattered, self-compiled ad-hoc reports, with little access to timely, integrated decision-support information; a common platform providing data integration and analysis services is therefore desirable. This research constructs such an EC observation stand, using the concepts of business intelligence and the balanced scorecard, to help small EC managers formulate their strategic objectives and the key performance indicators derived from them. The observation stand also collects industry information from other sources and provides integrated analyses over the heterogeneous data. Its continuous observation services, queried in real time through a Web interface, should help platform users understand the industry environment and market trends and make better, more responsive decisions, raising the global competitiveness of small EC stores.
15

應用記憶體內運算於多維度多顆粒度資料探勘之研究―以醫療服務創新為例 / A Research Into In-memory Computing In Multidimensional, Multi-granularity Data Mining ― With Healthcare Services Innovation

朱家棋, Chu, Chia Chi Unknown Date (has links)
全球面臨人口老化與人口不斷成長的壓力下,對於醫療服務的需求不斷提升。醫療服務領域中常以資料探勘「關聯規則」分析,挖掘隱藏在龐大的醫學資料庫中的知識(knowledge),以支援臨床決策或創新醫療服務。隨著醫療服務與應用推陳出新(如,電子健康紀錄或行動醫療等),與醫療機構因應政府政策需長期保存大量病患資料,讓醫療領域面臨如何有效的處理巨量資料。 然而傳統的關聯規則演算法,其效能上受到相當大的限制。因此,許多研究提出將關聯規則演算法,在分散式環境中,以Hadoop MapReduce框架實現平行化處理巨量資料運算。其相較於單節點 (single-node) 的運算速度確實有大幅提升。但實際上,MapReduce並不適用於需要密集迭代運算的關聯規則演算法。 本研究藉由Spark記憶體內運算框架,在分散式叢集上實現平行化挖掘多維度多顆粒度關聯規則,實驗結果可以歸納出下列三點。第一點,當資料規模小時,由於平行化將資料流程分為Map與Reduce處理,因此在小規模資料處理上沒有太大的效益。第二點,當資料規模大時,平行化策略模式與單機版有明顯大幅度差異,整體運行時間相差100倍之多;然而當項目個數大於1萬個時,單機版因記憶體不足而無法運行,但平行化策略依舊可以運行。第三點,整體而言Spark雖然在小規模處理上略慢於單機版的速度,但其運行時間仍小於Hadoop的4倍。大規模處理速度上Spark依舊優於Hadoop版本。因此,在處理大規模資料時,就運算效能與擴充彈性而言,Spark都為最佳化解決方案。 / With global population aging and continued population growth, demand for healthcare keeps rising. In healthcare, association rule mining is commonly used to uncover knowledge hidden in large medical databases to support clinical decisions and service innovation. As new services and applications emerge (e.g., electronic health records and mobile health), and as medical institutions retain large volumes of patient data to comply with government policy, the field faces the problem of processing huge amounts of data effectively. Traditional association rule algorithms, however, are severely limited in performance at this scale. Many studies have therefore parallelized them in distributed environments with the Hadoop MapReduce framework, which does run far faster than a single node; but MapReduce is ill-suited to the intensive iteration that association rule algorithms require. This study uses the Spark in-memory computing framework to mine multidimensional, multi-granularity association rules in parallel on a distributed cluster. The experimental results can be summarized in three points. First, when the data are small, splitting the workflow into Map and Reduce phases brings little benefit. Second, when the data are large, the parallel strategy differs markedly from the single-machine version, with overall running time about 100 times shorter; and once the number of items exceeds 10,000, the single-machine version cannot run at all for lack of memory, while the parallel strategy still can. Third, although Spark is slightly slower than the single-machine version on small data, its running time is still less than a quarter of Hadoop's, and on large data Spark again outperforms the Hadoop version. For large-scale data, then, Spark is the best solution in both computational performance and scalability.
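The Map/Reduce split under discussion can be imitated locally without a cluster. This plain-Python sketch simulates per-partition itemset counting (the map phase) and merging (the reduce phase, playing the role of Spark's `reduceByKey`); no actual Spark or Hadoop API is used:

```python
from collections import Counter
from itertools import combinations
from functools import reduce

def map_phase(partition):
    """Mapper: count 2-itemset occurrences within one data partition."""
    counts = Counter()
    for basket in partition:
        for pair in combinations(sorted(basket), 2):
            counts[pair] += 1
    return counts

def reduce_phase(counters):
    """Reducer: merge the per-partition counts into global counts,
    the role reduceByKey plays on a real cluster."""
    return reduce(lambda a, b: a + b, counters, Counter())
```

The abstract's first finding falls out of this structure: for small data, the overhead of splitting into partitions and merging swamps any gain, while for large data the mappers run in parallel and only the compact count tables travel between nodes.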
16

透過高齡幸福價值最大化達到成功的在地老化 / Successful Aging through Senior Well-being Valuation and Maximization

謝承豫, Hsieh, Cheng Yu Unknown Date (has links)
大多數的已開發國家都逐漸成為高齡化社會;同時,可以幫助高齡族群的科技發明變得越來越流行和重要。本研究提出一個以穿戴式裝置為基礎的服務系統,不只專注在健康,也重視正向情緒、投入程度、人際關係、人生意義和成就感(PERMA,幸福模型)。本系統的目標是透過以活動為基礎的介入,促進幸福價值;亦即透過整合服務系統內的所有資源以提供個人化的活動介入給戰後嬰兒潮族群,並整合子系統的最佳化機制以得到整體的價值最大化。 因為幸福被定義為一種多維度的觀念,為了衡量並評價幸福,我們使用一個以影子價格為基礎的計算矩陣來比較各個活動介入的價值。透過幸福價值最大化,本研究試著幫助戰後嬰兒潮族群成功的在地老化。本研究初探性的結果指出,此系統被證實是有效的,且活動介入有能力增加使用者感受的幸福價值,介入品質和高齡幸福評價被發現有高度關聯性。 / Most developed countries are becoming aging societies, and technologies that help seniors are growing ever more popular and essential. We propose a wearable-device-based service system that focuses not only on vitality but also on positive emotion, engagement, relationships, meaning, and accomplishment (the PERMA well-being model). The system aims to improve well-being value through activity-based interventions: it integrates all the resources within the service system to provide baby boomers with the most appropriate personalized interventions, generating them by a greedy approach in which local mechanisms produce candidates that are then combined toward global optimality. Since well-being is defined as a multi-dimensional concept, we measure and value it with an evaluation matrix based on shadow prices that compares the value of candidate interventions, i.e. activities. Through well-being valuation and maximization, we try to help baby boomers achieve successful aging in place. Our exploratory evaluations show that the system is effective and that activity-based interventions can increase users' perceived well-being value; intervention quality and senior well-being valuation are found to be highly correlated.
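The shadow-price valuation and greedy activity selection described above might be outlined like this. The PERMA dimension letters follow the abstract, but the activity names, gain vectors, prices, and time budget are invented for illustration:

```python
def intervention_value(gains, shadow_prices):
    """Collapse a multi-dimensional PERMA gain vector into a single
    value by weighting each dimension with its shadow price."""
    return sum(gains[d] * shadow_prices[d] for d in gains)

def greedy_plan(activities, shadow_prices, budget):
    """Greedy local choice: repeatedly take the activity with the best
    value per unit of time until the time budget is exhausted."""
    ranked = sorted(
        activities,
        key=lambda a: intervention_value(a["gains"], shadow_prices) / a["time"],
        reverse=True,
    )
    plan, spent = [], 0
    for a in ranked:
        if spent + a["time"] <= budget:
            plan.append(a["name"])
            spent += a["time"]
    return plan
```

The greedy ranking is the "local mechanism" of the abstract; a full system would recompute shadow prices as subsystem constraints tighten.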
17

An XML-based Multidimensional Data Exchange Study / 以XML為基礎之多維度資料交換之研究

王容, Wang, Jung Unknown Date (has links)
在全球化趨勢與Internet帶動速度競爭的影響下,現今的企業經常採取將旗下部門分散佈署於各地,或者和位於不同地區的公司進行合併結盟的策略,藉以提昇其競爭力與市場反應能力。由於地理位置分散的結果,這類企業當中通常存在著許多不同的資料倉儲系統;為了充分支援管理決策的需求,這些不同的資料倉儲當中的資料必須能夠進行交換與整合,因此需要有一套開放且獨立的資料交換標準,俾能經由Internet在不同的資料倉儲間交換多維度資料。然而目前所知的跨資料倉儲之資料交換解決方案多侷限於逐列資料轉換或是以純文字檔案格式進行資料轉移的方式,這些方式除缺乏效率外亦不夠系統化。在本篇研究中,將探討多維度資料交換的議題,並發展一個以XML為基礎的多維度資料交換模式。本研究並提出一個基於學名結構的方法,以此方法發展一套單一的標準交換格式,並促成分散各地的資料倉儲間形成多對多的系統化映對模式。以本研究所發展之多維度資料模式與XML資料模式間的轉換模式為基礎,並輔以本研究所提出之多維度中介資料管理功能,可形成在網路上通用且以XML為基礎的多維度資料交換過程,並能兼顧效率與品質。本研究並開發一套雛型系統,以XML為基礎來實作多維度資料交換,藉資證明此多維度資料交換模式之可行性,並顯示經由中介資料之輔助可促使多維度資料交換過程更加系統化且更富效率。 / Motivated by the globalization trend and Internet-driven speed competition, enterprises nowadays often disperse their departments across regions, or merge and ally with companies in other regions, to build up competitiveness and market responsiveness. As a result, a geographically distributed enterprise typically runs a number of data warehouse systems. To fully support distributed decision-making, the data in these different warehouses must be exchangeable and integrable, so an open, vendor-independent, and efficient standard for exchanging multidimensional data between warehouses over the Internet is an important issue. Known solutions for cross-warehouse data exchange, however, rely either on record-by-record conversion or on transferring plain-text files, which is neither efficient nor systematic. In this research, issues in multidimensional data exchange are studied and an XML-based Multidimensional Data Exchange Model is developed. In addition, a generic-construct-based approach is proposed to develop a single, consistent standard exchange format and to enable many-to-many systematic mapping between distributed data warehouses. Based on the transformation model we develop between the multidimensional data model and the XML data model, and aided by the multidimensional metadata management function proposed in this research, a general-purpose XML-based multidimensional data exchange process over the Web is achieved with both efficiency and quality. Moreover, we build a prototype system that exchanges multidimensional data via XML, which shows that the proposed model is feasible and that, with the help of metadata, the exchange process becomes more systematic and efficient.
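A generic, vendor-independent exchange format along these lines can be sketched by serializing cube cells to XML. The element and attribute names below are invented for illustration, not the thesis's actual schema:

```python
import xml.etree.ElementTree as ET

def cube_to_xml(dims, facts):
    """Serialize (coordinates, measure) facts of a multidimensional cube
    into a generic XML exchange document."""
    root = ET.Element("cube")
    for coords, measure in facts:
        attrs = {d: str(v) for d, v in zip(dims, coords)}
        attrs["value"] = str(measure)
        ET.SubElement(root, "cell", attrs)
    return ET.tostring(root, encoding="unicode")

def xml_to_facts(xml_text, dims):
    """Inverse mapping: parse the XML back into coordinate/measure pairs,
    as a receiving warehouse would."""
    root = ET.fromstring(xml_text)
    return [(tuple(c.get(d) for d in dims), float(c.get("value")))
            for c in root]
```

Because the format carries the dimension names rather than any one warehouse's physical layout, each of N warehouses only needs one mapping to the exchange format instead of N−1 pairwise converters, which is the point of a single standard format.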
18

科技政策網站內容分析之研究

賴昌彥, Lai, Chang-Yen Unknown Date (has links)
面對全球資訊網(WWW)應用蓬勃發展,網際網路上充斥著各種類型的資訊資源。而如何有效地管理及檢索這些資料,就成為當前資訊管理的重要課題之一。在發掘資訊時,最常用的便是搜尋引擎,透過比對查詢字串與索引表格(index table),找出相關的網頁文件,並回傳結果。但因為網頁描述資訊的不足,導致其回覆大量不相關的查詢結果,浪費使用者許多時間。 為了解決上述問題,就資訊搜尋的角度而言,本研究提出以文字開採技術實際分析網頁內容,並將其轉換成維度資訊來描述,再以多維度資料庫方式儲存的架構,做為改進現行資訊檢索的參考架構。 就資訊描述的角度,本研究提出採用RDF(Resource Description Framework)來描述網頁Metadata的做法。透過此通用的資料格式來描述網路資源,做為跨領域使用、表達資訊的標準,便於Web應用程式間的溝通。期有效改善現行網際網路資源描述之缺失,大幅提昇搜尋之品質。 / With the vigorous growth of World Wide Web (WWW) applications, the Internet is flooded with information resources of every kind, and managing and retrieving them effectively has become an important issue in information management. Search engines, the most common discovery tool, match query strings against an index table to locate relevant web documents and return results; but because pages carry too little descriptive information, they return large numbers of irrelevant results and waste users' time. To address this, from the information retrieval perspective, this study proposes an architecture that applies text mining techniques to analyze web page content, transforms it into dimensional descriptions, and stores it in a multidimensional database, as a reference architecture for improving current information retrieval. From the information description perspective, this study adopts the Resource Description Framework (RDF) to describe web page metadata. Using this common data format to describe web resources provides a cross-domain standard for representing information and eases communication between Web applications, with the aim of remedying the shortcomings of current Internet resource description and substantially raising search quality.
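RDF describes a resource as subject–predicate–object triples. Here is a minimal, no-library sketch of the idea; the `dc:` prefix nods to Dublin Core, and the metadata fields are invented examples rather than the study's actual vocabulary:

```python
def describe(url, metadata):
    """Turn a page's metadata dict into RDF-style (s, p, o) triples."""
    return [(url, f"dc:{key}", value) for key, value in metadata.items()]

def query(triples, predicate):
    """Find (subject, object) pairs matching one predicate; a crude
    stand-in for a SPARQL triple pattern."""
    return [(s, o) for s, p, o in triples if p == predicate]
```

The gain over string matching against an index table is that a query can constrain a specific property (say, the page's language or topic) instead of matching raw text anywhere in the document.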
19

營運績效導向之企業資源規劃系統供應商選擇與導入研究-以某金控業為例 / The study of business performance-oriented ERP selection and implementation - a financial holding example

林智 Unknown Date (has links)
隨著兩岸政策的開放,金融業面臨著瞬息萬變、倍速成長、地域競爭的經營環境,如何能迅速回應環境與組織的變遷以及提升全球化競爭的能力,儼然已成為金融業者經營方針的重要議題。企業追求獲利、永續經營的理念建構於良好的企業營運績效管理,而企業資源規劃(ERP)具有整合平臺、彈性的作業流程、即時資訊彙總的功能,能夠提供企業做為營運管理、決策分析與提升競爭優勢管理工具,因此已成為眾多企業導入的選擇。 個案公司導入ERP系統的效益希望能提升營運績效整合與資訊分析的能力,因此,本研究對個案公司導入ERP系統時著重於各項關鍵研究議題:願意「改變」的決心、員工的配合心態、現行作業流程的差異程度、具體化功能規格的敘述、資訊分析維度的細度決定、外圍系統資訊流介接能力等構面,本研究希望能將ERP系統導入決策過程、供應商遴選與導入期間各項關鍵問題的發現暨解決方案等研究結果,逐一實踐在個案公司的導入專案。 / With the opening of cross-strait policies, the financial industry faces a fast-changing, rapidly growing, and regionally competitive environment; responding quickly to environmental and organizational change and strengthening global competitiveness have become central concerns for financial institutions. The pursuit of profit and sustainable operation rests on sound business performance management, and an enterprise resource planning (ERP) system, with its integrated platform, flexible processes, and real-time information consolidation, serves as a management tool for operations, decision analysis, and competitive advantage, which is why so many enterprises adopt one. The case company introduced an ERP system hoping to strengthen its capabilities for performance integration and information analysis. This study therefore focuses on the key issues in the case company's ERP adoption: the determination to change, employee attitudes toward cooperation, the gap from current operating processes, the articulation of concrete functional specifications, the granularity of analysis dimensions, and the ability to interface with peripheral systems. The findings on the adoption decision process, vendor selection, and the key problems and solutions arising during implementation are applied, one by one, in the case company's implementation project.
20

Multifractal Analysis for the Stock Index Futures Returns with Wavelet Transform Modulus Maxima / 股價指數期貨報酬率的多重碎形分析與小波轉換的模數最大值

洪榕壕, Hung,Jung-Hao Unknown Date (has links)
本文應用資產報酬率的多重碎形模型,該模型為一整合財務時間序列上的厚尾及波動持續性的連續時間過程。多重碎形的方法允許我們估計隨時間變動的報酬率高階動差,進而推論財務時間序列的產生機制。我們利用小波轉換的模數最大值計算多重碎形譜,透過譜分解得到資產報酬率分配的高階動差資訊。根據實證結果,我們得到S&P和DJIA的股價指數期貨報酬率符合動差尺度行為且資料也展現冪律的形態。根據估計出的譜形態為對數常態分配。實證結果也顯示S&P和DJIA的股價指數期貨報酬率均具有長記憶及多重碎形的特性。 / We apply the multifractal model of asset returns (MMAR), a class of continuous-time processes that incorporates the thick tails and volatility persistence of financial time series. The multifractal approach allows the higher moments of returns to vary with the time horizon and supports inference about the generating mechanism of the series. The multifractal spectrum, calculated with the Wavelet Transform Modulus Maxima (WTMM) method, provides information on the higher moments of the distribution of asset returns and on the multiplicative cascade of volatilities. We find evidence of multifractality in the moment-scaling behavior of S&P and DJIA stock index futures returns, whose moments follow a power law, and the shape of the estimated spectrum points to a lognormal distribution. The empirical evidence shows that both series have long memory and multifractal properties.
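The moment-scaling behavior the abstract tests for can be checked crudely without wavelets, by estimating the scaling exponent τ(q) as the log-log slope of a partition function over dyadic scales. This structure-function shortcut is my illustration, not the WTMM procedure used in the thesis; for monofractal Brownian motion the theoretical value is τ(q) = q/2 − 1:

```python
import numpy as np

def partition_scaling(x, q, scales):
    """Estimate tau(q), the slope of log S_q(dt) vs log dt, where
    S_q(dt) = sum |x[t+dt] - x[t]|^q over non-overlapping increments.
    A linear tau(q) indicates monofractality; concavity, multifractality."""
    log_s, log_dt = [], []
    for dt in scales:
        inc = np.abs(np.diff(x[::dt]))     # non-overlapping increments at lag dt
        log_s.append(np.log((inc ** q).sum()))
        log_dt.append(np.log(dt))
    slope, _ = np.polyfit(log_dt, log_s, 1)
    return slope
```

Applied to futures returns as in the thesis, one would compute τ(q) over a grid of q and inspect the concavity of the curve; here a simulated random walk simply recovers the monofractal line.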
