11

全球資訊網中使用者網頁-動作路徑的資料挖掘 / Data mining of user page-action paths on the World Wide Web

林青峰, Lin, Qing-Fung Unknown Date (has links)
客戶在從事消費時,往往會有許多不一樣的行為產生。對組織而言,研究客戶的消費行為能夠協助組織更了解客戶的資訊,進而支援其經營活動。以往與客戶行為相關的資料挖掘研究,較著重於客戶的消費資料。而對於客戶在商店中做了那些動作,及其動作會導致發生的事件並沒有較全盤及深入的討論。對實體業者而言,要實際的去記錄使用者在商店內的行為,是不太可行的;但相對的說,隨著網際網路與資料收集技術的發展,網站經營者應用log留存技術,將比傳統業者更容易且完整的收集到消費者行為記錄。本研究試圖在全球資訊網的環境中建立一個能夠同時分析使用者的瀏覽網頁路徑及其動作過程的演算法;並且配合該演算法建置一個雛形系統,以驗證其效能,最後並評估其日後實務操作的可行性。 / Different customers exhibit different behaviors when making purchases. Studying customers' purchase behavior can help organizations understand their clients' intentions and support their business activities. In the past, data mining of customer behavior emphasized purchase items, i.e., what customers buy; few studies discussed what paths customers took and what actions they performed in an e-store. It is impractical for a physical store to record all of its customers' actions and the paths they take, but a website store can easily collect such data in an Internet log. This study proposes a data mining algorithm that analyzes both customers' browsing-page paths and their action sequences. The algorithm's efficiency and feasibility were examined with our prototype. This study may help website managers restructure their website layouts or advertisement positions to catch customers' attention.
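
As a rough illustration of the kind of log analysis described above (a sketch only, not the thesis's actual algorithm; the log fields and names are assumed for the example), the snippet below groups web-log events into per-user sequences of (page, action) pairs and counts the most frequent consecutive transitions, the simplest form of page-action path mining:

```python
from collections import defaultdict, Counter

# A hypothetical log format: one record per event, already parsed into dicts.
# Field names (user, page, action, time) are assumptions for illustration only.
log = [
    {"user": "u1", "time": 1, "page": "/home",    "action": "view"},
    {"user": "u1", "time": 2, "page": "/item/42", "action": "view"},
    {"user": "u1", "time": 3, "page": "/item/42", "action": "add_to_cart"},
    {"user": "u2", "time": 1, "page": "/home",    "action": "view"},
    {"user": "u2", "time": 2, "page": "/search",  "action": "query"},
]

# Group events into per-user sequences of (page, action) pairs, ordered by time.
sessions = defaultdict(list)
for record in sorted(log, key=lambda r: (r["user"], r["time"])):
    sessions[record["user"]].append((record["page"], record["action"]))

# Count consecutive (page, action) -> (page, action) transitions across all users.
transitions = Counter()
for steps in sessions.values():
    for prev, nxt in zip(steps, steps[1:]):
        transitions[(prev, nxt)] += 1

# The most common transitions approximate frequent page-action path segments.
for (prev, nxt), count in transitions.most_common(3):
    print(f"{prev} -> {nxt}: {count}")
```
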
12

我國大學圖書館網站網頁連結引用之研究 / The Study on the Sitation Analysis of University Library Websites in Taiwan

方靜如 Unknown Date (has links)
本研究旨在應用網路計量法,尤其是網頁連引分析法,對我國大學圖書館網站網頁進行測量和分析。一方面,以Extract URL蒐集大學圖書館網站網路資源之相關數據,並從國家/地區(top-level domain)、單位屬性(second-level domain)、網域名稱(domain name)、一致性資源位址(URL)等四個網域層級,剖析我國大學圖書館網站所蒐錄之「網路資源」的數量、網站類型、網頁內容屬性等,從而比較各大學圖書館網站所連引網路資源的共通性與特殊性;另一方面,採用搜尋引擎AltaVista檢索並分析這些大學圖書館網站網頁在網網相連的虛擬世界中,為其他網頁所連結引用的程度與情形,包括連引次數與網路影響係數的計算,以及連引網頁之語文,並透過跨時研究比較其在前後七個月的兩個時點上被連引情形的消長。

本研究所規範的「網路資源」範疇,乃依圖書館網站組織與呈現網路資源的方式與取向區別為三類,其中以「一般性網路資源」和「參考性網路資源」居多,「專題性網路資源」則相對較少。研究結果發現39個大學圖書館網站中提供「網路資源」服務的佔有34個,共收錄了10,144筆網路資源,平均每個圖書館收錄298.35筆,其中靜宜大學圖書館以1,429筆奪得第一。實際上,這些網路資源來自於54個國家/地區網域;4,606個不同的網域名稱;6,818個不同的網址。若將其網址依國家/地區網域做分析,我國(.tw)網域佔五成以上,而其中又有半數為學術網域所含括;居次者為美國,所佔將近四成,而商業網域又佔有其中四成;若依單位屬性網域做分析,以屬於學術網域的網路資源所佔達四成為最多,其中以我國與美國網域貢獻最多;屬於公司網域緊隨於後,佔有將近三成,其中以美國與我國網域貢獻最多。在被連引次數上,國內網站以被連引網頁次數累積達150次的中央研究院為最高,其次為教育部、行政院主計處、國家圖書館等網站;國外網站則以美國國家醫學圖書館與柏克萊數位圖書館SunSITE並列第一。大致上,可將我國大學圖書館網站所連引網路資源歸納為八類,被連引最多的類型是圖書資訊服務,其餘依次為政府機關、學術單位、研究機構、博物館/數位博物館、書店/網路書店、國際組織、其他等。

外部連引可作為評估圖書館網站對外部網站的影響力之依據;而內部連引則反映了圖書館網站與校園網域內的其他網站的互動。本研究結果顯示,外部連引次數以成大、台大、央大等圖書館次數為領先;內部連引次數則以台大、交大、政大等圖書館居前。以連引數為分子,網頁數為分母所計算出的網路影響係數方面,所有大學圖書館網站的影響係數為0.33,亦即表示我國大學圖書館網站,平均每一個網頁被連引0.33次;外部連引的網路影響係數以逢甲、東華、中正等大學圖書館領先;內部連引的網路影響係數則以東華、南華、世新等大學圖書館居前。另外,在連引網頁的語文方面,我國大學圖書館網站被中文網頁所連引的數量佔93.57%;被英文網頁所連引的數量佔9.46%,其餘語言則佔0.73%。

本研究建議圖書館應透過電腦軟體與人工定期檢閱連引節點之有效性;並拓展專題性與特藏性網路資源的建置,以期建立各館之不可替代性的網路資源;同時深入連引網站內層之網頁內容,以一個網頁或一個網頁片段作為一個連引單元,俾能強化其對使用者的實質效益。圖書館也能善用搜尋引擎以調查圖書館網站被內部和外部連引的情形,以自我評估圖書館網站與內、外部網域社群之間的互動和影響。另外,本土性的幾家中文搜尋引擎,如Openfind、Gais等,除應致力於擴展索引涵蓋的版圖與提升穩定性高的檢索結果外,還能豐富其相關檢索功能。

/ The purpose of this study is to apply webometrics, especially sitation analysis, to examine and analyze thirty-nine university library websites in Taiwan. On one hand, the study utilized Extract URL to collect data about the Internet resources of university library websites and analyzed them at four levels: top-level domain, second-level domain, domain name, and URL. The similarities and differences among the university library websites were compared based on analyses of the number, website types, and web-page attributes of the "Internet resources". On the other hand, the study used AltaVista to retrieve and analyze how the web pages of the university library websites were linked to by other web pages in the labyrinthine cyberspace. The items measured included the numbers of total sitations, external sitations, internal sitations, and self-sitations, as well as the languages of the linking web pages. Furthermore, the investigation was run at two time points seven months apart to observe changes in the sitations.

The "Internet resources" appearing on the library websites were organized into three categories: "general resources" and "reference resources" made up the majority, whereas "specific resources" were the minority. The study found that thirty-four out of thirty-nine university library websites provided an Internet-resources service, with 10,144 items in total, drawn from 54 different top-level domains, 4,606 different domain names, and 6,818 different URLs. The average number of Internet resources per library website was 298.35, and Providence University Library ranked first. The analysis of top-level domains showed that Taiwan accounted for the majority, with the USA second. The analysis of second-level domains found that academic domains ranked first and business domains second. The Internet resources can be grouped into eight types: library and information services ranked first, followed by government institutions, academic organizations, research institutes, museums/digital museums, bookstores/Internet bookstores, international organizations, and others. External sitation is seen as an indicator of the significance and influence of a site for external websites, and internal sitation may reflect the interaction between a university library website and other websites in the campus web domain. The findings of the study revealed that the external sitations of National Cheng Kung University Library, National Taiwan University Library, and National Central University Library were the leading group, while the internal sitations of National Taiwan University Library, National Chengchi University Library, and National Chiao Tung University Library were the top group. Besides, the Web Impact Factor of university library websites in Taiwan was 0.33. Chinese web pages accounted for 93.57% of the pages linking to university library websites, English web pages for only 9.46%, and pages in other languages for only 0.73%.

Finally, the study suggested that libraries should use computer software and human checks to verify the validity of hyperlinks regularly, devote library staff's energies to the development of specific Internet resources, and link to the content of web pages in greater depth. Libraries should also examine the various sitations of their own websites to evaluate the interaction and influence of the library website with the inside and outside web domains. Local search engines, for instance Openfind and Gais, should strengthen the scope of their indexes, enhance the stability of searching, and improve their retrieval functions in the future.
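
The Web Impact Factor used above is a simple ratio: the number of pages linking to a site divided by the number of pages the site itself contains (0.33 on average in this study). A minimal sketch of that calculation, with invented counts used purely for illustration:

```python
def web_impact_factor(inlink_count: int, page_count: int) -> float:
    """Web Impact Factor: pages linking to a site divided by the site's own page count."""
    if page_count <= 0:
        raise ValueError("page_count must be positive")
    return inlink_count / page_count

# Hypothetical counts for one library website (not the study's real data):
external_inlinks = 66   # links from pages outside the campus domain
internal_inlinks = 33   # links from other pages inside the campus domain
site_pages = 300

print(f"external WIF: {web_impact_factor(external_inlinks, site_pages):.2f}")
print(f"internal WIF: {web_impact_factor(internal_inlinks, site_pages):.2f}")
print(f"overall WIF:  {web_impact_factor(external_inlinks + internal_inlinks, site_pages):.2f}")
```
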
13

運用使用者輸入欄位屬性偵測防禦資料隱碼攻擊 / Preventing SQL Injection Attacks Using the Field Attributes of User Input

賴淑美, Lai, Shu Mei Unknown Date (has links)
在網路的應用蓬勃發展與上網使用人口不斷遞增的情況之下,透過網路提供客戶服務及從事商業行為已經是趨勢與熱潮,而伴隨而來的風險也逐步顯現。在一個無國界的網路世界,威脅來自四面八方,隨著科技進步,攻擊手法也隨之加速且廣泛。網頁攻擊防範作法的演進似乎也只能一直追隨著攻擊手法而不斷改進。但最根本的方法應為回歸原始的程式設計,網頁欄位輸入資料的檢核。確實做好欄位內容檢核並遵守網頁安全設計原則,嚴謹的資料庫存取授權才能安心杜絕不斷變化的攻擊。但因既有系統對於輸入欄位內容,並無確切根據應輸入的欄位長度及屬性或是特殊表示式進行檢核,以致造成類似Injection Flaws[1]及部分XSS(Cross Site Scripting)[2]攻擊的形成。

面對不斷變化的網站攻擊,大都以系統原始碼重覆修改、透過滲透測試服務檢視漏洞及購買偵測防禦設備防堵威脅。因原始碼重覆修改工作繁重,滲透測試也不能經常施行,購買偵測防禦設備也相當昂貴。

本研究回歸網頁資料輸入檢核,根據輸入資料的長度及屬性或是特殊的表示式進行檢核,若能堅守此項原則應可抵禦大部分的攻擊。但因既有系統程式龐大,若要重新檢視所有輸入欄位屬性及進行修改恐曠日費時。本研究以側錄分析、資料庫SCHEMA的結合及方便的欄位屬性定義等功能,自動化的處理流程,快速產生輸入欄位的檢核依據。再以網站動態欄位檢核的方式,於網站接收使用者需求,且應用程式尚未處理前攔截網頁輸入資料,根據事先明確定義的網站欄位屬性及長度進行資料檢核,如此既有系統即無須修改,能在最低的成本下達到有效防禦的目的。

/ With the dynamic development of network applications and the increasing population of Internet users, providing customer service and doing business through the network has become a prevalent trend, and the accompanying risks have appeared with it. In a borderless net world, threats come from all directions, and with the progress of information technology, attack techniques have become faster and more widespread. It seems that defense methods can only evolve in response to these attack techniques. The most fundamental measure, however, is to return to the original program design: checking the input data of each web-page field. Preventing ever-changing attacks requires precisely checking the content of each data field, adhering to secure web design principles, and strictly controlling database access authorization. Since most existing systems do not strictly check those fields against their expected length, data type, and format, attacks such as Injection Flaws and XSS (Cross Site Scripting) become possible.

In response to constantly changing website attacks, most organizations repeatedly modify system source code, inspect vulnerabilities through penetration-testing services, and purchase intrusion prevention system (IPS) equipment. However, several limitations reduce the effectiveness of these measures: the heavy workload of repeatedly modifying source code, the difficulty of running penetration tests frequently, and the high cost of IPS equipment.

The fundamental method of this research is to validate input data against each field's length, data type, and format. The hypothesis is that enforcing this original design principle should prevent most website attacks. Unfortunately, legacy systems are large and numerous, and reviewing and modifying all their input fields would be time-consuming. This research therefore combines traffic-capture analysis, the database schema, and easily defined field attributes into an automated process that rapidly generates validation rules for input fields. Web requests are then intercepted dynamically, so that input data are captured and checked against the predefined field attributes and lengths before the application processes them. Because of this, existing systems need not be modified, and effective defense is achieved at minimal cost.
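
A minimal sketch of the field-attribute check described above, assuming a hypothetical rule table derived from the database schema (the field names, rule format, and values are illustrative, not the thesis's actual implementation): each incoming value is validated against the field's declared length and pattern before the application builds any SQL from it.

```python
import re

# Hypothetical validation rules, of the kind derived from the database schema:
# each field has a maximum length and a regular-expression format.
FIELD_RULES = {
    "username": {"max_len": 20, "pattern": r"^[A-Za-z0-9_]+$"},
    "age":      {"max_len": 3,  "pattern": r"^\d+$"},
    "email":    {"max_len": 60, "pattern": r"^[^@\s]+@[^@\s]+\.[^@\s]+$"},
}

def validate(field: str, value: str) -> bool:
    """Return True only if the value satisfies the field's declared attributes."""
    rule = FIELD_RULES.get(field)
    if rule is None:                      # unknown field: reject by default
        return False
    if len(value) > rule["max_len"]:      # length check
        return False
    if not re.match(rule["pattern"], value):  # format check
        return False
    return True

# A request whose fields all pass can be forwarded to the application;
# anything that fails is rejected before any SQL is built from it.
request = {"username": "alice_01", "age": "30", "email": "alice@example.com"}
injection_attempt = {"username": "x' OR '1'='1"}

print(all(validate(f, v) for f, v in request.items()))            # True
print(all(validate(f, v) for f, v in injection_attempt.items()))  # False
```
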
14

Java網頁程式安全弱點驗證之測試案例產生工具 / Test Case Generation for Verifying Security Vulnerabilities in Java Web Applications

黃于育, Huang, Yu Yu Unknown Date (has links)
近年來隨著網路的發達,網頁應用程式也跟著快速且普遍化地發展。網頁應用程式快速盛行卻忽略程式設計時的安全性考量,進而成為網路駭客的攻擊目標。因此,網頁應用程式的安全議題日益重要。目前已有許多網頁應用程式安全弱點的相關研究,以程式分析的技術找出弱點,主要分成靜態分析與動態分析兩大類。但無論是使用靜態或是動態的分析方法,仍有其不完美的地方。其中靜態分析結果完備但會產生過多弱點誤報;動態分析結果準確率高但會因為測試案例的不完備而造成弱點的漏報。因此,本論文研究結合了動靜態分析,利用靜態分析方法發展一套測試案例產生工具;再結合動態分析方法隨著測試案例的執行來追蹤測試資料並作弱點的驗證,以達到沒有弱點漏報的產生以及改善弱點誤報的目標。

本論文研究的重點集中在以靜態分析技術產生涵蓋目標程式中所有可執行路徑的測試案例。我們應用測試案例產生常見的符號化執行技巧,利用程式的路徑限制蒐集與解決來達成測試案例產生。實作上我們利用跨程序性路徑分析找出目標程式中所有潛在弱點的路徑,再以反向路徑限制蒐集將限制資訊完整蒐集;最後交給限制分析器解限制並產生測試案例。接著利用剖面導向程式語言AspectJ的程式插碼技術實現動態的汙染資料流分析,配合產生的測試案例執行程式觸發動態的汙染資料流分析並產生可信賴的弱點分析結果。

/ Due to the rapid development of the Internet in recent years, web applications have become very popular and ubiquitous. However, developers may neglect security issues while designing a program, so web applications become targets for attackers. Hence, the issue of web application vulnerabilities has become crucial. There have been many studies of web application security vulnerabilities, and many of them exploit program analysis techniques to detect vulnerabilities. These analysis approaches can basically be categorized into static analysis and dynamic analysis, but both still have problems to be improved. Specifically, static analysis offers high coverage of vulnerabilities but causes too many false positives; dynamic analysis produces highly confident results but may cause false negatives without complete test cases. In this thesis, we integrate static and dynamic analysis so that no false negatives are produced and false positives are reduced. We develop a test case generation tool based on static analysis, and a program execution tool that dynamically tracks the execution of the target program on those test data to detect its vulnerabilities.

Our test case generation tool first employs both intra- and inter-procedural analysis to cover all vulnerable paths in a program, and then applies the symbolic execution technique to collect all path constraints. With these collected constraints, we use a constraint solver to solve them and finally generate the test cases. As for the execution tool, it utilizes the instrumentation mechanism provided by the aspect-oriented programming language AspectJ to implement a dynamic taint analysis that tracks the flow of tainted data derived from those generated test cases. As a result, all vulnerable program paths can be detected by our tools.
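
As a much-simplified illustration of path-constraint-based test case generation (the thesis targets Java with a real constraint solver and AspectJ instrumentation; the toy function, hand-listed constraints, and brute-force "solver" below are assumptions made only for this sketch), the snippet enumerates the branch conditions of a small function and finds one input per feasible path:

```python
from itertools import product

# Toy "target program": two branch conditions determine four possible paths.
def target(a: int, b: int) -> str:
    path = []
    path.append("a>5" if a > 5 else "a<=5")
    path.append("b==0" if b == 0 else "b!=0")
    return ",".join(path)

# Path constraints written explicitly as predicates over the inputs.
# A real tool would collect these by symbolic execution; here they are hand-listed.
paths = {
    ("a>5", "b==0"):  lambda a, b: a > 5 and b == 0,
    ("a>5", "b!=0"):  lambda a, b: a > 5 and b != 0,
    ("a<=5", "b==0"): lambda a, b: a <= 5 and b == 0,
    ("a<=5", "b!=0"): lambda a, b: a <= 5 and b != 0,
}

# "Solve" each path constraint by brute force over a small input domain,
# standing in for a real constraint solver.
test_cases = {}
for label, constraint in paths.items():
    for a, b in product(range(-10, 11), repeat=2):
        if constraint(a, b):
            test_cases[label] = (a, b)
            break

for label, (a, b) in test_cases.items():
    assert target(a, b) == ",".join(label)   # each test case exercises its intended path
    print(label, "->", (a, b))
```
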
