About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
111

Utilization of Dynamic Attributes in Resource Discovery for Network Virtualization

Amarasinghe, Heli 16 July 2012 (has links)
The success of the Internet over the last few decades has depended mainly on various infrastructure technologies to run distributed applications. Owing to the diversified, multi-provider nature of the Internet, radical architectural improvements that require mutual agreement between infrastructure providers have become highly impractical. This escalating resistance to further growth has created a rising demand for new approaches, and network virtualization is regarded as a prominent solution to surmount these limitations. It decouples the conventional Internet service provider’s role into infrastructure provider (InP) and service provider (SP), and introduces a third player, the virtual network provider (VNP), which creates virtual networks (VNs). Resource discovery aims to assist the VNP in selecting the InP whose resources best match a particular VN request. The current literature focuses mainly on static attributes of network resources, on the grounds that using dynamic attributes imposes significant overhead on the network itself. In this thesis we propose a resource discovery approach that utilizes dynamic resource attributes to enhance resource discovery and increase the overall efficiency of VN creation. Because resource discovery techniques should be fast and cost-efficient enough not to impose any significant load, our scheme computes aggregation values of the dynamic attributes of the substrate resources. By comparing these aggregation values against the VN requirements, it selects a set of potential InPs that satisfy the basic VN embedding requirements. Moreover, we propose further enhancements to the dynamic attribute monitoring process using a vector-based aggregation approach.
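The aggregation-based pre-selection step lends itself to a compact illustration. Below is a minimal sketch, assuming each InP reports simple scalar aggregates (total available CPU and bandwidth are hypothetical attribute choices, as are all names); the thesis's vector-based aggregation would generalize this scalar comparison.

```python
# Hypothetical sketch of aggregation-based InP pre-selection: each InP
# periodically reports aggregate values rather than per-node dynamic
# attributes, keeping monitoring overhead low.

from dataclasses import dataclass

@dataclass
class InPAggregate:
    name: str
    total_cpu: float        # sum of available CPU across substrate nodes
    total_bandwidth: float  # sum of available bandwidth across substrate links

def select_potential_inps(inps, vn_cpu_demand, vn_bw_demand):
    """Return InPs whose aggregates can satisfy the VN request.

    Comparing cheap aggregates first avoids querying every substrate
    node; full embedding checks run only on this reduced candidate set.
    """
    return [inp for inp in inps
            if inp.total_cpu >= vn_cpu_demand
            and inp.total_bandwidth >= vn_bw_demand]

inps = [
    InPAggregate("InP-A", total_cpu=120.0, total_bandwidth=800.0),
    InPAggregate("InP-B", total_cpu=40.0,  total_bandwidth=900.0),
]
print([inp.name for inp in select_potential_inps(inps, 100.0, 500.0)])
# ['InP-A']
```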
112

Discovery writing and genre

Heeks, Richard James January 2012 (has links)
This study approaches ‘discovery writing’ in relation to genre, investigating whether different genres of writing might be associated with different kinds of writing processes. Discovery writing can be thought of as writing to find out what you think; it represents a reversal of the more usual sense that ideas precede writing, or that planning should precede writing. Discovery writing has previously been approached in terms of writers’ orientations, such as whether writers are Planners or Discoverers. This study engages with these previous theories, but places an emphasis on genres of writing and on textual features, such as how writers write fictional characters, or how writers generate arguments when writing essays. The two main types of writing investigated are fiction writing and academic writing; particular genres include short stories, crime novels, academic articles, and student essays. Eleven writers were interviewed, ranging from professional fiction authors to undergraduate students. Interviews were based on a recent piece of the writer’s own writing. Most of the writers came from a literary background, being either fiction writers or Literature students. Interviews were based on set questions, but also allowed writers to describe their writing largely in their own terms and to describe aspects of their writing that interested them. A key aspect of this approach was engaging writers in their own interests, from where interview questions could provide a basis for discussion. Fiction writing seemed characterized by emergent processes, where writers experienced real life events and channelled their experiences and feelings into stories. The writing of characters was often associated with discovery. A key finding for fiction writing was that even writers who planned heavily and identified themselves somewhat as Planners also tended to discover more about their characters while writing. Academic writing was characterized by difficulty, where discovery was often described in relation to struggling to summarize arguments or to find key words. A key conclusion of this study is that writers may be Planners or Discoverers by orientation, as previous theory has recognised. However, the things that writers plan and discover, such as plots and characters, also play an important role in their writing processes.
113

Design of the internal model of an exploratory research process for developing differentiated value propositions in the construction sector

Bossi Cortés, Benjamín Ignacio January 2016 (has links)
Industrial Civil Engineer / The global steel industry is going through difficult times, and it is increasingly important for market players to get closer to their business customers and know them better, changing the historical paradigm in which steel mills devote themselves solely to producing steel and then wait for it to sell. That strategy used to work, mainly because of scarce competition, but in the current context described in this work it is no longer enough simply to produce steel; one must approach the product's customer and understand them in depth. The problem is addressed through the Discovery Teams methodology, which consists of creating multidisciplinary teams that visit the different links of an industrial chain, with a strong focus on the end customer, looking for breakthrough ideas that benefit them through new and/or better products. The methodology has been adapted to the context of Gerdau Chile, currently the main steel supplier to the national industry. The methodology's main milestone is the field visit, and the main focus of this work is mapping the processes needed for a correct application of the methodology, so as to cover everything that must happen both before and after that milestone, both to verify that the visit is well planned and to ensure the methodology's continuity over time. The work includes a detailed description of the Discovery Teams program, leaving the product development and introduction programs as future work within the organization, while noting the exploration team's role in both stages. It also devotes a chapter to analyzing the internal organizational barriers that could hinder implementation, notably a fear of change and skepticism about the results of something so unfamiliar, as well as facilitating elements that allow the methodology to be developed properly, such as a strong network of contacts and a solid reputation for quality work.
114

The Effectiveness of a Guided Discovery Method of Teaching in a College Mathematics Course for Non-Mathematics and Non-Science Majors

Reimer, Dennis D., 1940- 01 1900 (has links)
The purpose of this study was to ascertain the value, as determined by student achievement, of using a discovery method of teaching mathematics in a college freshman mathematics course for non-mathematics and non-science majors.
115

Novel stochastic and entropy-based Expectation-Maximisation algorithm for transcription factor binding site motif discovery

Kilpatrick, Alastair Morris January 2015 (has links)
The discovery of transcription factor binding site (TFBS) motifs remains an important and challenging problem in computational biology. This thesis presents MITSU, a novel algorithm for TFBS motif discovery which exploits stochastic methods both as a means of overcoming optimality limitations in current algorithms and as a framework for incorporating relevant prior knowledge in order to improve results. The current state of the TFBS motif discovery field is surveyed, with a focus on probabilistic algorithms that typically take the promoter regions of coregulated genes as input. A case is made for an approach based on the stochastic Expectation-Maximisation (sEM) algorithm, and its position amongst existing probabilistic algorithms for motif discovery is shown. The algorithm developed in this thesis is unique amongst existing motif discovery algorithms in that it combines the sEM algorithm with a derived data set, which leads to an improved approximation to the likelihood function. This likelihood function is unconstrained with regard to the distribution of motif occurrences within the input dataset. MITSU also incorporates a novel heuristic, known as MCOIN, to determine TFBS motif width automatically; MCOIN is shown to outperform current methods for determining motif width. MITSU is implemented in Java and an executable is available for download. MITSU is evaluated quantitatively using realistic synthetic data and several collections of previously characterised prokaryotic TFBS motifs. The evaluation demonstrates that MITSU improves on a deterministic EM-based motif discovery algorithm and an alternative sEM-based algorithm, in terms of previously established metrics. The ability of the sEM algorithm to escape stable fixed points of the EM algorithm, which trap deterministic motif discovery algorithms, and the ability of MITSU to discover multiple motif occurrences within a single input sequence are also demonstrated. MITSU is validated using previously characterised Alphaproteobacterial motifs, before being applied to motif discovery in uncharacterised Alphaproteobacterial data. A number of novel results from this analysis are presented and motivate two extensions of MITSU: a strategy for the discovery of multiple different motifs within a single dataset, and a higher-order Markov background model. The effects of incorporating these extensions are evaluated quantitatively using previously characterised prokaryotic TFBS motifs and demonstrated using Alphaproteobacterial motifs. Finally, an information-theoretic measure of motif palindromicity is presented and its advantages over existing approaches for discovering palindromic motifs are discussed.
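To make the stochastic E-step concrete, here is a minimal sketch of one sEM iteration for motif discovery (not MITSU itself): each sequence contributes a single motif start position sampled from the posterior, rather than the expected counts a deterministic EM would use, which is what lets the search escape stable fixed points of EM. All names and the toy data are illustrative.

```python
# Minimal sEM iteration for motif discovery over a toy DNA dataset.
import random

ALPHABET = "ACGT"

def site_likelihood(seq, pos, pwm):
    """Likelihood of a motif occurrence starting at pos under the PWM."""
    p = 1.0
    for i in range(len(pwm)):
        p *= pwm[i][seq[pos + i]]
    return p

def sem_iteration(seqs, pwm, pseudocount=0.1):
    width = len(pwm)
    # Stochastic E-step: sample one start position per sequence from the
    # posterior over positions, instead of weighting all positions.
    samples = []
    for seq in seqs:
        weights = [site_likelihood(seq, j, pwm)
                   for j in range(len(seq) - width + 1)]
        start = random.choices(range(len(weights)), weights=weights)[0]
        samples.append(seq[start:start + width])
    # M-step: re-estimate the PWM from the sampled occurrences.
    new_pwm = []
    for i in range(width):
        counts = {b: pseudocount for b in ALPHABET}
        for site in samples:
            counts[site[i]] += 1
        total = sum(counts.values())
        new_pwm.append({b: counts[b] / total for b in ALPHABET})
    return new_pwm

seqs = ["ACGTACGTGG", "TTACGTACGA", "GGACGTTTAC"]
pwm = [{b: 0.25 for b in ALPHABET} for _ in range(4)]  # uniform start, width 4
for _ in range(20):
    pwm = sem_iteration(seqs, pwm)
print(max(pwm[0], key=pwm[0].get))  # most probable base at motif position 0
```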
116

Real-Time and Data-Driven Operation Optimization and Knowledge Discovery for an Enterprise Information System

Duan, Qing January 2014 (has links)
An enterprise information system (EIS) is an integrated data-applications platform characterized by diverse, heterogeneous, and distributed data sources. For many enterprises, a number of business processes still depend heavily on static rule-based methods and extensive human expertise. Enterprises are faced with the need for optimizing operation scheduling, improving resource utilization, discovering useful knowledge, and making data-driven decisions.

This thesis research is focused on real-time optimization and knowledge discovery that addresses workflow optimization, resource allocation, as well as data-driven predictions of process-execution times, order fulfillment, and enterprise service-level performance. In contrast to prior work on data analytics techniques for enterprise performance optimization, the emphasis here is on realizing scalable and real-time enterprise intelligence based on a combination of heterogeneous system simulation, combinatorial optimization, machine-learning algorithms, and statistical methods.

On-demand digital-print service is a representative enterprise requiring a powerful EIS. We use real-life data from Reischling Press, Inc. (RPI), a digital print service provider (PSP), to evaluate our optimization algorithms.

In order to handle the increase in volume and diversity of demands, we first present a high-performance, scalable, and real-time production scheduling algorithm for production automation based on an incremental genetic algorithm (IGA). The objective of this algorithm is to optimize the order dispatching sequence and balance resource utilization. Compared to prior work, this solution is scalable to a high volume of orders and provides fast scheduling solutions for orders that require complex fulfillment procedures. Experimental results highlight its potential benefit in reducing production inefficiencies and enhancing the productivity of an enterprise.

We next discuss analysis and prediction of different attributes involved in hierarchical components of an enterprise. We start from a study of the fundamental processes related to real-time prediction. Our process-execution time and process status prediction models integrate statistical methods with machine-learning algorithms. In addition to improved prediction accuracy compared to stand-alone machine-learning algorithms, they also perform a probabilistic estimation of the predicted status. An order generally consists of multiple series and parallel processes. We next introduce an order-fulfillment prediction model that combines the advantages of multiple classification models by incorporating flexible decision-integration mechanisms. Experimental results show that adopting due dates recommended by the model can significantly reduce the enterprise late-delivery ratio. Finally, we investigate service-level attributes that reflect the overall performance of an enterprise. We analyze and decompose time-series data into different components according to their hierarchical periodic nature, perform correlation analysis, and develop univariate prediction models for each component as well as multivariate models for correlated components. Predictions for the original time series are aggregated from the predictions of its components. In addition to a significant increase in mid-term prediction accuracy, this distributed modeling strategy also improves short-term time-series prediction accuracy.

In summary, this thesis research has led to a set of characterization, optimization, and prediction tools for an EIS to derive insightful knowledge from data and use it as guidance for production management. It is expected to provide solutions for enterprises to increase reconfigurability, accomplish more automated procedures, and obtain data-driven recommendations for effective decisions. / Dissertation
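As a rough illustration of the scheduling idea (not RPI's production system), the sketch below evolves order-dispatching sequences with a simple genetic algorithm; the "incremental" aspect of an IGA is approximated by seeding the population with the previous best sequence when new orders arrive. All data and names are hypothetical.

```python
# Toy genetic algorithm over order-dispatching sequences.
import random

def total_lateness(sequence, durations, due_dates):
    """Sum of positive lateness when orders run back-to-back."""
    t, late = 0.0, 0.0
    for order in sequence:
        t += durations[order]
        late += max(0.0, t - due_dates[order])
    return late

def evolve(durations, due_dates, seed=None, pop_size=30, generations=200):
    orders = list(durations)
    pop = [random.sample(orders, len(orders)) for _ in range(pop_size)]
    if seed:                      # incremental restart from a prior solution
        pop[0] = list(seed)
    for _ in range(generations):
        pop.sort(key=lambda s: total_lateness(s, durations, due_dates))
        survivors = pop[:pop_size // 2]
        children = []
        for parent in survivors:
            child = list(parent)  # mutate: swap two positions
            i, j = random.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda s: total_lateness(s, durations, due_dates))

durations = {"o1": 3.0, "o2": 1.0, "o3": 2.0}
due_dates = {"o1": 6.0, "o2": 2.0, "o3": 3.0}
best = evolve(durations, due_dates)
print(best, total_lateness(best, durations, due_dates))
```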
117

Using Primo for undergraduate research: a usability study

Kliewer, Greta, Monroe-Gulick, Amalia, Gamble, Stephanie, Radio, Erik 21 November 2016 (has links)
Purpose - The purpose of this paper is to observe how undergraduate students approach open-ended searching for a research assignment, specifically as it affected their use of the discovery interface Primo. Design/methodology/approach - In total, 30 undergraduate students were provided with a sample research assignment and instructed to find resources for it using web tools of their choice, followed by the Primo discovery tool. Students were observed for 30 minutes, and a survey was provided at the end to solicit additional feedback. The sources students found were evaluated for relevance and utility. Findings - Students expressed a high level of satisfaction with Primo despite some difficulty navigating more complicated tasks. Despite their interest in the tool and previous exposure to it, it was usually not the first discovery tool students used when given the research assignment; students approached the open-ended search environment much as they would a commercial search engine. Originality/value - This paper focused on an open-ended search environment, as opposed to a known-item scenario, in order to assess students' preferences for web search tools and how a library discovery layer such as Primo fits into that situation. The resources students found relevant were also analyzed to determine to what degree students understood their quality and from which tool they were obtained.
118

Scalable Discovery and Analytics on Web Linked Data

Abdelaziz, Ibrahim 07 1900 (has links)
Resource Description Framework (RDF) provides a simple way for expressing facts across the web, leading to Web linked data. Several distributed and federated RDF systems have emerged to handle the massive amounts of RDF data available nowadays. Distributed systems are optimized to query massive datasets that appear as a single graph, while federated systems are designed to query hundreds of decentralized and interlinked graphs. This thesis starts with a comprehensive experimental study of state-of-the-art RDF systems. It identifies a set of research problems for improving the state of the art: supporting the emerging RDF analytics required by many modern applications, querying linked data at scale, and enabling discovery on linked data. Addressing these problems is the focus of this thesis. First, we propose Spartex, a versatile framework for complex RDF analytics. Spartex extends SPARQL to seamlessly combine generic graph algorithms with SPARQL queries. Spartex implements a generic SPARQL operator as a vertex-centric program that interprets SPARQL queries and executes them efficiently using a built-in optimizer. We demonstrate that Spartex scales to datasets with billions of edges and is at least as fast as the state-of-the-art specialized RDF engines; for analytical tasks, Spartex is an order of magnitude faster than existing alternatives. To address the scalability limitation of federated RDF engines, we propose Lusail, a scalable system for querying geo-distributed RDF graphs. Lusail follows a two-tier strategy: (i) locality-aware decomposition of the query into subqueries to maximize the computations at the endpoints and minimize intermediary results, and (ii) selectivity-aware execution to reduce network latency and increase parallelism. Our experiments on billions of triples show that Lusail outperforms existing systems by orders of magnitude in scalability and response time. Finally, enabling discovery on linked data is challenging due to the prior knowledge required to formulate SPARQL queries. To address this challenge, we develop novel techniques to (i) predict semantically equivalent SPARQL queries from a set of keywords by leveraging word embeddings, and (ii) generate fine-grained and non-blocking query plans to get fast and early results.
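The locality-aware decomposition in tier (i) can be illustrated with a toy sketch (hypothetical structures, not Lusail's actual implementation): triple patterns answerable by exactly one endpoint are grouped together, so each endpoint evaluates as large a subquery as possible locally and only smaller intermediate results cross the network.

```python
# Toy locality-aware grouping of SPARQL triple patterns by endpoint.
def decompose(triple_patterns, relevant_endpoints):
    """Group patterns answerable by exactly one endpoint together.

    relevant_endpoints maps each triple pattern to the set of endpoints
    holding matching triples (obtainable e.g. from ASK probes).
    """
    exclusive, shared = {}, []
    for tp in triple_patterns:
        eps = relevant_endpoints[tp]
        if len(eps) == 1:
            exclusive.setdefault(next(iter(eps)), []).append(tp)
        else:
            shared.append(tp)  # must be sent to several endpoints
    return exclusive, shared

patterns = [("?d", "ex:treats", "?disease"),
            ("?d", "ex:trialSite", "?site"),
            ("?site", "geo:country", "?c")]
endpoints = {patterns[0]: {"drugbank"},
             patterns[1]: {"drugbank", "clinicaltrials"},
             patterns[2]: {"geonames"}}
print(decompose(patterns, endpoints))
```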
119

Computer-aided drug discovery and protein-ligand docking

January 2015 (has links)
Developing a new drug costs up to US$2.6 billion and takes up to 13.5 years. To save money and time, we have developed a toolset for computer-aided drug discovery and utilized it to discover drugs for the treatment of cancers and influenza. / We first implemented a fast protein-ligand docking tool called idock, and obtained a substantial speedup over a popular counterpart. To facilitate the large-scale use of idock, we designed a heterogeneous web platform called istar, and collected a huge database of more than 23 million small molecules. To elucidate molecular interactions on the web, we developed an interactive visualizer called iview. To synthesize novel compounds, we developed a fragment-based drug design tool called iSyn. To improve the predictive accuracy of binding affinity, we exploited the machine-learning technique random forest to re-score both crystal and docked poses. To identify structurally similar compounds, we ported the ultrafast shape recognition algorithms to istar. All these tools are free and open source. / We applied our novel toolset to real-world drug discovery. We repurposed the anti-acne drug adapalene for the treatment of human colon cancer, and identified potential inhibitors of influenza viral proteins. Such new findings could hopefully save human lives. / Li, Hongjian. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2015. / Includes bibliographical references (leaves 340-394). / Abstracts also in Chinese.
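The random-forest re-scoring step admits a brief sketch. The snippet below is a hedged illustration with synthetic data, not the thesis's code; published RF scoring functions such as RF-Score represent each complex by counts of protein-ligand atom-type contacts, which the random features here merely stand in for.

```python
# Re-scoring docked poses with a random forest regressor (toy data).
from sklearn.ensemble import RandomForestRegressor
import numpy as np

rng = np.random.default_rng(0)
# Toy training set: rows are per-complex features (e.g. atom-pair contact
# counts), targets are measured binding affinities (pKd).
X_train = rng.integers(0, 50, size=(200, 36)).astype(float)
y_train = rng.uniform(2.0, 11.0, size=200)

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X_train, y_train)

X_poses = rng.integers(0, 50, size=(5, 36)).astype(float)
print(model.predict(X_poses))   # predicted affinity per docked pose
```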
120

Analysis Guided Visual Exploration of Multivariate Data

Yang, Di 04 May 2007 (has links)
Visualization systems traditionally focus on graphical representation of information. They tend not to provide integrated analytical services that could aid users in tackling complex knowledge discovery tasks. Users' exploration in such environments is usually impeded by several problems: 1) valuable information is hard to discover when too much data is visualized on the screen; 2) users have to manage and organize their discoveries offline, because no systematic discovery management mechanism exists; 3) discoveries based on visual exploration alone may lack accuracy; and 4) users have no convenient access to the important knowledge learned by other users. To tackle these problems, it has been recognized that analytical tools must be introduced into visualization systems. In this paper, we present a novel analysis-guided exploration system, called the Nugget Management System (NMS). It leverages the collaborative effort of human comprehensibility and machine computations to facilitate users' visual exploration process. Specifically, NMS first extracts the valuable information (nuggets) hidden in datasets based on the interests of users. Given that similar nuggets may be re-discovered by different users, NMS consolidates the nugget candidate set by clustering based on their semantic similarity. To solve the problem of inaccurate discoveries, data mining techniques are applied to refine the nuggets to best represent the patterns existing in datasets. Lastly, the resulting well-organized nugget pool is used to guide users' exploration. To evaluate the effectiveness of NMS, we integrated it into XmdvTool, a freeware multivariate visualization system. User studies were performed to compare users' efficiency and accuracy in finishing tasks on real datasets, with and without the help of NMS. Our user studies confirmed the effectiveness of NMS. Keywords: Visual Analytics, Visual Knowledge
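The nugget-consolidation step can be illustrated in a few lines (a minimal sketch, not NMS's actual code): nuggets are represented as vectors describing the selected data region, and near-duplicate discoveries from different users are merged by clustering.

```python
# Consolidating re-discovered "nuggets" by similarity clustering.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Toy nuggets: each is a vector describing the selected data region
# (e.g. normalized bounds of the brushed ranges per dimension).
nuggets = np.array([
    [0.10, 0.30, 0.55, 0.80],
    [0.12, 0.31, 0.54, 0.79],   # near-duplicate of the first
    [0.70, 0.90, 0.10, 0.25],
])

labels = AgglomerativeClustering(
    n_clusters=None, distance_threshold=0.2).fit_predict(nuggets)
print(labels)  # e.g. [0 0 1]: the first two nuggets collapse into one
```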
