81

Feature Extraction using Dimensionality Reduction Techniques: Capturing the Human Perspective

Coleman, Ashley B. January 2015 (has links)
No description available.
82

Specification, Configuration and Execution of Data-intensive Scientific Applications

Kumar, Vijay Shiv 14 December 2010 (has links)
No description available.
83

Implementation of decision trees for embedded systems

Badr, Bashar January 2014 (has links)
This research work develops real-time incremental learning decision tree solutions suitable for real-time embedded systems by virtue of having both a defined memory requirement and an upper bound on the computation time per training vector. In addition, the work provides embedded systems with the capability to rapidly process and train on streamed data, and adopts electronic hardware solutions to improve the performance of the developed algorithm. Two novel decision tree approaches, namely the Multi-Dimensional Frequency Table (MDFT) and the Hashed Frequency Table Decision Tree (HFTDT), represent the core of this research work. Both methods successfully incorporate a frequency table technique to produce a complete decision tree. The MDFT and HFTDT learning methods were designed with the ability to generate application-specific code for both training and classification purposes according to the requirements of the targeted application. The MDFT allows the memory architecture to be specified statically before learning takes place, within a deterministic execution time. The HFTDT method is a development of the MDFT in which a reduction in the memory requirements is achieved, again within a deterministic execution time. The HFTDT achieved low memory usage compared to existing decision tree methods, and hardware acceleration improved execution time by up to a factor of 10.
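The thesis itself provides no code in this record; purely as a hedged illustration of the frequency-table idea the abstract describes (not the author's MDFT or HFTDT implementation), the sketch below keeps per-bin class counts in a statically sized table, does constant work per training vector, and classifies by summed majority vote. The binning scheme, table sizes, and all names are assumptions made for the example.

```python
# Minimal sketch of a frequency-table classifier in the spirit of MDFT
# (illustrative only; bin counts, feature ranges, and names are assumed).

class FrequencyTableClassifier:
    def __init__(self, n_features, n_bins, n_classes, lo=0.0, hi=1.0):
        self.n_bins, self.n_classes = n_bins, n_classes
        self.lo, self.hi = lo, hi
        # One fixed-size count table per feature: memory is known up front.
        self.counts = [[[0] * n_classes for _ in range(n_bins)]
                       for _ in range(n_features)]

    def _bin(self, value):
        # Map a feature value into one of n_bins equal-width bins.
        idx = int((value - self.lo) / (self.hi - self.lo) * self.n_bins)
        return min(max(idx, 0), self.n_bins - 1)

    def train(self, x, label):
        # Constant work per training vector: one table update per feature.
        for f, value in enumerate(x):
            self.counts[f][self._bin(value)][label] += 1

    def classify(self, x):
        # Sum per-class evidence across features and return the majority class.
        totals = [0] * self.n_classes
        for f, value in enumerate(x):
            for c, n in enumerate(self.counts[f][self._bin(value)]):
                totals[c] += n
        return max(range(self.n_classes), key=totals.__getitem__)


clf = FrequencyTableClassifier(n_features=2, n_bins=4, n_classes=2)
clf.train([0.1, 0.2], 0)
clf.train([0.8, 0.9], 1)
print(clf.classify([0.9, 0.7]))  # -> 1
```

Because both the table size and the per-vector update cost are fixed at construction time, a structure of this kind lends itself to the deterministic memory and timing bounds the abstract emphasizes.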
84

A redação de vestibular sob uma perspectiva multidimensional: uma abordagem da linguística de corpus / The university entrance exam essay from a multidimensional perspective: a Corpus Linguistics approach

Barreto, Juliana Pereira Souto 30 June 2016 (has links)
The research reported here analyzes the production and evaluation of written text in college entrance essays produced by undergraduate applicants. More specifically, this study verifies how the composition tests written by applicants during the admission process to the Federal University of Rio Grande do Norte (UFRN) relate to the variation dimensions of Brazilian Portuguese presented in Berber Sardinha, Kauffmann and Acunzo (2014). The research uses the theoretical framework of Corpus Linguistics and the methodological approach of Multidimensional Analysis (Biber, 1988). The study corpus is composed of one hundred essays written by applicants for admission to higher education undergraduate courses, tagged with the Palavras parser and post-processed with a script that calculates the score of each text on each of the six variation dimensions of Brazilian Portuguese. First, the study checks how the applicants' texts relate to the six variation dimensions of Brazilian Portuguese. Then, the variation is examined in relation to the grades awarded to these essays by examiners, in order to determine whether, and which, correction criteria were met, based on the scores from the multidimensional analysis of Brazilian Portuguese. The outcomes are thus likely to provide important contributions to the field of textual production in Portuguese in Brazil, given that a more accurate understanding of language in use is needed for the teaching and learning of argumentative text production by applicants seeking admission to undergraduate courses in higher education.
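As a rough, hedged illustration of the post-processing step described above (scoring each tagged essay on a variation dimension), the sketch below standardizes per-text feature rates and combines them with dimension loadings, in the general style of multidimensional analysis. The feature names, corpus statistics, and loadings are invented for the example and do not reproduce the actual dimensions of Berber Sardinha, Kauffmann and Acunzo (2014).

```python
# Sketch of a dimension score: standardized feature rates weighted by
# factor loadings (feature names, loadings, and statistics are assumed).

def dimension_score(feature_rates, loadings, means, stdevs):
    """Score one text on one dimension from per-1000-word feature rates."""
    score = 0.0
    for feat, loading in loadings.items():
        z = (feature_rates.get(feat, 0.0) - means[feat]) / stdevs[feat]
        score += loading * z
    return score

# Hypothetical corpus statistics and loadings for a single dimension.
means  = {"private_verbs": 20.0, "nouns": 180.0, "passives": 9.0}
stdevs = {"private_verbs": 6.0,  "nouns": 35.0,  "passives": 4.0}
loadings = {"private_verbs": 0.9, "nouns": -0.8, "passives": -0.4}

essay = {"private_verbs": 12.0, "nouns": 210.0, "passives": 14.0}
print(round(dimension_score(essay, loadings, means, stdevs), 2))
```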
85

Passage à l'échelle pour les contraintes d'ordonnancement multi-ressources / Scalable multi-dimensional resources scheduling constraints

Letort, Arnaud 28 October 2013 (has links)
Constraint programming is an approach often used to solve combinatorial problems in different application areas. In this thesis we focus on cumulative scheduling problems. A scheduling problem consists in determining the start and end dates of a set of tasks while respecting capacity and precedence constraints. Capacity constraints cover both conventional cumulative constraints, where the sum of the heights of the tasks intersecting a given time point is limited, and colored cumulative constraints, where the number of distinct colors assigned to the tasks intersecting a given time point is limited. A newly identified challenge for constraint programming is to deal with large problems, usually solved by dedicated algorithms and metaheuristics. For example, the increasing use of virtualized data centers leads to multi-dimensional placement and scheduling problems involving thousands of jobs. Scalability is achieved by using a synchronized sweep algorithm over the conjunction of cumulative and precedence constraints, which speeds up convergence to the fixpoint. In addition, from these filtering algorithms we derive greedy procedures that can be called at each node of the search tree to try to find a solution more quickly. This approach makes it possible to handle scheduling problems involving more than one million jobs and 64 cumulative resources. These algorithms have been implemented in the Choco and SICStus constraint solvers and evaluated on a variety of placement and scheduling problems.
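As a minimal sketch of the classical cumulative constraint the abstract refers to (not the thesis's synchronized sweep filtering algorithm), the code below sweeps over task start and end events and verifies that the summed heights of the tasks overlapping any time point never exceed the resource capacity. The task data and names are assumptions for illustration.

```python
# Sweep-line check of a cumulative resource constraint:
# the total height of tasks running at any time must stay within capacity.

def cumulative_ok(tasks, capacity):
    """tasks: list of (start, duration, height) with fixed start times."""
    events = []
    for start, duration, height in tasks:
        events.append((start, height))              # task begins: add height
        events.append((start + duration, -height))  # task ends: remove height
    events.sort()  # at equal times, ends (negative deltas) come first
    used = 0
    for _, delta in events:
        used += delta
        if used > capacity:
            return False
    return True

tasks = [(0, 4, 2), (1, 3, 1), (2, 5, 1)]  # (start, duration, height)
print(cumulative_ok(tasks, capacity=4))    # -> True
print(cumulative_ok(tasks, capacity=3))    # -> False
```

A filtering algorithm in a solver does more than this feasibility check (it prunes the start-time domains), but the same sweep over start/end events is the underlying mechanism.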
86

企業建立平衡計分卡之研究 / Research of Balanced ScoreCard Using Case Study

王清弘, Wang, Ching-Hung Unknown Date (has links)
In today's technologically advanced and increasingly competitive business environment, an enterprise needs a set of performance evaluation standards to know whether it is growing, generating profit, and remaining competitive. In the current information age, however, most enterprises still base performance evaluation on traditional financial accounting models, even though financial results are only the final measure of business outcomes; the factors that influence and ultimately produce those financial results are highly important and need to be examined. The Balanced Scorecard (BSC) proposed by Kaplan & Norton can help an enterprise realize its vision and meet shareholder expectations. The BSC comprises four perspectives (financial, customer, internal process, and learning and growth) and emphasizes that final operating performance (profit) derives not only from the figures in financial statements but also from the cause-and-effect relationships generated indirectly by the other perspectives (customer, internal process, learning and growth). On the other hand, an enterprise's information system (IS) can produce useful information for executive decision making, help adjust strategic goals and vision, and reflect whether strategy has been formulated soundly. Traditional information systems, however, cannot deliver such information promptly and effectively; a multi-dimensional data model should therefore be used to extract the key information required for decision making. In the information age, the question is how to help an enterprise develop a BSC so that it can gain competitive advantage and operate sustainably in line with its strategic goals and vision, and whether an information system built on a multi-dimensional model can show that the analyzed information supports the strategic indicators designed into the BSC. Accordingly, this study examines a hat manufacturer in the manufacturing industry and explores whether the enterprise can establish a BSC system, how well its information system supports the BSC, where the difficulties lie in building a BSC from a conventional information system, and how a data structure (the multi-dimensional model) can be used to support the BSC. The conclusions can be summarized as follows: the case company accepts the BSC concept; implementing the BSC depends on establishing cause-and-effect relationships across the perspectives; establishing those relationships requires a process-oriented, integrated information system; the case company's traditional information system does not readily support the BSC; without a sound data model an enterprise cannot support the BSC; and the BSC needs to be supported by a multi-dimensional model.
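The thesis gives no code; purely as an illustration of how a multi-dimensional data model can feed a BSC indicator, the sketch below aggregates a small fact table over chosen dimension attributes to compute a hypothetical customer-perspective measure. The table, dimensions, and indicator are invented for the example and are not the case company's data.

```python
# Sketch: rolling up a multi-dimensional fact table (dimensions -> measures)
# to compute a hypothetical BSC customer-perspective indicator.

from collections import defaultdict

# Each row carries dimension attributes plus additive measures.
facts = [
    {"period": "2000Q1", "region": "north", "segment": "retail", "orders": 120, "returns": 6},
    {"period": "2000Q1", "region": "south", "segment": "retail", "orders": 80,  "returns": 2},
    {"period": "2000Q2", "region": "north", "segment": "retail", "orders": 150, "returns": 3},
]

def rollup(facts, dims, measure):
    """Aggregate one measure over the chosen dimension attributes."""
    totals = defaultdict(int)
    for row in facts:
        key = tuple(row[d] for d in dims)
        totals[key] += row[measure]
    return dict(totals)

orders  = rollup(facts, ["period"], "orders")
returns = rollup(facts, ["period"], "returns")
# Hypothetical indicator: return rate per period for the customer perspective.
print({p: returns[p] / orders[p] for p in orders})
```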
87

Directed Enzyme Evolution of Theta Class Glutathione Transferase : Studies of Recombinant Libraries and Enhancement of Activity toward the Anticancer Drug 1,3-bis(2-Chloroethyl)-1-nitrosourea

Larsson, Anna-Karin January 2003 (has links)
Glutathione transferases (GSTs) are detoxication enzymes involved in the cellular protection against a wide range of reactive substances. The role of GSTs is to catalyze the conjugation of glutathione with electrophilic compounds, which generally results in less toxic products. The ability to catalyze the denitrosation of the anticancer drug 1,3-bis(2-chloroethyl)-1-nitrosourea (BCNU) was measured in twelve different GSTs. Only three of the enzymes showed any measurable activity with BCNU, of which human GST T1-1 was the most efficient. This is of special interest, since human GST T1-1 is a polymorphic protein and its expression in different patients may be crucial for the response to BCNU. DNA shuffling was used to create a mutant library by recombination of cDNA coding for two different Theta-class GSTs. In total, 94 randomly picked mutants were characterized with respect to their catalytic activity with six different substrates, expression level, and sequence. A clone with only one point mutation compared to wild-type rat GST T2-2 had a significantly different substrate-activity pattern. A high-expressing mutant of human GST T1-1 was also identified, which is important, since the yield of the wild-type GST T1-1 is generally low. Characterization of the Theta library demonstrated divergence of GST variants both in structure and function. The properties of every mutant were treated as a point in a six-dimensional substrate-activity space. Groups of mutants were formed based on Euclidean distances and K-means cluster analyses. Both methods resulted in a set of five mutants with high alkyltransferase activities toward dichloromethane and 4-nitrophenethyl bromide (NPB). The five selected mutants were used as parental genes in a new DNA shuffling. Addition of cDNA coding for mouse and rat GST T1-1 improved the genetic diversity of the library. The evolution of GST variants was directed towards increased alkyltransferase activity, including activity with the anticancer drug BCNU. NPB was used as a surrogate substrate in order to facilitate the screening process. A mutant from the second generation displayed a 65-fold increased catalytic activity with NPB as substrate compared to wild-type human GST T1-1. The BCNU activity of the same mutant had increased 175-fold, suggesting that NPB is a suitable model substrate for the anticancer drug. Further evolution produced a mutant in the fifth generation of the library with 110 times higher NPB activity than wild-type human GST T1-1.
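As a hedged sketch of the clustering step described above (grouping mutants by Euclidean distance in a six-dimensional substrate-activity space), the code below runs a small K-means on made-up activity profiles; it is not the thesis's analysis, and the mutant vectors, activity values, and choice of k are invented for illustration.

```python
# Sketch: K-means clustering of mutants described by six substrate activities.
import random

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to the nearest center (Euclidean distance).
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute each center as the mean of its cluster (keep it if empty).
        centers = [tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Hypothetical six-substrate activity vectors for a few mutants.
mutants = [(0.1, 0.2, 0.0, 0.1, 0.3, 0.1),
           (0.2, 0.1, 0.1, 0.0, 0.2, 0.2),
           (2.1, 1.8, 0.4, 0.3, 2.5, 1.9),
           (2.4, 2.0, 0.5, 0.2, 2.2, 2.1)]
centers, clusters = kmeans(mutants, k=2)
print([len(c) for c in clusters])
```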
89

Data Distribution Management In Large-scale Distributed Environments

Gu, Yunfeng 15 February 2012 (has links)
Data Distribution Management (DDM) deals with two basic problems: how to distribute data generated at the application layer among underlying nodes in a distributed system, and how to retrieve data back whenever it is necessary. This thesis explores DDM in two different network environments: peer-to-peer (P2P) overlay networks and cluster-based network environments. DDM in P2P overlay networks is considered a more complete concept of building and maintaining a P2P overlay architecture than a simple data fetching scheme, and is closely related to the more commonly known associative searching or queries. DDM in the cluster-based network environment is one of the important services provided by the simulation middleware to support real-time distributed interactive simulations. The only common feature shared by DDM in both environments is that both are built to provide a data indexing service. Because of these fundamental differences, we have designed and developed a novel distributed data structure, the Hierarchically Distributed Tree (HD Tree), to support range queries in P2P overlay networks. All the relevant problems of a distributed data structure, including scalability, self-organization, fault tolerance, and load balancing, have been studied. Both theoretical analysis and experimental results show that the HD Tree is able to give a complete view of system states when processing multi-dimensional range queries at different levels of selectivity and in various error-prone routing environments. On the other hand, a novel DDM scheme, the Adaptive Grid-based DDM scheme, is proposed to improve DDM performance in the cluster-based network environment. This new DDM scheme evaluates the input size of a simulation based on probability models. The optimum DDM performance is best approached by running the simulation in the mode most appropriate to its size.
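As a minimal sketch of the basic grid-based matching idea that underlies grid DDM schemes such as the one described above (not the thesis's adaptive algorithm), the code below rasterizes publisher and subscriber regions onto a fixed grid and declares a match when two regions share a cell. The cell size, region coordinates, and names are assumptions for illustration.

```python
# Sketch of grid-based DDM matching: regions are rasterized onto a fixed
# grid, and a publisher matches a subscriber when they share a grid cell.

def cells(region, cell_size):
    """region: ((x_min, x_max), (y_min, y_max)) -> set of covered grid cells."""
    (x0, x1), (y0, y1) = region
    return {(i, j)
            for i in range(int(x0 // cell_size), int(x1 // cell_size) + 1)
            for j in range(int(y0 // cell_size), int(y1 // cell_size) + 1)}

def matches(publishers, subscribers, cell_size):
    """Return (publisher, subscriber) pairs whose regions share a cell."""
    pairs = []
    for p_name, p_region in publishers.items():
        p_cells = cells(p_region, cell_size)
        for s_name, s_region in subscribers.items():
            if p_cells & cells(s_region, cell_size):
                pairs.append((p_name, s_name))
    return pairs

publishers  = {"tank":  ((0, 10), (0, 10))}
subscribers = {"radar": ((8, 20), (5, 15)), "sensor": ((30, 40), (30, 40))}
print(matches(publishers, subscribers, cell_size=5))  # [('tank', 'radar')]
```

Fixed-grid matching over-approximates when cells are coarse, which is exactly the trade-off an adaptive scheme tunes by choosing the grid resolution to fit the simulation size.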
