About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

A generic processing in memory cycle accurate simulator under hybrid memory cube architecture / Um simulador genérico ciclo-acurado para processamento em memória baseado na arquitetura da hybrid memory cube

Oliveira Junior, Geraldo Francisco de January 2017 (has links)
PIM, a technique in which computational elements are added close to, or ideally inside, memory devices, was one of the approaches proposed during the 1990s to mitigate the notorious memory wall problem. Nowadays, with the maturation of 3D integration technologies, a new landscape for novel PIM architectures can be investigated. To exploit this new scenario, researchers rely on software simulators to navigate the design evaluation space. Today, most works targeting PIM implement in-house simulators to perform their experiments. However, this methodology can hurt overall productivity and preclude replicability. In this work, we present the development of a precise, modular and parametrizable PIM simulation environment. Our simulator, named CLAPPS, targets the HMC (Hybrid Memory Cube) architecture, a popular 3D-stacked memory widely employed in state-of-the-art PIM accelerators. We have designed our mechanism using the SystemC programming language, which natively allows parallel simulation. The primary contribution of our work lies in developing a user-friendly interface that allows easy exploration of PIM architectures. To evaluate our system, we have implemented a PIM module that can perform vector operations with different operand sizes using the proposed set of tools.
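The core of any cycle-accurate simulator of the kind this abstract describes is a loop that advances every modeled component by one clock tick and tracks request latencies in cycles. The sketch below is an illustration of that general idea only; it is not CLAPPS itself (which is written in SystemC), and the class names and the constant-latency memory model are assumptions.

```python
# Minimal sketch of a cycle-accurate simulation core. Illustrative only:
# CLAPPS is a SystemC tool and its real interfaces are not reproduced here.

class Component:
    """Base class: every hardware model advances one clock tick at a time."""
    def tick(self, cycle):
        raise NotImplementedError

class Memory(Component):
    """Toy memory model with a fixed (assumed) request latency in cycles."""
    def __init__(self, latency):
        self.latency = latency
        self.pending = []      # list of (ready_cycle, address)
        self.completed = []    # addresses whose requests have finished

    def request(self, cycle, address):
        self.pending.append((cycle + self.latency, address))

    def tick(self, cycle):
        ready = [a for (c, a) in self.pending if c <= cycle]
        self.pending = [(c, a) for (c, a) in self.pending if c > cycle]
        self.completed.extend(ready)

class Simulator:
    """Advances all registered components cycle by cycle."""
    def __init__(self, components):
        self.components = components
        self.cycle = 0

    def run(self, n_cycles):
        for _ in range(n_cycles):
            for comp in self.components:
                comp.tick(self.cycle)
            self.cycle += 1

mem = Memory(latency=10)
sim = Simulator([mem])
mem.request(cycle=0, address=0x40)
sim.run(20)
print(mem.completed)   # the request completes after its 10-cycle latency
```

A real simulator would add ports, bank timing state machines and statistics; the point here is only the tick-driven structure that makes the model cycle-accurate.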
32

Um método para deduplicação de metadados bibliográficos baseado no empilhamento de classificadores / A method for bibliographic metadata deduplication based on stacked generalization

Borges, Eduardo Nunes January 2013 (has links)
Duplicated bibliographic metadata are records that correspond to semantically equivalent bibliographic references, i.e., references that describe the same publication. Identifying duplicated bibliographic metadata in one or more digital libraries is an essential task to ensure the quality of services such as search, navigation, and content recommendation. Although many metadata standards have been proposed, they do not completely solve interoperability problems because, even if there is a mapping between different metadata schemas, there may be variations in the content representation. Most of the work proposed to identify duplicated records applies one or more functions to the content of certain fields in order to capture the similarity between records. However, a threshold must then be chosen that defines whether two records are sufficiently similar to be considered semantically equivalent or duplicated. Recent studies treat record deduplication as a data classification problem, in which a predictive model is trained to estimate the real-world object to which a record refers.
The main goal of this thesis is the development of an effective and automatic method to identify duplicated bibliographic metadata by combining multiple supervised classifiers, without any human intervention in the setting of similarity thresholds. Computationally cheap similarity functions, designed specifically for the context of digital libraries, are applied to the training set. The scores returned by these functions are used to train multiple heterogeneous classification models, i.e., using learning algorithms of several types: tree-based, rule-based, artificial neural networks, and probabilistic models. The learned classifiers are combined through a stacked generalization strategy to improve the deduplication result using the heterogeneous knowledge acquired individually by each learning algorithm. The final model is applied to the candidate pairs for matching returned by an efficient two-level blocking strategy. The proposed solution is based on the hypothesis that stacking supervised classifiers can improve the quality of deduplication when compared to other combination strategies. The experimental evaluation confirms this hypothesis by comparing the proposed method to selecting the best classifier and to majority voting. We also analyze the impact of classifier diversity on the stacking results and the cases in which the proposed method fails.
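The pipeline this abstract describes (similarity functions over record fields, level-0 base classifiers, a level-1 meta-learner trained on their outputs) can be sketched in miniature. This is not the thesis method: the field names, thresholds, toy records and the perceptron meta-learner are all illustrative assumptions, standing in for the real similarity functions and learning algorithms.

```python
# Hedged sketch of stacked generalization for record deduplication.
# All similarity functions, thresholds and records below are made up.

def title_sim(a, b):
    """Jaccard similarity over title tokens (a cheap similarity function)."""
    ta, tb = set(a["title"].lower().split()), set(b["title"].lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def year_sim(a, b):
    return 1.0 if a["year"] == b["year"] else 0.0

# Level-0: heterogeneous base classifiers (here, simple threshold rules).
base_classifiers = [
    lambda p: 1 if title_sim(*p) > 0.8 else 0,
    lambda p: 1 if title_sim(*p) > 0.5 and year_sim(*p) == 1.0 else 0,
    lambda p: 1 if year_sim(*p) == 1.0 else 0,
]

def level0_features(pair):
    return [clf(pair) for clf in base_classifiers]

def train_meta(pairs, labels, epochs=50, lr=0.1):
    """Level-1 meta-learner: a perceptron over base-classifier outputs."""
    w, b = [0.0] * len(base_classifiers), 0.0
    for _ in range(epochs):
        for pair, y in zip(pairs, labels):
            x = level0_features(pair)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(pair, w, b):
    x = level0_features(pair)
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

r1 = {"title": "A method for metadata deduplication", "year": 2013}
r2 = {"title": "A Method for Metadata Deduplication", "year": 2013}
r3 = {"title": "Battery storage dispatch", "year": 2020}

w, b = train_meta([(r1, r2), (r1, r3)], labels=[1, 0])
print(predict((r1, r2), w, b), predict((r1, r3), w, b))   # 1 0
```

The design point the thesis makes survives even at this scale: the meta-learner sees only the base classifiers' verdicts, so no human-chosen global similarity threshold is needed.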
34

Stacked-Value of Battery Storage: Effect of Battery Storage Penetration on Power Dispatch

January 2020 (has links)
abstract: In this work, the stacked values of battery energy storage systems (BESSs) of various power and energy capacities are evaluated as they provide multiple services such as peak shaving, frequency regulation, and reserve support in an ‘Arizona-based test system’ - a simplified, representative model of Salt River Project’s (SRP) system developed using the resource stack information shared by SRP. This has been achieved by developing a mixed-integer linear programming (MILP) based optimization model that captures the operation of BESS in the Arizona-based test system. The model formulation does not include any BESS cost as the objective is to estimate the net savings in total system operation cost after a BESS is deployed in the system. The optimization model has been formulated in such a way that the savings due to the provision of a single service, either peak shaving or frequency regulation or spinning reserve support, by the BESS, can be determined independently. The model also allows calculation of combined savings due to all the services rendered by the BESS. The results of this research suggest that the savings obtained with a BESS providing multiple services are significantly higher than the same capacity BESS delivering a single service in isolation. It is also observed that the marginal contribution of BESS reduces with increasing BESS energy capacity, a result consistent with the law of diminishing returns. Further, small changes in the simulation environment, such as factoring in generator forced outage rates or projection of future solar penetration, can lead to changes as high as 10% in the calculated stacked value. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2020
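Of the services stacked in this study, peak shaving is the easiest to illustrate. The sketch below is a greedy heuristic, not the MILP formulation the thesis actually develops, and every number in it (load profile, battery ratings, the mean-load shaving target) is an assumption chosen for the example.

```python
# Illustrative peak-shaving dispatch for a battery (greedy sketch, not the
# MILP used in the thesis; the load profile and ratings are invented).

def peak_shave(load, power_kw, energy_kwh):
    """Discharge toward the mean load during peak hours, subject to the
    battery's power and energy limits. Returns the net load profile.
    Assumes hourly steps, so 1 kW of discharge uses 1 kWh per step."""
    target = sum(load) / len(load)   # shave peaks toward the mean (assumption)
    soc = energy_kwh                 # start fully charged (assumption)
    net = []
    for l in load:
        if l > target and soc > 0:
            discharge = min(l - target, power_kw, soc)
            soc -= discharge
            net.append(l - discharge)
        else:
            net.append(l)
    return net

load = [40, 45, 60, 90, 100, 85, 55, 45]   # hourly load in kW
net = peak_shave(load, power_kw=20, energy_kwh=50)
print(max(load), max(net))   # peak drops from 100 to 80 kW
```

An actual MILP co-optimizes all services and hours at once, which is why, as the abstract reports, stacked provision yields more value than any single service evaluated in isolation; a greedy rule like this one cannot capture that coupling.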
35

Parallel Memory System Architectures for Packet Processing in Network Virtualization / ネットワーク仮想化におけるパケット処理のための並列メモリシステムアーキテクチャ

Korikawa, Tomohiro 23 March 2021 (has links)
Kyoto University / Doctoral program / Doctor of Informatics / 甲第23326号 / 情博第762号 / 新制||情||130(附属図書館) / Graduate School of Informatics, Department of Communications and Computer Engineering, Kyoto University / (Examining committee) Prof. Eiji Oki, Prof. Masahiro Morikura, Prof. Yasuo Okabe / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
36

Optimizing Memory Systems for High Efficiency in Computing Clusters

Liu, Wenjie January 2022 (has links)
DRAM-based memory systems suffer from increasingly severe row buffer interference, which causes significant performance degradation and power consumption. With DRAM scaling, the overheads of row buffer interference become even worse due to higher row activation and precharge latencies. Clusters have been a prevalent and successful computing framework for processing large amounts of data due to their distributed and parallelized working paradigm. A task submitted to a cluster is typically divided into a number of subtasks which are assigned to different work nodes running the same code but dealing with different, equal portions of the dataset to be processed. Because of heterogeneity, work nodes finish their subtasks at different rates, so stragglers can easily slow down the entire processing. With increasing problem complexity, more irregular applications are deployed on high-performance clusters due to the parallel working paradigm, and they yield irregular memory access behaviors across nodes. However, the irregularity of memory access behaviors has not been comprehensively studied, which results in low utilization of the integrated hybrid memory system composed of stacked DRAM and off-chip DRAM. This dissertation presents our research results on the three challenges mentioned above in order to optimize the memory system for high efficiency in computing clusters. Details are as follows: To address the low row buffer utilization caused by row buffer interference, we propose the Row Buffer Cache (RBC) architecture to efficiently mitigate row buffer interference overheads. At the core of the RBC architecture, DRAM pages with good locality are cached and escape from row buffer interference. Such an RBC architecture significantly reduces the overheads caused by row activation and precharge, thus improving overall system performance and energy efficiency.
We evaluate RBC using SPEC CPU2006 on a DDR4 memory system, compared to the commodity baseline memory system and the state-of-the-art methods DICE and Bingo. Results show that RBC improves memory performance by up to 2.24X (16.1% on average) and reduces overall memory energy by up to 68.2% (23.6% on average) for single-core simulations. For multi-core simulations, RBC increases performance by up to 1.55X (16.7% on average) and reduces energy by up to 35.4% (21.3% on average). Compared with the state-of-the-art methods, RBC outperforms DICE and Bingo by 8% and 5.1% on average in the single-core scenario, and by 10.1% and 4.7% in the multi-core scenario. To mitigate the straggling effect observed in clusters, we aim to speed up straggling work nodes by leveraging the exhibited performance variation, and propose StragglerHelper, which conveys the memory access characteristics experienced by the forerunner to the stragglers so that they can be sped up through accurately informed memory prefetching. A Progress Monitor is deployed to supervise the respective progress of the work nodes and inform the straggling nodes of the forerunner's memory access patterns. Our evaluation with SPEC MPI 2007 and BigDataBench on a cluster of 64 work nodes has shown that StragglerHelper improves the execution time of stragglers by up to 99.5% (61.4% on average), contributing to an overall improvement for the entire cluster of up to 46.7% (9.9% on average) compared to the baseline cluster. To address the performance differences in irregular applications, we devise a novel method called Similarity-Managed Hybrid Memory System (SM-HMS) to improve hybrid memory system performance by leveraging the memory access similarity among nodes in a cluster. Within SM-HMS, two techniques are proposed: Memory Access Similarity Measuring and Similarity-based Memory Access Behavior Sharing.
To quantify memory access similarity, the memory access behaviors of each node are vectorized, and the distance between two vectors is used as the memory access similarity. The calculated similarity is then used to share memory access behaviors precisely across nodes. With the shared memory access behaviors, SM-HMS divides the stacked DRAM into two sections, a sliding window section and an outlier section. The shared memory access behaviors guide replacement in the sliding window section, while the outlier section is managed in an LRU manner. Our evaluation with a set of irregular applications on clusters of up to 256 nodes has shown that SM-HMS outperforms the state-of-the-art approaches Cameo, Chameleon, and Hybrid2 in job finish time reduction by up to 58.6%, 56.7%, and 31.3% (46.1%, 41.6%, and 19.3% on average), respectively. SM-HMS can also achieve up to 98.6% (91.9% on average) of the ideal hybrid memory system performance. / Computer and Information Science
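The abstract states that each node's memory access behavior is vectorized and that the distance between two vectors serves as the similarity. One plausible minimal realization, with the histogram-over-address-regions vectorization and cosine similarity chosen here as assumptions (the thesis does not specify them in this abstract), looks like this:

```python
# Sketch of node-to-node memory-access similarity: vectorize each node's
# accesses as a histogram over fixed-size address regions (an assumed
# vectorization) and compare the vectors with cosine similarity.
import math

def vectorize(accesses, region_size=4096, n_regions=16):
    """Histogram of accesses over fixed-size address regions."""
    vec = [0] * n_regions
    for addr in accesses:
        vec[(addr // region_size) % n_regions] += 1
    return vec

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

node_a = [0x1000, 0x1008, 0x2000, 0x2010]   # two nodes touching the
node_b = [0x1004, 0x100c, 0x2008, 0x2018]   # same address regions
node_c = [0x9000, 0xa000, 0xb000, 0xc000]   # a node with a different pattern

va, vb, vc = vectorize(node_a), vectorize(node_b), vectorize(node_c)
print(cosine_similarity(va, vb))   # high: behaviors could be shared
print(cosine_similarity(va, vc))   # low: keep behaviors separate
```

In SM-HMS terms, a high similarity score is what would justify sharing one node's observed behavior to guide another node's stacked-DRAM sliding window.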
37

THERMAL HYDRAULIC PERFORMANCE OF AN OSCILLATING HEAT PIPE FOR AXIAL HEAT TRANSFER AND AS A HEAT SPREADER

Abdelnabi, Mohamed January 2022 (has links)
In this thesis, a stacked double-layer flat plate oscillating heat pipe charged with degassed DI water was designed, fabricated and characterized under different operating conditions (orientation, system or cooling water temperature, and heat load). The oscillating heat pipe was designed to dissipate 500 W within a footprint of 170 × 100 mm². It had a total of 46 channels (23 channels per layer) with a nominal diameter of 2 mm. Tests were performed to characterize its performance (i) for axial heat transfer and (ii) as a heat spreader. The stacked oscillating heat pipe showed a distinctive feature in that it overcame the absence of the gravity effect when operated in a horizontal orientation. The thermal performance was found to be strongly dependent on the operational parameters. The oscillating heat pipe was able to dissipate a heat load greater than 500 W without any indication of dry-out. An increase in the cooling water temperature enhanced the performance and was accompanied by an increase in the on/off oscillation ratio. The lowest thermal resistance of 0.06 K/W was achieved at 500 W with a 50 °C cooling water temperature, with a corresponding evaporator heat transfer coefficient of 0.78 W/cm²K. The oscillating heat pipe improved the heat spreading capability when locally heated at the middle and end locations, enhancing the thermal performance by 27 percent and 21 percent, respectively, compared to a plain heat spreader. / Thesis / Master of Applied Science (MASc)
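The reported figures can be cross-checked against the standard definition of thermal resistance, R = ΔT/Q. The temperature difference derived below is implied by the reported numbers, not stated in the abstract, and the measurement locations for the two temperatures are assumptions.

```python
# Cross-check of the reported lowest thermal resistance using R = delta_T / Q.
# The 30 K figure is derived here, not quoted from the thesis.

def thermal_resistance(delta_t_k, heat_load_w):
    """Thermal resistance in K/W: R = delta_T / Q."""
    return delta_t_k / heat_load_w

q = 500.0          # heat load in W (reported)
r = 0.06           # lowest thermal resistance in K/W (reported)
delta_t = q * r    # implied evaporator-to-condenser temperature difference
print(delta_t)     # 30.0 K
```

At 500 W a resistance of 0.06 K/W thus corresponds to roughly a 30 K drop across the device, a useful sanity check when comparing heat pipes of different footprints.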
38

Computational Studies on Evolution and Functionality of Genomic Repeats

Alkan, Can 11 July 2005 (has links)
No description available.
39

Evaluating the Impacts of Climate and Stacked Conservation Practices on Nutrient Loss from Legacy Phosphorus Agricultural Fields

Crow, Rachelle Leah 09 August 2022 (has links)
No description available.
40

Magnetron Sputtering in Silicon Rich Film Deposition

Chelomentsev, Evgueni 08 1900 (has links)
The technique of magnetron sputtering is considered for the deposition of silicon rich films. Attention was paid to the possibility of producing light emitting silicon rich films by three different methods: reactive magnetron sputtering, co-sputtering from silicon/silicon dioxide targets, and a stacked film approach. It was found that the photoluminescence of the films deposited in the presence of hydrogen was much greater than that of the other samples.

A model of reactive sputter deposition was also developed and tested in this work. Results achieved in numerical simulation are compared with experimental data and show a good correlation.

An experimental unit for magnetron sputtering was designed and built based on a sphere-shaped vacuum chamber equipped with a commercially available magnetron sputter gun, an RF generator as the power supply unit, vacuum controllers, mass flow controllers, an in-situ thickness monitor, a temperature controller and other supplemental equipment.

Characterization of the deposited films was performed using profile measurement, ellipsometry, Rutherford backscattering measurements, X-ray diffraction measurements and photoluminescence measurements.

The achieved results show the principal possibility of obtaining microcrystalline silicon films and light-emitting silicon nanocrystals embedded in a silicon dioxide matrix by the sputter deposition technique. / Master of Science (MS)
