21

Toward The Frontiers Of Stacked Generalization Architecture For Learning

Mertayak, Cuneyt 01 September 2007 (has links) (PDF)
In pattern recognition, the “bias-variance” trade-off is a challenging issue that scientists have been working on for decades to obtain better generalization performance. Among many learning methods, two-layered homogeneous stacked generalization has been reported to be successful in the literature, in different problem domains such as object recognition and image annotation. The aim of this work is twofold. First, the problems of stacked generalization are attacked by a proposed novel architecture. Then, a set of success criteria for stacked generalization is studied. A serious drawback of the stacked generalization architecture is its sensitivity to the curse of dimensionality. In order to solve this problem, a new architecture named “unanimous decision” is designed. The performance of this architecture is shown to be comparable to that of the two-layered homogeneous stacked generalization architecture for a small number of classes, while it performs better than the stacked generalization architecture for a larger number of classes. Additionally, a new success criterion for the two-layered homogeneous stacked generalization architecture is proposed based on the individual properties of the descriptors used, and it is verified on synthetic datasets.
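For readers unfamiliar with the two-layered stacked generalization architecture referenced throughout these abstracts, the sketch below illustrates the idea: several base-layer classifiers are trained on separate descriptors, and a meta-layer classifier is trained on their out-of-fold predictions. This is a minimal, hedged illustration; the k-NN base learners, the descriptor split, and the synthetic data are assumptions, not the thesis's setup.

```python
# Minimal two-layer homogeneous stacked generalization sketch (illustrative only).
# Base layer: several k-NN classifiers, each trained on a different feature subset;
# meta layer: a classifier trained on the base layer's cross-validated class probabilities.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=600, n_features=12, n_classes=3,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Split features into "descriptors", one base classifier per descriptor (homogeneous layer).
descriptors = np.array_split(np.arange(X.shape[1]), 3)
bases = [KNeighborsClassifier(n_neighbors=5) for _ in descriptors]

# Meta-layer training data: out-of-fold class probabilities of each base classifier.
meta_tr = np.column_stack([
    cross_val_predict(clf, X_tr[:, cols], y_tr, cv=5, method="predict_proba")
    for clf, cols in zip(bases, descriptors)
])
for clf, cols in zip(bases, descriptors):
    clf.fit(X_tr[:, cols], y_tr)

meta = LogisticRegression(max_iter=1000).fit(meta_tr, y_tr)

meta_te = np.column_stack([clf.predict_proba(X_te[:, cols])
                           for clf, cols in zip(bases, descriptors)])
print("base accuracies:", [round(accuracy_score(y_te, clf.predict(X_te[:, cols])), 3)
                           for clf, cols in zip(bases, descriptors)])
print("stacked accuracy:", round(accuracy_score(y_te, meta.predict(meta_te)), 3))
```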
22

Performance Analysis Of Stacked Generalization

Ozay, Mete 01 September 2008 (has links) (PDF)
Stacked Generalization (SG) is an ensemble learning technique which aims to increase the performance of individual classifiers by combining them under a hierarchical architecture. This study consists of two major parts. In the first part, the performance of the Stacked Generalization technique is analyzed with respect to the performance of the individual classifiers and the content of the training data. In the second part, based on these findings, a new class of algorithms, called Meta-Fuzzified Yield Value (Meta-FYV), is introduced. The first part introduces and verifies two hypotheses by a set of controlled experiments to establish the performance gain of SG. The learning mechanisms by which SG achieves high performance are explored, and the relationship between the performance of the individual classifiers and that of SG is investigated. It is shown that if the samples in the training set are correctly classified by at least one base-layer classifier, then the generalization performance of SG is increased compared to the performance of the individual classifiers. In the second hypothesis, the effect of spurious samples, which are not correctly labeled by any of the base-layer classifiers, is investigated. In the second part of the thesis, six theorems are constructed based on the analysis of the feature spaces and the stacked generalization architecture. Based on the theorems and hypotheses, a new class of SG algorithms is proposed. The experiments are performed on both Corel data and synthetically generated data, using parallel programming techniques, on a high-performance cluster.
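The first hypothesis above ties SG's gain to whether every training sample is correctly classified by at least one base-layer classifier. A quick way to probe that coverage condition, and the fraction of "spurious" samples, is sketched below; the decision-tree base learners and synthetic data are placeholders, not the thesis's experimental setup.

```python
# Estimate the fraction of training samples correctly labeled by at least one base-layer
# classifier (coverage) and the fraction never labeled correctly (spurious samples).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, n_classes=2, random_state=1)

# Out-of-fold predictions of each base-layer classifier on the training set.
base_classifiers = [DecisionTreeClassifier(max_depth=d, random_state=1) for d in (2, 4, 8)]
preds = np.array([cross_val_predict(clf, X, y, cv=5) for clf in base_classifiers])

# A sample is "covered" if at least one base classifier labels it correctly;
# uncovered samples are the spurious samples discussed in the second hypothesis.
covered = (preds == y).any(axis=0)
print(f"covered by >=1 base classifier: {covered.mean():.1%}")
print(f"spurious (never correctly labeled): {(~covered).mean():.1%}")
```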
23

Architecting heterogeneous memory systems with 3D die-stacked memory

Sim, Jae Woong 21 September 2015 (has links)
The main objective of this research is to efficiently enable 3D die-stacked memory and heterogeneous memory systems. 3D die-stacking is an emerging technology that allows for large amounts of in-package high-bandwidth memory storage. Die-stacked memory has the potential to provide extraordinary performance and energy benefits for computing environments, from data-intensive to mobile computing. However, incorporating die-stacked memory into computing environments requires innovations across the system stack, from hardware to software. This dissertation presents several architectural innovations to practically deploy die-stacked memory into a variety of computing systems. First, this dissertation proposes using die-stacked DRAM as a hardware-managed cache in a practical and efficient way. The proposed DRAM cache architecture employs two novel techniques: hit-miss speculation and self-balancing dispatch. The proposed techniques virtually eliminate the hardware overhead of maintaining a multi-megabyte SRAM structure when scaling to gigabytes of stacked DRAM cache, and improve overall memory bandwidth utilization. Second, this dissertation proposes a DRAM cache organization that provides a high level of reliability for die-stacked DRAM caches in a cost-effective manner. The proposed DRAM cache uses error-correcting codes (ECC), strong checksums (CRCs), and dirty data duplication to detect and correct a wide range of stacked DRAM failures—from traditional bit errors to large-scale row, column, bank, and channel failures—within the constraints of commodity, non-ECC DRAM stacks. With only a modest performance degradation compared to a DRAM cache with no ECC support, the proposed organization can correct all single-bit failures and 99.9993% of all row, column, and bank failures. Third, this dissertation proposes architectural mechanisms to use large, fast, on-chip memory structures as part of memory (PoM) seamlessly through the hardware. The proposed design achieves the performance benefit of on-chip memory caches without sacrificing a large fraction of total memory capacity to serve as a cache. To achieve this, PoM implements the ability to dynamically remap regions of memory based on their access patterns and expected performance benefits. Lastly, this dissertation explores a new usage model for die-stacked DRAM involving a hybrid of caching and virtual memory support. In the common case, where the system's physical memory is not over-committed, die-stacked DRAM operates as a cache to provide performance and energy benefits to the system. However, when the workload's active memory demands exceed the capacity of the physical memory, the proposed scheme dynamically converts the stacked DRAM cache into a fast swap device to avoid the otherwise grievous performance penalty of swapping to disk.
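The hit-miss speculation idea mentioned above can be illustrated with a toy model: rather than keeping megabytes of SRAM tags on chip, a small structure guesses whether a request will hit in the stacked-DRAM cache and steers it accordingly, with tags stored in the DRAM cache resolving the truth afterward. The per-region saturating-counter predictor below is an assumption made for illustration, not the dissertation's actual design.

```python
# Toy model of hit-miss speculation for a die-stacked DRAM cache (illustrative only).
from collections import defaultdict
import random

REGION_BITS = 12  # predict at an assumed 4 KiB-region granularity

class HitMissPredictor:
    def __init__(self):
        self.counters = defaultdict(lambda: 2)  # 2-bit counters, start weakly "hit"

    def predict_hit(self, addr):
        return self.counters[addr >> REGION_BITS] >= 2

    def update(self, addr, was_hit):
        region = addr >> REGION_BITS
        c = self.counters[region]
        self.counters[region] = min(3, c + 1) if was_hit else max(0, c - 1)

def simulate(trace, cached_regions):
    pred = HitMissPredictor()
    correct = 0
    for addr in trace:
        guess = pred.predict_hit(addr)                   # speculative routing decision
        actual = (addr >> REGION_BITS) in cached_regions  # resolved later by in-DRAM tags
        correct += (guess == actual)
        pred.update(addr, actual)
    print(f"speculation accuracy: {correct / len(trace):.1%}")

if __name__ == "__main__":
    random.seed(0)
    cached = set(range(0, 64))  # regions currently resident in the DRAM cache
    trace = [random.randrange(0, 128) << REGION_BITS | random.randrange(4096)
             for _ in range(10_000)]
    simulate(trace, cached)
```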
24

The geologic and economic analysis of stacked CO₂ storage systems : a carbon management strategy for the Texas Gulf Coast

Coleman, Stuart Hedrick 21 December 2010 (has links)
Stacked storage systems are a viable carbon management operation, especially in regions with potential growth in CO₂ enhanced oil recovery (EOR) projects. Under a carbon-constrained environment, the industrial Texas Gulf Coast is an ideal area for development of stacked storage operations, with a characteristically high CO₂ intensity and an abundance of aging oil fields. The development of EOR along the Texas Gulf Coast is limited by CO₂ supply constraints. A stacked storage system is implemented with an EOR project to manage the temporal differences between the operation of a coal-fired power plant and EOR production. Currently, most EOR operations produce natural CO₂ from geologic formations. A switch to anthropogenic CO₂ sources would require an EOR operator to handle volumes of CO₂ beyond EOR usage. The use of CO₂ in an EOR operation is controlled and managed to maximize oil production, but increasing injection rates to handle the volume of CO₂ captured from a coal plant can decrease oil production efficiency. With stacked storage operations, a CO₂ storage reservoir is implemented with an EOR project to maintain injection capacity equivalent to a coal plant's emissions under a carbon-constrained environment. By adding a CO₂ storage operation, revenue can still be generated from EOR production, but it is considerably less than from operating an EOR project alone. The challenge for an efficient stacked storage project is to optimize oil production and maximize profits while minimizing the revenue reduction of pure carbon sequestration. There is an abundance of saline aquifers along the Texas Gulf Coast, including the Wilcox, Vicksburg, and Miocene formations. To make a stacked storage system more viable and reduce storage costs, maximizing injectivity is critical, as storage formations are evaluated on a cost-per-ton-injected basis. This cost-per-ton-injected criterion, also referred to as injection efficiency, incorporates reservoir injectivity and depth-dependent drilling costs to determine the most effective storage formation to incorporate with an EOR project. With regionally adequate depth to maximize injectivity while maintaining reasonable drilling costs, the Vicksburg formation is typically the preferred storage reservoir in a stacked storage system along the Texas Gulf Coast. Of the eleven oil fields analyzed on a net present value basis, the Hastings field has the greatest potential for both EOR and stacked storage operations.
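The cost-per-ton-injected criterion can be made concrete with a back-of-the-envelope sketch that trades off depth-dependent drilling cost against injectivity. All depths, costs, well counts, and injectivity values below are made-up placeholders for illustration, not figures from the thesis.

```python
# Illustrative cost-per-ton-injected ("injection efficiency") comparison between two
# hypothetical candidate storage formations. All numbers are placeholders, not thesis data.
def cost_per_ton(depth_m, cost_per_m, wells, injectivity_t_per_day_per_well, years):
    drilling_cost = depth_m * cost_per_m * wells                      # depth-dependent cost
    tons_injected = injectivity_t_per_day_per_well * 365 * years * wells
    return drilling_cost / tons_injected

candidates = {
    "shallower, lower injectivity": dict(depth_m=1500, cost_per_m=2000, wells=4,
                                         injectivity_t_per_day_per_well=800, years=20),
    "deeper, higher injectivity":   dict(depth_m=2800, cost_per_m=2000, wells=2,
                                         injectivity_t_per_day_per_well=2500, years=20),
}
for name, params in candidates.items():
    print(f"{name}: ${cost_per_ton(**params):.2f} per ton injected")
```

Even with higher per-well drilling costs, the deeper formation can come out ahead on a cost-per-ton basis if its injectivity is high enough, which is the trade-off the thesis uses to rank storage formations.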
25

Thermal and mechanical analysis of interconnect structures in 3D stacked packages

Wakil, Jamil Abdul 07 January 2011 (has links)
Physical scaling limits of microelectronic devices and the need to improve electrical performance have driven significant research and development into 3D architectures. The development of die stacks in first-level packaging is one of the more viable short-term options for improved performance. Placement of memory die above or below processors in a traditional flip-chip C4 package with through-silicon vias (TSVs) has significant benefits in reducing data and power transmission paths. However, with the electrical performance benefits come great thermal and mechanical challenges. There are two key objectives for this work. The first is understanding the die-die interface resistance, R_dd, composed of the back-end-of-line (BEOL) layers and micro-C4 interconnects. The interfacial resistance between BEOL material layers, the impact of TSVs, and the impact of strain on R_dd are subtopics. The second key objective is understanding package thermal and mechanical behavior under operating conditions, such as local thermal disturbances. To date, these topics have not been adequately addressed in the literature. It is found that R_dd can be affected by TSVs, and that the interfacial contributions predicted by theoretical sub-continuum models can be significantly different from measurements. Using validated finite element models, the significance of the power distribution and R_dd on the temporal responses of 2D vs. 3D packages is highlighted. The results suggest that local thermal hotspots can greatly exacerbate the thermal penalty due to R_dd, and that no peaks in stress arise in the transient period from power on to power off.
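To see why the die-die interface resistance R_dd matters thermally, a one-dimensional series-resistance estimate is sketched below for a two-die stack in which heat from the lower die must cross R_dd on its way to the heat spreader. All resistance and power values are illustrative placeholders, not measurements from the thesis, and the 1-D network is a gross simplification of the finite element models used in the work.

```python
# Back-of-the-envelope 1-D estimate of how R_dd shifts the lower-die temperature in a
# two-die stack. Values below are placeholders for illustration only.
def lower_die_temp(T_ambient, q_lower, q_upper, R_dd, R_upper_die, R_spreader_to_ambient):
    # Both dies' power shares the spreader-to-ambient path; the lower die's power must
    # additionally cross the die-die interface (R_dd) and the upper die.
    T_spreader = T_ambient + (q_lower + q_upper) * R_spreader_to_ambient
    return T_spreader + q_lower * (R_dd + R_upper_die)

for R_dd in (0.05, 0.2, 0.5):  # K/W, swept to show sensitivity
    T = lower_die_temp(T_ambient=45.0, q_lower=60.0, q_upper=10.0,
                       R_dd=R_dd, R_upper_die=0.1, R_spreader_to_ambient=0.25)
    print(f"R_dd = {R_dd:.2f} K/W -> lower-die temperature ~ {T:.1f} C")
```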
26

Thermal management of 3-D stacked chips using thermoelectric and microfluidic devices

Redmond, Matthew J. 13 January 2014 (has links)
This thesis employs computational and experimental methods to explore hotspot cooling and high heat flux removal from a 3-D stacked chip using thermoelectric and microfluidic devices. Stacked chips are expected to improve microelectronics performance, but present severe thermal management challenges. The thesis provides an assessment of both thermoelectric and microfluidic technologies and provides guidance for their implementation in 3-D stacked chips. A detailed 3-D thermal model of a stacked electronic package with two dies and four ultrathin integrated TECs is developed to investigate the efficacy of TECs in hotspot cooling for 3-D technology. The numerical analysis suggests that TECs can be used for on-demand cooling of hotspots in a 3-D stacked chip architecture. A strong vertical coupling is observed between the top and bottom TECs, and it is found that the bottom TECs can detrimentally heat the top hotspots. As a result, TECs need to be carefully placed inside the package to avoid such undesired heating. Thermal contact resistances between dies, inside the TEC module, and between the TEC and heat spreader are shown to significantly affect TEC performance. TECs are most effective for cooling localized hotspots, but microchannels are advantageous for cooling large background heat fluxes. In the present work, the results of heat transfer and pressure drop experiments in the microchannels with water as the working fluid are presented and compared to previous microchannel experiments and CFD simulations. Heat removal rates greater than 100 W/cm² are demonstrated with these microchannels, with a pressure drop of 75 kPa or less. A novel empirical correlation modeling method is proposed, which uses finite element modeling to model conduction in the channel walls and substrate, coupled with an empirical correlation to determine the convection coefficient. This empirical correlation modeling method is compared to resistor network and CFD modeling. The proposed modeling method produced more accurate results than resistor network modeling, while solving 60% faster than a conjugate heat transfer model using CFD. The results of this work demonstrate that microchannels have the ability to remove high heat fluxes from microelectronic packages using water as a working fluid. Additionally, TECs can locally cool hotspots, but must be carefully placed to avoid undesired heating. Future work should focus on overcoming practical challenges, including fabrication, cost, and reliability, which are preventing these technologies from being fully leveraged.
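The kind of first-order estimate that underlies the empirical-correlation approach is sketched below: a constant-Nusselt laminar correlation gives the convection coefficient, from which heat removal and pressure drop follow. The geometry, velocity, temperature difference, and water properties are assumptions, not the thesis's test setup, and the model neglects the coolant temperature rise and entrance effects.

```python
# First-order water-cooled microchannel estimate using a constant-Nusselt laminar
# correlation (Nu ~ 4.36 for fully developed laminar flow with constant heat flux).
# All geometry and flow values are illustrative assumptions.

# Water properties near 40 C (approximate)
k, rho, mu = 0.63, 992.0, 6.5e-4               # W/m-K, kg/m^3, Pa-s

# Assumed channel geometry and flow
w, h_ch, L, n_ch = 100e-6, 300e-6, 10e-3, 50   # width, height, length [m], channel count
u = 1.0                                        # mean velocity per channel [m/s]
dT = 20.0                                      # assumed wall-to-fluid temperature difference [K]

D_h = 2 * w * h_ch / (w + h_ch)                # hydraulic diameter
Re = rho * u * D_h / mu                        # Reynolds number (should remain laminar)
h_conv = 4.36 * k / D_h                        # convection coefficient [W/m^2-K]

A_wet = n_ch * 2 * (w + h_ch) * L              # total wetted area of all channels
q_removed = h_conv * A_wet * dT                # heat removed [W]

f = 64.0 / Re                                  # circular-duct friction factor as a rough stand-in
dP = f * (L / D_h) * 0.5 * rho * u ** 2        # pressure drop [Pa]

print(f"Re = {Re:.0f}, h = {h_conv / 1e3:.1f} kW/m^2-K")
print(f"heat removed ~ {q_removed:.0f} W over a ~1 cm^2 footprint")
print(f"pressure drop ~ {dP / 1e3:.1f} kPa")
```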
27

EFFECT of the LENGTH of the SUPERFICIAL PLATE in STACKED VETERINARY CUTTABLE PLATE CONSTRUCTS: An IN VITRO STUDY on the BENDING STRENGTH and STIFFNESS, and on the STRAIN DISTRIBUTION

Bichot, Sylvain 06 January 2012 (has links)
This thesis investigated the effect of the length of the superficial plate on the mechanical properties of a stacked-plate construct made with 2.0-2.7 Veterinary Cuttable Plates™ (VCP). Stacking VCP increases construct stiffness compared to using a single VCP, but increases stress protection and concentrates stress at the extremities of the implants. We hypothesized that shortening the superficial plate would not reduce the stiffness of the construct and would reduce stress concentration at the plate ends. A fracture gap model was created with a bone surrogate (copolymer acetal rods), stacked 2.0-2.7 VCP, and 2.7 screws. The constructs consisted of an 11-hole VCP bottom plate and a 5-, 7-, 9- or 11-hole VCP superficial plate. In phase one, five of each construct were randomly tested to failure in 4-point bending and axial loading. Stiffness, load at yield, and work until failure were measured. In phase two, strains were recorded during elastic deformation for each configuration. In both testing methods, stiffness, load at yield, and work to failure progressively decreased as the length of the superficial plate decreased. No statistically significant differences were obtained for load at yield in 4-point bending or for work to failure in axial loading. The strain within the implant over the gap increased as the length of the superficial plate decreased. Shortening the superficial plate reduces the stiffness and strength of the construct and decreases stress concentration at the implant ends. As the cross-section of the implant covering the gap remained constant, friction between the plates may play a role in the mechanical properties of stacked VCP. / Synthes Canada - OVC Pet Trust Fund
28

FPGA BASED PARALLEL IMPLEMENTATION OF STACKED ERROR DIFFUSION ALGORITHM

Kora Venugopal, Rishvanth 01 January 2010 (has links)
Digital halftoning is a crucial technique used in digital printers to convert a continuous-tone image into a pattern of black and white dots. Halftoning is used because printers have a limited availability of inks and cannot reproduce all the color intensities in a continuous image. Error diffusion is a halftoning algorithm that iteratively quantizes pixels in a neighborhood-dependent fashion. This thesis focuses on the development and design of a parallel, scalable hardware architecture for high-performance implementation of a high-quality Stacked Error Diffusion algorithm. The algorithm is described in 'C' and requires significant processing time when implemented on a conventional CPU. Thus, a new hardware processor architecture is developed to implement the algorithm, and it is implemented and tested on a Xilinx Virtex 5 FPGA chip. There is an extraordinary decrease in the run time of the algorithm when run on the newly proposed parallel architecture implemented in FPGA technology, compared to execution on a single CPU. The new parallel architecture is described using the Verilog Hardware Description Language. Post-synthesis and post-implementation performance-based Hardware Description Language (HDL) simulation validation of the new parallel architecture is achieved using the ModelSim CAD simulation tool.
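The thesis's Stacked Error Diffusion variant is not reproduced here, but the sketch below shows classic Floyd-Steinberg error diffusion, the kind of neighborhood-dependent, serially data-dependent loop that makes a straightforward CPU implementation slow and motivates a parallel FPGA architecture. The threshold, weights, and test image are the standard textbook choices, assumed for illustration.

```python
# Classic Floyd-Steinberg error diffusion (illustrative baseline, not the thesis's
# Stacked Error Diffusion algorithm). Each pixel is thresholded to black/white and the
# quantization error is pushed onto unprocessed neighbors, which makes the loop serially
# data-dependent.
import numpy as np

def error_diffuse(gray):
    """gray: 2-D float array in [0, 255]; returns a 0/255 halftone of the same shape."""
    img = gray.astype(np.float64).copy()
    out = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128.0 else 0.0
            out[y, x] = new
            err = old - new
            # Distribute the quantization error to the right and lower neighbors.
            if x + 1 < w:               img[y,     x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               img[y + 1, x    ] += err * 5 / 16
            if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
    return out.astype(np.uint8)

if __name__ == "__main__":
    ramp = np.tile(np.linspace(0, 255, 64), (16, 1))   # horizontal gray ramp
    halftone = error_diffuse(ramp)
    print("mean input:", round(ramp.mean(), 1), "mean output:", round(halftone.mean(), 1))
```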
29

Um método para deduplicação de metadados bibliográficos baseado no empilhamento de classificadores / A method for bibliographic metadata deduplication based on stacked generalization

Borges, Eduardo Nunes January 2013 (has links)
Duplicated bibliographic metadata are semantically equivalent records, i.e., references that describe the same publication. Identifying duplicated bibliographic metadata in one or more digital libraries is an essential task to ensure the quality of services such as search, navigation, and content recommendation. Although many metadata standards have been proposed, they do not completely solve interoperability problems because, even if there is a mapping between different metadata schemas, there may be variations in the content representation. Most work proposed to identify duplicated records applies one or more functions to certain fields in order to capture the similarity between records. However, this requires choosing a threshold that defines whether two records are sufficiently similar to be considered semantically equivalent or duplicated. Recent studies treat record deduplication as a data classification problem, in which a predictive model is trained to estimate the real-world object to which a record refers. The main goal of this thesis is the development of an effective and automatic method to identify duplicated bibliographic metadata, combining multiple supervised classifiers, without any human intervention in the setting of similarity thresholds. Computationally cheap similarity functions, designed specifically for the context of digital libraries, are applied to the training set. The scores returned by these functions are used to train multiple heterogeneous classification models, i.e., using learning algorithms based on trees, rules, artificial neural networks, and probabilistic models. The learned classifiers are combined by a stacked generalization strategy to improve the deduplication result through the heterogeneous knowledge acquired individually by each learning algorithm. The final model is applied to the candidate matching pairs produced by an efficient two-level blocking strategy. The proposed solution is based on the hypothesis that stacking supervised classifiers can improve the quality of deduplication when compared to other combination strategies. The experimental evaluation shows that this hypothesis is confirmed when the proposed method is compared with selecting the best classifier and with majority voting. The impact of classifier diversity on the stacking result and the failure cases of the proposed method are also analyzed.
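A minimal sketch of the stacking idea applied to deduplication is shown below, using scikit-learn's StackingClassifier over per-field similarity scores. The similarity function, the tiny training pairs, and the particular learners are assumptions for illustration; the thesis defines its own low-cost similarity functions and a two-level blocking step to generate candidate pairs before classification.

```python
# Stacking heterogeneous classifiers over per-field similarity scores for record
# deduplication (illustrative sketch, not the thesis's implementation).
from difflib import SequenceMatcher
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

def sim(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def pair_features(r1, r2):
    # One similarity score per metadata field: title, authors, venue, year.
    return [sim(r1[0], r2[0]), sim(r1[1], r2[1]), sim(r1[2], r2[2]),
            1.0 if r1[3] == r2[3] else 0.0]

# Tiny hand-made training set of candidate pairs: (record_a, record_b, is_duplicate)
pairs = [
    (("Stacked generalization", "Wolpert D", "Neural Networks", "1992"),
     ("Stacked Generalization", "D. Wolpert", "Neural Netw.", "1992"), 1),
    (("Stacked generalization", "Wolpert D", "Neural Networks", "1992"),
     ("Bagging predictors", "Breiman L", "Machine Learning", "1996"), 0),
    (("Bagging predictors", "Breiman L", "Machine Learning", "1996"),
     ("Bagging Predictors", "Leo Breiman", "Mach. Learn.", "1996"), 1),
    (("Random forests", "Breiman L", "Machine Learning", "2001"),
     ("Stacked generalization", "Wolpert D", "Neural Networks", "1992"), 0),
]
X = np.array([pair_features(a, b) for a, b, _ in pairs])
y = np.array([label for _, _, label in pairs])

# Heterogeneous base layer (trees, probabilistic, neural network) combined by stacking.
stack = StackingClassifier(
    estimators=[("forest", RandomForestClassifier(n_estimators=50, random_state=0)),
                ("bayes", GaussianNB()),
                ("mlp", MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0))],
    final_estimator=LogisticRegression(),
    cv=2,
)
stack.fit(X, y)

query = pair_features(("Random Forests", "L. Breiman", "Mach. Learn.", "2001"),
                      ("Random forests", "Breiman L", "Machine Learning", "2001"))
print("duplicate probability:", stack.predict_proba([query])[0][1])
```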
30

A generic processing in memory cycle accurate simulator under hybrid memory cube architecture / Um simulador genérico ciclo-acurado para processamento em memória baseado na arquitetura da hybrid memory cube

Oliveira Junior, Geraldo Francisco de January 2017 (has links)
PIM - a technique in which computational elements are added close to, or ideally inside, memory devices - was one of the approaches proposed during the 1990s to mitigate the notorious memory wall problem. Nowadays, with the maturation of 3D integration technologies, a new landscape for novel PIM architectures can be investigated. To explore this new scenario, researchers rely on software simulators to navigate the design evaluation space. Today, most of the works targeting PIM implement in-house simulators to perform their experiments. However, this methodology can hurt overall productivity and preclude reproducibility. In this work, we present the development of a precise, modular, and parameterizable PIM simulation environment. Our simulator, named CLAPPS, targets the HMC architecture, a popular 3D-stacked memory widely employed in state-of-the-art PIM accelerators. We have designed our mechanism using the SystemC programming language, which natively allows parallel simulation. The primary contribution of our work lies in developing a user-friendly interface that allows easy exploration of PIM architectures. To evaluate our system, we have implemented a PIM module that can perform vector operations with different operand sizes using the proposed set of tools.
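As a rough illustration of why the evaluated PIM vector operations are attractive in an HMC-like memory, the sketch below compares the off-chip link traffic of a host-executed vector add against the same operation offloaded to per-vault PIM units. The vault count, element size, and command sizes are assumptions for illustration; the actual CLAPPS simulator is a cycle-accurate SystemC model and is not reproduced here.

```python
# Toy off-chip traffic comparison: vector add on the host vs. on per-vault PIM units
# inside an HMC-like stacked memory. All sizes are illustrative placeholders.
N_VAULTS = 32          # HMC-like vault count (assumed)
ELEM_BYTES = 8         # 64-bit elements
N = 1_000_000          # vector length

# Host execution: operands A and B stream out of the cube, result C streams back in.
host_offchip_bytes = 3 * N * ELEM_BYTES

# PIM execution: operands stay inside the cube; only a command and a completion
# notification cross the link per vault (rough placeholder sizes).
pim_offchip_bytes = N_VAULTS * (16 + 16)

elems_per_vault = -(-N // N_VAULTS)   # ceiling division
print(f"host off-chip traffic: {host_offchip_bytes / 2**20:.1f} MiB")
print(f"PIM  off-chip traffic: {pim_offchip_bytes} bytes "
      f"({elems_per_vault} elements handled by each of {N_VAULTS} vaults in parallel)")
```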
