931

Lithography Hotspot Detection

Park, Jea Woo 21 July 2017 (has links)
The lithography process for chip manufacturing has played a critical role in keeping Moore's law alive. Even though the wavelength used in the process is larger than the actual device feature size, which makes it difficult to transfer layout patterns from mask to wafer, lithographers have developed various techniques such as Resolution Enhancement Techniques (RETs), multi-patterning, and Optical Proximity Correction (OPC) to overcome the sub-wavelength lithography gap. However, feature sizes have now scaled down to the point where manufacturing constraints must be applied in the early design phase, before the physical design layout is generated; Design for Manufacturing (DFM) is no longer optional. In terms of the lithography process, circuit designers should make their designs as litho-friendly as possible. A lithography hotspot is a location in a design layout that is susceptible to a fatal pinching (open circuit) or bridging (short circuit) error due to the poor printability of certain patterns. To avoid such undesirable patterns, hotspots must be found in the early design stage. One way to find hotspots is to run lithography simulation on a layout, but lithography simulation is too computationally expensive for full-chip designs. Pattern matching and machine learning (ML) techniques have therefore been suggested as alternative, practical hotspot detection methods. Pattern matching is fast and accurate; a large hotspot pattern library is used to find hotspots, but it cannot detect hotspots it has not seen before. In contrast, ML is effective at finding previously unseen hotspots, but it may produce false positives. This research presents a novel geometric pattern matching methodology using edge-driven dissected rectangles, together with litho-aware machine learning, for hotspot detection.

1. Edge Driven Dissected Rectangles (EDDR) based pattern matching: EDDR pattern matching employs the concept of member rectangles inside a pattern bounding box. Unlike previous pattern matching approaches, the idea proposed in this thesis uses simple Design Rule Check (DRC) operations to create member rectangles for pattern matching. Our approach shows significant speedup over a state-of-the-art commercial pattern matching tool as well as other methods. Because it relies on simple DRC edge operation rules, it is flexible enough to support fuzzy and partial pattern matching, which enables checking of previously unseen hotspots in addition to exact pattern matches.

2. Litho-aware machine learning: A new methodology for ML-based hotspot detection harnesses lithography information to build an SVM (Support Vector Machine) model during the learning process. Unlike previous research that uses only geometric information or requires a post-OPC (Optical Proximity Correction) mask, our method utilizes detailed optical information but bypasses the post-OPC mask by sampling latent image intensity and using those samples to train the SVM model. Our lithography-aware machine learning guides the learning process using actual lithography information combined with lithography domain knowledge. While previous SVM models for hotspot identification used only geometric information, which is not directly relevant to the lithographic process, our SVM model was trained with lithographic information that has a direct impact on pinching and bridging hotspots.
Furthermore, rather than creating a monolithic SVM trying to cover all hotspot patterns, we utilized lithography domain knowledge and separated hotspot types such as HB (Horizontal Bridging), VB (Vertical Bridging), HP (Horizontal Pinching), and VP (Vertical Pinching) for our SVM model. Our results demonstrated high accuracy, a low false-alarm rate, and faster runtime compared with methods that require a post-OPC mask. We also showed the importance of lithography domain knowledge when training ML models for hotspot detection.
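A minimal sketch of the per-type, litho-aware SVM idea described in this abstract. The feature extraction step (sampling latent-image intensity around each pattern) is assumed to have been done already, and the field names and kernel choice are illustrative assumptions, not the thesis' actual code.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

HOTSPOT_TYPES = ["HB", "VB", "HP", "VP"]  # horizontal/vertical bridging/pinching

def train_per_type_svms(intensity_samples, labels_by_type):
    """Train one SVM per hotspot type instead of a single monolithic model.

    intensity_samples: (n_patterns, n_sample_points) latent-image intensities
    labels_by_type: dict mapping hotspot type -> (n_patterns,) binary labels
    """
    models = {}
    for hotspot_type in HOTSPOT_TYPES:
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        clf.fit(intensity_samples, labels_by_type[hotspot_type])
        models[hotspot_type] = clf
    return models

def predict_hotspot(models, sample):
    """A pattern is flagged if any type-specific SVM predicts a hotspot."""
    x = np.asarray(sample).reshape(1, -1)
    return any(m.predict(x)[0] == 1 for m in models.values())
```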
932

Bounding Box Improvement with Reinforcement Learning

Cleland, Andrew Lewis 12 June 2018 (has links)
In this thesis, I explore a reinforcement learning technique for improving bounding box localizations of objects in images. The model takes as input a bounding box already known to overlap an object and aims to improve the fit of the box through a series of transformations that shift the location of the box by translation, or change its size or aspect ratio. Over the course of these actions, the model adapts to new information extracted from the image. This active localization approach contrasts with existing bounding-box regression methods, which extract information from the image only once. I implement, train, and test this reinforcement learning model using data taken from the Portland State Dog-Walking image set. The model balances exploration with exploitation during training using an ε-greedy policy. I find that the performance of the model is sensitive to the ε-greedy configuration used during training, performing best when ε is set to very low values over the course of training. With ε = 0.01, I find the algorithm can improve bounding boxes in about 78% of test cases for the "dog" object category, and 76% for the "human" category.
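A minimal sketch of the ε-greedy box-refinement loop described above, assuming a learned action-value function `q_net` and a feature extractor `features`; the action set and step sizes are illustrative assumptions rather than the thesis' actual implementation.

```python
import random
import numpy as np

ACTIONS = ["left", "right", "up", "down", "wider", "narrower", "taller", "shorter", "stop"]

def apply_action(box, action, step=0.1):
    """Apply one transformation to a box given as (x, y, w, h)."""
    x, y, w, h = box
    if action == "left":       x -= step * w
    elif action == "right":    x += step * w
    elif action == "up":       y -= step * h
    elif action == "down":     y += step * h
    elif action == "wider":    w *= 1 + step
    elif action == "narrower": w *= 1 - step
    elif action == "taller":   h *= 1 + step
    elif action == "shorter":  h *= 1 - step
    return (x, y, w, h)

def refine_box(box, q_net, features, epsilon=0.01, max_steps=20):
    """Epsilon-greedy refinement: explore with probability epsilon, else act greedily."""
    for _ in range(max_steps):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)        # explore
        else:
            q_values = q_net(features(box))        # exploit learned values
            action = ACTIONS[int(np.argmax(q_values))]
        if action == "stop":
            break
        box = apply_action(box, action)
    return box
```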
933

An Exploration of Linear Classifiers for Unsupervised Spiking Neural Networks with Event-Driven Data

Chavez, Wesley 12 June 2018 (has links)
Object recognition in video has seen giant strides in accuracy in the last few years, a testament to the computational capacity of deep convolutional neural networks. However, this computational capacity of software-based neural networks comes with high power consumption compared to that of some spiking neural networks (SNNs): up to 300,000 times more energy per synaptic event than IBM's TrueNorth chip, for example. SNNs are also well-suited to exploit the precise timing of event-driven image sensors, which transmit asynchronous "events" only when the luminance of a pixel changes above or below a threshold value. The combination of event-based imagers and SNNs becomes a straightforward way to achieve low power consumption in object recognition tasks. This thesis compares different linear classifiers for two low-power, hardware-friendly, spiking, unsupervised neural network architectures, SSLCA and HFirst, in response to asynchronous event-based data, and explores their ability to learn and recognize patterns from two event-based image datasets, N-MNIST and CIFAR10-DVS. By performing a grid search over important SNN and classifier hyperparameters, we also explore how to improve the classification performance of these architectures. Results show that a softmax regression classifier exhibits modest accuracy gains (0.73%) over the next-best-performing linear support vector machine (SVM), and considerably outperforms a single-layer perceptron (by 5.28%) when classification performance is averaged over all datasets and spiking neural network architectures with varied hyperparameters. Min-max normalization of the inputs to the linear classifiers aids classification accuracy, except in the case of the single-layer perceptron classifier. We also report the highest classification accuracy to date for spiking convolutional networks on N-MNIST and CIFAR10-DVS, increasing it from 97.77% to 97.82% and from 29.67% to 31.76%, respectively. These findings are relevant for any system employing unsupervised SNNs to extract redundant features from event-driven data for recognition.
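A minimal sketch of the linear-classifier comparison described above, assuming the SNN feature vectors (e.g. spike counts per output neuron) have already been extracted; dataset loading and hyperparameter grids are omitted and the specific solver settings are illustrative assumptions.

```python
from sklearn.linear_model import LogisticRegression, Perceptron
from sklearn.svm import LinearSVC
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import accuracy_score

def compare_linear_classifiers(X_train, y_train, X_test, y_test, normalize=True):
    """Compare softmax regression, a linear SVM, and a single-layer perceptron."""
    if normalize:  # min-max normalization of the classifier inputs
        scaler = MinMaxScaler().fit(X_train)
        X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

    classifiers = {
        "softmax_regression": LogisticRegression(max_iter=1000),
        "linear_svm": LinearSVC(),
        "single_layer_perceptron": Perceptron(max_iter=1000),
    }
    results = {}
    for name, clf in classifiers.items():
        clf.fit(X_train, y_train)
        results[name] = accuracy_score(y_test, clf.predict(X_test))
    return results
```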
934

Knowing Without Knowing: Real-Time Usage Identification of Computer Systems

Hawana, Leila Mohammed 18 January 2019 (has links)
Contemporary computers attempt to understand a user's actions and preferences in order to make decisions that better serve the user. In pursuit of this goal, computers can make observations that range from simple pattern recognition to listening in on conversations without the device being intentionally active. While these developments are incredibly useful for customization, the inherent security risks involving personal data are not always worth it. This thesis tackles one issue in this domain, computer usage identification, and presents a solution that identifies the high-level usage of a system at any given moment without looking at any personal data. This solution, which I call "knowing without knowing," gives the computer just enough information to better serve the user without knowing any data that compromises privacy. With prediction accuracy at 99% and system overhead below 0.5%, this solution is not only reliable but also scalable, providing valuable information that will lead to newer, less invasive solutions in the future.
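The abstract does not specify which signals the classifier uses, so the sketch below is an assumption chosen only to illustrate the "no personal data" constraint: high-level usage is inferred from aggregate system counters (CPU, memory, disk, network) rather than from file contents or user input. The labels and feature set are hypothetical.

```python
import psutil
from sklearn.ensemble import RandomForestClassifier

USAGE_CLASSES = ["idle", "browsing", "media", "development"]  # hypothetical labels

def sample_metrics():
    """Collect only aggregate, non-personal system counters."""
    disk = psutil.disk_io_counters()
    net = psutil.net_io_counters()
    return [
        psutil.cpu_percent(interval=1.0),
        psutil.virtual_memory().percent,
        disk.read_bytes,
        disk.write_bytes,
        net.bytes_sent,
        net.bytes_recv,
    ]

def train_usage_classifier(samples, labels):
    """samples: list of metric vectors; labels: one usage class per sample."""
    clf = RandomForestClassifier(n_estimators=100)
    clf.fit(samples, labels)
    return clf
```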
935

Developing Toward Generality: Combating Catastrophic Forgetting with Developmental Compression

Beaulieu, Shawn L 01 January 2018 (has links)
General intelligence is the exhibition of intelligent behavior across multiple problems in a variety of settings, however intelligence is defined and measured. Endemic in approaches to realize such intelligence in machines is catastrophic forgetting, in which sequential learning corrupts knowledge obtained earlier in the sequence, or in which tasks antagonistically compete for system resources. Methods for obviating catastrophic forgetting have either sought to identify and preserve features of the system necessary to solve one problem when learning to solve another, or to enforce modularity such that minimally overlapping sub-functions contain task-specific knowledge. While successful in some domains, both approaches scale poorly because they require larger architectures as the number of training instances grows, causing different parts of the system to specialize for separate subsets of the data. Presented here is a method called developmental compression that addresses catastrophic forgetting in the neural networks of embodied agents. It exploits the mild impact of developmental mutations to lessen adverse changes to previously evolved capabilities and 'compresses' specialized neural networks into a single generalized one. In the absence of domain knowledge, developmental compression produces systems that avoid overt specialization, alleviating the need to engineer a bespoke system for every task permutation, and does so in a way that suggests better scalability than existing approaches. This method is validated on a robot control problem and may be extended to other machine learning domains in the future.
936

Exploring the Impact of Challenging Behaviors on Treatment Efficacy in Autism Spectrum Disorder

Hoag, Juliana 29 May 2019 (has links)
The focus of this study was to explore the impact of challenging behaviors on Applied Behavior Analysis (ABA) treatment in Autism Spectrum Disorder (ASD). The prevalence of ASD is on the rise, so it is important that we understand how patients are responding to treatment. In this study, we cluster patients (N=854) based on their eight observed challenging behaviors using k-means, a machine learning algorithm, and then perform a multiple linear regression analysis to find significant differences in the average number of exemplars mastered across clusters. The goal of this study was to expand the research in the area of ABA treatment for ASD and to provide insight helpful for creating personalized therapeutic interventions with maximum efficacy and minimum time and cost for individuals.
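A minimal sketch of the two-step analysis pipeline described above: cluster patients on eight challenging-behavior scores with k-means, then regress the outcome on cluster membership. The column names, number of clusters, and outcome variable name are illustrative assumptions, not the study's actual values.

```python
import pandas as pd
from sklearn.cluster import KMeans
import statsmodels.api as sm

def cluster_and_regress(df, behavior_cols, outcome_col="exemplars_mastered", k=4):
    # Step 1: k-means clustering on the eight observed challenging behaviors
    kmeans = KMeans(n_clusters=k, n_init=10, random_state=0)
    df = df.copy()
    df["cluster"] = kmeans.fit_predict(df[behavior_cols])

    # Step 2: multiple linear regression of the outcome on cluster dummies
    dummies = pd.get_dummies(df["cluster"], prefix="cluster", drop_first=True, dtype=float)
    X = sm.add_constant(dummies)
    model = sm.OLS(df[outcome_col], X).fit()
    return kmeans, model  # model.summary() shows differences between clusters
```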
937

Agrometeorological models for pest and disease prediction in Coffea arabica L. in Minas Gerais /

Aparecido, Lucas Eduardo de Oliveira. January 2019 (has links)
Advisor: Glauco de Souza Rolim / Abstract: Coffee is the most consumed beverage in the world, but phytosanitary problems are among the main causes of reduced productivity and quality. The application of foliar fungicides and insecticides is the most common strategy for controlling these diseases and pests, depending on their intensity in a region. This traditional method can be improved by using alert systems based on models that estimate disease and pest indices. This work has as OBJECTIVES: A) To calibrate the meteorological variables air temperature and rainfall from the European Centre for Medium-Range Weather Forecasts (ECMWF) against the real surface data measured by the national meteorological system (INMET) for the state of Minas Gerais; B) To evaluate which meteorological elements, and at what time, have the greatest influence on the main pests (coffee berry borer and coffee leaf miner) and diseases (coffee rust and cercosporiosis) of Coffea arabica in the main coffee regions of the South of Minas Gerais and Cerrado Mineiro; C) To develop agrometeorological models for pest and disease prediction as a function of the meteorological variables of the South of Minas Gerais and Cerrado Mineiro, using machine learning algorithms with sufficient temporal anticipation for decision making. MATERIAL AND METHODS: To achieve goal "A" we used monthly data of air temperature (T, ºC) and rainfall (P, mm) from the ECMWF and INMET from 1979 to 2015. Potential evapotranspiration was estimated by Thornthwaite (1948) and the water balance by Thornthwaite and Mathe... (Complete abstract: click electronic access below) / Doctorate
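A minimal sketch of the agrometeorological modelling idea described above: predict a monthly disease or pest index from weather variables of preceding months so that the forecast is available with some lead time. The lag window, target column name, and choice of a random forest are illustrative assumptions, not the thesis' actual models.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def build_lagged_features(df, lags=(1, 2, 3), cols=("temperature", "rainfall")):
    """df: monthly records with weather columns and a 'disease_index' target."""
    out = df.copy()
    for col in cols:
        for lag in lags:
            out[f"{col}_lag{lag}"] = out[col].shift(lag)  # weather from previous months
    return out.dropna()

def fit_forecast_model(df):
    data = build_lagged_features(df)
    feature_cols = [c for c in data.columns if "_lag" in c]
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(data[feature_cols], data["disease_index"])
    return model, feature_cols
```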
938

Application of artificial neural networks to genome-enabled prediction in Nellore cattle /

Ribeiro, André Mauric Frossard January 2019 (has links)
Advisor: Henrique Nunes de Oliveira [UNESP] / Abstract: In recent years, the fast development of high-throughput sequencing technologies has enabled large-scale genotyping of thousands of genetic markers. Several statistical models have been developed for predicting breeding values for complex traits using the information on dense molecular markers, pedigrees, or both. These models include, among others, artificial neural networks (ANNs), which have been widely used in prediction problems in other fields of application and, more recently, for genome-enabled prediction. The objective of this work was to evaluate the performance of artificial neural networks in the genomic prediction of complex traits in Nellore cattle. For this, we tested different network architectures (1 to 4 neurons in the hidden layer), five strategies for selecting animals for the training set based on their EBV accuracy, and different relationship matrices used as network input [NN_G (G as input); NN_GD (G combined with D); and N_Guar (Guar as input)] for genomic prediction of body weight traits in Nellore cattle, relative to hierarchical linear Bayesian regression models (BayesB). The dEBV of 8652 animals genotyped for body weight at 120 days, 240 days, 365 days, and 455 days was used. Animals were divided into training and validation populations by the predicted EBV accuracy. All strategies were repeated five times, and the correlation between dEBV and predicted dEBV was used as the accuracy measure of the models tested. Th... (Complete abstract: click electronic access below) / Doctorate
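A minimal sketch of an ANN for genome-enabled prediction along the lines described above: a small network (1 to 4 hidden neurons) mapping rows of a relationship matrix (e.g. G) to deregressed EBVs, evaluated by the correlation between observed and predicted dEBV in the validation set. The activation, iteration count, and data handling are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_and_evaluate(G_train, debv_train, G_valid, debv_valid, hidden_neurons=2):
    """G_*: rows of the genomic relationship matrix used as network inputs."""
    net = MLPRegressor(hidden_layer_sizes=(hidden_neurons,),
                       activation="tanh", max_iter=5000, random_state=0)
    net.fit(G_train, debv_train)
    predicted = net.predict(G_valid)
    # Accuracy measure: correlation between observed dEBV and predicted dEBV
    accuracy = np.corrcoef(debv_valid, predicted)[0, 1]
    return net, accuracy
```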
939

Optimizing Shipping Container Damage Prediction and Maritime Vessel Service Time in Commercial Maritime Ports Through High Level Information Fusion

Panchapakesan, Ashwin 09 September 2019 (has links)
The overwhelming majority of global trade is executed over maritime infrastructure, and port-side optimization problems are significant given that commercial maritime ports are hubs at which sea trade routes and land/rail trade routes converge. Optimizing maritime operations therefore brings the promise of improvements with global impact. Major performance bottlenecks in the maritime trade process include the handling of insurance claims on shipping containers and vessel service time at port. The former has high input dimensionality and includes data pertaining to environmental and human attributes, as well as operational attributes such as the weight balance of a shipping container; it therefore lends itself to multiple classification methodologies, many of which are explored in this work. In order to compare their performance, a first-of-its-kind dataset was developed with carefully curated attributes. The performance of these methodologies was improved by exploring metalearning techniques that boost the collective performance of a subset of these classifiers. The latter problem is formulated as a schedule optimization and solved with a fuzzy system that controls port-side resource deployment, whose parameters are optimized by a multi-objective evolutionary algorithm that outperforms current industry practice (as mined from real-world data). This methodology has been applied to multiple ports across the globe to demonstrate its generalizability, and it improves upon current industry practice even with synthetically increased vessel traffic.
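A minimal sketch of the metalearning idea for the container-damage classification problem: combine a subset of base classifiers through a stacking meta-classifier. The choice of base learners, the meta-learner, and the feature set are illustrative assumptions, not the thesis' actual ensemble.

```python
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

def build_damage_classifier():
    base_learners = [
        ("random_forest", RandomForestClassifier(n_estimators=200)),
        ("svm", SVC(probability=True)),
    ]
    # The meta-learner combines the base classifiers' predictions into a final decision
    return StackingClassifier(estimators=base_learners,
                              final_estimator=LogisticRegression(),
                              cv=5)

# Usage sketch: clf = build_damage_classifier(); clf.fit(X_train, y_train)
```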
940

NON-CONTACT BASED PERSON’S SLEEPINESS DETECTION USING HEART RATE VARIABILITY

Danielsson, Fanny January 2019 (has links)
Today, many strategies for monitoring health status and well-being rely on measurement methods that are attached to the body, e.g. sensors or electrodes. These are often complicated and require personal assistance to use, because of advanced hardware and attachment issues. This paper proposes a new method that makes it possible for users to self-monitor their well-being and health status over time using a non-contact camera system. The camera system extracts physiological parameters (e.g. Heart Rate (HR), Respiration Rate (RR), Inter-Beat Interval (IBI)) based on facial color variations caused by blood circulation in the facial skin. By examining an individual's physiological parameters, one can extract measurements that can be used to monitor their well-being. The measurements used in this paper are heart rate variability (HRV) features calculated from the physiological parameter IBI. The HRV features included and tested in this paper are SDNN, RMSSD, NN50, and pNN50 from the time domain and VLF, LF, and LF/HF from the frequency domain. Machine learning classification is performed in order to classify an individual's sleepiness from the given features. The machine learning classification model that gave the best results, in terms of accuracy, was the Support Vector Machine (SVM). The best mean accuracy achieved for sleepiness detection with SVM was 84.16% for the training set and 81.67% for the test set. This paper has great potential for personal health care monitoring and can be further extended to detect other factors that could help a user monitor their well-being, such as stress level.
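A minimal sketch of the time-domain HRV feature extraction and SVM classification described above, assuming a sequence of inter-beat intervals (IBI, in milliseconds) has already been extracted by the camera system. The frequency-domain features (VLF, LF, LF/HF) are omitted here for brevity, and the kernel choice is an assumption.

```python
import numpy as np
from sklearn.svm import SVC

def time_domain_hrv_features(ibi_ms):
    """Compute SDNN, RMSSD, NN50, and pNN50 from an IBI series in milliseconds."""
    ibi = np.asarray(ibi_ms, dtype=float)
    diffs = np.diff(ibi)
    sdnn = np.std(ibi, ddof=1)                # overall IBI variability
    rmssd = np.sqrt(np.mean(diffs ** 2))      # short-term beat-to-beat variability
    nn50 = int(np.sum(np.abs(diffs) > 50))    # successive differences > 50 ms
    pnn50 = 100.0 * nn50 / len(diffs)
    return [sdnn, rmssd, nn50, pnn50]

def train_sleepiness_classifier(ibi_windows, labels):
    """ibi_windows: list of IBI series; labels: sleepy (1) vs. alert (0)."""
    X = np.array([time_domain_hrv_features(w) for w in ibi_windows])
    clf = SVC(kernel="rbf")
    clf.fit(X, labels)
    return clf
```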
