141

Silicon-based millimeter-wave front-end development for multi-gigabit wireless applications

Sarkar, Saikat 02 November 2007 (has links)
With rapid advances in semiconductor technologies and packaging schemes, wireless products have become more versatile, portable, inexpensive, and user friendly over the last few decades. However, the ever-growing consumer demand to share information efficiently at higher speeds requires higher data rates, increased functionality, lower cost, and greater reliability. The 60-GHz frequency band, with 7 GHz of license-free bandwidth, addresses such demands and promises low-cost multi-Gbps wireless transmission with a power budget on the order of 100 mW. This dissertation presents the systematic development of key building blocks and integrated 60-GHz-receiver solutions. Two different approaches are investigated and implemented: (1) a low-cost, SiGe-based, direct-conversion, low-power receiver front-end utilizing gain-boosting techniques in the front-end low-noise amplifier, and (2) a CMOS-based heterodyne receiver front-end suitable for a high-performance single-chip 60 GHz transceiver solution. The ASK receiver chip, implemented in 0.18 µm SiGe, presents a complete antenna-to-baseband multi-gigabit 60 GHz solution with the lowest reported power budget (25 pJ/bit) to date. The subharmonic direct-conversion front-end, also implemented in 0.18 µm SiGe, presents excellent conversion properties with a 4 GHz DSB RF bandwidth. The CMOS heterodyne implementation of the 60 GHz receiver front-end, targeted toward a robust, single-chip, high-performance, low-power, integrated 60 GHz transceiver solution, presents the most wideband receiver front-end reported to date. Finally, different multi-band and tunable millimeter-wave circuits are presented toward the future implementation of cognitive and multi-band millimeter-wave radios.
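As a quick sanity check of the figures quoted above, a minimal sketch in Python relates the reported 25 pJ/bit to the stated power budget on the order of 100 mW; the 4 Gbps data rate is an assumed illustrative value, since the abstract only states "multi-gigabit".

```python
# Back-of-the-envelope check of the 25 pJ/bit figure quoted above.
# The 4 Gbps data rate is an assumption for illustration only.
energy_per_bit = 25e-12      # J/bit (25 pJ/bit, from the abstract)
data_rate = 4e9              # bit/s (assumed multi-gigabit rate)

power = energy_per_bit * data_rate   # W
print(f"Receiver power at {data_rate/1e9:.0f} Gbps: {power*1e3:.0f} mW")
# -> 100 mW, consistent with the stated power budget
```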
142

[en] OIL AND GAS SUBSEA PROCESSING ANALYSIS: NEW PERSPECTIVES WITHOUT THE USE OF PLATFORMS / [pt] ANÁLISE DE PROCESSAMENTO SUBMARINO NA PRODUÇÃO DE ÓLEO E GÁS: AS NOVAS PERSPECTIVAS SEM O USO DE PLATAFORMAS

BRUNO FONTES RODRIGUES 21 September 2018 (has links)
Oil has undeniable importance in modern times. Along with other fossil fuels, it represents a considerable portion of society's energy matrix. However, it is a non-renewable energy source: as oil is produced in easily accessible regions, those sources become exhausted, creating the need to explore increasingly inhospitable regions. In this scenario, subsea processing in oil and gas production is highly relevant because it allows production in regions where it would otherwise not be possible. The objective of this study is to compare two subsea processing systems, one with a multiphase pump and the other with a subsea separator and a single-phase pump, in order to identify the application opportunities for each system and the current state of the art of each technology. The great leap for subsea processing will be a future of production without the use of platforms, a scenario already observed today in some gas fields. Gas fields were the first to allow production without a platform because they have enough energy to flow over long distances without artificial pressure boosting. With advances in subsea pump and separator technology, however, the future points to the application of this process in oil fields as well. This work provides a simplified, easily accessible tool for multiphase flow analysis that allows calculations without advanced, hard-to-access commercial software. Although simplified, it is very useful for quick calculations that do not require detailed modelling.
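To illustrate the kind of simplified multiphase-flow calculation the abstract refers to, a minimal sketch follows, assuming a homogeneous (no-slip) mixture model with a Darcy-Weisbach friction term. This is an illustrative stand-in, not the tool developed in the dissertation, and all input values are assumptions.

```python
import math

# Minimal homogeneous (no-slip) two-phase pressure-gradient sketch.
# All input values below are illustrative assumptions, not thesis data.

def pressure_gradient(q_liq, q_gas, rho_liq, rho_gas, mu_liq, mu_gas,
                      diameter, angle_rad, g=9.81):
    """Darcy-Weisbach pressure gradient [Pa/m] for a no-slip mixture."""
    area = math.pi * diameter**2 / 4.0
    lam = q_liq / (q_liq + q_gas)                    # no-slip liquid holdup
    rho_m = rho_liq * lam + rho_gas * (1 - lam)      # mixture density
    mu_m = mu_liq * lam + mu_gas * (1 - lam)         # mixture viscosity
    v_m = (q_liq + q_gas) / area                     # mixture velocity
    re = rho_m * v_m * diameter / mu_m               # Reynolds number
    f = 64 / re if re < 2300 else 0.316 / re**0.25   # laminar / Blasius
    friction = f * rho_m * v_m**2 / (2 * diameter)
    gravity = rho_m * g * math.sin(angle_rad)
    return friction + gravity

# Illustrative case: 0.2 m horizontal flowline, oil with some free gas.
dpdx = pressure_gradient(q_liq=0.05, q_gas=0.02, rho_liq=850.0, rho_gas=80.0,
                         mu_liq=5e-3, mu_gas=1.5e-5, diameter=0.2, angle_rad=0.0)
print(f"Pressure gradient: {dpdx:.1f} Pa/m")
```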
143

A cloud-based intelligent and energy efficient malware detection framework : a framework for cloud-based, energy efficient, and reliable malware detection in real-time based on training SVM, decision tree, and boosting using specified heuristics anomalies of portable executable files

Mirza, Qublai K. A. January 2017 (has links)
The continuing financial and related losses caused by cyber-attacks demonstrate the substantial growth of malware and its lethal proliferation techniques. Every successful malware attack highlights weaknesses in the defence mechanisms responsible for securing the targeted computer or network. Recent cyber-attacks reveal sophistication and intelligence in malware behaviour, with the ability to conceal code and operate autonomously within the system. Conventional detection mechanisms not only lack adequate malware detection capabilities but also consume a large amount of resources while scanning the system for malicious entities. Many recent reports have highlighted this issue, along with the challenges faced by alternative solutions and studies conducted in the same area. There is an unprecedented need for a resilient and autonomous solution that takes a proactive approach against modern malware with stealth behaviour. This thesis proposes a multi-aspect solution comprising an intelligent malware detection framework and an energy-efficient hosting model. The malware detection framework combines conventional and novel detection techniques. It incorporates comprehensive feature heuristics of files, generated by a bespoke static feature extraction tool, which are used to train the machine learning algorithms Support Vector Machine, Decision Tree, and Boosting to differentiate between clean and malicious files. Feature heuristics and machine learning are combined to form a two-factor detection mechanism. The thesis also presents a cloud-based, energy-efficient, and scalable hosting model, which combines multiple Amazon Web Services infrastructure components to host the malware detection framework. The hosting model follows a client-server architecture, where the client is a lightweight service running on the host machine and the server runs in the cloud. The proposed framework and the hosting model were evaluated individually and in combination through specifically designed experiments using separate repositories of clean and malicious files. The experiments were designed to evaluate malware detection capability and energy efficiency during operation. The proposed malware detection framework and hosting model showed significant improvement in malware detection while consuming very few CPU resources during operation.
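A minimal sketch of the two-factor idea described above: static file heuristics feeding SVM, Decision Tree, and Boosting classifiers. The features and data are synthetic placeholders for the bespoke static feature extraction tool and the real clean/malicious repositories used in the thesis.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Placeholder "heuristic" features (e.g. section entropy, import count, size).
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # 1 = malicious (synthetic label)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "SVM": SVC(probability=True),
    "Decision Tree": DecisionTreeClassifier(),
    "Boosting": AdaBoostClassifier(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", round(model.score(X_te, y_te), 3))
```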
144

Cost-sensitive boosting : a unified approach

Nikolaou, Nikolaos January 2016 (has links)
In this thesis we provide a unifying framework for two decades of work in an area of Machine Learning known as cost-sensitive Boosting algorithms. This area is concerned with the fact that most real-world prediction problems are asymmetric, in the sense that different types of errors incur different costs. Adaptive Boosting (AdaBoost) is one of the most well-studied and widely used algorithms in Machine Learning, with rich theoretical depth as well as practical uptake across numerous industries. However, its inability to handle asymmetric tasks has been the subject of much criticism, and numerous cost-sensitive modifications of the original algorithm have been proposed, each with its own motivations and its own claims to superiority. Through a thorough analysis of the literature from 1997 to 2016, we find 15 distinct cost-sensitive Boosting variants, discounting minor variations. We critique the literature using four powerful theoretical frameworks: Bayesian decision theory, the functional gradient descent view, margin theory, and probabilistic modelling. From each framework we derive a set of properties which must be obeyed by boosting algorithms. We find that only 3 of the published AdaBoost variants are consistent with the rules of all the frameworks, and even they require their outputs to be calibrated to achieve this. Experiments on 18 datasets, across 21 degrees of cost asymmetry, all support the hypothesis: once calibrated, the three variants perform equivalently, outperforming all others. Our final recommendation, based on theoretical soundness, simplicity, flexibility and performance, is to use the original AdaBoost algorithm, albeit with a shifted decision threshold and calibrated probability estimates. The conclusion is that novel cost-sensitive boosting algorithms are unnecessary if proper calibration is applied to the original.
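A minimal sketch of the final recommendation, assuming scikit-learn: plain AdaBoost with calibrated probability estimates and a decision threshold shifted according to the cost ratio. The costs below are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Calibrate AdaBoost's scores into probability estimates.
clf = CalibratedClassifierCV(AdaBoostClassifier(), method="isotonic", cv=5)
clf.fit(X_tr, y_tr)

# Bayes-optimal threshold for asymmetric costs: predict positive when
# p(y=1|x) > c_FP / (c_FP + c_FN).
c_fp, c_fn = 1.0, 5.0            # assumed costs of false positive / false negative
threshold = c_fp / (c_fp + c_fn)
p = clf.predict_proba(X_te)[:, 1]
y_hat = (p > threshold).astype(int)
print("Threshold:", threshold, "predicted positive rate:", round(y_hat.mean(), 3))
```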
145

Novas abordagens para configurações automáticas dos parâmetros de controle em comitês de classificadores

Nascimento, Diego Silveira Costa 05 December 2014 (has links)
Significant advances have emerged in research related to classifier committees. The models that receive the most attention in the literature are those of a static nature, also known as ensembles. Among the algorithms in this class, the methods that use resampling of the training data stand out: Bagging, Boosting and MultiBoosting. Choosing the architecture and the base components to be recruited is not a trivial task, and it has motivated new proposals that attempt to build such models automatically, many of them based on optimization methods. Many of these contributions have not shown satisfactory results when applied to more complex problems or problems of a different nature. In contrast, this thesis proposes three new hybrid approaches for the automatic construction of classifier ensembles: Increment of Diversity, Adaptive Fitness Function, and Meta-learning, for the development of systems that automatically configure the control parameters of ensemble models. In the first approach, a solution is proposed that combines different diversity techniques in a single conceptual framework, in an attempt to achieve higher levels of diversity in the ensemble and, with it, better performance. In the second approach, a genetic algorithm is used for the automatic design of ensembles; the contribution is to combine filter and wrapper techniques adaptively in order to evolve a better distribution of the feature space presented to the ensemble components. Finally, the last approach proposes a new technique for recommending ensemble architectures and base components using traditional and multi-label meta-learning. In general, the results are encouraging and support the thesis that hybrid tools are a powerful solution for building effective ensembles for pattern classification problems.
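To make the diversity idea concrete, a minimal sketch follows that fits Bagging and AdaBoost committees and measures their average pairwise disagreement, a common diversity proxy. The data, parameters, and the disagreement measure itself are illustrative choices, not the hybrid approaches developed in the thesis.

```python
import numpy as np
from itertools import combinations
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1500, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

def mean_disagreement(ensemble, X):
    """Average pairwise disagreement between member predictions."""
    preds = [est.predict(X) for est in ensemble.estimators_]
    pairs = list(combinations(range(len(preds)), 2))
    return np.mean([np.mean(preds[i] != preds[j]) for i, j in pairs])

for name, ens in [("Bagging", BaggingClassifier(n_estimators=25, random_state=1)),
                  ("AdaBoost", AdaBoostClassifier(n_estimators=25, random_state=1))]:
    ens.fit(X_tr, y_tr)
    print(name, "accuracy:", round(ens.score(X_te, y_te), 3),
          "diversity:", round(mean_disagreement(ens, X_te), 3))
```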
146

Técnicas de machine learning aplicadas na recuperação de crédito do mercado brasileiro

Forti, Melissa 08 August 2018 (has links)
The need to know the customer has always been a market differentiator, and in recent years we have seen exponential growth in the information and techniques available for evaluation at every phase of the credit cycle, from prospecting to debt recovery. In this context, companies are increasingly investing in Machine Learning methods so that they can extract as much information as possible and thereby run more assertive and profitable processes. However, these models are still viewed with some distrust in the financial environment. Given this context, the objective of this work was to apply the Machine Learning techniques Random Forest, Support Vector Machine and Gradient Boosting to a real debt-collection database in order to identify the clients most likely to repay their debts (Collection Score), and to compare the accuracy and interpretability of these models with the traditional logistic regression methodology. The main contribution of this work is the comparison of the techniques in a credit-recovery setting, considering their main characteristics, advantages and disadvantages.
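A minimal sketch of the kind of comparison described above, assuming scikit-learn and a synthetic stand-in for the collection database: Random Forest, SVM and Gradient Boosting are scored against logistic regression by cross-validated AUC.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic, imbalanced stand-in for the collection data (1 = debt recovered).
X, y = make_classification(n_samples=3000, n_features=15, weights=[0.7, 0.3],
                           random_state=42)
models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=42),
    "SVM": SVC(probability=True, random_state=42),
    "Gradient Boosting": GradientBoostingClassifier(random_state=42),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```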
147

Strategies for Combining Tree-Based Ensemble Models

Zhang, Yi 01 January 2017 (has links)
Ensemble models have proved effective in a variety of classification tasks. These models combine the predictions of several base models to achieve higher out-of-sample classification accuracy than the base models. Base models are typically trained using different subsets of training examples and input features. Ensemble classifiers are particularly effective when their constituent base models are diverse in terms of their prediction accuracy in different regions of the feature space. This dissertation investigated methods for combining ensemble models, treating them as base models. The goal is to develop a strategy for combining ensemble classifiers that results in higher classification accuracy than the constituent ensemble models. Three of the best-performing tree-based ensemble methods (random forest, extremely randomized trees, and eXtreme gradient boosting) were used to generate a set of base models. Outputs from classifiers generated by these methods were then combined to create an ensemble classifier. This dissertation systematically investigated methods for (1) selecting a set of diverse base models, and (2) combining the selected base models. The methods were evaluated using public-domain data sets which have been extensively used for benchmarking classification models. The research established that applying random forest as the final ensemble method, integrating the selected base models together with factor scores from multiple correspondence analysis, was the best ensemble approach.
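A minimal sketch of the stacking idea above, assuming scikit-learn: the three tree-based ensembles combined by a random-forest final estimator. GradientBoostingClassifier stands in for XGBoost here to avoid an extra dependency, and the multiple-correspondence-analysis factor scores used in the dissertation are not reproduced.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier, StackingClassifier)
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=7)

base_models = [
    ("rf", RandomForestClassifier(n_estimators=200, random_state=7)),
    ("xt", ExtraTreesClassifier(n_estimators=200, random_state=7)),
    ("gb", GradientBoostingClassifier(random_state=7)),   # stand-in for XGBoost
]
stack = StackingClassifier(estimators=base_models,
                           final_estimator=RandomForestClassifier(random_state=7),
                           stack_method="predict_proba", cv=5)
stack.fit(X_tr, y_tr)
print("Stacked accuracy:", round(stack.score(X_te, y_te), 3))
```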
148

Analog and Digital Approaches to UWB Narrowband Interference Cancellation

Omid, Abedi January 2012 (has links)
Ultra-wideband (UWB) is an extremely promising wireless technology for researchers and industry. Among its most interesting features are its high data rate and its robustness to frequency-selective fading. Beside such advantages, however, UWB system performance is strongly affected by narrowband interference (NBI), undesired UWB signals, and tone/multi-tone noise. For this reason, NBI cancellation remains a research challenge in improving system performance against receiver complexity, power consumption, linearity, and other constraints. In this work, the two major receiver sections, the analog (radio-frequency, RF) section and the digital (digital signal processing, DSP) section, were considered, and new techniques were proposed to reduce circuit complexity and power consumption while improving signal parameters. In the RF section, key design parameters of different multiband UWB low-noise amplifiers were investigated, such as circuit configuration, input matching, and desired/undesired frequency-band filtering, highlighting the most suitable filtering package for efficient UWB NBI cancellation. In the DSP section, owing to the pulsed transmitter signals, different issues such as modulation type and level, pulse variety and shape, and coloured-noise/tone-noise assumptions were addressed for efficient NBI cancellation. A comparison was performed in terms of bit-error rate, signal-to-interference ratio, signal-to-noise ratio, and channel capacity to highlight the most suitable parameters for efficient DSP design. The optimum number of filters that allows the filter bandwidth to be reduced, following the required low sampling rate and thus improving the system bit-error rate, was also investigated.
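To illustrate why NBI cancellation matters at the system level, a short sketch follows that evaluates the Shannon capacity C = B·log2(1 + S/(N + I)) with and without a strong narrowband interferer. The bandwidth and power levels are assumed illustrative values, not measurements from this work.

```python
import math

# Shannon capacity under narrowband interference. All values are assumptions.
bandwidth = 500e6          # Hz, one UWB sub-band
signal = 1e-9              # W, received UWB signal power (assumed)
noise = 2e-10              # W, thermal noise in the band (assumed)

for interference in (0.0, 5e-9):   # without / with a strong narrowband interferer
    capacity = bandwidth * math.log2(1 + signal / (noise + interference))
    print(f"I = {interference:.1e} W -> capacity ~ {capacity/1e6:.0f} Mbit/s")
```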
149

Analys av prestations- och prediktionsvariabler inom fotboll / Analysis of performance and prediction variables in football

Ulriksson, Marcus, Armaki, Shahin January 2017 (has links)
The thesis aims to explain how different variables describing the course of play in a football match affect the final result. These variables are divided into performance variables and quality variables. The performance variables are based on performance indicators inspired by Hughes and Bartlett (2002); the quality variables describe how good the two teams are. To achieve this aim, different classification models are used, built on both the performance variables and the quality variables. First, the most important performance indicators were investigated: the best model classified about 60% of matches correctly, and clearances and shots on target were the most important performance variables. Then the best prediction variables were examined: the best model classified the correct final result in about 88% of matches. Based on what the authors considered to be the most important prediction variables, a prediction model with fewer variables was created; it classified about 86% of matches correctly. This prediction model was built on player ratings, the odds of a draw, and the referee.
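A minimal sketch of the kind of classification model described above, with synthetic data and feature names that mirror the variables mentioned (clearance and shots-on-target differences, player ratings, draw odds); it is an illustration only, not the models or data used in the thesis.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "clearances_diff": rng.normal(0, 5, n),        # hypothetical feature names
    "shots_on_target_diff": rng.normal(0, 3, n),
    "player_rating_diff": rng.normal(0, 1, n),
    "draw_odds": rng.uniform(2.8, 4.5, n),
})
# Synthetic label: 1 = home win, driven mostly by rating and shot difference.
y = (0.8 * df["player_rating_diff"] + 0.3 * df["shots_on_target_diff"]
     + rng.normal(0, 1, n) > 0).astype(int)

model = LogisticRegression(max_iter=1000)
print("CV accuracy:", round(cross_val_score(model, df, y, cv=5).mean(), 3))
```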
150

Měnič s tranzistory GaN pro elektrický kompresor / Inverter for electric supercharger with GaN transistors

Galia, Jan January 2021 (has links)
This master's thesis deals with the design and realization of a functional sample of a power inverter for an electric compressor used in hybrid cars. The electric compressor powered by the inverter is the E-compressor by Garrett Advancing Motion. The inverter uses modern high-electron-mobility transistors (HEMTs) based on gallium nitride (GaN). The purpose of this thesis is to determine whether GaN transistors can be used in e-boosting applications.
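As a rough illustration of why GaN HEMTs suit such an inverter, a short sketch estimating per-device conduction and hard-switching losses with standard first-order formulas; every numeric value is an assumption, not a parameter of the thesis design or the Garrett E-compressor.

```python
# First-order loss estimate for one switch. Low R_DS(on) and fast edges keep
# conduction and switching losses small at high switching frequency.
# All numbers are assumptions for illustration.
r_ds_on = 5e-3         # ohm, on-resistance (assumed GaN HEMT)
i_rms = 30.0           # A, RMS device current (assumed)
v_dc = 48.0            # V, DC-link voltage (assumed)
i_sw = 40.0            # A, switched current (assumed)
t_r, t_f = 5e-9, 5e-9  # s, rise/fall times (assumed)
f_sw = 100e3           # Hz, switching frequency (assumed)

p_cond = i_rms**2 * r_ds_on                     # conduction loss
p_sw = 0.5 * v_dc * i_sw * (t_r + t_f) * f_sw   # hard-switching loss
print(f"Conduction: {p_cond:.1f} W, switching: {p_sw:.2f} W")
```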
