11. Antenna subset modulation for secure millimeter-wave wireless communication. Valliappan, Nachiappan, 10 July 2012.
The small carrier wavelength at millimeter-wave (mm-Wave) frequencies allows the possibility of implementing a large number of antennas on a single chip. This work uses the potential of large antenna arrays at these frequencies to develop a low-complexity directional modulation technique, Antenna Subset Modulation (ASM), for point-to-point secure wireless communication. The main idea in ASM is to communicate information by modulating the far-field radiation pattern of the array at the symbol rate. By driving only a subset of the antennas and changing the subset used for each symbol transmission, the far-field pattern is modulated. Two techniques for implementing antenna subset selection are proposed. The first is a simple technique in which the antenna subset is selected at random for every symbol transmission. While randomly switching antenna subsets does not affect the symbol modulation for a desired receiver along the main-lobe direction, it effectively randomizes the amplitude and phase of the received symbol for an eavesdropper along a sidelobe. Using a simplified statistical model for random antenna subset selection, an expression for the average symbol error rate (SER) is derived as a function of the observation angle for linear arrays. To overcome the problem of a large peak sidelobe level in random antenna subset switching, an optimized antenna subset selection procedure based on simulated annealing is then discussed. Finally, numerical results comparing the average SER performance of the proposed techniques against conventional array transmission are presented. While both methods produce a narrower information beamwidth in the desired direction, the optimized antenna subset selection technique is shown to offer better security and array performance.
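
For intuition, the sketch below (not the thesis code; the array size, subset size, angles and the 8-PSK stream are illustrative assumptions) simulates random subset switching on a uniform linear array: the receiver at the steering angle sees a stable constellation, while a sidelobe observer sees a randomized one.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 32, 24                  # total antennas, active subset size (illustrative)
d = 0.5                        # element spacing in wavelengths
theta0 = 0.0                   # desired receiver at broadside
theta_eve = np.deg2rad(25.0)   # eavesdropper along a sidelobe

def array_factor(active, theta):
    """Far-field response of the active subset, phase-steered toward theta0."""
    n = np.flatnonzero(active)
    phase = 2 * np.pi * d * n * (np.sin(theta) - np.sin(theta0))
    return np.exp(1j * phase).sum() / M

symbols = np.exp(1j * np.pi / 4 * rng.integers(0, 8, size=1000))  # 8-PSK stream
rx_main, rx_eve = [], []
for s in symbols:
    active = np.zeros(N, dtype=bool)
    active[rng.choice(N, size=M, replace=False)] = True  # new subset per symbol
    rx_main.append(s * array_factor(active, theta0))
    rx_eve.append(s * array_factor(active, theta_eve))

# Along the main lobe the gain is constant, so the constellation is clean;
# along the sidelobe, subset switching randomizes amplitude and phase.
print("main-lobe gain std:", np.std(np.abs(rx_main)))
print("sidelobe gain std :", np.std(np.abs(rx_eve)))
```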

12. On Universal Cycles for New Classes of Combinatorial Structures. Blanca, Antonio; Godbole, Anant P., 01 December 2011.
A universal cycle (u-cycle) is a compact listing of a collection of combinatorial objects. In this paper, we use natural encodings of these objects to show the existence of u-cycles for collections of subsets, restricted multisets, and lattice paths. For subsets, we show that a u-cycle exists for the κ-subsets of an n-set if we let κ vary in a nonzero-length interval. We use this result to construct a "covering" of length $(1+o(1))\binom{n}{\kappa}$ of all subsets of [n] of size exactly κ, with a specific formula for the o(1) term. We also show that u-cycles exist for all words of length n over some alphabet Σ which contain all characters from R ⊂ Σ. Using this result, we provide u-cycles for encodings of Sperner families of size 2 and proper chains of subsets.
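
The defining property is easy to state computationally. The sketch below (illustrative only, not one of the paper's constructions) checks whether a cyclic word is a u-cycle for a collection of objects under a given encoding, verified on a de Bruijn sequence, the classical u-cycle for binary words:

```python
from itertools import product

def windows(cycle, n):
    """All cyclic windows of length n from the string `cycle`."""
    ext = cycle + cycle[:n - 1]
    return [ext[i:i + n] for i in range(len(cycle))]

def is_ucycle(cycle, objects, encode=lambda w: w):
    """True if the cyclic windows, after encoding, list `objects` exactly once each."""
    n = len(next(iter(objects)))
    seen = [encode(w) for w in windows(cycle, n)]
    return len(seen) == len(set(seen)) == len(objects) and set(seen) == set(objects)

# The de Bruijn sequence B(2,3) is a u-cycle for all binary words of length 3.
words = {''.join(p) for p in product('01', repeat=3)}
print(is_ucycle('00010111', words))  # True
```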

13. Studies on Parallel Solvers Based on Bisection and Inverse Iteration for Subsets of Eigenpairs and Singular Triplets. Ishigami, Hiroyuki, 23 March 2016.
The copyright of Chapter 5 (pp. 31-40 of the text) and of the paper it is based on belongs to the IEEE; in accordance with IEEE's rules, the source is cited on p. 79 and a copyright notice is attached. The copyright of all figures on pp. 39-40 belongs to the IEEE, so a copyright notice is attached to each of these figures. / Kyoto University / 0048 / New-system doctoral course / Doctor of Informatics / Kō No. 19858 / Jōhaku No. 609 / Shinsei||Jō||106 (University Library) / 32894 / Department of Applied Mathematics and Physics, Graduate School of Informatics, Kyoto University / Examiners: Prof. Yoshimasa Nakamura (chief), Prof. Ken Umeno, Prof. Hiroshi Nakashima / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM

14. Using maximal feasible subset of constraints to accelerate a logic-based Benders decomposition scheme for a multiprocessor scheduling problem. Grgic, Alexander; Andersson, Filip, January 2022.
Logic-based Benders decomposition (LBBD) is a strategy for solving discrete optimisation problems. In LBBD, the optimisation problem is divided into a master problem and a subproblem, and each part is solved separately. LBBD methods that combine mixed-integer programming and constraint programming have been successfully applied to solve large-scale scheduling and resource allocation problems. Such combinations typically solve an assignment-type master problem and a scheduling-type subproblem. However, a challenge with LBBD methods that have feasibility subproblems is that they do not provide a feasible solution until an optimal solution is found. In this thesis, we show that feasible solutions can be obtained by finding and combining feasible parts of an infeasible master problem assignment. We use these insights to develop an acceleration technique for LBBD that solves a series of subproblems, following algorithms for constructing a maximal feasible subset of constraints (MaFS). Using a multiprocessor scheduling problem as a benchmark, we study the computational impact of this technique. We evaluate three variants of the LBBD scheme: the first uses MaFS, the second uses irreducible infeasible subsets of constraints (IIS) and the third combines MaFS with IIS. Computational tests were performed on an instance set of multiprocessor scheduling problems. In total, 83 instances were tested, with between 2,794 and 10,661 tasks each. The results showed that when our acceleration technique is applied in the decomposition scheme, the pessimistic bounds are strong but convergence is slow. The decomposition scheme combining our acceleration technique with the IIS-based acceleration technique showed potential to accelerate the method.
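
A minimal sketch of the core subroutine, under the assumption that feasibility checking is available as an oracle (the thesis solves a constraint-programming subproblem for this; the greedy addition filter below is one standard way to build a maximal feasible subset and is not necessarily the exact variant used):

```python
def maximal_feasible_subset(constraints, is_feasible):
    """Greedily grow a feasible subset of constraints until no more can be added.

    `is_feasible(subset)` is a user-supplied oracle (e.g. a solver call that
    checks whether the subproblem restricted to `subset` is feasible). The
    result is maximal (adding any excluded constraint breaks feasibility),
    but not necessarily of maximum cardinality.
    """
    kept = []
    for c in constraints:
        if is_feasible(kept + [c]):
            kept.append(c)
    return kept

# Toy oracle: a "constraint" is a task's resource demand, feasible while the
# total demand fits a capacity of 10 (a stand-in for the real subproblem).
demands = [4, 5, 3, 2, 6]
print(maximal_feasible_subset(demands, lambda s: sum(s) <= 10))  # [4, 5]
```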

15. Finding A Subset Of Non-defective Items From A Large Population: Fundamental Limits And Efficient Algorithms. Sharma, Abhay, 05 1900 (PDF).
Consider a large population containing a small number of defective items. A commonly encountered goal is to identify the defective items, for example, to isolate them. In the classical non-adaptive group testing (NAGT) approach, one groups the items into subsets, or pools, and runs tests for the presence of a defective item on each pool. Using the outcomes of the tests, a fundamental goal of group testing is to reliably identify the complete set of defective items with as few tests as possible. In contrast, this thesis studies a non-defective subset identification problem, where the primary goal is to identify a "subset" of "non-defective" items given the test outcomes. The main contributions of this thesis are:

We derive upper and lower bounds on the number of non-adaptive group tests required to identify a given number of non-defective items with arbitrarily small probability of incorrect identification as the population size goes to infinity. We show that an impressive reduction in the number of tests is achievable compared to the approach of first identifying all the defective items and then picking the required number of non-defective items from the complement set. For example, in the asymptotic regime with the population size N → ∞, to identify L non-defective items out of a population containing K defective items, when the tests are reliable, our results show that $O\left(\frac{K \log K}{N} L\right)$ measurements are sufficient when L ≪ N − K and K is fixed. In contrast, the necessary number of tests using the conventional approach grows with N as $O\left(K \log K \log \frac{N}{K}\right)$. Our results are derived using a general sparse signal model, by virtue of which they are also applicable to other important sparse-signal-based applications such as compressive sensing.

We present a bouquet of computationally efficient and analytically tractable non-defective subset recovery algorithms. By analyzing the probability of error of the algorithms, we obtain bounds on the number of tests required for non-defective subset recovery with arbitrarily small probability of error. Comparing with the information-theoretic lower bounds, we show that the upper bounds on the number of tests are order-wise tight up to a log(K) factor, where K is the number of defective items. Our analysis accounts for the impact of both additive noise (false positives) and dilution noise (false negatives). We also provide extensive simulation results that compare the relative performance of the different algorithms and provide further insights into their practical utility. The proposed algorithms significantly outperform the straightforward approaches of testing items one by one, and of first identifying the defective set and then choosing the non-defective items from the complement set, in terms of the number of measurements required to ensure a given success rate.
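
As a toy illustration of why identifying non-defective items can be much cheaper than recovering the defective set, the following sketch (not from the thesis; all parameters are illustrative, and it assumes noiseless tests) simulates non-adaptive group testing and certifies as non-defective every item that appears in at least one negative pool:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, L, T = 500, 10, 50, 120   # population, defectives, wanted non-defectives, tests

defective = np.zeros(N, dtype=bool)
defective[rng.choice(N, size=K, replace=False)] = True

# Random pooling: each item joins each test independently with probability p.
p = 1.0 / K
A = rng.random((T, N)) < p
outcomes = (A & defective).any(axis=1)   # noiseless OR-channel test outcomes

# Any item that appears in at least one negative (defective-free) test is
# certainly non-defective; report up to L of them.
in_negative_test = A[~outcomes].any(axis=0)
found = np.flatnonzero(in_negative_test)[:L]
print(len(found), "declared;", defective[found].sum(), "errors")  # errors == 0
```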

We investigate the use of adaptive group testing for finding a spectrum hole of a specified bandwidth in a given wideband of interest. We propose a group-testing-based spectrum hole search algorithm that exploits sparsity in the primary spectral occupancy by testing a group of adjacent sub-bands in a single test. This is enabled by a simple and easily implementable sub-Nyquist sampling scheme for signal acquisition by the cognitive radios. Energy-based hypothesis tests are used to provide an occupancy decision over the group of sub-bands, and this forms the basis of the proposed algorithm to find contiguous spectrum holes of a specified bandwidth. We extend this framework to a multistage sensing algorithm that can be employed in a variety of spectrum sensing scenarios, including non-contiguous spectrum hole search. Our analysis allows one to identify the sparsity and SNR regimes where group testing can lead to significantly lower detection delays compared to a conventional bin-by-bin energy detection scheme. We illustrate the performance of the proposed algorithms via Monte Carlo simulations.
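
To make the group-testing idea concrete, here is an illustrative sketch (the occupancy model, SNR, threshold and band sizes are all assumptions, and the simple recursive splitting can miss holes that straddle a split point; the thesis's algorithm and analysis are more careful):

```python
import numpy as np

rng = np.random.default_rng(2)
B = 64                      # number of sub-bands in the wideband of interest
occupied = np.zeros(B, dtype=bool)
occupied[rng.choice(B, size=6, replace=False)] = True   # sparse primary users

def group_test(lo, hi, snr=10.0, samples=200):
    """Energy detector over sub-bands [lo, hi): positive if any is occupied."""
    noise = rng.normal(size=samples)
    signal = np.sqrt(snr) * rng.normal(size=samples) if occupied[lo:hi].any() else 0.0
    energy = np.mean((noise + signal) ** 2)
    return energy > 2.0     # threshold ~ noise power plus a margin (illustrative)

def find_hole(lo, hi, width):
    """Recursively split the band; return a clean stretch of `width` sub-bands."""
    if hi - lo < width:
        return None
    if not group_test(lo, hi):
        return (lo, lo + width)          # whole group is clean: hole found
    mid = (lo + hi) // 2
    return find_hole(lo, mid, width) or find_hole(mid, hi, width)

print(find_hole(0, B, width=8), "| truth:", np.flatnonzero(~occupied)[:10])
```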

16. Verifiering av ERTMS-signalprojektering: Tillämpning mot Estland / Verification of ERTMS signalling design: Application to Estonia. Haidari, Bahaman; Holm, Isak, January 2015.
Within the EU there is a regulatory framework that strives for interoperability: it should be easy to travel and do business across all EU member states without having to change trains or drivers. This report specifies the laws and regulations issued by the European Commission and UNISIG, together with the Swedish national rules, that govern ERTMS signalling design for trackside equipment. Today there is a need to make signalling design more efficient, since several projects are ongoing or upcoming. To this end, a checklist for signalling design is developed that clearly shows, for example, where a balise should be placed in relation to a signal point board, by referring to the correct subchapter of the relevant document. The aim of the thesis is to make it easier to verify ERTMS signalling designs on entirely new lines, in Sweden and internationally; this is not yet standardised. To make this possible, the TSI and SUBSET documents (the documents that govern and regulate ERTMS signalling) and other relevant material were studied, and several completed signalling designs on the Ådal Line (Ådalsbanan) were analysed. The report presents an overview of signalling design: how its different phases proceed, how they are carried out, and why they matter. The work resulted in a compilation and analysis of the most relevant facts found in the documents examined, together with a checklist for verifying ERTMS signalling designs.

17. Chebyshev Subsets in Smooth Normed Linear Spaces. Svrcek, Frank J., 12 1900.
This paper is a study of the relation between smoothness of the norm on a normed linear space and the property that every Chebyshev subset is convex. Every finite-dimensional normed linear space with a smooth norm has the property that every Chebyshev subset is convex. In the second chapter, two properties of the norm, uniform Gâteaux differentiability and uniform Fréchet differentiability (the latter implying the former), are introduced and shown to be equivalent to smoothness of the norm in spaces of finite dimension. In the third chapter it is shown that in every reflexive normed linear space with a uniformly Gâteaux differentiable norm, every weakly closed Chebyshev subset with non-empty weak interior that is norm-dense in the subset is convex.
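
For reference, the two standard notions underlying the abstract can be stated as follows (a LaTeX sketch using textbook formulations, not quoted from the paper):

```latex
% Chebyshev set: best approximations from M exist and are unique.
M \subseteq X \ \text{is Chebyshev} \iff
  \forall x \in X \;\ \exists!\, m \in M :\ \|x - m\| = \inf_{y \in M} \|x - y\| .

% Smooth norm: G\^{a}teaux differentiable away from the origin
% (equivalently, every unit vector has a unique supporting functional).
\text{the norm is smooth} \iff
  \forall x \neq 0,\ \forall y \in X :\
  \lim_{t \to 0} \frac{\|x + t y\| - \|x\|}{t} \ \text{exists}.
```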

18. The Axiom of Determinacy. Stanton, Samantha, 04 May 2010.
Working within the Zermelo-Fraenkel axioms of set theory, we introduce two important mutually contradictory axioms: the Axiom of Choice and the Axiom of Determinacy. We explore perfect Polish spaces and games on these spaces to see that the Axiom of Determinacy is inconsistent with the Axiom of Choice. We see some of the major consequences of accepting the Axiom of Determinacy, and how some of these results change when the Axiom of Choice is accepted instead. We consider two-player games of perfect information, wherein we see some powerful results having to do with properties of the real numbers. We use a game to illustrate a weak proof of the continuum hypothesis.
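
The game underlying the axiom can be stated compactly (standard Gale-Stewart definitions, not quoted from the thesis):

```latex
% The Gale--Stewart game G(A) for A \subseteq \omega^\omega:
% players I and II alternate choosing natural numbers,
%   I  : a_0 \qquad a_2 \qquad a_4 \ \cdots
%   II : \quad a_1 \qquad a_3 \ \cdots
% producing the play x = (a_0, a_1, a_2, \dots) \in \omega^\omega.
% Player I wins iff x \in A; G(A) is determined if either player has a
% winning strategy (a function from finite positions to next moves).
\mathrm{AD} :\quad \forall A \subseteq \omega^{\omega},\ G(A)\ \text{is determined}.
```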

19. "Abordagem genética para seleção de um conjunto reduzido de características para construção de ensembles de redes neurais: aplicação à língua eletrônica" / A genetic approach to feature subset selection for construction of neural network ensembles: an application to gustative sensors. Ferreira, Ednaldo José, 10 August 2005.
Irrelevant features, present in databases from many domains, degrade the prediction accuracy of classifiers induced by machine learning algorithms. Databases generated by an electronic tongue are typical examples in which a large number of irrelevant and redundant features harms the accuracy of the induced classifiers. There are basically two approaches to deal with this problem: feature subset selection and ensembles of classifiers. A good ensemble is composed of accurate and diverse classifiers, and an effective way to construct one is through feature selection. Ensemble feature selection has the additional objective of finding feature subsets that promote accuracy and prediction diversity among the classifiers in the ensemble. Genetic algorithms are promising techniques for ensemble feature selection. However, genetic search, like other search strategies, usually aims only at constructing the ensemble, allowing all features (relevant, irrelevant and redundant) to be used. This work proposes an approach based on genetic algorithms to construct ensembles of artificial neural networks that use a reduced subset of the available features. To improve ensemble accuracy, two approaches to training the neural networks were used: the first based on early stopping with the back-propagation algorithm, and the second based on multi-objective optimization. The results confirm the effectiveness of the proposed algorithm for constructing accurate ensembles of neural networks, as well as its efficiency in reducing the full feature set, showing that the proposed algorithm is able to build an ensemble from a reduced subset of features.
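
A compact sketch of the genetic search over feature bitmasks (illustrative only: the data is synthetic, logistic regression stands in for the thesis's neural networks to keep the example fast, and the population size, generations and mutation rate are arbitrary choices):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X, y = make_classification(n_samples=300, n_features=30, n_informative=6,
                           n_redundant=10, random_state=3)

def fitness(mask):
    """Cross-validated accuracy of a classifier trained on the selected features."""
    if not mask.any():
        return 0.0
    return cross_val_score(LogisticRegression(max_iter=500),
                           X[:, mask], y, cv=3).mean()

pop = rng.random((20, X.shape[1])) < 0.5            # population of feature bitmasks
for _ in range(15):                                 # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]         # truncation selection
    cut = rng.integers(1, X.shape[1], size=10)      # one-point crossover
    kids = np.array([np.r_[parents[i, :c], parents[(i + 1) % 10, c:]]
                     for i, c in enumerate(cut)])
    kids ^= rng.random(kids.shape) < 0.02           # bit-flip mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(m) for m in pop])]
# The fittest masks form the reduced feature sets; training one model per
# mask (neural networks, in the thesis) yields a diverse, accurate ensemble.
print("selected", best.sum(), "of", X.shape[1], "features")
```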

20. Seleção de atributos relevantes para aprendizado de máquina utilizando a abordagem de Rough Sets / Machine learning feature subset selection using the Rough Sets approach. Pila, Adriano Donizete, 25 May 2001.
In Supervised Machine Learning (ML), an induction algorithm is typically presented with a set of training examples, where each example is described by a vector of feature values and a class label, and its task is to induce a classifier that will be useful in classifying new cases. In general, inductive-learning algorithms rely on the provided training examples to build their classifiers; an inadequate representation of the examples, as well as inconsistencies in them, can make the learning task hard. One of the main problems in ML is Feature Subset Selection (FSS): the learning algorithm is faced with the problem of selecting some subset of features upon which to focus its attention, while ignoring the rest. There are three main reasons that justify doing FSS. The first is that most computationally feasible ML algorithms do not work well in the presence of many features. The second is that FSS may improve comprehensibility when fewer features are used to induce symbolic concepts. The third is the high cost of collecting data in some domains. Basically, there are three approaches in ML for FSS: embedded, filter and wrapper. The Rough Sets theory (RS) is a mathematical approach developed in the early 1980s whose central construct, the reduct, is treated in this work. According to this approach, reducts are minimal subsets of features capable of preserving the same concept description related to the entire set of features. In this work we focus on the filter approach for FSS, using as filters the reducts obtained through the RS approach. We describe a series of FSS experiments on nine natural datasets using RS reducts as well as other filters, and afterwards submit the selected features to two symbolic ML algorithms. For each dataset, various measures are taken to compare inducer performance, such as the number of selected features, accuracy and the number of induced rules. We also present a case study on a real-world dataset from the medical area. The aim of this case study is twofold: to compare the induction algorithms' performance and to evaluate the extracted knowledge with the aid of the specialist. Although the induced knowledge is not surprising, it allows us to confirm some hypotheses already made by the specialist using other methods. This shows that Machine Learning can also be viewed as a contribution to other scientific fields.
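
A greedy QuickReduct-style sketch of reduct computation (a standard rough-sets heuristic, shown for illustration; the thesis's filter uses reducts obtained from RS theory, not necessarily this particular heuristic):

```python
from collections import defaultdict

def dependency(data, labels, attrs):
    """Rough-set dependency degree: the fraction of examples whose attribute-value
    combination over `attrs` occurs with a single class label (positive region)."""
    blocks = defaultdict(set)
    for row, c in zip(data, labels):
        blocks[tuple(row[a] for a in attrs)].add(c)
    consistent = sum(1 for row in data
                     if len(blocks[tuple(row[a] for a in attrs)]) == 1)
    return consistent / len(data)

def quickreduct(data, labels):
    """Greedily grow an attribute subset until it preserves the dependency
    degree of the full attribute set (an approximate reduct)."""
    all_attrs = list(range(len(data[0])))
    target, reduct = dependency(data, labels, all_attrs), []
    while dependency(data, labels, reduct) < target:
        best = max((a for a in all_attrs if a not in reduct),
                   key=lambda a: dependency(data, labels, reduct + [a]))
        reduct.append(best)
    return reduct

# Toy decision table: attribute 1 is constant (redundant); the class is the
# XOR of attributes 0 and 2, so {0, 2} is a reduct.
data = [(0, 0, 0), (0, 0, 1), (1, 0, 0), (1, 0, 1)]
labels = [0, 1, 1, 0]
print(quickreduct(data, labels))  # [0, 2]
```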