41 |
Pairwise Classification and Pairwise Support Vector Machines. Brunner, Carl. 16 May 2012
Several modifications have been suggested to extend binary classifiers to multiclass classification, for instance the One Against All technique, the One Against One technique, or Directed Acyclic Graphs. A more recent approach to multiclass classification is pairwise classification, which relies on two input examples instead of one and predicts whether the two input examples belong to the same class or to different classes. A Support Vector Machine (SVM) that is able to handle pairwise classification tasks is called a pairwise SVM. A common pairwise classification task is face recognition. In this area, one set of images is given for training and another set of images is given for testing. Often, one is interested in the interclass setting, which means that no person represented by an image in the training set is represented by any image in the test set. Of the multiclass classification techniques mentioned above, only pairwise classification provides meaningful results in the interclass setting.
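As an illustration of this setup, the sketch below builds same-class and different-class pairs from an ordinary labeled dataset while keeping the identities of training and test pairs disjoint, as the interclass setting requires; the dataset, identity names, and pair counts are invented, and this is a minimal sketch, not the thesis code.

```python
# Minimal sketch (not the thesis code): building same-class / different-class
# pairs from a labeled dataset while keeping training and test identities
# disjoint, as required by the interclass setting. All names are invented.
import random

def make_pairs(examples, n_pairs, rng):
    # examples: dict mapping a class label (e.g. a person id) to its feature vectors.
    labels = list(examples)
    pairs = []
    for _ in range(n_pairs):
        if rng.random() < 0.5:                       # "same class" pair, label +1
            c = rng.choice(labels)
            a, b = rng.sample(examples[c], 2)
            pairs.append(((a, b), +1))
        else:                                        # "different class" pair, label -1
            c1, c2 = rng.sample(labels, 2)
            pairs.append(((rng.choice(examples[c1]),
                           rng.choice(examples[c2])), -1))
    return pairs

rng = random.Random(0)
data = {f"person_{i}": [(float(i), float(j)) for j in range(5)] for i in range(20)}
train_ids = dict(list(data.items())[:15])            # identities seen in training
test_ids = dict(list(data.items())[15:])             # unseen identities (interclass)
train_pairs = make_pairs(train_ids, 1000, rng)
test_pairs = make_pairs(test_ids, 200, rng)
```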
For a pairwise classifier, the order of the two examples should not influence the classification result. A common approach to enforce this symmetry is the use of selected kernels. Relations between such kernels and certain projections are provided, and it is shown that these projections can lead to a loss of information. For pairwise SVMs, another approach to enforcing symmetry is the symmetrization of the training sets: if the pair (a,b) of examples is a training pair, then (b,a) is a training pair, too. It is proven that both approaches lead to the same decision function for certain parameter choices. Empirical tests show that the approach using selected kernels is three to four times faster. For good interclass generalization, pairwise SVMs need training sets with several million training pairs. A technique is presented that speeds up the training of pairwise SVMs by a factor of up to 130 and thus enables learning from training sets with several million pairs. Another time-consuming element is the need to select several parameters. Even with the applied speed-up techniques, a grid search over the parameter set would be very expensive. Therefore, a model selection technique is introduced that is much less computationally expensive.
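The selected kernels analysed in the thesis are not reproduced here; as a hedged illustration, the sketch below shows one standard way to make a pairwise kernel invariant to swapping the two examples of a pair, next to the training-set symmetrization alternative described above.

```python
# Illustration only: a product construction is one standard way to obtain a
# symmetric pairwise kernel from an example-level kernel k; it is not
# necessarily one of the kernels studied in the thesis.
import numpy as np

def rbf(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

def pairwise_kernel(p, q, k=rbf):
    # K((a,b),(c,d)) = k(a,c)k(b,d) + k(a,d)k(b,c); swapping a and b
    # (or c and d) leaves the value unchanged.
    (a, b), (c, d) = p, q
    return k(a, c) * k(b, d) + k(a, d) * k(b, c)

def symmetrize(training_pairs):
    # The alternative strategy: if ((a, b), y) is a training pair, add ((b, a), y).
    return training_pairs + [((b, a), y) for (a, b), y in training_pairs]

p, q = ([0.0, 1.0], [2.0, 0.5]), ([1.0, 1.0], [0.0, 0.0])
assert np.isclose(pairwise_kernel(p, q), pairwise_kernel((p[1], p[0]), q))
```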
In machine learning, the training set and the test set are created by some data-generating process. Several pairwise data-generating processes are derived from a given non-pairwise data-generating process, and the advantages and disadvantages of the different pairwise data-generating processes are evaluated.
Pairwise Bayes classifiers are introduced and their properties are discussed. It is shown that pairwise Bayes classifiers for interclass generalization tasks can differ from pairwise Bayes classifiers for interexample generalization tasks. In face recognition, the interexample task implies that each person represented by an image in the test set is also represented by at least one image in the training set. Moreover, the set of images in the training set and the set of images in the test set are disjoint.
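For illustration, under the additional simplifying assumption (not stated in the abstract) that the classes of the two examples are drawn independently with priors pi_c and class-conditional densities p(x|c), a pairwise Bayes classifier compares the posterior evidence for "same" and "different" pairs:

```latex
% Illustrative pairwise Bayes rule under the simplifying assumption that the
% two classes are drawn independently with priors \pi_c and class-conditional
% densities p(x \mid c); this is only one possible pairwise data generating process.
\[
  P(\text{same}\mid a,b) \;\propto\; \sum_{c}\pi_c^{2}\,p(a\mid c)\,p(b\mid c),
  \qquad
  P(\text{diff}\mid a,b) \;\propto\; \sum_{c\neq c'}\pi_c\pi_{c'}\,p(a\mid c)\,p(b\mid c'),
\]
\[
  \hat{y}(a,b)=\operatorname{sign}\bigl(P(\text{same}\mid a,b)-P(\text{diff}\mid a,b)\bigr).
\]
```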
Pairwise SVMs are applied to four synthetic and two real-world datasets. One of the real-world datasets is the Labeled Faces in the Wild (LFW) database; the other is provided by Cognitec Systems GmbH. The synthetic databases provide empirical evidence for the presented model selection heuristic, for the discussion of the loss of information, and for the speed-up techniques, and they show that pairwise SVM classifiers reach a quality similar to that of pairwise Bayes classifiers. Additionally, a pairwise classifier is identified for the LFW database that achieves an average equal error rate (EER) of 0.0947 with a standard error of the mean (SEM) of 0.0057. This result is better than that of the current state-of-the-art classifier, namely the combined probabilistic linear discriminant analysis classifier, which achieves an average EER of 0.0993 with an SEM of 0.0051. / There are several approaches to using binary classifiers for multiclass classification, for example the One Against All technique, the One Against One technique, or Directed Acyclic Graphs. Pairwise classification is a more recent approach to multiclass classification. It is based on two input examples instead of one and determines whether the two examples belong to the same class or to different classes. A Support Vector Machine (SVM) used for pairwise classification tasks is called a pairwise SVM. Face recognition problems, for example, are posed as pairwise classification tasks: one set of images is used for training and another set of images for testing. Often one is interested in interclass generalization, meaning that no person shown on at least one image of the training set appears on any image of the test set. Of all the multiclass classification techniques mentioned, only pairwise classification yields meaningful results for interclass generalization.
The decision of a pairwise classifier should not depend on the order of the two input examples. This symmetry is often enforced by using special kernels. Relations between such kernels and certain projections are derived, and it is shown that these projections can lead to a loss of information. For pairwise SVMs, symmetrizing the training set is another way to enforce symmetry: if the pair (a,b) of input examples belongs to the training set, then the pair (b,a) must belong to it as well. It is proven that, for certain parameters, both approaches lead to the same decision function. Empirical measurements show that the approach using special kernels is three to four times faster. To achieve good interclass generalization, pairwise SVMs require training sets with several million pairs. A technique is introduced that speeds up the training of pairwise SVMs by a factor of up to 130 and thus makes it possible to use training sets with several million pairs. Selecting good parameters for pairwise SVMs is also very time-consuming in general. Even with the described speed-ups, a grid search over the parameter set is very expensive. Therefore, a model selection technique is introduced that requires considerably less effort.
In machine learning, the training set and the test set are generated by a data-generating process. Starting from a non-pairwise data-generating process, different pairwise data-generating processes are derived and their advantages and disadvantages are assessed.
Pairwise Bayes classifiers are introduced and their properties are discussed. It is shown that these Bayes classifiers generally differ between interclass generalization tasks and interexample generalization tasks. In face recognition, interexample generalization means that every person shown on an image of the test set also appears on at least one image of the training set. Moreover, the intersection of the set of training images and the set of test images is empty.
Pairwise SVMs are tested on four synthetic and two real-world databases. One of the real-world databases is the Labeled Faces in the Wild (LFW) database; the other was provided by Cognitec Systems GmbH. The assumptions of the model selection technique, the discussion of the information loss, and the presented speed-up techniques are supported by empirical measurements on the synthetic databases. These databases are also used to show that classifiers of pairwise SVMs achieve results of similar quality to pairwise Bayes classifiers. For the LFW database, a pairwise classifier is determined that achieves an average equal error rate (EER) of 0.0947 with a standard error of the mean (SEM) of 0.0057. This result is better than that of the current state-of-the-art classifier, the combined probabilistic linear discriminant analysis classifier, which achieves an average EER of 0.0993 with an SEM of 0.0051.
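The equal error rate quoted for the LFW experiments is the error at the score threshold where false accepts and false rejects balance; the short sketch below shows one way to estimate it from pairwise decision scores (the scores are synthetic, and this is not the thesis's evaluation code).

```python
# Sketch of computing an equal error rate (EER) from pairwise scores; the
# scores below are synthetic, and this is not the thesis's evaluation code.
import numpy as np

def equal_error_rate(scores, labels):
    # scores: higher means "same pair"; labels: +1 for same pairs, -1 for different.
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    for t in np.sort(np.unique(scores)):
        far = np.mean(scores[labels == -1] >= t)   # false accept rate
        frr = np.mean(scores[labels == +1] < t)    # false reject rate
        if far <= frr:                             # crossing point of the two rates
            return (far + frr) / 2
    return 0.5

rng = np.random.default_rng(1)
same = rng.normal(+1.0, 1.0, 500)                  # genuine ("same") pair scores
diff = rng.normal(-1.0, 1.0, 500)                  # impostor ("different") pair scores
scores = np.concatenate([same, diff])
labels = np.concatenate([np.ones(500), -np.ones(500)])
print(round(equal_error_rate(scores, labels), 3))  # about 0.16 for this overlap
```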
|
42 |
From a Comprehensive Experimental Survey to a Cost-based Selection Strategy for Lightweight Integer Compression Algorithms. Damme, Patrick; Ungethüm, Annett; Hildebrandt, Juliana; Habich, Dirk; Lehner, Wolfgang. 11 January 2023
Lightweight integer compression algorithms are frequently applied in in-memory database systems to tackle the growing gap between processor speed and main memory bandwidth. In recent years, the vectorization of basic techniques such as delta coding and null suppression has considerably enlarged the corpus of available algorithms. As a result, there is now a large number of algorithms to choose from, each tailored to different data characteristics. However, a comparative evaluation of these algorithms across different data and hardware characteristics has never been sufficiently conducted in the literature. To close this gap, we conducted an exhaustive experimental survey by evaluating several state-of-the-art lightweight integer compression algorithms as well as cascades of basic techniques. We systematically investigated the influence of data as well as hardware properties on performance and compression rates. The evaluated algorithms are based on publicly available implementations as well as our own vectorized reimplementations. We summarize our experimental findings, leading to several new insights and to the conclusion that there is no single best algorithm. Moreover, in this article, we also introduce and evaluate a novel cost model for the selection of a suitable lightweight integer compression algorithm for a given dataset.
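To make the basic techniques named above concrete, the sketch below combines delta coding with a simple byte-oriented null suppression scheme; it is an illustrative scalar cascade, not one of the vectorized implementations evaluated in the article.

```python
# Illustrative scalar cascade (not the article's vectorized code): delta coding
# followed by a simple byte-oriented null suppression scheme.
from typing import List, Tuple

def delta_encode(values: List[int]) -> List[int]:
    # Keep the first value and store every further value as the difference
    # to its predecessor; sorted or clustered data yields small deltas.
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

def null_suppress(deltas: List[int]) -> Tuple[bytes, bytes]:
    # Null suppression: drop leading zero bytes, i.e. keep only the minimal
    # number of bytes per value plus a one-byte length descriptor.
    lengths, payload = bytearray(), bytearray()
    for d in deltas:
        d = (d + (1 << 32)) % (1 << 32)            # wrap negative deltas into 32-bit range
        n = max(1, (d.bit_length() + 7) // 8)      # minimal byte count
        lengths.append(n)
        payload += d.to_bytes(n, "little")
    return bytes(lengths), bytes(payload)

data = [1000, 1003, 1004, 1010, 1011]
lengths, payload = null_suppress(delta_encode(data))
print(len(lengths) + len(payload), "bytes instead of", 4 * len(data))
```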
|
43 |
Die Kalifatidee bei den sunnitischen und schiitischen Gelehrten des 20. Jahrhunderts / The Idea of the Caliphate among Sunni and Shiite Scholars of the 20th Century. Rastegarfar, Akbar. 16 October 2014
The Arabic word "Khalifa", meaning "deputy" or "successor", is used in the Koran, the holy book of the Muslims, in two places (sura 7, verse 69 and sura 38, verse 26). There, the human being is described as God's representative on earth. In its historical context, the term arises after the death of the Prophet Muhammad in the year 632. The first four successors in the political leadership of the community are referred to in Sunni historiography as the "Rashidun" (the rightly guided). In this period, that is between 632 and 661, the term "Amir al-Mu'minin" (Commander of the Faithful) also emerges as the title of the caliph, by which the rulers were addressed. The question of the succession of the Prophet Muhammad developed into a fundamental point of dispute within the young Muslim community. Out of these disputes arose the confessional division of the Muslim world into the Sunni majority and the Shiite minority. Fundamentally, there are similarities as well as differences between the two main currents of the Islamic community, the Sunnis and the Shiites. The disagreements between Sunnis and Shiites outweigh their similarities, although these differences are not recognizable at first glance. The roots of all these differences go back to the fact that, after the passing of the Prophet, the Shiites adopted the dogmas of their faith from the Ahl al-Bayt (members of the house of the Prophet), while the Sunnis adopted them from others. The topic of the caliphate may be regarded as the focal point of all other discrepancies in the religious views of the Sunni and Shiite scholars. / The Arabic word "Khalifa", meaning "deputy" or "successor", is used in the Koran, the holy book of the Muslims, in two places: sura 7, verse 69 and sura 38, verse 26. In these verses, the human being is described as God's representative on earth. In the historical context, the term arises after the death of the Prophet Muhammad in 632. His first four successors in the political leadership of the community are called the "Rashidun" (the rightly guided) in Sunni historiography. During this time, i.e. from 632 to 661, the term "Amir al-Mu'minin" (Commander of the Faithful) was created as the title for the caliphs, by which the rulers were also addressed. The question of the succession of the Prophet Muhammad became a fundamental point of conflict within the young Muslim community. From these disputes arose the confessional division of the Muslim world into the Sunni majority and the Shia minority. Basically, there are similarities and differences between the two mainstreams of the Islamic community, the Sunnis and the Shiites. The disagreements between Sunnis and Shiites outweigh their similarities, although these differences are not recognizable at first glance. The roots of all these differences lie in the fact that, after the passing of the Prophet, the Shiites adopted the dogmas of their faith from the Ahl al-Bayt (solely 13 members of the house of the Prophet), while the Sunnis adopted them from others. It must be considered that the issue of the caliphate is the focus of all other discrepancies in the beliefs of the Sunni and Shiite scholars.
|
44 |
Analyse und praktische Umsetzung unterschiedlicher Methoden des Randomized Branch Sampling / Analysis and practical application of different methods of Randomized Branch Sampling. Cancino Cancino, Jorge Orlando. 26 June 2003
No description available.
|
45 |
Optimal Combination of Reduction Methods in Structural Mechanics and Selection of a Suitable Intermediate Dimension / Optimale Kombination von strukturmechanischen Modellreduktionsverfahren und Wahl einer geeigneten Zwischendimension. Paulke, Jan. 19 August 2014
A two-step model order reduction method is investigated in order to overcome problems of certain one-step methods. Not only are optimal combinations of one-step reductions considered, but the selection of a suitable intermediate dimension (ID) is also described. Several automated selection methods are presented and their application is tested on a gear box model. The implementation is realized in the Matlab-based software MORPACK. Several recommendations are given for the selection of a suitable ID, and problems in Model Order Reduction (MOR) combinations are pointed out. A pseudo two-step method is suggested to reduce the full system without any modal information. A new node selection approach is proposed to enhance the SEREP approximation of the system's response for small reduced representations. / Multi-step model reduction methods are investigated in order to solve specific problems of conventional one-step methods. An optimal combination of structural-mechanics reduction methods and the selection of a suitable intermediate dimension are examined. For this purpose, automated procedures are implemented in Matlab, integrated into the software MORPACK, and evaluated on the finite element model of a gear box housing. Recommendations are given for the selection of the intermediate dimension, and problems in combining certain reduction methods are pointed out. A pseudo two-step method is presented that performs a reduction without knowledge of the modal quantities while achieving an accuracy similar to that of modal subspace methods. For small reduction dimensions, a node selection method is proposed to improve the approximation of the frequency response by the System Equivalent Reduction Expansion Process (SEREP) reduction.
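As a rough illustration of the two-step idea, the sketch below condenses a toy system to an intermediate dimension with a Guyan (static condensation) step and then applies a modal reduction to the intermediate system; the matrices and the master-DOF choice are invented, and this is not MORPACK's implementation.

```python
# Minimal sketch of the general two-step idea (not MORPACK's implementation):
# a Guyan step down to an intermediate dimension, then a modal reduction of
# the intermediate system. The toy model and DOF choices are invented.
import numpy as np
from scipy.linalg import eigh

def guyan(K, M, master):
    # Partition DOFs into masters (kept) and slaves (condensed out statically).
    n = K.shape[0]
    slave = np.setdiff1d(np.arange(n), master)
    Kss, Ksm = K[np.ix_(slave, slave)], K[np.ix_(slave, master)]
    T = np.zeros((n, len(master)))
    T[master, np.arange(len(master))] = 1.0
    T[slave, :] = -np.linalg.solve(Kss, Ksm)       # static condensation
    return T.T @ K @ T, T.T @ M @ T, T

def modal(K, M, m):
    # Keep the m lowest eigenmodes of the (intermediate) system.
    _, Phi = eigh(K, M)
    T = Phi[:, :m]
    return T.T @ K @ T, T.T @ M @ T, T

# Toy chain model: unit masses coupled by unit springs.
n = 200
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)

master = np.arange(0, n, 5)                        # intermediate dimension: 40
K1, M1, T1 = guyan(K, M, master)                   # step 1: physical subspace
K2, M2, T2 = modal(K1, M1, 10)                     # step 2: modal subspace
T = T1 @ T2                                        # overall reduction basis
print(K2.shape)                                    # (10, 10)
```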
|
46 |
Optimal Combination of Reduction Methods in Structural Mechanics and Selection of a Suitable Intermediate Dimension. Paulke, Jan. 08 May 2014
A two-step model order reduction method is investigated in order to overcome problems of certain one-step methods. Not only are optimal combinations of one-step reductions considered, but the selection of a suitable intermediate dimension (ID) is also described. Several automated selection methods are presented and their application is tested on a gear box model. The implementation is realized in the Matlab-based software MORPACK. Several recommendations are given for the selection of a suitable ID, and problems in Model Order Reduction (MOR) combinations are pointed out. A pseudo two-step method is suggested to reduce the full system without any modal information. A new node selection approach is proposed to enhance the SEREP approximation of the system's response for small reduced representations.
Contents:
Kurzfassung
Abstract
Nomenclature
1 Introduction
1.1 Motivation
1.2 Objectives
1.3 Outline of the Thesis
2 Theoretical Background
2.1 Finite Element Method
2.1.1 Modal Analysis
2.1.2 Frequency Response Function
2.2 Model Order Reduction
2.3 Physical Subspace Reduction Methods
2.3.1 Guyan Reduction
2.3.2 Improved Reduced System Method
2.4 Modal Subspace Reduction Methods
2.4.1 Modal Reduction
2.4.2 Exact Modal Reduction
2.4.3 System Equivalent Reduction Expansion Process
2.5 Krylov Subspace Reduction Methods
2.6 Hybrid Subspace Reduction Methods
2.6.1 Component Mode Synthesis
2.6.2 Hybrid Exact Modal Reduction
2.7 Model Correlation Methods
2.7.1 Normalized Relative Frequency Difference
2.7.2 Modified Modal Assurance Criterion
2.7.3 Pseudo-Orthogonality Check
2.7.4 Comparison of Frequency Response Function
3 Selection of Active Degrees of Freedom
3.1 Non-Iterative Methods
3.1.1 Modal Kinetic Energy and Variants
3.1.2 Driving Point Residue and Variants
3.1.3 Eigenvector Component Product
3.2 Iterative Reduction Methods
3.2.1 Effective Independence Distribution
3.2.2 Mass-Weighted Effective Independence
3.2.3 Variance Based Selection Method
3.2.4 Singular Value Decomposition Based Selection Method
3.2.5 Stiffness-to-Mass Ratio Selection Method
3.3 Iterative Expansion Methods
3.3.1 Modal-Geometrical Selection Criterion
3.3.2 Triaxial Effective Independence Expansion
3.4 Measure of Goodness for Selected Active Set
3.4.1 Determinant and Rank of the Fisher Information Matrix
3.4.2 Condition Number of the Partitioned Modal Matrix
3.4.3 Measured Energy per Mode
3.4.4 Root Mean Square Error of Pseudo-Orthogonality Check
3.4.5 Eigenvalue Comparison
4 Two-Step Reduction in MORPACK
4.1 Structure of MORPACK
4.2 Selection of an Intermediate Dimension
4.2.1 Intermediate Dimension Requirements
4.2.2 Implemented Selection Methods
4.2.3 Recommended Selection of an Intermediate Dimension
4.3 Combination of Reduction Methods
4.3.1 Overview of All Candidates
4.3.2 Combinations with Modal Information
4.3.3 Combinations without Modal Information
5 Applications
5.1 Gear Box Model
5.2 Selection of Additional Active Nodes
5.3 Optimal Intermediate Dimension
5.4 Two-Step Model Order Reduction Results
5.5 Comparison to One-Step Model Order Reduction Methods
5.6 Comparison to One-Step Hybrid Model Order Reduction Methods
5.7 Proposal of a New Approach for Additional Node Selection
6 Summary and Conclusions
7 Zusammenfassung und Ausblick
Bibliography
List of Tables
List of Figures
A Appendix
A.1 Results of Two-Step Model Order Reduction
A.2 Data CD / Multi-step model reduction methods are investigated in order to solve specific problems of conventional one-step methods. An optimal combination of structural-mechanics reduction methods and the selection of a suitable intermediate dimension are examined. For this purpose, automated procedures are implemented in Matlab, integrated into the software MORPACK, and evaluated on the finite element model of a gear box housing. Recommendations are given for the selection of the intermediate dimension, and problems in combining certain reduction methods are pointed out. A pseudo two-step method is presented that performs a reduction without knowledge of the modal quantities while achieving an accuracy similar to that of modal subspace methods. For small reduction dimensions, a node selection method is proposed to improve the approximation of the frequency response by the System Equivalent Reduction Expansion Process (SEREP) reduction.
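For the SEREP reduction mentioned in the abstract, a textbook-style sketch of the transformation is shown below on an invented toy system; it is not the implementation used in the thesis.

```python
# Minimal sketch of a SEREP reduction (textbook form, not the thesis code):
# the reduction basis maps the retained (active) DOFs back to all DOFs via
# the kept mode shapes. The system matrices and active-DOF set are invented.
import numpy as np
from scipy.linalg import eigh

def serep(K, M, active, n_modes):
    # Mode shapes of the full system, truncated to n_modes columns.
    _, Phi = eigh(K, M)
    Phi_kept = Phi[:, :n_modes]
    # SEREP basis: T = Phi * pinv(Phi_a), with Phi_a the rows at active DOFs.
    T = Phi_kept @ np.linalg.pinv(Phi_kept[active, :])
    return T.T @ K @ T, T.T @ M @ T, T

n = 100
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
active = np.arange(0, n, 10)           # 10 active DOFs
Kr, Mr, T = serep(K, M, active, n_modes=10)
print(Kr.shape)                        # (10, 10)
```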
|
47 |
A 64-channel back-gate adapted ultra-low-voltage spike-aware neural recording front-end with on-chip lossless/near-lossless compression engine and 3.3V stimulator in 22nm FDSOI. Schüffny, Franz Marcus; Zeinolabedin, Seyed Mohammad Ali; George, Richard; Guo, Liyuan; Weiße, Annika; Uhlig, Johannes; Meyer, Julian; Dixius, Andreas; Hänzsche, Stefan; Berthel, Marc; Scholze, Stefan; Höppner, Sebastian; Mayr, Christian. 21 February 2024
In neural implants and biohybrid research systems, the integration of electrode recording and stimulation front-ends with pre-processing circuitry promises a drastic increase in real-time capabilities [1,6]. In our proposed neural recording system, constant sampling with a bandwidth of 9.8kHz yields 6.73μV input-referred noise (IRN) at a power per channel of 0.34μW for the time-continuous ΔΣ modulator and 0.52μW for the digital filters and spike detectors. We introduce dynamic current/bandwidth selection at the ΔΣ modulator and digital filter to reduce the recording bandwidth in the absence of spikes (i.e., when only local field potentials are present). This is controlled by a two-level spike detector and adjusted by adaptive threshold estimation (ATE). Dynamic bandwidth selection reduces power by 53.7%, increasing the available channel count at low heat dissipation. Adaptive back-gate voltage tuning (ABGVT) compensates for PVT variation in subthreshold circuits, which allows 1.8V input/output (IO) devices to operate robustly at a 0.4V supply voltage. The proposed 64-channel neural recording system moreover includes a 16-channel adaptive compression engine (ACE) and an 8-channel on-chip current stimulator at 3.3V. The stimulator supports field-shaping approaches, promising increased selectivity in future research.
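The paper's ATE circuit is not described in detail here; as a generic software illustration of the underlying idea, the sketch below derives a detection threshold from a robust estimate of the noise floor (a median-based rule of thumb commonly used for neural spike detection) rather than from the chip's actual hardware.

```python
# Generic illustration of adaptive-threshold spike detection (not the chip's
# actual ATE circuit): the threshold tracks a robust estimate of the noise
# floor, and samples exceeding it are flagged as spikes.
import numpy as np

def adaptive_threshold(x, k=4.5):
    # Robust noise estimate from the median absolute amplitude; for Gaussian
    # noise, sigma is roughly median(|x|) / 0.6745 (a widely used rule of thumb).
    sigma = np.median(np.abs(x)) / 0.6745
    return k * sigma

def detect_spikes(x, fs, refractory_ms=1.0, k=4.5):
    thr = adaptive_threshold(x, k)
    above = np.flatnonzero(np.abs(x) > thr)
    # Enforce a refractory period so one spike is not counted repeatedly.
    min_gap = int(refractory_ms * 1e-3 * fs)
    spikes = []
    for idx in above:
        if not spikes or idx - spikes[-1] > min_gap:
            spikes.append(idx)
    return np.array(spikes), thr

# Synthetic test: Gaussian noise plus a few injected spikes at 9.8 kHz sampling.
rng = np.random.default_rng(0)
fs = 9800
x = rng.normal(0.0, 1.0, 2 * fs)
x[[1000, 5000, 12000]] += 8.0
print(detect_spikes(x, fs)[0])
```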
|