641 |
Automatic target recognition using passive bistatic radar signals. / Reconnaissance automatique de cibles par utilisation de signaux de radars passifs bistatiques. Pisane, Jonathan, 04 April 2013 (has links)
We present the design, development, and test of three novel, distinct automatic target recognition (ATR) systems for the recognition of airplanes, and more specifically non-cooperative airplanes, i.e. airplanes that do not provide information when interrogated, in the framework of passive bistatic radar systems. Passive bistatic radar systems use one or more illuminators of opportunity (already present in the field), with frequencies up to 1 GHz for the transmitters considered here, and one or more receivers, deployed by the persons managing the system and not co-located with the transmitters. The sole sources of information are the signals scattered off the airplane and the direct-path signals, both collected by the receiver, some basic knowledge about the transmitter, and the geometrical bistatic radar configuration. The three distinct ATR systems that we built use, respectively, the radar images, the bistatic complex radar cross-section (BS-CRCS), and the bistatic radar cross-section (BS-RCS) of the targets. We use data acquired either on scale models of airplanes placed in an anechoic electromagnetic chamber at ONERA, or on real-size airplanes using a bistatic testbed consisting of a VOR transmitter and a software-defined radio (SDR) receiver, deployed near Orly airport, France. We describe the radar phenomenology pertinent to the problem at hand, as well as the mathematical underpinnings of the derivation of the bistatic RCS values and of the construction of the radar images. For the classification of the observed targets into pre-defined classes, we use either extremely randomized trees (extra-trees) or subspace methods. A key feature of our approach is that we break the recognition problem into a set of sub-problems by decomposing the parameter space, which consists of the frequency, the polarization, the aspect angle, and the bistatic angle, into regions. We build one recognizer for each region. We first validate the extra-trees method on the radar images of the MSTAR dataset, featuring ground vehicles. We then test the method on the images of the airplanes constructed from data acquired in the anechoic chamber, achieving a probability of correct recognition of up to 0.99. We then test the subspace methods on the BS-CRCS and on the BS-RCS of the airplanes extracted from the anechoic-chamber data, achieving a probability of correct recognition of up to 0.98, with variations according to the frequency band, the polarization, the sector of aspect angle, the sector of bistatic angle, and the number of (Tx, Rx) pairs used. Finally, the ATR system deployed in the field gives a probability of correct recognition of 0.82, with variations according to the sector of aspect angle and the sector of bistatic angle. This thesis thus demonstrates that airborne targets can be recognized from their RCS acquired using passive bistatic radar signals.
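The subspace classification used here can be made concrete with a short sketch: each class is given a low-rank subspace fitted to its training vectors, and a test vector is assigned to the class whose subspace reconstructs it with the smallest residual. The code and data below are a minimal illustration of that principle, not the thesis's actual implementation or measurements:

```python
import numpy as np

def fit_subspaces(train, k):
    """Fit, per class, a rank-k principal subspace to the training vectors."""
    models = {}
    for label, X in train.items():
        mu = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
        models[label] = (mu, Vt[:k])          # class mean + orthonormal basis rows
    return models

def classify(x, models):
    """Pick the class whose subspace leaves the smallest reconstruction residual."""
    def residual(mu, B):
        r = x - mu
        return np.linalg.norm(r - B.T @ (B @ r))
    return min(models, key=lambda lbl: residual(*models[lbl]))

# toy example: two classes of vectors living along different axes
train = {
    "A": np.array([[1.0, 0, 0], [2.0, 0, 0], [3.0, 0, 0]]),
    "B": np.array([[0, 1.0, 0], [0, 2.0, 0], [0, 3.0, 0]]),
}
models = fit_subspaces(train, k=1)
pred = classify(np.array([4.0, 0, 0]), models)   # lies in class A's subspace
```

In the thesis's setting, one such recognizer would be trained per region of the (frequency, polarization, aspect angle, bistatic angle) parameter space.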
|
642 |
Investigating the moment when solutions emerge in problem solving. Lösche, Frank, January 2018 (has links)
At some point during a creative action something clicks: suddenly the prospective problem solver just knows the solution to a problem, and a feeling of joy and relief arises. This phenomenon, called a Eureka experience, insight, Aha moment, hunch, epiphany, illumination, or serendipity, has been part of human narrations for thousands of years. It is the moment of a subjective experience, a surprising, and sometimes a life-changing event. In this thesis, I narrow down this moment (1) conceptually, (2) experientially, and (3) temporally. The concept of emerging solutions has a multidisciplinary background in Cognitive Science, Arts, Design, and Engineering. Through the discussion of previous terminology and comparative reviews of historical literature, I identify sources of ambiguity surrounding this phenomenon and suggest unifying terms as the basis for interdisciplinary exploration. Tracking the experience through qualitative data from 11 creative practitioners, I identify conflicting aspects of existing models of creative production. To bridge this theoretical and disciplinary divide between iterative design thinking and sequential models of creativity, I suggest a novel multi-layered model. Empirical support for this proposal comes from Dira, a computer-based open-ended experimental paradigm. As part of this thesis I developed the task and 40 unique sets of stimuli and response items to collect dynamic measures of the creative process and evade known problems of insight tasks. Using Dira, I identify the moment when solutions emerge from the number and duration of mouse interactions with the on-screen elements and from the 124 participants' self-reports. I provide an argument for the multi-layered model to explain a discrepancy between the timing observed in Dira and existing sequential models. Furthermore, I suggest that Eureka moments can be assessed on more than a dichotomous scale, as the empirical data from interviews and Dira demonstrate for this rich human experience. I conclude that research on insight benefits from an interdisciplinary approach and suggest Dira as an instrument for future studies.
|
643 |
Synthesis and modification of abiotic sequence-defined poly(phosphodiester)s / Synthèse et modification de poly(phosphodiester)s non-biologiques contenant des séquences codées de monomères. König, Niklas Felix, 03 September 2018 (has links)
Phosphoramidite chemistry has recently been shown to be an efficient and versatile platform for accessing sequence-defined abiotic poly(phosphodiester)s. Using this strategy, monomers can be placed at defined positions in a chain, thus opening up wide possibilities for the preparation of functional macromolecules. Here, the phosphoramidite platform was explored to synthesize so-called digital polymers, which contain binary-coded monomer sequences. Polymers with controlled chain lengths and digital sequences were prepared using either a standard phosphoramidite strategy involving dimethoxytrityl protecting groups or a photo-controlled process involving light-cleavable nitrophenylpropyloxycarbonyl protecting groups. Additionally, several strategies to modify the side-chain information were investigated in this thesis. A binary post-polymerization modification by means of two consecutive copper(I)-catalyzed alkyne-azide cycloadditions was investigated for tuning the side-chain functionality of sequence-defined poly(phosphodiester)s. Moreover, the photo-controlled release of several ortho-nitrobenzyl ether side-chain motifs was studied. These moieties allowed the design of digital oligo(phosphodiester)s whose sequence information can be erased or revealed with light as a trigger.
|
644 |
確定提撥制退休金之評價:馬可夫調控跳躍過程模型下股價指數之實證 / Valuation of a defined contribution pension plan: evidence from stock indices under Markov-modulated jump diffusion model. 張玉華 (Chang, Yu Hua), Unknown Date (has links)
A pension plan underpins retirees' livelihood, so to ensure adequate retirement benefits the government attaches a guarantee to the plan that ties a minimum guaranteed rate of return to the rate of return of the underlying assets. This paper studies a pension plan of the defined contribution (DC) type. When the plan's investment return is based on the performance of linked stock indices, the index return is modeled by a Markov-modulated jump diffusion model (MMJDM), in which both the Brownian motion term and the jump intensity depend on the market state; this is a special case of the model of Elliott et al. (2007). The data are the logarithmic returns of the Dow Jones Industrial Average and the S&P 500 index from 1999 to 2012. We estimate the parameters by the Expectation-Maximization (EM) algorithm and obtain the covariance matrix of the estimates by the supplemented EM (SEM) algorithm. Likelihood ratio tests (LRT) show that the MMJDM describes the movement of stock-index returns better than the regime-switching model and the regime-switching model with jump risk, and the empirical evidence confirms that the MMJDM captures the asymmetry, leptokurtosis, and volatility clustering of returns. Finally, assuming a constant minimum guaranteed rate, we derive valuation formulas under the different models for a DC pension plan with a type-I guarantee via the Esscher transform. The formulas show that the value of the pension withdrawn by the employee splits into two parts: the government subsidy and the pension held in the individual account. We perform a sensitivity analysis of the MMJDM valuation formula with respect to the estimated parameters, and we value the DC pension plan with a type-II guarantee by Monte Carlo simulation, likewise discussing its sensitivity analysis.
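The model class involved can be illustrated by simulating one index path directly: a two-state Markov chain switches the regime, and the drift, volatility, and jump intensity all depend on the current state. The sketch below uses made-up parameters for illustration, not the estimates obtained in this study:

```python
import numpy as np

def simulate_mmjdm(n_steps, dt, P, mu, sigma, lam, jmu, jsig, s0=100.0, seed=42):
    """One price path of a 2-state Markov-modulated jump diffusion (Euler scheme).
    P[i] = per-step transition probabilities out of state i;
    mu/sigma/lam = per-state drift, volatility and jump intensity."""
    rng = np.random.default_rng(seed)
    s = np.empty(n_steps + 1)
    s[0] = s0
    state = 0
    for t in range(n_steps):
        state = rng.choice(2, p=P[state])              # regime transition
        dW = rng.normal(0.0, np.sqrt(dt))              # Brownian increment
        # compound jump: state-dependent Poisson count, normal jump sizes
        jumps = rng.normal(jmu, jsig, rng.poisson(lam[state] * dt)).sum()
        s[t + 1] = s[t] * np.exp((mu[state] - 0.5 * sigma[state] ** 2) * dt
                                 + sigma[state] * dW + jumps)
    return s

# hypothetical parameters: calm state 0, turbulent state 1
path = simulate_mmjdm(
    n_steps=250, dt=1 / 250,
    P=[[0.98, 0.02], [0.05, 0.95]],
    mu=[0.08, -0.05], sigma=[0.12, 0.35], lam=[2.0, 15.0],
    jmu=-0.01, jsig=0.03,
)
```

A type-II guarantee valuation along these lines would average the discounted guaranteed payoff over many such simulated paths.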
|
645 |
Méthodes de traitement numérique du signal pour l'annulation d'auto-interférences dans un terminal mobile / Digital processing for auto-interference cancellation in mobile architecture. Gerzaguet, Robin, 26 March 2015 (has links)
Radio frequency transceivers are now massively multi-standard, which means that several communication standards can cohabit in the same environment. As a consequence, analog components have to face critical design constraints to match the requirements of the different standards, and self-interferences directly introduced by the architecture itself are more and more present and detrimental. This work exploits the dirty-RF paradigm: we accept that the signal of interest is polluted by self-interferences, and we develop digital signal processing algorithms to mitigate these self-generated pollutions and improve signal quality. We study different self-interferences (such as spurs and Tx leakage), propose baseband models for them, and design digital adaptive algorithms, which can be seen as noise-subtraction algorithms based on more or less accurate references, for which we derive closed-form formulae of both transient and asymptotic performance. We also propose an original adaptive step-size overlay that accelerates convergence while keeping the asymptotic performance predictable and tunable. We finally validate our approach on a system-on-chip dedicated to cellular communications and on a software-defined radio platform.
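A minimal form of such a reference-based noise-subtraction algorithm is a one-tap complex LMS canceller: it adaptively scales a clean reference exponential at the known spur frequency and subtracts it from the received samples. This is an illustrative sketch of the general principle, not the algorithms developed in the thesis:

```python
import numpy as np

def cancel_spur(y, f_spur, fs, mu=0.05):
    """One-tap complex LMS: adaptively subtract a reference exponential
    at the known spur frequency f_spur (sample rate fs) from samples y."""
    n = np.arange(len(y))
    ref = np.exp(2j * np.pi * f_spur / fs * n)   # clean spur reference
    w = 0j                                       # adaptive complex gain
    out = np.empty(len(y), dtype=complex)
    for k in range(len(y)):
        e = y[k] - w * ref[k]                    # error = cleaned sample
        w += mu * e * np.conj(ref[k])            # LMS weight update
        out[k] = e
    return out

# a pure spur at 100 kHz sampled at 1 MHz: the canceller drives it to zero
fs, f_spur = 1e6, 1e5
n = np.arange(2000)
spur = 2.0 * np.exp(2j * np.pi * f_spur / fs * n)
cleaned = cancel_spur(spur, f_spur, fs)
```

The step size `mu` sets the transient/asymptotic trade-off that the thesis's adaptive step-size overlay is designed to escape: a large `mu` converges fast but tracks noisily, a small one converges slowly but settles cleanly.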
|
646 |
The techno-economics of bitumen recovery from oil and tar sands as a complement to oil exploration in Nigeria / E. Orire. Orire, Endurance, January 2009 (has links)
The Nigerian economy is wholly dependent on revenue from oil. However, bitumen was discovered in the country in 1903 and has remained untapped over the years. The need for the country to complement oil exploration with its huge bitumen deposits cannot be overemphasized; doing so would improve the country's gross domestic product (GDP) and the revenue available to government. Bitumen is classified as heavy crude with an API (American Petroleum Institute) number ranging between 50 and 110, and occurs in Nigeria, Canada, Saudi Arabia, Venezuela, etc., from which petroleum products could be derived.

This dissertation looked at the Canadian experience by comparing the oil and tar sand deposits found in Canada, with particular reference to Athabasca (Grosmont, Wabiskaw, McMurray and Nisku), with those in Nigeria, with a view to transferring process technology from Canada to Nigeria. The Nigerian and Athabasca tar sands occur in the same type of environment: deltaic, fluvial-marine deposits in an incised valley with similar reservoir, chemical and physical properties. However, the Nigerian tar sand is more asphaltenic and also contains more resin, and as such will yield more product volume during hydrocracking, albeit more acidic. The differences in the components of the tar sands (viscosity, resin and asphaltene contents, sulphur and heavy-metal contents) are within the limits of technology adaptation. According to the findings of this research, any of the technologies used in Athabasca, Canada is adaptable to Nigeria.

The techno-economics of some of the process technologies are x-rayed using the PTAC (Petroleum Technology Alliance Canada) technology recovery model in order to obtain their unit cost for Nigerian bitumen. The unit costs of processed bitumen adopting the steam-assisted gravity drainage (SAGD), in-situ combustion (ISC) and cyclic steam stimulation (CSS) process technologies are 40.59, 25.00 and 44.14 Canadian dollars respectively; the unit costs in Canada using the same process technologies are 57.27, 25.00 and 61.33 Canadian dollars respectively. The unit cost in Nigeria is thus substantially lower than in Canada.

A trade-off is thereafter done using life-cycle costing so as to select the best process technology for the Nigerian oil/tar sands. The net present value / internal rate of return is found to be B$3,062/36.35% for steam-assisted gravity drainage, B$1,570/24.51% for cyclic steam stimulation and B$3,503/39.64% for in-situ combustion. Though in-situ combustion returned the highest net present value and internal rate of return, it proved not to be the best option for Nigeria due to environmental concerns and response time to production. The most viable option for the Nigerian tar sand was then deemed to be steam-assisted gravity drainage.

An integrated oil strategy coupled with cogeneration using MSAR was also seen to considerably amplify the benefits accruable from bitumen exploration; therefore, an investment in bitumen exploration in Nigeria is a wise economic decision. / Thesis (M.Ing. (Development and Management))--North-West University, Potchefstroom Campus, 2010.
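The NPV/IRR trade-off above follows the standard discounted-cash-flow definitions, which can be sketched in a few lines. The cash-flow figures below are made up for illustration and are unrelated to the project values reported in the dissertation:

```python
def npv(rate, cashflows):
    """Net present value of cashflows[t] received at the end of year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0):
    """Internal rate of return by bisection: the rate where NPV crosses zero
    (assumes a single sign change over [lo, hi])."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# hypothetical project: 1000 invested, 500 returned at the end of each of 3 years
flows = [-1000.0, 500.0, 500.0, 500.0]
rate = irr(flows)   # ~0.234, i.e. about 23.4%
```

A life-cycle costing comparison like the one in the dissertation would apply these definitions to each technology's full stream of capital and operating cash flows and rank the options by NPV and IRR.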
|
649 |
Gene expression programming for logic circuit design. Masimula, Steven Mandla, 02 1900 (has links)
Finding an optimal solution for the logic circuit design problem is challenging and time-consuming, especially for complex logic circuits. As the number of logic gates increases, the task of designing optimal logic circuits extends beyond human capability. A number of evolutionary algorithms have been invented to tackle a range of optimisation problems, including logic circuit design. This dissertation explores two of these evolutionary algorithms, Gene Expression Programming (GEP) and Multi Expression Programming (MEP), with the aim of integrating their strengths into a new Genetic Programming (GP) algorithm. GEP was invented by Candida Ferreira in 1999 and published in 2001 [8]. The GEP algorithm inherits the advantages of the Genetic Algorithm (GA) and GP, and it uses a simple encoding method to solve complex problems [6, 32]. While GEP emerged as powerful due to its simplicity in implementation and flexibility in genetic operations, it is not without weaknesses. Some of these inherent weaknesses are discussed in [1, 6, 21]. Like GEP, MEP is a GP variant that uses linear chromosomes of fixed length [23]. A unique feature of MEP is its ability to store multiple solutions of a problem in a single chromosome. MEP can also implement code reuse, which is achieved through a representation that allows multiple references to a single sub-structure.

This dissertation proposes a new GP algorithm, Improved Gene Expression Programming (IGEP), which improves the performance of the traditional GEP by combining the code-reuse capability of MEP with the simplicity of the gene encoding method of GEP. The results obtained using the IGEP and the traditional GEP show that the two algorithms are comparable in terms of success rate when applied to simple problems such as basic logic functions. However, for complex problems such as a one-bit Full Adder (FA) and an AND-OR Arithmetic Logic Unit (ALU), the IGEP performs better than the traditional GEP due to its code reuse. / Mathematical Sciences / M. Sc. (Applied Mathematics)
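The fixed-length linear encoding at the heart of GEP (Karva notation, or K-expressions) can be made concrete with a small evaluator: the gene string is read breadth-first into an expression tree, and any unused tail symbols are simply ignored, as in GEP's non-coding region. The symbol set below ('A' = AND, 'O' = OR, 'N' = NOT) is a simplified illustration, not the dissertation's implementation:

```python
# arity and semantics of each symbol: functions A/O/N, terminals a/b/c
ARITY = {'A': 2, 'O': 2, 'N': 1, 'a': 0, 'b': 0, 'c': 0}
FUNC = {'A': lambda x, y: x and y,
        'O': lambda x, y: x or y,
        'N': lambda x: not x}

def eval_kexpression(gene, env):
    """Decode a K-expression breadth-first and evaluate the logic circuit
    on the truth assignment in env (terminal symbol -> bool)."""
    children, queue, next_free = {}, [0], 1
    while queue:                               # assign children level by level
        idx = queue.pop(0)
        k = ARITY[gene[idx]]
        children[idx] = list(range(next_free, next_free + k))
        next_free += k
        queue.extend(children[idx])
    def ev(idx):                               # recursive bottom-up evaluation
        s = gene[idx]
        if ARITY[s] == 0:
            return env[s]
        return FUNC[s](*(ev(c) for c in children[idx]))
    return ev(0)

# gene "OANababb" decodes to (a AND b) OR (NOT a), i.e. "a implies b";
# the trailing "bb" is non-coding tail, never reached by the decoder
assert eval_kexpression("OANababb", {'a': True, 'b': False}) is False
```

A GEP or IGEP run would evolve such gene strings with genetic operators and score each one by how many rows of the target truth table its decoded circuit reproduces.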
|
650 |
Uplatňování principu participace v domech dětí a mládeže / Application of a participation principle in children and youth centres. ČENOVSKÁ, Petra, January 2012 (has links)
The next part of this thesis deals with the development of children and youth participation: it explains the term 'participation', presents the specifics and importance of youth participation, and covers the scope of participative pedagogy as well. The second part enquires into leisure-time centres. The term 'leisure time' is explained in connection with children and youth, and the conditions and activities of these centres are discussed, aiming especially at children and youth centres. The function of after-school education is mentioned, and the issue of using free time is dealt with in such a way that its utilization can support children's and young people's active participation. The final part discusses the functioning of participation in assorted children and youth centres in South Bohemia. The survey focuses on the examination of the grades and forms of participation in a hobby group.
|