491
Efficient construction of multi-scale image pyramids for real-time embedded robot vision
Entschev, Peter Andreas, 16 December 2013
Interest point detectors, or keypoint detectors, have long been of great interest for embedded robot vision, especially those that provide robustness against geometrical variations such as rotation, affine transformations and changes in scale.
The detection of scale-invariant features is normally done by constructing multi-scale image pyramids and performing an exhaustive search for extrema in the scale space, an approach present in object recognition methods such as SIFT and SURF. These methods are able to find very robust interest points with properties suitable for object recognition, but they are at the same time computationally expensive. In this work we present an efficient method for the construction of SIFT-like image pyramids in embedded systems such as the BeagleBoard-xM. The method aims at using computationally less expensive techniques and reusing already processed information in an efficient manner in order to reduce the overall computational complexity. To simplify the pyramid building process we use binomial filters, instead of the conventional Gaussian filters used in the original SIFT method, to calculate multiple scales of an image. Binomial filters can be implemented using fixed-point notation, which is a big advantage for many embedded systems that do not provide native floating-point support. We also reduce the number of convolution operations needed by resampling already processed scales of the pyramid. After presenting our efficient pyramid construction method, we show how to implement it efficiently on a SIMD (Single Instruction, Multiple Data) platform -- the SIMD platform we use is the ARM Neon extension available in the BeagleBoard-xM ARM Cortex-A8 processor. SIMD platforms in general are very useful for multimedia applications, where it is normally necessary to perform the same operation over several elements, such as pixels in images, enabling multiple data to be processed with a single processor instruction. However, the Neon extension in the Cortex-A8 processor does not support floating-point operations, so the whole method was carefully implemented to overcome this limitation. Finally, we provide comparison results between the method proposed here and the original SIFT approach, including execution time and repeatability of detected keypoints. With a straightforward implementation (without the use of the SIMD platform), we show that our method takes approximately 1/4 of the time needed to build the entire original SIFT pyramid, while repeating up to 86% of the interest points found by the original method. With a complete fixed-point approach (including vectorization within the SIMD platform), repeatability reaches up to 92% of the original SIFT keypoints while the processing time is reduced to less than 3%.
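To make the fixed-point binomial filtering concrete, here is a minimal sketch in the spirit of the method described above: a separable [1, 2, 1] kernel applied with integer-only arithmetic (the division by 4 becomes a right shift), with each new octave obtained by resampling an already-blurred scale. The grayscale uint8 input and the octave/scale counts are illustrative assumptions, not the thesis's exact configuration.

```python
import numpy as np

def binomial_blur_fixed_point(img):
    """One pass of the separable [1, 2, 1] binomial filter using
    integer-only arithmetic (dividing by 4 is a right shift by 2)."""
    padded = np.pad(img.astype(np.uint32), 1, mode="edge")
    h = (padded[:, :-2] + 2 * padded[:, 1:-1] + padded[:, 2:]) >> 2   # horizontal pass
    v = (h[:-2, :] + 2 * h[1:-1, :] + h[2:, :]) >> 2                  # vertical pass
    return v.astype(np.uint8)

def build_pyramid(img, octaves=4, scales_per_octave=3):
    """Multi-scale pyramid: repeated binomial blurs within an octave, then
    2x resampling of the last (already smoothed) scale for the next octave,
    which avoids recomputing convolutions from the original image."""
    pyramid = []
    current = img
    for _ in range(octaves):
        octave = [current]
        for _ in range(scales_per_octave):
            octave.append(binomial_blur_fixed_point(octave[-1]))
        pyramid.append(octave)
        current = octave[-1][::2, ::2]   # reuse the blurred scale, subsample by 2
    return pyramid
```

Repeated application of the [1, 2, 1] kernel approximates Gaussian smoothing of increasing width while staying entirely in integer arithmetic, which is what makes this kind of approach attractive on processors without native floating-point support.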
492
Vision-based moving pedestrian recognition from imprecise and uncertain data / Reconnaissance de piétons par vision à partir de données imprécises et incertaines
Zhou, Dingfu, 05 December 2014
Developing vision-based Advanced Driver Assistance Systems (ADAS) is a complex and challenging task in real-world traffic scenarios. An ADAS aims at perceiving and understanding the surrounding environment of the ego-vehicle and providing the necessary assistance to the driver when facing an emergency. In this thesis, we focus on detecting and recognizing moving objects, because they are more dangerous than static ones. Detecting these objects, estimating their positions and recognizing their categories are significantly important for ADAS and autonomous navigation. Consequently, we propose to build a complete system for moving object detection and recognition based only on vision sensors. The proposed approach can detect any kind of moving object based on two adjacent frames only. The core idea is to detect the moving pixels by using the Residual Image Motion Flow (RIMF). The RIMF is defined as the residual image changes caused by moving objects once the camera motion has been compensated. In order to robustly detect all kinds of motion and remove false positive detections, uncertainties in the ego-motion estimation and disparity computation should also be considered. The main steps of our general algorithm are the following: first, the relative camera pose is estimated by minimizing the sum of the reprojection errors of matched features, and its covariance matrix is calculated using a first-order error-propagation strategy. Next, a motion likelihood for each pixel is obtained by propagating the uncertainties of the ego-motion and disparity to the RIMF. Finally, the motion likelihood and the depth gradient are used in a graph-cut-based approach to obtain the moving-object segmentation. At the same time, the bounding boxes of moving objects are generated based on the U-disparity map. After obtaining the bounding box of a moving object, we want to classify it as a pedestrian or not. Compared to supervised classification algorithms (such as boosting and SVMs), which require a large amount of labeled training instances, our proposed semi-supervised boosting algorithm is trained with only a few labeled instances and many unlabeled instances. First, the labeled instances are used to estimate probabilistic class labels for the unlabeled instances using Gaussian Mixture Models, after a dimension-reduction step performed via Principal Component Analysis. Then, we apply a boosting strategy on decision stumps trained using the resulting soft-labeled instances. The performance of the proposed method is evaluated on several state-of-the-art classification datasets, as well as on a pedestrian detection and recognition problem. Finally, both our moving-object detection and recognition algorithms are tested on the public KITTI dataset, and the experimental results show that the proposed methods achieve good performance in different urban driving scenarios.
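The soft-labeling stage of the semi-supervised scheme can be sketched as follows, with scikit-learn standing in for the thesis's own implementation; the PCA dimensionality, the per-class mixture size and the hard-label-plus-weights approximation handed to AdaBoost (whose default base learner is a depth-1 tree, i.e. a decision stump) are illustrative assumptions rather than the actual settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import AdaBoostClassifier

def soft_label_unlabeled(X_labeled, y_labeled, X_unlabeled, n_components=20):
    """Estimate probabilistic class labels for unlabeled samples: PCA for
    dimension reduction, then one Gaussian mixture fitted per class."""
    pca = PCA(n_components=n_components).fit(np.vstack([X_labeled, X_unlabeled]))
    Xl, Xu = pca.transform(X_labeled), pca.transform(X_unlabeled)

    classes = np.unique(y_labeled)
    log_likes = np.stack(
        [GaussianMixture(n_components=2, covariance_type="diag")
         .fit(Xl[y_labeled == c]).score_samples(Xu) for c in classes], axis=1)
    probs = np.exp(log_likes - log_likes.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)      # softmax over class likelihoods
    return Xl, Xu, probs, classes

def train_boosted_stumps(Xl, y_labeled, Xu, probs, classes):
    """Boosting on decision stumps; the soft labels are approximated here by
    their most likely class, weighted by the label confidence."""
    y_unlabeled = classes[probs.argmax(axis=1)]
    weights = np.concatenate([np.ones(len(Xl)), probs.max(axis=1)])
    clf = AdaBoostClassifier(n_estimators=100)     # default base learner: decision stump
    clf.fit(np.vstack([Xl, Xu]), np.concatenate([y_labeled, y_unlabeled]),
            sample_weight=weights)
    return clf
```

A handful of labeled instances per class is assumed so that the per-class mixtures can be fitted at all.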
493
Une méthodologie de Reverse Engineering à partir de données hétérogènes pour les pièces et assemblages mécaniques / A methodology of Reverse Engineering from heterogeneous data for parts and mechanical assemblies
Bruneau, Marina, 22 March 2016
This thesis presents a methodology for Reverse Engineering (RE) of mechanical assemblies from heterogeneous data in a routine context. Starting from the existing data of a part or an assembly, this activity consists in rebuilding its digital mock-up. The input data of our RE process can be drawings, photos, point clouds or an earlier version of the digital mock-up, and they may be combined with pre-existing product data such as 2D drawings. Processing this so-called "heterogeneous" data requires a solution able to manage, on the one hand, the heterogeneity of the data and of the information they contain and, on the other hand, the incompleteness of some of the data, which is linked to noise (in the point clouds) or to the technology used to digitize the assembly (e.g., scanner or photography). The proposed approach, called Heterogeneous Data Integration for Reverse Engineering (HDI-RE), is divided into three steps: segmentation, signature, and comparison of the input data with a knowledge base. The input data are segmented to extract information, which is then formalized as signatures; the signatures of the studied object are compared with signatures of the same type stored in the knowledge base in order to identify known components, ordered by similarity (distance to the object). The parameterized digital mock-up most similar to the object is then retrieved and its parameters are identified from the input data. Depending on the user's needs, the signatures make it possible to rebuild the digital mock-up at three levels of information: a global level, a geometric and topological level, or a functional level, with a dedicated signature mechanism proposed for each level and each type of data.
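As a purely illustrative example of the signature-and-comparison idea (not the dedicated signature mechanisms developed in the thesis), the sketch below computes a shape-distribution-style signature from a point cloud, a histogram of sampled pairwise distances, and ranks the entries of a hypothetical knowledge base by similarity.

```python
import numpy as np

def distance_signature(points, bins=32, n_samples=2000, seed=0):
    """Signature of a point cloud: normalized histogram of distances between
    randomly sampled point pairs, with a crude scale normalization."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(points), n_samples)
    j = rng.integers(0, len(points), n_samples)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    d /= d.max() + 1e-12
    hist, _ = np.histogram(d, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def rank_known_components(query_points, knowledge_base):
    """Compare the query signature with every signature of the same type in
    the knowledge base; return component names ordered by similarity."""
    q = distance_signature(query_points)
    scores = {name: np.abs(q - sig).sum() for name, sig in knowledge_base.items()}
    return sorted(scores, key=scores.get)   # smallest L1 distance first
```

In practice the knowledge base would hold one such signature per known component, precomputed from its CAD model or from earlier digitizations.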
494
Sistema de apoio na inspeção radiográfica computadorizada de juntas soldadas de tubulações de petróleo / Support system for computerized radiographic inspection of welded petroleum pipeline joints
Kroetz, Marcel Giovani, 22 December 2012
Petrobras / Weld bead radiographic inspection is the activity of meticulously observing radiographic images of welded joints in search of small defects and discontinuities that can compromise the mechanical resistance of those joints. As with any activity that requires constant attention, radiographic inspection is error-prone, mainly due to visual fatigue and the distractions inherent in such a repetitive and monotonous task. In this work, two methodologies to assist and automate the inspection activity are presented: the automatic detection of the weld bead in the radiographs and the highlighting of discontinuities. Among other functionalities, they are included in a complete software application for aiding radiographic inspection, including a macro-programming feature that automates the most common image-processing routines and applies them to batches of similar images. The results of the automatic weld bead detection are promising: the proposed methodology detects weld beads from all the usual radiographic techniques. As for the discontinuity highlighting, although the results do not yet allow a completely autonomous, unsupervised inspection, they are better than those currently reported in the literature, mainly regarding the correlation between the visual contrast of the highlighted result and the probability of a discontinuity occurring in the marked regions. In conclusion, the proposed methodologies, combined with a fully featured interactive software application, contribute substantially to the weld bead inspection activity, with an expected reduction in error rates due to visual fatigue and a considerable gain in productivity through the automation of the most repetitive digital processing routines that radiographic images undergo during inspection.
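As a rough illustration of automatic weld bead localization (not the detection methodology of this work), the sketch below assumes the bead appears as an approximately horizontal bright band in the radiograph and recovers it from a smoothed row-intensity profile.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def locate_weld_bead_rows(radiograph, rel_threshold=0.8):
    """Rough localization of a horizontal weld bead: smooth the mean
    intensity of each row and keep the contiguous band of rows whose
    profile stays above a fraction of the peak value."""
    profile = gaussian_filter1d(radiograph.astype(float).mean(axis=1), sigma=5)
    peak = int(np.argmax(profile))
    threshold = rel_threshold * profile[peak]

    top = peak
    while top > 0 and profile[top - 1] >= threshold:
        top -= 1
    bottom = peak
    while bottom < len(profile) - 1 and profile[bottom + 1] >= threshold:
        bottom += 1
    return top, bottom   # the bead band spans rows [top, bottom]
```

Discontinuity highlighting would then operate only inside the detected band, which is where the inspection effort is concentrated.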
495
Multiresolution variance-based image fusion
Ragozzino, Matthew, 05 1900
Indiana University-Purdue University Indianapolis (IUPUI) / Multiresolution image fusion is an emerging area of research for use in military and commercial applications. While many methods for image fusion have been developed, improvements can still be made. In many cases, image fusion methods are tailored to specific applications and are limited as a result. In order to make improvements to general image fusion, novel methods have been developed based on the wavelet transform and empirical variance. One particular novelty is the use of directional filtering in conjunction with wavelet transforms. Instead of treating the vertical, horizontal, and diagonal sub-bands of a wavelet transform the same, each sub-band is handled independently by applying custom filter windows. Results of the new methods exhibit better performance across a wide range of images highlighting different situations.
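A compact sketch of variance-driven coefficient selection is given below, using PyWavelets; the directional windows attached to the horizontal, vertical and diagonal sub-bands are illustrative stand-ins for the custom filter windows developed in the thesis.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def local_variance(x, size):
    mean = uniform_filter(x, size)
    return uniform_filter(x * x, size) - mean * mean

def fuse_variance(img_a, img_b, wavelet="db2", w=5):
    """Single-level fusion: average the approximation bands; for each detail
    band keep, pixel by pixel, the coefficient with the larger local
    (empirical) variance, measured with a direction-dependent window."""
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a.astype(float), wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b.astype(float), wavelet)

    windows = [(1, w), (w, 1), (w, w)]   # assumed shapes for H, V, D sub-bands
    fused_details = []
    for (da, db), size in zip([(cH_a, cH_b), (cV_a, cV_b), (cD_a, cD_b)], windows):
        mask = local_variance(da, size) >= local_variance(db, size)
        fused_details.append(np.where(mask, da, db))

    return pywt.idwt2((0.5 * (cA_a + cA_b), tuple(fused_details)), wavelet)
```

Extending this to several decomposition levels simply repeats the same selection rule on each level's detail bands.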
496
Design and evaluation of a secure, privacy-preserving and cancelable biometric authentication: Bio-Capsule
Sui, Yan, 04 September 2014
Indiana University-Purdue University Indianapolis (IUPUI) / A large portion of system breaches are caused by authentication failures, either during the system login process or in the post-authentication session; these failures are in turn related to the limitations of existing authentication approaches. Current authentication methods, whether proxy-based or biometrics-based, are hardly user-centric: they either put a burden on users or endanger users' (biometric) security and privacy. In this research, we propose a biometrics-based, user-centric authentication approach. The main idea is to introduce a reference subject (RS) for each system, securely fuse the user's biometrics with the RS, generate a BioCapsule (BC) from the fused biometrics, and employ BCs for authentication. Such an approach is user-friendly, identity-bearing yet privacy-preserving, resilient, and revocable once a BC is compromised. It also supports "one-click sign-on" across multiple systems by fusing the user's biometrics with a distinct RS on each system. Moreover, active and non-intrusive authentication can be performed automatically during the user's post-authentication online session. In this research, we also formally prove that the proposed secure-fusion-based BC approach is secure against various attacks, and we compare the new approach with existing biometrics-based approaches. Extensive experiments show that the performance (i.e., authentication accuracy) of the new BC approach is comparable to that of typical existing biometric authentication approaches, and the new BC approach also possesses other desirable features such as diversity and revocability.
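Purely as an illustration of the revocability idea (the secure fusion algorithm, feature representation and security proof are defined in the research itself, not here), the sketch below mixes a user feature vector with a reference-subject vector through an RS-keyed projection; switching to a new reference subject yields an unrelated BioCapsule.

```python
import numpy as np

def make_biocapsule(user_features, rs_features, rs_id=0):
    """Illustrative fusion only, NOT the thesis's secure fusion scheme:
    project the user's feature vector with an RS-keyed random matrix,
    blend it with the reference subject's features, then normalize."""
    rng = np.random.default_rng(rs_id)                 # keyed by the reference subject
    user = np.asarray(user_features, dtype=float)
    rs = np.asarray(rs_features, dtype=float)
    fused = rng.standard_normal((len(rs), len(user))) @ user + rs
    return fused / (np.linalg.norm(fused) + 1e-12)

def verify(bc_enrolled, bc_probe, threshold=0.9):
    """Cosine-similarity match between an enrolled and a probe BioCapsule."""
    return float(np.dot(bc_enrolled, bc_probe)) >= threshold

# Revocation sketch: if a BioCapsule is compromised, re-enrolling the same
# user against a different reference subject produces a new, unrelated capsule.
```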
497
Automated image classification via unsupervised feature learning by K-means
Karimy Dehkordy, Hossein, 09 July 2015
Indiana University-Purdue University Indianapolis (IUPUI) / Research on image classification has grown rapidly in the field of machine learning, and many methods have already been implemented for the task. Among them, the best results have been reported by neural-network-based techniques. One of the most important steps in automated image classification is feature extraction, which includes two parts: feature construction and feature selection. Many methods for feature extraction exist, but the strongest are deep-learning approaches such as network-in-network and deep convolutional network algorithms. Deep learning focuses on levels of abstraction, finding higher levels of abstraction from the previous level through multiple hidden layers. The two main problems with deep-learning approaches are their speed and the number of parameters that must be configured: small changes or poor parameter choices can alter the results completely or even make them worse. Tuning these parameters is usually impossible for ordinary users who do not have supercomputers, because one must run the algorithm repeatedly and adjust the parameters according to the results obtained. This process can therefore be very time-consuming.
This thesis attempts to address the speed and configuration issues found with traditional deep-network approaches. Some of the traditional methods of unsupervised learning are used to build an automated image-classification approach that takes less time both to configure and to run.
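A condensed sketch of single-layer K-means feature learning, the kind of traditional unsupervised method referred to above, is shown below; the patch size, dictionary size, stride and triangle encoding are illustrative choices rather than the configuration used in the thesis.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.feature_extraction.image import extract_patches_2d

def learn_dictionary(images, n_atoms=256, patch=8, patches_per_image=50, seed=0):
    """Learn a feature dictionary by clustering normalized random patches
    with K-means (ZCA whitening is omitted here for brevity)."""
    rng = np.random.RandomState(seed)
    patches = np.vstack([
        extract_patches_2d(img, (patch, patch), max_patches=patches_per_image,
                           random_state=rng).reshape(-1, patch * patch)
        for img in images]).astype(float)
    patches -= patches.mean(axis=1, keepdims=True)
    patches /= patches.std(axis=1, keepdims=True) + 1e-8
    return MiniBatchKMeans(n_clusters=n_atoms, random_state=seed).fit(patches)

def encode(img, km, patch=8, stride=4):
    """'Triangle' encoding: activation = max(0, mean distance - distance to
    each centroid), averaged over all patches of the image."""
    feats = []
    for y in range(0, img.shape[0] - patch + 1, stride):
        for x in range(0, img.shape[1] - patch + 1, stride):
            p = img[y:y + patch, x:x + patch].reshape(1, -1).astype(float)
            p = (p - p.mean()) / (p.std() + 1e-8)
            d = np.linalg.norm(p - km.cluster_centers_, axis=1)
            feats.append(np.maximum(0.0, d.mean() - d))
    return np.mean(feats, axis=0)

# The encoded vectors are then fed to an ordinary linear classifier
# (e.g. scikit-learn's LogisticRegression), replacing the deep network.
```

The only parameters to choose are the patch size, the number of centroids and the stride, which is precisely the configuration burden this line of work tries to reduce.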
498
Query Segmentation For E-Commerce Sites
Gong, Xiaojing, 12 July 2013
Indiana University-Purdue University Indianapolis (IUPUI) / A query segmentation module is an integral part of Natural Language Processing: it analyzes users' queries and divides them into separate phrases. Published work on query segmentation focuses on web search using the Google n-gram frequency corpus or on text retrieval from relational databases. However, this module is also useful for product search in the E-Commerce domain. In this thesis, we discuss query segmentation in the context of E-Commerce. We propose a hybrid unsupervised segmentation methodology based on a prefix tree, mutual information and relative frequency counts to compute the score of query term pairs, and we involve Wikipedia for new-word recognition. Furthermore, we use two E-Commerce-specific evaluation methods to quantify the accuracy of our query segmentation method.
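A toy illustration of frequency-based scoring for adjacent query terms is sketched below; the prefix tree, the Wikipedia lookup for new words and the exact scoring function of the proposed methodology are not reproduced, and the example corpus and threshold are assumptions.

```python
import math
from collections import Counter

def build_counts(product_titles):
    """Unigram and bigram counts from a corpus of product titles."""
    unigrams, bigrams = Counter(), Counter()
    for title in product_titles:
        tokens = title.lower().split()
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def segment_query(query, unigrams, bigrams, pmi_threshold=1.0):
    """Keep adjacent terms in the same segment when their pointwise mutual
    information is high enough; otherwise start a new segment."""
    tokens = query.lower().split()
    if not tokens:
        return []
    total = sum(unigrams.values()) or 1
    segments, current = [], [tokens[0]]
    for a, b in zip(tokens, tokens[1:]):
        p_a, p_b = unigrams[a] / total, unigrams[b] / total
        p_ab = bigrams[(a, b)] / total
        pmi = math.log(p_ab / (p_a * p_b)) if p_ab > 0 else float("-inf")
        if pmi >= pmi_threshold:
            current.append(b)
        else:
            segments.append(" ".join(current))
            current = [b]
    segments.append(" ".join(current))
    return segments

# Example (corpus-dependent): segment_query("apple iphone 5 white case", ...)
# might return ["apple iphone 5", "white case"].
```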
499
Machine Vision Assisted In Situ Ichthyoplankton Imaging System
Iyer, Neeraj, 12 July 2013
Indiana University-Purdue University Indianapolis (IUPUI) / Recently there has been a lot of effort in developing systems for sampling and automatically classifying plankton from the oceans. Existing methods assume the specimens have already been precisely segmented, or aim at analyzing images containing a single specimen (extracting its features and/or recognizing specimens as single, in-focus targets in small images). The resolution of existing systems is limiting. Our goal is to develop automated, very high resolution image sensing of critically important, yet under-sampled, components of the planktonic community by addressing both the physical sensing system (e.g., camera, lighting, depth of field) and the crucial image extraction and recognition routines. The objective of this thesis is to develop a framework that aims at (i) detecting and segmenting all organisms of interest automatically, directly from the raw data, while filtering out noise and out-of-focus instances, (ii) extracting the best features from the images, and (iii) identifying and classifying the plankton species. Our approach focuses on utilizing the full computational power of a multicore system by implementing a parallel programming approach that can process the large volumes of high-resolution plankton images obtained from our newly designed imaging system, the In Situ Ichthyoplankton Imaging System (ISIIS). We compare some of the widely used segmentation methods, with emphasis on accuracy and speed, to find the one that works best on our data, and we design a robust, scalable, fully automated system for high-throughput processing of the ISIIS imagery.
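A skeletal sketch of the multicore processing idea is shown below, using Python's multiprocessing together with a simple Otsu threshold as the segmentation step; the real ISIIS pipeline, its segmentation comparison and its classification stages are far more involved, and the frame directory, polarity assumption and size filter are hypothetical.

```python
import glob
from multiprocessing import Pool

from skimage import io, filters, measure

def extract_candidates(path, min_area=200):
    """Segment one raw frame: threshold, label connected components and keep
    regions large enough to be plausible organisms (rejecting noise and
    tiny out-of-focus blobs)."""
    frame = io.imread(path, as_gray=True)
    mask = frame < filters.threshold_otsu(frame)   # dark organisms on a bright background (assumption)
    labels = measure.label(mask)
    regions = [r for r in measure.regionprops(labels) if r.area >= min_area]
    return path, [r.bbox for r in regions]

if __name__ == "__main__":
    frames = sorted(glob.glob("isiis_frames/*.png"))   # hypothetical frame directory
    with Pool() as pool:                               # one worker per available core
        for path, boxes in pool.imap_unordered(extract_candidates, frames):
            print(path, len(boxes), "candidate regions")
```

Because each frame is processed independently, throughput scales almost linearly with the number of cores, which is the point of the parallel design.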
500
POLYNOMIAL CURVE FITTING INDICES FOR DYNAMIC EVENT DETECTION IN WIDE-AREA MEASUREMENT SYSTEMS
Longbottom, Daniel W., 14 August 2013
Indiana University-Purdue University Indianapolis (IUPUI) / In a wide-area power system, detecting dynamic events is critical to maintaining system stability. Large events, such as the loss of a generator or a fault on a transmission line, can compromise the stability of the system by causing the generator rotor angles to diverge and lose synchronism with the rest of the system. If these events can be detected as they happen, controls can be applied to the system to prevent it from losing synchronous stability. To detect these events, pattern recognition tools can be applied to system measurements. In this thesis, decision trees (DTs), a pattern recognition tool, were used for event detection. A single DT produced rules distinguishing between the event and no-event cases by learning on a training set of simulations of a power system model. The rules were then applied to test cases to determine the accuracy of the event detection. To use a DT to detect events, the variables used to produce the rules must be chosen. These variables can be direct system measurements, such as the phase angle of bus voltages, or indices created by a combination of system measurements. One index used in this thesis was the integral square bus angle (ISBA) index, which provided a measure of the overall activity of the bus angles in the system. Other indices used were the variance and rate of change of the ISBA. Fitting a polynomial curve to a sliding window of these indices and then taking the difference between the polynomial and the actual index was found to produce a new index that was non-zero during an event and zero at all other times in most simulations. After the event-detection index was chosen to be the error between the fitted curve and the ISBA indices, a set of power system cases was created to serve as the training data set for the DT. All of these cases contained one event, either a small or a large power injection at a load bus in the system model. The DT was then trained to detect the large power injection but not the small one, so that the resulting rules would detect large events that could potentially cause the system to lose synchronous stability while ignoring small events that have no effect on the overall system. This DT was then combined with a second DT that predicted instability, such that the second DT decided whether or not to apply controls only for a short time after the end of every event, when controls would be most effective in stabilizing the system.
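A minimal sketch of the curve-fitting index is given below; the window length, polynomial order and detection threshold are illustrative assumptions rather than the values studied in the thesis.

```python
import numpy as np

def polyfit_residual_index(isba, window=30, order=3):
    """Slide a window over the ISBA signal, fit a low-order polynomial to
    each window and report the fitting error at the newest sample; the
    residual stays near zero in steady state and jumps during an event."""
    t = np.arange(window)
    residual = np.zeros(len(isba))
    for k in range(window, len(isba)):
        segment = np.asarray(isba[k - window:k], dtype=float)
        fit = np.polyval(np.polyfit(t, segment, order), t)
        residual[k] = abs(segment[-1] - fit[-1])
    return residual

def detect_events(residual, threshold):
    """Indices of samples where the curve-fitting error exceeds a threshold."""
    return np.flatnonzero(residual > threshold)
```

The same residual computed on the ISBA variance or rate-of-change indices can be stacked alongside it as additional DT input variables.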