  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
171

Řízení nelineárních systémů s využitím lokálních aproximačních metod / Control of Nonlinear Systems using Local Approximation Methods

Brablc, Martin January 2016 (has links)
This thesis deals with the design of an adaptive control algorithm for a particular class of electromechanical actuators, based on the principle of feedforward control using an inverse dynamic model. The adaptivity of the control lies in the mechanism for obtaining the inverse dynamic model. The thesis focuses on its online approximation using local approximation methods. The output of the work is a summary of the analysis, simulation testing and real-world experiments that examined the practical applicability of local approximation methods for adaptive control in a real environment.
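The abstract does not name a specific local approximation method; as an illustration of the family it refers to, a minimal locally weighted regression (LWR) sketch, which fits a weighted affine model around each query point, might look like the following (the Gaussian kernel and bandwidth are assumptions, not taken from the thesis):

```python
import numpy as np

def lwr_predict(query, X, Y, bandwidth=0.5):
    """Locally weighted regression: fit a weighted affine model around `query`.

    X: (N, d) inputs, Y: (N,) targets. Gaussian kernel weights nearby samples.
    """
    w = np.exp(-np.sum((X - query) ** 2, axis=1) / (2 * bandwidth ** 2))
    Xb = np.hstack([X, np.ones((len(X), 1))])  # affine basis [x, 1]
    W = np.diag(w)
    # Weighted normal equations: (Xb' W Xb) beta = Xb' W Y
    beta, *_ = np.linalg.lstsq(Xb.T @ W @ Xb, Xb.T @ W @ Y, rcond=None)
    return np.append(query, 1.0) @ beta
```

Trained on input-output pairs collected online, such a local model could serve as the inverse dynamic model inside a feedforward controller.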
172

Numerical Modeling of Blast-Induced Liquefaction

Lee, Wayne Yeung 13 July 2006 (has links) (PDF)
A research study has been conducted to simulate liquefaction in saturated sandy soil induced by nearby controlled blasts. The purpose of the study is to help quantify soil characteristics under multiple, consecutive high-magnitude shock environments similar to those produced by large earthquakes. The simulation procedure involved modeling a three-dimensional half-space soil region with pre-defined, embedded, strategically located explosive charges detonated at specific time intervals. LS-DYNA, a commercially available finite element hydrocode, was the solver used to simulate the event. A new geo-material model developed under the direction of the U.S. Federal Highway Administration was applied to evaluate the liquefaction potential of saturated sandy soil subjected to sequential blast environments. Additional procedural enhancements were integrated into the analysis process to represent volumetric effects of the saturated soil's transition from solid to liquid during the liquefaction process. Explosive charge detonation and pressure development characteristics were modeled using proven and accepted modeling techniques. As explosive charges were detonated in a pre-defined order, development of pore water pressure, volumetric (compressive) strains, shear strains, and particle accelerations were computed and monitored using custom-developed MathCad and C/C++ routines. Results of the study were compared against blast-test data gathered in the Fraser River Delta region of Vancouver, British Columbia, in May 2005 to validate the modeling procedure's ability to simulate and predict blast-induced liquefaction events. Reasonable correlations between predicted and measured data were observed in the study.
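Liquefaction onset of the kind monitored above is commonly tracked through the excess pore pressure ratio r_u, the ratio of excess pore pressure to the initial vertical effective stress; the helper below is a generic sketch, not taken from the study's MathCad/C++ routines:

```python
def excess_pore_pressure_ratio(pore_pressure_kpa, hydrostatic_kpa, eff_stress_kpa):
    """Excess pore pressure ratio r_u = (u - u_hydrostatic) / sigma'_v0.

    Values of r_u approaching 1.0 indicate the effective stress has been
    transferred to the pore water, i.e. the soil has effectively liquefied.
    """
    return (pore_pressure_kpa - hydrostatic_kpa) / eff_stress_kpa
```

For example, a measured pore pressure of 80 kPa over a 30 kPa hydrostatic baseline at a point with 50 kPa initial effective stress gives r_u = 1.0, full liquefaction.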
173

Development of Dynamic Test Method and Optimisation of Hybrid Carbon Fibre B-pillar

Johansson, Emil, Lindmark, Markus January 2017 (has links)
The drive for lower fuel consumption and downsizing in the automotive industry has led to the use of alternative high-performance materials, such as fibre composites. Designing chassis components with composite materials requires accurate simulation models in order to capture their behaviour in car crashes. By simplifying the development process of a B-pillar with a new dynamic test method, composite products could reach the market faster. The setup has to predict a car's side-impact crash performance by testing only the B-pillar in a component-based environment. The new dynamic test method, with more realistic behaviour, gives a better estimate of how the B-pillar, and therefore the car, will perform in a full-scale side-impact test. With this improved development tool, the search for a lighter product with better crashworthiness is carried out by optimising a steel/carbon-fibre hybrid structure in the B-pillar. The optimisation covers different carbon-fibre materials, composite laminate lay-up, and stiffness analysis. By upgrading the simulation models with new material and adhesive representations, physical prototypes could be built to verify the results. Finally, the manufactured steel/carbon-fibre hybrid B-pillar prototypes were tested with the developed dynamic test method and compared to the steel B-pillar. The hybrid B-pillars perform better than the reference steel B-pillar in the dynamic tests while being considerably lighter. As a final result, a hybrid B-pillar is developed that will decrease fuel consumption and meet the requirements of any standardized side-impact crash test.
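As a rough illustration of the stiffness side of such an optimisation, a first-order rule-of-mixtures estimate for a stacked steel/carbon-fibre section can be sketched as follows; the layer values in the test are invented for illustration, and the thesis's actual laminate analysis is far more detailed:

```python
def hybrid_axial_stiffness(layers):
    """First-order in-plane axial stiffness of a stacked hybrid section.

    layers: list of (elastic_modulus_Pa, thickness_m) tuples.
    Returns (axial stiffness EA per unit width, equivalent modulus).
    """
    ea = sum(E * t for E, t in layers)        # rule of mixtures: sum of E*t
    t_total = sum(t for _, t in layers)
    return ea, ea / t_total
```

Such an estimate only captures membrane stiffness; bending, ply orientation, and failure behaviour require a full laminate and crash analysis.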
174

Structural responses due to underwater detonations : Validation of explosion modelling methods using LS-DYNA

Blomgren, Gustav, Carlsson, Ebba January 2023 (has links)
Modelling the full event of an underwater explosion (UNDEX) is complex and requires advanced modelling methods in order to achieve accurate responses. The process of an UNDEX includes a series of events that have to be considered. When a detonation is initiated, a shock wave propagates, and the reaction products from the explosive material create a high-pressure gaseous bubble that pulsates and impacts the surroundings. Reflections of the initial shock wave can also occur if it hits the sea floor, the water surface, or other obstacles. There are different approaches to numerically modelling the impact of an UNDEX on a structure: some are analytical and need no water domain, while others require the water domain to be modelled. This master's thesis focuses on two modelling methods available in the finite element software LS-DYNA. The simpler method, called Sub-Sea Analysis (SSA), does not require a water domain; it can therefore be beneficial in an early design stage, or when only approximate responses are desired. To increase accuracy, a more complex method called S-ALE can be used. With this method, the full process of an UNDEX can be studied, since both the fluid domain and the explosive material are meshed. These methods are studied separately and in combination. Another important consideration is that the oscillations of a structure submerged in water differ from its behaviour in air; depending on the numerical method used, the effect of the water can be included. Natural frequencies of structures submerged in water are studied: how they change and how the methods take this into account. To verify the numerical models, experiments were executed with a cylindrical test object, where the standoff distance and charge weight were altered throughout the test series. It was found that multiple aspects affect the experimental results that are not captured in the numerical models.
These aspects include reflections, how accurately the test object is modelled, and the damping effects of the water. It is concluded that the numerical models are sensitive when small charges and fragile structures are studied. High-frequency oscillations were not triggered in the experiment but appeared for both methods. Whether the methods are more accurate for larger charges and stronger structures should be further investigated. Experiments with a larger water domain would also be beneficial to reduce reflection effects, as would a more accurate model of the cylinder in the simulations.
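For context on the initial shock wave described above, its peak pressure at a standoff distance is commonly estimated with Cole's similitude law, P_max = K1 (W^(1/3)/R)^A1. The sketch below uses nominal TNT constants (K1 ≈ 52.4 MPa, A1 ≈ 1.13, with W in kg and R in m); these values are illustrative and not taken from the thesis:

```python
def undex_peak_pressure(charge_kg, standoff_m, k1=52.4e6, a1=1.13):
    """Cole similitude estimate of UNDEX peak shock pressure in Pa.

    P_max = K1 * (W^(1/3) / R)^A1, with nominal TNT constants by default.
    """
    return k1 * (charge_kg ** (1.0 / 3.0) / standoff_m) ** a1
```

A 1 kg charge at 1 m standoff gives roughly 52 MPa, and the pressure falls off slightly faster than 1/R with distance.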
175

Occupant restraint modeling: Seat belt design

Patlu, Srikanth January 2001 (has links)
No description available.
176

Investigation of a thermomechanical process in a high temperature deformation simulator using an FE software : Using LS-DYNA to create a digital twin of the hot deformation simulator Gleeble-3800 GTC Hydrawedge module.

Tregulov, Farhad January 2024 (has links)
Thermomechanical processes such as hot rolling have long been used in industry to process and shape metals into a desired form with specific properties. However, it can be difficult to vary the process parameters in production. That is where a hot deformation simulator such as the Gleeble 3800-GTC is beneficial: metals can be tested in a controlled environment where deformation, temperature, and other parameters are easily changed. With a Hydrawedge module, the machine can simulate hot rolling using uniaxial compression at high temperatures. Swerim AB has one such machine and requested an investigation of what occurs inside a specimen during testing in the Gleeble, specifically inside two low-alloyed steels with a hardness between 400 and 500 HV. Such tests were replicated using LS-DYNA, an FE software. The goal was to acquire true stress-strain curves showing behaviour similar to the Gleeble data, along with plots of the effective plastic strain that could be correlated to the grain structure pattern inside the deformed cylinders. An FE model replicating the procedure was created and verified in several steps. An initial mesh verification was done, in which the simulations took between 5 and 86 hours. Using a technique called mass scaling, additional mass was added to the model's elements to increase their time step and reduce the computational time; a verification of the mass scaling weighed computational time against accuracy. Afterwards, the friction was verified; it was found that the Gleeble test specimens were deformed more than necessary, which was taken into account when the models were adjusted for the friction verification. In the end, the model had a reasonable friction coefficient with an optimal mesh and mass-scaling configuration.
The resulting model simulated a 0.5 s test in 15 minutes while costing at most 10 MPa in accuracy, where the experimental results have maximum values between 110 and 220 MPa depending on the scenario. This corresponds to an approximate error of 5-10%. When investigating the grain structure after 100 seconds of relaxation, the computational time amounted to 52 hours, but it could be reduced to 12 hours by simulating only 30 seconds, as there was no change in the effective plastic strain after that time. The final model has high enough accuracy that, combined with the Gleeble, it can be used to confirm material models and describe what occurs in the material under conditions akin to hot rolling.
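The mass-scaling step described above rests on a simple relation: the explicit critical time step of an element scales as dt ≈ L/c, with dilatational wave speed c = sqrt(E/rho), so artificially increasing density raises the stable time step. The sketch below is a generic one-dimensional illustration, not the LS-DYNA implementation:

```python
import math

def critical_timestep(length_m, youngs_pa, density_kg_m3):
    """Explicit critical time step dt = L / c, with c = sqrt(E / rho)."""
    wave_speed = math.sqrt(youngs_pa / density_kg_m3)
    return length_m / wave_speed

def mass_scale_factor(length_m, youngs_pa, density_kg_m3, dt_target_s):
    """Density multiplier needed so the element's critical step reaches dt_target.

    From dt = L * sqrt(rho / E): rho_needed = E * (dt_target / L)^2.
    Returns 1.0 when the element already meets the target step.
    """
    rho_needed = youngs_pa * (dt_target_s / length_m) ** 2
    return max(1.0, rho_needed / density_kg_m3)
```

For a 1 mm steel element (E = 210 GPa, rho = 7850 kg/m3) the natural step is about 0.19 microseconds; reaching a 1 microsecond step requires scaling the density by roughly a factor of 27, which is why mass scaling must be weighed against accuracy.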
177

Invloed van denkontwikkeling op die aanleer van Afrikaans as tweede taal by hoërskoolleerders / The influence that thought development has on high school students when learning Afrikaans as a second language

Noke, Daisy Deseré 11 1900 (has links)
Text in Afrikaans / The aim of the research focuses on the relationship between thought development (a cognitive variable) and achievement in Afrikaans as a second language. Thought development is, however, not the only cognitive variable that can influence language performance: verbal comprehension and memory are also identified as important cognitive variables. Affective variables such as motivation, self-concept, and anxiety can also be related to performance in Afrikaans as a second language. Empirical research was conducted among 174 high-school learners, measuring cognitive variables, affective variables, and learning style. The empirical research indicates that self-concept, memory, verbal comprehension, and motivation are the main variables related to performance in Afrikaans as a second language; thought development was not identified as a main factor.
Findings from the literature study and the empirical research are discussed in order to provide parents and teachers with guidelines to improve achievement in Afrikaans as a second language. / Psychology of Education / M. Ed. (Sielkundige Opvoedkunde)
178

Performance, efficiency and complexity in multiple access large-scale MIMO Systems. / Desempenho, eficiência e complexidade de sistemas de comunicação MIMO denso de múltiplo acesso.

Mussi, Alex Miyamoto 08 May 2019 (has links)
Systems with multiple transmitting and receiving antennas in large scale (LS-MIMO, large-scale multiple-input multiple-output) enable high spectral and energy efficiency gains, which increases the data transmission rate in the same band without increasing the transmitted power per user. In addition, increasing the number of antennas at the base station (BS) makes it possible to serve a larger number of users per cell in the same occupied band. Furthermore, the literature shows that the reported advantages of LS-MIMO systems can be obtained with a large number of antennas on at least one side of the link, usually at the BS due to physical constraints in user equipment. However, such advantages have their cost: a large number of antennas also complicates signal-processing tasks such as channel estimation, precoding, and signal detection. It is in this context that this Doctoral Thesis is developed: the performance versus computational complexity trade-off of efficient detection methods in LS-MIMO communication systems is explored through the analysis of algorithms and optimization techniques for specific, still-open problems. More precisely, this Thesis discusses and proposes promising detection techniques for LS-MIMO systems, aiming to improve performance, in terms of error rate, and computational complexity, in terms of the number of mathematical operations. Initially, the problem is introduced through a conventional MIMO system model in which channels with imperfect estimates and correlation between transmitter (Tx) and receiver (Rx) antennas are considered. Preprocessing techniques based on lattice reduction (LR) are applied to linear detectors, in addition to the sphere decoder (SD), for which a lookup-table procedure is proposed in order to reduce computational complexity.
It is shown that applying LR before detection yields a significant performance gain for both uncorrelated and correlated channels; in the latter scenario the improvement is even more remarkable due to the diversity gain provided. On the other hand, the complexity of applying LR in highly correlated scenarios becomes dominant in linear detectors. In the LR-SD with the lookup-table procedure, optimal performance was reached in all scenarios, as expected, at a complexity lower than that of the maximum likelihood (ML) detector, even with maximum correlation between antennas, which is the most demanding scenario for the LR technique. Next, the message passing (MP) detector is investigated, which uses Markov random field (MRF) and factor graph (FG) graphical models. The literature shows that the message damping (MD) method applied to the MRF detector brings a relevant performance gain without increasing computational complexity; however, the damping factor (DF) is specified for only a restricted range of scenarios. Numerical results are extensively generated in order to analyze the behavior of the MRF detector with MD, resulting in the proposition of an optimal DF value based on numerical curve fitting. Finally, for the mixed Gibbs sampling (MGS) detector, two approaches are proposed to reduce the negative impact of the random solution when high modulation orders are employed. The first is based on averaging over multiple samples and is called aMGS (averaged MGS). The second imposes a direct restriction on the range of the random solution, limiting to d the neighborhood of symbols that can be sampled, and is called d-sMGS (d-simplified MGS).
Numerical simulation results show that both approaches converge faster than MGS. In particular, at high system loading, d-sMGS detection showed significant gains in both performance and complexity compared with aMGS and MGS, while at low-to-medium loading the aMGS strategy showed lower complexity with performance marginally similar to the others. Furthermore, it is concluded that increasing the dimensions of the system favors a smaller neighborhood restriction.
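As a baseline for the linear detectors mentioned above, a minimal linear MMSE detector with hard slicing to the nearest constellation point can be sketched as follows; this is a generic textbook formulation, not the thesis's LR-aided or sampling-based variants:

```python
import numpy as np

def mmse_detect(y, H, sigma2, constellation):
    """Linear MMSE MIMO detection followed by per-symbol hard slicing.

    y: (Nr,) received vector, H: (Nr, Nt) channel matrix,
    sigma2: noise variance, constellation: 1-D array of candidate symbols.
    """
    Nt = H.shape[1]
    # MMSE filter: W = (H^H H + sigma2 I)^{-1} H^H
    W = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(Nt), H.conj().T)
    x_soft = W @ y
    # Slice each soft estimate to the nearest constellation symbol
    return np.array([constellation[np.argmin(np.abs(constellation - s))]
                     for s in x_soft])
```

With 4 transmit and 8 receive antennas and low noise, this recovers a QPSK transmit vector exactly; its error-rate gap to ML detection is precisely what LR preprocessing and the MGS-type samplers in the thesis aim to close.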
179

Process Control in High-Noise Environments Using A Limited Number Of Measurements

Barajas, Leandro G. January 2003 (has links)
The topic of this dissertation is the derivation, development, and evaluation of novel hybrid algorithms for process control that use a limited number of measurements and that are suitable for operation in the presence of large amounts of process noise. As an initial step, affine and neural-network statistical process models are developed in order to simulate the steady-state system behavior. Such models are vitally important in the evaluation, testing, and improvement of all other process controllers referred to in this work. Afterwards, fuzzy-logic controller rules are assimilated into a mathematical characterization of a model that includes the modes and mode-transition rules defining a hybrid hierarchical process control. The main processing entity in this framework is a closed-loop control algorithm that performs a global and then a local optimization in order to asymptotically reach minimum bias error, while requiring a minimum number of iterations to promptly reach a desired operational window. The results of this research are applied to yield optimization of surface-mount technology manufacturing lines. This work achieves a practical degree of control over solder-paste volume deposition in the Stencil Printing Process (SPP). Results show that it is possible to change the operating point of the process by modifying certain machine parameters and even to compensate for the difference in deposit height due to a change in print direction.
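The global-then-local search strategy described above can be illustrated with a minimal coarse-to-fine sketch on a scalar parameter; the grid size and window-shrink factor below are arbitrary illustrative choices, not the dissertation's algorithm:

```python
import numpy as np

def global_then_local(f, bounds, coarse=21, refinements=3):
    """Coarse global grid search, then repeated local refinement.

    Each pass evaluates `f` on a grid, keeps the best point, and shrinks
    the search window around it, mimicking a global-then-local optimization.
    """
    lo, hi = bounds
    best = None
    for _ in range(refinements):
        xs = np.linspace(lo, hi, coarse)
        best = xs[np.argmin([f(x) for x in xs])]   # global step on this window
        span = (hi - lo) / 4                       # shrink window around best
        lo, hi = best - span, best + span
    return best
```

Each refinement narrows the operational window around the current best operating point, so few function evaluations (here, few "iterations" of the process) are needed to home in on the optimum.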
180

Ανάπτυξη και υλοποίηση τεχνικών εντοπισμού και παρακολούθησης θέσης κυρίαρχης πηγής από δίκτυα τυχαία διασκορπισμένων αισθητήρων / Development and implementation of dominant source localization and tracking techniques in randomly distributed sensor networks

Αλεξανδρόπουλος, Γεώργιος 16 May 2007 (has links)
The object of this postgraduate thesis is the detection of a dominant wideband isotropic source and the estimation of its position coordinates (localization) when the source lies in a three- or two-dimensional space that is supervised and monitored by a network of randomly distributed sensors. The nodes of the network may contain acoustic, vibrational, and other micro-electro-mechanical (MEM) sensing elements. Upon sensing an event of interest, they can self-organize into a synchronized wireless radio network using low-power spread-spectrum transceivers to communicate among themselves and with central processors. The detection of a dominant source in a sensor network with these characteristics was achieved using a blind beamforming method known as the maximum power collection method. This method, which was implemented as part of this work, provides estimates of the relative time delays of arrival (relative TDEs, time delay estimations) of the dominant source's signal at the sensors of the network with respect to a reference sensor. The main object of study is the calculation of the dominant eigenvector of the sampled correlation matrix. In the literature studied, this is achieved either with the power method or with singular value decomposition (SVD). For each snapshot of samples, the autocorrelation matrix must be updated and the dominant eigenvector recomputed. However, both of these methods incur high complexity because the dimension of the matrix is large. The contribution of this work lies in reducing that complexity by using an adaptive method for computing the dominant eigenvector.
Finally, this work also addresses the problem of localizing and tracking the position coordinates of the dominant source from the estimates of the relative time delays of arrival.
