161

The Development of an Improved Finite Element Muscle Model and the Investigation of the Pre-loading Effects of Active Muscle on the Femur During Frontal Crashes

Mendes, Sebastian B 31 August 2010 (has links)
Mammalian skeletal muscle is a very complicated biological structure to model because of its non-homogeneous, non-linear material properties and its complex geometry. Discrete one-dimensional Hill-based finite elements are widely used to simulate muscles in both passive and active states. One-dimensional elements, however, have several shortcomings: they cannot represent a muscle's physical mass or complex lines of action, and they restrict muscle insertion sites to a limited number of nodes, causing unrealistic loading distributions in the bones. The behavior of various finite element muscle models was investigated and compared against manually calculated muscle behavior, and an improved finite element muscle model consisting of shell elements and Hill-based contractile truss elements in series and parallel was ultimately developed. The muscles of the thigh were then modeled and integrated into an existing 50th-percentile musculo-skeletal model of the knee-thigh-hip complex. Impact simulations representing full frontal car crashes were conducted on the model, and the pre-loading effects of active thigh muscles on the femur were investigated and compared with cadaver sled test data. The active muscles produced a pre-load femoral axial force that acted to slightly stabilize the rate of stress intensification in critical stress areas of the femur, and they directed the distribution of stress toward more concentrated areas of the femoral neck. Furthermore, the pre-load femoral axial force suggests that a higher percentage of injuries to the knee-thigh-hip complex may be due to the effects of active muscles on the femur.
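The Hill-based contractile behaviour underlying such elements can be illustrated with a minimal sketch. The curve shapes and every constant below (F_MAX, the Gaussian width, the hyperbola factor) are generic textbook assumptions, not the thesis's calibrated model:

```python
import numpy as np

# Assumed generic constants (not the thesis's calibrated values):
F_MAX = 1000.0    # maximum isometric force [N]
FV_SHAPE = 4.0    # shape factor of the force-velocity hyperbola

def force_length(l_norm):
    """Gaussian-like active force-length curve, peaking at optimal length."""
    return np.exp(-((l_norm - 1.0) / 0.45) ** 2)

def force_velocity(v_norm):
    """Hill hyperbola for shortening; crude constant plateau for lengthening."""
    return np.where(v_norm >= 0, (1 - v_norm) / (1 + FV_SHAPE * v_norm), 1.3)

def passive_force(l_norm):
    """Exponential passive stretch response, zero below optimal length."""
    return np.where(l_norm > 1.0, 0.05 * (np.exp(5 * (l_norm - 1.0)) - 1.0), 0.0)

def hill_force(activation, l_norm, v_norm):
    """Total fibre force: active part (activation * f_l * f_v) plus passive part."""
    active = activation * force_length(l_norm) * force_velocity(v_norm)
    return F_MAX * (active + passive_force(l_norm))

# A fully activated fibre at optimal length, held isometric (v = 0),
# produces exactly the maximum isometric force:
print(hill_force(1.0, 1.0, 0.0))  # -> 1000.0
```

In a finite element setting, an element of this kind contributes only an axial force along its line of action, which is precisely why shell elements are needed in parallel to carry mass and transverse loading.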
162

Numerical Evaluation of Classification Techniques for Flaw Detection

Vallamsundar, Suriyapriya January 2007 (has links)
Nondestructive testing (NDT) is used extensively throughout industry for quality assessment and detection of defects in engineering materials. The range and variety of anomalies is enormous, and critical assessment of their location and size is often complicated. Depending on final operational considerations, some of these anomalies may be critical, so their detection and classification is important. Despite its advantages for flaw detection, conventional NDT based on heuristic, experience-based pattern identification has drawbacks in cost and time, and can yield erratic analyses and inconsistent results. Applying statistical and soft-computing techniques to the evaluation and classification steps leads to an automatic decision support system for defect characterization that offers the possibility of impartial, standardized performance. The present work evaluates both supervised and unsupervised classification techniques for flaw detection and classification in a semi-infinite half space. Finite element models simulating the MASW test in the presence and absence of voids were developed with the commercial package LS-DYNA; to simulate anomalies, voids of different sizes were inserted into an elastic medium. Features for discriminating the received responses were extracted in the time and frequency domains by applying suitable transformations. The compact feature vector was then classified by different techniques: supervised classification (backpropagation neural network, adaptive neuro-fuzzy inference system, k-nearest neighbor classifier, linear discriminant classifier) and unsupervised classification (fuzzy c-means clustering).
The classification results show that the k-nearest neighbor classifier was superior to the other techniques, with an overall accuracy of 94% in detecting the presence of voids and 81% in determining the size of the void in the medium. Assessing the various classifiers' performance proved valuable in comparing the techniques and establishing the applicability of simplified classification methods such as k-NN to defect characterization. The classification accuracies obtained for the detection and classification of voids are very encouraging, showing the suitability of the proposed approach for developing a decision support system for nondestructive defect characterization.
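The k-NN classification step itself fits in a few lines; the two-dimensional synthetic clusters below merely stand in for the thesis's MASW-derived feature vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_predict(X_train, y_train, X_query, k=3):
    """Majority vote among the k nearest training points (Euclidean distance)."""
    preds = []
    for x in X_query:
        d = np.linalg.norm(X_train - x, axis=1)   # distances to all training points
        nearest = y_train[np.argsort(d)[:k]]      # labels of the k closest
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

# Two synthetic classes: "no void" features around (0, 0), "void" around (3, 3).
X0 = rng.normal(0.0, 0.5, size=(40, 2))
X1 = rng.normal(3.0, 0.5, size=(40, 2))
X_train = np.vstack([X0, X1])
y_train = np.array([0] * 40 + [1] * 40)

queries = np.array([[0.1, -0.2], [2.9, 3.1]])
print(knn_predict(X_train, y_train, queries, k=5))  # -> [0 1]
```

The appeal noted in the abstract is visible here: k-NN needs no training phase beyond storing the feature vectors, which is why it is attractive as a simplified method for defect characterization.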
164

Estimation And Hypothesis Testing In Stochastic Regression

Sazak, Hakan Savas 01 December 2003 (has links) (PDF)
Regression analysis is very popular among researchers in many fields, but almost all of them use classical methods that assume X is nonstochastic and the error is normally distributed. In real-life problems, however, X is generally stochastic and the error can be nonnormal. Maximum likelihood (ML) estimation, known for its optimal properties, is very problematic when the distribution of X (the marginal part) or of the error (the conditional part) is nonnormal. The modified maximum likelihood (MML) technique, which asymptotically yields estimators equivalent to the ML estimators, makes estimation and hypothesis testing feasible under nonnormal marginal and conditional distributions. In this study we show that MML estimators are highly efficient and robust. Moreover, test statistics based on the MML estimators are much more powerful and robust than test statistics based on the least squares (LS) estimators mostly used in the literature. Theoretically, MML estimators are asymptotically minimum variance bound (MVB) estimators, and simulation results show that they are highly efficient even for small sample sizes. In this thesis, the Weibull and Generalized Logistic distributions are used for illustration, and the results given are based on these distributions. As future work, the MML technique can be applied to other types of distributions, and the procedures based on bivariate data can be extended to multivariate data.
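The efficiency loss of LS under nonnormal errors, which motivates MML, can be checked with a small Monte Carlo sketch. The sample size, replication count and t(3) error distribution are illustrative assumptions; the thesis's MML estimators are not implemented here:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, beta = 50, 2000, 2.0

def ls_slope(x, y):
    """Ordinary least-squares slope estimate."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / (xc @ xc)

def slope_variance(error_sampler):
    """Monte Carlo variance of the LS slope under a given error distribution."""
    est = []
    for _ in range(reps):
        x = rng.normal(size=n)            # stochastic regressor, as in the thesis
        y = beta * x + error_sampler(n)
        est.append(ls_slope(x, y))
    return np.var(est)

var_normal = slope_variance(lambda m: rng.normal(size=m))
var_heavy = slope_variance(lambda m: rng.standard_t(df=3, size=m))
print(var_heavy > var_normal)  # heavy-tailed errors inflate the LS variance -> True
```

Under normal errors LS is efficient, but the heavy-tailed t(3) errors inflate its variance roughly threefold here, which is the gap that robust estimators such as MML aim to close.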
165

Analysis, Diagnosis and Design for System-level Signal and Power Integrity in Chip-package-systems

Ambasana, Nikita January 2017 (has links) (PDF)
The Internet of Things (IoT) has ushered in an age where low-power sensors generate data that are communicated to a back-end cloud for massive data computation tasks. From the hardware perspective this implies the co-existence of several power-efficient, communication-capable sub-systems working harmoniously at the sensor nodes and high-speed processors in the cloud back-end. Package-board system-level design plays a crucial role in determining the performance of such low-power sensors and high-speed computing and communication systems. Although several commercial solutions exist for electromagnetic and circuit analysis and verification, problem-diagnosis and design tools are lacking, leading to longer design cycles and non-optimal system designs. This work develops methodologies for faster analysis, sensitivity-based diagnosis and multi-objective design for the signal integrity and power integrity of such package-board system layouts. The first part of this work aims at enabling faster and more exhaustive design-space analysis. Electromagnetic analysis of packages and boards can be performed in the time domain, yielding metrics like eye height/width, or in the frequency domain, yielding metrics like s-parameters and z-parameters. Generating eye height/width at the bit error rates of interest requires long bit sequences in time-domain circuit simulation, which is compute-intensive. This work explores learning-based modelling techniques that rapidly map relevant frequency-domain metrics, such as differential insertion loss and crosstalk, to eye height/width, thereby facilitating a full-factorial design-space sweep. Numerical experiments with an artificial neural network as well as a least-squares support vector machine on SATA 3.0 and PCIe Gen 3 interfaces show less than 2% average error with an order-of-magnitude speed-up in eye height/width computation.
Accurate power distribution network design is crucial for low-power sensors as well as for cloud server boards that require multiple supply levels. Achieving target power-ground noise levels for complex low-power power distribution networks requires several design and analysis cycles. Although various classes of analysis tools, 2.5D and 3D, are commercially available, design tools remain limited. In the second part of the thesis, a frequency-domain mesh-based sensitivity formulation for DC and AC impedance (z-parameters) is proposed. This formulation enables diagnosis of a layout for the changes with maximum impact on achieving target specifications. The sensitivity information is also used for linear approximation of impedance-profile updates under small mesh variations, enabling faster analysis. To support the design of power delivery networks toward a target impedance, a mesh-based decoupling-capacitor sensitivity formulation is presented. This analytical gradient is used in gradient-based optimization to obtain an optimal set of decoupling capacitors, with appropriate values and placement in the package/board, for a given target impedance profile. Gradient-based techniques are far less expensive than the state-of-the-art evolutionary optimization techniques presently used for decoupling-capacitor network design. In the last part of this work, the functional similarities between package-board design and radio-frequency imaging are explored. Qualitative inverse-solution methods common in the radio-frequency imaging community, such as Tikhonov regularization and Landweber iteration, are applied to solve multi-objective, multi-variable signal-integrity package design problems. Consequently, a novel Hierarchical Search Linear Back Projection algorithm is developed for an efficient solution in the design space using piecewise-linear approximations. The presented algorithm is demonstrated to converge to the desired signal-integrity specifications with a minimum of full-wave 3D solve iterations.
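The surrogate-modelling workflow in the first part can be sketched with synthetic data. A plain linear least-squares fit stands in for the thesis's ANN and LS-SVM models, and the feature ranges and coefficients below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "channel" data: eye height shrinks with insertion loss and crosstalk.
# The coefficients 0.8 / -0.04 / -0.03 are made up for this sketch.
n = 200
loss = rng.uniform(2.0, 10.0, n)    # |differential insertion loss| in dB (assumed)
xtalk = rng.uniform(0.0, 5.0, n)    # crosstalk metric (assumed)
eye = 0.8 - 0.04 * loss - 0.03 * xtalk + rng.normal(0, 0.005, n)

# Fit the surrogate: eye ~ b0 + b1*loss + b2*xtalk.
A = np.column_stack([np.ones(n), loss, xtalk])
coef, *_ = np.linalg.lstsq(A, eye, rcond=None)

def predict_eye(loss_db, xtalk_val):
    """Fast surrogate prediction replacing a long time-domain simulation."""
    return coef[0] + coef[1] * loss_db + coef[2] * xtalk_val

print(round(float(predict_eye(5.0, 1.0)), 3))
```

Once trained, the surrogate evaluates a candidate design point with one dot product instead of a long bit-sequence simulation, which is what makes the full-factorial design-space sweep affordable.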
166

LA MITIGAZIONE NELLA PROSA SCIENTIFICO-ACCADEMICA ITALIANA E NELLA PROSPETTIVA DELL'INSEGNAMENTO DELL'ITALIANO LS. / Mitigation in Italian scientific-academic prose and from the perspective of teaching Italian as a foreign language

GIORDANO, CARLO 04 April 2018 (has links)
This research, situated within pragmatics, textual linguistics and applied linguistics, analyses mitigation in a corpus of 25 Italian research articles, with the aim of better understanding this pragmatic phenomenon in its context of action. It addresses questions concerning the forms, functions and textual domains of mitigation. To do so, an integrated pragmatic approach was developed, derived from Caffi's tripartite model (2007), based on the notion of scope (the propositional content, illocutionary dimension and deictic origin of the utterance), and from the long functionalist tradition of studies. The first results of this original investigation, both qualitative and quantitative, are presented. Finally, the research examines possible implications for teaching Italian as a foreign language in academic settings, offering those involved in such teaching, whether as researchers or instructors, some initial conclusions, tools and resources that can be used directly to design courses aimed at developing textual and genre awareness and skills such as academic writing: communicative, academic and transferable skills that are crucial for any university student who intends to complete their studies successfully.
167

基於最小一乘法的室外WiFi匹配定位之研究 / Study on Outdoor WiFi Matching Positioning Based on Least Absolute Deviation

林子添 Unknown Date (has links)
As WiFi coverage in urban areas has become pervasive, positioning methods based on WiFi signal strength have developed rapidly. WiFi matching positioning collects the coordinates of reference points together with their received signal strength indicator (RSSI) values, estimates the parameters of an RSSI model by least squares (LS), and then uses those parameters with the signal strengths observed at the user's location to estimate the user's position. WiFi signal strength, however, is easily degraded by environmental factors such as rainfall, building occlusion and crowd movement, and positioning with degraded signals shifts the computed position away from the true one. To reduce the positioning error caused by corrupted signal strengths, this study combines the robust least absolute deviation (LAD) method with WiFi matching positioning, aiming to overcome the sensitivity of WiFi signals to the environment and obtain more accurate results. Simulated data were first used to test the performance of LAD WiFi matching positioning under different gross-error conditions; matching positioning was then run on real WiFi signals, and the results of the LAD and LS approaches were compared to examine the characteristics of the two methods.
In the simulations, LAD WiFi matching positioning was more robust than the LS version when the AP signal strengths received at either the reference points or the check points contained gross errors, and it showed good outlier-detection ability for gross errors in the reference-point AP signals. With real data, LAD likewise proved more robust than LS: outdoors, the accuracy (RMSE) was 8.46 m for LAD versus 8.57 m for LS; indoors, it was 2.20 m for LAD versus 2.41 m for LS.
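The LS-versus-LAD robustness contrast can be sketched on a toy regression with one gross outlier standing in for a corrupted RSSI reading. LAD is computed here by iteratively reweighted least squares, an assumed algorithm choice rather than the thesis's implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 30)
y = 1.0 + 2.0 * x + rng.normal(0, 0.1, x.size)  # true line: intercept 1, slope 2
y[5] += 40.0                                    # one gross outlier (RSSI-style blunder)

A = np.column_stack([np.ones_like(x), x])

def fit_ls(A, y):
    """Ordinary least-squares fit."""
    return np.linalg.lstsq(A, y, rcond=None)[0]

def fit_lad(A, y, iters=100, eps=1e-4):
    """LAD via iteratively reweighted LS: weights 1/|residual| damp large errors."""
    beta = fit_ls(A, y)
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - A @ beta), eps)
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)[0]
    return beta

b_ls, b_lad = fit_ls(A, y), fit_lad(A, y)
# The LAD slope stays near the true value 2.0; the LS slope is dragged off by
# the single outlier.
print(abs(b_lad[1] - 2.0) < abs(b_ls[1] - 2.0))  # -> True
```

This is the mechanism behind the reported RMSE gap: LS minimizes squared residuals, so one corrupted signal dominates the fit, while the absolute-value criterion bounds each observation's influence.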
168

Estudio de la integración de procedimientos multivariantes para la regulación óptima y monitorización estadística de procesos

Barceló Cerdá, Susana 04 May 2016 (has links)
Statistical Process Control (SPC) and Automatic Process Control (APC) are two control philosophies that until recently evolved independently. The overall objective of both is to optimize process performance by reducing the variability of the resulting characteristics around their target values. Both disciplines start from the idea that every process exhibits variation in its operation, and these variations can affect, to a greater or lesser extent, final product quality and process productivity. The two methodologies conceptualize processes and their control in different ways, originated in different industrial sectors and evolved independently, until it became clear that they could be complementary rather than conflicting, as they had been understood until then, and the possibility of combining their advantages in a new control paradigm was explored. This thesis studies the integration of multivariate procedures for the optimal regulation and statistical monitoring of processes, with the aim of improving process quality and productivity; for illustration, the proposed methodology is applied to a MIMO process for the continuous production of high-density polyethylene (HDPE).
First, the problem of identifying and estimating a process model is considered. The controlled variables in this model are the main product quality characteristic and a productivity variable; the latter is innovative, since productivity variables are measured but not usually treated as controlled variables. Two multivariate time-series methods are used: the Box-Jenkins multiple transfer function in parsimonious form, and the impulse response function obtained by partial least squares regression (Time Series-Partial Least Squares, TS-PLS). The two methods were compared on aspects such as the simplicity of the modeling process in the identification, estimation and validation stages, the usefulness of the graphical tools each methodology provides, the goodness of fit obtained, and the simplicity of the mathematical structure of the model.
The DMC (Dynamic Matrix Control) controller, an automatic control algorithm belonging to the family of MPC (Model Predictive Control) controllers, is then derived from the estimated Box-Jenkins multiple transfer function, which was selected as the most suitable for this kind of process. An optimal tuning method that maximizes controller performance, applying 2^(k-p) experimental designs, is presented. Finally, an integrated MESPC (Multivariate Engineering Statistical Process Control) system was developed, whose monitoring component applies latent-structures-based multivariate statistical process control (Lsb-MSPC) methods and is designed to supervise both the process and the DMC controller. To this end, a NOC-PCA model (Normal Operation Conditions Principal Component Analysis) was estimated from both process-related and quality-related variables, all derived from the automatic control system, and T² and DModX charts were derived from it. The performance of the MESPC system was assessed by subjecting it to simulated potential failures and special causes of variability. / Barceló Cerdá, S. (2016). Estudio de la integración de procedimientos multivariantes para la regulación óptima y monitorización estadística de procesos [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/63442 / TESIS
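The NOC-PCA monitoring idea can be sketched generically: fit a principal component model on normal-operation data and flag observations whose squared prediction error (a DModX-style statistic) exceeds an empirical limit. All dimensions, data and thresholds below are illustrative assumptions, not the thesis's actual model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Normal-operation data: 5 correlated variables driven by 2 latent factors.
n, p, k = 300, 5, 2
scores = rng.normal(size=(n, k))
loadings_true = rng.normal(size=(k, p))
X = scores @ loadings_true + 0.05 * rng.normal(size=(n, p))

# Autoscale and fit a k-component PCA model via SVD.
mu, sd = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sd
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)
P = Vt[:k].T                                  # NOC model loadings (p x k)

def spe(x_new):
    """Squared prediction error of one observation under the NOC-PCA model."""
    xs = (x_new - mu) / sd
    resid = xs - (xs @ P) @ P.T               # part the model cannot explain
    return float(resid @ resid)

limit = np.quantile([spe(x) for x in X], 0.99)  # empirical 99% control limit

faulty = X[0].copy()
faulty[2] += 8.0                              # simulated sensor fault
print(spe(faulty) > limit)  # -> True
```

A fault that breaks the normal correlation structure lands largely in the residual subspace, so the SPE statistic detects it even when each individual variable still looks plausible, which is the point of supervising the process and the controller jointly.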
169

Odhad kanálu v OFDM systémech pomocí deep learning metod / Utilization of deep learning for channel estimation in OFDM systems

Hubík, Daniel January 2019 (has links)
This paper describes a wireless communication model based on IEEE 802.11n. Typical methods for channel estimation and equalisation are described, such as the least squares (LS) method and the minimum mean square error (MMSE) method, and equalisation based on deep learning was used as well. Coded and uncoded bit error rate was used as the performance metric. Experiments with the topology of the neural network were performed. The programming languages MATLAB and Python were used in this work.
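The least squares channel estimate can be sketched for a single OFDM symbol: with known pilot symbols X on the pilot subcarriers, the LS estimate is simply H_ls = Y / X, interpolated to the remaining subcarriers. The 64-subcarrier grid, pilot spacing and 4-tap channel below are illustrative assumptions rather than the 802.11n parameters used in the work:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 64                                    # subcarriers (assumed)
h = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(8)  # 4-tap channel
H = np.fft.fft(h, N)                      # true channel frequency response

# BPSK pilots on every 4th subcarrier, plus the last one for interpolation.
pilot_idx = np.append(np.arange(0, N, 4), N - 1)
X = np.ones(N, dtype=complex)             # data subcarriers left as placeholders
X[pilot_idx] = rng.choice([-1.0, 1.0], size=pilot_idx.size)

snr_lin = 10 ** (30 / 10)                 # 30 dB SNR (assumed)
noise = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2 * snr_lin)
Y = H * X + noise                         # received symbol

H_ls = Y[pilot_idx] / X[pilot_idx]        # LS estimate at the pilot positions

# Linearly interpolate real and imaginary parts to all subcarriers.
H_hat = np.interp(np.arange(N), pilot_idx, H_ls.real) \
      + 1j * np.interp(np.arange(N), pilot_idx, H_ls.imag)
mse = np.mean(np.abs(H_hat - H) ** 2)
print(mse < 0.05)  # small estimation error at 30 dB SNR -> True
```

LS ignores the noise statistics, which is exactly the gap MMSE estimation and the learned equaliser in the thesis try to close at low SNR.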
170

Deformačně-napěťová analýza tenkostěnné skříně vystavené rázovému zatížení od výbuchu / Stress-strain analysis of the thin wall structure subjected to impact load

Tatalák, Adam January 2016 (has links)
This master's thesis deals with the stress-strain analysis of a simplified model of a thin-walled transformer case subjected to the impact load of an electrical blast, which is replaced by a chemical blast (detonation of a high explosive). The problem is solved by computational modeling using the Finite Element Method (FEM) and the LS-DYNA solver. After an introduction explaining detonation and shock-wave propagation, an analytical approach is presented, which serves for verification of the results. The next chapter surveys applicable methods, from which the ALE (Arbitrary Lagrangian-Eulerian) method is chosen. A preliminary study performs a mesh-size analysis focused on finding the element size that is both computationally efficient and accurate. Next, the influence of the input conditions (shape, location and parameters of the high explosive, location of the detonation point, boundary conditions) on the distribution and time history of pressure is investigated. Then the influence of an opening on the upper side of the case on the overall pressure redistribution and on the strain and stress of the case is analysed. A stress-strain analysis of the case door, connected to the case by various contact models, is performed, as well as a stiffness analysis of these contact types.
