141

MVAPICH2-AutoTune: An Automatic Collective Tuning Framework for the MVAPICH2 MPI Library

Srivastava, Siddhartha January 2021 (has links)
No description available.
142

Experimental Validation of the Generalized Harvey-Shack Surface Scatter Theory

Nattinger, Kevin T. 10 September 2018 (has links)
No description available.
143

Investigating "Lithic Scatter" Variability: Space, Time, and Form

Manning, Kate M 07 May 2016 (has links)
Using flake dimensions and attributes commonly agreed to be associated with site use, occupation age, and occupation duration, it was argued that relative estimates of site function and occupation age could be determined from debitage. This is particularly beneficial for assemblages that contain few or no diagnostic artifacts that could assign a general cultural period to one or more occupations at a site. The results of this study suggest that, although certain attributes are generally associated with lithic production stage, relative age, and occupation duration, not all of them were applicable within this study. The methods employed were relatively successful; however, refinements are needed, including reducing the number of classes, removing one dimension, and adding more sites that meet the definition of a lithic scatter. Furthermore, testing occupation duration using the number of breaks on a flake is not possible unless the site is proven to be a single-occupation site.
144

Narrow Pretraining of Deep Neural Networks : Exploring Autoencoder Pretraining for Anomaly Detection on Limited Datasets in Non-Natural Image Domains

Eriksson, Matilda, Johansson, Astrid January 2022 (has links)
Anomaly detection is the process of detecting samples in a dataset that are atypical or abnormal. It can, for example, be of great use in an industrial setting, where faults in manufactured products need to be detected at an early stage. In this setting, the available image data might come from non-natural domains, such as the depth domain, where the amount of data available is often limited. This thesis investigates whether a convolutional neural network (CNN) can be trained to perform anomaly detection well on limited datasets in non-natural image domains. The attempted approach is to train the CNN as an autoencoder, in which the CNN is the encoder network. The encoder is then extracted and used as a feature extractor for the anomaly detection task, which is performed using Semantic Pyramid Anomaly Detection (SPADE). The results are then evaluated and analyzed. Two autoencoder models were used in this approach. As the encoder network, one model uses a MobileNetV3-Small network pretrained on ImageNet, while the other uses a more basic network that is a few layers deep and initialized with random weights. Both networks were trained as regular convolutional autoencoders as well as variational autoencoders. The results were compared against a MobileNetV3-Small network that had been pretrained on ImageNet but not trained as an autoencoder. The models were tested on six datasets, all of which contained images from the depth and intensity domains. Three of these datasets additionally contained images from the scatter domain, and for these datasets, the combination of all three domains was tested as well. The main focus, however, was on performance in the depth domain. The results show a general improvement when training the more complex autoencoder on the depth domain. Furthermore, the basic network generally obtains results equivalent to the more complex network, suggesting that complexity is not necessarily an advantage for this approach. Across the different domains, there is no apparent pattern as to which domain yields the best performance; this rather seems to depend on the dataset. Lastly, training the networks as variational autoencoders generally did not improve performance in the depth domain compared to the regular autoencoders. In summary, improved anomaly detection was obtained in the depth domain, but for optimal anomaly detection with regard to domain and network, one must consider the individual datasets. / The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
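The approach described above (pretrain a convolutional autoencoder, keep only the encoder, and score anomalies by nearest-neighbor distance in feature space) can be sketched compactly. Below is a minimal illustration assuming PyTorch; the architecture and hyperparameters are placeholders rather than those of the thesis, and the plain k-NN scoring stands in for only the image-level part of SPADE.

```python
# Minimal sketch (PyTorch assumed): pretrain a small conv autoencoder,
# then reuse its encoder as a feature extractor for k-NN anomaly scoring.
# Architecture and hyperparameters are illustrative, not the thesis's.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(               # 1x64x64 -> 32x8x8
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(               # 32x8x8 -> 1x64x64
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def pretrain(model, loader, epochs=10, lr=1e-3):
    """Standard reconstruction pretraining on anomaly-free images."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, _ in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), x)
            loss.backward()
            opt.step()

@torch.no_grad()
def anomaly_scores(model, train_x, test_x, k=5):
    """Score test images by mean k-NN distance to normal-image features."""
    feats = lambda x: model.encoder(x).flatten(1)    # (N, D) feature vectors
    bank, queries = feats(train_x), feats(test_x)
    d = torch.cdist(queries, bank)                   # pairwise distances
    return d.topk(k, largest=False).values.mean(1)   # high score = anomalous
```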
145

Laser Levitation of Solid Particles for Combustion and Gasification Applications

Lewis, Skigh E. 20 March 2009 (has links) (PDF)
This dissertation details theoretical and experimental work in the development of a novel combustion diagnostic: laser levitation of solid particles. Theoretical analyses of the forces involved in the suspension of solid particles in a laser beam provide a comprehensive description of the levitation mechanism. Experimental work provides extensive observations and data that describe each of the forces involved, including results from detailed models. Theoretical models establish that a free-convective drag force, light scattering, photon momentum, and other minor forces contribute to the trapping mechanism. The theory quantitatively predicts particle temperature and the magnitudes of each of the forces involved. Experimental measurements contain significant scatter, primarily due to the difficulty of making measurements on these very small particles. However, the best-estimate trends of the measurements agree well with the predicted behavior despite the scatter. Computational fluid dynamics (CFD) predictions of the free-convective drag force qualitatively agree with published experimental values. The technique represents a tool for studying combustion and gasification of single, micron-sized, solid particles. Biomass fuels and coal (among many others) provide experimental demonstration of particle suspension. The system suspends particles near the focal point of a visible-light laser, allowing continuous monitoring of their size, shape, temperature, and possibly mass. The Particle Levitation Model (PLM) establishes the trapping mechanism using data from three submodels: an energy balance, a drag force model, and a photon force model. Biomass fuels provide experimental demonstrations of particle levitation under a variety of conditions that illustrate each of the primary levitation mechanisms. Several different trapping techniques provide single-particle data in the literature, including optical tweezers and electrodynamic levitation. However, optical levitation of opaque particles is a relatively new technique and, although less well understood, provides a potentially powerful novel diagnostic technique for single-particle combustion investigations. The diagnostic consists of a solid-state laser, a high-speed color camera, an infrared camera, and a variety of optics. All experimental data are obtained optically, including particle dynamics, size and shape, and particle temperature. Thus, this technique enables the in situ investigation of micron-sized, solid particles under conditions similar to commercial combustion and gasification processes.
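As an illustration of the kind of force-balance estimate such a model involves, here is a back-of-the-envelope sketch in Python that compares the photon-momentum force on a fully absorbing particle with its weight. The beam and particle numbers are made-up assumptions, not values from the dissertation, and the full Particle Levitation Model additionally accounts for free-convective drag and light scattering.

```python
# Back-of-the-envelope sketch of the photon-momentum force versus gravity
# for a small absorbing particle near a laser focus. All numbers are
# illustrative assumptions, not values from the dissertation.
import math

c = 2.998e8          # speed of light, m/s
g = 9.81             # gravitational acceleration, m/s^2

def photon_force(beam_power, spot_radius, particle_radius, Q=1.0):
    """Radiation-pressure force F = Q * I * A / c on a particle that
    intercepts intensity I over its cross-section A (Q=1: full absorption)."""
    intensity = beam_power / (math.pi * spot_radius**2)    # W/m^2
    area = math.pi * particle_radius**2                     # m^2
    return Q * intensity * area / c                         # N

def weight(particle_radius, density):
    """Gravitational force on a solid sphere."""
    volume = (4.0 / 3.0) * math.pi * particle_radius**3
    return density * volume * g

r = 5e-6                                   # 10-um-diameter particle
F_ph = photon_force(beam_power=2.0, spot_radius=20e-6, particle_radius=r)
F_g = weight(r, density=1300.0)            # roughly biomass-like density
print(f"photon force {F_ph:.2e} N vs weight {F_g:.2e} N "
      f"(ratio {F_ph / F_g:.1f})")
```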
146

Parameter estimation in a cardiovascular computational model using numerical optimization : Patient simulation, searching for a digital twin

Tuccio, Giulia January 2022 (has links)
Developing models of the cardiovascular system that simulate the dynamic behavior of a virtual patient's condition is fundamental in the medical domain for outcome prediction and hypothesis generation. These models are usually described through ordinary differential equations (ODEs). To obtain a patient-specific representative model, it is crucial to have an accurate and rapid estimate of the hemodynamic model parameters. Moreover, when adequate model parameters are found, the resulting time series of state variables can be used clinically for predicting the response to treatments and for non-invasive monitoring. In this thesis, we address parameter estimation, or inverse modeling, by solving an optimization problem that minimizes the error between the model output and the target data. In our case, the target data are a set of user-defined state variables, descriptive of a specific hospitalized patient and obtained from time-averaged state variables. The thesis compares both state-of-the-art and novel methods for estimating the underlying model parameters of the cardiovascular simulator Aplysia. All the proposed algorithms are selected and implemented considering the constraints deriving from the interaction with Aplysia. In particular, given the inaccessibility of the ODEs, we selected gradient-free methods, which do not need to estimate derivatives numerically. Furthermore, we aim for a small number of iterations and objective-function calls, since these strongly affect the speed of the estimation procedure, and thus the applicability of the knowledge gained through the parameters at the bedside. The thesis also addresses the most common problems encountered in inverse modeling, among which are the non-convexity of the objective function and the identifiability problem. To help resolve the latter, an identifiability analysis is proposed, after which the unidentifiable parameters are excluded. The selected methods are validated using heart-failure data representative of different pathologies commonly encountered in Intensive Care Unit (ICU) patients. The results show that the gradient-free global algorithms Enhanced Scatter Search and Particle Swarm estimate the parameters accurately at the price of a high number of function evaluations and long CPU times; as such, they are not suitable for bedside applications. The local algorithms, in turn, are not suitable for finding an accurate solution given their dependency on the initial guess. To solve this problem, we propose two methods: a hybrid algorithm and a prior-knowledge algorithm. By including prior domain knowledge, these methods can find a good solution, escaping the basin of attraction of local minima and producing clinically significant parameters in a few minutes.
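To make the setup concrete, the following is a minimal sketch of gradient-free parameter estimation in Python with SciPy. A toy two-state ODE stands in for the Aplysia simulator (which is not publicly accessible), its time-averaged states play the role of the patient targets, and Nelder-Mead represents the class of derivative-free local methods compared in the thesis; every name and number here is an illustrative assumption.

```python
# Hedged sketch of gradient-free parameter estimation: fit parameters of a
# stand-in two-state ODE model to time-averaged target variables using
# Nelder-Mead (derivative-free). "model" is a toy surrogate, not Aplysia.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def model(params, t_end=10.0):
    """Toy 2-state linear exchange ODE; returns time-averaged states."""
    k1, k2 = params
    def rhs(t, y):
        return [-k1 * y[0] + k2 * y[1], k1 * y[0] - k2 * y[1]]
    sol = solve_ivp(rhs, (0.0, t_end), [1.0, 0.0],
                    t_eval=np.linspace(0.0, t_end, 200))
    return sol.y.mean(axis=1)                     # time-averaged states

target = np.array([0.4, 0.6])                     # "patient" targets (made up)

def objective(params):
    """Squared error between model output and target data."""
    return np.sum((model(params) - target) ** 2)

res = minimize(objective, x0=[1.0, 1.0], method="Nelder-Mead",
               options={"xatol": 1e-4, "fatol": 1e-8})
print("estimated parameters:", res.x, "objective:", res.fun)
```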
147

Highly Robust and Efficient Estimators of Multivariate Location and Covariance with Applications to Array Processing and Financial Portfolio Optimization

Fishbone, Justin Adam 21 December 2021 (has links)
Throughout stochastic data processing fields, mean and covariance matrices are commonly employed for purposes such as standardizing multivariate data through decorrelation. For practical applications, these matrices are usually estimated, and often, the data used for these estimates are non-Gaussian or may be corrupted by outliers or impulsive noise. To address this, robust estimators should be employed. However, in signal processing, where complex-valued data are common, the robust estimation techniques currently employed, such as M-estimators, provide limited robustness in the multivariate case. For this reason, this dissertation extends, to the complex-valued domain, the high-breakdown-point class of multivariate estimators called S-estimators. This dissertation defines S-estimators in the complex-valued context, and it defines their properties for complex-valued data. One major shortcoming of the leading high-breakdown-point multivariate estimators, such as the Rocke S-estimator and the smoothed hard rejection MM-estimator, is that they lack statistical efficiency at non-Gaussian distributions, which are common with real-world applications. This dissertation proposes a new tunable S-estimator, termed the Sq-estimator, for the general class of elliptically symmetric distributions—a class containing many common families such as the multivariate Gaussian, K-, W-, t-, Cauchy, Laplace, hyperbolic, variance gamma, and normal inverse Gaussian distributions. This dissertation demonstrates the diverse applicability and performance benefits of the Sq-estimator through theoretical analysis, empirical simulation, and the processing of real-world data. Through analytical and empirical means, the Sq-estimator is shown to generally provide higher maximum efficiency than the leading maximum-breakdown estimators, and it is also shown to generally be more stable with respect to initial conditions. To illustrate the theoretical benefits of the Sq-estimator for complex-valued applications, the efficiencies and influence functions of adaptive minimum variance distortionless response (MVDR) beamformers based on S- and M-estimators are compared. To illustrate the finite-sample performance benefits of the Sq-estimator, empirical simulation results of multiple signal classification (MUSIC) direction-of-arrival estimation are explored. Additionally, the optimal investment of real-world stock data is used to show the practical performance benefits of the Sq-estimator with respect to robustness to extreme events, estimation efficiency, and prediction performance. / Doctor of Philosophy / Throughout stochastic processing fields, mean and covariance matrices are commonly employed for purposes such as standardizing multivariate data through decorrelation. For practical applications, these matrices are usually estimated, and often, the data used for these estimates are non-normal or may be corrupted by outliers or large sporadic noise. To address this, estimators should be employed that are robust to these conditions. However, in signal processing, where complex-valued data are common, the robust estimation techniques currently employed provide limited robustness in the multivariate case. For this reason, this dissertation extends, to the complex-valued domain, the highly robust class of multivariate estimators called S-estimators. This dissertation defines S-estimators in the complex-valued context, and it defines their properties for complex-valued data.
One major shortcoming of the leading highly robust multivariate estimators is that they may require unreasonably large numbers of samples (i.e. they may have low statistical efficiency) in order to provide good estimates at non-normal distributions, which are common with real-world applications. This dissertation proposes a new tunable S-estimator, termed the Sq-estimator, for the general class of elliptically symmetric distributions—a class containing many common families such as the multivariate Gaussian, K-, W-, t-, Cauchy, Laplace, hyperbolic, variance gamma, and normal inverse Gaussian distributions. This dissertation demonstrates the diverse applicability and performance benefits of the Sq-estimator through theoretical analysis, empirical simulation, and the processing of real-world data. Through analytical and empirical means, the Sq-estimator is shown to generally provide higher maximum efficiency than the leading highly robust estimators, and its solutions are also shown to generally be less sensitive to initial conditions. To illustrate the theoretical benefits of the Sq-estimator for complex-valued applications, the statistical efficiencies and robustness of adaptive beamformers based on various estimators are compared. To illustrate the finite-sample performance benefits of the Sq-estimator, empirical simulation results of signal direction-of-arrival estimation are explored. Additionally, the optimal investment of real-world stock data is used to show the practical performance benefits of the Sq-estimator with respect to robustness to extreme events, estimation efficiency, and prediction performance.
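For orientation, the sketch below implements one classical member of the robust-scatter family that the dissertation builds on: Tyler's M-estimator, computed by fixed-point iteration on real-valued data. It is not the Sq-estimator proposed here; it is only a compact baseline showing how a robust estimator downweights outlying samples, and the synthetic data and parameters are assumptions.

```python
# Minimal sketch of a classical robust scatter estimator (Tyler's
# M-estimator, fixed-point iteration) for centered real-valued data.
# This is NOT the Sq-estimator from the dissertation, just a baseline.
import numpy as np

def tyler_scatter(X, n_iter=100, tol=1e-8):
    """Tyler's distribution-free M-estimator of scatter.
    X: (n, p) array of centered samples. Returns a (p, p) shape matrix
    normalized to trace p (Tyler's estimator is defined up to scale)."""
    n, p = X.shape
    sigma = np.eye(p)
    for _ in range(n_iter):
        inv = np.linalg.inv(sigma)
        # per-sample quadratic forms x_i^T Sigma^{-1} x_i
        w = np.einsum("ij,jk,ik->i", X, inv, X)
        new = (p / n) * (X.T * (1.0 / w)) @ X    # downweight outlying x_i
        new *= p / np.trace(new)                  # fix the scale convention
        if np.linalg.norm(new - sigma, "fro") < tol:
            return new
        sigma = new
    return sigma

rng = np.random.default_rng(0)
clean = rng.multivariate_normal([0, 0], [[2.0, 0.8], [0.8, 1.0]], size=500)
outliers = rng.normal(0.0, 30.0, size=(25, 2))    # 5% gross corruption
X = np.vstack([clean, outliers])
print("sample covariance:\n", np.cov(X.T))        # blown up by outliers
print("Tyler scatter (trace-normalized):\n", tyler_scatter(X))
```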
148

Measurements of Mean Corpuscular Volume and Hemoglobin using Optical Scatter Data from Flow Cytometry

Gustavsson, You January 2024 (has links)
Complete blood count (CBC) analysis, often provided by an automated hematology analyzer, is a fundamental diagnostic test for evaluating a patient's overall health status and a tool for diagnosing various medical conditions. CBC provides insight into the composition of the blood with parameters such as Mean Corpuscular Volume (MCV) and Hemoglobin (HGB). MCV represents the average size and volume of red blood cells, while HGB indicates the oxygen-carrying capacity of the blood. Different technologies, such as the impedance method and spectrophotometry, are used in hematology analyzers to achieve precise measurements of MCV and HGB. However, exploring other methodologies is of interest to potentially reduce instrument complexity and cost. Flow cytometry, based on light scatter, provides detailed information on the characteristics of individual cells and is commonly used in CBC analysis to differentiate white blood cells and reticulocytes. While the potential of this method for investigating MCV and HGB levels is well established, it is of significant interest to determine whether the measurement techniques can be streamlined from three to one by solely using flow cytometry in this prototype analyzer. In this thesis, the feasibility of measuring MCV and HGB with a flow cytometry system, based on optical scatter data from a prototype hematology analyzer, has been examined. The need to sphere the red blood cells before the measurements has also been investigated in order to evaluate reagent needs. The results have been evaluated based on the correlation factor, accuracy, and precision of the proposed optical method. It is shown that the optical method in this thesis can be used to measure MCV and HGB; however, the necessity of sphering the cells remains. Furthermore, a comparison is made between the optical method and the Sysmex XN-1000 to evaluate the accuracy of the obtained values. Finally, possible improvements and future work are suggested.
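The evaluation described above reduces to a calibration-and-correlation exercise: regress per-sample scatter features onto reference values and report the correlation factor. A minimal sketch follows, assuming Python with NumPy, scikit-learn, and SciPy; the feature names, the synthetic data, and the linear model are illustrative assumptions rather than the thesis's actual pipeline.

```python
# Hedged sketch of the calibration/evaluation step: map per-sample optical
# scatter features (e.g. mean forward and side scatter of the RBC
# population) to MCV and HGB via regression against a reference analyzer,
# then report the correlation factor. Features and data are made up.
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 120                                             # number of blood samples
mean_fsc = rng.normal(80.0, 8.0, n)                 # forward scatter ~ size
mean_ssc = rng.normal(45.0, 5.0, n)                 # side scatter ~ contents
# Synthetic "reference" values loosely tied to the features, mimicking the
# idea that scatter at two angles encodes volume and hemoglobin content.
mcv_ref = 0.9 * mean_fsc + rng.normal(0.0, 2.0, n) + 18.0            # fL
hgb_ref = 0.2 * mean_fsc + 1.5 * mean_ssc + rng.normal(0.0, 3.0, n)  # g/L

X = np.column_stack([mean_fsc, mean_ssc])
for name, y in [("MCV", mcv_ref), ("HGB", hgb_ref)]:
    fit = LinearRegression().fit(X, y)
    pred = fit.predict(X)
    r, _ = pearsonr(y, pred)
    print(f"{name}: correlation r = {r:.3f}, bias = {np.mean(pred - y):+.2f}")
```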
149

A Case Study on Enhancing the Design Capability of an International Engineering Company: Perspectives of Talent Development and Professional Strengthening / The Research on Competent Promotion in Engineering of an International Engineering Company

簡錫雲 Unknown Date (has links)
Enterprises need to continuously enhance the knowledge, skills, and abilities of their employees to sustain business development and adapt to a rapidly changing environment and fierce international competition. The importance of establishing both short-term and long-term talent training programs is thus recognized. Under several unfavorable conditions, including keen competition in the global EPC (engineering, procurement, and construction) industry, Korean engineering companies' strategic bidding, mainland China's entry into international markets, and clients' simultaneous demands for high quality and compressed schedules, the case company (designated company A) faces severe challenges in achieving profitable growth. Company A must cultivate world-class, experienced design talent to strengthen its competitiveness in international markets. It is company A's top priority to tackle the engineering talent gap in the 7-to-15-year seniority range (only 10% of staff), given that young engineers with less than 5 years of experience account for more than 42% of its talent pool (an M-shaped staffing structure) and a wave of retirements is expected in the coming years. It is therefore important for company A to accelerate the professional development of young engineers, facilitate the transfer of senior engineers' experience, rationalize the staffing structure across job grades, and improve design quality. A questionnaire survey was undertaken to understand the effectiveness of company A's current training programs and to reveal employees' opinions of, and requests regarding, its training activities. The questionnaire items were classified into five categories: (1) degree of agreement on, and importance of, professional skills training; (2) degree of agreement on, and importance of, intergenerational transfer of experience; (3) degree of agreement on cross-disciplinary training; (4) degree of agreement on management training; and (5) degree of agreement on English competency requirements. Demographic variables were used to cross-analyze the survey results, to assess the effectiveness of current training, and to develop improvement recommendations. For the first two categories, the mean ratings of agreement and importance for each item were plotted in a two-dimensional scatter diagram (an IPA matrix), with the overall means as cut lines, producing four quadrants: Keep Up the Good Work, Possible Overkill, Concentrate Here, and Low Priority. Causes and improvement plans were further investigated for items falling in the Concentrate Here, Low Priority, and Possible Overkill quadrants. 
For the other three categories (cross-disciplinary training, management training, and English competency), cross-tabulation analyses against demographic variables were performed, and corresponding improvement plans were proposed for respondent groups where chi-square tests showed statistically significant differences in satisfaction.
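The quadrant assignment described above is mechanical once the mean ratings are computed. This short sketch in Python, with made-up survey items and scores, splits the agreement-importance plane at the overall means into the four named quadrants.

```python
# Sketch of the IPA (importance-performance analysis) step described above:
# each survey item gets a mean agreement and mean importance score, and the
# grand means split the plane into the four quadrants named in the text.
# Item names and scores are made up for illustration.
import numpy as np

items = {                      # item: (mean agreement, mean importance)
    "on-the-job mentoring":    (4.2, 4.5),
    "design software courses": (3.1, 4.4),
    "vendor seminars":         (4.0, 2.8),
    "code/standard updates":   (2.9, 3.0),
}
agree = np.array([v[0] for v in items.values()])
imp = np.array([v[1] for v in items.values()])
a_cut, i_cut = agree.mean(), imp.mean()     # overall means as cut lines

def quadrant(a, i):
    if a >= a_cut and i >= i_cut: return "Keep Up the Good Work"
    if a >= a_cut and i < i_cut:  return "Possible Overkill"
    if a < a_cut and i >= i_cut:  return "Concentrate Here"
    return "Low Priority"

for name, (a, i) in items.items():
    print(f"{name:24s} -> {quadrant(a, i)}")
```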
150

Theoretical study of the electromagnetically induced force on particles of micro- and nanometric dimensions

Γαλιατσάτος, Παύλος 23 June 2008 (has links)
When electromagnetic radiation from a source impinges on an ensemble of particles, two phenomena take place. First, forces act on the particles due exclusively to their scattering of the source's electromagnetic radiation; these are the so-called "optical trapping forces". Second, the particles themselves, by scattering the source's radiation, act as radiation sources and thus exert forces on one another; these are the so-called "optical binding forces". The combined action of these two kinds of forces results in the creation of stable structures in which the particles self-organize. To predict these structures theoretically, a very fast algorithm for computing the forces is needed, and the fastest algorithm follows from an analytical formula for the forces. The construction and presentation of this analytical formula is the content of this work.
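As a concrete, heavily simplified example of such an analytical force formula, the sketch below implements the textbook Rayleigh-regime expressions (particle much smaller than the wavelength) for the scattering and gradient forces on a small dielectric sphere. This is a standard special case for orientation only, not the general formula derived in this work, and all numerical values are made up.

```python
# Hedged sketch: standard Rayleigh-regime (particle << wavelength)
# closed-form expressions for the scattering and gradient forces on a small
# dielectric sphere, a textbook special case of an analytical force formula.
import math

c = 2.998e8                      # speed of light in vacuum, m/s

def rayleigh_forces(radius, wavelength, intensity, grad_intensity,
                    n_particle=1.5, n_medium=1.0):
    """Return (scattering force, gradient force) in newtons.
    intensity: local beam intensity I [W/m^2];
    grad_intensity: magnitude of its spatial gradient [W/m^3]."""
    m = n_particle / n_medium
    K = (m**2 - 1.0) / (m**2 + 2.0)              # Clausius-Mossotti factor
    lam = wavelength / n_medium                   # wavelength in the medium
    # Rayleigh scattering cross-section, sigma ~ r^6 / lambda^4
    sigma = (128.0 * math.pi**5 * radius**6 / (3.0 * lam**4)) * K**2
    f_scat = n_medium * sigma * intensity / c     # radiation pressure (push)
    # Gradient force pulls toward high intensity for m > 1
    f_grad = (2.0 * math.pi * n_medium * radius**3 / c) * K * grad_intensity
    return f_scat, f_grad

# 100 nm sphere near the focus of a visible-light beam (made-up numbers)
fs, fg = rayleigh_forces(radius=50e-9, wavelength=532e-9,
                         intensity=1e10, grad_intensity=1e10 / 1e-6)
print(f"scattering force {fs:.2e} N, gradient force {fg:.2e} N")
```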
