
Capacity and Shopping Rate Under a Social Distancing Regime

Zhong, Haitian 15 November 2021 (has links)
Capacity restrictions in stores, maintained by mechanisms like spacing customer intake at certain time intervals, have become familiar features in the time of the pandemic. The effect on total spending is not a linear function of reduced capacity, since shopping in a crowded store under a social distancing regime is prone to considerable slowdown. In this thesis, we introduce a simple dynamical model of the evolution of the shopping rate as a function of a given customer intake rate, starting with an empty store. The slowdown of each individual customer is incorporated as an additive term on top of a baseline shopping time, proportional to the number of other customers in the store. We determine analytically and by simulation the trajectory of the model as it approaches a Little's Law equilibrium, and identify the point of phase change, beyond which equilibrium cannot be achieved. By relating the customer shopping rate to the slowdown relative to the baseline, we can calculate the optimal intake rate leading to maximum equilibrium spending. This turns out to be the maximum rate compatible with equilibrium: the slowdown is not enough to justify a lower intake rate, because the slowdown due to the largest possible number of shoppers is more than compensated for by the increased volume of shopping.
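The dynamics the abstract describes can be sketched as a simple fluid approximation: occupancy rises with the intake rate and falls at a rate set by per-customer shopping times that grow with the crowd. All parameter values below are hypothetical, chosen only for illustration; the thesis itself may use a different formulation.

```python
def simulate(intake_rate, t0=10.0, c=0.5, dt=0.01, t_end=2000.0):
    """Fluid approximation of store occupancy under a social distancing regime.

    Each customer's shopping time is a baseline t0 plus c per other customer
    in the store, so with n customers present the aggregate departure rate
    is roughly n / (t0 + c * n).  Occupancy n(t) is integrated by Euler steps.
    """
    n = 0.0  # store starts empty
    for _ in range(int(t_end / dt)):
        departure_rate = n / (t0 + c * n) if n > 0 else 0.0
        n += (intake_rate - departure_rate) * dt
    return n
```

In this toy version the departure rate saturates at 1/c as the store fills, so an equilibrium n* = λ·t0/(1 − λ·c) exists only for intake rates λ below 1/c; above it, occupancy grows without bound, which corresponds to the phase change the abstract refers to.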

Desenvolvimento de modelos dinâmicos para a formação de clusters aplicados em dados biológicos / Developing dynamical systems for data clustering applied to biological data

Damiance Junior, Antonio Paulo Galdeano 16 October 2006 (has links)
With the advent of microarray technology, a large amount of gene expression data is now available. Clustering is the computational technique usually employed to analyze and explore the data produced by microarrays. Due to the variety of information that can be extracted from expression data, clustering techniques with different approaches are needed. The dynamical model for data clustering proposed in (Zhao et al. 2003a) has several features of interest for the clustering task: the number of clusters does not need to be known in advance, it has a multi-scale property and high parallelism, and it is flexible enough to accommodate more complex rules while clustering the data. However, two desirable features of clustering techniques are missing: the ability to detect clusters of arbitrary size and shape, and a hierarchical representation of the clusters. This work presents three techniques that overcome these restrictions of the model proposed in (Zhao et al. 2003a): Model1, a simplification of the original dynamical model that is also more efficient; Model2, which adds a new set of elements to the dynamical model so that clusters of arbitrary size and shape can be detected; and a hierarchical clustering algorithm that uses Model1 as a building block. The techniques developed here were applied to biological data, segmenting microarray images and supporting the analysis of the St. Jude Leukemia gene expression dataset.

Modelagem e controle ótimo de um robô quadrúpede. / Modelling and optimal control of a quadruped robot.

Segundo Potts, Alain 11 November 2011 (has links)
This work addresses the modeling and optimal control of an autonomous quadruped robot. Due to variations in the topology and the degrees of freedom of the robot during its motion, two different modeling approaches were considered: in the first, the robot was modeled with at least two legs supporting its body or platform; in the second, a single leg in the air was modeled. In both cases, the direct and inverse kinematic problems of position were solved by means of the Denavit-Hartenberg parameterization. The velocity kinematics and its singularities were analyzed through the Jacobian matrix, and the dynamic models of the system were obtained using the Principle of Virtual Work (d'Alembert's method) for the platform and the iterative Newton-Euler method for the legs. From these dynamic models, an algorithm was developed to minimize the electrical power losses of the joint motors, using a strategy of independent control for each joint. This strategy, together with the time discretization of the system model, transformed the initial optimization problem for each joint into a Quadratic Programming problem that is much simpler to solve. After solving these problems, and to take into account the interactions between the dynamics of the various joints, a search was carried out for a fixed point, or global minimum, characterizing the total energy spent in the motion of the system. Finally, after the convergence of the algorithm was demonstrated and analyzed, it was tested in the control of the gait of the Kamambaré robot. The test showed the good performance of the formulation and the feasibility of its implementation in real systems.
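The reduction of the per-joint energy problem to a Quadratic Program can be illustrated on a toy double-integrator joint: once the dynamics are discretized, the final state is linear in the control sequence, so minimizing the sum of squared controls subject to reaching a target is a QP with a closed-form minimum-norm solution. This is a simplified, hypothetical sketch; the thesis model includes motor electrical losses and richer dynamics.

```python
import numpy as np

def min_energy_controls(theta_target, omega_target, n_steps=50, dt=0.02):
    """Minimum-energy control of a discretized double-integrator joint.

    Dynamics: theta[k+1] = theta[k] + dt*omega[k],
              omega[k+1] = omega[k] + dt*u[k]   (unit inertia, zero start).
    The final state is A @ u for a 2 x n_steps matrix A, so the QP
    "minimize sum(u**2) s.t. A @ u = target" is solved by the
    minimum-norm solution, i.e. the pseudoinverse.
    """
    A = np.zeros((2, n_steps))
    for k in range(n_steps):
        A[0, k] = dt * dt * (n_steps - 1 - k)  # u[k]'s effect on final theta
        A[1, k] = dt                           # u[k]'s effect on final omega
    target = np.array([theta_target, omega_target])
    return np.linalg.pinv(A) @ target
```

Coupling between joints, as in the thesis, would then be handled by iterating such per-joint solutions toward a fixed point of the total energy.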

ECG Noise Filtering Using Online Model-Based Bayesian Filtering Techniques

Su, Aron Wei-Hsiang January 2013 (has links)
The electrocardiogram (ECG) is a time-varying electrical signal that interprets the electrical activity of the heart. It is obtained by a non-invasive technique known as surface electromyography (EMG), used widely in hospitals. There are many clinical contexts in which ECGs are used, such as medical diagnosis, physiological therapy and arrhythmia monitoring. In medical diagnosis, medical conditions are interpreted by examining information and features in ECGs. Physiological therapy involves the control of some aspect of the physiological effort of a patient, such as the use of a pacemaker to regulate the beating of the heart. Moreover, arrhythmia monitoring involves observing and detecting life-threatening conditions, such as myocardial infarction or heart attacks, in a patient. ECG signals are usually corrupted with various types of unwanted interference such as muscle artifacts, electrode artifacts, power line noise and respiration interference, and are distorted in such a way that it can be difficult to perform medical diagnosis, physiological therapy or arrhythmia monitoring. Consequently signal processing on ECGs is required to remove noise and interference signals for successful clinical applications. Existing signal processing techniques can remove some of the noise in an ECG signal, but are typically inadequate for extraction of the weak ECG components contaminated with background noise and for retention of various subtle features in the ECG. For example, the noise from the EMG usually overlaps the fundamental ECG cardiac components in the frequency domain, in the range of 0.01 Hz to 100 Hz. Simple filters are inadequate to remove noise which overlaps with ECG cardiac components. Sameni et al. have proposed a Bayesian filtering framework to resolve these problems, and this gives results which are clearly superior to the results obtained from application of conventional signal processing methods to ECG. 
However, a drawback of this Bayesian filtering framework is that it must run offline, which is undesirable for clinical applications such as arrhythmia monitoring and physiological therapy, both of which require online operation in near real-time. To resolve this problem, in this thesis we propose a dynamical model which permits the Bayesian filtering framework to function online. The framework with the proposed dynamical model loses less than 4% in performance compared to the previous (offline) version of the framework. The proposed dynamical model is based on theory from fixed-lag smoothing.
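Fixed-lag smoothing, the idea the proposed dynamical model builds on, can be illustrated in miniature: augment the state with a few delayed copies and run an ordinary Kalman filter, so that at each step the filter also delivers a smoothed estimate of the state a fixed lag in the past. The sketch below uses a scalar random walk, not the ECG model itself; all parameters are illustrative.

```python
import numpy as np

def fixed_lag_smoother(y, q=0.1, r=1.0, lag=5):
    """Online fixed-lag Kalman smoother for a scalar random walk.

    The augmented state is [x_k, x_{k-1}, ..., x_{k-lag}]; filtering it
    with a standard Kalman filter yields, in the last component, the
    smoothed estimate of x_{k-lag} using data up to time k.
    """
    n = lag + 1
    F = np.zeros((n, n)); F[0, 0] = 1.0     # random-walk dynamics for x_k
    F[1:, :-1] = np.eye(lag)                # shift delayed copies down
    Q = np.zeros((n, n)); Q[0, 0] = q       # process noise enters x_k only
    H = np.zeros((1, n)); H[0, 0] = 1.0     # we observe the current state
    x = np.zeros((n, 1)); P = np.eye(n)
    filt, smooth = [], []
    for yk in y:
        x = F @ x; P = F @ P @ F.T + Q      # predict
        S = (H @ P @ H.T)[0, 0] + r
        K = P @ H.T / S
        x = x + K * (yk - (H @ x)[0, 0])    # update
        P = (np.eye(n) - K @ H) @ P
        filt.append(x[0, 0])                # filtered estimate of x_k
        smooth.append(x[-1, 0])             # smoothed estimate of x_{k-lag}
    return np.array(filt), np.array(smooth)
```

The smoothed track lags the data by `lag` samples but is more accurate than the filtered one, which is the near-real-time trade-off the thesis exploits.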

Modeling of Magnetic Fields and Extended Objects for Localization Applications

Wahlström, Niklas January 2015 (has links)
The level of automation in our society is ever increasing. Technologies like self-driving cars, virtual reality, and fully autonomous robots, which all were unimaginable a few decades ago, are realizable today, and will become standard consumer products in the future. These technologies depend upon autonomous localization and situation awareness where careful processing of sensory data is required. To increase efficiency, robustness and reliability, appropriate models for these data are needed. In this thesis, such models are analyzed within three different application areas, namely (1) magnetic localization, (2) extended target tracking, and (3) autonomous learning from raw pixel information. Magnetic localization is based on one or more magnetometers measuring the induced magnetic field from magnetic objects. In this thesis we present a model for determining the position and the orientation of small magnets with an accuracy of a few millimeters. This enables three-dimensional interaction with computer programs that cannot be handled with other localization techniques. Further, an additional model is proposed for detecting wrong-way drivers on highways based on sensor data from magnetometers deployed in the vicinity of traffic lanes. Models for mapping complex magnetic environments are also analyzed. Such magnetic maps can be used for indoor localization where other systems, such as GPS, do not work. In the second application area, models for tracking objects from laser range sensor data are analyzed. The target shape is modeled with a Gaussian process and is estimated jointly with target position and orientation. The resulting algorithm is capable of tracking various objects with different shapes within the same surveillance region. In the third application area, autonomous learning based on high-dimensional sensor data is considered.
In this thesis, we consider one instance of this challenge, the so-called pixels-to-torques problem, where an agent must learn a closed-loop control policy from pixel information only. To solve this problem, high-dimensional time series are described using a low-dimensional dynamical model. Techniques from machine learning together with standard tools from control theory are used to autonomously design a controller for the system without any prior knowledge. System models used in the applications above are often provided in continuous time. However, a major part of the applied theory is developed for discrete-time systems. Discretization of continuous-time models is hence fundamental. Therefore, this thesis ends with a method for performing such discretization using Lyapunov equations together with analytical solutions, enabling efficient implementation in software. / How can a computer be made to follow the puck in table hockey to compile match statistics, a brush to paint virtual watercolors, a scalpel to digitize pathology, or a multi-tool to sculpt in 3D? These are four applications built on the patent-pending algorithm developed in this thesis. The method hides a small magnet in the tool and places a number of three-axis magnetometers - of the same kind as in our smartphones - in a network around the work surface. The magnet's field gives rise to a unique signature in the sensors, from which the magnet's position can be computed in three degrees of freedom, along with two of its angles. The thesis develops a complete framework for these computations and the associated analysis. Another application studied on the same principle is the detection and classification of vehicles. In a collaboration with Luleå University of Technology and project partners, an algorithm was developed to classify the direction in which vehicles pass using only measurements from a two-axis magnetometer. Tests outside Luleå show essentially 100% correct classification. Viewing a vehicle as a structure of magnetic dipoles, rather than a single large one, is an example of a so-called extended target. In classical theory for tracking aircraft, ships and the like, targets are described as points, but many of today's increasingly accurate sensors generate several measurements from the same target. By giving targets a geometric extent or other attributes (such as dipole structures), one can not only improve tracking algorithms and use sensor data more efficiently, but also classify targets more effectively. The thesis proposes a model that describes the geometric shape more flexibly and at a higher level of detail than previous models in the literature. A completely different application studied is the use of machine learning to teach a computer to steer a planar pendulum to a desired position solely by analyzing the pixels of video images. The approach lets the computer study a large number of images of the pendulum, in this case thousands, to understand how a known control signal affects its dynamics, and then act autonomously once the learning phase is complete. In the long run, the technique could be used to develop autonomous robots. / In the electronic version figure 2.2a is corrected. / COOPLOC
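The magnetic localization idea can be sketched as a nonlinear least-squares fit of a point-dipole field model to magnetometer readings. Everything below is hypothetical: the sensor layout, the magnet's moment, and the units are stand-ins, and the constant mu0/(4*pi) is folded into the moment for simplicity.

```python
import numpy as np
from scipy.optimize import least_squares

def dipole_field(sensor_pos, magnet_pos, moment):
    """Point-dipole magnetic field at sensor_pos from a magnet at magnet_pos.

    Scaling constant mu0/(4*pi) is absorbed into `moment`.
    """
    d = sensor_pos - magnet_pos
    dist = np.linalg.norm(d)
    return 3.0 * d * np.dot(moment, d) / dist**5 - moment / dist**3

def locate_magnet(sensors, readings, moment, x0):
    """Estimate the magnet position from several 3-axis magnetometer readings."""
    def residuals(p):
        return np.concatenate(
            [dipole_field(s, p, moment) - b for s, b in zip(sensors, readings)])
    return least_squares(residuals, x0).x
```

A full framework like the one in the thesis would also estimate two orientation angles of the moment and handle measurement noise; this sketch fixes the moment and fits position only.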

Bayesian Optimization for Design Parameters of Autoinjectors

Heliben Naimeshkum Parikh (15340111) 24 April 2023 (has links)
The document describes a computational framework for optimizing spring-driven autoinjectors, using Bayesian optimization for efficient and cost-effective design.
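A minimal sketch of the kind of Bayesian optimization loop the record refers to: a Gaussian-process surrogate with an RBF kernel and an expected-improvement acquisition, minimizing a stand-in one-dimensional objective. The actual objective, parameterization and library used in the thesis are not specified in the record; everything here is illustrative.

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(a, b, ls=0.2):
    """RBF kernel matrix between 1-D point sets a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)

def bayes_opt(f, n_iter=15):
    """Minimize f on [0, 1] with a GP surrogate and expected improvement."""
    X = np.array([0.0, 0.5, 1.0])              # fixed initial design
    y = np.array([f(x) for x in X])
    grid = np.linspace(0.0, 1.0, 200)          # candidate points
    for _ in range(n_iter):
        K = rbf(X, X) + 1e-8 * np.eye(len(X))  # jitter for stability
        ks = rbf(grid, X)
        mu = ks @ np.linalg.solve(K, y)        # GP posterior mean
        var = 1.0 - np.sum(ks * np.linalg.solve(K, ks.T).T, axis=1)
        sigma = np.sqrt(np.maximum(var, 1e-12))
        imp = y.min() - mu                     # improvement over incumbent
        z = imp / sigma
        cdf = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2)))
        pdf = np.exp(-0.5 * z**2) / sqrt(2 * pi)
        ei = imp * cdf + sigma * pdf           # expected improvement
        x_next = grid[np.argmax(ei)]           # most promising candidate
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()
```

For an expensive autoinjector simulation, each call to `f` would be one simulation run, which is exactly where the sample efficiency of Bayesian optimization pays off.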

A Dynamical Approach to Plastic Deformation of Nano-Scale Materials : Nano and Micro-Indentation

Srikanth, K 07 1900 (has links) (PDF)
Recent studies demonstrate that mechanical deformation of small-volume systems can be significantly different from that of the bulk. One such length-scale-dependent property is the increase in the yield stress with decreasing diameter of micrometer-sized rods, particularly when the diameter is below a micrometer. Intermittent flow may also result when the diameter of the rods is decreased below a certain value. The second such property is the intermittent plastic deformation during nano-indentation experiments. Here again, the instability manifests due to the smallness of the sample size, in the form of force fluctuations or displacement bursts. The third such length-scale-dependent property manifests as the 'smaller is stronger' behavior in indentation experiments on thin films, commonly called the indentation size effect (ISE). More specifically, the ISE refers to the increase in hardness with decreasing indentation depth, particularly below a fraction of a micrometer of indentation depth. The purpose of this thesis is to extend the nonlinear dynamical approach to plastic deformation, originally introduced by Ananthakrishna and coworkers in the early 1980s, to nano- and micro-indentation. More specifically, we address three distinct problems: (a) intermittent force/load fluctuations during the displacement-controlled mode of nano-indentation, (b) displacement bursts during the load-controlled mode of nano-indentation, and (c) devising an alternate framework for the indentation size effect. In this thesis, we demonstrate that our approach predicts not just all the generic features of nano- and micro-indentation and the ISE; the predicted numbers also match experiments. Nano-indentation experiments are usually carried out either in a displacement-controlled (DC) mode or a load-controlled (LC) mode. The indenter tip radius typically ranges from a few tens of nanometers to a few hundred nanometers.
Therefore, the indented volume is so small that the probability of finding a dislocation is close to zero. This implies that dislocations must be nucleated for further plastic deformation to proceed, which is responsible for triggering intermittent flow as indentation proceeds. While several load drops are seen beyond the elastic limit in DC experiments, several displacement jumps are seen in LC experiments. In both cases, the stress corresponding to the load maximum on the elastic branch is close to the theoretical yield stress of an ideal crystal, a feature attributed to the absence of dislocations in the indented volume. Hardness is defined as the ratio of the load to the imprint area after unloading, and is conventionally measured by unloading the indenter from desired loads to measure the residual plastic imprint area. The hardness so calculated is found to increase with decreasing indentation depth. However, such size-dependent effects cannot be explained on the basis of conventional continuum plasticity theories, since in those theories all mechanical properties are independent of length scales. Early theories suggest that strong strain gradients exist under the indenter, which require geometrically necessary dislocations (GNDs) to relax them. In an effort to explain the size effect, these theories introduce a length scale corresponding to the strain gradients. One other feature predicted by subsequent models of the ISE is a linear relation between the square of the hardness and the inverse of the indentation depth. Early investigations of the ISE did recognize that GNDs were required to accommodate strain gradients and that the hardness H is determined by the sum of the statistically stored dislocation (SSD) and GND densities. Following these steps, Nix and Gao derived an expression for the hardness as a function of the indentation depth z. The relevant variables are the SSD and GND densities.
An expression for the GND density was obtained by assuming that the GNDs are contained within a hemispherical volume of mean contact radius. The authors derive an expression for the hardness H as a function of indentation depth z given by (H/H0)^2 = 1 + z*/z. The intercept H0 represents the hardness arising only from SSDs and corresponds to the hardness in the limit of large sample size. The slope z* can be identified as the length scale below which the ISE becomes significant. The authors showed that this linear relation was in excellent agreement with the published results of McElhaney et al. for cold-rolled polycrystalline copper and single crystals of copper, and of Ma and Clarke for single crystals of silver. Subsequent investigations showed that the linear relationship between H^2 and 1/z breaks down at small indentation depths. Much insight into the nano-indentation process has come from three distinct types of studies. First, early studies using bubble-raft indentation and later studies using colloidal crystals (the soft-matter equivalent of the crystalline phase) allowed visualization of the dislocation nucleation mechanism. Second, more recently, in-situ transmission electron microscope studies of nano-indentation experiments have been useful in understanding the dislocation nucleation mechanism in real materials. Third, considerable theoretical understanding has come largely from various types of simulation studies, such as molecular dynamics (MD) simulations, dislocation dynamics simulations and multiscale modeling simulations (using MD together with dislocation dynamics simulations). A major advantage of simulation methods is their ability to include a range of dislocation mechanisms participating in the evolution of the dislocation microstructure, starting from the nucleation of a dislocation, its multiplication, the formation of locks, junctions, etc.
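The Nix-Gao relation (H/H0)^2 = 1 + z*/z is straightforward to evaluate numerically; the constants below are illustrative placeholders, not fitted values from any of the cited experiments.

```python
from math import sqrt, isclose

def hardness(z, h0=0.8, z_star=300.0):
    """Nix-Gao indentation size effect: (H/H0)^2 = 1 + z*/z.

    h0 is the bulk hardness from SSDs alone (here in GPa), z and z_star are
    the indentation depth and the ISE length scale (here in nm).  All the
    numbers are illustrative, not fitted values.
    """
    return h0 * sqrt(1.0 + z_star / z)
```

By construction H^2 is linear in 1/z with intercept H0^2 and slope H0^2·z*, which is the straight-line plot the investigations above test, and which fails at very small depths.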
However, this advantage is offset by serious limitations: the short time scales inherent to the above-mentioned simulations and the limited size of the volumes that can be simulated. Thus, simulation approaches cannot impose experimental parameters such as the indentation rate, the radius of the indenter or the thickness of the sample, for example in MD simulations. Indeed, the imposed deformation rates are often several orders of magnitude higher than the experimental rates. Consequently, the predicted values of the force, indentation depth, etc., differ considerably from those reported in experiments. For these reasons, the relevance of these simulations to real materials has been questioned. While several simulations, particularly MD simulations, predict several force drops, there are no simulations that predict the displacement jumps seen in LC mode experiments. The inability of simulation methods to adopt experimental parameters and the mismatch of the predicted numbers with experiments are the main motivation for devising an alternative framework that can adopt experimental parameters and predict numbers comparable to experiments. The basic premise of our approach is that describing the time evolution of the relevant variables should be adequate to capture the most generic features of nano- and micro-indentation phenomena. In the particular case under study, this point of view is based on the following observation. While one knows that dislocations are the basic defects responsible for the plastic deformation occurring inside the sample, the load-indentation depth curve does not include any information about the spatial location of dislocation activity inside the sample. In fact, the measured load and displacement are the sample-averaged response of the dislocation activity in the sample. This suggests that it should be adequate to use sample-averaged dislocation densities to obtain the load-indentation depth curve.
Keeping this in mind, we devise a method for calculating the contribution from plastic deformation arising from dislocation activity in the entire sample. This is done by setting up rate equations for the relevant sample-averaged dislocation densities. The first problem we consider is the force/load fluctuations in displacement-controlled nano-indentation. We devise a novel approach that combines the power of nonlinear dynamics with evolution equations for the mobile and forest dislocation densities. Since the force serrations result from plastic deformation occurring inside the sample, we calculate this contribution by setting up a system of coupled nonlinear time evolution equations for the mobile and forest dislocation densities. The approach closely follows the steps used in the Ananthakrishna (AK) model for the Portevin-Le Chatelier (PLC) effect. The model includes nucleation, multiplication and propagation of dislocation loops in the time evolution equation for the mobile dislocation density. We also include other well-known mechanisms that transform mobile dislocations into forest dislocations. Several of these dislocation mechanisms are drawn from the AK model for the PLC effect. To illustrate the ability of the model to predict force fluctuations that match experiments, we use the work of Kiely et al, which employs a spherical indenter. The ability of the approach is illustrated by adopting experimental parameters such as the indentation rate, the radius of the indenter, etc. The model predicts all the generic features of nano-indentation, such as the Hertzian elastic branch followed by several force drops of decreasing magnitudes, and residual plasticity after unloading. The stress corresponding to the elastic force maximum is close to the yield stress of an ideal solid. The predicted values for all the quantities are close to those reported in experiments.
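The structure of such coupled rate equations can be sketched schematically. The toy system below is not the thesis model: the functional forms and coefficients are purely illustrative stand-ins for the mechanisms named in the text (multiplication of mobile dislocations, transformation of mobile into forest dislocations, and a term limiting the forest density), integrated with a simple forward-Euler scheme.

```python
import math

def rates(rho_m, rho_f, k_mult=0.5, k_trans=0.1, k_ann=0.05):
    """Toy evolution laws for sample-averaged densities (all coefficients
    arbitrary): multiplication feeds the mobile density rho_m, transformation
    moves density from mobile to forest rho_f, annihilation limits rho_f."""
    drho_m = k_mult * math.sqrt(rho_f) * rho_m - k_trans * rho_m * rho_f
    drho_f = k_trans * rho_m * rho_f - k_ann * rho_f ** 2
    return drho_m, drho_f

# Forward-Euler time stepping from small post-nucleation densities.
rho_m, rho_f, dt = 0.01, 0.01, 1e-3
for _ in range(5000):
    dm, df = rates(rho_m, rho_f)
    rho_m += dt * dm
    rho_f += dt * df
```

In the actual model the densities would additionally be coupled to the machine equation for the imposed indentation rate, which is what produces the serrated load response.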
Our model allows us to address the indentation size effect, including the ambiguity in defining the hardness in the force-drop dominated regime. At large indentation depths, where the load drops disappear, the hardness shows a decreasing, though marginal, trend. The second problem we consider is the load-controlled mode of indentation, where several displacement jumps of decreasing magnitudes are seen. Even though the LC mode is routinely used in nano-indentation experiments, there are no models or simulations that predict the generic features of force-displacement curves, in particular the existence of several displacement jumps of decreasing magnitudes. The basic reason for this is the inability of these methods to impose a constant load rate during displacement jumps. We then show that an extension of the model for the DC mode predicts all the generic features when the model is appropriately coupled to an equation defining the load rate. Following the model for the DC mode, we retain the system of coupled nonlinear time evolution equations for the mobile and forest dislocation densities that includes nucleation, multiplication and propagation threshold mechanisms for mobile dislocations, along with other dislocation transformation mechanisms. The commonly used Berkovich indenter is considered. The equations are then coupled to the force rate equation. We demonstrate that the model predicts all the generic features of LC mode nano-indentation, such as the existence of an initial elastic branch followed by several displacement jumps of decreasing magnitudes, and residual plasticity after unloading, for a range of model parameter values. In this range, the predicted values of the load, displacement jumps, etc., are similar to those found in experiments. Further, an optimized set of parameter values can easily be determined that provides a good fit to the load-indentation depth curve of Gouldstone et al for single crystals of aluminum.
The stress corresponding to the maximum force on the Berkovich elastic branch is close to the theoretical yield stress. We also elucidate the ambiguity in defining hardness at nanometer scales, where the displacement jumps dominate. The approach also provides insights into several open questions. The third problem we consider is the indentation size effect. The conventional definition of hardness is the ratio of the load to the residual imprint area. The latter is determined by the residual plastic indentation depth through an area-depth relation. Yet the residual plastic indentation depth, which is a measure of dislocation mobility, never enters most hardness models. Rather, conventional hardness models are based on the Taylor relation for the flow stress, which characterizes the resistance to dislocation motion, a property complementary to mobility. Our idea is to provide an alternative way of explaining the indentation size effect by devising a framework that directly calculates the residual plastic indentation depth by integrating the Orowan expression for the plastic strain rate. Following our general approach to plasticity problems, we set up a system of coupled nonlinear time evolution equations for the mobile, forest (or SSD) and GND densities. The model includes dislocation multiplication and other well-known transformation mechanisms among the three types of dislocations. The main contributing factor to the evolution of the GND density is determined by the mean strain gradient and the number of sites in the contact area that can activate dislocation loops of a certain size. The equations are then coupled to the load rate equation. The ability of the approach is illustrated by adopting experimental parameters such as the indentation rates and the geometrical quantities defining the Berkovich indenter, including the nominal tip radius, among other parameters.
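The idea of integrating the Orowan expression can be illustrated with a minimal sketch. The Orowan form itself, eps_dot_p = b * rho_m * v (Burgers vector times mobile density times mean dislocation velocity), is standard; everything else below — the assumed density history, the constant velocity, and the length scale L converting strain to depth — is an order-of-magnitude placeholder, not a fitted thesis parameter.

```python
import math

b = 2.5e-10                 # Burgers vector (m), typical for an fcc metal

def mobile_density(t):      # assumed density history (1/m^2), illustrative only
    return 1e12 * (1.0 - math.exp(-t))

def velocity(t):            # assumed mean dislocation velocity (m/s), illustrative
    return 1e-6

# Accumulate plastic strain by simple quadrature of the Orowan rate
# eps_dot_p = b * rho_m * v over a 10-second loading history.
dt, t, eps_p = 1e-3, 0.0, 0.0
for _ in range(10000):
    eps_p += b * mobile_density(t) * velocity(t) * dt
    t += dt

# A characteristic contact length L (assumed) converts strain to a plastic depth.
L = 1e-6                    # assumed contact length scale (m)
z_p = eps_p * L             # residual plastic depth estimate (m)
```

The point of the sketch is only the bookkeeping: once the density evolution equations supply rho_m(t), the residual plastic depth follows by direct time integration, with no need to treat the SSD and GND densities as fitting parameters in a hardness formula.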
The hardness is obtained by calculating the residual plastic indentation depth after unloading through integration of the Orowan expression for the plastic strain rate. We demonstrate that the model predicts all features of the indentation size effect, namely the increase in the hardness with decreasing indentation depth and the linear relation between the square of the hardness and the inverse of the indentation depth for depths beyond 200 nm, for a range of parameter values. The model also predicts the deviation from the linear relation of H^2 as a function of 1/z at smaller depths, consistent with experiments. We also show that it is straightforward to obtain optimized parameter values that give a good fit to data for polycrystalline cold-worked copper and single crystals of silver. Our approach provides an alternative way of understanding hardness and the indentation size effect on the basis of the Orowan equation for plastic flow. This approach must be contrasted with most models of hardness, which use the SSD and GND densities as parameters. The thesis is organized as follows. The first Chapter is devoted to background material that covers the physical aspects of the different kinds of plastic deformation relevant to the thesis. These include the conventional yield phenomenon and the intermittent plastic deformation in bulk alloys exhibiting the Portevin-Le Chatelier (PLC) effect. We then provide background material on nano- and micro-indentation, covering both experimental aspects and the current status of the DC and LC modes of nano-indentation. Results of simulation methods are briefly summarized. The chapter also provides a survey of hardness models and the indentation size effect, together with a critical survey of experiments on dislocation microstructure that support or contradict certain predictions of the Nix-Gao model. The current status of numerical simulations is also given.
The second Chapter is devoted to introducing the basic steps in modeling plastic deformation using a nonlinear dynamical approach. In particular, we describe how the time evolution equations are constructed on the basis of known dislocation mechanisms such as nucleation, multiplication, formation of junctions, etc. We then consider a model for the continuous yield phenomenon that involves only the mobile and forest densities coupled to a constant strain rate condition. This problem is considered in some detail to illustrate how the approach can be used for modeling nano-indentation and the indentation size effect. The third Chapter deals with a model for displacement-controlled nano-indentation. The fourth Chapter is devoted to adapting these equations to the load-controlled mode of nano-indentation. The fifth Chapter is devoted to modeling the indentation size effect based on calculating the residual plastic indentation depth after unloading using the Orowan expression for the plastic strain rate. We conclude the thesis with a Summary, Discussion and Conclusions.
