591

Development of techniques using finite element and meshless methods for the simulation of piercing

Mabogo, Mbavhalelo. January 2009 (has links)
Thesis (MTech (Mechanical Engineering))--Cape Peninsula University of Technology, 2009. / Includes bibliographical references (leaves 94-98). Also available online.
592

Sensitivity analysis on a simulated helpdesk system with respect to input distributions with special reference to the circumference method

Roux, Johanna Wileria 01 January 2002 (has links)
Simulation analysis makes use of statistical distributions to specify the parameters of input data. It is well known that fitting a distribution to empirical data is more of an art than a science (Banks J., 1998, p. 74) because of the difficulty of constructing a 'good' histogram. The most difficult step is choosing an appropriate interval width. Too small a width will produce a ragged histogram, whereas too large a width will produce one that is overaggregated and block-like. De Beer and Swanepoel (1999) developed 'Simple and effective number-of-bins circumference selectors' for creating histograms for the purpose of fitting distributions. When using simulation software such as Arena, one can generally fit distributions to input data using a built-in function of the software. If input distributions could be compared regarding their effect on the outcomes of a simulation model, one could assess whether input distributions generated by Arena can be accepted unconditionally or whether one should pay special attention to the input distributions used in the simulation model. In this study a simulation model of a computer helpdesk system is constructed to test the effect of input distributions. Distributions fitted with the 'circumference technique' are compared with those from the simulation package Arena and those calculated by the statistical package Statistica, and then compared with empirical distributions. In the helpdesk system, calls from employees experiencing problems with any computer hardware or software are logged, redirected when necessary, attended to, resolved and then closed. Queue statistics of the simulation model using input distributions suggested by Arena, as opposed to input distributions deduced from the other methods, are compared, and a conclusion is reached as to how important or unimportant it is for this specific model to select appropriate input distributions. / Business Management / M. Com. (Quantitative Management)
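As a hedged illustration of the kind of input-distribution check described above (the data, candidate distributions and tooling below are assumptions for the sketch, not the thesis's Arena/Statistica workflow), one can fit candidate distributions to logged service times with SciPy and see how strongly the histogram used for visual fitting depends on the number of bins:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Stand-in for empirical helpdesk service times (minutes); a real study would load logged data.
service_times = rng.gamma(shape=2.0, scale=4.0, size=500)

# Candidate input distributions, as a simulation package might fit them.
candidates = {
    "exponential": stats.expon,
    "gamma": stats.gamma,
    "lognormal": stats.lognorm,
}
for name, dist in candidates.items():
    params = dist.fit(service_times)
    ks = stats.kstest(service_times, dist.cdf, args=params)
    print(f"{name:11s} KS statistic = {ks.statistic:.3f}, p-value = {ks.pvalue:.3f}")

# The histogram used for visual fitting changes markedly with the number of bins:
for bins in (5, 20, 80):
    counts, _ = np.histogram(service_times, bins=bins)
    print(f"{bins:2d} bins -> bin counts range from {counts.min()} to {counts.max()}")
```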
593

Teoria quântica do campo escalar real com autoacoplamento quártico - simulações de Monte Carlo na rede com um algoritmo worm / Quantum theory of the real scalar field with quartic self-coupling - lattice Monte Carlo simulations with a worm algorithm

Leme, Rafael Reis [UNESP] 13 June 2011 (has links) (PDF)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) / In this work we present results of Monte Carlo simulations of the φ⁴ quantum field theory on a (1+1)-dimensional lattice employing the recently proposed worm algorithm. In Monte Carlo simulations, the efficiency of an algorithm is measured in terms of a dynamical critical exponent ζ, which relates the autocorrelation time τ between measurements to the lattice length L through τ ∝ L^ζ. The autocorrelation time provides a measure of the "memory" of the Monte Carlo updating process. The worm algorithm has a ζ comparable to those of the efficient cluster algorithms, yet it uses only local updates. We present results for observables as functions of the unrenormalized parameters of the model, λ and μ². Particular attention is devoted to the vacuum expectation value ⟨φ(x)⟩ and the two-point correlation function ⟨φ(x)φ(x′)⟩. We determine the critical line (λ_c, μ_c²) separating the symmetric and spontaneously broken phases and compare the results with the literature.
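As a hedged illustration of the efficiency measure described in this abstract (a generic estimator, not the worm algorithm or the thesis code), the integrated autocorrelation time τ of a Monte Carlo time series can be estimated as τ = 1/2 + Σ_t ρ(t), with ρ(t) the normalized autocorrelation function; comparing τ for several lattice lengths L then yields the dynamical exponent ζ through τ ∝ L^ζ:

```python
import numpy as np

def integrated_autocorrelation_time(series, window=None):
    """Estimate tau_int = 1/2 + sum_t rho(t) for a stationary Monte Carlo time series.

    The sum is truncated at `window` lags (or at the first non-positive rho)
    to control the noise of the estimator.
    """
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    n = len(x)
    var = np.dot(x, x) / n
    max_lag = window if window is not None else n // 10
    tau = 0.5
    for t in range(1, max_lag):
        rho = np.dot(x[:-t], x[t:]) / ((n - t) * var)
        if rho <= 0.0:          # simple self-consistent cutoff
            break
        tau += rho
    return tau

# Toy check: an AR(1) series has a known tau_int = (1 + a) / (2 * (1 - a)).
rng = np.random.default_rng(0)
a = 0.9
x = np.zeros(20_000)
for i in range(1, len(x)):
    x[i] = a * x[i - 1] + rng.normal()
print(f"estimated tau_int ~ {integrated_autocorrelation_time(x):.1f}, "
      f"exact ~ {(1 + a) / (2 * (1 - a)):.1f}")
```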
594

Development of numerical schemes to improve the efficiency of CFD simulation of high speed viscous aerodynamic flows

Mason, Kevin Richard January 2013 (has links)
No description available.
595

Análise comparativa de técnicas de compressão aplicadas a imagens médicas usando ultrassom / Comparative analysis of compression techniques applied to medical ultrasound images

Zimbico, Acácio José 18 March 2014 (has links)
CAPES / Image compression is of great importance for medical applications. Compression techniques have been studied extensively because they represent data efficiently, reducing the storage space required and the demand placed on transmission through communication channels. The JPEG compression technique uses the two-dimensional discrete cosine transform, while the Joint Photographic Experts Group 2000 (JPEG2000), Set Partitioning in Hierarchical Trees (SPIHT) and Embedded Zerotree Wavelet (EZW) techniques use the two-dimensional wavelet transform. In this work, a comparative analysis is made using the metrics Mean Squared Error (MSE), Peak Signal to Noise Ratio (PSNR), Cross Correlation (CC) and Structural Similarity (SSIM), together with an indirect assessment of computational effort through processing time. The analysis of the JPEG technique with blocks of different sizes shows the influence of the center frequency of the ultrasonic transducers on the image details, and indicates that 8x8 and 16x16 pixel blocks perform better than smaller blocks. Additionally, the alternative quantization matrices proposed by Hamamoto, Veraswamy and Abu for the JPEG technique improve the quality of the reconstructed ultrasound image when compared with the traditional quantization matrix at the same compression ratio. For the techniques based on the wavelet transform, the choice of filter has a significant impact on the quality of the reconstructed ultrasound image. Finally, the JPEG2000 algorithm shows the best performance with respect to image quality and processing time when compared with the JPEG, SPIHT and EZW algorithms.
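A minimal sketch (not the thesis code; the test image is synthetic) of two of the objective metrics used in this comparison, MSE and PSNR, computed directly with NumPy; SSIM is available separately as skimage.metrics.structural_similarity in scikit-image:

```python
import numpy as np

def mse(original, reconstructed):
    """Mean squared error between two images of the same shape."""
    o = np.asarray(original, dtype=np.float64)
    r = np.asarray(reconstructed, dtype=np.float64)
    return np.mean((o - r) ** 2)

def psnr(original, reconstructed, max_value=255.0):
    """Peak signal-to-noise ratio in dB for images with peak value max_value."""
    err = mse(original, reconstructed)
    if err == 0:
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / err)

# Toy example: an 8-bit test image degraded by quantization-like noise.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(256, 256)).astype(np.float64)
rec = np.clip(img + rng.normal(0, 5, img.shape), 0, 255)
print(f"MSE  = {mse(img, rec):.2f}")
print(f"PSNR = {psnr(img, rec):.2f} dB")
```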
596

Design, modelling and simulation of a novel micro-electro-mechanical gyroscope with optical readouts

Zhang, Bo January 2007 (has links)
Thesis (MTech (Electrical Engineering))--Cape Peninsula University of Technology, 2007 / Micro-Electro-Mechanical Systems (MEMS) are among the fastest-developing technologies at present. MEMS processes leverage mainstream IC technologies to achieve on-chip sensor interfaces and signal-processing circuitry, multi-vendor accessibility, short design cycles, more on-chip functions and low cost. MEMS fabrication is based on thin-film surface microstructures, bulk micromachining, and LIGA processes. This thesis centers on developing optical micromachined inertial sensors based on MEMS fabrication technology that incorporates bulk Si into microstructures. Micromachined inertial sensors, consisting of accelerometers and gyroscopes, are one of the most important types of silicon-based sensors. Microaccelerometers alone have the second largest sales volume after pressure sensors, and it is believed that gyroscopes will soon be mass produced at volumes similar to those of traditional gyroscopes. A traditional gyroscope is a device for measuring or maintaining orientation, based on the principle of conservation of angular momentum. The essence of the device is a spinning wheel on an axle: once spinning, it tends to resist changes to its orientation because of the angular momentum of the wheel. In physics this phenomenon is also known as gyroscopic inertia or rigidity in space; its applications are limited by the large volume of the device. MEMS gyroscopes, which use MEMS fabrication technology to minimize the size of the gyroscope system, are of great importance in commercial, medical, automotive and military fields. They can be used in cars for ABS systems, for anti-roll devices and for navigation among tall buildings where GPS may fail. They can also be used for the navigation of robots in tunnels or piping, for guiding capsules containing medicines or diagnostic equipment through the human body, or as 3-D computer mice. However, MEMS gyroscope chips are limited in measurement precision by the imprecision of their electrical readout systems, while the market needs highly accurate, high-G-sustainable inertial measurement units (IMUs). Optical sensing approaches have been popular for some time because of their performance, small volume and simplicity, but the production cost of optical devices has not been acceptable to consumers. MEMS fabrication technology now makes possible low-cost micro-optical devices such as light sources, waveguides, very thin optical fibers, micro photodetectors, and various demodulation measurement methods. Optical sensors may be defined as a means through which a measurand interacts with light guided in an optical fiber (an intrinsic sensor) or guided to (and returned from) an interaction region (an extrinsic sensor) by an optical fiber, producing an optical signal related to the parameter of interest. Over its more than 30 years of history, fiber-optic sensor technology has been successfully applied by laboratories and industries worldwide to the detection of a large number of mechanical, thermal, electromagnetic, radiation, chemical, motion, fluid-flow and turbulence, and biomedical parameters.
Fiber-optic sensors offer advantages over conventional electronic sensors: survivability in harsh environments, immunity to electromagnetic interference (EMI), light weight, small size, compatibility with optical-fiber communication systems, high sensitivity to many measurands, and good potential for multiplexing. In general, the transducers used in these fiber-optic sensor systems are either intensity modulators or phase modulators. Optical interferometers, such as the Mach-Zehnder, Michelson, Sagnac and Fabry-Perot interferometers, have become widely accepted as phase modulators in optical sensors because of their ultimate sensitivity to a range of weak signals. According to the light source used, interferometric sensors can be classified as coherence interferometric sensors, when the interferometer is interrogated by a coherent light source such as a laser or monochromatic light, or low-coherence interferometric sensors, when a broadband source such as a light-emitting diode (LED) or a superluminescent diode (SLD) is used. This thesis proposes a novel micro-electro-mechanical gyroscope with an optical interferometric readout, fabricated with MEMS technology. It is an original contribution to the design of micro-opto-electro-mechanical gyroscope systems (MOEMS), intended to provide better performance than current MEMS gyroscopes. Fiber-optic interferometric sensors have proved more sensitive and precise than their electrical counterparts for measuring micrometer-scale distances. The MOEMS gyroscope design builds on proven MEMS vibratory gyroscopes and micro fiber-optic interferometric distance sensors, thereby avoiding the large size, heavy weight and complex fabrication of fiber-optic gyroscopes based on the Sagnac effect. The research starts from the Sagnac-effect fiber-optic gyroscope and existing MEMS gyroscopes, and then moves to the novel MOEMS gyroscope design, discussing its operating principles and structures. The thesis introduces the operating principles, mathematical models and simulated performance of the MOEMS gyroscope, and discusses and presents suitable MEMS fabrication processes. The first prototype will be sent to a manufacturer for fabrication and further real-time performance testing. This novel MOEMS gyroscope chip leaves considerable room for further invention, research and optimization. Future work will focus on integrating three-axis gyroscopes in a single microstructure using optical-sensor multiplexing principles, on new optical devices such as more powerful light sources and photosensitive materials, and on new demodulation processes, all of which can improve the performance and the interfaces for cooperation with other inertial sensors and navigation systems.
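A hedged numerical sketch of the readout principle described above, assuming a matched-mode vibratory gyroscope and a Michelson-type two-beam interferometer; all parameter values are illustrative and are not taken from the thesis:

```python
import numpy as np

# Hypothetical parameters (illustrative only; not from the thesis).
omega_d = 2 * np.pi * 10e3   # drive/sense resonance frequency [rad/s], matched modes
Q = 5_000                    # sense-mode quality factor
X = 5e-6                     # drive amplitude [m]
lam = 1550e-9                # readout wavelength [m]

def sense_amplitude(Omega, omega=omega_d, Q=Q, X=X):
    """Steady-state sense-mode amplitude of a matched-mode vibratory gyroscope.

    The Coriolis force 2*m*Omega*x_dot drives the sense resonator; at matched
    resonance the displacement amplitude is Y = 2*Q*Omega*X/omega.
    """
    return 2.0 * Q * Omega * X / omega

def michelson_intensity(y, lam=lam, I0=1.0, phi0=np.pi / 2):
    """Two-beam (Michelson-type) interferometer output for mirror displacement y.

    The round-trip path change 2*y gives a phase 4*pi*y/lam; phi0 biases the
    interferometer at quadrature for maximum small-signal sensitivity.
    """
    return 0.5 * I0 * (1.0 + np.cos(4.0 * np.pi * y / lam + phi0))

Omega = np.deg2rad(10.0)                  # 10 deg/s input rotation rate
Y = sense_amplitude(Omega)                # proof-mass sense displacement [m]
print(f"sense amplitude: {Y * 1e9:.1f} nm")
print(f"readout intensity change: {michelson_intensity(Y) - michelson_intensity(0.0):+.3f}")
```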
597

Design and development of a smart inverter system

Adekola, Olawale Ibrahim January 2015 (has links)
Thesis (MTech (Electrical, Electronic and Computer Engineering))--Cape Peninsula University of Technology, 2015. / Growing interest in the use of solar energy to mitigate climate change, the falling cost of PV systems and other favourable factors have increased the penetration of photovoltaic (PV) systems in the market and their contribution to the worldwide energy supply. The main component of a distributed generator (DG) is a smart inverter connected in grid-tied mode, which serves as the direct interface between the grid and the renewable energy source (RES). This research work presents a three-phase grid-tied inverter with active and reactive power control capabilities for renewable energy sources (RES) and distributed generators (DG). The inverter designed is a Voltage Source Inverter (VSI), capable of supplying energy to the utility grid from a well-regulated DC link at its input. The solution this project proposes is the implementation of a designed filter that effectively reduces the harmonics injected into the grid to values acceptable under the standards, together with an approach to controlling the real and reactive power output of the inverter that helps address the instability and power-quality problems of the distribution system. The design, modelling and simulation of the smart inverter system are performed in the MATLAB/SIMULINK software environment. A 10 kW three-phase voltage source inverter system connected to the utility grid was considered for this research. A series of simulations of the grid-connected inverter (GCI) model was carried out using different step changes in the active and reactive power references, from which the tracking response to the set power references was obtained. The simulation results verify the effectiveness of the control system, which was designed to track the set references and supply improved power quality with reduced current ripple.
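A hedged sketch of the dq-frame power calculation that underlies this kind of active/reactive power control (the transform convention and the numerical values are assumptions for illustration, not the thesis's SIMULINK model):

```python
import numpy as np

def abc_to_dq(a, b, c, theta):
    """Amplitude-invariant Park transform of three-phase quantities at grid angle theta."""
    d = (2.0 / 3.0) * (a * np.cos(theta)
                       + b * np.cos(theta - 2.0 * np.pi / 3.0)
                       + c * np.cos(theta + 2.0 * np.pi / 3.0))
    q = -(2.0 / 3.0) * (a * np.sin(theta)
                        + b * np.sin(theta - 2.0 * np.pi / 3.0)
                        + c * np.sin(theta + 2.0 * np.pi / 3.0))
    return d, q

def pq_from_dq(vd, vq, id_, iq):
    """Instantaneous active/reactive power (amplitude-invariant convention)."""
    p = 1.5 * (vd * id_ + vq * iq)
    q = 1.5 * (vq * id_ - vd * iq)
    return p, q

# Balanced ~230 Vrms grid voltage and a current lagging it, sampled at one instant.
Vm, Im, phi = 325.0, 20.5, np.deg2rad(10.0)        # illustrative values only
theta = np.deg2rad(37.0)                            # arbitrary grid angle
va, vb, vc = (Vm * np.cos(theta + k) for k in (0.0, -2 * np.pi / 3, 2 * np.pi / 3))
ia, ib, ic = (Im * np.cos(theta - phi + k) for k in (0.0, -2 * np.pi / 3, 2 * np.pi / 3))

vd, vq = abc_to_dq(va, vb, vc, theta)               # voltage aligned with d-axis -> vq ~ 0
id_, iq = abc_to_dq(ia, ib, ic, theta)
p, q = pq_from_dq(vd, vq, id_, iq)
print(f"vd = {vd:.1f} V, vq = {vq:.1f} V, P = {p / 1e3:.2f} kW, Q = {q / 1e3:.2f} kvar")

# With vq ~ 0, current set-points for references P_ref, Q_ref follow directly:
P_ref, Q_ref = 10e3, 0.0
id_ref, iq_ref = 2 * P_ref / (3 * vd), -2 * Q_ref / (3 * vd)
print(f"id* = {id_ref:.1f} A, iq* = {iq_ref:.1f} A")
```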
598

Avaliação do desvio lateral do feixe em protonterapia / Evaluation of the lateral beam deviation in proton therapy

Cassetta Junior, Francisco Roberto 31 July 2013 (has links)
This work presents a study of the lateral deviation of the beam used in proton therapy. Using software such as SRIM2012 and GEANT4, which simulate the passage of heavy particles through matter, it was possible to obtain results for specific studies in this area. The value of these simulations lies in their ease of use compared with the high cost of hadron accelerators, and in the wide variety of analyses they make possible. Proton therapy has shown promising results compared with conventional photon treatment, and the analysis of the parameters calculated by simulation, together with their levels of uncertainty in the region of the Bragg peak, will be essential for the future improvement of proton-therapy treatment plans. The targets used for the analysis of proton-beam scattering were composed of water and of human-tissue-substitute materials. The results show that the lateral deviation in the region of maximum proton range, where the Bragg peak lies, increases with the initial energy of the beam, and that this lateral uncertainty is of the same order of magnitude as the uncertainty in depth reported in the current literature.
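As a back-of-the-envelope illustration of why the lateral spread grows with depth and beam energy, the Highland multiple-Coulomb-scattering approximation can be evaluated for protons in water; this is a rough analytical stand-in, not the SRIM/GEANT4 simulations used in the thesis, and it ignores energy loss along the path:

```python
import numpy as np

M_P = 938.272      # proton rest mass [MeV/c^2]
X0_WATER = 36.08   # radiation length of water [cm] (density ~1 g/cm^3)

def highland_theta0(T_mev, depth_cm, x0_cm=X0_WATER):
    """Highland estimate of the RMS projected scattering angle [rad] for a proton
    of kinetic energy T_mev traversing depth_cm of water (energy loss neglected)."""
    E = T_mev + M_P
    pc = np.sqrt(E**2 - M_P**2)       # momentum * c [MeV]
    beta = pc / E
    t = depth_cm / x0_cm
    return 13.6 / (beta * pc) * np.sqrt(t) * (1.0 + 0.038 * np.log(t))

# Illustrative energy/depth pairs, not treatment-plan values.
for T, depth in [(70.0, 4.0), (150.0, 15.0), (230.0, 30.0)]:
    theta0 = highland_theta0(T, depth)
    sigma_lateral_cm = depth * theta0 / np.sqrt(3.0)   # rough lateral RMS displacement
    print(f"T = {T:5.1f} MeV, depth = {depth:4.1f} cm -> "
          f"theta0 = {np.degrees(theta0):.2f} deg, sigma_lateral ~ {10 * sigma_lateral_cm:.1f} mm")
```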
599

Uma Simulação computacional do passeio aleatório simples / A computer simulation of simple random walk

Ighor Opiliar Mendes Rimes 24 February 2015 (has links)
In 1828 a phenomenon was observed under the microscope: tiny pollen grains suspended in a liquid at rest moved about at random, tracing a disordered path. The question was how to understand this movement. About 80 years later, Einstein (1905) developed a mathematical formulation to explain this phenomenon, known as Brownian motion, a theory that has since been developed in many areas of knowledge, recently including computational modelling. The goal of this work is to set out the basic assumptions inherent in the simple random walk, considering experiments with and without a boundary-value problem, for a better understanding of the use of algorithms applied to computational problems. The tools needed to apply simulation models of the simple random walk in the first three dimensions of space are presented. Attention is directed both to the simple random walk itself and to possible applications to the gambler's-ruin problem and to the spread of viruses in computer networks. Algorithms for the one-dimensional simple random walk, with and without a boundary-value problem, were developed on the R platform. They were likewise implemented for two- and three-dimensional spaces, enabling future applications to the problem of virus spread in computer networks and providing motivation for the study of the heat equation, although a stronger grounding in concepts of physics and probability is needed to carry that application further.
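The thesis implemented these walks on the R platform; the sketch below shows the same idea in Python with illustrative parameters: a one-dimensional simple random walk with absorbing barriers, i.e. the gambler's-ruin problem, whose estimated ruin probability can be checked against the closed-form value for a fair walk:

```python
import numpy as np

def ruin_probability(start, goal, n_walks=20_000, rng=None):
    """Monte Carlo estimate of the gambler's-ruin probability for a fair simple
    random walk started at `start` with absorbing barriers at 0 and `goal`."""
    rng = rng or np.random.default_rng(0)
    ruined = 0
    for _ in range(n_walks):
        position = start
        while 0 < position < goal:
            position += rng.choice((-1, 1))   # fair +/-1 step
        ruined += (position == 0)
    return ruined / n_walks

start, goal = 3, 10
estimate = ruin_probability(start, goal)
exact = 1.0 - start / goal                    # closed form for the fair walk
print(f"estimated ruin probability: {estimate:.3f} (exact {exact:.3f})")
```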
