221 |
Scalable Collaborative Filtering Recommendation Algorithms on Apache Spark
Casey, Walker Evan, 01 January 2014 (has links)
Collaborative filtering based recommender systems use information about a user's preferences to make personalized predictions about content, such as topics, people, or products, that the user might find relevant. As the volume of accessible information and the number of active users on the Internet continue to grow, it becomes increasingly difficult to compute recommendations quickly and accurately over a large dataset. In this study, we introduce an algorithmic framework built on top of Apache Spark for parallel computation of the neighborhood-based collaborative filtering problem, which allows the algorithm to scale linearly with a growing number of users. We also investigate several variants of this technique, including user- and item-based recommendation approaches, correlation- and vector-based similarity calculations, and selective down-sampling of user interactions. Finally, we provide an experimental comparison of these techniques on the MovieLens dataset consisting of 10 million movie ratings.
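The neighborhood-based approach the abstract describes can be illustrated with a minimal sketch of item-based filtering: compute cosine similarity between item rating columns, then predict a missing rating from similar co-rated items. This is plain Python with made-up ratings, not the thesis's Spark implementation.

```python
from math import sqrt

# Toy user -> {item: rating} data (hypothetical, not from MovieLens).
ratings = {
    "u1": {"A": 5.0, "B": 3.0, "C": 4.0},
    "u2": {"A": 4.0, "B": 3.0},
    "u3": {"B": 2.0, "C": 5.0},
}

def item_vector(item):
    # One column of the user-item matrix: every rating given to this item.
    return {u: r[item] for u, r in ratings.items() if item in r}

def cosine(a, b):
    # Cosine similarity between two item columns; dot product over co-raters.
    va, vb = item_vector(a), item_vector(b)
    common = set(va) & set(vb)
    dot = sum(va[u] * vb[u] for u in common)
    na = sqrt(sum(v * v for v in va.values()))
    nb = sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Predict u3's rating for item A as a similarity-weighted average
# of u3's ratings for the items co-rated with A.
sims = {j: cosine("A", j) for j in ("B", "C")}
pred = sum(sims[j] * ratings["u3"][j] for j in sims) / sum(sims.values())
print(round(pred, 3))
```

In the Spark setting each item column becomes a keyed record, so the pairwise similarities parallelize naturally across partitions, which is what lets the thesis's framework scale with the number of users.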
|
222 |
Numerical Study on Spark Ignition Characteristics of Methane-air Mixture Using Detailed Chemical Kinetics: Effect of Electrode Temperature and Energy Channel Length on Flame Propagation and Relationship between Minimum Ignition Energy and Equivalence Ratio
YAMAMOTO, Kazuhiro; YAMASHITA, Hiroshi; HAN, Jilin, January 2009 (has links)
No description available.
|
223 |
Applying alternative fuels in place of hydrogen to the jet ignition process
Toulson, E., January 2008 (has links)
Hydrogen Assisted Jet Ignition (HAJI) is an advanced ignition process that allows ignition of ultra-lean mixtures in an otherwise standard gasoline-fuelled spark ignition engine. Under typical operating conditions, a small amount of H2 (~2% of the main fuel energy, or roughly the equivalent of 1 g/km of H2) is injected just before ignition in the region of the spark plug. By locating the spark plug in a small prechamber (less than 1% of the clearance volume) and by employing a H2-rich mixture, the contents of the prechamber are rich in active species that decompose into H and OH radicals, and have a relatively high energy level compared to the lean main chamber contents. Thus, the vigorous jets of chemically active combustion products that issue through orifices connecting to the main chamber burn the main charge rapidly and with almost no combustion variability (less than 2% coefficient of variation in IMEP even at λ = 2.5). / The benefits of the low-temperature combustion at λ = 2 and leaner are that almost zero NOx is formed and thermal efficiency improves. The efficiency improvements result from the elimination of dissociation, such as CO2 to CO, which normally occurs at high temperatures, together with reduced throttling losses to maintain the same road power. It is even possible to run the engine entirely unthrottled, but at λ = 5. / Although only a small amount of H2 is required for the HAJI process, H2 is difficult both to refuel and to store onboard. To overcome these obstacles, the viability of a variety of more convenient fuels was experimentally assessed against criteria such as combustion stability, lean limit, and emission levels. The prechamber fuels tested were liquefied petroleum gas (LPG), natural gas, reformed gasoline, and carbon monoxide. Additionally, LPG was employed as the main fuel in conjunction with H2 or LPG in the prechamber.
Furthermore, the effects of HAJI operation under sufficient exhaust gas recirculation to allow a stoichiometric fuel-air supply, thus permitting three-way catalyst application, were also examined. / In addition to the experiments, prechamber and main chamber flame propagation modeling was completed to examine the effect of each prechamber fuel on the ignition of the main fuel, which consisted of either LPG or gasoline. The modeling and experimental results showed similar trends, with the modeling results giving insight into the physicochemical process by which main fuel combustion is initiated in the HAJI process. / Both the modeling and experimental results indicate that the level of ignition enhancement provided by HAJI depends strongly on the generation of chemical species and not solely on the energy content of the prechamber fuel. Although H2 was found to be the most effective fuel in a study at a very light load condition (70 kPa MAP), especially when running in the ultra-lean region, the alternative fuels ran effectively at λ = 2-2.5 with almost zero NOx formation. These lean limits are about twice the value possible with spark ignition (λ = 1.25) in this engine at similar load conditions. In addition, the LPG results are very encouraging, as they offer the possibility of a HAJI-like system in which a commercially available fuel is used as both the main and prechamber fuel, while providing thermal efficiency improvements over stoichiometric operation and meeting current NOx emission standards.
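The combustion-stability metric quoted in this abstract, the coefficient of variation (CoV) of IMEP, is simply the cycle-to-cycle standard deviation of indicated mean effective pressure divided by its mean. A minimal sketch with invented cycle data (not measurements from the thesis):

```python
from statistics import mean, pstdev

# Hypothetical IMEP values (bar) over consecutive engine cycles.
imep = [5.02, 4.97, 5.05, 4.99, 5.01, 4.96, 5.04, 5.00]

# CoV(IMEP) in percent: population std. dev. over the mean.
cov_percent = 100.0 * pstdev(imep) / mean(imep)
print(round(cov_percent, 2))
```

A CoV below the 2% figure cited for HAJI indicates very repeatable combustion; conventional spark ignition near the lean limit typically shows far larger cycle-to-cycle scatter.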
|
224 |
The effectiveness of the SPARK program in increasing fitness among children and adolescents
Sales, Latrice Stephanie, January 2007 (has links) (PDF)
Thesis (M.S.)--Georgia Southern University, 2007. / "A thesis submitted to the Graduate Faculty of Georgia Southern University in partial fulfillment of the requirements for the degree Master of Science." In Kinesiology, under the direction of Jim McMillan. ETD. Electronic version approved: July 2007. Includes bibliographical references (p. 44-48) and appendices.
|
225 |
Runtime and jitter of a laser triggered gas switch
Hutsel, Brian T.; Kovaleski, Scott D., January 2008 (has links)
The entire thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file; a non-technical public abstract appears in the public.pdf file. Title from PDF of title page (University of Missouri--Columbia, viewed on September 24, 2009). Thesis advisor: Dr. Scott Kovaleski. Includes bibliographical references.
|
226 |
Leveraging the entity matching performance through adaptive indexing and efficient parallelization
MESTRE, Demetrio Gomes, 11 September 2018 (has links)
Previous issue date: 2018-03-27 / Entity Matching (EM), i.e., the task of identifying all entities that refer to the same real-world object, is an important and difficult task for data-source integration and cleansing. A major difficulty, in the Big Data era, is the quadratic nature of the task's execution time. To minimize the workload while maintaining high matching quality, for both single and multiple data sources, indexing (blocking) methods have been proposed. Such methods partition the input data into blocks of potentially similar entities, according to an entity attribute or a combination of attributes commonly called a "blocking key", and restrict the EM process to entities that share the same blocking key (i.e., belong to the same block). Although they considerably decrease the number of comparisons executed, indexing methods can still generate large numbers of comparisons, depending on the size of the data sources involved and/or the number of entities per index (or block). Thus, to further reduce the execution time, the EM task can be performed in parallel using programming models such as MapReduce and Spark. However, the effectiveness and scalability of MapReduce- and Spark-based implementations for data-intensive tasks depend on the data assignment made from map to reduce tasks, in the case of MapReduce, and on the data assignment between transformation operations, in the case of Spark. The robustness of this assignment strategy is crucial for handling skewed data (large blocks that can cause memory bottlenecks) and for balancing the workload across all nodes of the distributed infrastructure. Since the literature leaves an open gap regarding approaches that execute adaptive indexing EM methods efficiently, in batch or real-time mode, in a parallel-computing context, this work proposes a set of parallel approaches capable of doing so using MapReduce and Spark. The proposed approaches are compared with state-of-the-art ones in terms of performance using real cluster infrastructures and data sources. The results so far show that the proposed approaches significantly increase the performance of distributed EM, reducing the overall runtime while preserving the quality of similar-entity detection.
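The blocking idea described in this abstract, label records with a blocking key and compare only within a block, can be sketched as follows. The records and the key function are hypothetical; a real system would use the adaptive, skew-aware methods the thesis proposes.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical person records from two noisy sources.
records = [
    {"id": 1, "name": "maria silva"},
    {"id": 2, "name": "maria sylva"},
    {"id": 3, "name": "joao souza"},
    {"id": 4, "name": "joana souza"},
]

def blocking_key(rec):
    # Simple illustrative key: first three letters of the first name.
    return rec["name"].split()[0][:3]

# Partition records into blocks by key.
blocks = defaultdict(list)
for rec in records:
    blocks[blocking_key(rec)].append(rec)

# Compare only pairs inside the same block: quadratic per block,
# not over the whole dataset.
candidate_pairs = [
    (a["id"], b["id"])
    for block in blocks.values()
    for a, b in combinations(block, 2)
]
print(sorted(candidate_pairs))
```

Here blocking cuts six all-pairs comparisons down to two candidate pairs; in a MapReduce or Spark setting, the blocking key plays the role of the shuffle key, which is why skewed (oversized) blocks translate directly into unbalanced reduce tasks.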
|
227 |
Desenvolvimento do processo de estampagem para miniaturização de motores / Micro deep drawing applied in the fabrication of micromotors
Boff, Uilian, January 2012 (has links)
The micro deep drawing process allows the fabrication of microcomponents and can be applied in many fields of engineering. This study aims to develop a stepper micromotor using this technology and to evaluate the effects of miniaturizing its components: the motor housing, rotor, and stator. Computer simulation with the finite element software DYNAFORM, using the LS-DYNA solver, was carried out to evaluate the defects that arise with miniaturization. The material used for the housing was low-carbon steel ABNT 1010 and stainless steel ABNT 304; for the magnetic core, comprising the rotor and stator, the electrical steel ABNT 35F 420M was employed. Besides identifying the problems caused by miniaturizing the components, the simulation was also used to optimize the micro deep drawing tools, proving a valuable aid in developing the process. Conventional die cutting was not used for the rotor and stator because it produced defects such as warping and burrs; instead, wire electrical discharge machining (spark erosion) was employed, which produced flat parts with smooth surfaces.
|
228 |
Caractérisation théorique du plasma lors de l'application d'un courant impulsionnel : application à l'allumage des moteurs / Theoretical characterization of the plasma during the application of a pulsed current: application to engine ignition
Benmouffok, Malyk, 23 March 2016 (has links)
The difficult economic and ecological context, together with CO2 emission regulations, is pushing the automotive industry to improve spark-ignition engines. One avenue of improvement is the admission of lean mixtures, or of mixtures heavily diluted by recirculated exhaust gases (EGR), into the combustion chamber. The main difficulty under these conditions is initiating combustion. To overcome this problem, ignition systems, and the spark in particular, are being studied. This discharge gives rise to a plasma, and understanding the mechanisms of energy transfer between this plasma and the surrounding reactive gas is essential. This work focuses on modeling the spark during its electric-arc phase in order to predict the hydrodynamic behavior of the arc and the propagation of the shock wave. The two- and three-dimensional transient models are based on the ANSYS Fluent software coupled with user-defined functions developed by the AEPPT team. They first draw on the literature to understand the general behavior of the discharge, then on experimental configurations used within the ANR FAMAC project. The simulations are first, and mostly, performed in air on simplified pin-to-pin configurations in order to validate the model. A study is then carried out in a vessel configuration in which the arc is generated between the electrodes of a real spark plug. The model makes it possible to demonstrate the role of each initial simulation parameter and its impact on the plasma flow. The influence of accounting for the magnetic field is shown for a nanosecond pulsed arc. Finally, the model was used to show the role of a lateral laminar flow directed at a conventional discharge generated by an Audi ignition coil. Together, these results can serve as the starting point for an energy study of ignition systems and for a discussion of how combustion is initiated.
|
229 |
Avaliação numérica e experimental do desempenho de um motor Otto operando com etanol hidratado / Numerical and experimental evaluation of the performance of an Otto-cycle engine operating on hydrous ethanol
Lanzanova, Thompson Diordinis Metzka, January 2013 (has links)
An environmentally friendly way to manage available energy resources and reduce greenhouse-gas emissions is to use biofuels instead of fossil fuels in internal combustion engines. However, the sometimes higher price of biofuels can limit their widespread, viable use. Concerning ethanol production costs, obtaining mixtures above 80% ethanol in water demands an exponentially increasing energy input. Hence, if low-cost, high-water-content ethanol could be burned successfully in internal combustion engines, the fuel would become more attractive and more widely used. This work analyzes the performance of a spark-ignition engine running on ethanol at different hydration levels, through computer simulation and experimental procedures. A naturally aspirated, 0.668 L single-cylinder Diesel-cycle engine with a 19:1 compression ratio and pre-chamber direct injection was modified to operate on the Otto cycle, with port fuel injection and a compression ratio of 12:1. Dynamometer tests were carried out with commercial hydrous ethanol (95% ethanol, 5% water) and with water-in-ethanol blends at higher hydration levels (volumetric content down to 60% ethanol and 40% water). Computer simulation with one-dimensional finite-volume software was used to perform a heat-release analysis. Stable operation was achieved with blends of up to 40% water in ethanol, and thermal efficiency increased for blends with up to 30% water.
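The compression-ratio change reported in this abstract (19:1 Diesel geometry reworked to 12:1 Otto operation) can be put in perspective with the ideal air-standard Otto-cycle efficiency, η = 1 − r^(1−γ). This textbook formula with γ = 1.4 is only an upper bound and is not the heat-release analysis the thesis performs:

```python
# Ideal air-standard Otto-cycle thermal efficiency: eta = 1 - r**(1 - gamma).
# gamma = 1.4 (cold air standard); real engines fall well below this bound.
GAMMA = 1.4

def otto_efficiency(r):
    return 1.0 - r ** (1.0 - GAMMA)

for r in (12, 19):
    print(r, round(otto_efficiency(r), 3))
```

The ideal bound rises from roughly 63% at r = 12 to roughly 69% at r = 19, which is why lowering the compression ratio for spark-ignition operation sacrifices some theoretical efficiency in exchange for knock-free port-injected combustion.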
|
230 |
Correlação probabilística implementada em spark para big data em saúde / Probabilistic record linkage implemented in Spark for big data in health
Pita, Robespierre Dantas da Rocha, 05 March 2015
The application of probabilistic record linkage techniques to the health or socioeconomic records of a population has been common practice among epidemiologists as the basis for their non-experimental research. However, the growth in data volume typical of the Big Data scenario has created a need for computational tools capable of handling these huge repositories. This work describes a solution, implemented on the Spark cluster-computing framework, for the probabilistic linkage of records from large databases of the Brazilian Public Health System. The work is part of a project that aims to analyze the relationship between the Bolsa Família program and the incidence of poverty-associated diseases such as leprosy and tuberculosis. The results obtained show that this implementation delivers matching quality competitive with existing tools and approaches, together with superior execution-time metrics.
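Probabilistic linkage scores candidate record pairs by field-level similarity instead of requiring exact key matches. A minimal sketch with hypothetical fields and weights, far simpler than the machinery a real linkage tool (or the thesis's Spark pipeline) would use:

```python
def dice_bigrams(a, b):
    # Dice coefficient over character bigrams, a common name-similarity measure.
    ga = {a[i:i + 2] for i in range(len(a) - 1)}
    gb = {b[i:i + 2] for i in range(len(b) - 1)}
    if not ga or not gb:
        return 0.0
    return 2.0 * len(ga & gb) / (len(ga) + len(gb))

# Hypothetical weights: name similarity matters more than the date field.
WEIGHTS = {"name": 0.7, "birth": 0.3}

def pair_score(rec_a, rec_b):
    # Weighted sum of per-field similarities; birth date compared exactly.
    name_sim = dice_bigrams(rec_a["name"], rec_b["name"])
    birth_sim = 1.0 if rec_a["birth"] == rec_b["birth"] else 0.0
    return WEIGHTS["name"] * name_sim + WEIGHTS["birth"] * birth_sim

a = {"name": "jose carlos lima", "birth": "1980-02-11"}
b = {"name": "jose c lima", "birth": "1980-02-11"}
score = pair_score(a, b)
print(round(score, 3))
```

Pairs above a chosen threshold are treated as referring to the same person; in a Spark implementation this scoring function would run inside a transformation over blocked candidate pairs, which is where the runtime gains reported above come from.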
|