  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Determination of the factors affecting the performance of grout packs

Grave, Douglas Marcus Hadley 26 February 2007 (has links)
Student Number : 7439270 - MSc research report - School of Mining - Faculty of Engineering and the Built Environment / In tabular mining, common in South African gold and platinum mines, the removal of the tabular ore body by mining operations leaves behind excavations known as stopes. These stopes form the production areas of a mine and have to be supported in order that a safe working environment is created. Stopes generally have widths of close to a metre but, in some areas and on certain reefs, may be much wider. Prior to the 1980s, a combination of in-stope pillars and timber was used to support these stopes, but innovations from the 1970s have produced grout packs as a viable support option. These packs are cast in situ through the use of cemented classified tailings gravitated from surface and placed in reinforced geotextile bags at the stope face. As these packs cure and become rigid they are able to bear load when compressed by stope closure. In this way, the packs keep the working areas open. To quantify the load-bearing capacity of grout packs, a range of sizes and designs was tested in a laboratory press and, thereafter, a select few were tested underground. Initially, two aspects of grout packs that had not been adequately quantified previously were addressed. These were: the in situ load / compression characteristics of different forms of grout packs; and the relationship between laboratory test results and in situ performance. The laboratory test programme was extended to allow for an investigation into methods of improving the yieldability of grout packs and the possibility of using them to replace in-stope pillars. It was found that the factors that most affect the initial strength and post-failure characteristics of a grout pack are: the grout strength; the amount and type of steel reinforcement; the inclusion of ancillary columnar support; and the height and diameter of the pack. 
It was also found that grout packs could be used to replace in-stope pillars, but that pack strength and spacing should be conservatively calculated before implementation. A provisional relationship between the behaviour of packs tested in a press and those placed underground was determined.
22

Reinforcement Learning for Racecar Control

Cleland, Benjamin George January 2006 (has links)
This thesis investigates the use of reinforcement learning to learn to drive a racecar in the simulated environment of the Robot Automobile Racing Simulator. Real-life race driving is known to be difficult for humans, and expert human drivers use complex sequences of actions. There are a large number of variables, some of which change stochastically and all of which may affect the outcome. This makes driving a promising domain for testing and developing Machine Learning techniques that have the potential to be robust enough to work in the real world. Therefore, the principles of the algorithms from this work may be applicable to a range of problems. The investigation starts by finding a suitable data structure to represent the information learnt. This is tested using supervised learning. Reinforcement learning is added and roughly tuned, and the supervised learning is then removed. A simple tabular representation is found satisfactory, and this avoids difficulties with more complex methods and allows the investigation to concentrate on the essentials of learning. Various reward sources are tested and a combination of three is found to produce the best performance. Exploration of the problem space is investigated. Results show that exploration is essential, but controlling how much is done is also important. It turns out that the learning episodes need to be very long and, because of this, the task needs to be treated as continuous by using discounting to limit the size of the variables stored. Eligibility traces are used with success to make the learning more efficient. The tabular representation is made more compact by hashing and more accurate by using smaller buckets. This slows the learning but produces better driving. The improvement given by a rough form of generalisation indicates the replacement of the tabular method by a function approximator is warranted. 
These results show reinforcement learning can work within the Robot Automobile Racing Simulator, and lay the foundations for building a more efficient and competitive agent.
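The combination the abstract describes — a tabular value representation, discounting for a continuing task, and eligibility traces — can be sketched in a few lines. This is a hedged illustration on a toy one-dimensional track, not the thesis's actual agent; the environment, reward, and hyperparameters are invented for the example:

```python
import random

def q_lambda_chain(n_states=6, episodes=500, alpha=0.2, gamma=0.95,
                   lam=0.8, epsilon=0.1, seed=0):
    """Tabular Q(lambda) on a toy 1-D chain: actions 0=left, 1=right,
    reward 1.0 for reaching the right end (a stand-in for track progress)."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]          # tabular state-action values

    def argmax_rand(qs):
        # greedy action with random tie-breaking, so early ties do not lock in
        best = max(qs)
        return rng.choice([i for i, q in enumerate(qs) if q == best])

    for _ in range(episodes):
        E = [[0.0, 0.0] for _ in range(n_states)]      # eligibility traces
        s = 0
        while s != n_states - 1:
            # epsilon-greedy exploration: essential, but kept small
            a = rng.randrange(2) if rng.random() < epsilon else argmax_rand(Q[s])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            delta = r + gamma * max(Q[s2]) - Q[s][a]   # discounted TD error
            E[s][a] += 1.0                             # mark the visited pair
            for st in range(n_states):
                for ac in (0, 1):
                    Q[st][ac] += alpha * delta * E[st][ac]
                    E[st][ac] *= gamma * lam           # decay all traces
            s = s2
    return Q
```

After training, the greedy policy drives right from every state; in the thesis the state space is a hashed discretisation of the car's sensor readings rather than a chain, but the update rule is of this family.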
23

Cell Formation: A Real Life Application

Uyanik, Basar 01 September 2005 (has links) (PDF)
In this study, the plant layout problem of a worldwide Printed Circuit Board (PCB) producer is analyzed. Machines are grouped into cells using three grouping methodologies: the tabular algorithm, the k-means clustering algorithm, and hierarchical grouping with Levenshtein distances. The plant layouts formed by the different techniques are then evaluated using technical and economic indicators.
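As an illustration of the Levenshtein-based grouping idea, the sketch below computes edit distances between machine "signatures" (hypothetical binary part-incidence strings, not the company's actual data) and merges machines whose distance falls below a threshold:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def group_machines(signatures, threshold):
    """Single-linkage grouping: machines fall into the same cell when any
    chain of signature distances at or below the threshold connects them."""
    parent = list(range(len(signatures)))
    def find(i):
        while parent[i] != i:
            i = parent[i]
        return i
    for i in range(len(signatures)):
        for j in range(i + 1, len(signatures)):
            if levenshtein(signatures[i], signatures[j]) <= threshold:
                parent[find(j)] = find(i)   # union the two groups
    cells = {}
    for i in range(len(signatures)):
        cells.setdefault(find(i), []).append(i)
    return list(cells.values())
```

With signatures `["10110", "10111", "00100", "00101"]` and threshold 1, machines 0-1 and 2-3 form two cells.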
24

The computational and geospatial analysis of the water resources of the Pecém Port Complex / Uma análise computacional e geoespacial do sistema hídrico do Complexo Portuário do Pecém

Fernando Antônio Costa Pereira 03 April 2014 (has links)
não há / The present work aims to use computational tools and tabular georeferencing to diagnose and analyze the water balance of the whole system that serves the Pecém Port Complex (CPP), in order to support decision-making in the face of critical events and to guarantee the supply of raw water, in both quantity and quality, to the multiple users already licensed and to the new concessions foreseen within the complex. Through the computational and geospatial analysis of both the water sources and the users connected to the CPP's water resources, and with the measurement of key hydraulic information, mainly the flow effectively consumed and the flow granted, it is possible to draw up a scheme that shows whether or not this water system is vulnerable, along with possible alternative solutions to avoid undersupply of the complex; it also shows the prospects of water supply for meeting future demand, based on the strategic planning and on the socio-economic development studies and projects for this important region. Thus, it is expected that an efficient network comprising the most diverse hydraulic structures can be developed, allowing the generation of a wide range of scenarios involving the water balance and the security of the water supply to the Pecém Port Complex. / O presente trabalho tem por objetivo utilizar um modelo de análise de ferramentas computacionais tabulares e georreferenciamento como instrumentos para diagnosticar e analisar o balanço hídrico de todo o sistema que atende ao Complexo Portuário do Pecém – CPP, de forma a proporcionar suporte na tomada de decisão frente a eventos críticos e para dar garantias de abastecimento de água bruta em quantidade e qualidade para os múltiplos usuários já outorgados e com previsão de novas outorgas no Complexo Portuário do Pecém. 
Através da análise computacional e geoespacial, tanto das fontes hídricas como dos usuários ligados aos recursos hídricos do CPP, e com a mensuração de algumas informações hidráulicas, principalmente a vazão efetivamente consumida e a vazão outorgada, pode-se traçar, assim, um esquema que mostre a vulnerabilidade ou não desse sistema hídrico e as possíveis alternativas de soluções para evitar o desabastecimento do Complexo Portuário do Pecém, bem como mostrar a perspectiva de oferta hídrica para o atendimento da demanda hídrica futura a partir do planejamento estratégico e dos estudos e projetos de desenvolvimento socioeconômico para esta importante região. Sendo assim, espera-se elaborar uma eficiente rede que seja composta das mais diferentes estruturas hidráulicas, para possibilitar a geração dos mais diversos cenários envolvendo o balanço hídrico e a segurança do fornecimento de água para o Complexo Portuário do Pecém.
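A minimal sketch of the kind of water-balance bookkeeping the abstract describes: compare the supply capacity of the sources with the granted (outorgada) and effectively consumed flows of the connected users. All figures and user names below are hypothetical, chosen only to illustrate the accounting, not taken from the CPP study:

```python
def water_balance(supply_m3s, users):
    """users maps each user to (granted_flow, consumed_flow) in m3/s.
    The system is flagged vulnerable when total granted flow exceeds
    what the sources can deliver."""
    granted = sum(g for g, _ in users.values())
    consumed = sum(c for _, c in users.values())
    return {
        "granted_total": granted,
        "consumed_total": consumed,
        "margin": supply_m3s - granted,      # negative => over-allocated
        "vulnerable": granted > supply_m3s,
    }

# Hypothetical example: 2.0 m3/s of firm supply shared by three users
report = water_balance(2.0, {
    "steel_plant": (0.9, 0.7),
    "thermal_plant": (0.8, 0.5),
    "port_terminal": (0.5, 0.2),
})
```

Here the granted total (2.2 m3/s) exceeds the firm supply, so the system is flagged vulnerable even though current consumption (1.4 m3/s) still fits; this is precisely the gap between outorgada and consumed flow that the diagnosis targets.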
25

Synthetic Data Generation Using Transformer Networks / Textgenerering med transformatornätverk : Skapa text från ett syntetiskt dataset i tabellform

Campos, Pedro January 2021 (has links)
One of the areas propelled by the advancements in Deep Learning is Natural Language Processing. These continuous advancements allowed the emergence of new language models such as the Transformer [1], a deep learning model based on attention mechanisms that takes a sequence of symbols as input and outputs another sequence, attending to the input during generation. This model is often used in translation, text summarization and text generation, outperforming previously used methods such as Recurrent Neural Networks and Generative Adversarial Networks. The problem statement provided by the company Syndata for this thesis is related to this new architecture: given a tabular dataset, create a model based on the Transformer that can generate text fields considering the underlying context from the rest of the accompanying fields. In an attempt to accomplish this, Syndata has previously implemented a recurrent model; nevertheless, they are confident that a Transformer could perform better at this task. Their goal is to improve the provided solution with the implementation of a model based on the Transformer architecture. The implemented model should then be compared to the previous recurrent model and is expected to outperform it. Since there are not many published research articles in which Transformers are used for synthetic tabular data generation, this problem is fairly original. Four different models were implemented: a model based on the GPT architecture [2], an LSTM [3], a Bidirectional LSTM with an Encoder-Decoder structure, and the Transformer. The first two are autoregressive models and the latter two are sequence-to-sequence models with an Encoder-Decoder architecture. 
We evaluated each of them on three different aspects: the distribution similarity between the real and generated datasets, how well each model was able to condition name generation on the information contained in the accompanying fields, and how much real data the model compromised after generation, which addresses a privacy-related issue. We found that Encoder-Decoder models such as the Transformer and the Bidirectional LSTM seem to perform better for this type of synthetic data generation, where the output (the field to be predicted) has to be conditioned on the rest of the accompanying fields. They outperformed the GPT and RNN models in the aspects that matter most to Syndata: keeping customer data private and correctly conditioning the output on the information contained in the accompanying fields. / Deep learning har lett till stora framsteg inom textbaserad språkteknologi (Natural Language Processing) där en typ av maskininlärningsarkitektur kallad Transformers [1] har gjort ett extra stort intryck. Dessa modeller använder sig av en så kallad attention-mekanism, tränas som språkmodeller (Language Models), där de tar in en sekvens av symboler och matar ut en annan. Varje steg i den utgående sekvensen beror olika mycket på steg i den ingående sekvensen givet vad denna attention-mekanism lärt sig vara relevant. Dessa modeller används för översättning, sammanfattning och textgenerering och har överträffat andra arkitekturer som Recurrent Neural Networks (RNN) samt Generative Adversarial Networks. Problemformuleringen för denna avhandling kom från företaget Syndata och är relaterad till denna arkitektur: givet tabellbaserad data, implementera en Transformer som genererar textfält beroende av informationen i de medföljande tabellfälten. Syndata har tidigare implementerat ett RNN för detta ändamål men är övertygade om att en Transformer kan prestera bättre. 
Målet för denna avhandling är att implementera en Transformer och jämföra med den tidigare implementationen, med hypotesen att den kommer att prestera bättre. Det underliggande målet är att givet data i tabellform kunna generera ny syntetisk data, användbar för industrin, där problem kring integritet och privat information kan minimeras. Fyra modeller implementerades: en Transformermodell baserad på GPT-arkitekturen [2], en LSTM-modell [3], en encoder-decoder Transformer och en BiLSTM-modell. De två förstnämnda modellerna är autoregressiva och de senare två är sequence-to-sequence-modeller med en encoder-decoder-arkitektur. Dessa modeller utvärderades och jämfördes utifrån tre kriterier: hur lik sannolikhetsfördelningen i den genererade datamängden är den verkliga, hur mycket varje modell baserade genereringen på de medföljande fälten och hur mycket verklig data som komprometteras genom synteseringen. Slutsatsen var att encoder-decoder-varianterna, Transformern och BiLSTM, var bättre för att syntetisera data i tabellformat, där utdatan (eller fälten som ska genereras) ska uppvisa ett starkt beroende av resten av de medföljande fälten. De överträffade GPT- och RNN-modellerna i de aspekter som betyder mest för Syndata: att hålla kunddata privat och att den syntetiserade datan ska vara beroende av informationen i de medföljande fälten.
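The core data preparation behind this kind of conditional generation can be illustrated without any deep learning library: the accompanying tabular fields are serialized into a source token sequence for the encoder (or a prompt prefix, in the autoregressive case), and the text field to generate becomes the target sequence. The field names and the row below are invented for the example; this is not Syndata's implementation:

```python
def serialize_row(row, target_field):
    """Turn one tabular record into (source, target) token sequences.
    The source carries the conditioning context; the target is the
    text field the model learns to generate."""
    src = []
    for key, value in row.items():
        if key == target_field:
            continue
        src += [f"<{key}>"] + str(value).split()   # field tag, then its value tokens
    tgt = ["<bos>"] + str(row[target_field]).split() + ["<eos>"]
    return src, tgt

# Hypothetical record: generate a name conditioned on the other fields
src, tgt = serialize_row(
    {"country": "SE", "gender": "F", "name": "Anna Svensson"},
    target_field="name",
)
```

An encoder-decoder model attends to `src` while emitting `tgt`; an autoregressive model would instead be trained on the concatenation `src + tgt`, which mirrors the GPT-versus-Encoder-Decoder split evaluated in the thesis.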
26

Tabular Information Extraction from Datasheets with Deep Learning for Semantic Modeling

Akkaya, Yakup 22 March 2022 (has links)
The growing popularity of artificial intelligence and machine learning has led to the adoption of the automation vision in the industry by many other institutions and organizations. Many corporations have made it their primary objective to make the delivery of goods and services and manufacturing in a more efficient way with minimal human intervention. Automated document processing and analysis is also a critical component of this cycle for many organizations that contribute to the supply chain. The massive volume and diversity of data created in this rapidly evolving environment make this a highly desired step. Despite this diversity, important information in the documents is provided in the tables. As a result, extracting tabular data is a crucial aspect of document processing. This thesis applies deep learning methodologies to detect table structure elements for the extraction of data and preparation for semantic modelling. In order to find optimal structure definition, we analyzed the performance of deep learning models in different formats such as row/column and cell. The combined row and column detection models perform poorly compared to other models' detection performance due to the highly overlapping nature of rows and columns. Separate row and column detection models seem to achieve the best average F1-score with 78.5% and 79.1%, respectively. However, determining cell elements from the row and column detections for semantic modelling is a complicated task due to spanning rows and columns. Considering these facts, a new method is proposed to set the ground-truth information, called a content-focused annotation, to define table elements better. Our content-focused method is competent in handling ambiguities caused by huge white spaces and lack of boundary lines in table structures; hence, it provides higher accuracy. Prior works have addressed the table analysis problem under table detection and table structure detection tasks. 
However, the impact of dataset structures on table structure detection has not been investigated. We provide a comparison of table structure detection performance with cropped and uncropped datasets. The cropped set consists of only table images that are cropped from documents assuming tables are detected perfectly. The uncropped set consists of regular document images. Experiments show that deep learning models can improve the detection performance by up to 9% in average precision and average recall on the cropped versions. Furthermore, the impact of cropped images is negligible under the Intersection over Union (IoU) values of 50%-70% when compared to the uncropped versions. However, beyond 70% IoU thresholds, cropped datasets provide significantly higher detection performance.
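Since the comparison above hinges on IoU thresholds, it may help to recall how IoU is computed for two axis-aligned boxes. This is the standard formulation, not code from the thesis:

```python
def iou(box_a, box_b):
    """Intersection over Union for boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)   # overlap area, 0 if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A predicted row or column counts as correct at the 70% threshold only when `iou(predicted, ground_truth) >= 0.7`; looser thresholds accept sloppier boxes, which is why the benefit of cropping only becomes visible beyond 70% IoU.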
27

Investigating the Use of Deep Learning Models for Transactional Underwriting / En Undersökning av Djupinlärningsmodeller för Transaktionell Underwriting

Tober, Samuel January 2022 (has links)
Tabular data is the most common form of data, and is abundant throughout crucial industries such as banks, hospitals and insurance companies. Deep learning research, however, has largely been dominated by applications to homogeneous data, e.g. images or natural language. Inspired by the great success of deep learning in these domains, recent efforts have been made to tailor deep learning architectures for tabular data. In this thesis, two such models are selected and tested in the context of transactional underwriting. Specifically, the two models are evaluated in terms of predictive performance, interpretability and complexity, to ultimately see if they can compete with gradient boosted tree models and live up to industry requirements. Moreover, the pre-training capabilities of the deep learning models are tested through transfer learning experiments across different markets. It is concluded that the two models are able to outperform the benchmark gradient boosted tree model in terms of RMSE, and moreover, pre-training across markets gives a statistically significant improvement in RMSE at the 0.05 level. Furthermore, using SHAP together with model-specific explainability methods, it is concluded that the two deep learning models' explainability is on par with gradient boosted tree models. / Tabelldata är den vanligaste formen av data och finns i överflöd i viktiga branscher, såsom banker, sjukhus och försäkringsbolag. Forskningen inom djupinlärning har dock till stor del dominerats av tillämpningar på homogen data, t.ex. bilder eller naturligt språk. Inspirerad av den stora framgången för djupinlärning inom dessa domäner har nyligen ansträngningar gjorts för att skräddarsy djupinlärningsarkitekturer för tabelldata. I denna avhandling väljs och testas två sådana modeller på problemet att estimera vinstmarginalen på en transaktion. 
Specifikt utvärderas de två modellerna i termer av prediktiv prestanda, tolkningsbarhet och komplexitet, för att i slutändan se om de kan konkurrera med gradient boosted tree-modeller och leva upp till branschkrav. Dessutom testas för-träningsförmågan hos djupinlärningmodellerna genom överföringsexperiment mellan olika marknader. Man drar slutsatsen att de två modellerna kan överträffa benchmark gradient boosted tree-modellen när det gäller RMSE, och dessutom ger för-träning mellan marknader en statistiskt signifikant förbättring av RMSE, på en nivå av 0,05. Vidare, med hjälp av SHAP, tillsammans med modellspecifika förklaringsmetoder, dras slutsatsen att de två djupinlärning-modellernas förklaringsbarhet är i nivå med gradient boosted tree-modellerna.
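For reference, the RMSE metric on which the comparison rests is straightforward to state; the toy vectors below are illustrative only and unrelated to the thesis data:

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error between observed and predicted values."""
    if len(y_true) != len(y_pred) or not y_true:
        raise ValueError("inputs must be non-empty and of equal length")
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))
```

In the thesis the comparison is paired: both models score the same transactions, and the difference in RMSE is then tested for statistical significance at the 0.05 level.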
28

Integration of Heterogeneous Web-based Information into a Uniform Web-based Presentation

Janga, Prudhvi 17 October 2014 (has links)
No description available.
29

L’outillage sur plaquette en quartzite du site ElFs-010. Étude d’une technologie distinctive en Jamésie, Québec (1900-400 A.A.)

Henriet, Jean-Pierre 04 1900 (has links)
Réalisé en collaboration avec Arkéos Inc. / Ce projet de recherche tente de mieux comprendre le phénomène des supports sur plaquette en quartzite du site ElFs-010 situé en Jamésie. Aucun travail de cette ampleur n’avait encore été réalisé sur ce type d’outil. Il y avait donc un vide à combler. Afin de répondre le plus adéquatement possible à cette problématique, nous avons divisé notre travail en trois objectifs. Dans un premier temps, déterminer si les plaquettes en quartzite sont le produit d’une technologie lithique ou bien d’un processus géologique naturel. En second lieu, démontrer si nous sommes en présence d’un épiphénomène propre au site ElFs-010. Finalement, définir si une période chronologique correspond à cette industrie. Les résultats de nos recherches nous démontrent que les supports sur plaquette en quartzite du site ElFs-010 se retrouvent naturellement sur le talus d’effondrement de la Colline Blanche. Leur faible épaisseur moyenne ainsi que leurs pans abrupts ont sans doute été les facteurs qui ont le plus influencé leur sélection. En nous basant sur ces deux caractéristiques, nous suggérons qu’ils auraient pu être utilisés comme des lames interchangeables ou bien des burins. Nous avons recensé 33 sites jamésiens qui comportaient au moins un fragment de plaquette en quartzite. Malgré quelques indices archéologiques, il est encore trop tôt pour affirmer que cette industrie est diagnostique d’un groupe culturel jamésien. Les données chronologiques suggèrent que cette industrie a connu un essor vers 1300 ans A.A. De plus, il semble que les régions géographiques que nous avons attribuées aux sites correspondent à des séquences culturelles bien définies. Finalement, nos hypothèses portent sur des recherches futures concernant un ensemble d’événements qui, tout comme les supports sur plaquette en quartzite, sont révélateurs de changements dans le mode de vie des groupes préhistoriques de la Jamésie. 
Mots-clés : Archéologie, Jamésie, ElFs-010, Colline Blanche, plaquette en quartzite, technologie lithique. / This research project seeks to better understand the phenomenon of tabular quartzite tools from the archaeological site ElFs-010. No detailed work had yet been carried out on this type of tool, leaving a void to fill. To respond as adequately as possible to this problem, we focused our work on three main objectives. First, determine whether the tabular pieces of quartzite were the product of a particular lithic technology or of a natural geological process. Second, evaluate whether we are dealing with a unique phenomenon that is specific to site ElFs-010. Third, and finally, define whether a specific time period corresponds to this industry. The results of our research show that tabular pieces of quartzite from site ElFs-010 occur naturally on the talus slope of the Colline Blanche. Their low average thickness and their steep sides were probably the factors that most influenced their selection. Based on these two characteristics, we suggest that they could have been used as interchangeable blades or as burins. We identified 33 Jamesian sites that had at least one fragment of tabular quartzite. Despite some archaeological evidence, it is still too early to say that this industry is diagnostic of a Jamesian cultural group. Our chronological data suggest that this industry flourished around 1300 years BP. In addition, it appears that the geographic areas that we have attributed to the sites correspond to culturally well-defined sequences. Finally, our proposed hypotheses for future research concern the events that took place around 1300 years BP and which, like the tabular pieces of quartzite, are indicative of changes in the lifestyle of prehistoric groups of the James Bay region. Keywords: Archaeology, James Bay, ElFs-010, Colline Blanche, tabular pieces, quartzite, lithic technology.
30

Inversão 2D de dados magnetométricos com modelo prismático: Aplicação em enxames de diques / 2D inversion of magnetometric data with prismatic model: Application on the Ponta Grossa Dyke Swarm.

Cavalcante, Felipe Lisbona 22 February 2019 (has links)
Este trabalho apresenta um método de inversão de perfis de dados magnetométricos em enxames de diques, utilizando os módulos de um programa desenvolvido no contexto do Mestrado. Os enxames de diques produzem padrões complexos de anomalia, dependendo da densidade de diques ao longo do perfil avaliado, das propriedades magnéticas de cada unidade e da existência de fontes mais rasas e profundas. Poucas técnicas se mostram eficazes em inverter dados em tal cenário, seja para recuperar parâmetros confiáveis para cada dique ou valores médios em casos mais complexos. O método inclui uma abordagem de inversão por etapas para modelos compostos por múltiplos prismas finos, identificados interativamente de acordo com a qualidade do ajuste aos dados. Na abordagem proposta, a intensidade do campo vetorial anômalo é inicialmente invertida para fornecer parâmetros geométricos (posição ao longo do perfil e profundidade do topo) e o produto da intensidade de magnetização pela espessura para as unidades do modelo. O modelo obtido é usado para inverter os dados de anomalia de campo total para se obter a inclinação de magnetização para cada prisma do modelo. Para perfis com poucos prismas (diques), essa abordagem revela-se eficaz na recuperação dos parâmetros verdadeiros para cada unidade do modelo. Para perfis com maior densidade de prismas, apenas valores médios de diferentes populações de diques podem ser recuperados. Isso é obtido aplicando uma abordagem por análise de grupo usando o algoritmo k-means, para soluções alternativas obtidas na inversão de dados. O método é testado com dados sintéticos gerados por configurações simples e complexas de prismas e interferências. Uma vez testado com simulações numéricas, o método é aplicado a um perfil do Enxame de Diques do Arco de Ponta Grossa. A análise de cluster de soluções alternativas identificou pelo menos três gerações para os diques neste perfil, de acordo com os parâmetros médios dos grupos. 
Os valores obtidos com a análise de grupos também foram utilizados para calcular a expansão crustal ao longo do perfil, chegando a valores entre 12 e 23%. Além disso, resultados de inversão foram analisados com poços da base de dados do Sistema de Informação de Águas Subterrâneas (SIAGAS) para avaliar a produtividade de poços com respeito à sua proximidade a unidades específicas de diques. Este estudo mostra que poços mais produtivos estão situados próximos de uma classe de diques mais rasos, conforme identificado pela análise k-means. Para poços perfurados em zona de influência dessa classe de diques em rochas cristalinas de alto grau metamórfico (tufos, meta-tufos), a produtividade é cerca de 14,5 vezes maior do que aqueles perfurados nas encaixantes. Para poços em zona de influência dessa classe de diques em rochas cristalinas de baixo grau metamórfico, a produtividade é cerca de 4,3 vezes maior do que nas encaixantes. Um modelo conceitual para exploração de águas subterrâneas é apresentado levando-se em consideração a distribuição de diques mais rasos na região estudada. / This work presents a method for the inversion of magnetometric data profiles in dyke swarms, using the modules of a program developed in the course of this Master's research. Dyke swarms produce complex patterns of anomalies, depending on the density of dykes along the evaluated profile, the magnetic properties of each unit and the existence of shallower and deeper sources. Few techniques prove effective in inverting data in such a scenario, either to retrieve reliable parameters for each dyke or average values in more complex cases. The method includes a stepwise inversion approach for multi-prism models that are interactively identified according to the quality of fit to the data. 
In the proposed approach, the intensity of the anomalous vector field is initially inverted to provide geometric parameters (position along the profile and depth to the top) and the product of the magnetization intensity by the thickness for the model units. The obtained model is then used to invert the total-field anomaly data and obtain the magnetization inclination for each prism of the model. For profiles with few prisms (dykes), this approach proves effective in recovering the true parameters for each model unit. For profiles with a higher density of prisms, only mean values of different dyke populations can be recovered. This is achieved by applying a cluster analysis approach, using the k-means algorithm, to alternative solutions obtained in the data inversion. The method is tested with synthetic data generated by simple and complex configurations of prisms and interferences. Once tested with numerical simulations, the method is applied to a profile of the dyke swarm of the Ponta Grossa Arch. The cluster analysis of alternative solutions identified at least three generations of dykes in this profile, according to the average parameters of the groups. The mean values obtained with the cluster analysis were also used to calculate the crustal expansion along the profile, reaching values between 12 and 23%. In addition, inversion results were analyzed together with wells from the Groundwater Information System (SIAGAS) database to evaluate the productivity of wells with respect to their proximity to specific dyke units. This study shows that the more productive wells are located near a class of shallower dykes, as identified by the k-means analysis. For wells drilled in the zone of influence of this class of dykes in crystalline rocks of high metamorphic grade (tuffs, meta-tuffs), productivity is about 14.5 times greater than that of wells drilled in the host rocks. 
For wells in the zone of influence of this class of dykes in crystalline rocks of low metamorphic grade, productivity is about 4.3 times higher than in the host rocks. A conceptual model for groundwater exploration is presented, taking into account the distribution of shallower dykes in the studied region.
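The cluster-analysis step — grouping alternative inversion solutions with k-means to recover mean dyke parameters — can be sketched with a minimal one-dimensional Lloyd's algorithm. The depth values below are hypothetical, for illustration only, and this is not the thesis's actual program:

```python
def kmeans_1d(values, k=2, iters=50):
    """Minimal 1-D k-means (Lloyd's algorithm): spread the initial centers
    over the data range, then alternate nearest-center assignment and
    cluster-mean updates."""
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # assign each value to its nearest current center
            clusters[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        # move each center to the mean of its cluster (keep it if empty)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Hypothetical depth-to-top estimates (m) from alternative inversion runs
depths = [10.0, 11.0, 12.0, 48.0, 50.0, 52.0]
centers, clusters = kmeans_1d(depths, k=2)
```

On this toy input the two recovered centers sit near 11 m and 50 m, i.e. a "shallow" and a "deep" dyke population; in the study the same grouping idea is applied to full parameter sets of alternative solutions.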
