191

Adaptive reuse of the vernacular log building

Bergström, Christine January 2022 (has links)
This thesis project is an attempt to learn from vernacular building traditions when designing sustainable homes for families in a contemporary rural setting. My proposal is a multi-generational home consisting of reused old log houses, which would otherwise be torn down, joined together through a composition of new rammed earth structures built from local materials. The site is located in Dalarna, a province whose image is built around traditions closely tied to small-scale farming and tight-knit local communities. Vernacular architecture has been, and still is, the icon of this region. Vernacular buildings in the northern part of Sweden have been almost exclusively log houses. The widespread availability of good-quality wood has enabled sturdy log structures to be built that may last for hundreds of years. The construction method of stacking logs on top of each other, held together only by their own weight and the interlocking of the pieces, is a flexible building method: the house can grow if needed by adding logs, or be taken apart completely for easy transportation. This is what has enabled me to gather existing buildings from different parts of Sweden and give them new life. The proposal consists of seven log houses, all found for sale online.
192

Predicting Octanol/Water Partition Coefficients Using Molecular Simulation for the SAMPL7 Challenge: Comparing the Use of Neat and Water Saturated 1-Octanol

Sabatino, Spencer Johnathan 13 April 2022 (has links)
No description available.
193

Centralized log management for complex computer networks

Hanikat, Marcus January 2018 (has links)
In modern computer networks, log messages produced on the different devices throughout the network are collected and analyzed. The data in these log messages gives network administrators an overview of the network's operation and allows them to detect problems with the network and block security breaches. In this thesis several different centralized log management systems are analyzed and evaluated against established requirements for security, performance and cost. These requirements are designed to meet the stakeholder's requirements for log management and to allow for scaling along with the growth of their network. To prove that the selected system meets the requirements, a small-scale implementation of the system is created as a proof of concept. The conclusion reached was that the best solution for a centralized log management system was the ELK Stack, which is built on the three open-source software packages Elasticsearch, Logstash and Kibana. The small-scale implementation of the ELK Stack showed that it meets all the requirements placed on the system. The goal of this thesis is to help develop a greater understanding of some well-known centralized log management systems and why their use is important for computer networks. This is done by describing, comparing and evaluating some of the functionality of the selected centralized log management systems. The thesis also provides people and organizations with guidance and recommendations for the choice and implementation of a centralized log management system.
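The abstract only names the components of the ELK Stack; as a rough sketch of the centralized-log-management idea it describes, the snippet below indexes one network log event into Elasticsearch and queries it back with the official Python client. The host URL, index name and field names are illustrative assumptions, not part of the thesis; in a real deployment Logstash would perform the ingestion and Kibana the querying and visualization.

```python
# Minimal sketch of shipping and querying a log event with Elasticsearch
# (client API as in elasticsearch-py 8.x; index name, host and fields are
# illustrative assumptions, not taken from the thesis).
from datetime import datetime, timezone
from elasticsearch import Elasticsearch  # pip install elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local deployment

# Index one log event, as Logstash/Beats would do for each collected message.
es.index(
    index="network-logs",
    document={
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "host": "router-01",
        "severity": "error",
        "message": "interface eth0 down",
    },
)

# Query recent error-level events, the kind of search a Kibana dashboard issues.
hits = es.search(
    index="network-logs",
    query={"match": {"severity": "error"}},
    size=10,
)
for hit in hits["hits"]["hits"]:
    print(hit["_source"]["host"], hit["_source"]["message"])
```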
194

Log Classification using a Shallow-and-Wide Convolutional Neural Network and Log Keys / Logklassificering med ett grunt-och-brett faltningsnätverk och loggnycklar

Annergren, Björn January 2018 (has links)
A dataset of logs describing the results of tests from a single build-and-test process, used in a continuous integration setting, is utilized to automate categorization of the logs according to failure type. Two different features are evaluated, words and log keys, using unordered document matrices as document representations to determine the viability of log keys. The experiment uses Multinomial Naive Bayes (MNB) classifiers and multi-class Support Vector Machines (SVM) to establish the performance of the different features. The experiment indicates that log keys are equivalent to words while achieving a large reduction in dictionary size. Three different multi-layer perceptrons are evaluated on the log-key document matrices, achieving slightly higher cross-validation accuracies than the SVM. A shallow-and-wide Convolutional Neural Network (CNN) is then designed using temporal sequences of log keys as document representations. The top-performing model of each architecture is evaluated on a test set, except for the MNB classifiers, as the MNB had subpar performance during cross-validation. The test-set evaluation indicates that the CNN is superior to the other models.
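As a point of reference for the bag-of-log-keys baseline described above (unordered document matrices classified with MNB and a linear SVM), the following scikit-learn sketch illustrates the setup; the log keys, documents and labels are invented placeholders rather than the thesis dataset.

```python
# Sketch of the bag-of-log-keys baseline: each document is an unordered count
# vector over log keys, classified with Multinomial Naive Bayes and a linear SVM.
# Documents and labels below are invented placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

# Each "document" is the sequence of log keys (log template IDs) for one test run.
docs = [
    "k12 k7 k7 k31 k2",   # e.g. a compilation failure
    "k12 k7 k44 k44 k5",  # e.g. a timeout
    "k12 k31 k31 k2 k2",
    "k12 k44 k5 k5 k7",
]
labels = [0, 1, 0, 1]  # failure categories

# Unordered document matrix: counts of each log key per document.
X = CountVectorizer(token_pattern=r"\S+").fit_transform(docs)

for clf in (MultinomialNB(), LinearSVC()):
    scores = cross_val_score(clf, X, labels, cv=2)
    print(type(clf).__name__, scores.mean())
```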
195

Material characterization of viscoelastic polymeric molding compounds

Julian, Michael Robert January 1994 (has links)
No description available.
196

Online Techniques for Enhancing the Diagnosis of Digital Circuits

Tanwir, Sarmad 05 April 2018 (has links)
The test process for semiconductor devices involves generation and application of test patterns, failure logging and diagnosis. Traditionally, most of these activities cater for all possible faults without making any assumptions about the actual defects present in the circuit. As the size of circuits continues to increase (following Moore's Law), the size of the test sets is also increasing exponentially. It follows that the cost of testing has already surpassed that of design and fabrication. The central idea of this dissertation is that substantial savings in test cost are possible if the actual hardware under test is brought inside the test process's various loops, in particular failure logging, diagnostic pattern generation and diagnosis. Our first work, described in Chapter 3, applies this idea to failure logging. We modify the existing failure logging process, which logs only the first few failure observations, into an intelligent one that logs failures on the basis of their usefulness for diagnosis. To enable the intelligent logging, we propose some lightweight metrics that can be computed in real time to grade the diagnosability of the observed failures. On the basis of this grading, we select the failures to be logged dynamically according to the actual defects in the circuit under test. This means that failures may be logged differently for devices having different defects, in contrast with the existing method, which uses the same logging scheme for all failing devices. With the failing devices in the loop, we are able to optimize the failure log for every particular failing device, thereby improving the quality of subsequent diagnosis. In Chapter 4, we investigate the most lightweight of these metrics for failure log optimization for the diagnosis of multiple simultaneous faults and provide the results of our experiments. Often, in spite of exploiting the entire potential of a test set, we might not be able to meet our diagnosis goals. This is because manufacturing tests are generated to meet fault coverage goals using as few tests as possible; in other words, they are optimized for detection count and test time, not for diagnosis. In our second work, we leverage real-time measures of diagnosability, similar to the ones used for failure log optimization, to generate additional diagnostic patterns. These additional patterns help diagnose the existing failures beyond the power of the existing tests. Again, since the failing device is inside the test generation loop, we obtain highly specific tests for each failing device, optimized for its diagnosis. Using our proposed framework, we are able to diagnose devices better and faster than state-of-the-art industrial tools. Chapter 5 provides a detailed description of this method. Our third work extends the hardware-in-the-loop framework to the diagnosis of scan chains. In this method, we define a different metric that is applicable to scan chain diagnosis. Again, this method provides additional tests that are specific to the diagnosis of the particular scan chain defects in individual devices. This approach has two further advantages over the online diagnostic pattern generator for logic diagnosis: first, we do not need a known good device for generating or knowing the good response; second, besides generating additional tests, we also perform the final diagnosis online, i.e. on the tester during test application. We explain this in detail in Chapter 6. In our research, we observe that feedback from a device is very useful for enhancing the quality of root-cause investigations of the failures in its logic and test circuitry, i.e. the scan chains. This raises the question of whether some primitive signals from the devices can be indicative of the fault coverage of the applied tests. In other words, can we estimate the fault coverage without the costly activities of fault modeling and simulation? By conducting further research into this problem, we found that entropy measurements at the circuit outputs do indeed have a high correlation with fault coverage and can be used to estimate it with good accuracy. We find that these predictions are accurate not only for random tests but also for high-coverage ATPG-generated tests. We present the details of our fourth contribution in Chapter 7. This work is of significant importance because it suggests that high-coverage tests can be learned by continuously applying random test patterns to the hardware and using the measured entropy as a reward function. We believe that this lays down a foundation for further research into gate-level sequential test generation, which is currently intractable for industrial-scale circuits with existing techniques. / Ph. D.
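The entropy idea behind the fourth contribution can be illustrated with a toy computation: measure the Shannon entropy of the response distribution observed at a circuit's outputs while it is driven by random patterns. The circuit below is a stand-in invented for illustration; the dissertation's actual circuits, metrics and correlation results are not reproduced here.

```python
# Toy sketch: Shannon entropy of the responses observed at the outputs of a
# combinational circuit under random test patterns. Higher output entropy is
# used here only as an illustrative proxy signal; the circuit is a stand-in.
import random
from collections import Counter
from math import log2

def toy_circuit(pattern):
    # Stand-in 4-input, 2-output combinational circuit (not from the dissertation).
    a, b, c, d = pattern
    return ((a and b) or (c and not d), a ^ d)

def output_entropy(num_patterns=1000, width=4):
    responses = Counter()
    for _ in range(num_patterns):
        pattern = tuple(random.randint(0, 1) for _ in range(width))
        responses[toy_circuit(pattern)] += 1
    total = sum(responses.values())
    return -sum((n / total) * log2(n / total) for n in responses.values())

print(f"output entropy over random patterns: {output_entropy():.3f} bits")
```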
197

Decreasing the cost of hauling timber through increased payload

Beardsell, Michael G. January 1986 (has links)
The potential for decreasing timber transportation costs in the South by increasing truck payloads was investigated using a combination of theoretical and case-study methods. A survey of transportation regulations in the South found considerable disparities between states. Attempts to model the factors which determine payload per unit of bunk area and load center-of-gravity location met with only moderate success, but illustrated the difficulties loggers experience in estimating gross and axle weights in the woods. A method was developed for evaluating the impact of Federal Bridge Formula axle weight constraints on the payloads of tractor-trailers with varying dimensions and axle configurations. Analysis of scalehouse data found log truck gross weights lower on average than the legal maximum but also highly variable. Eliminating both overloading and underloading would result in an increase in average payload, reduced overweight fines, and improved public relations. Tractor-trailer tare weights were also highly variable, indicating potential for increasing payload by using lightweight equipment. Recommendations focused first on keeping GVWs within a narrow range around the legal maximum by adopting alternative loading strategies, improving GVW estimation, and using scalehouse data as a management tool. When this goal is achieved, options for decreasing tare weight should be considered. Suggestions for future research included a study of GVW estimation accuracy using a variety of estimation techniques, and field testing of the project recommendations. / Ph. D.
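For context, the Federal Bridge Formula referred to above limits the gross weight W (in pounds) carried on any group of axles to W = 500(LN/(N−1) + 12N + 36), where L is the spacing in feet between the outermost axles of the group and N is the number of axles. The small sketch below evaluates it for a hypothetical five-axle tractor-trailer; the axle spacing is an assumed example, not data from the study.

```python
# Federal Bridge Formula: maximum weight (lb) allowed on a group of axles,
# given the outer-axle spacing L (feet) and the number of axles N.
def bridge_formula_limit(spacing_ft: float, num_axles: int) -> float:
    n = num_axles
    return 500 * (spacing_ft * n / (n - 1) + 12 * n + 36)

# Hypothetical 5-axle tractor-trailer with 51 ft between first and last axle:
print(bridge_formula_limit(51, 5))  # 79,875 lb, just under the 80,000 lb interstate gross cap
```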
198

O modelo de regressão odd log-logística gama generalizada com aplicações em análise de sobrevivência / The odd log-logistic generalized gamma regression model with applications in survival analysis

Prataviera, Fábio 11 July 2017 (has links)
Providing a wider and more flexible family of probability distributions is of great importance in statistical studies. In this work a new method of adding a parameter to a continuous distribution is used, with the generalized gamma (GG) distribution as the baseline. The GG distribution has as special cases the Weibull, exponential, gamma and chi-square distributions, among others, and is therefore considered a flexible distribution for data modeling. The new four-parameter model is called the odd log-logistic generalized gamma (OLLGG) distribution. One of its interesting characteristics is that it can be bimodal. In addition, a regression model called the log-odd log-logistic generalized gamma (LOLLGG) model, based on the GG distribution (Stacy and Mihram, 1965), is introduced. This model can be very useful when, for example, the sampled data contain a mixture of two statistical populations. Another advantage of the OLLGG distribution is its ability to produce various shapes of the hazard function: increasing, decreasing, U-shaped (bathtub) and bimodal, among others. Explicit expressions for the moments, generating function and mean deviations are obtained. Considering uncensored and randomly censored data, estimates of the parameters of interest were obtained by the maximum likelihood method. Simulation studies, considering different parameter values, censoring percentages and sample sizes, were conducted in order to verify the flexibility of the distribution and the adequacy of the residuals in the regression model. To illustrate, applications to real data sets are carried out.
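For reference, the odd log-logistic construction named in the abstract adds one extra shape parameter a > 0 to a baseline cdf G(x); taking G as the generalized gamma cdf yields the OLLGG distribution. A sketch of the generator, with notation assumed rather than copied from the thesis:

```latex
% Odd log-logistic generator applied to a baseline cdf G(x) with density g(x)
% and extra shape parameter a > 0; G = generalized gamma cdf gives the OLLGG case.
F(x) = \frac{G(x)^{a}}{G(x)^{a} + \left[1 - G(x)\right]^{a}}, \qquad
f(x) = \frac{a\, g(x)\, G(x)^{a-1}\left[1 - G(x)\right]^{a-1}}
            {\left\{G(x)^{a} + \left[1 - G(x)\right]^{a}\right\}^{2}}
```

Setting a = 1 recovers the baseline distribution, which is why the family strictly extends the generalized gamma.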
199

O modelo de regressão odd log-logística gama generalizada com aplicações em análise de sobrevivência / The odd log-logistic generalized gamma regression model with applications in survival analysis

Fábio Prataviera 11 July 2017 (has links)
Providing a wider and more flexible family of probability distributions is of great importance in statistical studies. In this work a new method of adding a parameter to a continuous distribution is used, with the generalized gamma (GG) distribution as the baseline. The GG distribution has as special cases the Weibull, exponential, gamma and chi-square distributions, among others, and is therefore considered a flexible distribution for data modeling. The new four-parameter model is called the odd log-logistic generalized gamma (OLLGG) distribution. One of its interesting characteristics is that it can be bimodal. In addition, a regression model called the log-odd log-logistic generalized gamma (LOLLGG) model, based on the GG distribution (Stacy and Mihram, 1965), is introduced. This model can be very useful when, for example, the sampled data contain a mixture of two statistical populations. Another advantage of the OLLGG distribution is its ability to produce various shapes of the hazard function: increasing, decreasing, U-shaped (bathtub) and bimodal, among others. Explicit expressions for the moments, generating function and mean deviations are obtained. Considering uncensored and randomly censored data, estimates of the parameters of interest were obtained by the maximum likelihood method. Simulation studies, considering different parameter values, censoring percentages and sample sizes, were conducted in order to verify the flexibility of the distribution and the adequacy of the residuals in the regression model. To illustrate, applications to real data sets are carried out.
200

A nova família de distribuições odd log-logística: teoria e aplicações / The new family of odd log-logistic distributions: theory and applications

Cruz, José Nilton da 18 February 2016 (has links)
In this study, a new family of distributions is proposed, which allows modeling survival data when the hazard function has unimodal or bathtub (U) shapes. Modifications of the Weibull, Fréchet, generalized half-normal, log-logistic and lognormal distributions are considered. Taking censored and non-censored data, the maximum likelihood estimators for the proposed model are considered in order to check the flexibility of the new family. A location-scale regression model is also used to verify the influence of covariates on survival times. Additionally, a residual analysis is conducted based on modified deviance residuals. For different fixed parameters, censoring percentages and sample sizes, several simulation studies are performed with the objective of verifying the empirical distribution of the martingale-type and modified deviance residuals. To detect influential observations, measures of local influence are used, which are diagnostic measures based on small perturbations of the data or of the proposed model. Situations can occur in which the assumption of independence between the failure and censoring times is not valid. Thus, another objective of this work is to consider an informative censoring mechanism based on the marginal likelihood, using the log-odd log-logistic Weibull distribution in the modelling. Finally, the methodologies described are applied to real data sets.
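The censored maximum-likelihood fitting mentioned in these abstracts follows the usual random right-censoring construction; as a reminder of that standard form (not a formula taken from either thesis), with delta_i = 1 for an observed failure time and delta_i = 0 for a censored one, the log-likelihood is:

```latex
% Log-likelihood under random right censoring: observed failures contribute the
% density f, censored times contribute the survival function S = 1 - F.
\ell(\theta) = \sum_{i=1}^{n} \Big[\, \delta_i \log f(t_i;\theta)
             + (1-\delta_i) \log S(t_i;\theta) \Big]
```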
