191
Mikroprocesorem řízená testovací jednotka / Microprocessor controlled testing unit. Mejzlík, Vladimír, January 2010.
This project deals with the design of an autonomous microprocessor-controlled testing unit that automatically checks the outputs of a device under test in response to excitation of its inputs. Possible hardware realizations of the testing unit's functional blocks are described, and the options are analyzed against the project specification with regard to the mutual compatibility of the individual blocks, availability, price, and the desired functionality. The most suitable solution is then implemented with specific circuit elements. The outputs of the project are a working testing unit and its product documentation. Control software was written for the unit's microprocessor; it implements an interpreter that executes test algorithms, evaluates the tests, and stores a record of each test run to a file. A PC utility that uploads tests to the testing unit over USB was also created.
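To make the interpreter idea above concrete, here is a minimal sketch of a test-script interpreter of this general kind; the command names, script format, and helper callables are hypothetical and not taken from the thesis:

def run_test(script_lines, set_input, read_output, log_path="test_log.txt"):
    # Hypothetical sketch: each script line either drives an input pin or checks
    # an output pin; every step and the final verdict are logged to a file.
    passed = True
    with open(log_path, "w") as log:
        for line in script_lines:
            cmd, pin, value = line.split()        # e.g. "SET IN1 1" or "EXPECT OUT2 0"
            if cmd == "SET":
                set_input(pin, int(value))
                log.write(f"SET {pin}={value}\n")
            elif cmd == "EXPECT":
                actual = read_output(pin)
                ok = (actual == int(value))
                passed = passed and ok
                log.write(f"EXPECT {pin}={value} got={actual} {'OK' if ok else 'FAIL'}\n")
        log.write("RESULT " + ("PASS" if passed else "FAIL") + "\n")
    return passed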
192
Improving Biometric Log Detection with Partitioning and Filtering of the Search Space. Rajabli, Nijat, January 2021.
Tracking tree logs from the harvesting site to the processing site is a legal requirement for timber-based industries, for both social and economic reasons. Biometric tree log detection systems use images of the tree logs to track them, checking whether a given log image matches any of the logs registered in the system. However, as the number of registered tree logs in the database increases, the number of pairwise comparisons, and consequently the search time, grows proportionally. The growing search space degrades the accuracy and response time of matching queries and slows down the tracking process, costing time and resources. This work introduces database filtering and partitioning approaches based on discriminative log-end features to reduce the search space of biometric log identification algorithms. In this study, 252 unique log images are used to train and test models for extracting features from the log images and to filter and cluster a database of logs. Experiments show the end-to-end accuracy and speed-up impact of the individual approaches as well as combinations thereof. The findings of this study indicate that the proposed approaches are suited to speeding up tree log identification systems and highlight further opportunities in this field.
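As an illustration of the partitioning idea (not the thesis's actual method), the sketch below clusters registered log-end feature vectors offline and compares a query only against its nearest cluster; k-means, the feature dimensionality, and the cluster count are assumptions:

import numpy as np
from sklearn.cluster import KMeans

# Offline: partition the registered log-end feature vectors into clusters.
rng = np.random.default_rng(0)
db_features = rng.normal(size=(1000, 128))      # placeholder feature vectors
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(db_features)

# Online: a query is only compared against the logs in its nearest cluster.
def candidate_indices(query_vec):
    cluster = kmeans.predict(query_vec.reshape(1, -1))[0]
    return np.where(kmeans.labels_ == cluster)[0]

query = rng.normal(size=128)
print(len(candidate_indices(query)), "of", len(db_features), "registered logs remain")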
193
Adaptive reuse of the vernacular log building. Bergström, Christine, January 2022.
This thesis project is an attempt to learn from vernacular building traditions when designing sustainable homes for families in a contemporary rural setting. My proposal is a multi-generational home consisting of reused old log houses, which would otherwise be torn down, joined together through a composition of local materials for new rammed-earth structures. The site is located in Dalarna, a province whose image is built around traditions closely tied to small-scale farming and tight-knit local communities. Vernacular architecture has been, and still is, the icon of this region. Vernacular buildings in the north of Sweden have been almost exclusively log houses. The widespread availability of good-quality wood has enabled sturdy log structures that may last for hundreds of years. The construction method of stacking logs on top of each other, held together only by their own weight and interlocking joints, is a flexible building method. The house can grow if needed by adding logs, or be taken apart completely for easy transportation. This is what has enabled me to gather existing buildings from different parts of Sweden and bring them new life. The proposal consists of seven log houses, all found for sale online.
194
Predicting Octanol/Water Partition Coefficients Using Molecular Simulation for the SAMPL7 Challenge: Comparing the Use of Neat and Water Saturated 1-Octanol. Sabatino, Spencer Johnathan, 13 April 2022.
No description available.
195
Centralized log management for complex computer networks. Hanikat, Marcus, January 2018.
In modern computer networks, log messages produced on different devices throughout the network are collected and analyzed. The data in these log messages gives network administrators an overview of the network's operation, allows them to detect problems with the network, and helps block security breaches. In this thesis, several centralized log management systems are analyzed and evaluated against established requirements for security, performance, and cost. These requirements are designed to meet the stakeholder's requirements for log management and to allow scaling along with the growth of their network. To show that the selected system meets the requirements, a small-scale implementation of the system was created as a “proof of concept”. The conclusion reached was that the best solution for a centralized log management system was the ELK Stack, which is based upon the three open-source software packages Elasticsearch, Logstash, and Kibana. The small-scale implementation of the ELK Stack showed that it meets all the requirements placed on the system. The goal of this thesis is to help develop a greater understanding of some well-known centralized log management systems and why their use is important for computer networks. This is done by describing, comparing, and evaluating some of the functionalities of the selected centralized log management systems. The thesis also provides people and entities with guidance and recommendations for the choice and implementation of a centralized log management system. / I moderna datornätverk så produceras loggar på olika enheter i nätverket för att sedan samlas in och analyseras. Den data som finns i dessa loggar hjälper nätverksadministratörerna att få en överblick av hur nätverket fungerar, tillåter dem att upptäcka problem i nätverket samt blockera säkerhetshål. I detta projekt så analyseras flertalet relevanta system för centraliserad loggning utifrån de krav för säkerhet, prestanda och kostnad som är uppsatta. Dessa krav är uppsatta för att möta intressentens krav på loghantering och även tillåta för skalning jämsides med tillväxten av deras nätverk. För att bevisa att det valda systemet även fyller de uppsatta kraven så upprättades även en småskalig implementation av det valda systemet som ett ”proof of concept”. Slutsatsen som drogs var att det bästa centraliserade loggningssystemet utifrån de krav som ställs var ELK Stack som är baserat på tre olika mjukvarusystem med öppen källkod som heter Elasticsearch, Logstash och Kibana. I den småskaliga implementationen av detta system så påvisades även att det valda loggningssystemet uppnår samtliga krav som ställdes på systemet. Målet med detta projekt är att hjälpa till att utveckla kunskapen kring några välkända system för centraliserad loggning och varför användning av dessa är av stor betydelse för datornätverk. Detta kommer att göras genom att beskriva, jämföra och utvärdera de utvalda systemen för centraliserad loggning. Projektet kan även att hjälpa personer och organisationer med vägledning och rekommendationer inför val och implementation av ett centraliserat loggningssystem.
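For illustration, a minimal sketch of the kind of centralized collection the ELK Stack provides: a parsed log line is indexed into Elasticsearch through its REST document API, where Kibana can later search it. The index name, field layout, and local URL are assumptions, and a production deployment would normally ship logs through Beats/Logstash pipelines instead:

import json
import urllib.request

def ship_log_line(line, source_host, es_url="http://localhost:9200/syslog/_doc"):
    # Parse a simplified "<timestamp> <level> <message>" line into a structured
    # document and index it in Elasticsearch for later search and visualization.
    timestamp, level, message = line.split(" ", 2)
    doc = {"@timestamp": timestamp, "level": level,
           "message": message, "host": source_host}
    req = urllib.request.Request(es_url,
                                 data=json.dumps(doc).encode("utf-8"),
                                 headers={"Content-Type": "application/json"},
                                 method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["_id"]

# Example (assumes a local Elasticsearch instance):
# ship_log_line("2018-05-02T10:15:00Z ERROR disk /dev/sda1 almost full", "web-01")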
196
Log Classification using a Shallow-and-Wide Convolutional Neural Network and Log Keys / Logklassificering med ett grunt-och-brett faltningsnätverk och loggnycklar. Annergren, Björn, January 2018.
A dataset consisting of logs describing results of tests from a single Build and Test process, used in a continuous integration setting, is utilized to automate categorization of the logs according to failure types. Two different features are evaluated, words and log keys, using unordered document matrices as document representations to determine the viability of log keys. The experiment uses Multinomial Naive Bayes (MNB) classifiers and multi-class Support Vector Machines (SVMs) to establish the performance of the different features. The experiment indicates that log keys are equivalent to words whilst achieving a great reduction in dictionary size. Three different multi-layer perceptrons are evaluated on the log-key document matrices, achieving slightly higher cross-validation accuracies than the SVM. A shallow-and-wide Convolutional Neural Network (CNN) is then designed using temporal sequences of log keys as document representations. The top-performing model of each architecture is evaluated on a test set, except for the MNB classifiers, which had subpar performance during cross-validation. The test-set evaluation indicates that the CNN is superior to the other models. / Ett dataset som består av loggar som beskriver resultat av test från en bygg- och testprocess, använt i en miljö med kontinuerlig integration, används för att automatiskt kategorisera loggar enligt olika feltyper. Två olika sorters indata evalueras, ord och loggnycklar, där icke-ordnade dokumentmatriser används som dokumentrepresentationer för att avgöra loggnycklars användbarhet. Experimentet använder multinomial naiv bayes, MNB, som klassificerare och multiklass-supportvektormaskiner, SVM, för att avgöra prestandan för de olika sorternas indata. Experimentet indikerar att loggnycklar är ekvivalenta med ord medan loggnycklar har mycket mindre ordboksstorlek. Tre olika multi-lager-perceptroner evalueras på loggnyckel-dokumentmatriser och får något högre exakthet i krossvalideringen jämfört med SVM. Ett grunt-och-brett faltningsnätverk, CNN, designas med tidsmässiga sekvenser av loggnycklar som dokumentrepresentationer. De topppresterande modellerna av varje modellarkitektur evalueras på ett testset, utom för MNB-klassificerarna då MNB har dålig prestanda under krossvalidering. Evalueringen av testsetet indikerar att CNN:en är bättre än de andra modellerna.
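A minimal sketch of the word-versus-log-key setup described above, using scikit-learn; the toy documents and failure-type labels are placeholders, whereas in the thesis the log keys are extracted from the actual Build and Test logs:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Each document is the sequence of log keys (log template IDs) of one test log,
# joined into a string; an unordered document matrix is just token counts.
docs = ["k1 k7 k7 k3", "k2 k2 k9", "k1 k3 k8 k8", "k2 k9 k9 k4"] * 10
labels = [0, 1, 0, 1] * 10                        # placeholder failure types

X = CountVectorizer().fit_transform(docs)          # log-key document matrix
for clf in (MultinomialNB(), LinearSVC()):
    scores = cross_val_score(clf, X, labels, cv=5)
    print(type(clf).__name__, round(scores.mean(), 3))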
197
Material characterization of viscoelastic polymeric molding compounds. Julian, Michael Robert, January 1994.
No description available.
198
Decreasing the cost of hauling timber through increased payload. Beardsell, Michael G., January 1986.
The potential for decreasing timber transportation costs in the South by increasing truck payloads was investigated using a combination of theoretical and case-study methods. A survey of transportation regulations in the South found considerable disparities between states. Attempts to model the factors which determine payload per unit of bunk area and load center of gravity location met with only moderate success, but illustrated the difficulties loggers experience in estimating gross and axle weights in the woods. A method was developed for evaluating the impact of Federal Bridge Formula axle weight constraints on the payloads of tractor-trailers with varying dimensions and axle configurations.
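For reference, the Federal Bridge Formula mentioned above caps the weight allowed on any group of axles as a function of the number of axles and the spacing between the outermost axles of the group; a direct transcription is sketched below (the separate statutory axle and gross weight caps still apply on top of it):

def bridge_formula_limit(outer_spacing_ft, num_axles):
    # Federal Bridge Formula: maximum weight in pounds on a group of axles,
    # where outer_spacing_ft is the distance between the outermost axles of
    # the group and num_axles is the number of axles in the group.
    l, n = outer_spacing_ft, num_axles
    return 500 * (l * n / (n - 1) + 12 * n + 36)

# A 5-axle tractor-trailer with 51 ft between its first and last axle:
print(bridge_formula_limit(51, 5))   # 79875.0 lb, just under the 80,000 lb gross cap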
Analysis of scalehouse data found that log truck gross weights were lower on average than the legal maximum but also highly variable. Eliminating both overloading and underloading would result in an increase in average payload, reduced overweight fines, and improved public relations. Tractor-trailer tare weights were also highly variable, indicating potential for increasing payload by using lightweight equipment.
Recommendations focused first on taking steps to keep GVWs within a narrow range around the legal maximum by adopting alternative loading strategies, improving GVW estimation, and using scalehouse data as a management tool. Once this goal is achieved, options for decreasing tare weight should be considered. Suggestions for future research included a study of GVW estimation accuracy using a variety of estimation techniques, and field testing of the project recommendations. / Ph. D.
199
Online Techniques for Enhancing the Diagnosis of Digital Circuits. Tanwir, Sarmad, 05 April 2018.
The test process for semiconductor devices involves generation and application of test patterns, failure logging, and diagnosis. Traditionally, most of these activities cater for all possible faults without making any assumptions about the actual defects present in the circuit. As the size of circuits continues to increase (following Moore's Law), the size of the test sets is also increasing exponentially. It follows that the cost of testing has already surpassed that of design and fabrication.
The central idea of our work in this dissertation is that we can have substantial savings in the test cost if we bring the actual hardware under test inside the test process's various loops -- in particular: failure logging, diagnostic pattern generation and diagnosis.
Our first work, which we describe in Chapter 3, applies this idea to failure logging. We replace the existing failure logging process, which logs only the first few failure observations, with an intelligent one that logs failures on the basis of their usefulness for diagnosis. To enable the intelligent logging, we propose lightweight metrics that can be computed in real time to grade the diagnosability of the observed failures. On the basis of this grading, we select the failures to be logged dynamically according to the actual defects in the circuit under test. This means that the failures may be logged in a different manner for devices having different defects, in contrast with the existing method, which uses the same logging scheme for all failing devices. With the failing devices in the loop, we are able to optimize the failure log for each particular failing device, thereby improving the quality of subsequent diagnosis. In Chapter 4, we investigate the most lightweight of these metrics for failure log optimization in the diagnosis of multiple simultaneous faults and provide the results of our experiments.
Often, in spite of exploiting the entire potential of a test set, we might not be able to meet our diagnosis goals. This is because manufacturing tests are generated to meet fault coverage goals using as few tests as possible. In other words, they are optimized for 'detection count' and 'test time', not for 'diagnosis'. In our second work, we leverage real-time measures of diagnosability, similar to the ones used for failure log optimization, to generate additional diagnostic patterns. These additional patterns help diagnose the existing failures beyond the power of the existing tests. Again, since the failing device is inside the test generation loop, we obtain highly specific tests for each failing device that are optimized for its diagnosis. Using our proposed framework, we are able to diagnose devices better and faster than state-of-the-art industrial tools. Chapter 5 provides a detailed description of this method.
Our third work extends the hardware-in-the-loop framework to the diagnosis of scan chains. In this method, we define a different metric that is applicable to scan chain diagnosis. Again, this method provides additional tests that are specific to the diagnosis of the particular scan chain defects in individual devices. This approach has two further advantages over the online diagnostic pattern generator for logic diagnosis. Firstly, we do not need a known-good device for generating or knowing the good response; secondly, besides generating additional tests, we also perform the final diagnosis online, i.e., on the tester during test application. We explain this in detail in Chapter 6.
In our research, we observe that feedback from a device is very useful for enhancing the quality of root-cause investigations of the failures in its logic and test circuitry i.e. the scan chains. This leads to the question whether some primitive signals from the devices can be indicative of the fault coverage of the applied tests. In other words, can we estimate the fault coverage without the costly activities of fault modeling and simulation? By conducting further research into this problem, we found that the entropy measurements at the circuit outputs do indeed have a high correlation with the fault coverage and can also be used to estimate it with a good accuracy. We find that these predictions are accurate not only for random tests but also for the high coverage ATPG generated tests. We present the details of our fourth contribution in Chapter 7. This work is of significant importance because it suggests that high coverage tests can be learned by continuously applying random test patterns to the hardware and using the measured entropy as a reward function. We believe that this lays down a foundation for further research into gate-level sequential test generation, which is currently intractable for industrial scale circuits with the existing techniques. / Ph. D. / When a new microchip fabrication technology is introduced, the manufacturing is far from perfect. A lot of work goes into updating the fabrication rules and microchip designs before we get a higher proportion of good or defect-free chips. With continued advancements in the fabrication technology, this enhancement work has become increasingly difficult. This is primarily because of the sheer number of transistors that can be fabricated on a single chip, which has practically doubled every two years for the last four decades. The microchip testing process involves application of stimuli and checking the responses. These stimuli cater for a huge number of possible defects inside the chips. With the increase in the number of transistors, covering all possible defects is becoming practically impossible within the business constraints.
This research proposes a solution to this problem, which is to make various activities in this process adaptive to the actual defects in the chips. The stimuli mentioned above now depend upon the feedback from the chip. By utilizing this feedback, we have demonstrated significant improvements in three primary activities, namely failure logging, scan testing, and scan chain diagnosis, over state-of-the-art industrial tools. These activities are essential steps toward improving the proportion of good chips in the manufactured lot.
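As a rough illustration of the entropy measurement discussed in the abstract above, the sketch below computes the average per-output Shannon entropy of captured responses over a set of applied patterns; this particular formulation is an assumption made for illustration, and the dissertation defines its own measure and its mapping to fault coverage:

import numpy as np

def mean_output_entropy(responses):
    # responses: 0/1 array of shape (num_patterns, num_outputs) captured at the
    # circuit outputs; returns the average per-output Shannon entropy in bits.
    p1 = responses.mean(axis=0)                        # P(output bit == 1)
    p = np.clip(np.stack([p1, 1.0 - p1]), 1e-12, 1.0)  # avoid log(0)
    return float((-(p * np.log2(p)).sum(axis=0)).mean())

rng = np.random.default_rng(1)
responses = rng.integers(0, 2, size=(1000, 64))        # placeholder captured responses
print(mean_output_entropy(responses))                  # close to 1.0 bit for random outputs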
200
O modelo de regressão odd log-logística gama generalizada com aplicações em análise de sobrevivência / The odd log-logistic generalized gamma regression model with applications in survival analysis. Prataviera, Fábio, 11 July 2017.
Propor uma família de distribuição de probabilidade mais ampla e flexível é de grande importância em estudos estatísticos. Neste trabalho é utilizado um novo método de adicionar um parâmetro para uma distribuição contínua. A distribuição gama generalizada, que tem como casos especiais a distribuição Weibull, exponencial, gama, qui-quadrado, é usada como distribuição base. O novo modelo obtido tem quatro parâmetros e é chamado odd log-logística gama generalizada (OLLGG). Uma das características interessante do modelo OLLGG é o fato de apresentar bimodalidade. Outra proposta deste trabalho é introduzir um modelo de regressão chamado log-odd log-logística gama generalizada (LOLLGG) com base na GG (Stacy e Mihram, 1965). Este modelo pode ser muito útil, quando por exemplo, os dados amostrados possuem uma mistura de duas populações estatísticas. Outra vantagem da distribuição OLLGG consiste na capacidade de apresentar várias formas para a função de risco, crescente, decrescente, na forma de U e bimodal entre outras. Desta forma, são apresentadas em ambos os casos as expressões explícitas para os momentos, função geradora e desvios médios. Considerando dados não censurados e censurados de forma aleatória, as estimativas para os parâmetros de interesse, foram obtidas via método da máxima verossimilhança. Estudos de simulação, considerando diferentes valores para os parâmetros, porcentagens de censura e tamanhos amostrais foram conduzidos com o objetivo de verificar a flexibilidade da distribuição e a adequabilidade dos resíduos no modelo de regressão. Para ilustrar, são realizadas aplicações em conjuntos de dados reais. / Providing a wider and more flexible family of probability distributions is of great importance in statistical studies. In this work, a new method of adding a parameter to a continuous distribution is used, with the generalized gamma distribution (GG) as the baseline. The GG distribution has, as special cases, the Weibull, exponential, gamma, and chi-square distributions, among others, and is therefore considered a flexible distribution for data modeling. The new model, with four parameters, is called the odd log-logistic generalized gamma (OLLGG). One of the interesting characteristics of the OLLGG model is that it can be bimodal. In addition, a regression model called the log-odd log-logistic generalized gamma (LOLLGG), based on the GG (Stacy and Mihram, 1965), is introduced. This model can be very useful when, for example, the sampled data contain a mixture of two statistical populations. Another advantage of the OLLGG distribution is its ability to present various shapes for the failure rate: increasing, decreasing, U-shaped (bathtub), and bimodal, among others. Explicit expressions for the moments, generating function, and mean deviations are obtained. Considering uncensored and randomly censored data, the estimates of the parameters of interest were obtained by the maximum likelihood method. Simulation studies, considering different parameter values, censoring percentages, and sample sizes, were carried out in order to verify the flexibility of the distribution and the adequacy of the residuals of the regression model. To illustrate, applications to real data sets are carried out.
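For context, the odd log-logistic construction referred to above adds a single extra shape parameter a > 0 to a baseline cdf G (here the generalized gamma); a standard form of the family, which the thesis's own parameterization may refine, is

\[
F(x) = \frac{G(x)^{a}}{G(x)^{a} + \{1 - G(x)\}^{a}},
\qquad
f(x) = \frac{a\, g(x)\, G(x)^{a-1} \{1 - G(x)\}^{a-1}}
            {\bigl[\, G(x)^{a} + \{1 - G(x)\}^{a} \,\bigr]^{2}},
\qquad a > 0,
\]

where G and g are the baseline (generalized gamma) cdf and pdf; setting a = 1 recovers the generalized gamma itself, and the extra shape parameter is what permits the bimodal density and the varied hazard shapes described in the abstract.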