361 |
Economic analysis of recovering solid wood products from western hemlock pulp logs
Mortyn, Joel William, 05 1900
The purpose of this research was to quantify what value could be gained from cutting solid wood products from old-growth western hemlock (Tsuga heterophylla (Raf.) Sarg.) logs that are used to produce pulp in British Columbia. These logs represent a significant portion of the resource and increasing their value recovery would be beneficial to the forest industry.
One hundred and sixteen logs were sampled from the coastal and interior regions of British Columbia. Dimension and quality attributes were measured to enable estimates of gross and merchantable volume. Logs deemed likely to yield lumber were sawn with the aim of maximizing value recovery. The nominal dimension and grade of all lumber recovered was recorded. Margins and breakpoints at which sawing became profitable were calculated. Models to predict the volume of lumber and proportion of Clear grade lumber recovered (“C Industrial” grade at the interior mill, “D Select” grade at the coastal mill) were developed.
Lumber recovery, especially Clear grade lumber, was significantly higher from logs from the coastal site. At current market prices, cutting lumber from these logs was profitable, with the highest margins achieved when chips were produced from the milling residue. It was not profitable to recover lumber from the interior logs regardless of whether chips were produced. The disparity between locations was attributed to differences between the logs, the sawmilling equipment, the sawyers’ motivations and the lumber grades.
Between 60% and 67% of coastal logs and 13% to 21% of interior logs returned a profit, depending on whether chips were produced. Models were developed to better identify these logs using observable attributes. A linear model described the total volume of lumber recovered. Significant predictor variables in the model were the gross log volume, the average width of the sound collar and the stage of butt/heart rot at the large end. A second model predicted the proportion of Clear grade lumber. Regional models were developed to account for different Clear lumber grades between sawmills. Significant predictor variables were knot frequency, diameter at the large end, volume, length, taper and the width of the sound collar at the large end. / Forestry, Faculty of / Graduate
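The volume model described above is a linear regression on observable log attributes. The sketch below is an illustration only, not the thesis model: the column names and data are placeholders standing in for the measured predictors (gross log volume, average sound-collar width, butt/heart rot stage), and the fitted coefficients have no relation to the study's results.

```python
# Hypothetical sketch of fitting a lumber-volume model from observable log attributes.
import pandas as pd
from sklearn.linear_model import LinearRegression

logs = pd.DataFrame({
    "gross_volume_m3":  [1.8, 2.4, 3.1, 2.0, 2.7],      # gross log volume
    "sound_collar_cm":  [14.0, 22.0, 9.0, 18.0, 25.0],   # average width of the sound collar
    "butt_rot_stage":   [2, 1, 3, 2, 0],                 # stage of butt/heart rot at the large end
    "lumber_volume_m3": [0.6, 1.1, 0.4, 0.8, 1.4],       # recovered lumber volume (response)
})

X = logs[["gross_volume_m3", "sound_collar_cm", "butt_rot_stage"]]
y = logs["lumber_volume_m3"]

model = LinearRegression().fit(X, y)
print(dict(zip(X.columns, model.coef_)), model.intercept_)
```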
362 |
Digital Video Watermarking Robust to Geometric Attacks and Compressions
Liu, Yan, January 2011
This thesis focuses on video watermarking that is robust against geometric attacks and video compression. In addition to the requirements for an image watermarking algorithm, a digital video watermarking algorithm has to be robust against advanced video compression, frame loss, frame swapping, aspect ratio change, frame rate change, intra- and inter-frame filtering, etc. Video compression, especially the most efficient compression standard, H.264, and geometric attacks, such as rotation, cropping, frame aspect ratio change, and translation, are considered the most challenging attacks for video watermarking algorithms.
In this thesis, we first review typical watermarking algorithms robust against geometric attacks and video compression, and point out their advantages and disadvantages. Then, we propose our robust video watermarking algorithm against Rotation, Scaling and Translation (RST) attacks and MPEG-2 compression, based on the log-polar mapping and the phase-only filtering method. Rotation or scaling in the spatial domain results in a vertical or horizontal shift in the log-polar mapping (LPM) of the magnitude of the Fourier spectrum of the target frame, while translation has no effect in this domain. This method is very robust to RST attacks and MPEG-2 compression. We also demonstrate that this method can be used as an RST parameter detector that works with other watermarking algorithms to improve their robustness to RST attacks.
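To make the geometric argument concrete, the sketch below (an illustration only, not the thesis code) computes the log-polar map of a frame's Fourier magnitude. Under this mapping, rotating the frame becomes a cyclic shift along the angle axis and uniform scaling becomes a shift along the log-radius axis, so both parameters can be recovered by correlating the maps of the original and the attacked frame.

```python
# Log-polar map of the Fourier magnitude: rotation -> shift along the angle axis,
# scaling -> shift along the log-radius axis, translation -> no effect on the magnitude.
import numpy as np

def logpolar_of_spectrum(img, n_rad=128, n_ang=128):
    # Magnitude of the centered 2-D Fourier spectrum (invariant to translation).
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = np.array(mag.shape) / 2.0
    max_r = min(cy, cx)
    # Sample the magnitude on a (log-radius x angle) grid.
    rho = np.exp(np.linspace(0.0, np.log(max_r), n_rad))
    theta = np.linspace(0.0, 2.0 * np.pi, n_ang, endpoint=False)
    rr, tt = np.meshgrid(rho, theta, indexing="ij")
    ys = (cy + rr * np.sin(tt)).astype(int).clip(0, mag.shape[0] - 1)
    xs = (cx + rr * np.cos(tt)).astype(int).clip(0, mag.shape[1] - 1)
    return mag[ys, xs]        # axis 0: log radius (scaling), axis 1: angle (rotation)

lpm = logpolar_of_spectrum(np.random.rand(256, 256))   # placeholder frame
print(lpm.shape)                                        # (128, 128)
```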
Furthermore, we propose a new video watermarking algorithm based on the 1D DFT (one-dimensional Discrete Fourier Transform) and 1D projection. This algorithm enhances the robustness to video compression and is able to resist the most advanced video compression standard, H.264. Applying the 1D DFT to a video sequence along the temporal direction generates a domain in which the spatial information is preserved while the temporal information is made explicit. Based on detailed analysis and calculation, we choose the frames with the highest temporal frequencies and embed a fence-shaped watermark pattern in the Radon transform domain of the selected frames. The performance of the proposed algorithm is evaluated against the video compression standards MPEG-2 and H.264, geometric attacks such as rotation, translation, and aspect-ratio changes, and other video processing operations. The most important advantages of this video watermarking algorithm are its simplicity, practicality and robustness.
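As a rough illustration of the frame-selection step (an assumed sketch, not the thesis implementation), the snippet below takes the 1D DFT of a frame stack along the temporal axis. Each resulting coefficient "frame" keeps the spatial layout of the video while corresponding to a single temporal frequency, and the highest-frequency frames are the candidates for embedding; the clip, frame count and number of selected frames are placeholders.

```python
# 1-D DFT along the temporal axis of a (frames, height, width) grayscale clip.
import numpy as np

video = np.random.rand(64, 120, 160)          # placeholder clip
spectrum = np.fft.fft(video, axis=0)          # temporal DFT for every pixel position
freqs = np.fft.fftfreq(video.shape[0])        # temporal frequency of each DFT frame
order = np.argsort(np.abs(freqs))[::-1]       # highest temporal frequencies first
embed_frames = order[:8]                      # e.g. embed in the 8 highest-frequency frames
print(embed_frames)
# After modifying spectrum[embed_frames], the watermarked clip would be recovered with
# np.real(np.fft.ifft(spectrum, axis=0)).
```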
363 |
An investigation of bootstrap methods for estimating the standard error of equating under the common-item nonequivalent groups design
Wang, Chunxin, 01 July 2011
The purpose of this study was to investigate the performance of the parametric bootstrap method and to compare the parametric and nonparametric bootstrap methods for estimating the standard error of equating (SEE) under the common-item nonequivalent groups (CINEG) design with the frequency estimation (FE) equipercentile method under a variety of simulated conditions.
When the performance of the parametric bootstrap method was investigated, bivariate polynomial log-linear models were employed to fit the data. Considering different polynomial degrees and two different numbers of cross-product moments, a total of eight parametric bootstrap models were examined. Two real datasets were used as the basis to define the population distributions and the "true" SEEs. A simulation study was conducted reflecting three levels of group proficiency differences, three levels of sample size, two test lengths and two ratios of the number of common items to the total number of items. Bias of the SEE, standard errors of the SEE, root mean square errors of the SEE, and their corresponding weighted indices were calculated and used to evaluate and compare the simulation results.
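For orientation, the sketch below outlines the nonparametric bootstrap estimate of the SEE in its generic form. The equating step is left as a placeholder function (the FE equipercentile procedure itself is not reproduced here), and the parametric variant would instead resample from the fitted bivariate log-linear models rather than from the raw examinee samples.

```python
# Schematic nonparametric bootstrap SEE: resample examinees, re-equate, take the SD
# of the equated scores over replications. `fe_equate` is a placeholder callable.
import numpy as np

def bootstrap_see(group1, group2, fe_equate, n_boot=1000, rng=None):
    """group1/group2: arrays of raw scores; fe_equate(s1, s2) returns the equated
    score at each raw-score point. Returns the bootstrap SEE at each score point."""
    rng = np.random.default_rng(rng)
    reps = []
    for _ in range(n_boot):
        s1 = group1[rng.integers(0, len(group1), len(group1))]   # resample with replacement
        s2 = group2[rng.integers(0, len(group2), len(group2))]
        reps.append(fe_equate(s1, s2))
    return np.std(np.vstack(reps), axis=0, ddof=1)
```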
The main findings from this simulation study were as follows: (1) The parametric bootstrap models with larger polynomial degrees generally produced smaller bias but larger standard errors than those with lower polynomial degrees. (2) The parametric bootstrap models with a cross-product moment (CPM) order of two generally yielded more accurate estimates of the SEE than the corresponding models with a CPM order of one. (3) The nonparametric bootstrap method generally produced less accurate estimates of the SEE than the parametric bootstrap method; however, as the sample size increased, the differences between the two bootstrap methods became smaller. When the sample size was equal to or larger than 3,000, the differences between the nonparametric bootstrap method and the parametric bootstrap model that produced the smallest RMSE were very small. (4) Of all the models considered in this study, the parametric bootstrap models with a polynomial degree of four performed best under most simulation conditions. (5) Aside from method effects, sample size and test length had the most impact on estimating the SEE. Group proficiency differences and the ratio of the number of common items to the total number of items had little effect on a short test, but a slight effect on a long test.
364 |
Exploring Change Point Detection in Network Equipment Logs
Björk, Tim, January 2021
Change point detection (CPD) is the method of detecting sudden changes in time series, and it is of great importance for network traffic analysis. With increased knowledge of the changes that occur in data logs due to updates in networking equipment, a deeper understanding of the interaction between the updates and the operational resource usage becomes possible. In a data log that reflects the amount of network traffic, there are large variations in the time series due to factors such as connection count or external changes to the system. Filtering out these unwanted variation changes while sorting out the deliberate ones is a challenge. In this thesis, we utilize data logs retrieved from a network equipment vendor to detect changes, and then compare the detected changes to when firmware/signature updates were applied, configuration changes were made, etc., with the goal of achieving a deeper understanding of any interaction between firmware/signature/configuration changes and operational resource usage. Challenges in data quality and data processing are addressed through data manipulation to counteract anomalies and unwanted variation, as well as experimentation with parameters to find the most suitable settings. Results are produced through experiments that test the accuracy of the various change point detection methods and investigate various parameter settings. Through trial and error, a satisfactory configuration is achieved and used in large-scale log detection experiments. The results from the experiments conclude that additional information about how changes in variation arise is required to derive the desired understanding.
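As a point of reference, the snippet below shows what offline change point detection on a resource-usage series can look like with the Python library ruptures. The library, the cost model and the penalty value are assumptions for illustration and are not the tooling or settings used in the thesis.

```python
# Detect level shifts in a noisy resource-usage series with PELT (ruptures library).
import numpy as np
import ruptures as rpt

rng = np.random.default_rng(0)
signal = np.concatenate([
    rng.normal(10, 1, 300),   # baseline resource usage
    rng.normal(14, 1, 300),   # level shift, e.g. after a firmware update
    rng.normal(9, 1, 300),    # another shift, e.g. after a configuration change
])

algo = rpt.Pelt(model="rbf").fit(signal)   # PELT search with an RBF cost function
change_points = algo.predict(pen=10)       # penalty trades sensitivity against false alarms
print(change_points)                       # indices where detected segments end
```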
365 |
Maximum Likelihood Estimation of Hyperon Parameters in Python : Facilitating Novel Studies of Fundamental Symmetries with Modern Software Tools
Verbeek, Benjamin, January 2021
In this project, an algorithm has been implemented in Python to estimate the parameters describing the production and decay of a spin-1/2 baryon-antibaryon pair. This decay can give clues about a fundamental asymmetry between matter and antimatter. A model-independent formalism, developed by the Uppsala hadron physics group and previously implemented in C++, has been shown to be a promising tool in the search for physics beyond the Standard Model (SM) of particle physics. The program developed in this work provides a more user-friendly alternative, and is intended to motivate further use of the formalism through a more maintainable, customizable and readable implementation. The hope is that this will expedite future research in the area of charge-parity (CP) violation and eventually lead to answers to questions such as why the universe consists of matter. A Monte Carlo integrator is used for normalization and a Python library for function minimization. The program returns an estimate of the physics parameters, including an error estimate. Tests of statistical properties of the estimator, such as consistency and bias, have been performed. To speed up the implementation, the Just-In-Time compiler Numba has been employed, which resulted in a speed increase of a factor of 400 compared to plain Python code.
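The overall structure of such an estimator can be sketched as follows. The intensity function, data and single parameter are placeholders (the actual hyperon formalism is not reproduced here), but the pattern of a Numba-compiled negative log-likelihood, a Monte Carlo normalization and a SciPy minimizer matches the approach described above.

```python
# Minimal MLE sketch: Numba-compiled NLL with Monte Carlo normalization, SciPy minimizer.
import numpy as np
from numba import njit
from scipy.optimize import minimize

@njit
def intensity(x, alpha):
    return 1.0 + alpha * x                       # placeholder angular-distribution intensity

@njit
def neg_log_likelihood(alpha, data, mc_sample):
    norm = np.mean(intensity(mc_sample, alpha))  # Monte Carlo estimate of the normalization
    vals = intensity(data, alpha) / norm
    return -np.sum(np.log(np.maximum(vals, 1e-12)))   # floor guards against log(<=0)

rng = np.random.default_rng(1)
mc_sample = rng.uniform(-1, 1, 100_000)          # phase-space Monte Carlo events
data = rng.uniform(-1, 1, 10_000)                # placeholder "measured" events

res = minimize(lambda p: neg_log_likelihood(p[0], data, mc_sample), x0=[0.1])
err = np.sqrt(res.hess_inv[0, 0])                # crude error estimate from the inverse Hessian
print(res.x, err)
```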
366 |
Rizika a opatření při rekonstrukcích dřevěných lidových staveb a konstrukčně srovnatelných novostaveb / Risks and Measures at the Renovation of Vernacular Wooden Structures and Comparable New Buildings
Beníček, Tomáš, January 2016
This diploma thesis focuses on the structural development of massive wooden structures, the requirements on contemporary massive wooden structures, the development of carpentry joints and connecting elements used in the construction and reconstruction of these buildings, and an analysis of the risks connected with the reconstruction of selected historical wooden structures. Another part of the thesis evaluates the possible risks of particular connecting elements used in massive wooden structures using the FMEA method. The main content of the thesis includes a description of selected historical wooden structures, the ways in which they can be damaged, forms of remediation, and a description of possible risks and measures during reconstruction. In conclusion, the thesis compares historical massive wooden structures with comparable new buildings.
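FMEA typically rates each failure mode for severity, occurrence and detectability and ranks the modes by their risk priority number (RPN = severity × occurrence × detection). The thesis's own scoring scales and failure modes are not reproduced here, so the entries and ratings below are purely hypothetical, included only to show how such a ranking is computed.

```python
# Hypothetical FMEA-style ranking of connection failure modes by RPN.
failure_modes = [
    # (failure mode, severity, occurrence, detection) -- illustrative ratings on a 1-10 scale
    ("dovetail joint, moisture decay", 7, 5, 4),
    ("steel plate connector, corrosion", 6, 3, 3),
    ("wooden peg, shear failure", 8, 2, 5),
]

for name, s, o, d in sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True):
    print(f"{name}: RPN = {s * o * d}")
```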
367 |
Aktiv felhantering av loggdata / Active error handling of log data
Åhlander, Mattias, January 2020
The main goal of this project has been to investigate how a message queue can be used to handle error codes in log files more actively. The project has followed the Design Science Research Methodology for development and implementation of the solution. A model of the transaction system was developed and emulated in newly developed applications. Two experiments were performed: the first tested a longer run with intervals between messages, and the second measured how long it takes to send 20 000 messages. The first experiment showed that the message queue was able to handle all messages sent over two hours. The second experiment showed that the system took 14 minutes and 45 seconds to send and handle all messages, which corresponds to a high throughput of 22.5 messages per second without any messages being lost. The implemented consumer application received all messages and successfully counted the number of error codes in the received data. The experiments carried out show that a message queue can be implemented to handle error codes in log files more actively. Future work may include an evaluation of the security of the system, performance comparisons against other message queues, running the experiments on more powerful computers, and applying machine learning to classify the log data. / Målet med det här projektet har varit att undersöka hur en meddelandekö kan användas för att felhantera felkoder i loggfiler mer aktivt. Projektet har följt Design Science Research Methodology för utveckling och implementering av lösningen. En modell av transaktionssystemet togs fram och emulerades i nyutvecklade applikationer. Två experiment utfördes varav det första testade en längre körning med intervall mellan meddelanden och det andra en tidmätning för hur lång tid det tar att skicka 20 000 meddelanden. Det första experimentet visade att meddelandekön klarade av att hantera meddelanden som skickades över två timmar. Det andra experimentet visade att systemet tog 14 minuter och 45 sekunder att skicka och hantera alla meddelanden, vilket gav en hög genomströmning av 22.5 meddelanden per sekund utan att några meddelanden gick förlorade. Den implementerade mottagarapplikationen tog emot alla meddelanden och lyckades räkna upp antalet felkoder som presenterades i den inkomna datan. De experiment som har utförts har bevisat att en meddelandekö kan implementeras för att felhantera felkoder i loggfiler mer aktivt. De framtida arbeten som kan utföras omfattar en utvärdering av säkerheten av systemet, jämförelser av prestanda jämfört med andra meddelandeköer, utföra experimenten på kraftfullare datorer och en implementering av maskininlärning för att klassificera loggdatan.
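The producer side of such a pipeline can be sketched as below, assuming RabbitMQ accessed through the pika client; the thesis does not specify which message queue or client library was used, and the log file name and error filter are hypothetical. A separate consumer would subscribe to the same queue and count error codes as messages arrive.

```python
# Sketch: publish error lines from a log file to a queue for a downstream consumer.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="log_errors", durable=True)

with open("transactions.log") as log_file:        # hypothetical log file
    for line in log_file:
        if "ERROR" in line:                        # naive filter for error entries
            channel.basic_publish(exchange="",
                                  routing_key="log_errors",
                                  body=line.encode())

connection.close()
```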
368 |
ParCam : Applikation till Android för tolkning av parkeringsskyltar / ParCam: An Android application for interpreting parking signs
Forsberg, Tomas, January 2020
It is not always easy to accurately interpret a parking sign. The driver is expected to keep track of what every road sign, direction, prohibition, and amendment means, both by themselves and in combination with each other. In addition, the driver must also keep track of the time, date, if there is a holiday, week number, etc. This can make the driver unsure of the rules, or lead them to interpret the rules incorrectly, which can result in hefty fines or even a towed vehicle. By developing a mobile application that can analyze a photograph of a parking sign and quickly give the driver the verdict, the interpretation process can be made easy. The purpose of this study has been to examine available technology within image and text analysis and then develop a prototype of an Android application that can interpret a photograph of a parking sign and quickly give the correct verdict, with the help of said technology. The constructed prototype will be evaluated partly by user tests to evaluate the application's usability, and partly by functionality tests to evaluate the accuracy of the analysis process. Based on the results from the tests, a conclusion was drawn that the application gave a very informative and clear verdict, which was correct most of the time, but ran into problems with certain signs and under more demanding environmental circumstances. The tests also showed that the interface was perceived as easy to understand and use, though less interaction needed from the user was desired. There is great potential for future development of ParCam, where the focus will be on increasing the automation of the process. / Att tolka en parkeringsskylt korrekt är inte alltid så enkelt. Föraren förväntas ha koll på vad alla vägmärken, anvisningar, förbud, och tillägg betyder, både för sig själva och i kombination med varandra. Dessutom måste föraren även ha koll på tid, datum, ev. helgdag, veckonummer m.m. Detta kan leda till att föraren blir osäker på vad som gäller eller tolkar reglerna felaktigt, vilket kan leda till dryga böter och även bortbogserat fordon. Genom att utveckla en mobilapplikation som kan analysera ett fotografi av en parkeringsskylt och snabbt ge svar kan denna tolkningsprocess underlättas för föraren. Syftet med denna studie har varit att utforska befintliga teknologier inom bild- och textanalys och därefter konstruera en prototyp av en Android-app som med hjälp av denna teknologi samt användarens mobilkamera kunna tolka fotografier av en parkeringsskylt och snabbt ge en korrekt utvärdering. Den konstruerade prototypen kommer att utvärderas dels genom användartester för att testa applikationens användbarhet och dels genom analys av utdata för att mäta analysens träffsäkerhet. Från testerna drogs slutsatsen att applikationen gav ett väldigt tydligt och informativt svar där analysen var korrekt de allra flesta gångerna, men stötte på problem med vissa skyltar och under svårare miljöförhållanden. Testerna visade också att gränssnittet upplevdes lätt att använda, men skulle helst kräva mindre inblandning från användaren. Det finns stor utvecklingspotential för ParCam, där fokus kommer att läggas på utökad automatisering av processen.
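To illustrate the text-recognition step only, the sketch below runs OCR on a sign photo and pulls out time intervals. ParCam itself is an Android application, so this Python/pytesseract example, the file name and the regular expression are assumptions used for illustration, not the app's implementation.

```python
# Illustrative OCR step on a parking-sign photo (hypothetical file name).
import re
import pytesseract
from PIL import Image

text = pytesseract.image_to_string(Image.open("parking_sign.jpg"))
hours = re.findall(r"\d{1,2}\s*-\s*\d{1,2}", text)   # e.g. "8-18" time spans on the sign
print(text)
print("time intervals found:", hours)
```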
369 |
Komponera mera! : En fenomenologisk självstudie av att komponera under tidspress / Compose more! : A phenomenological self-study on composing under time pressure
Wilson, Philip, January 2020
I detta självständiga arbete undersöks en lärandeprocess i musikalisk komposition under tidspress. Sex kompositioner skrivs utifrån tre olika tidsbegränsningar: fyra dagar, två dagar och tre timmar. Till varje deadline skrivs två kompositioner. Studien utgår ifrån ett fenomenologiskt perspektiv med stöd i forskning och litteratur kring musikalisk komposition och att jobba under tidspress. Loggbok och videoobservation används som dokumentationsmetoder och för att analysera materialet används metoden tematisk analys. Resultatet besvarar de två frågeställningarna: Hur upplever jag att arbeta med tidspress? Vilka metoder använder jag för att klara tidsfristen? Resultatet delas in till fyra teman utefter valda tidsbegränsningen: Komposition på fyra dagar, komposition på två dagar och komposition på tre timmar. Dessa fyra teman innehar underrubrikerna: kompositionens metoder och hur tidspress upplevs i kompositionen. I diskussionen sätts dessa resultat i relation till vald litteratur, forskning och fenomenologiskt perspektiv i ett diskussionskapitel. / In this self-study, I explore my own learning process in musical composition under time pressure. Six compositions are written under three different time limits: four days, two days and three hours. Two compositions are written for each deadline. The study is based on a phenomenological perspective, supported by literature and research on musical composition and working under time pressure. The study uses a log book and video observations as documentation methods, and thematic analysis is used to analyze the collected material. The results answer two questions: how do I experience working under time pressure, and which methods do I use to meet the deadlines? The results are divided into four themes based on the chosen time limits: composition in four days, composition in two days and composition in three hours. These themes have the subheadings: the composition's methods and how time pressure is experienced while composing. In the discussion, these results are set in relation to the selected literature, research and the phenomenological perspective.
370 |
Clustering Generic Log Files Under Limited Data Assumptions / Klustring av generiska loggfiler under begränsade antaganden
Eriksson, Håkan, January 2016
Complex computer systems are often prone to anomalous or erroneous behavior, which can lead to costly downtime while the systems are diagnosed and repaired. One source of information for diagnosing the errors and anomalies is log files, which are often generated in vast and diverse amounts. However, the size and semi-structured nature of the log files make manual analysis generally infeasible. Some automation is desirable to sift through the log files and find the source of the anomalies or errors. This project aimed to develop a generic algorithm that can cluster diverse log files in accordance with domain expertise. The results show that the developed algorithm performs well compared to manual clustering, even under relaxed data assumptions. / Komplexa datorsystem är ofta benägna att uppvisa anormalt eller felaktigt beteende, vilket kan leda till kostsamma driftstopp under tiden som systemen diagnosticeras och repareras. En informationskälla till feldiagnosticeringen är loggfiler, vilka ofta genereras i stora mängder och av olika typer. Givet loggfilernas storlek och semistrukturerade utseende så blir en manuell analys orimlig att genomföra. Viss automatisering är önskvärd för att sovra bland loggfilerna så att källan till felen och anormaliteterna blir enklare att upptäcka. Det här projektet syftade till att utveckla en generell algoritm som kan klustra olikartade loggfiler i enlighet med domänexpertis. Resultaten visar att algoritmen presterar väl i enlighet med manuell klustring även med färre antaganden om datan.
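One common way to cluster heterogeneous log lines is to vectorize them and apply a density-based clustering algorithm. The sketch below (TF-IDF over word tokens plus DBSCAN) is an assumption for illustration, not the algorithm developed in the thesis; the sample lines and parameter values are made up.

```python
# Cluster log lines by textual similarity: lines with similar wording share a label.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import DBSCAN

log_lines = [
    "2016-03-01 12:00:01 INFO  service started on port 8080",
    "2016-03-01 12:00:05 ERROR connection refused by 10.0.0.5",
    "2016-03-01 12:01:10 ERROR connection refused by 10.0.0.7",
    "2016-03-01 12:02:00 INFO  service started on port 8081",
]

features = TfidfVectorizer(token_pattern=r"[A-Za-z]+").fit_transform(log_lines)
labels = DBSCAN(eps=0.8, min_samples=1, metric="cosine").fit_predict(features)
print(labels)
```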