  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
381

Visual Analysis of Form and Function in Computational Biology

Wiegreffe, Daniel 03 July 2019 (has links)
In recent years, the amount of available data in the field of computational biology has steadily increased. To analyze these data, bioinformaticians have developed various algorithms to process them efficiently. Moreover, computational models were developed to predict, for instance, biological relationships between species. Furthermore, the prediction of properties such as the structure of certain biological molecules is modeled by complex algorithms. Despite these advances in handling such complicated tasks with automated workflows and a huge variety of freely available tools, the expert still needs to supervise the data analysis pipeline, inspecting the quality of both the input data and the results. Additionally, choosing appropriate parameters for a model is quite involved. Visual support puts the expert into the data analysis loop by providing visual encodings of the data and the analysis results together with interaction facilities. To meet the requirements of the experts, the visualizations usually have to be adapted to the application purpose, or completely new representations have to be developed. Furthermore, it is necessary to combine these visualizations with the experts' algorithms to prepare the data. These in-situ visualizations are needed due to the amount of data handled within the analysis pipeline in this domain. In this thesis, algorithms and visualizations are presented that were developed in two different research areas of computational biology. On the one hand, the multi-replicate peak-caller Sierra Platinum was developed, which is capable of predicting significant regions of histone modifications occurring in genomes based on experimentally generated input data. This algorithm can use several input data sets simultaneously to calculate statistically meaningful results. Multiple quality measurements and visualizations were integrated into the data analysis pipeline to support the analyst.
Based on these in-situ visualizations, the analyst can modify the parameters of the algorithm to obtain the best results for a given input data set. Furthermore, Sierra Platinum and related algorithms were benchmarked against an artificial data set to evaluate their performance under specific conditions of the input data, e.g., low read quality or undersequenced data. It turned out that Sierra Platinum achieved the best results in every test scenario. Additionally, the performance of Sierra Platinum was evaluated with experimental data, confirming existing knowledge. It should be noted that the results of the other algorithms seemed to contradict this knowledge. On the other hand, this thesis describes two new visualizations for RNA secondary structures. First, the interactive dot plot viewer iDotter is described, which visualizes RNA secondary structure predictions as a web service. Several interaction techniques were implemented that support the analyst in exploring RNA secondary structure dot plots. iDotter provides an API to share or archive annotated dot plots. Additionally, the API enables the embedding of iDotter in existing data analysis pipelines. Second, the algorithm RNApuzzler is presented, which generates (outer-)planar graph drawings for all RNA secondary structure predictions. Previously presented algorithms failed to always produce crossing-free drawings. First, several drawing constraints were derived from the literature. Based on these, the algorithm RNAturtle was developed, which did not always produce planar drawings. Therefore, some drawing constraints were relaxed and additional drawing constraints were established. Building on these modified constraints, RNApuzzler was developed. It takes the drawing generated by RNAturtle as input and resolves the possible intersections of the graph. Due to the resolving mechanism, modified loops can become very large during the intersection resolving step. Therefore, an optimization was developed.
During a post-processing step, the radii of the heavily modified loops are reduced to a minimum. Based on the constraints and the intersection resolving mechanism, it can be shown that RNApuzzler is able to produce planar drawings for any RNA secondary structure. Finally, the results of RNApuzzler are compared to those of other algorithms.
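The planarity guarantee above rests on the fact that pseudoknot-free RNA secondary structures have non-crossing base pairs. A minimal sketch of that crossing test on extended dot-bracket strings may illustrate the property (the function names and bracket handling are illustrative assumptions, not part of RNAturtle or RNApuzzler):

```python
def base_pairs(structure, open_close=("([", ")]")):
    """Extract (i, j) base pairs from extended dot-bracket notation,
    where '[' / ']' encode pseudoknotted pairs."""
    stacks = {o: [] for o in open_close[0]}
    close_to_open = dict(zip(open_close[1], open_close[0]))
    pairs = []
    for i, ch in enumerate(structure):
        if ch in stacks:
            stacks[ch].append(i)
        elif ch in close_to_open:
            pairs.append((stacks[close_to_open[ch]].pop(), i))
    return sorted(pairs)


def has_crossing(pairs):
    """Two pairs (i, j) and (k, l) cross iff i < k < j < l; a structure
    with no crossing pairs admits a crossing-free (planar) drawing."""
    for a, (i, j) in enumerate(pairs):
        for k, l in pairs[a + 1:]:
            if i < k < j < l:
                return True
    return False
```

For the nested structure `((..))` no crossing is reported, while the pseudoknot `([)]` produces one; RNApuzzler's contribution is guaranteeing an intersection-free drawing for the nested case.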
382

Methods for Ionization Current Interpretation to be Used in Ignition Control

Eriksson, Lars January 1995 (has links)
It is desirable to measure engine performance for several reasons, e.g., when computing the spark advance setting in spark-ignited (SI) engines. Two methods of measuring the performance, among others, are measuring the pressure and measuring the ionization current. Since the ionization current reflects the pressure, it is interesting to study whether it is possible to extract information about the combustion and pressure from the ionization current. Three different algorithms for extracting information from the ionization current are studied. The first algorithm, ion peak, searches for the "second peak" in the ionization signal. The second algorithm computes the centroid. In the third algorithm, a model of the ionization signal structure is fitted to the ionization signal. The algorithms are tested under four operating conditions. The first algorithm uses the local information around the second peak and is sensitive to noise. The second algorithm uses a larger portion of the ionization signal, which is more stable. It provides promising results for engines with a clear post-flame phase. The third algorithm, ion structure analysis, fits an ideal model to the ionization signal. The algorithm provides promising results, but the present implementation requires much computational effort.
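The centroid computation of the second algorithm can be sketched as a first moment of the sampled current over crank angle (the sampling parameters and function name are illustrative, not taken from the thesis):

```python
def signal_centroid(samples, start_angle, angle_step):
    """Compute the centroid (first moment) of an ionization current
    trace, expressed in crank-angle degrees.

    samples: ionization current values, uniformly sampled over crank angle.
    """
    total = sum(samples)
    if total == 0:
        raise ValueError("signal has zero area")
    weighted = sum((start_angle + i * angle_step) * s
                   for i, s in enumerate(samples))
    return weighted / total
```

A symmetric pulse centred at 12 degrees crank angle, for example, yields a centroid of 12 degrees; because the centroid integrates over a large portion of the signal, it is less noise-sensitive than locating the second peak directly.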
383

Normalisation of Early Isometric Force Production as a Percentage of Peak Force, During Multi-Joint Isometric Assessment

Comfort, Paul, Dos'Santos, Thomas, Jones, Paul A., McMahon, John J., Suchomel, Timothy J., Bazyler, Caleb D., Stone, Michael H. 01 January 2019 (has links)
Purpose: To determine the reliability of early force production (50, 100, 150, 200, and 250 ms) relative to peak force (PF) during an isometric mid-thigh pull and to assess the relationships between these variables. Methods: Male collegiate athletes (N = 29; age 21.1 [2.9] y, height 1.71 [0.07] m, body mass 71.3 [13.6] kg) performed isometric mid-thigh pulls during 2 separate testing sessions. Net PF and net force produced at each epoch were calculated. Within- and between-session reliabilities were determined using intraclass correlation coefficients and coefficient of variation percentages. In addition, Pearson correlation coefficients and coefficients of determination were calculated to examine the relationships between PF and time-specific force production. Results: Net PF and time-specific force demonstrated very high to almost perfect reliability both within and between sessions (intraclass correlation coefficients .82–.97; coefficient of variation percentages 0.35%–1.23%). Similarly, time-specific force expressed as a percentage of PF demonstrated very high to almost perfect reliability both within and between sessions (intraclass correlation coefficients .76–.86; coefficient of variation percentages 0.32%–2.51%). Strong to nearly perfect relationships (r = .615–.881) exist between net PF and time-specific net force, with relationships improving over longer epochs. Conclusion: Based on the smallest detectable difference, a change in force at 50 ms expressed relative to PF of >10%, and changes in early force production (100, 150, 200, and 250 ms) expressed relative to PF of >2%, should be considered meaningful. Expressing early force production as a percentage of PF is reliable and may provide greater insight into the adaptations to the previous training phase than PF alone.
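The two headline quantities, time-specific force expressed as a percentage of PF and the coefficient of variation used for reliability, can be sketched as follows (a simplified illustration; the study's intraclass correlation analysis is not reproduced here, and the function names are assumptions):

```python
import statistics


def force_as_percent_pf(force_at_epoch, peak_force):
    """Express time-specific net force as a percentage of net peak force."""
    return 100.0 * force_at_epoch / peak_force


def cv_percent(trials):
    """Coefficient of variation (%) across repeated trials:
    sample standard deviation divided by the mean."""
    mean = statistics.mean(trials)
    return 100.0 * statistics.stdev(trials) / mean
```

For example, 500 N at 100 ms against a 2000 N peak is 25% of PF, and three trial values of 100, 102, and 98 give a CV of 2%, well within the ranges the study reports as reliable.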
384

TEACHING PERSPECTIVE TAKING SKILLS TO CHILDREN WITH DEVELOPMENTAL DISABILITIES THROUGH DEICTIC RELATIONAL FRAMES

White, Carrie 01 December 2019 (has links)
No description available.
385

Analysis of shear strength of rock joints with PFC2D

Lazzari, Elisa January 2013 (has links)
Joints are the main features encountered in rock, and sliding of rock blocks on joints is classified as the principal source of instability in underground excavations. In this regard, the joints' peak shear strength is the controlling parameter. However, given the difficulty in estimating it, shear tests are often performed. Such tests are quite expensive and time consuming, and it would therefore be valuable if shear tests could be performed artificially using numerical models. The objective of this study is to demonstrate the possibility of performing virtual numerical shear tests in a PFC2D environment that resemble laboratory tests. A numerical model of a granite rock joint has been created by means of a calibration process. Both the intact rock microparameters and the smooth joint parameters have been calibrated against macroparameters derived from shear tests performed in the laboratory. A new parameter, the length ratio, is introduced, which takes into account the effective length of the smooth joint compared to the theoretical one. The normal and shear stiffnesses, the cohesion, and the tensile force need to be scaled by the length ratio. Four simple regular joint profiles have been tested in the PFC2D environment. The analysis shows good results from both a qualitative and a quantitative point of view. The difference in peak shear strength with respect to the value computed with Patton's formula is on the order of 1%, which indicates good accuracy of the model. In addition, four profiles of one real rough mated joint have been tested. From the scanned surface data, a two-dimensional profile has been extracted at four different resolutions. In this case, however, interlocking of particles along the smooth joint occurs, giving rise to an unrealistic distribution of normal and shear forces. A possible explanation of the problem is discussed based on recent developments in the study of numerical shear tests with PFC2D.
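The comparison against Patton's formula refers to the classic bilinear criterion for regular saw-tooth joint profiles; its low normal-stress branch can be sketched as follows (variable names are illustrative):

```python
import math


def patton_peak_shear_strength(normal_stress, basic_friction_deg, asperity_deg):
    """Patton's low normal-stress branch: tau_p = sigma_n * tan(phi_b + i),
    where phi_b is the basic friction angle and i the asperity inclination."""
    return normal_stress * math.tan(math.radians(basic_friction_deg + asperity_deg))
```

With phi_b = 30 degrees and i = 15 degrees, a unit normal stress gives a peak shear strength of tan(45 degrees) = 1.0; the thesis reports PFC2D results within about 1% of such analytical values for the regular profiles.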
386

Peak shaving optimisation in school kitchens : A machine learning approach

Alhoush, George, Edvardsson, Emil January 2022 (has links)
With the increasing electrification of today's society, the electrical grid is experiencing growing pressure from demand. One factor that affects the stability of the grid is the time intervals at which power demand is at its highest, referred to as peak demand. This project was conducted to reduce peak demand through a process called peak shaving, relieving some of this pressure through the use of batteries and renewable energy. By doing so, the users of such systems could reduce the installation cost of their electrical infrastructure as well as their electricity bills. Peak shaving in this project was implemented using machine learning algorithms that predicted the daily power consumption in school kitchens with the help of their food menus; the predictions were then fed to an algorithm that steered a battery accordingly. The project findings are compared to another system installed by a company to decide whether the algorithm has the required accuracy and performance. The results of the simulations were promising, as the algorithm was able to detect the vast majority of the peaks and perform peak shaving intelligently. Based on the graphs and values presented in this report, it can be concluded that the algorithm is ready to be implemented in the real world, with the potential to contribute to a long-term sustainable electrical grid while saving money for the user.
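The battery-steering step can be sketched as a greedy threshold dispatch driven by the predicted load (a simplified stand-in for illustration; the thesis couples this with machine-learned consumption forecasts, and the function and parameter names are assumptions):

```python
def peak_shave(load_kw, threshold_kw, capacity_kwh, dt_h=1.0):
    """Greedy battery dispatch: discharge when the predicted load exceeds
    the threshold, recharge (up to the threshold) when it is below.
    Returns the resulting grid demand profile in kW."""
    soc = capacity_kwh  # state of charge, start full
    grid = []
    for load in load_kw:
        if load > threshold_kw:
            # shave the peak, limited by the remaining stored energy
            discharge = min(load - threshold_kw, soc / dt_h)
            soc -= discharge * dt_h
            grid.append(load - discharge)
        else:
            # recharge without pushing grid demand above the threshold
            charge = min(threshold_kw - load, (capacity_kwh - soc) / dt_h)
            soc += charge * dt_h
            grid.append(load + charge)
    return grid
```

For a predicted load of [5, 12, 6] kW with a 10 kW threshold and a 5 kWh battery, the grid profile becomes [5, 10, 8] kW: the 12 kW peak is shaved to the threshold and the battery recharges afterwards.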
387

Implications of Geochemistry and Textures of Titanite for the Geologic Histories of the Notch Peak Intrusion and Little Cottonwood Stock, Utah

Henze, Porter 27 July 2020 (has links)
Textural and compositional variations in titanite, along with whole-rock geochemistry, provide constraints on the emplacement and cooling histories of two plutons: the Jurassic Notch Peak pluton and the Oligocene Little Cottonwood stock, both in Utah. Titanite textures observed with back-scattered electron (BSE) imaging, along with their compositions, were used to determine four periods of growth: cores, rims, interstitial overgrowths, and secondary replacements. Brightness in BSE images correlates mostly with rare earth elements (REE). REE patterns in cores and rims are compositionally similar in both plutons, although the Notch Peak intrusion tends to be slightly more enriched in REE. Overgrowths and secondary replacements typically have lower concentrations of REE and Fe and higher Al, Mn, F, and U. They also have δ18O values similar to those of primary titanite, indicating alteration and recrystallization from exsolved magmatic fluids rather than meteoric sources. In the Notch Peak intrusion, titanite grains usually have simple, oscillatory zoned textures, with cores that include bright sector zones. These are overprinted by secondary titanite that grows within and replaces the primary titanite grain. At some localities, Notch Peak titanites have been hydrothermally altered to fine-grained aggregates of rutile or brookite, magnetite, quartz, and plagioclase. These observations indicate a simple cooling path after magmatic intrusion, followed by hydrothermal alteration, for the Notch Peak intrusion. The Little Cottonwood stock contains titanite grains that are distinctly different from those in the Notch Peak intrusion. They typically contain a distinct patchy core with rounded, resorbed ilmenite inclusions. Surrounding the core is a mantle of oscillatory zoned titanite. On many grains, narrow rims of secondary overgrowths are observed, as well as interstitial titanite growing between chloritized biotite sheets.
The cores of these titanite grains suggest that a more reduced, ilmenite-rich magma mixed into an oxidized felsic magma, destabilizing existing ilmenites and forming the patchy texture. This was followed by hydrothermal overgrowths and interstitial titanite, as at Notch Peak, but to a lesser extent. Although both plutons had similar emplacement settings (subduction-related intrusion into Paleozoic limestone), their whole-rock and titanite chemistries differ. The Notch Peak intrusion is more chemically evolved and less mafic than the Little Cottonwood stock. The patchy cores with Fe-Ti oxide inclusions found in the Little Cottonwood stock, along with the abundance of mafic enclaves in the pluton, provide evidence for magma mixing, while no such evidence is observed in the Notch Peak intrusion.
388

Data Processing Algorithms in Wireless Sensor Networks for Structural Health Monitoring

Danna, Nigatu Mitiku, Mekonnen, Esayas Getachew January 2012 (has links)
The gradual deterioration and failure of old buildings, bridges, and other civil engineering structures has created the need for Structural Health Monitoring (SHM) systems as a means to monitor the health of structures. Dozens of sensing, processing, and monitoring mechanisms have been implemented and widely deployed with wired sensors. Wireless sensor networks (WSNs), on the other hand, are networks of large numbers of low-cost wireless sensor nodes that communicate over a wireless medium. The complexity and high cost of the widely used traditional wired SHM systems have motivated their replacement with WSNs. However, wireless sensor nodes have memory and power supply limitations, and many options have been proposed to address this problem and preserve the long life of the network. This is why data processing algorithms in WSNs focus mainly on the efficient utilization of these scarce resources. In this thesis, we design a low-power and memory-efficient data processing algorithm using an in-place radix-2 integer Fast Fourier Transform (FFT). This algorithm requires integer-valued inputs; it thereby increases memory efficiency by more than 40% and greatly reduces processor power consumption compared to the traditional floating-point implementation. A standard-deviation-based peak picking algorithm is then applied to measure the natural frequency of the structure. The algorithms, together with Contiki, a lightweight open source operating system for networked embedded systems, are loaded onto a Z1 Zolertia sensor node. Analog Devices' ADXL345 digital accelerometer on board is used to collect vibration data. The bridge model used to test the target algorithm is a simply supported beam in the lab.
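The standard-deviation-based peak picking step can be sketched as thresholding local maxima of the magnitude spectrum at the mean plus a multiple of the standard deviation (the exact rule and names here are assumptions, not the thesis implementation):

```python
import statistics


def pick_peaks(spectrum, num_std=2.0):
    """Standard-deviation-based peak picking: return indices of local
    maxima of the magnitude spectrum that exceed
    mean + num_std * standard deviation."""
    mean = statistics.mean(spectrum)
    threshold = mean + num_std * statistics.pstdev(spectrum)
    peaks = []
    for i in range(1, len(spectrum) - 1):
        if (spectrum[i] > threshold
                and spectrum[i] >= spectrum[i - 1]
                and spectrum[i] > spectrum[i + 1]):
            peaks.append(i)
    return peaks
```

The index of the picked bin, scaled by the sampling rate over the FFT length, would give the natural frequency estimate of the monitored structure.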
389

Machine Vision Inspection of the Lapping Process in the Production of Mass Impregnated High Voltage Cables

Nilsson, Jim, Valtersson, Peter January 2018 (has links)
Background. Mass impregnated high voltage cables are used in, for example, submarine electric power transmission. One of the production steps of such cables is the lapping process, in which several hundred layers of special purpose paper are wrapped around the conductor of the cable. It is important for the mechanical and electrical properties of the finished cable that the paper is applied correctly; however, there currently exists no reliable way of continuously ensuring that the paper is applied correctly. Objective. The objective of this thesis is to develop a prototype of a cost-effective machine vision system which monitors the lapping process and detects and records any errors that may occur during the process, with an accuracy of at least one tenth of a millimetre. Methods. The requirements of the system are specified and suitable hardware is identified. The errors are measured using a method in which the images are projected down to one axis, together with other signal processing methods. Experiments are performed in which the accuracy and performance of the system are tested in a controlled environment. Results. The results show that the system is able to detect and measure errors accurately down to one tenth of a millimetre while operating at a frame rate of 40 frames per second. The hardware cost of the system is less than €200. Conclusions. A cost-effective machine vision system capable of performing measurements accurate down to one tenth of a millimetre can be implemented using the inexpensive Raspberry Pi 3 and Raspberry Pi Camera Module V2.
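The axis-projection idea can be sketched on a toy grayscale frame: summing each column collapses the image to a 1-D intensity profile, and the strongest intensity step locates an edge (a simplified illustration with assumed function names; the prototype's actual processing chain is not reproduced):

```python
def project_to_axis(image):
    """Collapse a 2-D grayscale image (list of rows) onto the horizontal
    axis by summing each column, yielding a 1-D intensity profile."""
    rows, cols = len(image), len(image[0])
    return [sum(image[r][c] for r in range(rows)) for c in range(cols)]


def edge_position(profile):
    """Locate the strongest intensity transition in the projected
    profile: the index of the maximum absolute first difference."""
    diffs = [abs(profile[i + 1] - profile[i]) for i in range(len(profile) - 1)]
    return diffs.index(max(diffs))
```

In a frame where dark lapping paper meets bright conductor, the projected profile jumps at the edge column; subpixel refinement of this step would be needed to reach tenth-of-a-millimetre accuracy.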
390

Is anaerobic performance influenced by music in moderately trained individuals?

Ifrén, Anette January 2021 (has links)
No description available.
