31

WORKSHOP "MOBILITÄT"

Anders, Jörg 12 June 2001 (has links)
Joint workshop of the University Computing Centre (Universitaetsrechenzentrum) and the Chair of "Computer Networks and Distributed Systems" of the Faculty of Computer Science at TU Chemnitz. Workshop topic: mobility.
32

Distributions Of Fiber Characteristics As A Tool To Evaluate Mechanical Pulps

Reyier Österling, Sofia January 2015 (has links)
Mechanical pulps are used in paper products such as magazine or news grade printing papers and paperboard. Mechanical pulping gives a high yield; nearly everything in the tree except the bark ends up in the paper. This means that mechanical pulping consumes much less wood than chemical pulping, especially per unit area of printing surface. A drawback of mechanical pulp production is the large amount of electrical energy needed to separate and refine the fibers to a given fiber quality. Mechanical pulps are often produced from slow-growing spruce trees of forests in the northern hemisphere, resulting in long, slender fibers that are well suited for mechanical pulp products. These fibers vary widely in geometry, mainly wall thickness and width, depending on seasonal variations and growth conditions. Earlywood fibers typically have thin walls and latewood fibers thick ones. The background to this study was that a more detailed fiber characterization, involving evaluations of distributions of fiber characteristics, may give improved possibilities to optimize the mechanical pulping process and thereby reduce the total electric energy needed to reach a given quality of the pulp and final product. This would result in improved competitiveness as well as less environmental impact. This study evaluated the relation between fiber characteristics in three types of mechanical pulps made from Norway spruce (Picea abies): thermomechanical pulp (TMP), stone groundwood pulp (SGW) and chemithermomechanical pulp (CTMP). In addition, the influence of fibers from these pulp types on sheet characteristics, mainly tensile index, was studied. A comparatively rapid method was presented for evaluating the propensity of each fiber to form sheets of high tensile index, using raw data from a commercially available fiber analyzer (FiberLabTM).
The developed method gives novel opportunities to evaluate the effect on the fibers of each stage in the mechanical pulping process, and has the potential to be applied on-line to steer the refining and pulping process according to the characteristics of the final pulp and the quality of the final paper. The long fiber fraction is important for the properties of the whole pulp. It was found that fiber wall thickness and external fibrillation were the fiber characteristics that contributed the most to the tensile index of the long fiber fractions in five mechanical pulps (three TMPs, one SGW, one CTMP). The tensile index of handsheets of the long fiber fractions could be predicted by linear regressions using a combination of fiber wall thickness and degree of external fibrillation. The predicted tensile index was denoted BIN, short for Bonding ability INfluence. This resulted in the same linear correlation between BIN and tensile index for 52 samples of the five mechanical pulps studied, each fractionated into five streams (plus feed) in full-size hydrocyclones. The Bauer McNett P16/R30 (passed 16 mesh wire, retained on a 30 mesh wire) and P30/R50 fractions of each stream were used for the evaluation. The fibers of the SGW had thicker walls and a higher degree of external fibrillation than those of the TMPs and the CTMP, which placed the correlation between BIN and tensile index for the P30/R50 fraction of the SGW on a different level than for the other pulp samples. A BIN model based on averages weighted by each fiber's wall volume instead of arithmetic averages took the fiber wall thickness of the SGW into account and gave one uniform correlation between BIN and tensile index for all pulp samples (12 samples for constructing the model, 46 for validating it). If the BIN model is used for predicting averages of the tensile index of a sheet, a model based on wall-volume-weighted data is recommended.
To be able to produce BIN distributions in which the influence of the length or wall volume of each fiber is taken into account, the BIN model is currently based on arithmetic averages of fiber wall thickness and fibrillation. Fiber width used as a single factor reduced the accuracy of the BIN model. Wall-volume-weighted averages of fiber width also resulted in a completely changed ranking of the five hydrocyclone streams compared to arithmetic averages, for two of the five pulps. This was not seen when fiber width was combined with fiber wall thickness into the factor "collapse resistance index". In order to avoid too high an influence of fiber wall thickness, and until the influence of fiber width on BIN and the measurement of fiber width are further evaluated, it is recommended to use length-weighted or arithmetic distributions of BIN and other fiber characteristics. A comparably fast method for evaluating the distribution of fiber wall thickness and degree of external fibrillation with high resolution showed that the fiber wall thickness of the latewood fibers was reduced by increasing the refining energy in a double disc refiner operated at four levels of specific energy input in a commercial TMP production line. This was expected but could not be seen from average values; it was concluded that fiber characteristics should in many cases be evaluated as distributions and not only as averages. BIN distributions of various types of mechanical pulps from Norway spruce showed results that were expected based on knowledge of the particular pulps and processes. Measurements of mixtures of a news- and an SC (super calendered) grade TMP showed a gradual increase in high-BIN fibers with higher amounts of SC grade TMP.
The BIN distributions also revealed differences between the pulps that were not seen from average fiber values, for example that the shape of the BIN distributions was similar for two pulps that originated from conical disc refiners, a news grade TMP and the board grade CTMP, although the distributions were on different BIN levels. The SC grade TMP and the SC grade SGW had similar levels of tensile index, but the SGW contained some fibers of very low BIN values, which may influence the characteristics of the final paper, for example strength, surface and structure. This shows that the BIN model has the potential to be applied to either the whole or parts of a papermaking process based on mechanical or chemimechanical pulping; the evaluation of distributions of fiber characteristics can contribute to increased knowledge about the process and opportunities to optimize it.
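The BIN approach described above, a linear regression from (optionally wall-volume-weighted) averages of fiber wall thickness and external fibrillation to the tensile index of handsheets, can be sketched as follows. This is an illustrative reconstruction, not the thesis's fitted model: the coefficients, the synthetic fiber data and the wall-volume proxy are all invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_averages(wall, fibr, wall_volume=None):
    """Per-sample averages of fiber wall thickness and external
    fibrillation; arithmetic if wall_volume is None, otherwise
    weighted by each fiber's wall volume."""
    return (np.average(wall, weights=wall_volume),
            np.average(fibr, weights=wall_volume))

# Synthetic stand-in for 12 pulp samples; real inputs would be
# fiber-level analyzer data. Tensile index is made to fall with wall
# thickness and rise with fibrillation (coefficients invented).
X, y = [], []
for _ in range(12):
    mu_w, mu_f = rng.uniform(2.0, 3.5), rng.uniform(8.0, 16.0)
    wall = rng.normal(mu_w, 0.5, 400).clip(0.5)    # wall thickness, um
    fibr = rng.normal(mu_f, 3.0, 400).clip(0.0)    # fibrillation, %
    wvol = wall * rng.normal(150.0, 20.0, 400).clip(10.0)  # crude wall-volume proxy
    w_avg, f_avg = sample_averages(wall, fibr, wvol)
    X.append([w_avg, f_avg, 1.0])
    y.append(-8.0 * w_avg + 1.5 * f_avg + 40.0 + rng.normal(0.0, 0.5))

# Fit the linear "BIN" regression: tensile index ~ wall + fibrillation.
coef, *_ = np.linalg.lstsq(np.asarray(X), np.asarray(y), rcond=None)

def bin_index(wall_avg, fibr_avg):
    """Predicted tensile index (BIN) for a pulp sample."""
    return coef[0] * wall_avg + coef[1] * fibr_avg + coef[2]
```

In this toy setup the fitted coefficients recover the negative dependence on wall thickness and positive dependence on fibrillation, mirroring the sign structure the abstract reports for the real pulps.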
33

Frequency Analysis of Floods - A Nonparametric Approach

Santhosh, D January 2013 (has links) (PDF)
Floods cause widespread damage to property and life in different parts of the world. Hence there is a paramount need to develop effective methods for design flood estimation to alleviate the risk associated with these extreme hydrologic events. Methods conventionally considered for analysis of floods focus on estimation of a continuous frequency relationship between the peak flow observed at a location and its corresponding exceedance probability, depicting the plausible conditions in the planning horizon. These methods are commonly known as at-site flood frequency analysis (FFA) procedures. The available FFA procedures can be classified as parametric and nonparametric. Parametric methods are based on the assumption that the sample (at-site data) is drawn from a population with known probability density function (PDF). Those procedures have uncertainty associated with the choice of PDF and the method for estimation of its parameters. Moreover, parametric methods are ineffective in modeling flood data if multimodality is evident in their PDF. To overcome these shortcomings, a few studies have attempted kernel-based nonparametric (NP) methods as an alternative to parametric methods. The NP methods are data driven and can characterize the uncertainty in data without prior assumptions as to the form of the PDF. Conventional kernel methods have shortcomings associated with the boundary leakage problem and the normal reference rule (considered for estimation of bandwidth), which have implications for flood quantile estimates. To alleviate this problem, the focus of NP flood frequency analysis has been on development of new kernel density estimators (kdes). Another issue in FFA is that information on the whole hydrograph (e.g., time to the peak flow, volume of the flood flow and duration of the flood event) is needed in addition to peak flow for certain applications. An option is to perform frequency analysis on each of the variables independently.
However, these variables are not independent, and hence there is a need to perform multivariate analysis to construct multivariate PDFs and use the corresponding cumulative distribution functions (CDFs) to arrive at estimates of characteristics of the design flood hydrograph. In this perspective, the recent focus of flood frequency analysis studies has been on development of methods to derive joint distributions of flood hydrograph related variables in a nonparametric setting. Further, in real-world scenarios it is often necessary to estimate design flood quantiles at target locations that have limited or no data. Regional Flood Frequency Analysis (RFFA) procedures have been developed for use in such situations. These procedures involve use of a regionalization procedure for identification of a homogeneous group of watersheds that are similar to the watershed of the target site in terms of flood response. Subsequently regional frequency analysis (RFA) is performed, wherein the information pooled from the group (region) forms the basis for frequency analysis to construct a CDF (growth curve) that is subsequently used to arrive at quantile estimates at the target site. Though there are various procedures for RFFA, they are largely confined to a univariate framework considering a parametric approach as the basis to arrive at the required quantile estimates. Motivated by these findings, this thesis concerns the development of linear diffusion process based adaptive kernel density estimator (D-kde) methodologies for at-site as well as regional FFA in univariate as well as bivariate settings. The D-kde alleviates the boundary leakage problem and also avoids the normal reference rule, estimating the optimal bandwidth with the Botev-Grotowski-Kroese estimator (BGKE).
The potential of the proposed methodologies in both univariate and bivariate settings is demonstrated by application to synthetic data sets of various sizes drawn from known unimodal and bimodal parametric populations, and to real world data sets from India, USA, United Kingdom and Canada. In the context of at-site univariate FFA (considering peak flows), the performance of D-kde was found to be better when compared to four parametric distribution based methods (Generalized Extreme Value, Generalized Logistic, Generalized Pareto, Generalized Normal), thirty-two 'kde and bandwidth estimator' combinations that resulted from application of four commonly used kernels in conjunction with eight bandwidth estimators, and a local polynomial based estimator. In the context of at-site bivariate FFA considering 'peak flow-flood volume' and 'flood duration-flood volume' bivariate combinations, the proposed D-kde based methodology was shown to be effective when compared to seven commonly used copulas (Gumbel-Hougaard, Frank, Clayton, Joe, Normal, Plackett, and Student's t copulas) and a Gaussian kernel in conjunction with conventional as well as BGKE bandwidth estimators. Sensitivity analysis indicated that selection of the optimum number of bins is critical in implementing D-kde in a bivariate setting. In the context of univariate regional flood frequency analysis (RFFA) considering peak flows, a methodology based on D-kde and Index-flood methods is proposed and its performance is shown to be better than that of the widely used L-moment and Index-flood based method ('regional L-moment algorithm') through Monte-Carlo simulation experiments on homogeneous as well as heterogeneous synthetic regions, and through a leave-one-out cross validation experiment performed on data sets pertaining to 54 watersheds in the Godavari river basin, India.
In this context, four homogeneous groups of watersheds are delineated in the Godavari river basin using kernel principal component analysis (KPCA) in conjunction with Fuzzy c-means cluster analysis in an L-moment framework, as an improvement over the heterogeneous regions in the area (river basin) that are currently being considered by the Central Water Commission, India. In the context of bivariate RFFA two methods are proposed. They involve forming site-specific pooling groups (regions) based on either an L-moment based bivariate homogeneity test (R-BHT) or a bivariate Kolmogorov-Smirnov test (R-BKS), and RFA based on D-kde. Their performance is assessed by application to data sets pertaining to stations in the conterminous United States. Results indicate that the R-BKS method is better than R-BHT in predicting quantiles of bivariate flood characteristics at ungauged sites, although the size of pooling groups formed using R-BKS is, in general, smaller than the size of those formed using R-BHT. In general, the performance of the methods is found to improve with increasing size of the pooling groups. Overall the results indicate that the D-kde always yields a bona fide PDF (and CDF) in the context of univariate as well as bivariate flood frequency analysis, as the probability density is nonnegative for all data points and integrates to unity over the valid range of the data. The performance of D-kde based at-site as well as regional FFA methodologies is found to be effective in univariate as well as bivariate settings, irrespective of the nature of the population and sample size. A primary assumption underlying conventional FFA procedures has been that the time series of peak flow is stationary (temporally homogeneous). However, recent studies carried out in various parts of the world question the assumption of flood stationarity.
In this perspective, a Time Varying Gaussian Copula (TVGC) based methodology is proposed in the thesis for flood frequency analysis in a bivariate setting, which allows relaxing the assumption of stationarity in flood-related variables. It is shown to be more effective than seven commonly used stationary copulas through Monte-Carlo simulation experiments and by application to data sets pertaining to stations in the conterminous United States for which the null hypothesis that peak flow data were non-stationary cannot be rejected.
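The at-site, univariate core of a kernel-based FFA can be illustrated with a plain Gaussian KDE: estimate the PDF of annual peak flows, integrate it to a CDF, and invert the CDF for the T-year quantile. Note that this sketch substitutes a simple Silverman-type bandwidth for the diffusion-based D-kde with BGKE bandwidth that the thesis actually develops, and the peak-flow data are synthetic.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(1)
# Synthetic annual peak flows (m^3/s); real use would take an at-site series.
peaks = rng.gumbel(loc=500.0, scale=120.0, size=60)

def kde_cdf(x, data, bw=None):
    """CDF of a Gaussian KDE evaluated at x. Uses a Silverman-type
    bandwidth as a simplified stand-in for the BGKE bandwidth."""
    data = np.asarray(data)
    if bw is None:
        bw = 1.06 * data.std(ddof=1) * data.size ** (-0.2)
    z = (np.atleast_1d(x)[:, None] - data[None, :]) / (bw * np.sqrt(2.0))
    return 0.5 * (1.0 + np.vectorize(erf)(z)).mean(axis=1)

def flood_quantile(T, data):
    """T-year return level: flow with non-exceedance probability 1 - 1/T,
    found by bisection, which is valid because the CDF is monotone."""
    p = 1.0 - 1.0 / T
    lo, hi = data.min() - 5 * data.std(), data.max() + 5 * data.std()
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if kde_cdf(mid, data)[0] < p else (lo, mid)
    return 0.5 * (lo + hi)

q100 = flood_quantile(100, peaks)   # estimated 100-year flood
```

The bivariate and regional extensions in the thesis replace this one-dimensional CDF with joint CDFs (copulas or 2-D kdes) and with growth curves built from pooled regional data, but the estimate-then-invert pattern is the same.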
34

Kdevelop und glade - die Programme-Bauer

Becher, Mike 21 March 2000 (has links)
Tools for developers. A multitude of small helpers eases the work of programmers. Alongside make, configure and command-line compilers there are powerful development tools with which user interfaces can be built in no time. This talk explains the general structure of a software project under Unix and demonstrates the capabilities of the development tools with a practical example.
35

Clustering on groups for human tracking with 3D LiDAR

Utterström, Simon January 2023 (has links)
3D LiDAR people detection and tracking applications rely on extracting individual people from the point cloud for reliable tracking. A recurring problem for these applications is under-segmentation caused by people standing close to or interacting with each other, which in turn causes the system to lose tracking. To address this challenge, we propose Kernel Density Estimation Clustering with Grid (KDEG), based on Kernel Density Estimation Clustering. KDEG leverages a grid to store density estimates computed in parallel, finding cluster centers by selecting local density maxima in the grid. KDEG reaches a remarkable accuracy of 98.4%, compared to HDBSCAN and Scan Line Run (SLR) with 80.1% and 62.0% accuracy respectively. Furthermore, KDEG is measured to be highly efficient, with a running time similar to the state-of-the-art methods SLR and Curved Voxel Clustering. To show the potential of KDEG, an experiment with a real tracking application on two people walking shoulder to shoulder was performed. This experiment saw a significant increase in the number of accurately tracked frames, from 5% to 78%, by utilizing KDEG, displaying great potential for real-world applications. In parallel, we also explored HDBSCAN as an alternative to DBSCAN. We propose a number of modifications to HDBSCAN, including the projection of points to the ground plane, for improved clustering of human groups. HDBSCAN with the proposed modifications demonstrates a commendable accuracy of 80.1%, surpassing DBSCAN while maintaining a similar running time. Running time is however found to be lacking for both HDBSCAN and DBSCAN compared to more efficient methods like KDEG and SLR. / The work was carried out on site at Chuo University in Tokyo, without involvement from Umeå University such as an exchange programme or similar. The work was partly financed by the Scandinavia-Japan Sasakawa Foundation. The work did not follow a regular term; it started 2023-05-01 and ended 2023-08.
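A minimal sketch of the grid-based density-peak idea behind KDEG: evaluate a kernel density estimate at every cell of a ground-plane grid, take interior cells that are local density maxima as cluster centers, and assign points to the nearest center. The point cloud (two people walking shoulder to shoulder, projected to the ground plane) and all parameters are invented for illustration; the published KDEG is more elaborate, notably in its parallel density computation and center selection.

```python
import numpy as np

rng = np.random.default_rng(2)
# Two close 2D clusters: LiDAR returns from two people, projected
# onto the ground plane (coordinates in meters, invented).
a = rng.normal([0.0, 0.0], 0.15, size=(200, 2))
b = rng.normal([0.6, 0.0], 0.15, size=(200, 2))
points = np.vstack([a, b])

def grid_density_peaks(pts, cell=0.1, bw=0.15, r=2):
    """KDEG-style sketch: Gaussian kernel density stored in a grid;
    cluster centers are cells that are local density maxima within
    an r-cell neighborhood (and above a small noise threshold)."""
    lo, hi = pts.min(axis=0) - 3 * bw, pts.max(axis=0) + 3 * bw
    xs, ys = np.arange(lo[0], hi[0], cell), np.arange(lo[1], hi[1], cell)
    gx, gy = np.meshgrid(xs, ys, indexing="ij")
    grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
    # Density at every grid cell; this loop-free form is what makes
    # the grid representation parallel-friendly.
    d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    dens = np.exp(-d2 / (2 * bw * bw)).sum(1).reshape(len(xs), len(ys))
    centers = []
    for i in range(r, len(xs) - r):
        for j in range(r, len(ys) - r):
            patch = dens[i - r:i + r + 1, j - r:j + r + 1]
            if dens[i, j] == patch.max() and dens[i, j] > 0.05 * dens.max():
                centers.append([xs[i], ys[j]])
    return np.array(centers)

centers = grid_density_peaks(points)
# Segment the cloud: each point goes to its nearest density peak.
labels = np.argmin(((points[:, None] - centers[None]) ** 2).sum(-1), axis=1)
```

With a distance-based method like DBSCAN, the two overlapping clouds would typically merge into one cluster; the density grid still shows two distinct maxima, which is the under-segmentation fix the abstract describes.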
36

Vibroakustická analýza spalovacího motoru / Vibroacoustic Analysis of the Combustion Engine

Špačková, Jana January 2020 (has links)
This thesis focuses on vibration and noise diagnostics of a combustion engine. The review of current knowledge covers measuring devices, units of measure, methods of frequency analysis, and the combustion engine from the point of view of vibration and noise. Part of this work is a technical experiment in which both the vibration and the noise of two combustion engines were measured. A frequency analysis was performed on the data, and the analysis concludes with an evaluation of the frequencies that occurred on the given engines.
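The frequency-analysis step described here typically amounts to computing an amplitude spectrum of the accelerometer or microphone signal and reading off the dominant components. A sketch on a synthetic engine-vibration signal follows; the 100 Hz firing frequency (a hypothetical 4-cylinder four-stroke engine at 3000 rpm, i.e. two firings per revolution) and all amplitudes are assumptions for illustration.

```python
import numpy as np

fs = 10_000                        # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)      # 1 s of signal
rng = np.random.default_rng(3)

# Synthetic accelerometer signal: firing frequency at 100 Hz, a weaker
# first harmonic at 200 Hz, plus broadband measurement noise.
signal = (1.0 * np.sin(2 * np.pi * 100 * t)
          + 0.4 * np.sin(2 * np.pi * 200 * t)
          + 0.1 * rng.standard_normal(t.size))

# One-sided amplitude spectrum via the real FFT; with 1 s of data the
# frequency bins are spaced exactly 1 Hz apart.
spec = np.abs(np.fft.rfft(signal)) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
dominant = freqs[np.argmax(spec)]  # strongest spectral line, Hz
```

Peaks at the firing frequency and its harmonics, and how they shift with engine speed and load, are exactly what such a diagnostic evaluation compares between engines.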
37

Nespojitá regulace s PLC ve výrobních systémech / Discontinuous control with PLC in production systems

Petlach, Jan January 2021 (has links)
Discontinuous regulation, PLC control system, programmable controller, temperature control using a Peltier module.
39

Probabilistic Regression using Conditional Generative Adversarial Networks

Oskarsson, Joel January 2020 (has links)
Regression is a central problem in statistics and machine learning with applications everywhere in science and technology. In probabilistic regression the relationship between a set of features and a real-valued target variable is modelled as a conditional probability distribution. There are cases where this distribution is very complex and not properly captured by simple approximations, such as assuming a normal distribution. This thesis investigates how conditional Generative Adversarial Networks (GANs) can be used to properly capture more complex conditional distributions. GANs have seen great success in generating complex high-dimensional data, but less work has been done on their use for regression problems. This thesis presents experiments to better understand how conditional GANs can be used in probabilistic regression. Different versions of GANs are extended to the conditional case and evaluated on synthetic and real datasets. It is shown that conditional GANs can learn to estimate a wide range of different distributions and be competitive with existing probabilistic regression models.
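The core interface of a conditional GAN for probabilistic regression is a generator G(x, z) that maps latent noise z to a sample of the target y given features x; the conditional distribution is then examined by drawing many samples. The sketch below hand-specifies such a generator (no training and no discriminator, so it is not a GAN in the learned sense) purely to illustrate why sampling a bimodal p(y|x) carries more information than a single point estimate; every functional form here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(4)

def generator(x, z):
    """Stand-in for a trained conditional generator G(x, z): maps
    feature x and 2-D latent noise z to a sample of y. Hand-specified
    (not learned) so that y | x is bimodal, the kind of conditional
    distribution a normal-likelihood regression model would miss."""
    mode = np.where(z[:, 1] > 0, 1.0, -1.0)   # latent picks a mode
    return np.sin(x) + mode + 0.1 * z[:, 0]   # sample of y given x

def conditional_samples(x, n=10_000):
    """Monte Carlo view of p(y | x): draw latents, push them
    through the generator."""
    z = rng.standard_normal((n, 2))
    return generator(np.full(n, x), z)

ys = conditional_samples(x=0.3)
# The mean lands in the low-density gap between the two modes:
# exactly the case where the full conditional distribution matters.
mean_y = ys.mean()
```

In the thesis's setting the generator is a neural network trained adversarially against a discriminator on (x, y) pairs, but the evaluation pattern is the same: condition on x, sample, and inspect the resulting distribution.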
40

Workshop: INFRASTRUKTUR DER "DIGITALEN UNIVERSITAET"

Huebner, Uwe 09 August 2000 (has links)
Joint workshop of the University Computing Centre (Universitaetsrechenzentrum) and the Chair of Computer Networks and Distributed Systems (Faculty of Computer Science) at TU Chemnitz. Workshop topic: infrastructure of the "Digital University".
