81

An integrated GIS-based and spatiotemporal analysis of traffic accidents: a case study in Sherbrooke

Harirforoush, Homayoun January 2017
Abstract: Road traffic accidents claim more than 1,500 lives each year in Canada and affect society adversely, so transport authorities must reduce their impact. This is a major concern in Quebec, where traffic-accident risk increases year by year in proportion to provincial population growth. In reality, the occurrence of traffic crashes is rarely random in space and time; crashes tend to cluster in specific areas such as intersections, ramps, and work zones. Moreover, weather stands out as an environmental risk factor that affects the crash rate. Traffic-safety engineers therefore need to identify the location and time of traffic accidents accurately. The occurrence of such accidents is determined by several important factors, including traffic volume, weather conditions, and geometric design. This study aimed at identifying hotspot locations based on a historical crash data set and the spatiotemporal patterns of traffic accidents, with a view to improving road safety. The thesis proposes two new methods for identifying hotspot locations on a road network. The first method can be used to identify and rank hotspot locations when traffic-volume data are available, while the second method is useful when they are not. Both methods were examined with three years of traffic-accident data (2011–2013) from Sherbrooke. The first method is a two-step integrated approach for identifying traffic-accident hotspots on a road network. The first step uses a spatial-analysis method called network kernel-density estimation. The second step applies a network-screening method using the critical crash rate, which is described in the Highway Safety Manual. Once the traffic-accident density had been estimated using network kernel-density estimation, the selected potential hotspot locations were tested with the critical-crash-rate method. The second method is an integrated approach to analyzing spatial and temporal (spatiotemporal) patterns of traffic accidents and ranking them according to their level of significance. The spatiotemporal seasonal patterns of traffic accidents were analyzed using kernel-density estimation; the resulting density was then used as the attribute in a significance test based on the local Moran's I index. The results of the first method showed that over 90% of hotspot locations in Sherbrooke were located at intersections and in the downtown area, where conflicts between road users are frequent. They also showed that signalized intersections were more dangerous than unsignalized ones; over half (58%) of the hotspot locations were at four-leg signalized intersections. The results of the second method show that crash patterns varied by season and during certain time periods. Overall seasonal patterns were denser during the summer, fall, and winter, and steadier during the spring. Our findings also showed that crash patterns weighted by accident severity were denser than those based on observed crash counts alone. The results clearly show that the proposed methods could assist transport authorities in quickly identifying the most hazardous sites in a road network, prioritizing hotspot locations in decreasing order more efficiently, and assessing the relationship between traffic accidents and seasons.
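The critical-crash-rate screening used in the first method lends itself to a compact illustration. The sketch below applies one common form of the Highway Safety Manual critical-rate test in Python; the confidence coefficient, the reference crash rate, and the traffic-volume figures are illustrative assumptions, not values from the thesis.

```python
import math

def crash_rate(crashes_per_year, aadt, years=3):
    """Observed crash rate in crashes per million entering vehicles (MEV)."""
    mev = aadt * 365 * years / 1e6          # exposure over the study period
    return crashes_per_year * years / mev, mev

def critical_rate(avg_rate, mev, k=1.645):
    """Critical crash rate for similar sites (k = 1.645 ~ 95% confidence)."""
    return avg_rate + k * math.sqrt(avg_rate / mev) + 1.0 / (2.0 * mev)

# Hypothetical intersection flagged by the network KDE step.
observed, mev = crash_rate(crashes_per_year=12, aadt=18000)
threshold = critical_rate(avg_rate=1.1, mev=mev)
print(f"observed = {observed:.2f}, critical = {threshold:.2f}, "
      f"hotspot = {observed > threshold}")
```

A site whose observed rate exceeds the critical rate for comparable sites is retained as a confirmed hotspot; otherwise the KDE flag is treated as noise.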
82

Data Fusion for Multi-Sensor Nondestructive Detection of Surface Cracks in Ferromagnetic Materials

Heideklang, René 28 November 2018
Fatigue cracking is a dangerous and cost-intensive phenomenon that requires early detection. At high test sensitivity, however, the abundance of false indications limits the reliability of conventional materials testing. This thesis exploits the diversity of physical principles offered by different nondestructive surface inspection methods, applying data fusion techniques to increase the reliability of defect detection. The first main contribution is a set of novel approaches for the fusion of NDT images. These surface scans are obtained from state-of-the-art inspection procedures in Eddy Current Testing, Thermal Testing and Magnetic Flux Leakage Testing. The implemented image fusion strategy demonstrates that simple algebraic fusion rules are sufficient for high performance, given adequate signal normalization. Data fusion reduces the pixel-based false-positive rate by a factor of six relative to the best individual sensor at a 10 μm deep groove. Moreover, the utility of state-of-the-art image representations, such as the Shearlet domain, is explored. However, the theoretical advantages of such directional transforms are not attained in practice with the given data. Nevertheless, the benefit of fusion over single-sensor inspection is confirmed a second time. Furthermore, this work proposes novel techniques for fusion at a high level of signal abstraction. A kernel-based approach is introduced to integrate spatially scattered detection hypotheses. This method explicitly deals with registration errors that are unavoidable in practice.
Surface discontinuities as shallow as 30 μm are reliably found by fusion, whereas the best individual sensor requires depths of 40–50 μm for successful detection. The experiment is replicated on a similar second test specimen. Practical guidelines are given at the end of the thesis, and the need for a data sharing initiative is stressed to promote future research on this topic.
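As a rough illustration of the kind of algebraic pixel-level fusion rule described above, the sketch below normalizes each co-registered sensor image and averages them; the robust z-scoring, the averaging rule and the threshold are assumptions chosen for illustration, not the specific normalization or decision procedure developed in the thesis.

```python
import numpy as np

def robust_normalize(img):
    """Center on the median and scale by the median absolute deviation."""
    med = np.median(img)
    mad = np.median(np.abs(img - med)) + 1e-12
    return (img - med) / mad

def fuse_average(*images):
    """Simple algebraic fusion rule: mean of normalized single-sensor images."""
    stack = np.stack([robust_normalize(im) for im in images])
    return stack.mean(axis=0)

# Toy example with three co-registered scans (eddy current, thermography, MFL).
rng = np.random.default_rng(0)
et, tt, mfl = (rng.normal(size=(64, 64)) for _ in range(3))
fused = fuse_average(et, tt, mfl)
defect_map = fused > 3.0   # detection threshold chosen for illustration only
```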
83

Spatiotemporal Analyses of Recycled Water Production

Archer, Jana E. 01 May 2017
Increased demands on water supplies caused by population expansion, saltwater intrusion, and drought have led to water shortages that may be addressed by the use of recycled water products. Study I investigated recycled water production in Florida and California during 2009 to detect gaps in distribution and identify areas for expansion. Gaps were detected along the Florida panhandle and around Miami, as well as in the northern and southwestern regions of California. Study II examined gaps in distribution, identified temporal change, and located areas for expansion in Florida between 2009 and 2015. Production increased in the northern and southern regions of Florida but decreased in Southwest Florida. Recycled water is an essential component of water management; broader adoption will increase water conservation in water-stressed coastal communities by allocating recycled water to purposes that once used potable freshwater.
84

Range-use estimation and encounter probability for juvenile Steller sea lions (Eumetopias jubatus) in the Prince William Sound-Kenai Fjords region of Alaska

Meck, Stephen R. 21 March 2013
Range, areas of concentrated activity, and dispersal characteristics for juvenile Steller sea lions (Eumetopias jubatus) in the endangered western population (west of 144° W in the Gulf of Alaska) are poorly understood. This study quantified space use by analyzing post-release telemetric tracking data from satellite transmitters externally attached to n = 65 juvenile (12-25 months; 72.5 to 197.6 kg) Steller sea lions (SSLs) captured in Prince William Sound (60°38'N, 147°8'W) or Resurrection Bay (60°2'N, 149°22'W), Alaska, from 2003-2011. The analysis divided the sample population into 3 separate groups to quantify differences in distribution and movement. These groups were defined by sex, the season of collection, and the release type (free-ranging animals released immediately at the site of capture, and transient juveniles kept in captivity for up to 12 weeks as part of a larger ongoing research program). Range use was first estimated using the minimum convex polygon (MCP) approach and then with a probabilistic kernel density estimation (KDE) to evaluate both individual and group utilization distributions (UDs). The LCV method was chosen as the smoothing algorithm for the KDE analysis as it provided biologically meaningful results pertaining to areas of concentrated activity (generally, haulout locations). The average distance traveled by study juveniles was 2,131 ± 424 km. The animals' mass at release (F(1, 63) = 1.17, p = 0.28) and age (F(1, 63) = 0.033, p = 0.86) were not significant predictors of travel distance. Initial MCP results indicated the total area encompassed by all study SSLs was 92,017 km², excluding land mass. This area was heavily influenced by the only individual that crossed the 144°W meridian, the dividing line between the two distinct population segments. Without this individual, the remainder of the population (n = 64) fell into an area of 58,898 km². The MCP area was highly variable, with a geometric average of 1,623.6 km². Only the groups differentiated by season displayed any significant difference in area size, with the Spring/Summer (SS) group's MCP area (Mdn = 869.7 km²) being significantly less than that of the Fall/Winter (FW) group (Mdn = 3,202.2 km²), U = 330, p = 0.012, r = -0.31. This result was related neither to the length of time the tag transmitted (H(2) = 49.65, p = 0.527) nor to the number of location fixes (H(2) = 62.77, p = 0.449). The KDE UD was less variable, with 50% of the population within a range of 324-1,387 km² (mean = 690.6 km²). There were no significant differences in area use associated with sex or release type (seasonally adjusted U = 124, p = 0.205, r = -0.16 and U = 87, p = 0.285, r = -0.13, respectively). However, there were significant differences in seasonal area use: U = 328, p = 0.011, r = -0.31. There was no relationship between the UD area and the amount of time the tag remained deployed (H(2) = 45.30, p = 0.698). The kernel home range (defined as 95% of space use) represented about 52.1% of the MCP range, with areas designated as "core" (areas where the sea lions spent fully 50% of their time) making up only about 6.27% of the entire MCP range and about 11.8% of the entire kernel home range. Area use was relatively limited – at the population level, there were a total of 6 core areas comprising 479 km².
Core areas spanned a distance of less than 200 km from the westernmost point at the Chiswell Islands (59°35'N, 149°36'W) to the easternmost point at Glacier Island (60°54'N, 147°6'W). The observed differences in area use between seasons suggest a disparity in how juvenile SSLs utilize space and distribute themselves over the course of the year. Given their age, this variation is unlikely to reflect reproductive considerations and may instead reflect localized depletion of prey near preferred haul-out sites and/or changes in predation risk. Currently, management of the endangered western and threatened eastern population segments of the Steller sea lion is largely based on population trends derived from aerial survey counts and terrestrial-based count data. The likelihood that individuals are detected during aerial surveys, and the resulting correction factors needed to calculate overall population size from counts of hauled-out animals, remain unknown. A kernel density estimation (KDE) analysis was performed to delineate boundaries around surveyed haulout locations within Prince William Sound-Kenai Fjords (PWS-KF). To closely approximate the time in which population abundance counts are conducted, only sea lions tracked during the spring/summer (SS) months (May 10-August 10) were chosen (n = 35). A multiple-state model was constructed treating the satellite location data, if it fell within a specified spatiotemporal context, as a re-encounter within a mark-recapture framework. Information to determine a dry state was obtained from the tags' time-at-depth (TAD) histograms. To contribute to the overall terrestrial detection probability, 1) the animal must have been within a KDE-derived core area that coincided with a surveyed haulout site, 2) it must have been dry, and 3) it must have provided at least one position during the summer months, from roughly 11:00 AM to 5:00 PM AKDT. A total of 10 transition states were selected from the data. Nine states corresponded to specific surveyed land locations, with the 10th, an "at-sea" location (> 3 km from land), included as a proxy for foraging behavior. An MLogit constraint was used to aid interpretation of the multi-modal likelihood surface, and a systematic model-selection process was employed as outlined by Lebreton & Pradel (2002). At the individual level, for juveniles released in the spring/summer months (n = 35), 85.3% of the surveyed haulouts within PWS-KF encompassed KDE-derived core areas (defined as 50% of space use). There was no difference between sexes in the number of surveyed haulouts encompassed by core areas (F(1, 33) << 0.001, p = 0.98). Of the animals held captive for up to 12 weeks, 33.3% returned to the original capture site. The majority of encounter probabilities (p) fell between 0.42 and 0.78 for the selected haulouts within PWS, the exceptions being Grotto Island and Aialik Cape, which were lower (between 0.00 and 0.17). The at-sea (foraging) encounter probability was 0.66 (± 1 S.E. range 0.55-0.77). Most dry-state probabilities fell between 0.08 and 0.38, with Glacier Island higher at 0.52 (± 1 S.E. range 0.49-0.55). The combined detection probability for hauled-out animals (the product of the haulout-encounter and dry-state probabilities) fell mostly between 0.08 and 0.28, with a distinct group (including Grotto Island, Aialik Cape, and Procession Rocks) averaging 0.01, with a cumulative range of ≈ 0.00-0.02 (± 1 S.E.). Due to gaps present within the mark-recapture data, it was not possible to run a goodness-of-fit test to validate model fit.
Actual errors therefore probably slightly exceed the reported standard errors, which should be regarded as approximations of the uncertainty. Overall, the combined detection probabilities represent an effort to combine satellite-location and wet-dry-state telemetry with a kernel density analysis to quantify the terrestrial detection probability of a marine mammal within a multistate modeling framework, with the ultimate goal of developing a correction factor to account for haulout behavior at each of the surveyed locations included in the study.
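The two range-use estimators used above can be illustrated compactly. The sketch below computes an MCP area from a convex hull and a Gaussian KDE utilization distribution over telemetry positions; it is a minimal illustration that assumes projected x/y coordinates in kilometres and SciPy's default bandwidth rather than the LCV smoothing used in the study.

```python
import numpy as np
from scipy.spatial import ConvexHull
from scipy.stats import gaussian_kde

# Hypothetical projected telemetry fixes (km east, km north) for one juvenile.
rng = np.random.default_rng(1)
fixes = np.vstack([rng.normal([0, 0], 5, (150, 2)),
                   rng.normal([40, 10], 8, (150, 2))])

# Minimum convex polygon: in 2-D, ConvexHull.volume is the enclosed area.
mcp_area_km2 = ConvexHull(fixes).volume

# Kernel density estimate of the utilization distribution on a grid.
kde = gaussian_kde(fixes.T)                      # default bandwidth, not LCV
xs, ys = np.meshgrid(np.linspace(-20, 70, 200), np.linspace(-25, 40, 200))
density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)

# The 50% "core area" is the smallest region holding half the probability mass.
cell = (xs[0, 1] - xs[0, 0]) * (ys[1, 0] - ys[0, 0])
order = np.sort(density.ravel())[::-1]
cum = np.cumsum(order) * cell
idx = min(np.searchsorted(cum, 0.5), len(order) - 1)
core_area_km2 = (density >= order[idx]).sum() * cell
print(mcp_area_km2, core_area_km2)
```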
85

Probabilistic Sequence Models with Speech and Language Applications

Henter, Gustav Eje January 2013
Series data, sequences of measured values, are ubiquitous. Whenever observations are made along a path in space or time, a data sequence results. To comprehend nature and shape it to our will, or to make informed decisions based on what we know, we need methods to make sense of such data. Of particular interest are probabilistic descriptions, which enable us to represent uncertainty and random variation inherent to the world around us. This thesis presents and expands upon some tools for creating probabilistic models of sequences, with an eye towards applications involving speech and language. Modelling speech and language is not only of use for creating listening, reading, talking, and writing machines---for instance allowing human-friendly interfaces to future computational intelligences and smart devices of today---but probabilistic models may also ultimately tell us something about ourselves and the world we occupy. The central theme of the thesis is the creation of new or improved models more appropriate for our intended applications, by weakening limiting and questionable assumptions made by standard modelling techniques. One contribution of this thesis examines causal-state splitting reconstruction (CSSR), an algorithm for learning discrete-valued sequence models whose states are minimal sufficient statistics for prediction. Unlike many traditional techniques, CSSR does not require the number of process states to be specified a priori, but builds a pattern vocabulary from data alone, making it applicable for language acquisition and the identification of stochastic grammars. A paper in the thesis shows that CSSR handles noise and errors expected in natural data poorly, but that the learner can be extended in a simple manner to yield more robust and stable results also in the presence of corruptions. Even when the complexities of language are put aside, challenges remain. The seemingly simple task of accurately describing human speech signals, so that natural synthetic speech can be generated, has proved difficult, as humans are highly attuned to what speech should sound like. Two papers in the thesis therefore study nonparametric techniques suitable for improved acoustic modelling of speech for synthesis applications. Each of the two papers targets a known-incorrect assumption of established methods, based on the hypothesis that nonparametric techniques can better represent and recreate essential characteristics of natural speech. In the first paper of the pair, Gaussian process dynamical models (GPDMs), nonlinear, continuous state-space dynamical models based on Gaussian processes, are shown to better replicate voiced speech, without traditional dynamical features or assumptions that cepstral parameters follow linear autoregressive processes. Additional dimensions of the state-space are able to represent other salient signal aspects such as prosodic variation. The second paper, meanwhile, introduces KDE-HMMs, asymptotically-consistent Markov models for continuous-valued data based on kernel density estimation, that additionally have been extended with a fixed-cardinality discrete hidden state. This construction is shown to provide improved probabilistic descriptions of nonlinear time series, compared to reference models from different paradigms. The hidden state can be used to control process output, making KDE-HMMs compelling as a probabilistic alternative to hybrid speech-synthesis approaches. 
A final paper of the thesis discusses how models can be improved even when one is restricted to a fundamentally imperfect model class. Minimum entropy rate simplification (MERS), an information-theoretic scheme for postprocessing models for generative applications involving both speech and text, is introduced. MERS reduces the entropy rate of a model while remaining as close as possible to the starting model. This is shown to produce simplified models that concentrate on the most common and characteristic behaviours, and provides a continuum of simplifications between the original model and zero-entropy, completely predictable output. As the tails of fitted distributions may be inflated by noise or empirical variability that a model has failed to capture, MERS's ability to concentrate on high-probability output is also demonstrated to be useful for denoising models trained on disturbed data.
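To make the kernel-density idea behind models such as KDE-HMMs concrete, the sketch below estimates a next-sample density conditioned on the previous sample with Gaussian kernels. It is only a stripped-down illustration (no hidden state, a single hand-picked bandwidth), not the thesis's actual model.

```python
import numpy as np

def conditional_kde(pairs, bandwidth=0.3):
    """p(x_t | x_{t-1}) estimated from (previous, next) sample pairs with Gaussian kernels."""
    prev, nxt = pairs[:, 0], pairs[:, 1]

    def density(x_prev, grid):
        w = np.exp(-0.5 * ((x_prev - prev) / bandwidth) ** 2)   # weight pairs by context similarity
        w /= w.sum()
        kernels = np.exp(-0.5 * ((grid[:, None] - nxt[None, :]) / bandwidth) ** 2)
        kernels /= bandwidth * np.sqrt(2 * np.pi)
        return kernels @ w                                      # mixture of kernels on the next value

    return density

# Toy nonlinear time series standing in for a speech-parameter track.
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = np.sin(x[t - 1]) + 0.2 * rng.normal()

density = conditional_kde(np.column_stack([x[:-1], x[1:]]))
grid = np.linspace(-2, 2, 101)
p_next = density(x_prev=0.5, grid=grid)   # predictive density after observing 0.5
```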
86

Distributions Of Fiber Characteristics As A Tool To Evaluate Mechanical Pulps

Reyier Österling, Sofia January 2015
Mechanical pulps are used in paper products such as magazine or news grade printing papers or paperboard. Mechanical pulping gives a high yield; nearly everything in the tree except the bark is used in the paper. This means that mechanical pulping consumes much less wood than chemical pulping, especially to produce a unit area of printing surface. A drawback of mechanical pulp production is the high amount of electrical energy needed to separate and refine the fibers to a given fiber quality. Mechanical pulps are often produced from slow-growing spruce trees of forests in the northern hemisphere, resulting in long, slender fibers that are well suited for mechanical pulp products. These fibers have large variations in geometry, mainly wall thickness and width, depending on seasonal variations and growth conditions. Earlywood fibers typically have thin walls and latewood fibers thick ones. The background to this study was that a more detailed fiber characterization, involving evaluations of distributions of fiber characteristics, may give improved possibilities to optimize the mechanical pulping process and thereby reduce the total electric energy needed to reach a given quality of the pulp and final product. This would result in improved competitiveness as well as less environmental impact. This study evaluated the relation between fiber characteristics in three types of mechanical pulps made from Norway spruce (Picea abies): thermomechanical pulp (TMP), stone groundwood pulp (SGW) and chemithermomechanical pulp (CTMP). In addition, the influence of fibers from these pulp types on sheet characteristics, mainly tensile index, was studied. A comparatively rapid method was presented for evaluating the propensity of each fiber to form sheets of high tensile index, by the use of raw data from a commercially available fiber analyzer (FiberLab™). The developed method gives novel opportunities for evaluating the effect on the fibers of each stage in the mechanical pulping process and has the potential to be applied on-line to steer the refining and pulping process according to the characteristics of the final pulp and the quality of the final paper. The long fiber fraction is important for the properties of the whole pulp. It was found that fiber wall thickness and external fibrillation were the fiber characteristics that contributed the most to the tensile index of the long fiber fractions in five mechanical pulps (three TMPs, one SGW, one CTMP). The tensile index of handsheets of the long fiber fractions could be predicted by linear regressions using a combination of fiber wall thickness and degree of external fibrillation. The predicted tensile index was denoted BIN, short for Bonding ability INfluence. This resulted in the same linear correlation between BIN and tensile index for 52 samples of the five mechanical pulps studied, each fractionated into five streams (plus feed) in full-size hydrocyclones. The Bauer McNett P16/R30 (passed a 16 mesh wire, retained on a 30 mesh wire) and P30/R50 fractions of each stream were used for the evaluation. The fibers of the SGW had thicker walls and a higher degree of external fibrillation than the TMPs and CTMP, which resulted in a correlation between BIN and tensile index on a different level for the P30/R50 fraction of SGW than for the other pulp samples.
A BIN model based on averages weighted by each fiber's wall volume, instead of arithmetic averages, took the fiber wall thickness of the SGW into account and gave one uniform correlation between BIN and tensile index for all pulp samples (12 samples for constructing the model, 46 for validating it). If the BIN model is used for predicting averages of the tensile index of a sheet, a model based on wall-volume-weighted data is recommended. To be able to produce BIN distributions in which the influence of the length or wall volume of each fiber is taken into account, the BIN model is currently based on arithmetic averages of fiber wall thickness and fibrillation. Fiber width used as a single factor reduced the accuracy of the BIN model. Wall-volume-weighted averages of fiber width also resulted in a completely changed ranking of the five hydrocyclone streams compared to arithmetic averages for two of the five pulps. This was not seen when fiber width was combined with fiber wall thickness into the factor "collapse resistance index". In order to avoid too high an influence of fiber wall thickness, and until the influence of fiber width on BIN and the measurement of fiber width are further evaluated, it is recommended to use length-weighted or arithmetic distributions of BIN and other fiber characteristics. A comparably fast method to evaluate the distribution of fiber wall thickness and degree of external fibrillation with high resolution showed that the fiber wall thickness of the latewood fibers was reduced by increasing the refining energy in a double disc refiner operated at four levels of specific energy input in a commercial TMP production line. This was expected but could not be seen from average values alone; it was concluded that fiber characteristics in many cases should be evaluated as distributions and not only as averages. BIN distributions of various types of mechanical pulps from Norway spruce showed results that were expected based on knowledge of the particular pulps and processes. Measurements of mixtures of a news-grade and an SC (super calendered) grade TMP showed a gradual increase in high-BIN fibers with higher amounts of SC-grade TMP. The BIN distributions also revealed differences between the pulps that were not seen from average fiber values, for example that the shape of the BIN distributions was similar for two pulps that originated from conical disc refiners, a news-grade TMP and the board-grade CTMP, although the distributions were on different BIN levels. The SC-grade TMP and the SC-grade SGW had similar levels of tensile index, but the SGW contained some fibers of very low BIN values, which may influence the characteristics of the final paper, for example strength, surface and structure. This shows that the BIN model has the potential to be applied to either the whole or parts of a papermaking process based on mechanical or chemimechanical pulping; the evaluation of distributions of fiber characteristics can contribute to increased knowledge about the process and opportunities to optimize it.
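Since BIN is described above as a linear-regression prediction of handsheet tensile index from fiber wall thickness and degree of external fibrillation, a minimal least-squares sketch is given below. The feature values, units and coefficients here are invented for illustration; the thesis reports the actual model and its weighting.

```python
import numpy as np

# Hypothetical per-sample averages: [fiber wall thickness (µm), external fibrillation (%)]
X = np.array([[2.4, 11.0], [2.8, 8.5], [3.3, 7.0], [2.1, 13.5], [3.0, 9.2], [2.6, 10.1]])
tensile_index = np.array([46.0, 38.0, 31.0, 52.0, 36.0, 42.0])   # handsheet values, Nm/g

# Fit tensile index ~ wall thickness + fibrillation by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, tensile_index, rcond=None)

def bin_value(wall_thickness, fibrillation):
    """Predicted bonding-ability influence (BIN) for a fiber or fraction."""
    return coef[0] + coef[1] * wall_thickness + coef[2] * fibrillation

print(bin_value(2.5, 10.0))
```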
87

Reconhecimento de padrões em sistemas de energia elétrica através de uma abordagem geométrica aprimorada para a construção de redes neurais artificiais / Pattern recognition in electric power systems through an improved geometric approach for constructing artificial neural networks

Valente, Wander Antunes Gaspar 09 February 2015
This work is based on the method of successive geometric segmentations (SGSM) for the construction of an artificial neural network capable of generating both the network topology and the neuron weights without the specification of initial parameters. The SGSM identifies a set of hyperplanes in R^n that, when properly combined, can separate two or more data classes. Specifically, this work employs an improvement to the SGSM based on kernel density estimation (KDE). Using KDE, it is possible to find new separating hyperplanes more consistently and, from there, to classify data with accuracy rates higher than those of the original technique. In this work, the improved SGSM is applied for the first time, with satisfactory results, to the identification of patterns in electric power systems. The method was adjusted for the classification of incipient faults in power transformers, and the results achieve accuracy rates above those of related work. The improved SGSM was also adapted to classify and locate inter-circuit faults on double-circuit overhead transmission lines, with positive results in comparison with the scientific literature.
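As a rough illustration of how a kernel density estimate can guide the placement of a separating hyperplane, the sketch below projects two classes onto a candidate direction and puts the cut where the estimated class densities cross. The data, the projection direction and the grid search are assumptions for illustration only, not the SGSM procedure itself.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
class_a = rng.normal([0.0, 0.0], 0.8, (200, 2))     # e.g. "normal operation" samples
class_b = rng.normal([3.0, 1.5], 0.8, (200, 2))     # e.g. "incipient fault" samples

direction = np.array([1.0, 0.5])
direction /= np.linalg.norm(direction)              # candidate hyperplane normal

proj_a, proj_b = class_a @ direction, class_b @ direction
kde_a, kde_b = gaussian_kde(proj_a), gaussian_kde(proj_b)

# Place the threshold between the class means, where the two densities are closest.
grid = np.linspace(proj_a.mean(), proj_b.mean(), 500)
threshold = grid[np.argmin(np.abs(kde_a(grid) - kde_b(grid)))]

def classify(x):
    """Assign a sample to class B if it falls beyond the KDE-derived cut."""
    return int(x @ direction > threshold)
```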
88

Direct optimization of dose-volume histogram metrics in intensity modulated radiation therapy treatment planning / Direkt optimering av dos-volym histogram-mått i intensitetsmodulerad strålterapiplanering

Zhang, Tianfang January 2018
In optimization of intensity-modulated radiation therapy treatment plans, dose-volume histogram (DVH) functions are often used as objective functions to minimize the violation of dose-volume criteria. Neither DVH functions nor dose-volume criteria, however, are ideal for gradient-based optimization, as the former are not continuously differentiable and the latter are discontinuous functions of dose, apart from both being nonconvex. In particular, DVH functions often work poorly when used in constraints due to their being identically zero when feasible and having vanishing gradients on the boundary of feasibility. In this work, we present a general mathematical framework allowing for direct optimization on all DVH-based metrics. By regarding voxel doses as sample realizations of an auxiliary random variable and using kernel density estimation to obtain explicit formulas, one arrives at formulations of volume-at-dose and dose-at-volume which are infinitely differentiable functions of dose. This is extended to DVH functions and so-called volume-based DVH functions, as well as to min/max-dose functions and mean-tail-dose functions. Explicit expressions for evaluation of function values and corresponding gradients are presented. The proposed framework has the advantages of depending on only one smoothness parameter, of approximation errors to conventional counterparts being negligible for practical purposes, and of a general consistency between derived functions. Numerical tests, which were performed for illustrative purposes, show that smooth dose-at-volume works better than quadratic penalties when used in constraints and that smooth DVH functions in certain cases have a significant advantage over conventional ones. The results of this work have been successfully applied to lexicographic optimization in a fluence map optimization setting.
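The core construction described above, replacing the step function in the empirical DVH with a kernel-smoothed counterpart, can be sketched in a few lines. Below, volume-at-dose is written as an average of Gaussian kernel CDFs over voxel doses, and dose-at-volume is obtained by root finding; the bandwidth and the toy dose distribution are illustrative assumptions, and the exact formulation in the thesis may differ.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def smooth_volume_at_dose(voxel_doses, d, h=0.5):
    """Differentiable fraction of volume receiving at least dose d (Gy)."""
    # Each indicator 1[dose_i >= d] is replaced by a smooth Gaussian CDF.
    return norm.cdf((voxel_doses - d) / h).mean()

def smooth_dose_at_volume(voxel_doses, v, h=0.5):
    """Dose level d such that the smooth volume-at-dose equals v."""
    lo, hi = voxel_doses.min() - 5 * h, voxel_doses.max() + 5 * h
    return brentq(lambda d: smooth_volume_at_dose(voxel_doses, d, h) - v, lo, hi)

# Toy organ-at-risk dose distribution (Gy).
doses = np.random.default_rng(0).normal(20.0, 4.0, 10_000)
print(smooth_volume_at_dose(doses, 25.0))    # ~ fraction of voxels above 25 Gy
print(smooth_dose_at_volume(doses, 0.05))    # ~ D5%, dose to the hottest 5% of the volume
```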
89

Image Distance Learning for Probabilistic Dose–Volume Histogram and Spatial Dose Prediction in Radiation Therapy Treatment Planning / Bilddistansinlärning för probabilistisk dos–volym-histogram- och dosprediktion inom strålbehandling

Eriksson, Ivar January 2020
Construction of radiotherapy treatments for cancer is a laborious and time-consuming task. At the same time, when presented with a treatment plan, an oncologist can quickly judge whether or not it is suitable. This means that the problem of constructing these treatment plans is well suited for automation. This thesis investigates a novel way of automatic treatment planning. The treatment planning system this pipeline is constructed for provides dose-mimicking functionality that takes probability density functions of dose–volume histograms (DVHs) and spatial dose as inputs; these are therefore the outputs of the pipeline. The inputs are scans, segmentations and spatial doses of historically treated patients. The approach involves three modules which are individually replaceable with little to no impact on the remaining two. The modules are: an autoencoder used as a feature extractor to concretise important features of a patient segmentation, a distance-optimisation step to learn a distance in the previously constructed feature space and, finally, a probabilistic spatial dose estimation module using sparse pseudo-input Gaussian processes trained on voxel features. Although performance evaluation in terms of clinical plan quality was beyond the scope of this thesis, numerical results show that the proposed pipeline is successful in capturing salient features of patient geometry as well as predicting reasonable probability distributions for DVH and spatial dose. Its loosely connected nature also gives hope that some parts of the pipeline can be utilised in future work.
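One way to see how a learned feature-space distance can drive probabilistic prediction is to weight historical patients by their proximity to a new case, as in the minimal sketch below. The feature vectors, the softmax weighting and the stored DVH samples are illustrative assumptions and not the specific modules of the thesis, which uses an autoencoder, a learned distance and sparse pseudo-input Gaussian processes.

```python
import numpy as np

def retrieval_weights(new_features, historical_features, temperature=1.0):
    """Softmax weights over historical patients from feature-space distances."""
    dists = np.linalg.norm(historical_features - new_features, axis=1)
    logits = -dists / temperature
    logits -= logits.max()                       # numerical stability
    w = np.exp(logits)
    return w / w.sum()

# Hypothetical 8-dimensional feature vectors for 5 historical patients and 1 new one.
rng = np.random.default_rng(3)
hist_feats = rng.normal(size=(5, 8))
hist_dvh_samples = rng.uniform(40, 70, size=(5, 100))   # stored D50% samples per patient (Gy)
new_feats = rng.normal(size=8)

w = retrieval_weights(new_feats, hist_feats)
# Predictive mixture: sample a historical patient by weight, then one of its DVH samples.
idx = rng.choice(5, size=1000, p=w)
predictive_draws = hist_dvh_samples[idx, rng.integers(0, 100, size=1000)]
print(predictive_draws.mean(), np.percentile(predictive_draws, [5, 95]))
```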
90

Vývoj moderních akustických parametrů kvantifikujících hypokinetickou dysartrii / Development of modern acoustic features quantifying hypokinetic dysarthria

Kowolowski, Alexander January 2019
This work deals with the design and testing of new acoustic features for the analysis of dysprosodic speech in patients with hypokinetic dysarthria. It presents and tests 41 new features for dysprosody quantification, describing melody, loudness, rhythm and pace. The new features can be divided into 7 groups; within each group, the features differ in the statistical values used. The first four groups are based on absolute differences and cumulative sums of the fundamental frequency and short-time energy of the signal. The fifth group contains features based on multiples of the fundamental frequency and short-time energy, combined into one global intonation feature. The sixth group contains global time features, formed as ratios between conventional rhythm and pace features. The last group contains global features for the quantification of overall dysprosody, formed as ratios between the global intonation and global time features. All features were tested on the Czech Parkinsonian speech database PARCZ. First, a kernel density estimate was computed and plotted for every feature. Then a correlation analysis with clinical metadata was performed, first for all features and then for the global features only. Next, classification and regression analyses were carried out using the classification and regression tree (CART) algorithm. This analysis was first performed for each feature separately, then for all the data at once, and finally a sequential floating feature selection was run to find the best-fitting combination of features for the task at hand. Although no feature emerged as universally best, a few features appeared among the best repeatedly, and there was often a clear drop between the best and the second-best feature, marking the former as markedly better for the given task than the rest of those tested. The results are presented in the conclusion together with a discussion.
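A minimal sketch of the CART step described above is given below, using scikit-learn's decision-tree classifier on a hypothetical feature matrix; the feature values, labels and cross-validation setup are placeholders, not the PARCZ data or the exact evaluation protocol of the thesis.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical data: rows are speakers, columns are the proposed dysprosody features.
rng = np.random.default_rng(4)
X = rng.normal(size=(60, 41))                 # 41 acoustic features per speaker
y = rng.integers(0, 2, size=60)               # 0 = healthy control, 1 = dysarthric (placeholder)

# One tree per single feature, mirroring the per-feature analysis.
single_feature_scores = [
    cross_val_score(DecisionTreeClassifier(max_depth=3), X[:, [j]], y, cv=5).mean()
    for j in range(X.shape[1])
]

# One tree on all features at once.
all_features_score = cross_val_score(DecisionTreeClassifier(max_depth=3), X, y, cv=5).mean()
print(int(np.argmax(single_feature_scores)), all_features_score)
```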
