  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

MIFO-baserade bedömningar av risken för förorening och spridning av PFAS vid brandstationer / Risk assessments of pollution and spread of PFAS at fire stations based on MIFO

Hollsten, Josefin January 2022 (has links)
A relatively unexplored source of pollution is fire stations and their use of aqueous film-forming foam (AFFF) containing per- and polyfluorinated substances (PFAS). It is well documented that these foams were used at fire drill sites, contaminating surrounding surface water, sediments and groundwater. The aim of this study was to assess whether fire stations could be a source of pollution and spread of PFAS, and whether the industry should be prioritised for further investigation. Four fire stations were selected for the assessment, which was carried out using part one of the Method of Surveying Contaminated Sites (Swedish acronym: MIFO). This included studies of maps and archives as well as field visits during which firefighters were interviewed to gather information about activities that had historically taken place on the specific sites. All of the fire stations were classified as risk class 2, meaning they pose a high risk to the environment and human health according to MIFO. In conjunction with the assessment, existing test results for PFAS in soil and water from other fire stations in Sweden are presented to show the general pollution situation alongside the results of this evaluation. The conclusion of this study was that various activities at fire stations may have polluted ground- and surface waters and that the industry should be prioritised for further investigation.
2

Dynamics and control of open- and closed-chained multibody systems

Lin, Nanjou January 1992 (has links)
No description available.
3

Water Quality Impact of Burning and Grazing On A Chained Pinyon-Juniper Site in Southeastern Utah

Buckhouse, John C. 01 May 1975 (has links)
During 1973 and 1974 a water quality study was conducted in San Juan County, southeastern Utah. In 1973, baseline water quality data were collected from study locations which had been chained to remove pinyon-juniper vegetation six years earlier. The area had been chained using two different techniques: (1) double chained, with debris left in place, and (2) chained, with debris windrowed. An "undisturbed, natural" woodland was left between these two treatments to serve as a control area. In the fall of 1973 and spring of 1974, secondary treatments of burning and grazing were superimposed on the debris-in-place and windrowed sites, respectively. All water collected and analyzed for the several water quality parameters was generated using a small-plot Rocky Mountain infiltrometer, which creates a simulated rainstorm. The resultant runoff was collected and analyzed for each of the parameters in question. No significant changes were noted from these point source measurements in terms of fecal and total coliform production (bacterial indicators of fecal pollution). The point source approach was a technique for sampling a much larger land area through many small plots (0.23 m2). There is an element of risk involved whenever data generated from such a small area are projected to the larger land area. Based on this small-plot data it appears, however, that this level of livestock grazing (2 hectares/AUM) does not constitute a public health hazard in terms of fecal pollution indicators when applied to similar semi-arid watershed areas. Some significant changes were noted following burning, however: significant increases in potassium and phosphorus were observed. Apparently the fire "released" these nutrients, which had been tied up in the debris scattered across the site. Potassium registered an increase of about 4 ppm (400 percent), while phosphorus showed an increase of about 0.2 ppm (400 percent).
No significant treatment changes were detected for sodium, calcium, or nitrate-nitrogen. Sediment production was also measured under the various treatment conditions. High natural variability is present among these sites, and no significant treatment effect was detected following our prescribed burning or grazing treatments. Infiltration rates were also monitored. No significant treatment differences were noted among the initial treatment means during 1973; apparently any differences in infiltration rates due to chaining technique had been overcome by the passage of six years since the initial chaining had been completed. During 1974, however, secondary treatment was in effect. Infiltration rates on the grazed and burned watersheds were significantly depressed during certain time intervals in comparison to the "undisturbed, natural" woodland location. Apparently this level of secondary treatment could have an effect on the hydrology of the area, at least in terms of infiltration rates.
4

Imputation techniques for non-ordered categorical missing data

Karangwa, Innocent January 2016 (has links)
Philosophiae Doctor - PhD / Missing data are common in survey data sets: enrolled subjects often do not have data recorded for all variables of interest. Inappropriate handling of missing data may lead to biased estimates and incorrect inferences, so special attention is needed when analysing incomplete data. Multivariate normal imputation (MVNI) and multiple imputation by chained equations (MICE) have emerged as the leading techniques for imputing, or filling in, missing data. The former assumes a normal distribution for the variables in the imputation model, but can also handle missing data whose distributions are not normal. The latter fills in missing values taking into account the distributional form of the variables to be imputed. The aim of this study was to determine the performance of these methods when data are missing at random (MAR) or missing completely at random (MCAR) on unordered (nominal) categorical variables treated as predictors or response variables in regression models. Both dichotomous and polytomous variables were considered in the analysis. The baseline data used were from the 2007 Demographic and Health Survey (DHS) of the Democratic Republic of Congo. The analysis model of interest was the logistic regression of a woman's contraceptive method use status on her marital status, with and without controlling for other covariates (continuous, nominal and ordinal). Based on the data set with missing values, data sets with observations missing at random and missing completely at random on either the covariates or the response variables measured on a nominal scale were first simulated and then used for imputation purposes. Under the MVNI method, unordered categorical variables were first dichotomised, and then K − 1 dichotomised variables (where K is the number of levels of the categorical variable of interest) were included in the imputation model, leaving the remaining category as a reference.
These variables were imputed as continuous variables using a linear regression model. Imputation with MICE considered the distributional form of each variable to be imputed; that is, imputations were drawn using binary and multinomial logistic regressions for dichotomous and polytomous variables respectively. The performance of these methods was evaluated in terms of bias and standard errors of the regression coefficients estimated to determine the association between a woman's contraceptive method use status and her marital status, with and without controlling for other types of variables. The analysis was first done assuming an unweighted sample; the sample weight was then taken into account to assess whether the sample design would affect the performance of the multiple imputation methods of interest, namely MVNI and MICE. As expected, the results showed that for all the models, MVNI and MICE produced less biased estimates and smaller standard errors than the case deletion (CD) method, which discards cases with missing values from the analysis. Moreover, it was found that when data were missing (MCAR or MAR) on the nominal variables treated as predictors in the regression model, MVNI reduced bias in the regression coefficients and standard errors compared to MICE, for both unweighted and weighted data sets. On the other hand, the results indicated that MICE outperformed MVNI when data were missing on the response variables, whether binary or polytomous. Furthermore, it was noted that the sample design (sample weights), the rates of missingness and the missing data mechanisms (MCAR or MAR) did not affect the behaviour of the multiple imputation methods considered in this study. Based on these results, it can be concluded that when missing values are present on outcome variables measured on a nominal scale in regression models, the distributional form of the variable with missing values should be taken into account.
When these variables are used as predictors (with missing observations), the parametric imputation approach (MVNI) would be a better option than MICE.
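The MVNI-style step described in this abstract (K − 1 dummy coding of a nominal variable, then imputing the indicator as a continuous variable via linear regression) can be sketched as follows. This is an illustrative sketch, not code from the thesis; the data, variable names and single-pass regression imputation are invented for demonstration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200
region = rng.choice(["north", "south", "east"], size=n)  # nominal, K = 3 levels
age = rng.normal(30.0, 8.0, size=n)

# K - 1 dummy coding: 3 levels -> 2 indicator columns, "east" as the reference
dummies = pd.get_dummies(pd.Series(region), drop_first=True, dtype=float)
df = pd.concat([dummies, pd.Series(age, name="age")], axis=1)

# introduce MCAR missingness on one indicator column
col = dummies.columns[0]
miss = rng.random(n) < 0.2
df.loc[miss, col] = np.nan

# treat the indicator as continuous and impute it with a linear
# regression on the fully observed covariate (here, age)
obs = df[col].notna()
X_obs = np.column_stack([np.ones(obs.sum()), df.loc[obs, "age"]])
beta, *_ = np.linalg.lstsq(X_obs, df.loc[obs, col], rcond=None)
X_mis = np.column_stack([np.ones((~obs).sum()), df.loc[~obs, "age"]])
df.loc[~obs, col] = X_mis @ beta  # fitted values fill the gaps
```

A full MVNI or MICE procedure would iterate such draws across all incomplete variables and add appropriate random noise; the sketch shows only the dummy-coding and continuous-treatment idea.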
5

Model-based Multiple Imputation by Chained-equations for Multilevel Data below the Limit of Detection

Xu, Peixin 24 May 2022 (has links)
No description available.
6

Performance Comparison of Imputation Methods for Mixed Data Missing at Random with Small and Large Sample Data Set with Different Variability

Afari, Kyei 01 August 2021 (has links)
One of the concerns in the field of statistics is the presence of missing data, which leads to bias in parameter estimation and inaccurate results. The multiple imputation procedure is a remedy for handling missing data. This study examined which multiple imputation methods best handle mixed-variable datasets with different sample sizes and variability, along with different levels of missingness. The study employed the predictive mean matching, classification and regression trees, and random forest imputation methods. For each dataset, the multiple regression parameter estimates for the complete dataset were compared to those obtained from the imputed dataset. The results showed that the random forest imputation method performed best mostly for samples of 150 and 500, irrespective of the variability, while the classification and regression tree imputation method worked best mostly for samples of 30, irrespective of the variability.
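The predictive mean matching method compared in this abstract can be sketched in a few lines. This is an illustrative single-variable implementation under invented data, not code from the study: missing values of y are filled by borrowing observed values from donors whose regression predictions are closest.

```python
import numpy as np

rng = np.random.default_rng(1)

def pmm_impute(x, y, k=5):
    """Minimal predictive mean matching sketch: fill missing y values
    by drawing observed values from the k donors with the closest
    regression predictions."""
    obs = ~np.isnan(y)
    # simple linear regression y ~ x on the observed rows
    X = np.column_stack([np.ones(obs.sum()), x[obs]])
    beta, *_ = np.linalg.lstsq(X, y[obs], rcond=None)
    pred_all = beta[0] + beta[1] * x
    y_filled = y.copy()
    for i in np.flatnonzero(~obs):
        # k observed cases whose predictions are closest to case i's
        d = np.abs(pred_all[obs] - pred_all[i])
        donors = y[obs][np.argsort(d)[:k]]
        y_filled[i] = rng.choice(donors)  # draw an actual observed value
    return y_filled

x = rng.normal(size=100)
y = 2 * x + rng.normal(scale=0.5, size=100)
y[rng.choice(100, 20, replace=False)] = np.nan  # 20% MCAR missingness
y_imp = pmm_impute(x, y)
```

Because each imputed value is an actually observed value, PMM preserves the marginal distribution of y better than plugging in regression predictions directly, which is one reason it performs well on mixed data.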
7

A Third-order Differential Steering Robot And Trajectory Generation In The Presence Of Moving Obstacles

An, Vatana 01 January 2006 (has links)
In this thesis, four robots will be used to implement a collision-free trajectory planning/replanning algorithm. The existence of a chained form transformation, so that the robot's model can be controlled in canonical form, will be analysed and proved. A trajectory generation method for obstacle avoidance will be derived, simulated, and implemented, and a specific PC-based control algorithm will be developed. Chapter 2 describes the modelling of a two-wheel differential drive robot and the existence of a controllable canonical chained form. Chapter 3 describes the criterion for avoiding dynamic objects, a feasible collision-free trajectory parameterization, and the solution for the steering velocity. Chapter 4 describes the robot implementation, the PC wireless interface, and the strategy for sending and receiving information wirelessly. The main robot will move in a dynamically changing environment using the canonical chained form. The other three robots will be used as moving obstacles that move with known piecewise constant velocities and, therefore, known trajectories; their initial positions are assumed to be known as well. The main robot will receive commands from the computer, such as how fast to move and turn in order to avoid collision, and will autonomously travel to the desired destination collision-free.
8

The role of consumer leverage in financial crises

Dimova, Dilyana January 2015 (has links)
This thesis demonstrates that consumer leverage can contribute to financial crises such as the subprime mortgage crisis, characterised by increased bankruptcy prospects and tightened credit access. A recession may follow even when the leveraged sector is not a production sector, and can be triggered by seemingly positive events such as a technological innovation or a relaxation of borrowing conditions. The first preliminary chapter updates the Bernanke, Gertler and Gilchrist (1999) approach, with financial frictions in the production sector, to a two-sector model with consumption and housing. It shows that credit frictions in the capital financing decisions of housing firms are not sufficient to capture the negative consumer experience of falling housing prices and relaxed credit access during the recession. The second chapter brings the model closer to the subprime mortgage crisis by shifting credit constraints to the consumer mortgage market. Increased supply of houses lowers asset prices and reduces the value of the real estate collateral used in the mortgage, which in turn worsens the leverage of indebted consumers. A relaxation of borrowing conditions turns credit-constrained households into a potential source of disturbances themselves when market optimism allows them to raise their leverage with little downpayment. Both cases demonstrate that although households are not production agents, their worsening debt levels can trigger a lasting financial downturn. The third chapter develops a chained mortgage contracts model where both homeowner consumers and the financial institutions that securitize their mortgage loans are credit-constrained. Adding credit constraints to the financial sector that provides housing mortgages creates opportunities for risk sharing, where banks shift some of the downturn onto indebted consumers in order to hasten their own recovery. This consequence is especially evident in the case of relaxed credit access for banks.
Financial institutions repair their debt position relatively fast at the expense of consumers whose borrowing ability is squeezed for a long period despite the fact that they may not be the source of the disturbance. The result mirrors the recent subprime mortgage crisis characterised by a sharp but brief decline for banks and a protracted recovery for mortgaged households.
9

Analysis of Methods for Chained Connections with Mutual Authentication Using TLS / Analys av metoder för kedjade anslutningar med ömsesidig autentisering användandes TLS

Petersson, Jakob January 2015 (has links)
TLS is a vital protocol used to secure communication over networks; it provides an end-to-end encrypted channel between two directly communicating parties. In certain situations it is not possible, or desirable, to establish direct connections from a client to a server, for example when connecting to a server located on a secure network behind a gateway. In these cases chained connections are required. Mutual authentication and end-to-end encryption are important capabilities in a high assurance environment. These are provided by TLS, but there are no known solutions for chained connections. This thesis explores multiple methods that provide the functionality for chained connections using TLS in a high assurance environment with trusted servers and a public key infrastructure. A number of methods are formally described and analysed according to multiple criteria reflecting both functionality and security requirements. Furthermore, the most promising method is implemented and tested in order to verify that it is viable in a real-life environment. The proposed solution modifies the TLS protocol through the use of an extension which allows for the distinction between direct and chained connections. The extension, which also allows for specifying the structure of chained connections, is used in the implementation of a method that creates chained connections by layering TLS connections inside each other. Testing demonstrates that the overhead of the method is negligible and that it is a viable solution for creating chained connections with mutual authentication using TLS.
10

應用資料採礦於零售通路業之商品力矩陣分析-以某連鎖藥妝銷售資料為例 / The Application of Data Mining on Commodity Competitiveness Matrix Analysis of Retailing Industry-Case Study of Chained Drugstore Sales Data

賴柏龍, Lai, Po Lung Unknown Date (has links)
As incomes in Taiwan have risen, living standards have followed, and in recent years awareness of the importance of health to individuals and families has grown. The domestic market for health foods and pharmaceuticals has therefore flourished, particularly chained drugstores, which combine the sale of drugs, health foods, open-shelf skincare products and cosmetics with professional pharmacist consultation in a hybrid business model. In recent years, however, chained drugstore retailers have faced competition from different systems, including foreign chains, local chains and regional chained pharmacies, and retailers of drugs and cosmetics generally agree that the main difficulty they face is intense competition within the industry. Commodity competitiveness is a key success factor for a chained drugstore retailer, expressed in dimensions such as commodity diversity, profitability, price competitiveness and uniqueness. Although most drug and cosmetics retailers have a need for product planning or design, few have a dedicated product planning or design department. Data mining can support product planning and design efficiently without a large increase in labour costs, thereby raising the commodity competitiveness of chained drugstore retailers.
This study examines the application of data mining to chained drugstores, with the following objectives: 1. Build a commodity competitiveness matrix using cluster analysis, representing the attributes and value of each product. Positioning products on the matrix helps decision makers optimise the product mix and execute an appropriate strategy for each product. 2. Based on the clustering results, conduct association rule analysis on the product categories, turning the clustering results into references for practical decisions and bringing a new application pattern to association rule analysis. 3. Provide concrete, feasible marketing strategy suggestions to the H chained drugstore based on the two models above.
Using a two-step cluster model, the study builds a commodity competitiveness matrix for the products of the H chained drugstore, with "average margin per product" and "annual number of transactions per product" as its two axes. Products are divided into four broad types, Star, Lottery, Greyfriars and Question Mark, each representing different attributes and value. Combined with association rule analysis, the study proposes practical filtering rules for candidate rules: 1. Lottery products: place lottery products as consequents in the Apriori model to find patterns that lead customers toward them, or place them as antecedents and offer the consequent products as add-on purchase promotions to raise the willingness to buy lottery products. 2. Greyfriars products: place Greyfriars products as antecedents to find potential targets for add-on purchase promotions, and observe the consumption behaviour associated with Greyfriars products to provide appropriate promotions and recommendations, raising the possibility of cross-selling other items.
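The candidate-rule filtering described in this abstract (fixing the consequent of an Apriori rule to a lottery-type product) can be illustrated with a small hand-rolled Apriori-style search. The transactions, product names and thresholds below are invented for illustration and are not from the study.

```python
from itertools import combinations

# toy transaction data; "lottery_serum" plays the role of a lottery-type product
transactions = [
    {"vitamin_c", "face_mask", "lottery_serum"},
    {"vitamin_c", "lottery_serum"},
    {"face_mask", "shampoo"},
    {"vitamin_c", "face_mask", "lottery_serum"},
    {"vitamin_c", "shampoo"},
]
n = len(transactions)

def support(itemset):
    """Fraction of transactions containing every item in itemset."""
    return sum(itemset <= t for t in transactions) / n

# enumerate candidate rules A -> b with a single-item consequent
rules = []
items = set().union(*transactions)
for b in items:
    for r in (1, 2):
        for a in combinations(items - {b}, r):
            a = frozenset(a)
            sup_a = support(a)
            if sup_a == 0:
                continue
            sup_ab = support(a | {b})
            conf = sup_ab / sup_a
            if conf >= 0.6 and sup_ab >= 0.4:  # illustrative thresholds
                rules.append((a, b, conf))

# keep only rules steering customers toward the lottery-type product,
# mirroring the "consequent" filtering pattern described above
lottery_rules = [r for r in rules if r[1] == "lottery_serum"]
```

The same filter reversed (lottery or Greyfriars products as antecedents) yields candidate add-on purchase targets, the second pattern described in the abstract.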
