111

Traffic sensitive quality of service controller

Kumar, Abhishek Anand. January 2004 (has links)
Thesis (M.S.)--Worcester Polytechnic Institute. / Keywords: delay hints; AQM; QoS Controller. Includes bibliographical references (p. 49-52).
112

Shareadox: The paradox of service quality assurance in Sharing Economy businesses

Appelquist, Patrik, Johansson, Jesper, Törnlöf, Mathilda January 2015 (has links)
The purpose of the Sharing Economy is to exploit unused resources between people, as an alternative to buying new and owning things oneself (Gansky, 2010). Gansky (2010) argues that a major reason why the Sharing Economy has grown in recent times is, besides the recession and people’s increased environmental awareness, the urbanization that has made people move closer to each other and into the big cities. This, together with the increased use of GPS technology in smartphones, enables people to be constantly connected to an ever-changing network. People now share and exchange services and products with each other. Sharing Economy businesses (SE businesses) enable this by facilitating the meeting and the transaction, while at the same time capitalizing on it. What most established theories within the subject have in common is that they primarily describe what the Sharing Economy phenomenon is, but not how SE businesses work, or could work, with business economics aspects such as quality assurance. Many SE businesses build their brands as service companies and therefore market themselves as such, despite the fact that these companies merely enable, and capitalize on, connecting individuals. Hence, SE businesses no longer own either the human or the physical resources in the same way as traditional businesses would. Even so, these individuals are the public face of the companies. In order to continue to capitalize on the phenomenon, it should be in the interest of SE businesses to somehow ensure the quality of their services, but what happens when the resources are no longer owned by the company? The purpose of this study is to increase understanding of how SE businesses work with quality assurance of their services. There are at present few empirical studies on how SE businesses work with quality assurance from a business perspective; this study therefore intends to generate theory grounded in reality. The researchers used the grounded theory methodology. The companies studied are AirBnB, Lyft, Flexidrive and WorkaroundTown. One finding is that the studied SE businesses work consciously on quality assuring their services through recruitment, training and feedback to their providers (the ones performing the services). Despite the fact that these companies only intend to act as intermediaries between users who want to share resources, the study has shown that they focus heavily on what could be compared to Human Resource Management. Furthermore, the SE businesses use tools that in different ways result in quality assurance. Standardized systems for reservations, payments and the like reduce the risk of errors. Grading systems serve both as incentives and as a means of control for quality assurance. The building of a community contributes to the creation of an artificial corporate culture where common values and quality assumptions are established. New providers are recruited, trained and shaped in a user community where quality and standards are already deeply rooted. Finally, the study has shown that different tools carry different weight in the quality assurance work depending on the stage of development of the SE business in question.
As the service transactions between users become more and more self-propelled, the company’s resources can shift from managing the main process towards managing supporting processes such as reactive processes, marketing and community building.
113

Quantifying the value of our libraries. Are our systems ready?

Magodongo, April Mahlangu January 2012 (has links)
Paper presented at the 15th IUGSA Conference, 12-14 November 2012, Bloemfontein
114

An Automated Quality Assurance Procedure for Archived Transit Data from APC and AVL Systems

Saavedra, Marian Ruth January 2010 (has links)
Automatic Vehicle Location (AVL) and Automatic Passenger Counting (APC) systems can be powerful tools for transit agencies to archive large, detailed quantities of transit operations data. Managing data quality is an important first step for exploiting these rich datasets. This thesis presents an automated quality assurance (QA) methodology that identifies unreliable archived AVL/APC data. The approach is based on expected travel and passenger activity patterns derived from the data. It is assumed that standard passenger balancing and schedule matching algorithms are applied to the raw AVL/APC data along with any existing automatic validation programs. The proposed QA methodology is intended to provide transit agencies with a supplementary tool to manage data quality that complements, but does not replace, conventional processing routines (which can be vendor-specific and less transparent). The proposed QA methodology endeavours to flag invalid data as “suspect” and valid data as “non-suspect”. There are three stages: i) the first stage screens data that violate physical constraints; ii) the second stage looks for data that represent outliers; and iii) the third stage evaluates whether the outlier data can be accounted for by a valid or invalid pattern. Stop-level tests are mathematically defined for each stage; however, data are filtered at the trip level. Data that do not violate any physical constraints and do not represent any outliers are considered valid trip data. Outlier trips that may be accounted for by a valid outlier pattern are also considered valid. The remaining trip data are considered suspect. The methodology is applied to a sample set of AVL/APC data from Grand River Transit in the Region of Waterloo, Ontario, Canada. The sample data consist of four months of data, from September to December 2008, comprising 612,000 stop-level records representing 25,012 trips. The results show 14% of the trip-level data are flagged as suspect for the sample dataset. The output is further dissected by: reviewing which tests contribute most to the set of suspect trips; confirming the pattern assumptions for the valid outlier cases; and comparing the sample data by various traits before and after the QA methodology is applied. The latter task is meant to recognize characteristics that may contribute to higher or lower quality data. Analysis shows that the largest portion of suspect trips, for this sample set, suggests the need for improved passenger balancing algorithms or greater accuracy of the APC equipment. The assumptions for valid outlier case patterns were confirmed to be reasonable. It was found that poor schedule data contributes to poorer quality in AVL-APC data. An examination of data distribution by vehicle showed that usage and the portion of suspect data varied substantially between vehicles. This information can be useful in the development of maintenance plans and sampling plans (when combined with information on data distribution by route). A sensitivity analysis was conducted along with an impact analysis on downstream data uses. The model was found to be sensitive to three of the ten user-defined parameters. The impact of the QA procedure on network-level measures of performance (MOPs) was not found to be significant; however, the impact was more substantial for route-specific MOPs.
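The three-stage screening described in this abstract lends itself to a simple trip-level filter. The sketch below illustrates the idea under assumed field names, a 3-sigma outlier rule, and a placeholder "valid outlier" pattern; the thesis defines its actual stop-level tests and patterns mathematically, so everything here beyond the suspect/non-suspect structure is a hypothetical stand-in.

```python
from dataclasses import dataclass
from statistics import mean, stdev
from typing import List

@dataclass
class StopRecord:
    boardings: int
    alightings: int
    load_after: int      # passenger load reported when leaving the stop
    dwell_time_s: float  # dwell time in seconds

def violates_physical_constraints(trip: List[StopRecord]) -> bool:
    """Stage 1: flag physically impossible records (negative dwell times or
    loads, or on/off counts that do not reconcile with the running load)."""
    load = 0
    for stop in trip:
        load += stop.boardings - stop.alightings
        if stop.dwell_time_s < 0 or stop.load_after < 0 or load < 0:
            return True
        if stop.load_after != load:  # passenger-balancing error
            return True
    return False

def explained_by_valid_pattern(trip: List[StopRecord]) -> bool:
    """Stage 3 placeholder: an outlier trip may still be legitimate, e.g. a
    trip with no passenger activity at all."""
    return all(s.boardings == 0 and s.alightings == 0 for s in trip)

def is_outlier(total_boardings: int, fleet_mean: float, fleet_std: float) -> bool:
    """Stage 2 placeholder: a simple 3-sigma rule on total boardings per trip."""
    return fleet_std > 0 and abs(total_boardings - fleet_mean) > 3 * fleet_std

def classify_trips(trips: List[List[StopRecord]]) -> List[str]:
    """Filter at the trip level: constraint violations or unexplained outliers
    are labelled suspect, everything else non-suspect."""
    totals = [sum(s.boardings for s in t) for t in trips]
    mu = mean(totals)
    sigma = stdev(totals) if len(totals) > 1 else 0.0
    labels = []
    for trip, total in zip(trips, totals):
        if violates_physical_constraints(trip):
            labels.append("suspect")
        elif is_outlier(total, mu, sigma) and not explained_by_valid_pattern(trip):
            labels.append("suspect")
        else:
            labels.append("non-suspect")
    return labels
```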
115

Process improvements for manufacturing excellence

Carrillo, Janice E. 05 1900 (has links)
No description available.
116

The development of an in-vivo dosimeter for the application in radiotherapy

Bose, Rajiv January 2012 (has links)
The expectation of continual improvements in the treatment of cancer has brought quality assurance in radiotherapy under scrutiny in recent years. After a cancer diagnosis, a custom treatment plan is devised to meet the particular needs of the patient's condition based on their prognosis. A cancer treatment plan will typically comprise several cancer treatment technologies that combine to form a comprehensive programme to fight the malignant growth. Inherent in each cancer treatment technology is a percentage error in treatment accuracy. Quality assurance is the medical practice of minimising the percentage error in treatment accuracy. Radiotherapy is one of the several cancer treatment technologies a patient might receive as part of their treatment plan, and in-vivo dosimetry is a quality assurance technology specifically designed to minimise the percentage error in the treatment accuracy of radiotherapy. This thesis outlines the work completed in the design of a next generation dosimeter for in-vivo dosimetry. The proposed dosimeter is intended to modernise the process of measuring the absorbed dose of ionising radiation received by the target volume during a radiotherapy session. To accomplish this goal the new dosimeter will amalgamate specialist technologies from the field of particle physics and reapply them to the field of medical physics. This thesis describes the design of a new implantable in-vivo dosimeter, a dosimeter comprising several individual stages of electronics working together to modernise quality assurance in radiotherapy. Presented within this thesis are results demonstrating the performance of two critical stages of this new dosimeter: the floating gate metal oxide field effect transistor, a radiation-sensitive electronic component measuring an absorbed dose of radiation; and the micro antenna, a highly specialised wireless communications device working to transmit a high-frequency radio signal. This was a collaborative project between Rutherford Appleton Laboratory and Brunel University. The presented work in this thesis was completed between March 2007 and January 2011.
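For context, a floating-gate MOSFET dosimeter is typically read out as a shift in threshold voltage that is mapped to absorbed dose through a calibration curve. The minimal sketch below assumes a linear, made-up sensitivity figure purely for illustration; the thesis' actual device characterisation and readout electronics are not reproduced here.

```python
# Hypothetical linear calibration: millivolts of threshold-voltage shift per gray.
SENSITIVITY_MV_PER_GY = 30.0

def dose_from_threshold_shift(v_th_before_mv: float, v_th_after_mv: float) -> float:
    """Estimate absorbed dose (Gy) from the pre- and post-irradiation threshold
    voltages of a floating-gate MOSFET, assuming a linear response."""
    shift_mv = abs(v_th_after_mv - v_th_before_mv)
    return shift_mv / SENSITIVITY_MV_PER_GY

# With the assumed sensitivity, a 60 mV shift corresponds to 2 Gy.
print(dose_from_threshold_shift(1500.0, 1560.0))
```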
117

Examining Methods and Practices of Source Data Verification in Canadian Critical Care Randomized Controlled Trials

Ward, Roxanne E. 21 March 2013 (has links)
Statement of the Problem: Source data verification (SDV) is the process of comparing data collected at the source to data recorded on a Case Report Form, either paper or electronic (1), to ensure that the data are complete, accurate and verifiable. Good Clinical Practice (GCP) Guidelines are vague and lack evidence as to the degree of SDV required and whether or not SDV affects study outcomes. Methods of Investigation: We performed systematic reviews to establish the published evidence base for methods of SDV and to examine the effect of SDV on study outcomes. We then conducted a national survey of Canadian Critical Care investigators and research coordinators about their attitudes and beliefs regarding SDV. We followed with an audit of completed and in-progress Randomized Controlled Trials (RCTs) of the Canadian Critical Care Trials Group (CCCTG). Results: Systematic Review of Methods of SDV: The most commonly reported or recommended frequency of source data verification (10/14; 71%) was either based on level of risk or that it be conducted early (i.e., after the 1st patient is enrolled). The amount of SDV recommended or reported varied from 5% to 100%. Systematic Review of Impact of SDV on Study Outcomes: There was no difference in study outcomes for one trial, and the effect could not be assessed in the other. National Survey of Critical Care Investigators and Research Coordinators: Survey data showed that 95.8% (115/120) of respondents believed that SDV was an important part of Quality Assurance; 73.3% (88/120) felt that academic studies should do more SDV; and 62.5% (75/120) felt that there is insufficient funding available for SDV. Audit of Source Data Verification Practices in CCCTG RCTs: In the national audit of in-progress and completed CCCTG RCTs, 9/15 (60%) included a plan for SDV and 8/15 (53%) actually conducted SDV. Of the 9 completed published trials, 44% (4/9) conducted SDV. Conclusion: There is little evidence regarding the methods of SDV and its effect on study outcomes. Based on the results of the systematic review, survey, and audit, more research is needed to strengthen the evidence base for the methods and effect of SDV on study outcomes.
118

Evaluation of a Helical Diode Array and Planned Dose Perturbation Model for Pretreatment Verification of Volumetric Modulated Arc Therapy

Maynard, Evan David 17 September 2013 (has links)
The ArcCHECK dosimeter is a novel dosimetry tool that uses a helical array of silicon diode detectors to measure dose in a cylindrical plane. 3DVH is an associated software package that can use ArcCHECK diode measurements along with treatment planning system (TPS) data to guide a full 3D dose reconstruction. The ArcCHECK phantom, along with the 3DVH software, was evaluated as a volumetric modulated arc therapy (VMAT) pretreatment verification tool. The comprehensive evaluation of the ArcCHECK and 3DVH system involved a comparison of measured dose to both ECLIPSE and Monte Carlo calculated dose for open fields and intensity modulated radiation therapy (IMRT) plans. System-based confidence limits for gamma-pass-rate and dose-difference metrics were established through the measurement of prostate and head and neck VMAT plans. Using the system-based confidence limits and clinically accepted tolerances, the sensitivity of the ArcCHECK and 3DVH system to VMAT errors was determined. Dose measured by the ArcCHECK and reconstructed in 3DVH agreed very well with dose calculated in ECLIPSE and Monte Carlo for both open fields and IMRT plans. The only results that fell outside of clinically accepted tolerances were a set of head and neck IMRT plans; however, it was determined that a major factor in this result was suboptimal modelling of MLC effects in the TPS, in combination with changes in linac performance since commissioning of the TPS model. VMAT plans measured by the ArcCHECK and 3DVH system were in excellent agreement with ECLIPSE results, and system-based confidence limits were determined to be tighter than commonly used limits. ArcCHECK and 3DVH were sensitive to clinically relevant VMAT errors and insensitive to errors with little dosimetric impact, although diode measurements alone required tighter tolerances than are typically used. The ArcCHECK phantom and 3DVH software, used together, have been shown to provide useful dosimetric information for VMAT pretreatment verification. / Graduate / 0756
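The gamma-pass-rate metric referred to in this abstract is conventionally a Low-style gamma evaluation. The sketch below shows a brute-force 1-D global gamma pass rate under assumed 3%/3 mm criteria and a 10% low-dose cutoff; the real comparison is performed in 3-D by the ArcCHECK/3DVH and Monte Carlo tool chain, so this is only a schematic of the metric, not of the system evaluated.

```python
import numpy as np

def gamma_pass_rate(measured, planned, positions_mm,
                    dose_crit=0.03, dist_crit_mm=3.0, low_dose_cutoff=0.10):
    """Brute-force 1-D global gamma: for each evaluated measurement point,
    search all planned points for the minimum combined dose/distance metric."""
    measured = np.asarray(measured, dtype=float)
    planned = np.asarray(planned, dtype=float)
    positions_mm = np.asarray(positions_mm, dtype=float)
    d_max = planned.max()
    gammas = []
    for x_m, d_m in zip(positions_mm, measured):
        if d_m < low_dose_cutoff * d_max:    # skip low-dose points, as is common
            continue
        dose_term = (d_m - planned) / (dose_crit * d_max)  # global normalisation
        dist_term = (x_m - positions_mm) / dist_crit_mm
        gammas.append(np.sqrt(dose_term ** 2 + dist_term ** 2).min())
    gammas = np.asarray(gammas)
    return 100.0 * np.mean(gammas <= 1.0)    # percentage of points with gamma <= 1

# Toy check: identical Gaussian profiles should pass at 100%.
x = np.linspace(-50.0, 50.0, 101)
d = np.exp(-(x / 30.0) ** 2)
print(gamma_pass_rate(d, d, x))
```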
119

Quality Assessment of Feeder Cattle and Processes Based on Available Background Information

Franke, Jake 02 October 2013 (has links)
The 2011 National Feeder Cattle Audit evaluated 42,704 cattle in 260 lots from 12 Texas and five Nebraska feedyards to determine BQA adherence and the effects prior management and transportation practices had on feedyard performance and health, and to establish industry benchmark data so that future advancements and improvements in beef quality related areas can be monitored. This study suggested most feedyard managers and some cow-calf producers and stocker operators have implemented Beef Quality Assurance plans in their respective operations. Survey data document that many stakeholders in the beef cattle industry have followed BQA guidelines in an effort to improve the quality and safety of beef being produced. The lots of cattle traveled an average distance of 468 miles from their origin to the feedyard and spent an average of 185.7 days on feed. The majority of the lots were from a single-source origin. Cattle for which feedlot performance data were available gained an average of 3.2 lb/day and converted feed at 6.2:1. Across all lots, the average animal cost per day was $3.30. Cattle in the feedyard appeared healthy, with a 1.7% average death loss and a 19.6% average morbidity rate. Processing costs averaged $14.47 per animal, and medicine costs were $5.22 per animal in the lot. The majority of lots had lot tags present in their ears (98.8%), were branded with at least one hide brand (64.3%) and were polled (79.8%). The cattle had primarily a solid hide color (70.7%) and were black (49.6%). Lots appeared uniform, with 82.9% termed slightly to extremely uniform and only 17.1% of the evaluated lots assessed as slightly to extremely variable. Cattle that traveled farther distances to the feedyard had higher processing costs but did not have differences in medicine costs through the finishing period. It appears the industry will need more communication across the different segments to ensure a sustainable future. Continuing to track cattle origin and what management practices have been performed will be important so that cattle can be received with the appropriate processing protocol. Across-segment collaboration and communication provides economic opportunities for beef cattle producers.
120

Evaluation of 4-H and FFA Members Scores on the 2011-2012 Texas Quality Counts Verification Exam

Grube, Brittany C. 03 October 2013 (has links)
The purpose of this study was to analyze the Texas Quality Counts Verification Exam for junior and senior aged 4-H and FFA members. The Texas Quality Counts program was developed in response to a need for teaching livestock ethics and care to the youth of Texas, and it strives to teach youth how to produce a safe and wholesome livestock product for the consumer. An analysis of youth scores between 2011 and 2012 was conducted to determine how well youth were scoring on the Texas Quality Counts Verification Exam. Out of the 91,733 attempts, 18,204 were taken by juniors and 73,572 were taken by seniors. Junior level attempts showed a fairly even spread among self-identified membership in 4-H and FFA, while senior level attempts saw a much greater spread in membership, with 73% of attempts taken by youth who identified themselves as members of FFA. Overall, 78% of junior level youth were able to pass the exam on their first attempt and showed a range of mean attempts between 1.13 and 1.47 based on age. Senior level youth, on the other hand, had only 47% pass on their first attempt and had a range of mean attempts between 2.21 and 2.54 based on age. Mean exam scores were calculated for juniors and seniors, at 0.85 and 0.71 respectively. To determine whether there were any differences in scores between self-identified membership in 4-H, FFA, or both 4-H and FFA, a one-way ANOVA for junior and senior members was conducted. Both junior and senior age groups showed a significant difference between the three membership categories (p = 0.001).
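The group comparison reported in this abstract is a standard one-way ANOVA. As a minimal sketch, the snippet below runs the same kind of test on fabricated score arrays for the three self-identified membership groups (4-H, FFA, both); only the form of the analysis mirrors the study, none of the numbers do.

```python
import numpy as np
from scipy import stats

# Fabricated placeholder scores for the three membership groups; the real study
# used recorded exam scores from the 2011-2012 attempts.
rng = np.random.default_rng(0)
scores_4h   = rng.normal(0.85, 0.10, 200).clip(0.0, 1.0)
scores_ffa  = rng.normal(0.80, 0.12, 200).clip(0.0, 1.0)
scores_both = rng.normal(0.83, 0.11, 200).clip(0.0, 1.0)

f_stat, p_value = stats.f_oneway(scores_4h, scores_ffa, scores_both)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A p-value below 0.05 would indicate a difference in mean exam scores across
# the three membership categories (the study reported p = 0.001).
```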
