  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Optimal Design of An Accelerated Degradation Experiment with Reciprocal Weibull Degradation Rate

Polavarapu, Indira 01 September 2004 (has links)
To meet increasing competition, get products to market in the shortest possible time, and satisfy heightened customer expectations, products must be made more robust and fewer failures must be observed in a short development period. In these circumstances, assessing product reliability from degradation data collected at high stress levels becomes necessary. This assessment is accomplished through accelerated degradation tests (ADTs): overstress tests in which product performance, rather than lifetime, is measured as it degrades over time. Because these tests play a central role in obtaining proper reliability estimates for a product, the test plans must be designed scientifically so as to save time and expense and to provide more accurate reliability estimates for a given number of test units and a given test time. In ADTs, several decision variables are important, such as the inspection frequency, the sample size, and the termination time at each stress level. In this research, an optimal plan is developed for the design of an accelerated degradation test with reciprocal Weibull degradation data, using the mean time to failure (MTTF) as the optimization criterion. A nonlinear integer programming problem is formulated under the constraint that the total experimental cost does not exceed a predetermined budget, and the optimal combination of sample size, inspection frequency, and termination time at each stress level is found. A case example based on light emitting diode (LED) data is used to illustrate the proposed method, and sensitivity analyses on the cost parameters and on the parameters of the underlying probability distribution are performed to assess its robustness.
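As a rough illustration of the kind of budget-constrained search the abstract describes, the sketch below enumerates candidate plans (sample size, inspection interval, termination time) under a cost ceiling. The cost coefficients and the placeholder objective are hypothetical stand-ins, not the thesis's reciprocal Weibull model.

```python
from itertools import product

# Hypothetical cost coefficients and budget; the thesis derives the real
# objective from the reciprocal Weibull degradation model.
C_UNIT, C_INSPECT, C_TIME, BUDGET = 50.0, 2.0, 1.0, 2000.0

def total_cost(n, f, t):
    # cost of test units + per-unit inspections + operating the test for t hours
    return C_UNIT * n + C_INSPECT * (t // f) * n + C_TIME * t

def objective(n, f, t):
    # stand-in for the asymptotic variance of the MTTF estimate:
    # more units, more inspections, and longer tests all reduce it
    return 1.0 / (n * (t // f) * t)

# n in 5..40 units, inspection interval f in 10..100 h, termination t in 100..1000 h
best = min(
    (combo for combo in product(range(5, 41), range(10, 101, 10), range(100, 1001, 100))
     if total_cost(*combo) <= BUDGET),
    key=lambda combo: objective(*combo),
)
print(best)
```

For realistic plans the enumeration would be replaced by a proper integer programming solver, but the brute-force loop makes the structure of the constraint and objective explicit.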
42

Assessing Abstinence in Infants Greater Than 28 Days Old

Cline, Genieveve J. 02 September 2018 (has links)
There are currently no published scoring instruments with prior empirical evidence supporting the validity and reliability of the drug withdrawal scores generated for infants greater than 28 days of life with a diagnosis of Neonatal Abstinence Syndrome (NAS). This study was done to identify the signs of withdrawal in infants greater than 28 days of life with NAS and to determine whether further adaptation of the modified-FNAST was necessary to accurately measure the severity of drug withdrawal in this subpopulation of infants. This aim could not be analyzed due to limitations of the data. The study was also done to describe the relationship between the medications used to treat infant NAS and the longitudinal trajectory of the Finnegan scores. The results revealed that the total modified-FNAST scores ranged from 0 to 21 on day 1 of life, with a mean of 8.68 (SD = 4.127), and then gradually decreased with less variability over the length of the hospitalization until discharge. Four medications were used to treat the infants for NAS: morphine (99%), phenobarbital (66.2%), clonidine (25.1%), and buprenorphine (1.9%). The minimum to maximum dosages and durations of inpatient treatment for each medication were: morphine (0.33-2.170 mg/kg/day for 14-81 days), buprenorphine (7.00-61.30 mcg/kg/day for 4.00-30.00 days), clonidine (3.97-28.93 mcg/kg/day for 16.00-87.00 days), and phenobarbital (3.00-16.00 mg/kg/day for 2.00-84.00 days). Most of the infants received morphine alone or in combination with phenobarbital or clonidine, consistent with the established evidence-based NAS weaning protocol. The mixed effects model analysis revealed an overall decrease in the total Finnegan scores over time (p < 0.0001).
The mean total Finnegan scores showed a statistically significant difference between the groups treated with and without clonidine (p = 0.0031); the group treated with clonidine had higher mean total Finnegan scores. Treatment with phenobarbital did not show a significant association with the total Finnegan scores (p = 0.6852). In addition, all other control variables failed to show significant associations with the repeated measures of total Finnegan scores, including gender (p = 0.6257), infant birth weight (p = 0.9375), gestational age (p = 0.8444), and the estimated number of cigarettes smoked by the mother during the pregnancy (p = 0.7300). The interaction between clonidine and phenobarbital treatment was also not statistically significant (p = 0.6412). Key words: Neonatal Abstinence Syndrome, opioid, modified-FNAST, reliable, valid, factor analysis
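The longitudinal pattern described above, high and variable scores on day 1 that decline over the stay, can be illustrated with a toy summary of repeated scores. The values below are invented for illustration only; they are not the study's data.

```python
import statistics

# Synthetic repeated Finnegan-style scores for five hypothetical infants,
# keyed by day of life (invented values, not study data).
scores_by_day = {
    1: [12, 8, 5, 14, 7],
    3: [9, 7, 4, 11, 6],
    7: [6, 5, 3, 8, 4],
    14: [3, 2, 1, 4, 2],
}

# Summarize the trajectory: the mean and spread should both shrink over time.
for day, scores in scores_by_day.items():
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    print(f"day {day:2d}: mean={mean:.2f} sd={sd:.2f}")
```

A real analysis would fit a mixed effects model to account for repeated measures within each infant, as the study did; this sketch only shows the descriptive trend the model formalizes.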
43

Varför yttrandefrihet? : Om rättfärdigandet av yttrandefrihet med utgångspunkt från fem centrala argument i den demokratiska idétraditionen (Why freedom of speech? On the justification of freedom of speech based on five central arguments in the democratic tradition of ideas)

Petäjä, Ulf January 2006 (has links)
This thesis focuses primarily on the question "why is freedom of speech valuable in a democratic context?" I argue that it is problematic that freedom of speech is taken for granted, and that this main question is therefore absent in current political science research, in legal texts, and in public discourse. I also argue that in democratic states the focus regarding freedom of speech is often on its boundaries and limits rather than on its justification; but it is highly problematic to find and establish its limits without discussing why freedom of speech is desirable in the first place. The thesis poses two questions. The first concerns how freedom of speech is justified by the five strongest available arguments. I analyze the arguments and conclude that they justify freedom of speech differently but are similar in one respect: freedom of speech is not primarily justified as an individual right, but rather in terms of the public good. The second question asks whether we can reach a better understanding of the central arguments. I argue that the arguments have something in common; all of them justify freedom of speech with reference to a common value. This common value is what I call a "reliable communication process". All five arguments claim that freedom of speech is valuable because it promotes a reliable communication process, one that is reliable in its capacity to create a pluralistic public discourse that exposes citizens to ideas and perspectives that they would not have chosen in advance. This study results in the following findings. First, freedom of speech is valuable in a democratic context because the reliable communication process supports the central democratic value of the enlightened understanding of the democratic citizen. Second, a principled reason can be given for the boundaries of freedom of speech.
This means that, according to the arguments, there are reasons to abolish or limit freedom of speech if the reliable communication process is damaged or absent, for example in cases of war, anarchy, or violent circumstances. Third, there are strong reasons in support of public service media and greater state intervention in media politics. One strong reason for this conclusion is that public service media can ensure pluralistic communication in society and counteract information conformity and intolerance among its members.
44

A User-level, Reliable and Reconfigurable Transport Layer Protocol

Wang, Tan January 2009 (has links)
Over the past 15 years, the Internet has proven itself to be one of the most influential inventions humankind has ever conceived. Its success can be largely attributed to its stability and ease of access. Among the technologies that constitute the Internet, TCP/IP can be regarded as the cornerstone of its impressive scalability and stability, and many researchers have been, and currently are, actively engaged in studies on optimizing TCP's performance in various network environments. This thesis presents an alternative transport layer protocol called RRTP, designed to provide reliable transport layer services to software applications. The motivation for this work comes from the fact that the most commonly used versions of TCP perform unsatisfactorily when deployed over non-conventional network platforms such as cellular/wireless, satellite, and long fat pipe networks. These non-conventional networks usually have higher network latency and link failure rates than conventional wired networks, and the classic versions of TCP are unable to adapt to these characteristics. This thesis addresses the problem by introducing a user-level, reliable, and reconfigurable transport layer protocol that runs on top of UDP and attends to the characteristics of non-conventional networks that TCP by default ignores. A novel aspect of RRTP lies in identifying three key characteristic parameters of a network to optimize its performance. The single most important contribution of this work is its empirical demonstration that parameter-based, user-configurable flow-control and congestion-control algorithms are highly effective at adapting to, and fully utilizing, various networks. This is demonstrated through experiments designed to benchmark the performance of RRTP against that of TCP on simulated as well as real-life networks.
The experimental results indicate that the performance of RRTP consistently matches or exceeds that of TCP on all major network platforms. This leads to the conclusion that a user-level, reliable, and reconfigurable transport-layer protocol possessing the essential characteristics of RRTP would be a viable replacement for TCP over today's heterogeneous network platforms.
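One way to picture a "parameter-based, user-configurable" transport is a retransmission timeout estimator whose smoothing constants and initial round-trip time are tuned per network profile. The sketch below uses the classic EWMA round-trip-time estimator in the style of TCP's RTO computation; it is an illustrative assumption, not RRTP's actual algorithm, and the parameter values are invented.

```python
class AdaptiveTimeout:
    """EWMA-based retransmission timeout, in the spirit of a configurable
    user-level transport: alpha, beta, k, and the initial RTT are knobs a
    protocol could expose per network type (wireless, satellite, LFN)."""

    def __init__(self, init_rtt=0.5, alpha=0.125, beta=0.25, k=4):
        self.srtt = init_rtt        # smoothed RTT estimate (seconds)
        self.rttvar = init_rtt / 2  # RTT variation estimate
        self.alpha, self.beta, self.k = alpha, beta, k

    def observe(self, rtt_sample):
        # update variance against the old smoothed estimate, then the mean
        self.rttvar = (1 - self.beta) * self.rttvar + self.beta * abs(self.srtt - rtt_sample)
        self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt_sample
        return self.timeout()

    def timeout(self):
        # retransmit if no ACK arrives within mean + k standard deviations
        return self.srtt + self.k * self.rttvar

est = AdaptiveTimeout(init_rtt=0.6)  # e.g. a satellite-like latency profile
for sample in (0.55, 0.62, 0.58, 0.70):
    est.observe(sample)
print(round(est.timeout(), 3))
```

The design point this illustrates is the one the abstract emphasizes: a user-level protocol over UDP can expose such parameters to applications, whereas kernel TCP implementations fix them.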
46

Highly Reliable Broadcast Scheme with Directional Antennas

Kuo, Yi-Cheng 04 September 2003 (has links)
Ad hoc wireless networks are formed by mobile hosts, and their topology changes as the hosts move. There is no stationary infrastructure or base station to coordinate packet transmissions or to advertise network topology information. Such networks serve as temporary wireless networks, for example on battlefields or at disaster relief sites, so without any supporting infrastructure, mobile hosts can communicate with each other directly or indirectly. Because the topology changes as hosts move, hosts must exchange information to cope with the changed conditions. They typically use broadcasting to exchange information with their neighbors, but the high bit error rate of wireless links means packet corruption occurs frequently, so a host may lose important information sent by a neighbor. In the 802.11 standard, broadcasting is an unreliable transmission because, lacking acknowledgements, the sender does not know whether all of its neighbors received the broadcast packets correctly. Many proposed reliable broadcast schemes assume that links between mobile hosts are bidirectional, but bidirectionality is an idealized assumption. In real environments links can be unidirectional: host A may be able to send packets to host B directly while host B cannot reach A, because their transmission ranges differ. In this paper, we propose a new reliable broadcast scheme, the Highly Reliable Broadcast Scheme with Directional Antennas (HRBSDA). HRBSDA reduces the influence of unidirectional links and achieves highly reliable broadcasting. It uses directional antennas and a Time Division Multiple Access (TDMA)-like concept: it divides the DCF Inter-Frame Space (DIFS) into several minislots, which mobile hosts use to ask the sender to retransmit lost packets.
Moreover, HRBSDA not only achieves highly reliable broadcasting but also reduces packet loss recovery time without incurring extra overhead. By using directional antennas, HRBSDA reduces collisions, thereby improving throughput and channel utilization.
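The minislot idea can be sketched as follows: each neighbor that missed packets picks one of the minislots carved out of DIFS in which to request retransmission from the sender. The slot count and the random slot choice below are illustrative assumptions, not HRBSDA's actual assignment rule.

```python
import random

MINISLOTS = 8  # hypothetical number of minislots carved out of DIFS

def request_slots(lost_packets_by_host, seed=None):
    """Each neighbor that missed part of a broadcast draws a minislot in
    which to ask the sender for retransmission. Two hosts drawing the same
    slot models the contention a real scheme must resolve."""
    rng = random.Random(seed)
    schedule = {}
    for host, lost in lost_packets_by_host.items():
        if lost:  # hosts that received everything stay silent
            schedule[host] = rng.randrange(MINISLOTS)
    return schedule

# Hosts A and C missed packets; B received everything.
sched = request_slots({"A": [3, 5], "B": [], "C": [7]}, seed=1)
print(sched)
```

In the paper's setting the directional antennas further reduce the chance that two such requests collide, since they arrive at the sender from different sectors.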
47

Networking infrastructure and data management for large-scale cyber-physical systems

Han, Song, doctor of computer sciences 25 February 2013 (has links)
A cyber-physical system (CPS) is a system featuring a tight combination of, and coordination between, its computational and physical elements. A large-scale CPS usually consists of several subsystems, formed by networked sensors and actuators and deployed in different locations. These subsystems interact with the physical world and execute specific monitoring and control functions. How to organize the sensors and actuators inside each subsystem, and how to interconnect these physically separated subsystems to achieve secure, reliable, and real-time communication, is a significant challenge. In this thesis, we first present a TDMA-based, low-power, secure real-time wireless protocol. This protocol can serve as an ideal communication infrastructure for CPS subsystems that require flexible topology control, secure and reliable communication, and adjustable real-time service support. We then describe network management techniques designed to ensure reliable routing and real-time services inside the subsystems, along with data management techniques for maintaining the quality of the data sampled from the physical world. To evaluate the proposed techniques, we built a prototype system and deployed it in different environments for performance measurement. We also present a lightweight, scalable solution for interconnecting heterogeneous CPS subsystems through a slim IP adaptation layer and a constrained application protocol layer. This approach makes the underlying connectivity technologies transparent to application developers, enabling rapid application development and efficient migration among different CPS platforms. At the end of this thesis, we present a semi-autonomous robotic system called the cyberphysical avatar, built on our proposed network infrastructure and data management techniques.
By integrating recent advances in body-compliant control in robotics and neuroevolution in machine learning, the cyberphysical avatar can adjust to an unstructured environment and perform physical tasks subject to critical timing constraints while under human supervision.
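A TDMA-based protocol of the kind described assigns each node dedicated transmit slots inside a repeating superframe, which is what makes its real-time guarantees adjustable. The round-robin scheduler below is a minimal sketch under that assumption; it is not the slot-assignment scheme from the thesis.

```python
def build_superframe(nodes, frame_len):
    """Round-robin assignment of transmit slots in a TDMA superframe:
    slot i goes to node i mod N, so each node gets frame_len // N slots
    per frame (an illustrative scheduler, not the thesis protocol)."""
    return {slot: nodes[slot % len(nodes)] for slot in range(frame_len)}

# Three hypothetical subsystem nodes sharing a 9-slot superframe.
frame = build_superframe(["sensor1", "sensor2", "actuator1"], frame_len=9)
print(frame)
```

Giving a latency-critical node more slots per frame (e.g. by listing it twice) is the kind of knob an "adjustable real-time service" exposes.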
48

Using video self-modelling to improve the reading attitudes of students with dyslexia

Maguire, James Vincent January 2015 (has links)
Individuals with dyslexia have an unexpected difficulty learning to read. This difficulty produces other effects, such as poor reading attitudes, meaning many choose not to read. Reading is a valuable source of information and entertainment, therefore individuals with dyslexia require better reading support. This study attempted to develop an intervention to improve reading attitudes using video self-modelling (VSM). VSM involves individuals watching carefully created videos of themselves correctly performing target behaviours. During this 1 month intervention, 14 participants (13 male and 1 female) aged 9-14 who had dyslexia were asked to watch a weekly video of themselves silently reading one of four types of material: academic digital, academic print, recreational digital or recreational print. The participants’ reading attitudes and ability were measured before and after the intervention using the Survey of Adolescent Reading Attitudes and the Wide Range Achievement Test–Fourth Edition, respectively. Their reading habits and affect while reading (as a proxy measure of reading attitudes) were monitored during the intervention using a daily reading diary. This study did not detect any systematic or reliable changes in reading habits, affect while reading, reading attitudes and reading skills. This may have been due to limitations in the procedure, or it is possible that VSM cannot affect attitudes and that reading attitudes alone do not have a strong influence on ability. Consequently, future research should use VSM to help individuals with dyslexia by focusing on specific reading skills, such as phonological awareness.
49

Performance Analysis Of Reliable Multicast Protocols

Celik, Coskun 01 December 2004 (has links) (PDF)
IP multicasting is a method for transmitting the same information to multiple receivers over IP networks. The reliability issue in multicasting encompasses the challenges of detecting and recovering from packet losses and of ordered delivery of the entire data. In this work, existing reliable multicast protocols are classified into three main groups, namely tree based, NACK-only, and router assisted, and a representative protocol from each group is selected to demonstrate the advantages and disadvantages of the corresponding approaches. The selected protocols are SRM, PGM, and RMTP. The performance characteristics of these protocols are empirically evaluated using simulation results; Network Simulator-2 (ns-2), a discrete event simulator, is used for the implementation and simulation of the selected protocols. The contributions of the thesis are twofold: the extension of the ns library with an open source implementation of RMTP, which did not exist earlier, and the evaluation of the selected protocols by investigating performance metrics such as distribution delay and recovery latency with respect to varying multicast group size, network diameter, link loss rate, etc.
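NACK-based recovery of the kind PGM uses can be sketched by having a receiver report the holes in the sequence-number space it has observed. The function below is a simplified illustration that ignores wrap-around, suppression, and NACK timers.

```python
def detect_gaps(received_seqs):
    """Return the sequence numbers to NACK: the holes between the lowest
    and highest sequence numbers seen so far (a NACK-only recovery sketch;
    real protocols also handle wrap-around and retransmit timers)."""
    seen = set(received_seqs)
    lo, hi = min(seen), max(seen)
    return [s for s in range(lo, hi + 1) if s not in seen]

# Packets 3, 6, and 7 were lost somewhere in the multicast tree.
print(detect_gaps([1, 2, 4, 5, 8]))  # → [3, 6, 7]
```

The three protocol families in the thesis differ mainly in who acts on such gap reports: a designated receiver per subtree (tree based), the original sender (NACK-only), or assisting routers (router assisted).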
50

Of Malicious Motes and Suspicious Sensors

Gilbert, Seth, Guerraoui, Rachid, Newport, Calvin 19 April 2006 (has links)
How much damage can a malicious tiny device cause in a single-hop wireless network? Imagine two players, Alice and Bob, who want to exchange information. Collin, a malicious adversary, wants to prevent them from communicating. By broadcasting at the same time as Alice or Bob, Collin can destroy their messages or overwhelm them with his own malicious data. Being a tiny device, however, Collin can only broadcast up to B times. Given that Alice and Bob do not know B, and cannot distinguish honest from malicious messages, how long can Collin prevent them from communicating? We show the answer to be 2B + Theta(lg|V|) communication rounds, where V is the set of values that Alice and Bob may transmit. We prove this result to be optimal by deriving an algorithm that matches our lower bound, even in the stronger case where Alice and Bob do not start the game at the same time. We then argue that this specific 3-player game captures the general extent to which a malicious adversary can disrupt coordination in a single-hop wireless network. We support this claim by deriving, via reduction from the 3-player game, round complexity lower bounds for several classical n-player problems: 2B + Theta(lg|V|) for reliable broadcast, 2B + Omega(lg(n/k)) for leader election among k contenders, and 2B + Omega(k*lg(|V|/k)) for static k-selection. We then consider an extension of our adversary model that also includes up to t crash failures. We study binary consensus as the archetypal problem for this environment and show a bound of 2B + Theta(t) rounds. We conclude by providing tight, or nearly tight, upper bounds for all four problems. The new upper and lower bounds in this paper represent the first such results for a wireless network in which the adversary has the ability to disrupt communication.
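The 2B + Theta(lg|V|) bound can be read as a simple round count: Collin's budget buys roughly 2B disrupted rounds, after which about lg|V| rounds suffice to transmit a value from V bit by bit. The toy calculation below drops the constants hidden inside the Theta, so it illustrates only the shape of the bound, not the exact round complexity from the paper.

```python
import math

def rounds_to_communicate(B, value_space_size):
    """Shape of the 2B + Theta(lg|V|) bound: the adversary can burn about
    two rounds per jammed broadcast, after which ceil(lg|V|) rounds suffice
    to exchange a value. Constants inside the Theta are ignored."""
    return 2 * B + math.ceil(math.log2(value_space_size))

# With a budget of 5 jammed broadcasts and 16 possible values:
print(rounds_to_communicate(B=5, value_space_size=16))  # → 14
```

The interesting part of the paper is the other direction: showing that no algorithm can beat this shape, even though Alice and Bob never learn B.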
