431

Spectrum Management and Cross-layer Protocol Design in Cognitive Radio Networks

Dai, Ying January 2014
Cognitive radio networks (CRNs) are a promising solution to the channel (spectrum) congestion problem. This dissertation presents work on the two main issues in CRNs: spectrum management and cross-layer protocol design. The objective of spectrum management is to enable efficient usage of spectrum resources in CRNs, protecting primary users' activities and ensuring effective spectrum sharing among nodes. We aim to improve spectrum sensing efficiency and accuracy so that the cost of sensing is reduced; in particular, we consider the pre-phase of spectrum sensing and provide structures for sensing assistance. Besides the sensing phase, the sharing of spectrum (channel allocation) among nodes is also a main component of spectrum management, and we provide an approach that achieves reliable and effective channel assignment. Channel availabilities for different nodes in CRNs are dynamic and inconsistent, which poses challenges for MAC-layer protocols. Moreover, because secondary users lack knowledge of primary users, primary users can suddenly become active during secondary users' data transmissions. Therefore, for end-to-end data transmission in CRNs, the routing algorithm must differ from existing routing algorithms in traditional networks. We consider cross-layer protocol design and propose solutions for efficient data transmission: a novel routing protocol design that takes the boundaries of primary users (PUs) into account, and an effective structure for reliable end-to-end data transmission that makes use of the area routing protocol. We build a USRP/GNU Radio testbed for the performance evaluation of our protocols. / Computer and Information Science
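The abstract refers to spectrum sensing without specifying the detector used. As a purely illustrative sketch (not the dissertation's design), the snippet below shows plain energy detection, a common baseline in which a secondary user declares the channel busy when the measured signal energy exceeds a threshold calibrated from the noise floor; the sample count, noise power, false-alarm target, and the synthetic primary-user tone are all assumptions.

```python
import numpy as np
from scipy.stats import norm

def energy_detect(samples, noise_power, target_pfa=0.01):
    """One sensing window: compare the average energy against a constant-false-alarm threshold."""
    n = samples.size
    stat = np.mean(np.abs(samples) ** 2)
    # CLT approximation: under noise only, stat ~ Normal(noise_power, noise_power**2 / n),
    # so this threshold keeps the false-alarm probability near target_pfa.
    threshold = noise_power * (1.0 + norm.ppf(1.0 - target_pfa) / np.sqrt(n))
    return stat > threshold, stat, threshold

rng = np.random.default_rng(0)
n, noise_power = 1024, 1.0
noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) * np.sqrt(noise_power / 2)
pu_signal = 0.5 * np.exp(2j * np.pi * 0.1 * np.arange(n))   # hypothetical primary-user tone

for label, y in [("idle channel", noise), ("PU transmitting", noise + pu_signal)]:
    busy, stat, thr = energy_detect(y, noise_power)
    print(f"{label}: energy={stat:.3f}, threshold={thr:.3f} -> busy={busy}")
```

In a CRN, such per-channel decisions, possibly shared among neighbors as sensing assistance, would feed the channel-assignment step the abstract describes.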
432

Priority-Based Data Transmission in Wireless Networks using Network Coding

Ostovari, Pouya January 2015
With the rapid development of mobile device technology, these devices have become very popular and a part of our everyday lives. Such devices, which are equipped with wireless radios such as cellular and WiFi, affect almost every aspect of our lives. People use smartphones and tablets to access the Internet, watch videos, and chat with their friends. The wireless connections that these devices provide are more convenient than wired connections. However, there are two main challenges in wireless networks: error-prone wireless links and limited network resources. Network coding, a technique in which the original packets are mixed together using algebraic operations, is widely used to provide reliable data transmission and to use network resources efficiently. In this dissertation, we study the applications of network coding in making wireless transmissions robust against transmission errors and in efficient resource management. In many types of data, different parts of the data differ in importance. For instance, in numeric data, the importance decreases from the most significant bit to the least significant bit; likewise, in multi-layer videos, packets in different layers of the video are not equally important. We propose novel data transmission methods for wireless networks that consider this unequal importance of different parts of the data. To provide robust data transmission and use limited resources efficiently, we use random linear network coding, a particular type of network coding. In the first part of this dissertation, we study the application of network coding to resource management. To use the limited storage of cache nodes efficiently, we propose triangular network coding for content distribution. We also design a scalable video-on-demand system that uses helper nodes and network coding to provide users with their desired video quality. In the second part, we investigate the application of network coding to providing robust wireless transmissions. We propose symbol-level network coding, in which each packet is partitioned into symbols with different importance. We also propose a method that uses network coding to make multi-layer videos robust against transmission errors. / Computer and Information Science
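The abstract builds on random linear network coding but does not spell out the mechanics. The sketch below is a minimal illustration rather than the dissertation's scheme: over the binary field GF(2), each coded packet is a random XOR of the original packets, and a receiver recovers the originals by Gaussian elimination once it has collected enough linearly independent combinations. The packet count, payload size, and field choice are assumptions; practical RLNC usually works over GF(2^8), and the triangular and symbol-level variants mentioned above refine how coefficients and symbols are chosen.

```python
import random

K = 4            # number of original packets in a generation (assumed)
PKT_LEN = 8      # payload length in bytes (assumed)

def encode(packets):
    """Return one coded packet: (GF(2) coefficient vector, XOR of the selected packets)."""
    coeffs = [random.randint(0, 1) for _ in range(K)]
    if not any(coeffs):                       # avoid the useless all-zero combination
        coeffs[random.randrange(K)] = 1
    payload = bytes(PKT_LEN)
    for c, p in zip(coeffs, packets):
        if c:
            payload = bytes(a ^ b for a, b in zip(payload, p))
    return coeffs, payload

def decode(next_coded):
    """Draw coded packets until K independent ones arrive, then solve over GF(2)."""
    basis = []                                # rows already reduced against earlier rows
    while len(basis) < K:
        coeffs, payload = next_coded()
        c, p = list(coeffs), bytearray(payload)
        for bc, bp in basis:                  # eliminate this row against the basis
            if c[bc.index(1)]:
                c = [a ^ b for a, b in zip(c, bc)]
                p = bytearray(a ^ b for a, b in zip(p, bp))
        if any(c):                            # innovative (linearly independent) packet
            basis.append((c, p))
    basis.sort(key=lambda row: row[0].index(1))   # row i now has its leading 1 in column i
    for i in range(K):                            # Gauss-Jordan: clear column i elsewhere
        for j in range(K):
            if j != i and basis[j][0][i]:
                basis[j] = ([a ^ b for a, b in zip(basis[j][0], basis[i][0])],
                            bytearray(a ^ b for a, b in zip(basis[j][1], basis[i][1])))
    return [bytes(p) for _, p in basis]

originals = [bytes(random.randrange(256) for _ in range(PKT_LEN)) for _ in range(K)]
recovered = decode(lambda: encode(originals))     # "receive" coded packets on demand
assert recovered == originals
```

Any set of K linearly independent coded packets suffices for decoding, which is what makes the approach robust on error-prone links and flexible for caching and multi-layer video delivery.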
433

Emergency Nurse Efficiency as a Measure of Emergency Nurse Performance:

DePesa, Christopher Daniel January 2023
Thesis advisor: Monica O'Reilly-Jacob / Background: Emergency department crowding (EDC) is a major issue affecting hospitals in the United States and has devastating consequences, including an increased risk of patient mortality. Solutions to address EDC are traditionally focused on adding resources, including increased nurse staffing ratios. However, these solutions largely ignore the value of the experience and expertise that each nurse possesses and how those attributes can impact patient outcomes. This dissertation uses Benner's Novice to Expert theory of professional development to describe how individual emergency nurse expertise influences patient length of stay in the emergency department and how it can be part of the strategy in addressing EDC. Purpose: The purpose of this program of research was to identify, articulate, and demonstrate a new approach to emergency nurse performance evaluation that integrates patient outcome data and emergency nurse characteristics. Methods: First, in a scoping review, we explored the different approaches to measuring nurse performance using patient outcome data and identified common themes. Second, a concept analysis introduced Emergency Nurse Efficiency as a novel framework to understand how emergency nurses can be evaluated using patient outcome data. Finally, a retrospective correlational study established the association between nurse expertise and emergency patient length of stay. Results: In Chapter Two of this dissertation, the researchers conducted a scoping review of nurse performance metrics and identified twelve articles for inclusion. We identified three themes: the emerging nature of these metrics in the literature, variability in their applications, and performance implications. We further described an opportunity for future researchers to work with nurse leaders and staff nurses to optimize these metrics. In Chapter Three, we performed a concept analysis to introduce a novel metric, called Emergency Nurse Efficiency, which is a measurable attribute that changes as experience is gained and incorporates the positive impact of an individual nurse during a given time while subtracting the negative. Using this measurement to evaluate ED nurse performance could guide staff development, education, and performance improvement initiatives. In Chapter Four, we performed a retrospective correlational analysis and administered an online survey to describe the relationship between individual emergency nurses' respective levels of expertise and their patients' ED LOS. We found that, when accounting for patient-level variables and the influence of the ED physicians, emergency nurses are a statistically significant predictor of their patients' ED LOS. A higher level of clinical expertise among emergency nurses likely produces a lower ED LOS for their patients, and nurse leaders should seek to better understand these metrics for professional development and quality improvement activities. Conclusions: This dissertation made substantial knowledge contributions to the literature regarding the evaluation of individual emergency nurses and the influence that they have on patient outcomes.
It established, first, that the measurement of individual nurse performance is varied and inconsistent; second, that considering emergency nursing as a team activity similar to professional sports results in a conceptual framework that can evaluate individual performance within a group context; and, third, that there is a relationship between the individual emergency nurse and their patients’ ED LOS, and that relationship can be further understood within Benner’s Novice to Expert theoretical model. We recommend that nurse leaders use these data as part of their strategy to decrease EDC. / Thesis (PhD) — Boston College, 2023. / Submitted to: Boston College. Connell School of Nursing. / Discipline: Nursing.
434

Performance Evaluation Study of Intrusion Detection Systems.

Alhomoud, Adeeb M., Munir, Rashid, Pagna Disso, Jules F., Al-Dhelaan, A., Awan, Irfan U. 2011
With thriving technology and the great increase in the usage of computer networks, the risk of these networks coming under attack has increased. A number of techniques have been created and designed to help detect and/or prevent such attacks. One common technique is the use of Network Intrusion Detection / Prevention Systems (NIDS). Today, a number of open-source and commercial Intrusion Detection Systems are available to match enterprise requirements, but the performance of these Intrusion Detection Systems is still the main concern. In this paper, we tested and analyzed the performance of the well-known IDS Snort and the newer IDS Suricata. Both Snort and Suricata were implemented on three different platforms (ESXi virtual server, Linux 2.6 and FreeBSD) to simulate a real environment. Finally, our results and analysis provide a comparison of the performance of the two systems, along with recommendations on the ideal environment for Snort and for Suricata.
435

Performance evaluation of a brackish water reverse osmosis pilot-plant desalination process under different operating conditions: Experimental study

Ansari, M., Al-Obaidi, Mudhar A.A.R., Hadadian, Z., Moradi, M., Haghighi, A., Mujtaba, Iqbal M. 28 March 2022
The Reverse Osmosis (RO) input parameters play key roles in mass transport and in the process performance indicators. Several related studies can be found in the open literature; however, experimental research evaluating the influence of brackish water RO input parameters on the performance metrics, while explaining the interplay between them via a robust model, has not yet been addressed. This paper aims to design, construct, and experimentally evaluate the performance of a 50 m³/d RO pilot plant to desalinate brackish water at Shahid Chamran University of Ahvaz, Iran. Water samples with salinities ranging from 1000 to 5000 ppm were fed to a semi-permeable membrane under operating pressures varying from 5 to 13 bar. By evaluating permeate flux and brine flowrate, permeate and brine salinities, membrane water recovery, and salt rejection, some logical relations were derived. The results indicated that the performance of an RO unit depends largely on feed pressure and feed salinity. At a fixed feed concentration, an almost linear relationship was found between feed pressure and both the permeate and brine flowrates. Statistically, it was found that a feed pressure of 13 bar results in a maximum salt rejection of 98.8% at a minimum permeate concentration of 12 ppm. Moreover, a 73.3% reduction in permeate salinity and a 30.8% increase in brine salinity are reported when feed pressure increases from 5 to 13 bar. Finally, it is concluded that the water transport coefficient is a function of feed pressure, salinity, and temperature, and is experimentally estimated to be 2.8552 L/(m² h bar).
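For readers unfamiliar with the performance indicators named above, the sketch below computes their standard definitions: recovery as permeate over feed flow, observed salt rejection as 1 − Cp/Cf, permeate flux per unit membrane area, and the water transport coefficient A = Jw / (ΔP − Δπ). The membrane area, flow rates, and the osmotic-pressure rule of thumb are illustrative assumptions rather than the paper's measured values (the paper itself reports A ≈ 2.8552 L/(m² h bar)).

```python
# Standard RO performance metrics; all numeric inputs below are hypothetical.

def osmotic_pressure_bar(tds_ppm: float) -> float:
    """Rough rule of thumb for brackish NaCl-type water: ~0.0008 bar per ppm TDS."""
    return 8.0e-4 * tds_ppm

def ro_metrics(q_feed, q_permeate, c_feed, c_permeate, p_feed_bar, area_m2):
    """Flows in m3/h, concentrations in ppm; returns the usual RO indicators."""
    recovery = q_permeate / q_feed                          # fraction of feed recovered
    rejection = 1.0 - c_permeate / c_feed                   # observed salt rejection
    flux_lmh = q_permeate * 1000.0 / area_m2                # permeate flux, L/(m2 h)
    net_driving = p_feed_bar - (osmotic_pressure_bar(c_feed)
                                - osmotic_pressure_bar(c_permeate))
    a_coeff = flux_lmh / net_driving                        # water transport coefficient
    return recovery, rejection, flux_lmh, a_coeff

# Hypothetical operating point, loosely in the range the abstract describes
# (a ~1000 ppm brackish feed at 13 bar); membrane area and flows are assumed.
rec, rej, jw, A = ro_metrics(q_feed=2.0, q_permeate=0.9, c_feed=1000.0,
                             c_permeate=12.0, p_feed_bar=13.0, area_m2=30.0)
print(f"recovery={rec:.1%}  rejection={rej:.1%}  Jw={jw:.1f} L/(m2 h)  "
      f"A={A:.2f} L/(m2 h bar)")
```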
436

LEVERAGING MACHINE LEARNING FOR FAST PERFORMANCE PREDICTION FOR INDUSTRIAL SYSTEMS: Data-Driven Cache Simulator

Yaghoobi, Sharifeh January 2024
This thesis presents a novel solution for CPU architecture simulation with a primary focus on cache miss prediction using machine learning techniques. The solution consists of two main components: a configurable application designed to generate detailed execution traces via DynamoRIO and a machine learning model, specifically a Long Short-Term Memory (LSTM) network, developed to predict cache behaviors based on these traces. The LSTM model was trained and validated using a comprehensive dataset derived from detailed trace analysis, which included various parameters like instruction sequences and memory access patterns. The model was tested against unseen datasets to evaluate its predictive accuracy and robustness. These tests were critical in demonstrating the model’s effectiveness in real-world scenarios, showing it could reliably predict cache misses with significant accuracy. This validation underscores the viability of machine learning-based methods in enhancing the fidelity of CPU architecture simulations. However, performance tests comparing the LSTM model and DynamoRIO revealed that while the LSTM achieves satisfactory accuracy, it does so at the cost of increased processing time. Specifically, the LSTM model processed 25 million instructions in 45 seconds, compared to DynamoRIO’s 41 seconds, with additional overheads for loading and executing the inference process. This highlights a critical trade-off between accuracy and simulation speed, suggesting areas for further optimization and efficiency improvements in future work.
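The thesis's exact features and architecture are not given in the abstract, so the following is only a schematic sketch of the kind of model it describes: an LSTM that maps a window of per-access trace features to a cache-miss probability. The feature layout, window length, hyperparameters, and the synthetic batch are assumptions for illustration.

```python
import torch
import torch.nn as nn

class CacheMissLSTM(nn.Module):
    def __init__(self, n_features: int = 4, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)               # logit for "this access misses"

    def forward(self, x):                              # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :]).squeeze(-1)    # predict for the last access

# Tiny synthetic training step: random "trace windows" (e.g. address deltas,
# stride, reuse distance, access type) with random hit/miss labels.
model = CacheMissLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

x = torch.randn(32, 100, 4)                            # 32 windows of 100 accesses each
y = torch.randint(0, 2, (32,)).float()                 # 1 = miss, 0 = hit (synthetic)
loss = loss_fn(model(x), y)
opt.zero_grad()
loss.backward()
opt.step()
print(f"batch loss: {loss.item():.3f}")
```

A real pipeline would derive the feature windows and labels from DynamoRIO traces, which is where the accuracy-versus-speed trade-off discussed above arises.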
437

Design and Analysis of Algorithms for Efficient Location and Service Management in Mobile Wireless Systems

Gu, Baoshan 01 December 2005
Mobile wireless environments present new challenges to the design and validation of system supports for facilitating development of mobile applications. This dissertation concerns two major system-support mechanisms in mobile wireless networks, namely, location management and service management. We address this research issue by considering three topics: location management, service management, and integrated location and service management. A location management scheme must effectively and efficiently handle both user location-update and location-search operations. We first quantitatively analyze a class of location management algorithms and identify conditions under which one algorithm may perform better than others. From insight gained from the quantitative analysis, we design and analyze a hybrid replication with forwarding algorithm that outperforms individual algorithms and show that such a hybrid algorithm can be uniformly applied to mobile users with distinct call and mobility characteristics to simplify the system design without sacrificing performance. For service management, we explore the notion of location-aware personal proxies that cooperate with the underlying location management system with the goal to minimize the network communication cost caused by service management operations. We show that for cellular wireless networks that provide packet services, when given a set of model parameters characterizing the network and workload conditions, there exists an optimal proxy service area size such that the overall network communication cost for service operations is minimized. These proxy-based mobile service management schemes are shown to outperform non-proxy-based schemes over a wide range of identified conditions. We investigate a class of integrated location and service management schemes by which service proxies are tightly integrated with location databases to further reduce the overall network signaling and communication cost. We show analytically and by simulation that when given a user's mobility and service characteristics, there exists an optimal integrated location and service management scheme that would minimize the overall network communication cost for servicing location and service operations. We demonstrate that the best integrated location and service scheme identified always performs better than the best decoupled scheme that considers location and service managements separately. / Ph. D.
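The claim that an optimal proxy service-area size exists follows from two cost components pulling in opposite directions. The toy model below is not the dissertation's analytical model; it simply assumes a per-packet delivery cost that grows with the area radius and a proxy-relocation cost that falls as boundary crossings become rarer, then sweeps the radius to locate the minimum. All parameter values are arbitrary.

```python
def total_cost(radius, delivery_rate=50.0, mobility=5.0,
               hop_cost=1.0, relocation_cost=40.0):
    """Hypothetical per-unit-time cost of serving one mobile user via a proxy."""
    delivery = delivery_rate * hop_cost * radius        # longer routes inside a big area
    relocation = relocation_cost * mobility / radius    # fewer boundary crossings
    return delivery + relocation

best_cost, best_radius = min((total_cost(r), r) for r in [x / 10 for x in range(5, 100)])
print(f"minimum cost {best_cost:.1f} at radius {best_radius:.1f} (arbitrary units)")
# With these assumed parameters the analytic optimum is
# sqrt(relocation_cost * mobility / (delivery_rate * hop_cost)) = sqrt(4) = 2.0,
# matching the sweep; changing the user's mobility or call rate shifts the optimum,
# which is the intuition behind a per-user optimal service-area size.
```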
438

Linking Governance and Performance: ICANN as an Internet Hybrid

Lee, Maeng Joo 25 August 2008
The Internet Corporation for Assigned Names and Numbers (ICANN) is a hybrid organization managing the most critical Internet infrastructure - the Domain Name System. ICANN represents a new, emerging Internet self-governance model in which the private sector takes the lead and the government sector plays a more marginal role. Little is known, however, about what is actually happening in this new organization. The dissertation (a) systematically assesses ICANN's overall performance based on a set of evaluative criteria drawn from its mission statements; (b) explores possible factors and actors that influence ICANN's overall performance by tracing the governance processes in three cases based on a preliminary conceptual framework; and (c) suggests practical and theoretical implications of ICANN's governance and performance in its broader institutional context. The study finds that although differing governance processes have led to different performance outcomes (Lynn et al. 2000), "stability" has been the defining value that has shaped the overall path of ICANN's governance and performance. The study characterizes ICANN as a conservative hybrid captured, based on specific issues, by the technical and governmental communities. It also proposes the concept of "technical capture" to suggest how technical experts can have significant, but often implicit, influence over the policy development process in organizations. / Ph. D.
439

Institutional segmentation of equity markets: causes and consequences

Hosseinian, Amin 27 July 2022
We re-examine the determinants of institutional ownership (IO) from a segmentation perspective -- i.e. accounting for a hypothesized systematic exclusion of stocks that cause high implementation or agency costs. Incorporating segmentation effects substantially improves both explained variance in IO and model parsimony (essentially requiring just one input: market capitalization). Our evidence clearly establishes a role for both implementation costs and agency considerations in explaining segmentation effects. Implementation costs bind for larger, less diversified, and higher-turnover institutions. Agency costs bind for smaller institutions and clienteles sensitive to fiduciary diligence. Agency concerns dominate; characteristics relating to the agency hypothesis have far more explanatory power in identifying the cross-section of segmentation effects than characteristics relating to the implementation hypothesis. Importantly, our study finds evidence of an interior optimum with respect to the institution's scale, due to the counteracting effect between implementation and agency frictions. We then explore three implications of segmentation for the equity market. First, a mass exodus of publicly listed stocks predicted to fall outside institutions' investable universe helps explain the listing puzzle. There has been no comparable exit by institutionally investable stocks. Second, institutional segmentation can lead to narrow investment opportunity sets, which limit money managers' ability to take advantage of profitable opportunities outside their investment segment. In this respect, we construct pricing factors that are feasible (ex-ante) for institutions and benchmark their performance. We find evidence consistent with the demand-based asset pricing view. Specifically, IO return factors yield higher return premia and worsened institutional performance relative to standard benchmarks in an expanding institutional setting (pre-millennium). Third, we use our logistic model to examine the effect of aggregated segmentation on institutions' portfolio returns. Our findings suggest that investment constraints cut profitable opportunities and restrict institutions from generating alpha. In addition, we find that stocks with abnormal institutional ownership generate significant positive returns, suggesting that institutions' actions are informed. / Doctor of Philosophy / We demonstrate that implementation and agency frictions restrict professional money managers from ownership of particular stocks. We characterize this systematic exclusion of stocks as segmentation and show that a specification that accommodates the segmentation effect substantially improves the empirical fit of institutional demand. The adjusted R-squared increases substantially, the residuals are better behaved, and the dimensionality of institutions' demands for stock characteristics reduces from a list of 8-10 standard characteristics (e.g., market cap, liquidity, index membership, volatility, beta) to just one: a stock's market capitalization. Our evidence identifies a prominent role for both implementation costs and agency costs as determinants of institutional segmentation. Implementation costs bind for larger, less diversified, and higher-turnover institutions. Agency costs bind for smaller institutions and clienteles sensitive to fiduciary diligence.
In fact, we find that segmentation arises from a trade-off between implementation costs (which bind for larger institutions) and agency considerations (which bind for smaller institutions). Agency concerns dominate; characteristics relating to the agency hypothesis have far more explanatory power in identifying the cross-section of segmentation effects than characteristics relating to the implementation hypothesis. More importantly, we find evidence of an interior optimum with respect to the institution's scale, due to the counteracting effect between implementation and agency frictions. This conclusion is important to considerations of scale economies/diseconomies in investment management. The agency story goes in the opposite direction to the conventional wisdom underlying scale arguments. We then explore three implications of segmentation for the equity market. First, our evidence suggests that institutional segmentation coupled with growing institutional dominance in public equity markets may have had a truncating effect on the universe of listed stocks. Stocks predicted to fall outside of institutions' investable universe were common prior to the 1990s, but are now almost nonexistent. By contrast, stocks predicted to fall within institutions' investable universe have not declined over time. Second, institutional segmentation can lead to narrow investment opportunity sets, which limit money managers' ability to take advantage of profitable opportunities outside their investment segment. In this respect, we construct pricing factors that are feasible (ex-ante) for institutions and benchmark their performance. We find evidence consistent with the demand-based asset pricing view. Specifically, feasible return factors yield higher return premia and worsened institutional performance relative to standard benchmarks in an expanding institutional setting (pre-millennium). Third, we use a logistic specification to examine the effect of aggregated segmentation on institutions' portfolio returns. Our findings suggest that investment constraints cut profitable opportunities and restrict institutions from generating alpha. In addition, we find that stocks with high (low) abnormal institutional ownership generate significant positive (negative) returns, suggesting that institutions' actions are informed.
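The logistic specification is described only at a high level, so the snippet below is a schematic stand-in rather than the paper's model: the probability that a stock falls inside institutions' investable universe is modeled as a logistic function of log market capitalization alone, fitted on synthetic data. The variable names, sample, and coefficients are all assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
log_mcap = rng.normal(loc=6.0, scale=2.0, size=5000)          # synthetic log market cap
# Synthetic "truth": larger stocks are far more likely to be institutionally investable.
p_investable = 1.0 / (1.0 + np.exp(-1.5 * (log_mcap - 5.0)))
investable = rng.binomial(1, p_investable)

model = LogisticRegression().fit(log_mcap.reshape(-1, 1), investable)
print("slope on log market cap:", model.coef_[0][0])
print("P(investable | log mcap = 4):", model.predict_proba([[4.0]])[0, 1])
print("P(investable | log mcap = 8):", model.predict_proba([[8.0]])[0, 1])
```

The point of such a one-variable specification is the parsimony claim made above: a single size variable captures most of the cross-sectional variation in whether a stock is held by institutions.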
440

Automated Landing Site Evaluation for Semi-Autonomous Unmanned Aerial Vehicles

Klomparens, Dylan 27 October 2008
A system is described for identifying obstacle-free landing sites for a vertical-takeoff-and-landing (VTOL) semi-autonomous unmanned aerial vehicle (UAV) from point cloud data obtained from a stereo vision system. The relatively inexpensive, commercially available Bumblebee stereo vision camera was selected for this study. A "point cloud viewer" computer program was written to analyze point cloud data obtained from 2D images transmitted from the UAV to a remote ground station. The program divides the point cloud data into segments, identifies the best-fit plane through the data for each segment, and performs an independent analysis on each segment to assess the feasibility of landing in that area. The program also rapidly presents the stereo vision information and analysis to the remote mission supervisor who can make quick, reliable decisions about where to safely land the UAV. The features of the program and the methods used to identify suitable landing sites are presented in this thesis. Also presented are the results of a user study that compares the abilities of humans and computer-supported point cloud analysis in certain aspects of landing site assessment. The study demonstrates that the computer-supported evaluation of potential landing sites provides an immense benefit to the UAV supervisor. / Master of Science
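The abstract describes fitting a best-fit plane to each point-cloud segment and assessing whether the area is suitable for landing. The sketch below illustrates one common way to do this (least-squares plane via SVD, then roughness and slope checks); the thresholds and the synthetic ground patch are illustrative assumptions, not the thesis's parameters.

```python
import numpy as np

def fit_plane(points: np.ndarray):
    """Return (centroid, unit normal) of the least-squares plane through points (N, 3)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                                   # direction of least variance
    return centroid, normal / np.linalg.norm(normal)

def landing_ok(points, max_rms=0.05, max_tilt_deg=10.0):
    """Flat (small residuals) and level (normal close to vertical) => feasible."""
    centroid, normal = fit_plane(points)
    rms = np.sqrt(np.mean(((points - centroid) @ normal) ** 2))   # roughness, metres
    tilt = np.degrees(np.arccos(abs(normal[2])))                  # slope vs. horizontal
    return (rms <= max_rms and tilt <= max_tilt_deg), rms, tilt

# Synthetic gently sloped, slightly noisy ground patch standing in for one segment
rng = np.random.default_rng(1)
xy = rng.uniform(-1.0, 1.0, size=(500, 2))
z = 0.05 * xy[:, 0] + rng.normal(scale=0.01, size=500)
ok, rms, tilt = landing_ok(np.column_stack([xy, z]))
print(f"feasible={ok}  rms={rms:.3f} m  tilt={tilt:.1f} deg")
```

Running such a check independently per segment and presenting the per-segment verdicts to the remote supervisor matches the workflow the abstract outlines.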
