471

Performance Evaluation Study of Intrusion Detection Systems.

Alhomoud, Adeeb M., Munir, Rashid, Pagna Disso, Jules F., Al-Dhelaan, A., Awan, Irfan U. 2011 August 1917 (has links)
With the rapid growth of technology and the great increase in the usage of computer networks, the risk of these networks coming under attack has increased. A number of techniques have been created and designed to help detect and/or prevent such attacks. One common technique is the use of Network Intrusion Detection / Prevention Systems (NIDS). Today, a number of open-source and commercial Intrusion Detection Systems are available to match enterprise requirements, but the performance of these Intrusion Detection Systems is still the main concern. In this paper, we have tested and analyzed the performance of the well-known IDS Snort and the newer IDS Suricata. Both Snort and Suricata were implemented on three different platforms (ESXi virtual server, Linux 2.6 and FreeBSD) to simulate a real environment. Finally, in our results and analysis a comparison of the performance of the two IDS systems is provided, along with some recommendations as to what the ideal environment for Snort and Suricata is and when each should be used.
472

Performance evaluation of a brackish water reverse osmosis pilot-plant desalination process under different operating conditions: Experimental study

Ansari, M., Al-Obaidi, Mudhar A.A.R., Hadadian, Z., Moradi, M., Haghighi, A., Mujtaba, Iqbal M. 28 March 2022 (has links)
Yes / The Reverse Osmosis (RO) input parameters play a key role in mass transport and the performance indicators. Several studies can be found in the open literature. However, an experimental study evaluating the influence of the brackish water RO input parameters on the performance metrics, and explaining the interplay between them via a robust model, has not been addressed yet. This paper aims to design, construct, and experimentally evaluate the performance of a 50 m3/d RO pilot-plant to desalinate brackish water at Shahid Chamran University of Ahvaz, Iran. Water samples with salinities ranging from 1000 to 5000 ppm were fed to a semi-permeable membrane under operating pressures varying from 5 to 13 bar. By evaluating permeate flux and brine flowrate, permeate and brine salinities, membrane water recovery, and salt rejection, some logical relations were derived. The results indicated that the performance of an RO unit is largely dependent on feed pressure and feed salinity. At a fixed feed concentration, an almost linear relationship was found between feed pressure and both permeate and brine flowrates. Statistically, it was found that a 13 bar feed pressure results in a maximum salt rejection of 98.8% at a minimum permeate concentration of 12 ppm. Moreover, a 73.3% reduction in permeate salinity and a 30.8% increase in brine salinity are reported when feed pressure increases from 5 to 13 bar. Finally, it is concluded that the water transport coefficient is a function of feed pressure, salinity, and temperature, and is experimentally estimated to be 2.8552 L/(m2 h bar).
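As a rough illustration of how the reported water transport coefficient feeds into a flux estimate, the sketch below applies the solution-diffusion relation J_w = A_w(ΔP − Δπ) with a van't Hoff osmotic-pressure approximation for an NaCl feed. Only the constant 2.8552 L/(m2 h bar) comes from the abstract; the NaCl composition and the 25 °C temperature are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: permeate flux from the solution-diffusion model, using the
# water transport coefficient reported in the abstract. The osmotic-pressure
# estimate (van't Hoff for NaCl at 25 C) is an illustrative assumption.

R = 0.083145        # L*bar/(mol*K), gas constant
T = 298.15          # K, assumed feed temperature
M_NACL = 58.44      # g/mol, molar mass of NaCl
A_W = 2.8552        # L/(m^2 h bar), water transport coefficient (from abstract)

def osmotic_pressure_bar(salinity_ppm: float) -> float:
    """Van't Hoff estimate for an NaCl solution (i = 2 ions per formula unit)."""
    molarity = (salinity_ppm / 1000.0) / M_NACL   # mol/L
    return 2.0 * molarity * R * T

def permeate_flux(feed_pressure_bar: float, feed_ppm: float, permeate_ppm: float) -> float:
    """J_w = A_w * (dP - d_pi), in L/(m^2 h); permeate-side pressure taken as atmospheric."""
    d_pi = osmotic_pressure_bar(feed_ppm) - osmotic_pressure_bar(permeate_ppm)
    return A_W * (feed_pressure_bar - d_pi)

# Example: 5000 ppm feed, 12 ppm permeate, 13 bar feed pressure
print(f"{permeate_flux(13.0, 5000.0, 12.0):.1f} L/(m2 h)")
```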
473

LEVERAGING MACHINE LEARNING FOR FAST PERFORMANCE PREDICTION FOR INDUSTRIAL SYSTEMS : Data-Driven Cache Simulator

Yaghoobi, Sharifeh January 2024 (has links)
This thesis presents a novel solution for CPU architecture simulation with a primary focus on cache miss prediction using machine learning techniques. The solution consists of two main components: a configurable application designed to generate detailed execution traces via DynamoRIO and a machine learning model, specifically a Long Short-Term Memory (LSTM) network, developed to predict cache behaviors based on these traces. The LSTM model was trained and validated using a comprehensive dataset derived from detailed trace analysis, which included various parameters like instruction sequences and memory access patterns. The model was tested against unseen datasets to evaluate its predictive accuracy and robustness. These tests were critical in demonstrating the model’s effectiveness in real-world scenarios, showing it could reliably predict cache misses with significant accuracy. This validation underscores the viability of machine learning-based methods in enhancing the fidelity of CPU architecture simulations. However, performance tests comparing the LSTM model and DynamoRIO revealed that while the LSTM achieves satisfactory accuracy, it does so at the cost of increased processing time. Specifically, the LSTM model processed 25 million instructions in 45 seconds, compared to DynamoRIO’s 41 seconds, with additional overheads for loading and executing the inference process. This highlights a critical trade-off between accuracy and simulation speed, suggesting areas for further optimization and efficiency improvements in future work.
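To make the modeling approach concrete, here is a minimal PyTorch sketch of an LSTM that maps windows of memory-access features to per-access hit/miss predictions. The feature layout, window length, and hyperparameters are assumptions for illustration and are not the thesis' actual DynamoRIO-derived features.

```python
# Hedged sketch of an LSTM cache-miss predictor in the spirit of the thesis.
# All feature dimensions and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

class CacheMissLSTM(nn.Module):
    def __init__(self, n_features: int = 8, hidden: int = 64, layers: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # per-access hit/miss logit

    def forward(self, x):                  # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.head(out).squeeze(-1)  # (batch, seq_len) logits

model = CacheMissLSTM()
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy batch standing in for windows of a memory-access trace.
x = torch.randn(32, 128, 8)                 # 32 windows, 128 accesses, 8 features each
y = torch.randint(0, 2, (32, 128)).float()  # 1 = miss, 0 = hit

logits = model(x)
loss = loss_fn(logits, y)
loss.backward()
opt.step()
print(f"toy training loss: {loss.item():.3f}")
```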
474

Design and Analysis of Algorithms for Efficient Location and Service Management in Mobile Wireless Systems

Gu, Baoshan 01 December 2005 (has links)
Mobile wireless environments present new challenges to the design and validation of system supports for facilitating development of mobile applications. This dissertation concerns two major system-support mechanisms in mobile wireless networks, namely, location management and service management. We address this research issue by considering three topics: location management, service management, and integrated location and service management. A location management scheme must effectively and efficiently handle both user location-update and location-search operations. We first quantitatively analyze a class of location management algorithms and identify conditions under which one algorithm may perform better than others. From insight gained from the quantitative analysis, we design and analyze a hybrid replication with forwarding algorithm that outperforms individual algorithms and show that such a hybrid algorithm can be uniformly applied to mobile users with distinct call and mobility characteristics to simplify the system design without sacrificing performance. For service management, we explore the notion of location-aware personal proxies that cooperate with the underlying location management system with the goal to minimize the network communication cost caused by service management operations. We show that for cellular wireless networks that provide packet services, when given a set of model parameters characterizing the network and workload conditions, there exists an optimal proxy service area size such that the overall network communication cost for service operations is minimized. These proxy-based mobile service management schemes are shown to outperform non-proxy-based schemes over a wide range of identified conditions. We investigate a class of integrated location and service management schemes by which service proxies are tightly integrated with location databases to further reduce the overall network signaling and communication cost. We show analytically and by simulation that when given a user's mobility and service characteristics, there exists an optimal integrated location and service management scheme that would minimize the overall network communication cost for servicing location and service operations. We demonstrate that the best integrated location and service scheme identified always performs better than the best decoupled scheme that considers location and service managements separately. / Ph. D.
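The kind of trade-off behind the "optimal proxy service area size" result can be sketched as a simple cost sweep: a proxy-migration cost that falls as the service area grows, against a delivery cost that rises with it, giving an interior minimum. The cost expressions and parameter values below are illustrative assumptions, not the dissertation's analytical model.

```python
# Hedged sketch of a proxy-placement cost trade-off. The functional forms and
# parameters are assumptions chosen only to show an interior optimum.

def total_cost(area_size: int, mobility_rate: float, service_rate: float,
               hop_cost: float = 1.0, migrate_cost: float = 20.0) -> float:
    migration = migrate_cost * mobility_rate / area_size    # proxy moves less often in a larger area
    delivery = hop_cost * service_rate * area_size ** 0.5   # but service traffic travels farther
    return migration + delivery

# Sweep candidate service area sizes (in cells) and pick the minimizer.
sizes = range(1, 65)
best = min(sizes, key=lambda k: total_cost(k, mobility_rate=10.0, service_rate=2.0))
print("optimal proxy service area size (cells):", best)
```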
475

Linking Governance and Performance: ICANN as an Internet Hybrid

Lee, Maeng Joo 25 August 2008 (has links)
The Internet Corporation for Assigned Names and Numbers (ICANN) is a hybrid organization managing the most critical Internet infrastructure - the Domain Name System. ICANN represents a new, emerging Internet self-governance model in which the private sector takes the lead and the government sector plays a more marginal role. Little is known, however, about what is actually happening in this new organization. The dissertation (a) systematically assesses ICANN's overall performance based on a set of evaluative criteria drawn from its mission statements; (b) explores possible factors and actors that influence ICANN's overall performance by tracing the governance processes in three cases based on a preliminary conceptual framework; and (c) suggests practical and theoretical implications of ICANN's governance and performance in its broader institutional context. The study finds that although differing governance processes have led to different performance outcomes (Lynn et al. 2000), "stability" has been the defining value that has shaped the overall path of ICANN's governance and performance. The study characterizes ICANN as a conservative hybrid captured, based on specific issues, by the technical and governmental communities. It also proposes the concept of "technical capture" to suggest how technical experts can have significant, but often implicit, influence over the policy development process in organizations. / Ph. D.
476

Strength and Life Prediction of FRP Composite Bridge Deck

Majumdar, Prasun Kanti 30 April 2008 (has links)
Fiber reinforced polymer (FRP) composites are considered very promising for infrastructure applications such as the repair, rehabilitation and replacement of deteriorated bridge decks. However, there is a lack of proper understanding of the structural behavior of FRP decks. For example, due to the localization of load under a truck tire, the conventionally used uniform patch loading is not suitable for performance evaluation of FRP composite deck systems with cellular geometry and relatively low modulus (compared to concrete decks). In this study, a simulated tire patch loading profile is proposed for the testing and analysis of FRP decks. The tire patch produced a significantly different failure mode (local transverse failure under the tire patch) compared to the punching-shear mode obtained using the conventional rectangular steel plate. The local response of a cellular FRP composite deck has been analyzed using finite element simulation, and the results are compared with full-scale laboratory experiments on the bridge deck and structure. Parametric studies show that design criteria based on global deck displacement are inadequate for cellular FRP decks and that local deformation behavior must be considered. The adhesive bonding method is implemented for joining bridge deck panels, and the response of the structural joint is analyzed experimentally. Strength, failure mode, and fatigue life prediction methodologies for a cellular FRP bridge deck are presented in this dissertation. / Ph. D.
477

Institutional segmentation of equity markets: causes and consequences

Hosseinian, Amin 27 July 2022 (has links)
We re-examine the determinants of institutional ownership (IO) from a segmentation perspective -- i.e. accounting for a hypothesized systematic exclusion of stocks that cause high implementation or agency costs. Incorporating segmentation effects substantially improves both explained variance in IO and model parsimony (essentially requiring just one input: market capitalization). Our evidence clearly establishes a role for both implementation costs and agency considerations in explaining segmentation effects. Implementation costs bind for larger, less diversified, and higher turnover institutions. Agency costs bind for smaller institutions and clienteles sensitive to fiduciary diligence. Agency concerns dominate; characteristics relating to the agency hypothesis have far more explanatory power in identifying the cross-section of segmentation effects than characteristics relating to the implementation hypothesis. Importantly, our study finds evidence of an interior optimum with respect to the institution's scale, due to the counteracting effect between implementation and agency frictions. We then explore three implications of segmentation for the equity market. First, a mass exodus of publicly listed stocks predicted to fall outside institutions' investable universe helps explain the listing puzzle. There has been no comparable exit by institutionally investable stocks. Second, institutional segmentation can lead to narrow investment opportunity sets, which limit money managers' ability to take advantage of profitable opportunities outside their investment segment. In this respect, we construct pricing factors that are feasible (ex-ante) for institutions and benchmark their performance. We find evidence consistent with the demand-based asset pricing view. Specifically, IO return factors yield higher return premia and worsened institutional performance relative to standard benchmarks in an expanding institutional setting (pre-millennium). Third, we use our logistic model to examine the effect of aggregated segmentation on institutions' portfolio returns. Our findings suggest that investment constraints cut profitable opportunities and restrict institutions from generating alpha. In addition, we find that stocks with abnormal institutional ownership generate significant positive returns, suggesting institution actions are informed. / Doctor of Philosophy / We demonstrate that implementation and agency frictions restrict professional money managers from ownership of particular stocks. We characterize this systematic exclusion of stocks as segmentation and show that a specification that accommodates the segmentation effect substantially improves the empirical fit of institutional demand. The adjusted R-squared increases substantially; the residuals are better behaved, and the dimensionality of institutions' demands for stock characteristics reduces from a list of 8-10 standard characteristics (e.g., market cap, liquidity, index membership, volatility, beta) to just one: a stock's market capitalization. Our evidence identifies a prominent role for both implementation costs and agency costs as determinants of institutional segmentation. Implementation costs bind for larger, less diversified, and higher turnover institutions. Agency costs bind for smaller institutions and clienteles sensitive to fiduciary diligence.
In fact, we find that segmentation arises from a trade-off between implementation costs (which bind for larger institutions) and agency considerations (which bind for smaller institutions). Agency concerns dominate; characteristics relating to the agency hypothesis have far more explanatory power in identifying the cross-section of segmentation effects than characteristics relating to the implementation hypothesis. More importantly, we find evidence of an interior optimum with respect to the institution's scale, due to the counteracting effect between implementation and agency frictions. This conclusion is important to considerations of scale economies/diseconomies in investment management. The agency story goes in the opposite direction to the conventional wisdom underlying scale arguments. We then explore three implications of segmentation for the equity market. First, our evidence suggests that institutional segmentation coupled with growing institutional dominance in public equity markets may have had a truncating effect on the universe of listed stocks. Stocks predicted to fall outside of institutions' investable universe were common prior to the 1990s, but are now almost nonexistent. By contrast, stocks predicted to fall within institutions' investable universe have not declined over time. Second, institutional segmentation can lead to narrow investment opportunity sets, which limit money managers' ability to take advantage of profitable opportunities outside their investment segment. In this respect, we construct pricing factors that are feasible (ex-ante) for institutions and benchmark their performance. We find evidence consistent with the demand-based asset pricing view. Specifically, feasible return factors yield higher return premia and worsened institutional performance relative to standard benchmarks in an expanding institutional setting (pre-millennium). Third, we use a logistic specification to examine the effect of aggregated segmentation on institutions' portfolio returns. Our findings suggest that investment constraints cut profitable opportunities and restrict institutions from generating alpha. In addition, we find that stocks with high (low) abnormal institutional ownership generate significant positive (negative) returns, suggesting institution actions are informed.
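As a toy illustration of a one-input logistic specification of the kind described above (market capitalization alone capturing most of the segmentation effect), the sketch below fits a logistic model of "investability" on simulated log market capitalization. The data, threshold, and fitted coefficients are entirely illustrative and are not taken from the paper.

```python
# Hedged sketch: a one-input logistic model of institutional investability.
# The data-generating process and cutoff below are simulated assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
log_mktcap = rng.normal(loc=12.0, scale=2.0, size=5000)          # simulated log market cap
# Stocks above a (noisy) size threshold fall inside institutions' investable universe.
investable = (log_mktcap + rng.normal(0, 1.0, 5000) > 11.0).astype(int)

model = LogisticRegression().fit(log_mktcap.reshape(-1, 1), investable)
prob = model.predict_proba(np.array([[10.0], [12.0], [14.0]]))[:, 1]
print("P(investable | log mktcap = 10, 12, 14):", prob.round(2))
```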
478

Automated Landing Site Evaluation for Semi-Autonomous Unmanned Aerial Vehicles

Klomparens, Dylan 27 October 2008 (has links)
A system is described for identifying obstacle-free landing sites for a vertical-takeoff-and-landing (VTOL) semi-autonomous unmanned aerial vehicle (UAV) from point cloud data obtained from a stereo vision system. The relatively inexpensive, commercially available Bumblebee stereo vision camera was selected for this study. A "point cloud viewer" computer program was written to analyze point cloud data obtained from 2D images transmitted from the UAV to a remote ground station. The program divides the point cloud data into segments, identifies the best-fit plane through the data for each segment, and performs an independent analysis on each segment to assess the feasibility of landing in that area. The program also rapidly presents the stereo vision information and analysis to the remote mission supervisor who can make quick, reliable decisions about where to safely land the UAV. The features of the program and the methods used to identify suitable landing sites are presented in this thesis. Also presented are the results of a user study that compares the abilities of humans and computer-supported point cloud analysis in certain aspects of landing site assessment. The study demonstrates that the computer-supported evaluation of potential landing sites provides an immense benefit to the UAV supervisor. / Master of Science
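A minimal sketch of the per-segment analysis described above: fit a best-fit plane through a point-cloud segment by least squares, then accept or reject the segment based on its tilt and residual roughness. The thresholds and the toy data are assumptions for illustration, not the thesis' actual criteria.

```python
# Hedged sketch of per-segment landing-site evaluation from point-cloud data.
# Tilt and roughness thresholds are illustrative assumptions.
import numpy as np

def evaluate_segment(points: np.ndarray, max_tilt_deg: float = 10.0,
                     max_roughness_m: float = 0.05) -> bool:
    """points: (N, 3) array of x, y, z. Returns True if the segment looks landable."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)    # best-fit plane z = a*x + b*y + c
    a, b, _ = coeffs
    normal = np.array([-a, -b, 1.0])
    tilt = np.degrees(np.arccos(normal[2] / np.linalg.norm(normal)))
    roughness = np.std(points[:, 2] - A @ coeffs)                # deviation from the plane
    return tilt <= max_tilt_deg and roughness <= max_roughness_m

# Toy segment: a gently sloped, slightly noisy patch of ground.
xy = np.random.default_rng(1).uniform(-1, 1, size=(500, 2))
z = 0.05 * xy[:, 0] + np.random.default_rng(2).normal(0, 0.01, 500)
print("landable:", evaluate_segment(np.c_[xy, z]))
```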
479

An energy-efficient and scalable slot-based privacy homomorphic encryption scheme for WSN-integrated networks

Verma, Suraj, Pillai, Prashant, Hu, Yim Fun 04 1900 (has links)
Yes / With the advent of Wireless Sensor Networks (WSN) and their immense popularity in a wide range of applications, security has been a major concern for these resource-constrained systems. Alongside security, WSNs are currently being integrated with existing technologies such as the Internet, satellite, Wi-Max, Wi-Fi, etc. in order to transmit data over long distances and hand over network load to more powerful devices. With the focus currently on the integration of WSNs with existing technologies, security becomes a major concern. The main security requirement for WSN-integrated networks is providing end-to-end security along with the implementation of in-network processing techniques such as data aggregation. This can be achieved with the implementation of homomorphic encryption schemes, which prove to be computationally inexpensive although they carry considerable overheads. This paper addresses the ID issue of the commonly used Castelluccia Mykletun Tsudik (CMT) [12] homomorphic scheme by proposing an ID slotting mechanism which carries information pertaining to the security keys responsible for the encryption of individual sensor data. The proposed scheme proves to be 93.5% lighter in terms of induced overheads and 11.86% more energy efficient, along with providing efficient WSN scalability, compared to the existing scheme. The paper provides analytical results comparing the proposed scheme with the existing scheme, thus justifying that the modification to the existing scheme can prove highly efficient for resource-constrained WSNs.
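For context, here is a minimal sketch of CMT-style additive homomorphic aggregation, with a bit-per-node "ID slot" word standing in for the ID slotting idea mentioned above. The key derivation, the modulus, and the slot encoding are illustrative assumptions rather than the paper's exact construction.

```python
# Hedged sketch of CMT-style additively homomorphic aggregation with an ID-slot
# word. Key derivation, modulus, and slot layout are illustrative assumptions.
import hashlib

M = 2 ** 32                                     # additive modulus (assumed large enough for the sum)

def keystream(node_key: bytes, nonce: int) -> int:
    digest = hashlib.sha256(node_key + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:4], "big") % M

def encrypt(reading: int, node_key: bytes, nonce: int) -> int:
    return (reading + keystream(node_key, nonce)) % M

keys = {1: b"key-node-1", 2: b"key-node-2", 3: b"key-node-3"}
readings = {1: 21, 2: 25, 3: 23}
nonce = 7

# Aggregator adds ciphertexts and sets one slot bit per contributing node ID.
agg_cipher = sum(encrypt(readings[i], keys[i], nonce) for i in keys) % M
id_slots = 0
for node_id in keys:
    id_slots |= 1 << node_id

# Sink recovers the plaintext sum by stripping only the keystreams of flagged nodes.
contributors = [i for i in keys if id_slots & (1 << i)]
plain_sum = (agg_cipher - sum(keystream(keys[i], nonce) for i in contributors)) % M
print("aggregated reading:", plain_sum)         # 69
```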
480

Measuring the efficiency of two stage network processes: a satisficing DEA approach

Mehdizadeh, S., Amirteimoori, A., Vincent, Charles, Behzadi, M.H., Kordrostami, S. 2020 March 1924 (has links)
No / Regular Network Data Envelopment Analysis (NDEA) models deal with evaluating the performance of a set of decision-making units (DMUs) with a two-stage structure in the context of a deterministic data set. In the real world, however, observations may display stochastic behavior. To the best of our knowledge, despite the existing research on different data types, studies on two-stage processes with stochastic data are still very limited. This paper proposes a two-stage network DEA model with stochastic data. The stochastic two-stage network DEA model is formulated based on the satisficing DEA models of chance-constrained programming and the leader-follower concept. Using the properties of the probability distribution and under the assumption of a single random factor in the data, the probabilistic form of the model is transformed into its equivalent deterministic linear programming model. In addition, the relationship between the two stages, as the leader and the follower, respectively, at different confidence levels and under different aspiration levels, is discussed. The proposed model is further applied to a real case concerning 16 commercial banks in China in order to confirm the applicability of the proposed approach at different confidence levels and under different aspiration levels.
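For readers unfamiliar with DEA, the sketch below solves the plain deterministic, single-stage, input-oriented CCR model as a linear program with SciPy. It only grounds the basic efficiency measure; it does not implement the paper's stochastic, two-stage, leader-follower model with chance constraints.

```python
# Hedged sketch: deterministic single-stage input-oriented CCR DEA via linprog.
# Toy data only; not the paper's stochastic two-stage network model.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """X: (m, n) inputs, Y: (s, n) outputs, o: index of the DMU under evaluation."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                   # minimize theta; variables are [theta, lambda_1..n]
    A_in = np.c_[-X[:, [o]], X]                   # X @ lam - theta * x_o <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y]           # -Y @ lam <= -y_o  (i.e. Y @ lam >= y_o)
    A_ub = np.r_[A_in, A_out]
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    bounds = [(None, None)] + [(0, None)] * n     # theta free, lambdas >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Toy data: 4 DMUs, 2 inputs, 1 output.
X = np.array([[2.0, 4.0, 3.0, 5.0],
              [3.0, 1.0, 2.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
for j in range(4):
    print(f"DMU {j}: efficiency = {ccr_efficiency(X, Y, j):.3f}")
```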
