  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
291

A Heuristic Feature-Based Quantification Framework for Efficient Malware Detection: measuring the malicious intent of a file using anomaly probabilistic scoring and evidence combination theory with fuzzy hashing for malware detection in Portable Executable files

Namanya, Anitta P. January 2016 (has links)
Malware is still one of the most prominent vectors through which computer networks and systems are compromised; a compromised system or network provides data and/or processing resources to the world of cybercrime. With cybercrime projected to cost the world $6 trillion by 2021, malware is expected to remain a growing challenge, and statistics on malware growth over the last decade support this, with malware numbers increasing almost exponentially over the period. Recent reports on the complexity of malware show that the fight against malware, as a means of building a more resilient cyberspace, is an evolving challenge, compounded by the lack of cyber-security expertise to handle the expected rise in incidents. This thesis proposes advancing the automation of static malware analysis and detection to improve the confidence with which a standard computer user can decide on a file's malicious status. The work introduces a framework that relies on two novel approaches to score the malicious intent of a file: the first attaches probabilistic scores to heuristic anomalies to calculate an overall file malicious score, while the second uses fuzzy hashes and evidence combination theory for more efficient malware detection. The approaches' resulting quantifiable scores measure the malicious intent of the file. The designed schemes were validated on a dataset of "clean" and "malicious" files, and the results show that the framework achieves true-positive/false-positive detection-rate trade-offs for efficient malware detection.
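The abstract's two approaches, probabilistic anomaly scores fused by evidence combination theory, can be illustrated with a minimal sketch. The thesis text here does not publish its exact combination rule, so the example assumes a simple two-hypothesis Dempster-Shafer combination; the indicator names and scores are hypothetical:

```python
from functools import reduce

def combine_evidence(probs):
    """Fuse independent per-indicator malicious probabilities with Dempster's
    rule over the two hypotheses {malicious, benign}; in this two-class case
    it reduces to multiplying odds, as in Bayesian spam filtering."""
    def pair(p, q):
        agree = p * q
        return agree / (agree + (1 - p) * (1 - q))
    return reduce(pair, probs)

# Hypothetical anomaly scores for one PE file: section entropy, suspicious
# imports, and a fuzzy-hash similarity match against known malware.
scores = [0.7, 0.6, 0.8]
print(round(combine_evidence(scores), 3))   # 0.933 -> likely malicious
```

Each moderately suspicious indicator on its own is inconclusive, but the combined score crosses a plausible detection threshold, which is the intuition behind evidence combination.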
292

Improving the assessment of the credit risk of individual borrowers based on the introduction of integral scoring technology (on the example of JSC "SKB-Bank") : master's thesis

Романова, Е. В., Romanova, E. V. January 2018 (has links)
The master's thesis is devoted to assessing the credit risk of individual borrowers. The aim of the study is to develop a methodical approach to assessing the creditworthiness of borrowers as a way to improve credit risk management. The thesis concludes that improving the quality of a bank's credit portfolio enhances the bank's competitiveness under conditions of intense competition and the regulatory requirements of the Bank of Russia.
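The abstract does not detail the integral scoring technology itself, but an "integral" (composite) credit score is commonly a weighted aggregation of normalized borrower attributes mapped to a rating band. The sketch below is purely illustrative; the attribute names, weights, and cut-offs are assumptions, not the thesis's methodology:

```python
def integral_score(features, weights):
    """Composite ("integral") score: weighted mean of borrower attributes
    that have each been normalized to the [0, 1] range."""
    assert features.keys() == weights.keys()
    return sum(weights[k] * features[k] for k in features) / sum(weights.values())

def rating(score):
    """Map the integral score to a coarse risk band (illustrative cut-offs)."""
    if score >= 0.75:
        return "low risk"
    if score >= 0.5:
        return "medium risk"
    return "high risk"

borrower = {"income_stability": 0.8, "payment_history": 0.9, "debt_load": 0.4}
weights = {"income_stability": 0.3, "payment_history": 0.5, "debt_load": 0.2}
s = integral_score(borrower, weights)
print(round(s, 2), rating(s))   # 0.77 low risk
```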
293

Contributions to Distributed Detection and Estimation over Sensor Networks

Whipps, Gene Thomas January 2017 (has links)
No description available.
294

The Relative Security Metric of Information Systems: Using AIMD Algorithms

Owusu-Kesseh, Daniel 28 June 2016 (has links)
No description available.
295

Rhythm and Views: A Compilation of Eight Projects Including Scoring, Video Production and Motion Graphic Design

Hudgins, Donald A. 28 April 2008 (has links)
No description available.
296

Functional Norm Regularization for Margin-Based Ranking on Temporal Data

Stojkovic, Ivan January 2018 (has links)
Quantifying properties of interest is an important problem in many domains, e.g., assessing the condition of a patient, estimating the risk of an investment, or judging the relevance of a search result. However, the properties of interest are often latent and hard to assess directly, making it difficult to obtain the classification or regression labels needed to learn predictive models from observable features. In such cases it is typically much easier to obtain a relative comparison of two instances, i.e., to assess which one is more intense with respect to the property of interest. One framework able to learn from this kind of supervision is the ranking SVM, and it forms the basis of our approach. Applications to biomedical datasets typically pose specific additional challenges. The first, and the major one, is the limited number of examples, due to expensive measuring technology and/or the infrequency of the conditions of interest; so few examples make both the identification of patterns/models and their validation less reliable. Second, repeated samples from the same subject are collected on multiple occasions over time, which breaks the i.i.d. sample assumption and introduces a dependency structure that needs to be taken into account appropriately. Third, feature vectors are high-dimensional, typically of much higher cardinality than the number of samples, making models less useful and their learning less efficient. The hypothesis of this dissertation is that functional norm regularization can help alleviate these challenges by improving the generalization ability and/or learning efficiency of predictive models, here specifically of approaches based on the ranking SVM framework. The temporal nature of the data was addressed with a loss that fosters temporal smoothness of the functional mapping, reflecting the assumption that temporally proximate samples are more correlated.
The large number of feature variables was handled with the sparsity-inducing L1 norm, so that most features have zero effect in the learned functional mapping. The proposed sparse (temporal) ranking objective is convex but non-differentiable; a smooth dual form is therefore derived, taking the form of a quadratic function with box constraints, which allows efficient optimization. For the case of multiple similar tasks, a joint learning approach based on matrix norm regularization, using the trace norm L* and the sparse-row L21 norm, was also proposed, and an alternating minimization scheme with a proximal optimization algorithm was developed to solve the multi-task objective. The generalization potential of the proposed high-dimensional and multi-task ranking formulations was assessed in a series of evaluations on synthetic and real datasets. The high-dimensional approach was applied to learning disease severity scores from gene expression data in human influenza cases and compared against several alternative approaches; it produced a scoring function with improved predictive performance, measured by the fraction of correctly ordered test pairs, and a set of selected features of high robustness according to three similarity measures. The multi-task approach was applied to three human viral infection problems and to learning exam scores in Math and English; the proposed formulation with mixed matrix norms was overall more accurate than formulations with a single norm regularizer. / Computer and Information Science
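The core building block, a pairwise ranking SVM with an L1 sparsity penalty, can be sketched as follows. This is not the dissertation's dual quadratic-program solver; it is a simplified subgradient-descent version on a toy dataset, with all data and hyperparameters invented for illustration:

```python
import numpy as np

def train_sparse_ranker(X, pairs, lam=0.1, lr=0.01, epochs=200, seed=0):
    """Linear ranking function f(x) = w.x trained with the pairwise hinge
    loss of a ranking SVM plus an L1 penalty that drives most weights
    toward zero. `pairs` lists (i, j) meaning sample i should outrank j."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=X.shape[1])
    for _ in range(epochs):
        grad = np.zeros_like(w)
        for i, j in pairs:
            if w @ (X[i] - X[j]) < 1:          # hinge loss active for this pair
                grad -= X[i] - X[j]
        grad = grad / len(pairs) + lam * np.sign(w)   # add L1 subgradient
        w -= lr * grad
    return w

# Toy data: only feature 0 determines the true ordering; the rest is noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 5))
order = np.argsort(-X[:, 0])                   # true ranking, best first
pairs = [(order[k], order[k + 1]) for k in range(len(order) - 1)]
w = train_sparse_ranker(X, pairs)
print(w @ X[order[0]] > w @ X[order[-1]])      # top item should score highest
```

The L1 subgradient shrinks the weights of the four noise features toward zero while the informative feature's weight grows, which is the sparsity behavior the dissertation exploits in the high-dimensional setting.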
297

Capabilities and Processes to Mitigate Risks Associated with Machine Learning in Credit Scoring Systems : A Case Study at a Financial Technology Firm

Pehrson, Jakob, Lindstrand, Sara January 2022 (has links)
Artificial intelligence and machine learning have become an important part of society, and businesses today compete in a new digital environment. However, scholars and regulators are concerned about these technologies' societal impact, as their use does not come without risks, such as those stemming from transparency and accountability issues. The potential for misuse of these technologies has led to guidelines, and forthcoming regulations, on how they can be used in a trustworthy way. These guidelines, however, are argued to lack practicality, and they have sparked concern that they will hamper organisations' digital pursuit of innovation and competitiveness. This master's thesis aims to contribute to this field by studying how teams can mitigate the risks associated with machine learning. The scope was set on capturing employees' perceptions of what they consider important and challenging about machine learning risk mitigation, and then relating those perceptions to research in order to develop practical recommendations. The thesis focuses specifically on the financial technology sector and the use of machine learning in credit scoring. To achieve this aim, a qualitative single case study was conducted. The thesis found that a combination of processes and capabilities is perceived as important in this work, and it also identified current barriers in the case. The findings indicate that strong responsiveness is important, achieved in the case through separation of responsibilities and strong team autonomy. Standardisation is argued to be needed for greater control, but it should be implemented in a way that allows for flexibility. Furthermore, monitoring and validation are important processes for mitigating machine learning risks.
Additionally, the capability to extract as much information from data as possible is an essential component of daily work, both to create value and to mitigate risks. One barrier is that the needed knowledge takes time to develop and that knowledge transfer is sometimes restricted by resource allocation; knowledge transfer is nevertheless argued to be important for long-term sustainability. Organisational culture and societal awareness are also indicated to play a role in machine learning risk mitigation.
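Monitoring, highlighted above as a key risk-mitigation process, is often operationalized in credit scoring as a drift check on the score distribution. The Population Stability Index below is a common industry choice, not a technique named in the thesis; the thresholds and data are illustrative:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline score distribution and a recent one.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf              # cover the full real line
    e = np.histogram(expected, cuts)[0] / len(expected)
    a = np.histogram(actual, cuts)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
baseline = rng.normal(600, 50, 10_000)   # scores at model validation time
drifted = rng.normal(560, 50, 10_000)    # applicant population has shifted
print(population_stability_index(baseline, baseline[:5_000]) < 0.1)  # stable
print(population_stability_index(baseline, drifted) > 0.25)          # drift alarm
```

Running such a check on a schedule gives a team the kind of responsiveness the case study emphasizes: a drifting applicant population is flagged before model performance degrades silently.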
298

Nonverbal behaviour in the process of the therapeutic interview : an ecosystemic perspective

Scott, Sybil 11 1900 (has links)
Communication can be divided into two broad areas, namely the verbal and nonverbal levels. While attention has been paid to nonverbal communication in the literature, few studies address the nonverbal communication that takes place in the natural setting of a therapeutic session. The present study provides such a naturalistic study, in which the verbal content of actual therapy sessions is integrated with the nonverbal content to yield a holistic view of the session. An ecosystemic epistemology is adopted, representing a move away from more traditional approaches to nonverbal behaviour, which are largely confined to a positivistic framework of thought and design. SYMLOG Interaction Scoring is employed as a practical method of assisting observers in distinguishing nonverbal behaviours, which are usually perceived unconsciously, and lifting them into consciousness, allowing this information to be integrated with the meanings and hypotheses generated during therapy. By deliberately including descriptions of nonverbal behaviour, the descriptions of therapy were broadened, thereby providing a more holistic approach to therapy. / Psychology / M.A. (Clinical Psychology)
299

Measuring, refining and calibrating speaker and language information extracted from speech

Brummer, Niko 12 1900 (has links)
Thesis (PhD (Electrical and Electronic Engineering))--University of Stellenbosch, 2010. / ENGLISH ABSTRACT: We propose a new methodology, based on proper scoring rules, for evaluating the goodness of pattern recognizers with probabilistic outputs. The recognizers of interest take an input, known to belong to one of a discrete set of classes, and output a calibrated likelihood for each class. This generalizes the traditional use of proper scoring rules to evaluate the goodness of probability distributions. A recognizer whose outputs are well-calibrated probability distributions can be applied to make cost-effective Bayes decisions over a range of applications having different cost functions; a recognizer with likelihood outputs can additionally be employed over a wide range of prior distributions for the to-be-recognized classes. We use automatic speaker recognition and automatic spoken-language recognition as prototypes of this type of pattern recognizer. The traditional evaluation methods in these fields, as represented by the series of NIST Speaker and Language Recognition Evaluations, evaluate the hard decisions made by the recognizers, which makes these recognizers cost- and prior-dependent. The proposed methodology generalizes that of the NIST evaluations, allowing the evaluation of recognizers intended to be usefully applied over a wide range of applications with variable priors and costs. The proposal includes a family of evaluation criteria, where each member of the family is formed by a proper scoring rule. We emphasize two members of this family: (i) a non-strict scoring rule, directly representing error-rate at a given prior; and (ii) the strict logarithmic scoring rule, which represents information content, or equivalently summarized error-rate, or expected cost, over a wide range of applications.
We further show how to form a family of secondary evaluation criteria which, by contrast with the primary criteria, analyse the goodness of calibration of the recognizers' likelihoods. Finally, we show how to use the logarithmic scoring rule as an objective function for the discriminative training of fusion and calibration of speaker and language recognizers. / AFRIKAANSE OPSOMMING (translated): We show how to represent, measure, calibrate and optimize the uncertainty in the output of automatic speaker-recognition and language-recognition systems. This makes the existing technology more accurate, more efficient and more widely applicable.
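The logarithmic scoring rule emphasized in this abstract has a standard detection form, often written Cllr in the speaker-recognition literature: the log scoring rule applied to log-likelihood-ratio outputs, averaged separately over target and non-target trials. A minimal sketch, with invented example scores:

```python
import numpy as np

def cllr(target_llrs, nontarget_llrs):
    """Log-likelihood-ratio cost: the logarithmic proper scoring rule applied
    to detection log-likelihood-ratios, averaged over target and non-target
    trials. 0 bits is a perfect recognizer; 1 bit matches an uninformative
    one that always outputs llr = 0."""
    t = np.log2(1 + np.exp(-np.asarray(target_llrs, dtype=float)))
    n = np.log2(1 + np.exp(np.asarray(nontarget_llrs, dtype=float)))
    return 0.5 * (t.mean() + n.mean())

print(cllr([0.0], [0.0]))                          # uninformative: exactly 1.0
print(round(cllr([4.0, 5.0], [-4.0, -6.0]), 3))    # confident, well calibrated
```

Because the rule is strictly proper, a recognizer minimizes it only by outputting honest, well-calibrated likelihood ratios, which is what makes it usable both as an evaluation criterion and as a training objective for fusion and calibration.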
300

Applications of data mining techniques in establishing a credit scoring system for the traditional industry of the SMEs

羅浩禎, Luo, Hao-Chen Unknown Date (has links)
Small and medium-sized enterprises (SMEs) are the lifeblood of Taiwan's economic and trade development; the SME-driven export economy was the main engine of Taiwan's economic miracle. With the formal implementation of Basel II at the end of 2006, banking institutions must incorporate an SME credit scoring process into their credit investigation and loan approval systems so that credit risk assessment can be quantified. This study therefore applies data mining techniques to construct a default risk model for SMEs, targeting corporate exposures under the internal ratings-based (IRB) approach. In accordance with the new accord and the guidelines of the Financial Supervisory Commission (FSC), the model takes not only financial variables as its core but also adds non-financial variables, such as basic enterprise characteristics and macroeconomic factors, to estimate the probability of default and build a credit rating system that banking institutions can use as a reference for managing the risk of new credit applicants. The study focuses on traditional manufacturing SMEs, constructing the default risk model and its credit rating system from data observed between 2003 and 2005.
Three methods were used to build candidate models: logistic regression, neural networks, and C&R Tree; their predictive power was then evaluated and compared. Under the chosen 1:1 oversampling proportion, the logistic regression model performed best, with six variables selected as the final predictors in the default risk model. Validation showed that the model remains stable and predictive even when applied to a different period or to other real data, and it conforms to the requirements of Basel II and the FSC, indicating that the credit rating model developed in this study can indeed be applied in banks' credit approval processes.
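The study's chosen design, logistic regression trained on a 1:1 resampled dataset, can be sketched as below. The thesis's actual six predictor variables are not reproduced here, so the feature names, coefficients, and data are synthetic placeholders:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Logistic regression by gradient descent: P(default) = sigmoid(w.x + b)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * float((p - y).mean())
    return w, b

rng = np.random.default_rng(0)
n = 400
# Synthetic standardized ratios: higher debt raises default risk, liquidity lowers it.
debt_ratio = rng.normal(size=n)
liquidity = rng.normal(size=n)
true_logit = 1.5 * debt_ratio - 1.0 * liquidity - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# 1:1 resampling of defaulters vs. non-defaulters, mirroring the study's design.
d_idx, s_idx = np.where(y == 1)[0], np.where(y == 0)[0]
m = min(len(d_idx), len(s_idx))
idx = np.concatenate([rng.choice(d_idx, m, replace=False),
                      rng.choice(s_idx, m, replace=False)])
X = np.column_stack([debt_ratio, liquidity])[idx]
w, b = fit_logistic(X, y[idx])
print(w[0] > 0 and w[1] < 0)   # recovered signs match the simulated risk drivers
```

Balancing the classes before fitting, as the study does, keeps the rare default class from being swamped, at the cost of an intercept that must later be re-calibrated to the true default rate.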
