Bibliography
The paper examines the use of stylometry techniques to determine the style of an author's publications. Statistical linguistic analysis of the author's text takes advantage of text-content monitoring based on the Porter stemmer and NLP methods to determine a set of stop words. The latter is used in stylometry methods to determine, in percentage terms, whether the analyzed text belongs to a specific author. The article proposes a formal approach to defining the author's style in Ukrainian texts. Experimental results of the proposed method for attributing an analyzed text to a particular author, given a reference text fragment, are reported. The study was conducted on Ukrainian scientific texts from a technical domain.
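A minimal sketch of the stop-word approach this abstract describes, assuming a toy Ukrainian stop-word list and cosine similarity as the percentage measure; the paper's actual stop-word set, Porter-stemming pipeline, and similarity metric are not given here, and stemming is omitted:

```python
import math
import re
from collections import Counter

# Illustrative stop-word list; the paper derives its own set for
# Ukrainian via the Porter stemmer and NLP preprocessing.
STOP_WORDS = ["і", "та", "що", "на", "не", "як", "до", "з", "це", "або"]

def stop_word_profile(text: str) -> list[float]:
    """Relative frequency of each stop word in the text."""
    tokens = re.findall(r"\w+", text.lower())
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return [counts[w] / total for w in STOP_WORDS]

def authorship_score(reference: str, candidate: str) -> float:
    """Cosine similarity of stop-word profiles, as a percentage."""
    a, b = stop_word_profile(reference), stop_word_profile(candidate)
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 100.0 * dot / norm if norm else 0.0
```

A score near 100 would suggest the candidate text matches the reference author's stop-word profile; the paper reports its results in exactly such percentage terms, though with a formally defined style model rather than this toy similarity.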
Institutions use the information security (InfoSec) policy document as a set of rules and guidelines to govern the use of institutional information resources. However, a common problem is that these policies are often not followed or complied with. This study explores the extent to which the problem lies with the policy documents themselves. InfoSec policies are written in natural language, which is prone to ambiguity and misinterpretation. Consequently, such policies may be ambiguous, making them hard, if not impossible, for users to comply with. A case study approach with content analysis was adopted. The research explores the extent of the problem using a case study of an educational institution in South Africa.
Regarding Information and Communication Technologies (ICTs) in the public sector, electronic governance was the first concept to emerge, and it has been recognized as an important issue in government outreach to citizens since the early 1990s. The most significant recent development in e-governance is Open Government Data, which gives citizens the opportunity to freely access government data, build value-added applications, deliver creative public services, and participate in various democratic processes. Open Government Data is expected to enhance the quality and efficiency of government services, strengthen democratic participation, and create benefits for the public and enterprises. Its success hinges on accessibility, data quality, security policy, and platform functions in general. This article presents a robust assessment framework that not only provides a valuable understanding of the development of Open Government Data but also offers an effective feedback mechanism for mid-course corrections. We further apply the framework to evaluate the Open Government Data platform of the central government, analyzing the open data of nine major government agencies. Our results indicate that the Financial Supervisory Commission performs better than the other agencies, especially in terms of accessibility: it mostly provides dataset formats rated 3 stars or above, and the quality of its metadata is well established. However, most of the data released by government agencies are regulations, reports, operational records, and other administrative data, which are not immediately applicable. Overall, government agencies should continuously increase the quantity and quality of Open Government Data, and strengthen the discussion and linkage functions of their platforms as well as the quality of their datasets. Aside from consolidating collaboration and interaction with open data communities, government agencies should improve the awareness and ability of personnel to manage and apply open data. As personnel acceptance of open data improves, the quantity and quality of Open Government Data would increase as well.
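The "3 stars or above" formats mentioned here refer to the well-known 5-star open-data scheme. A hypothetical sketch of how an agency's published formats could be scored against it; the framework's actual rating rules and format-to-star mapping are assumptions for illustration:

```python
# Hypothetical mapping from dataset file formats to the 5-star
# open-data scheme; the abstract does not give the framework's
# actual scoring rules.
STAR_RATING = {
    "pdf": 1,    # openly licensed, but unstructured
    "xls": 2,    # structured, proprietary format
    "xlsx": 2,
    "csv": 3,    # structured, non-proprietary
    "json": 3,
    "rdf": 4,    # uses URIs to identify things
}

def share_three_star_or_above(dataset_formats: list[str]) -> float:
    """Fraction of datasets published in 3-star or better formats."""
    rated = [STAR_RATING.get(fmt.lower(), 0) for fmt in dataset_formats]
    return sum(1 for r in rated if r >= 3) / max(len(rated), 1)

# An agency publishing mostly CSV/JSON scores highly on this measure,
# as the Financial Supervisory Commission does in the study.
print(share_three_star_or_above(["csv", "json", "pdf", "csv"]))  # 0.75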
Data mining has been used as a technology in various applications in engineering, the sciences, and other fields to analyze system data and solve problems. Its applications extend further to detecting cyber-attacks. We present a simple, low-effort approach, similar to data mining, that detects email-based phishing attacks. The approach examines the HTML content of emails and the web pages they reference. The domains of the embedded links and the registration-authority details of those domains, together with the script code associated with the web pages, are analyzed to estimate the probability of a phishing attack.
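A hedged sketch of the kind of HTML-and-link analysis this abstract describes, using common phishing indicators (raw-IP hosts, '@' in URLs, embedded scripts, many distinct domains) as stand-ins, since the authors' exact features and scoring are not specified; a real system would also query domain-registration (WHOIS) records, which is omitted here:

```python
import re
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkExtractor(HTMLParser):
    """Collects <a href> targets and counts <script> tags in an HTML body."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.script_count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [v for k, v in attrs if k == "href" and v]
        elif tag == "script":
            self.script_count += 1

def phishing_score(html_body: str) -> float:
    """Fraction of suspicious indicators triggered, in [0.0, 1.0]."""
    parser = LinkExtractor()
    parser.feed(html_body)
    hosts = [urlparse(link).netloc for link in parser.links]
    indicators = [
        any(re.fullmatch(r"[\d.:]+", h) for h in hosts),  # link host is a raw IP
        any("@" in link for link in parser.links),        # '@' hides the real host
        parser.script_count > 0,                          # scripts embedded in email
        len(set(hosts)) > 3,                              # links to many domains
    ]
    return sum(indicators) / len(indicators)

# Example: a lone link whose host is a raw IP trips one of four indicators.
print(phishing_score('<a href="http://192.168.0.1/login">Your bank</a>'))  # 0.25
```

Each indicator is weighted equally here for simplicity; the paper's method additionally inspects domain-authority details and page scripts before concluding on the probability of an attack.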