Li Hui, Tian Ya-dan and Zhou Ming-jie, School of Economics and Management, Xidian University, Xi’an 710126, Shaanxi Province, P.R. China
This article analyzes the evolution of research in the field of deep learning based on papers in the Web of Science core collection. We use SciMAT to detect the hidden topics of 1,136 papers and visualize the evolution process in order to explore its patterns. We also aim to find the distribution of topics across different institutions. The results show that, over the past 11 years, deep learning research has focused on two main topic evolution lines: Learning Approaches and Strategies, and Neural Networks. We then identify the important topics of the last four years: Boltzmann machines and Neural networks. Based on the evolution patterns and the distribution of research topics across institutions in deep learning, we propose suggestions for the future development of the field. The methods and results presented in this study could also help academics identify research topics in other fields and master the technological frontier, thereby providing an information basis for scientific and technological decision-making.
Deep learning; Scientometric analysis; Topic evolution; SciMAT.
Era Desti Ramayani, Fadila Amelia Futri, Moch Mogi Ibrahim H and Gigih Forda Nama, Department of Engineering, Lampung University, Lampung, Indonesia
A university is an institution of higher education and research that awards academic degrees in various fields, providing both undergraduate and postgraduate education. There are several admission paths for University of Lampung students, such as SBMPTN, SNMPTN, Mandiri, PMAP, Pararel, and special achievements. Newly accepted students also come from various regions and age groups, and their chosen faculties vary accordingly. On this basis, this study analyzes the level of interest in the chosen faculties across regions and ages. The data analyzed are the records of new University of Lampung students in 2018, processed with the Decision Tree method using RapidMiner. Based on the results of the analysis, the faculty with the highest interest is the Faculty of K.I.P, and most of the new-student quota comes from the city of Bandar Lampung.
Data mining, Decision Tree, Rapid Miner
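As a minimal sketch of the decision-tree idea in the abstract above: an ID3-style tree chooses the attribute whose split most reduces label entropy. The records and field names below are hypothetical toy data, not the paper’s University of Lampung dataset, and the paper itself uses RapidMiner rather than hand-rolled code.

```python
from collections import Counter
import math

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy reduction from splitting the records on one attribute (ID3-style)."""
    by_value = {}
    for row, lab in zip(rows, labels):
        by_value.setdefault(row[attr], []).append(lab)
    remainder = sum(len(g) / len(labels) * entropy(g) for g in by_value.values())
    return entropy(labels) - remainder

# toy new-student records (hypothetical fields, not the paper's dataset)
rows = [
    {"region": "Bandar Lampung", "path": "SNMPTN"},
    {"region": "Bandar Lampung", "path": "SBMPTN"},
    {"region": "Metro", "path": "SNMPTN"},
    {"region": "Metro", "path": "Mandiri"},
]
faculty = ["KIP", "KIP", "Engineering", "Engineering"]

# the tree's root split is the attribute with the highest information gain
best = max(["region", "path"], key=lambda a: information_gain(rows, faculty, a))
print(best)  # → region
```

Here "region" separates the faculties perfectly (gain 1.0 bit), so it would be chosen as the root split.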
Areej Al-Hassan and Hmood Al-Dossari, King Saud University, Riyadh, Saudi Arabia
In social media platforms, hate speech can be a cause of “cyber conflict”, which can affect social life at both the individual and the country level. Hateful and antagonistic content propagated via social networks has the potential to cause harm and suffering to individuals and to lead to social tension and disorder beyond cyberspace. However, social networks cannot control all the content that users post. For this reason, there is a demand for automatic detection of hate speech. This demand particularly arises when the content is written in complex languages (e.g., Arabic), as Arabic text is known for its challenges, its complexity, and the scarcity of its resources. This paper presents a background on hate speech and the related detection approaches. In addition, recent contributions on hate speech and related anti-social behaviour topics are reviewed. Finally, challenges and recommendations for the Arabic hate speech detection problem are presented.
Text Mining, Social Networks, Hate Speech, Natural Language Processing, Arabic NLP
Chunlei Li, Chunming Rong and Jayachander Surbiryala, University of Bergen, Norway
Outsourcing data storage and computation to cloud service providers (CSPs) is a popular practice in cloud computing. Although cloud computing technology has undergone rapid development in recent years, the security issue has not been properly addressed so far and remains a roadblock to the large-scale deployment of cloud computing. Recently, an idea of securing big data by protecting its storage path was proposed by Rong et al. While the general idea was clearly described in their work, the problem of generating a storage path secured by a trapdoor function remained unsolved. In this paper, we further extend the idea by introducing a middleware between a tenant and the cloud. The roles of the middleware in the scheme are twofold: it acts as a reliable communicator between the tenant and the cloud, and it integrates a toolkit that generates secure storage paths according to the inputs from the cloud and the tenant. More specifically, the middleware integrates the idea of further partitioning a dataset into client-defined basic data units, together with algorithms that efficiently generate parameterized big permutations according to the tenant’s configuration of secret parameters. In this way, the storage path cannot be recovered without knowledge of the exact client inputs. Therefore, this paper not only addresses the unsolved crucial problem in their work, but also contributes to increasing the security and flexibility of the scheme. Our analysis indicates that the proposed scheme is a secure, efficient and flexible approach to storing, accessing and sharing big data in the cloud.
cloud storage, cryptography, security, storage allocation, data partition.
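To illustrate the “parameterized big permutation” idea in the abstract above in its simplest form: secret parameters determine the order in which the basic data units are scattered across storage. The sketch below uses a plain seeded PRNG shuffle for illustration only; the paper’s construction is secured by a trapdoor function, not by seeding alone, and the seed string is hypothetical.

```python
import random

def storage_permutation(n_units, secret_params):
    """Permutation of data-unit indices derived from the tenant's secret parameters.
    Illustrative sketch only -- not the paper's trapdoor-secured construction."""
    rng = random.Random(secret_params)   # secret parameters seed the ordering
    order = list(range(n_units))
    rng.shuffle(order)                   # Fisher-Yates shuffle under the hood
    return order

path = storage_permutation(8, "tenant-secret:42")
print(path)  # scattered storage order for the 8 basic data units

# with the same secret, the tenant can reproduce the path deterministically
assert storage_permutation(8, "tenant-secret:42") == path
```

An adversary who sees the stored units but lacks the secret parameters faces all n! possible orderings, which is the intuition behind protecting the storage path.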
Junfeng Xu, Jin Yi and Weihua Hu, China Information Technology Security Evaluation Center, Beijing, China
With the rapid development of 3G/4G technology and the wide popularity of mobile terminals, the mobile Internet and its massive ecosystem of applications have become the most rapidly developing business on the Internet. The deep integration of mobile communication technology and Internet technology not only brings users an unprecedented experience, but also brings new security problems. In particular, malicious traffic on mobile terminals, illegal plug-ins and rampant Trojan viruses make mobile data traffic surge and bring losses to users. It is therefore important to study mobile Internet security detection technology for network traffic. Building on a summary of traditional network traffic security detection technology, this paper systematically discusses mobile network traffic security detection from three perspectives: network traffic measurement and identification, network traffic characteristics security detection, and network traffic content security detection. At the same time, this paper points out the difficult open problems in mobile Internet network traffic security detection and compares and analyzes the relevant technologies. Finally, we summarize the main content of this paper and future research directions in mobile Internet network traffic security detection.
Mobile software security, network traffic, security detection.
Huafei Zhu, Nanyang Technological University, Singapore
The electronic health record (EHR) greatly enhances the convenience of cross-domain sharing and has been proven effective in improving the quality of healthcare. On the other hand, the sharing of sensitive medical data faces critical security and privacy issues, which have become an obstacle preventing the wide adoption of EHRs. In this paper, we address several challenges in very important patients’ (VIPs) data privacy, including how to protect a VIP’s identity by using a pseudonym, how to enable a doctor to update an encrypted EHR in the VIP’s absence, how to help a doctor link up and decrypt the historical EHRs of a patient for secondary use in a secure environment, and so on. We then propose a framework for secure EHR data management. In our framework, we use a transitive pseudonym generation technique to allow a patient to vary his/her identity in each hospital visit. We separate metadata from detailed EHR data in storage, so that the security of EHR data is guaranteed by the security of both the central server and the local servers of all involved hospitals. Furthermore, in our framework, a hospital can encrypt and upload a patient’s EHR when he/she is absent; a patient can help to download and decrypt his/her previous EHRs from the central server; and a doctor can decrypt a patient’s historical EHRs for secondary use with the help of, and under audit by, several proxies.
Electronic health record, pseudonym, semantic security, transitive pseudonym.
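To give a feel for per-visit pseudonyms as described in the abstract above: each hospital visit sees a fresh identifier, while the patient (holding the chain) can still link them. The chained-hashing sketch below is purely illustrative, assuming a master secret and per-visit salts of my own invention; it is not the paper’s transitive pseudonym construction.

```python
import hashlib

def next_pseudonym(prev, visit_salt):
    """Derive the pseudonym for the next hospital visit from the previous one.
    Illustrative chained hashing only -- not the paper's transitive scheme."""
    return hashlib.sha256(prev + visit_salt).hexdigest().encode()

# hypothetical patient master secret anchors the chain
root = hashlib.sha256(b"patient-master-secret").hexdigest().encode()
p1 = next_pseudonym(root, b"visit-1")
p2 = next_pseudonym(p1, b"visit-2")

# each hospital sees a different identifier; only the chain links them
print(p1[:16], p2[:16])
assert p1 != p2
```

Because SHA-256 is one-way, a hospital holding only p2 cannot walk the chain backwards to p1 or to the patient’s master secret.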
Chhaya S Dule1, Dr Girijamma H A2 and Dr Rajasekharaiah K.M3, 1,3KG Reddy College of Engg. and Tech., Hyderabad (JNTUH), India and 2RNS Institute of Technology, Bengaluru (VTU Belgaum), India
Various methods are being adopted around the world to create systematic identification of citizens. In 2009, UIDAI, a government body of India, introduced a 12-digit number called the Aadhar number, generated from a fusion of the biometric and demographic data of an individual, as their identity. Ensuring the highest level of data protection as well as privacy preservation when Aadhar card data propagates through a network requires an efficient security model that is compatible with cloud infrastructure. This paper first investigates the existing approaches and their limitations regarding Aadhar card security, and then proposes a security model, namely ECrypto-AaDhaar, based on a cryptographic approach aligned with the cloud architecture. The privacy of the demographic and biometric information associated with the Aadhar card is preserved by a statistical crypto-blocking operation performed prior to propagating it through the network in the context of cloud infrastructure. The study also presents an experimental analysis to demonstrate the performance of the ECrypto-AaDhaar technique from a time complexity perspective.
Aadhar, Image Cryptography, Statistical crypto-blocking mechanism.
Kan Luo1, Siyuan Wang1, An Wei2, Wei Yu1 and Kai Hu1, 1School of Computer Science and Engineering, Beihang University, Beijing, China and 2China Mobile (Hangzhou) Information Technology Co., Ltd.
Software-as-a-Service (SaaS) is a software delivery model that covers composition, development and execution on cloud platforms. Massive numbers of SaaS applications need to be verified before deployment. To obtain the verification results of a large quantity of applications in a tolerable time, verify algebra (VA) is used to cut down the number of combinations to be verified. VA is an effective way to acquire the verification status by reusing previous results: in VA, a verification result is calculated without knowing the process of verification, so the verification tasks can be distributed to servers and executed in any order. This paper proposes a method called the component disassembly tree to decompose a complex SaaS application, and designs a parallel verification scheme for the cloud environment. The optimization of execution is discussed, and the proposed parallel schema is simulated in MapReduce.
Verification, SaaS, Components Combinations
Sami Ouali, College of Applied Sciences, Ibri, Oman
Software Product Line (SPL) is a paradigm used to improve reuse. This capacity is ensured by the management of the common and variable parts of a set of products. An SPL aims to increase productivity and quality and to reduce maintenance costs and time to market; however, an inappropriate implementation of an SPL can lead to code smells or code anomalies. These code smells are symptoms that something may be wrong in the source code, which can affect the quality of the products derived from an SPL. Such practices can have a domino effect, because the same problem can be present in many products due to reuse. Refactoring can be a solution to this problem: it improves the internal structure of source code without altering its external behavior. This paper proposes an approach to reduce code smells in SPLs through refactoring. The approach takes into account the transformation from code to design through reverse engineering.
Software Product Line, Code smells, Refactoring, Reverse Engineering.
Mabroukah Amarif1 and Ibtusam Alashoury2, 1,2Department of Computer Sciences, Sebha University, Sebha, Libya
Finding the shortest path between two points is one of the great challenges facing researchers nowadays. Many algorithms and mechanisms have been designed, each according to a particular approach and an adopted data structure. The most famous and widely used is Dijkstra’s algorithm, which finds the shortest path between two points through a graph data structure. Implicit paths can be derived from the solution path, but the search time varies according to the type of data structure used to store that path. This paper presents a development of Dijkstra’s algorithm that uses a linked hash map data structure to store the produced shortest solution path, and then retrieves the subsequent implicit paths from this data structure. The results show that searching through the given data structure is much faster than restarting the algorithm to search for the same path.
Dijkstra algorithm, data structure, linked hash map, time complexity, implicit path, graph.
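As a minimal sketch of the idea in the abstract above: Python’s insertion-ordered dict can play the role of the linked hash map, recording each settled node and its predecessor so that implicit sub-paths can be read back from the stored map instead of re-running the algorithm. The toy graph below is an assumption for illustration.

```python
import heapq

def dijkstra(graph, source):
    """Standard Dijkstra; the insertion-ordered dicts act as linked hash maps."""
    dist = {source: 0}
    pred = {}       # node -> predecessor on its shortest path (insertion-ordered)
    settled = {}    # settled nodes in the order they were finalized
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in settled:
            continue
        settled[u] = d
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                pred[v] = u
                heapq.heappush(heap, (nd, v))
    return settled, pred

def path_to(pred, source, target):
    """Read an implicit path out of the stored predecessor map (no re-run)."""
    path = [target]
    while path[-1] != source:
        path.append(pred[path[-1]])
    return path[::-1]

graph = {"a": {"b": 1, "c": 4}, "b": {"c": 2, "d": 6}, "c": {"d": 3}}
settled, pred = dijkstra(graph, "a")
print(path_to(pred, "a", "d"))  # → ['a', 'b', 'c', 'd']
print(path_to(pred, "a", "c"))  # implicit sub-path, looked up from the same map
```

The second lookup is the point of the paper’s optimization: the path a→c falls out of the already-stored map in O(path length) instead of another full Dijkstra run.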
Mohamed Elmassry and Saad Al-Ahmadi, Department of Computer Science, King Saud University, Riyadh, Saudi Arabia
Enterprise Resource Planning (ERP) is software that manages and automates the internal processes of an organization. Process automation can increase process speed and quality and reduce cost. Odoo is an open-source ERP platform including more than 15,000 apps. ERP systems such as Odoo are all-in-one management systems. Odoo can be suitable for small and medium organizations but, due to efficiency limitations, it is not suitable for large ones. Furthermore, Odoo can be deployed on either local or public servers, each of which has advantages and disadvantages regarding, for example, internet speed, data synchronization, and anywhere access. In many cases, there is a persistent need for more than one synchronized Odoo instance in several physical places. We modified Odoo to support this kind of requirement and to improve its efficiency by replacing its standard database with a distributed one, namely CockroachDB.
Odoo, ERP, distributed ERP systems, distributed database, CockroachDB, Open Source ERP.
Rina Komatsu and Tad Gonsalves, Sophia University, Tokyo, Japan
Digital images often contain “noise”, which takes away their clarity and sharpness. Most existing denoising algorithms do not offer the best solution because of difficulties such as removing strong noise while leaving the features and other details of the image intact. Faced with this problem, we tried solving it with a Convolutional Neural Network architecture called the “U-Net”. This paper deals with training a U-Net to remove three different kinds of noise: Gaussian noise, blockiness, and camera shake. Our results indicate the effectiveness of the U-Net in denoising images while leaving their features and other details intact.
Deep Learning, Image Processing, Denoising, Convolutional Neural Network, U-Net.
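Training a denoising U-Net as described in the abstract above requires (noisy, clean) image pairs. The sketch below shows only the data-preparation step for the Gaussian case on a hypothetical toy image; blockiness and camera shake would need their own corruption models, and the network itself is omitted.

```python
import numpy as np

def add_gaussian_noise(img, sigma=0.1, seed=0):
    """Make a (noisy, clean) training pair for a denoising network.
    Gaussian corruption only; values are kept in [0, 1]."""
    rng = np.random.default_rng(seed)
    noisy = img + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0.0, 1.0), img

# toy 64x64 gradient "image" with intensities in [0, 1]
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noisy, target = add_gaussian_noise(clean)
print(noisy.shape, float(np.abs(noisy - target).mean()))
```

During training, `noisy` would be the network input and `target` the regression target, so the U-Net learns the mapping from corrupted to clean pixels.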
Junta Watanabe and Tad Gonsalves, Sophia University, Tokyo, Japan
Moving object detection is one of the fundamental technologies necessary to realize autonomous driving. In this study, we propose the prediction of in-vehicle camera images by a Generative Adversarial Network (GAN). From past images input to the system, it predicts future images at the output. By predicting the motion of a moving object, the system can predict the object’s destination. The proposed model can predict the motion of moving objects such as cars, bicycles, and pedestrians.
Deep Learning, Image Processing, Convolutional Neural Network, GAN, DGAN
Mohammad Hassan Anjom AHoa, Department of Mathematics, Vali-e-Asr University of Rafsanjan, Rafsanjan, Iran
In this paper, we try to express the strengths and weaknesses of each intelligence technique in order to form a suitable combination of these techniques that yields maximum intelligence, as well as more accurate mathematical structures. The new technique is a combination of applied methods. In this technique, for all previous techniques such as the coherent method, actor analysis, etc., a matrix of elements and parameters is established, and network analysis techniques are used to relate their parameters. While using the weak assumptions employed in the analysis of competing hypotheses technique, the other matrices can also be used, and with the help of the network, the parameters that have the greatest relevance to the other parameters are selected as the preferred subject of analysis.
competitor analysis technique, coherent effect analysis model, model of analysis of actors, intelligence network analysis technique, parameter matrix, and component.
Sadegh Hosseini1 and Mehdi Ahmadi Najafabadi2, 1Department of Mechanical Engineering, Varamin Pishva Branch, Islamic Azad University, Varamin, Iran and 2Department of Mechanical Engineering, Amirkabir University of Technology, Tehran, Iran
This paper presents the results of an Acoustic Emission (AE) method for estimating journal bearing defect size under different working conditions. An experimental test bed was built to investigate the relation between the registered AE signals and the defects in the test bearings. The AE signal features in the time-frequency domain were extracted by the Continuous Wavelet Transform (CWT). Then, Particle Swarm Optimization (PSO) was implemented to identify the optimum coefficients of the CWT as the input of an Artificial Neural Network (ANN) classifier. The meta-parameters of the PSO and the architecture of the ANN were tuned using the Non-dominated Sorting Genetic Algorithm II (NSGA-II). Thus, the eight-class classification problem of defect size in journal bearings was solved.
Acoustic Emission, Particle Swarm Optimization, Artificial Neural Network, Non-dominated Sorting Genetic Algorithm, Defect Size, Journal Bearing.
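To make the PSO step in the abstract above concrete, here is a generic particle swarm optimizer minimizing a toy objective. All hyperparameters and the search range are illustrative assumptions; the paper optimizes CWT coefficients and tunes the PSO with NSGA-II, neither of which is reproduced here.

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: minimizes f over [-5, 5]^dim.
    Generic sketch, not the paper's CWT-coefficient setup."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gpos, gval = pbest[g][:], pbest_val[g]      # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive pull (own best) + social pull (swarm best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gpos[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gval:
                    gpos, gval = pos[i][:], val
    return gpos, gval

# toy objective: the sphere function, minimized at the origin
best, value = pso(lambda p: sum(x * x for x in p), dim=2)
print(value)  # close to 0
```

In the paper’s pipeline, the objective would instead score candidate CWT coefficient sets by the resulting ANN classification performance.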
Abdullah A. Al-Shaher, Department of Computer and Information Systems, College of Business Studies, Public Authority for Applied Education and Training, Kuwait
In this paper, we demonstrate how regression curves can be used to recognize 2D non-rigid handwritten shapes. Each shape is represented by a set of non-overlapping, uniformly distributed landmarks. The underlying models utilize second-order polynomials to model the shapes within a training set. To estimate the regression models, we extract the coefficients that describe the variations of a shape class; a least squares method is used to estimate these models. We then train the coefficients using the Expectation Maximization algorithm. Recognition is carried out by finding the least-error landmark displacement with respect to the model curves. Handwritten isolated Arabic characters are used to evaluate our approach.
Shape Recognition, Arabic Handwritten Characters, Regression Curves, Expectation Maximization Algorithm.
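As a minimal sketch of the pipeline in the abstract above: fit one second-order polynomial per shape class by least squares, then classify a new landmark set by the curve with the smallest displacement error. The toy “arc” and “line” classes are illustrative assumptions, and the EM training stage of the paper is omitted.

```python
import numpy as np

def fit_class_curve(landmark_sets):
    """Least-squares second-order polynomial for one shape class.
    Each landmark set is an (n, 2) array of (x, y) points."""
    pts = np.vstack(landmark_sets)
    return np.polyfit(pts[:, 0], pts[:, 1], 2)   # coefficients [a, b, c]

def recognize(landmarks, class_coeffs):
    """Assign the class whose curve gives the least landmark displacement."""
    errors = {name: np.mean((np.polyval(c, landmarks[:, 0]) - landmarks[:, 1]) ** 2)
              for name, c in class_coeffs.items()}
    return min(errors, key=errors.get)

# toy training classes: a parabola-like shape and a line-like shape
x = np.linspace(-1, 1, 20)
models = {
    "arc":  fit_class_curve([np.column_stack([x, x ** 2])]),
    "line": fit_class_curve([np.column_stack([x, 0.5 * x])]),
}

# a slightly deformed arc should still match the "arc" model
sample = np.column_stack([x, x ** 2 + 0.01 * np.sin(5 * x)])
print(recognize(sample, models))  # → arc
```

In the paper, the curves model full Arabic character shapes and the coefficients are refined by EM across the training set rather than fitted once.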