Volume 8 Number 1 June 2018

1

A Novel Design of Contradiction Based on Grey Hole Attack Minimization for MANET
T. Manjula, M. Malathi

Abstract- Mobile Ad-Hoc Network (MANET) is a self-configuring network of mobile nodes connected by wireless links in a random topology (Md. Sameeruddin Khan, Md. Yusuf Mulge (2017)). The nodes do not depend on any fixed infrastructure. The main purpose of a MANET is to let wireless devices such as mobile phones, laptops and other communication hardware communicate with each other over the wireless network. A MANET has a dynamic topology: each node has unrestricted mobility and connectivity, and its links can change within a fraction of a second as nodes move frequently. Routing in a MANET is performed cooperatively between nodes; each node works as a router that forwards packets for other nodes (Hosny M. Ibrahim, Nagwa M. Omar, Ebram K. William (2015)). In this paper, our aim is to describe the minimization of the grey hole attack in MANETs using a network simulator tool, addressing the detection and prevention of the malicious nodes behind grey hole attacks in mobile ad-hoc networks.
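As a rough illustration of the detection idea, the Python sketch below simulates watchdog-style monitoring: neighbours count how many packets a relay node actually forwards and flag nodes whose drop ratio exceeds a threshold. The Node class, the 40% threshold and the traffic model are illustrative assumptions, not the paper's scheme.

```python
# Minimal sketch: watchdog-style grey hole detection by forwarding ratio.
# A grey hole node selectively drops packets it promised to forward.
# Names and thresholds are illustrative assumptions, not the paper's design.
import random

DROP_THRESHOLD = 0.4   # assumed: flag a node that drops >40% of relayed packets

class Node:
    def __init__(self, node_id, drop_prob=0.0):
        self.node_id = node_id
        self.drop_prob = drop_prob      # >0 models grey hole behaviour
        self.received = 0
        self.forwarded = 0

    def relay(self, packet):
        """Forward a packet unless this node (maliciously) drops it."""
        self.received += 1
        if random.random() < self.drop_prob:
            return False                # packet silently dropped
        self.forwarded += 1
        return True

def detect_grey_holes(nodes):
    """Flag nodes whose observed drop ratio exceeds the threshold."""
    suspects = []
    for n in nodes:
        if n.received == 0:
            continue
        drop_ratio = 1.0 - n.forwarded / n.received
        if drop_ratio > DROP_THRESHOLD:
            suspects.append((n.node_id, round(drop_ratio, 2)))
    return suspects

random.seed(1)
nodes = [Node("A"), Node("B", drop_prob=0.6), Node("C")]   # B is the grey hole
for _ in range(200):
    for n in nodes:
        n.relay("pkt")
print(detect_grey_holes(nodes))        # e.g. [('B', 0.6)]
```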

2

Ear Biometric Compression using Huffman Coding
P. Ramya, I. Laurence Aroquiaraj

Abstract- An ear recognition framework represents a potential tool in forensic applications. Even in cases where the facial features of a suspect are partly or fully covered, an image of the outer ear may suffice to disclose the subject's identity. In forensic scenarios, images may stem from surveillance cameras in environments where image compression is common practice to overcome the limitations of storage and transmission capacities. Yet, the impact of intensive image compression on ear recognition has remained undocumented. In this paper, we investigate the impact of various cutting-edge image compression techniques on ear recognition and ear detection. The Huffman coding compression technique is applied to ear images. Assessments conducted on an uncompressed ear database are considered for various stages in the processing chain of an ear recognition framework where compression might be applied, corresponding to the most pertinent forensic scenarios. Experimental results highlight the strengths of the proposed concept.
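To make the compression step concrete, the following minimal Python sketch builds a Huffman code over 8-bit pixel intensities and compares the encoded size against the raw size. The tiny pixel list stands in for an ear image; nothing here reflects the paper's actual pipeline or database.

```python
# Minimal sketch: Huffman coding of 8-bit image pixel intensities.
# Illustrates the compression step only; the recognition pipeline and
# image source are assumptions outside this sketch.
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a prefix-code table {symbol: bitstring} from symbol frequencies."""
    freq = Counter(data)
    # Heap entries: (frequency, tie_breaker, tree); the unique tie_breaker
    # prevents Python from ever comparing two trees directly.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (left, right)))
        counter += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):            # internal node: recurse
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:                                  # leaf: assign its code
            codes[tree] = prefix or "0"        # single-symbol edge case
    walk(heap[0][2], "")
    return codes

# Illustrative "image": a flat list of grey-level pixel values.
pixels = [12, 12, 12, 200, 200, 54, 12, 54, 12, 200]
codes = huffman_codes(pixels)
encoded = "".join(codes[p] for p in pixels)
print(codes, len(encoded), "bits vs", 8 * len(pixels), "bits uncompressed")
```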

3

A Survey on Factorization Methods in MapReduce Environment
M. Rajathi, M. Ramaswami

Abstract- Big data is a term encompassing complex types of large datasets that are hard to process with traditional data processing systems. Innumerable challenges are encountered with big data, such as storage, transmission, visualization, searching, analysis, sharing, and security and privacy violations. Parallelism is a computational mechanism used to process such large amounts of data in an inexpensive and more efficient way. Hadoop is the core platform for handling massive data; it runs applications using the MapReduce algorithm, where the data is processed in parallel on different CPU nodes. MapReduce is a software framework for applications that process vast amounts of data (multi-terabyte datasets) in parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner. Due to the increasing availability of massive datasets in the form of matrices, researchers face the problem that the matrices to be factorized have dimensions on the order of millions. Recent research has shown that it is possible to factorize such large matrices within tens of hours using the MapReduce distributed programming platform. In this paper, we discuss two different matrix decomposition implementations using MapReduce: QR factorization and the SVD, two fundamental matrix factorization methods with applications throughout scientific computing and data analysis. The QR method decomposes the matrix into partitions and applies multiple processes to compute the QR decomposition in parallel, which makes the decomposition much faster than computing the QR decomposition on the original matrix. In the same way, singular value decomposition (SVD) shows strong vitality in the area of information analysis and has significant application value in many scientific big data fields. For a large-scale matrix, applying the SVD computation directly is both time consuming and memory demanding. To speed up the computation of the SVD, a MapReduce model has many advantages over a message passing interface model, such as fault tolerance, load balancing and simplicity. This survey paper discusses different QR and SVD factorization methods and how they are implemented in a MapReduce environment.
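The partitioned QR idea described above can be sketched in a few lines of Python with NumPy: a "map" phase factorizes each row block independently, and a "reduce" phase factorizes the stacked local R factors, recovering the R of the full matrix. This serial simulation only illustrates the tall-and-skinny QR pattern under assumed block counts and matrix sizes; it is not any of the surveyed MapReduce implementations.

```python
# Minimal sketch of the partitioned (tall-and-skinny) QR idea, simulated
# serially with NumPy; in MapReduce the map step would run per block on
# separate nodes. An illustration, not the surveyed implementations.
import numpy as np

def map_phase(blocks):
    """Map: each worker computes a local QR of its row block and emits R."""
    return [np.linalg.qr(b)[1] for b in blocks]

def reduce_phase(r_factors):
    """Reduce: stack the local R factors and factorize them once more."""
    stacked = np.vstack(r_factors)
    return np.linalg.qr(stacked)[1]           # final R of the full matrix

rng = np.random.default_rng(0)
A = rng.standard_normal((10_000, 8))          # tall-and-skinny matrix
blocks = np.array_split(A, 4)                 # 4 "mappers"
R = reduce_phase(map_phase(blocks))

# R matches the direct factorization up to the sign of each row.
R_direct = np.linalg.qr(A)[1]
print(np.allclose(np.abs(R), np.abs(R_direct)))   # True
```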

4

Earlier Stage Identification of Glaucoma Disease using Segmentation Algorithm in IRIS Image
G. B. Govinda Prabhu, R. Mahalakshmi Priya, M. Vasumathy

Abstract- The eye is one of the most important sensory organs in the human body. Eye diseases are a common health issue around the world. Two such diseases are cataract and conjunctivitis. Cataract causes a clouding of the lens that impairs vision and, if left untreated for long, leads to permanent blindness. Conjunctivitis, or pink eye, is a condition in which the conjunctiva of the eye is inflamed by an infection or by an allergic reaction. It can affect one or both eyes and leads to redness or discharge. Bacterial and viral conjunctivitis may be very contagious. The proposed method to diagnose eye diseases is based on an effective computational approach. Iris images were captured before and after the treatment of eye disease, and the output shows the mathematical difference obtained from the treatment. This identification system effectively withstood most ophthalmic diseases such as corneal oedema, iridotomies and conjunctivitis. The proposed iris recognition may be used to solve iris-related problems that can arise in key biometric technology and medical diagnosis. New glaucoma diagnostic technologies are entering clinical care and are changing quickly. A systematic review of these technologies will help clinicians and decision makers and help detect gaps that need to be addressed.
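The abstract does not specify the segmentation algorithm, so the sketch below shows one common iris-segmentation step for illustration only: locating the circular pupil and iris boundaries with OpenCV's Hough circle transform on a synthetic eye image. The image, parameters and thresholds are all assumptions, not the authors' method.

```python
# Minimal sketch of one common iris-segmentation step (circular Hough
# transform via OpenCV) on a synthetic eye image; purely illustrative.
import cv2
import numpy as np

# Synthetic "eye": bright background, grey iris disc, dark pupil disc.
img = np.full((240, 320), 220, dtype=np.uint8)
cv2.circle(img, (160, 120), 70, 120, -1)       # iris
cv2.circle(img, (160, 120), 28, 20, -1)        # pupil

# Smooth first so the gradient-based detector sees clean edges.
blurred = cv2.GaussianBlur(img, (9, 9), 2)
circles = cv2.HoughCircles(
    blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
    param1=100, param2=30, minRadius=15, maxRadius=100)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"boundary at ({x}, {y}), radius {r}")
```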

5

A Survey on Big Data Bio-informatics
R. Samya

Abstract- With the advent of Internet of Things (IoT) and Web 2.0 technologies, there has been tremendous growth in the amount of data generated. This paper emphasizes the need for big data, and the technological advancements, tools and techniques used to process big data are discussed. Since traditional technologies like Relational Database Management Systems (RDBMS) have their own limitations in handling big data, new technologies have been developed to handle it and to derive useful insights. The availability of big data, coupled with new data analytics, challenges established epistemologies across the sciences, social sciences and humanities, and the paper assesses the extent to which these are engendering paradigm shifts across multiple disciplines. However, developing high-performance computing devices and software tools to deal with this complex and growing volume of data still persists as a big challenge among computer scientists and biologists. The basic objective of this paper is to study various analysis and visualization tools in bioinformatics, big databases in bioinformatics, and high-level bioinformatics system architectures for handling the voluminous data in bioinformatics.