Volume 1 Number 1 June 2011

1

Performance Evaluation of Quality of Service and Security using Single path and Multipath in VoIP
K. Maheswari, M. Punithavalli

Abstract: Voice over Internet Protocol (VoIP) carries speech content over an Internet Protocol network. Its introduction marked a major change in the telecommunications industry, replacing the plain old telephone system, with cost reduction as its principal attraction. Transmitting real-time voice data is not as easy as transmitting ordinary text data: it suffers from packet loss, delay, and quality and security problems, all of which degrade the performance and quality of a VoIP system. This paper addresses the Quality of Service (QoS) and security aspects of VoIP through a modified secret sharing algorithm applied over a single path and over multiple paths, with reduced packet loss. The modified secret sharing algorithm is implemented on a single path with the AODV routing protocol and on multiple paths with the AOMDV routing protocol. The work is implemented in the Network Simulator NS-2, and the simulation results show that higher security and quality are achieved in terms of reduced delay and increased packet delivery ratio.
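The abstract does not specify the modified secret sharing scheme, so the following is only a generic sketch of the underlying idea: an (n, n) XOR splitting in which a voice packet is divided into shares, each of which could travel a different route in a multipath (AOMDV-style) setting, and all of which are needed for reconstruction. Function names and the share count are illustrative.

```python
import os

def split_shares(packet, n):
    """Split a packet into n XOR shares; all n are required to rebuild.
    Illustrative (n, n) splitting only -- the paper's modified secret
    sharing algorithm is not described in the abstract."""
    shares = [os.urandom(len(packet)) for _ in range(n - 1)]
    last = bytearray(packet)
    for share in shares:
        for i, b in enumerate(share):
            last[i] ^= b          # final share makes the XOR of all shares equal the packet
    shares.append(bytes(last))
    return shares

def combine_shares(shares):
    """XOR all shares together to recover the original packet."""
    packet = bytearray(len(shares[0]))
    for share in shares:
        for i, b in enumerate(share):
            packet[i] ^= b
    return bytes(packet)
```

An eavesdropper on any single path sees only a uniformly random share, which is the security argument for sending shares over disjoint routes.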

2

Metrics of a New Symmetrical Encryption Algorithm
G. Ramesh, R. Umarani

Abstract: The new symmetrical algorithm avoids key exchange between users and reduces the time taken for encryption, decryption, and authentication. It operates at a higher data rate than the DES, 3DES, AES, UMARAM, RC2, and RC6 algorithms, and it is applied to a text file and an image as example applications. The resulting encryption is more secure and achieves a higher data rate than these algorithms. A comparison was conducted among the encryption algorithms DES, 3DES, AES, UMARAM, RC2, and RC6 at different settings for each algorithm, such as different sizes of data blocks, data types, battery power consumption, key sizes, and finally encryption/decryption speed. Experimental results are given to demonstrate the effectiveness of each algorithm.

3

Frequent Itemset Mining of Market Basket Data using K-Apriori Algorithm
D. Ashok Kumar, M. C. Loraine Charlet Annie

Abstract: Constant advances in computing power make data collection and storage easier, so databases tend to be very large. Market basket databases are very large binary databases. To identify the frequently bought items in market basket data, a novel approach called the K-Apriori algorithm is proposed here, in which the binary data is first clustered using a linear data transformation technique, and then frequent itemsets and association rules are mined. Experimental results show that the proposal performs better in terms of both objective and subjective measures.
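The classic Apriori pass structure that K-Apriori builds on can be sketched as follows; the paper's clustering-based partitioning step is not reproduced here, so this shows only the level-wise frequent itemset mining with minimum-support pruning. The example transactions are invented for illustration.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Level-wise frequent itemset mining: keep k-itemsets whose support
    meets min_support, then build (k+1)-candidates from survivors."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n

    # Level 1: all single items with sufficient support.
    items = {frozenset([i]) for t in transactions for i in t}
    level = {s for s in items if support(s) >= min_support}
    frequent, k = {}, 1
    while level:
        frequent.update({s: support(s) for s in level})
        k += 1
        # Candidate generation: unions of frequent (k-1)-itemsets of size k.
        candidates = {a | b for a in level for b in level if len(a | b) == k}
        # Prune: every (k-1)-subset must itself be frequent (Apriori property).
        level = {c for c in candidates
                 if all(frozenset(sub) in frequent for sub in combinations(c, k - 1))
                 and support(c) >= min_support}
    return frequent
```

With a minimum support of 0.5, an itemset survives only if it appears in at least half of all market baskets.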

4

MANET Security for Reactive Routing Protocol with Node Reputation Scheme
A. Suresh, K. Duraiswamy

Abstract: A mobile node’s reputation in a Mobile Ad hoc Network (MANET) identifies its trustworthiness for secure data communication. Because a MANET is self-organized and distributed, the unknown communication status of a node during its initial period has a great impact on effective data transfer. The functional operation of the mobile network relies on trusted cooperation between nodes, so the major factor in securing a MANET is the quantification of node reputation and trustworthiness. Previous literature provides an uncertainty model that reflects a node’s confidence in the sufficiency of its past experience and the effect of collecting trust information from nodes of unknown status; node mobility reduces this uncertainty and speeds up trust convergence. Mobility-assisted uncertainty reduction schemes comprise proactive schemes, which achieve trust convergence, and reactive schemes, which provide node authentication and reputation; together they offer an acceptable trade-off between delay and uncertainty. The mobility-based node reputation scheme presented in this paper identifies and monitors node trustworthiness in sharing information within the ad hoc network. The uncertainty in mobile node information is handled through mobility characteristics, and each node’s reputation is evaluated in order to trust or discard its communication. Simulations evaluate the performance of the scheme by measuring node consistency behavior, neighboring communication rate, and path diversity. The average neighboring communication rate is higher for the proposed mobility-based reputation scheme than for the reactive routing protocols.

5

Hybridized Oscillating Search Algorithm for Unsupervised Feature Selection
D. Devakumari

Abstract: In feature selection, the search problem of finding a subset of features from a given set of measurements has been of interest for a long time; however, unsupervised methods are scarce. An unsupervised criterion based on SVD-entropy (Singular Value Decomposition) selects a feature according to its contribution to the entropy (CE), calculated on a leave-one-out basis. Based on this criterion, this paper proposes a Hybridized Oscillating Search feature selection method (HOS) that does not follow a predefined direction of search (forward or backward). It is a randomized search method that begins with a random subset of features. The proposed HOS method uses a sequential feature selection method called Simple Ranking based on CE to obtain the initial feature subset. The subset is then repeatedly modified through up and down swings, which form the oscillating cycles: the up swing adds good features to the current subset, while the down swing removes the worst features from it. After each oscillating cycle, the subset is evaluated by comparing its predictive accuracy with the known classification, using common indices such as the Rand Index and the Jaccard Coefficient. If the last oscillating cycle did not find a better subset, the process ends with the current subset.
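The CE criterion the abstract cites can be sketched as follows: the dataset entropy is computed from the normalized squared singular values, and each feature's contribution is the leave-one-out drop in that entropy. This covers only the Simple Ranking ingredient; the oscillating up/down swings themselves are not reproduced, and the exact normalization used in the paper is assumed.

```python
import numpy as np

def svd_entropy(X):
    """Shannon entropy of the normalized squared singular values of X
    (the dataset entropy underlying the CE criterion)."""
    s = np.linalg.svd(np.asarray(X, float), compute_uv=False)
    v = s ** 2 / np.sum(s ** 2)
    v = v[v > 1e-12]              # drop numerically zero eigen-weights
    return float(-np.sum(v * np.log(v)))

def ce_scores(X):
    """CE_i = E(all features) - E(all features except i), computed
    leave-one-out over the columns; a higher CE marks a feature that
    contributes more to the dataset's entropy."""
    X = np.asarray(X, float)
    full = svd_entropy(X)
    return np.array([full - svd_entropy(np.delete(X, i, axis=1))
                     for i in range(X.shape[1])])
```

Simple Ranking would then sort features by these scores and seed the oscillating search with the top-ranked subset.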

6

Feature Selection Based On Neuro-Fuzzy-Rough System
A. Pethalakshmi

Abstract: Rough set theory provides a mathematical tool for dealing with the uncertainty, inexactness, and vagueness of an information system. An information system may contain a certain amount of redundancy that does not aid knowledge discovery and may in fact mislead the process; the redundant attributes may be eliminated in order to reduce the complexity of the problem. This paper proposes a Neuro-Fuzzy-Rough Quickreduct (NFRQ) algorithm to select features from an information system. A neural network is used to construct the membership functions for fuzzifying the crisp data. Experiments are carried out on data sets from the UCI machine learning repository and on a Human Immunodeficiency Virus (HIV) data set in order to demonstrate the efficiency of the proposed algorithm.

7

Supervised Feature Subset Selection using Extended Fuzzy Absolute Information Measure for Different Classifiers
K. Sarojini

Abstract: Feature subset selection plays an important role in data mining and machine learning applications. Its main aim is to reduce dimensionality by removing irrelevant and redundant features, thereby improving classification accuracy. This paper presents a supervised feature selection method called Extended Fuzzy Absolute Information Measure (EFAIM) for different classifiers. First, a discretization algorithm is applied to the numeric and nominal features of a database to construct fuzzy sets for each feature. EFAIM is then applied to select a feature subset, focusing on boundary samples. To verify the effectiveness of the method, several experiments were conducted with different classifiers, namely LMT, Naïve Bayes, SMO, C4.5, JRip, PART, and Simple CART, on different UCI datasets. The experimental results indicate that the proposed algorithm achieves better classification accuracy on all datasets, almost always above 75%. On the Wine dataset it reaches 96% classification accuracy with the Naïve Bayes classifier, and on the Ionosphere dataset it gives almost 89% classification accuracy for most classifiers with a minimal selected feature subset. Thus, improved classification accuracy is obtained with a subset of a minimum number of features at minimum processing time.

8

An Efficient Fingerprint Enhancement System using Fuzzy Based Filtering Technique
K. Srinivasan , C. Chandrasekar

Abstract: Fingerprint identification is one of the salient areas of biometric identification. The quality of the fingerprint image is imperative for an accurate matching process. Normally, the contrast of the image, i.e., the difference between contiguous pixels, is improved during the preprocessing phase of fingerprint matching. Several enhancement techniques are available for fingerprint identification, but the existing techniques are complex and ineffective. To overcome their drawbacks, we propose an efficient and robust fingerprint enhancement technique based on fuzzy filtering. A fuzzy modeling approach is employed to remove noise and to improve the luminosity of the ridges. The fuzzy filter values are evaluated and superior results are produced in the image domain; the probabilities of gray values are measured from the position of the input image pixel. Finally, the results show the enhanced performance of the proposed fuzzy filtering technique.

9

Verdict Accuracy of Quick Reduct Algorithm for Gene Expression Data
T. Chandrasekhar, K. Thangavel, E. N. Sathishkumar

Abstract: In gene expression data, the number of training samples is very small compared to the large number of genes involved in the experiments; with gene selection, the cost of biological experiments and decision-making can be greatly reduced by analyzing only the marker genes. Since dealing with high-dimensional data is computationally complex and sometimes even intractable, several feature reduction methods have recently been developed to reduce the dimensionality of the data and simplify analysis in applications such as text categorization, signal processing, image retrieval, and gene expression. Among feature reduction techniques, feature selection is one of the most popular because it preserves the original features. This paper studies a feature selection method based on rough set theory. The K-Means and Fuzzy C-Means (FCM) algorithms are then applied to the reduced feature set without considering class labels, and the resulting clusters are compared with the original class labels. A Back Propagation Network (BPN) is also used for classification. The performance of K-Means, FCM, and BPN is analyzed through the confusion matrix, and BPN is found to perform comparatively well.
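One common way to compare unsupervised cluster assignments against known class labels, as the abstract describes, is to map each cluster to its majority true class and score the resulting accuracy (the diagonal mass of the remapped confusion matrix). The paper's exact evaluation protocol is not given, so this is a plausible sketch with invented labels.

```python
import numpy as np

def cluster_accuracy(true_labels, cluster_labels):
    """Map each cluster to its majority true class, then count how many
    samples land in their cluster's majority class (a.k.a. purity)."""
    true_labels = np.asarray(true_labels, dtype=int)
    cluster_labels = np.asarray(cluster_labels, dtype=int)
    correct = 0
    for c in np.unique(cluster_labels):
        members = true_labels[cluster_labels == c]
        correct += np.bincount(members).max()  # size of the majority class
    return correct / len(true_labels)
```

K-Means, FCM (hardened by maximum membership), and BPN predictions can all be passed through the same function for a side-by-side comparison.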

10

Rough Set Based Unsupervised Feature Selection Using Relative dependency Measures
C. Velayutham, K. Thangavel

Abstract: Feature Selection (FS) is a process that attempts to select the most informative features. It is an important step in knowledge discovery from data. Conventional supervised FS methods evaluate various feature subsets using an evaluation function or metric to select only those features related to the decision classes of the data under consideration. However, in many data mining applications decision class labels are often unknown or incomplete, which indicates the significance of unsupervised feature selection, where no decision class labels are provided. The problem is that not all features are important: some may be redundant, and others may be irrelevant and noisy. In this paper, we propose a new rough set-based unsupervised feature selection method using relative dependency measures. The method employs a backward elimination-type search to remove features from the complete set of original features. The classification performance is evaluated using the WEKA tool. The method is compared with an existing supervised method and is shown to effectively remove redundant features.
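The backward elimination idea above can be sketched with a simple relative dependency measure: the ratio of equivalence-class counts with and without an attribute. A ratio of 1 means the attribute adds no discernibility beyond the remaining features, so it can be dropped. The paper's precise measure and search order are not given in the abstract; this is an illustrative version on a toy table where attribute "c" duplicates "a".

```python
def partitions(data, attrs):
    """Number of equivalence classes induced on the rows by `attrs`
    (rows are indiscernible iff they agree on every attribute)."""
    return len({tuple(row[a] for a in attrs) for row in data})

def relative_dependency(data, subset, attr):
    """|U/IND(subset)| / |U/IND(subset + [attr])|; equals 1.0 exactly
    when `attr` is fully determined by `subset`."""
    return partitions(data, subset) / partitions(data, subset + [attr])

def backward_eliminate(data, attrs):
    """Backward elimination: drop each attribute whose information is
    already captured by the attributes still kept."""
    kept = list(attrs)
    for a in list(attrs):
        rest = [x for x in kept if x != a]
        if rest and relative_dependency(data, rest, a) == 1.0:
            kept = rest
    return kept
```

Note the result depends on scan order when two attributes are mutually redundant: the first one examined is removed and the other survives.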

11

Optimal Web Page Category for Web Personalization Using Biclustering Approach
P. S. Raja, R. Rathipriya

Abstract: In this paper, biclustering of web usage data using a Genetic Algorithm is proposed to extract the optimal web page category. Three different fitness functions based on the Mean Squared Residue (MSR) score are used to study the performance of the proposed biclustering method. Experiments were conducted on the CTI dataset, and the results of the different fitness functions are analyzed. The valuable outcome of the proposed biclustering method can be used to better understand the behavioral characteristics of visitor or user segments, to improve the organization and structure of the site, and to create a personalized experience for visitors by providing recommendations.
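The MSR score underlying the fitness functions is the standard Cheng–Church residue, H(I, J) = (1/|I||J|) Σ (a_ij − a_iJ − a_Ij + a_IJ)², where a_iJ, a_Ij, and a_IJ are the row, column, and overall means of the bicluster. A low MSR means the selected users and pages vary coherently. The genetic-algorithm wrapper and the paper's three fitness variants are not reproduced here.

```python
import numpy as np

def mean_squared_residue(submatrix):
    """Cheng-Church MSR of a bicluster: mean squared deviation of each
    entry from its row mean + column mean - overall mean. 0 indicates
    a perfectly (additively) coherent bicluster."""
    A = np.asarray(submatrix, dtype=float)
    row_means = A.mean(axis=1, keepdims=True)   # a_iJ
    col_means = A.mean(axis=0, keepdims=True)   # a_Ij
    overall = A.mean()                          # a_IJ
    residue = A - row_means - col_means + overall
    return float((residue ** 2).mean())
```

A GA fitness function would typically reward low MSR while penalizing trivially small biclusters, e.g. fitness = volume / (1 + MSR).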

12

Segmentation of Brain Portion From MRI of Head Scans Using K-Means Cluster
K. Somasundaram, S. Vijayalakshmi, T. Kalaiselvi

Abstract: Segmentation is one of the essential processes in image processing. The objective of this paper is to separate the human brain area from the non-brain areas in MRI head scans. In our method, the brain portion is detected using a clustering technique: the proposed method clusters the image into 15 regions using the k-means clustering technique. The non-brain parts such as skull, sclera, fat, and skin are clustered into regions according to their intensity; these regions are eliminated, and the remaining regions are merged to form the brain portion. The proposed model has been tested on various image slices and found to give good segmentation. Experimental results show that the proposed method gives average values of 0.95 for the Jaccard coefficient, 0.97 for the Dice coefficient, 0.96 for sensitivity, and 0.98 for specificity.
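The Jaccard and Dice coefficients reported above are standard overlap measures between a binary segmentation mask and the ground-truth mask; a minimal sketch of both follows. The MRI clustering pipeline itself is not reproduced here.

```python
import numpy as np

def jaccard(seg, truth):
    """Jaccard coefficient: |seg AND truth| / |seg OR truth|."""
    seg, truth = np.asarray(seg, bool), np.asarray(truth, bool)
    inter = np.logical_and(seg, truth).sum()
    union = np.logical_or(seg, truth).sum()
    return float(inter / union)

def dice(seg, truth):
    """Dice coefficient: 2 * |seg AND truth| / (|seg| + |truth|)."""
    seg, truth = np.asarray(seg, bool), np.asarray(truth, bool)
    inter = np.logical_and(seg, truth).sum()
    return float(2 * inter / (seg.sum() + truth.sum()))
```

Both range over [0, 1], with 1 meaning the segmented brain mask matches the ground truth exactly; Dice weights the intersection more heavily, which is why the reported Dice value (0.97) exceeds the Jaccard value (0.95).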

13

Energy Efficient Cluster based Key Management & Authentication Technique for Wireless Sensor Networks
T. Lalitha, R. Umarani

Abstract: In wireless sensor networks, security attacks can compromise a large part of the network, causing node damage or disturbance in the data flow. During the re-keying process, if the re-key from the sink is not securely transmitted to the compromised node, it can lead to a denial-of-service attack. In this paper, we propose a cluster-based key management technique for authentication in wireless sensor networks. Initially, clusters are formed in the network and the cluster heads are selected based on energy cost, coverage, and processing capacity. The sink assigns a cluster key to every cluster and an EBS key set to every cluster head; the EBS key set contains the pairwise keys for intra-cluster and inter-cluster communication. On detecting a compromised node in its cluster, the cluster head sends a request to the sink to perform the re-keying operation. The re-keying process uses a hashing function for authentication, so nodes are recovered in a secure manner. During data transmission towards the sink, the data passes through two phases of encryption, ensuring security in the network. Simulation results show that the proposed approach recovers compromised nodes in a secure manner.
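The abstract's hash-based authentication of the re-key can be illustrated with a keyed hash: the sink tags the new key under the current cluster key, and a receiving node accepts the re-key only if the tag verifies. The paper's actual hashing construction and EBS key assignment are not specified, so HMAC-SHA256 here is purely an assumed stand-in.

```python
import hmac
import hashlib

def rekey_message(cluster_key, new_key):
    """Sink side: authenticate the new key under the current cluster key.
    HMAC-SHA256 is an illustrative substitute for the paper's unspecified
    hashing function."""
    tag = hmac.new(cluster_key, new_key, hashlib.sha256).digest()
    return new_key, tag

def accept_rekey(cluster_key, new_key, tag):
    """Node side: install the new key only if the tag verifies, so a
    forged re-key (the DoS vector the abstract mentions) is rejected."""
    expected = hmac.new(cluster_key, new_key, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

`hmac.compare_digest` is used instead of `==` to avoid leaking tag bytes through timing differences.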