Volume 2 Number 3 December 2012

1

An Efficient VLSI Architecture for 3D DWT Using Lifting Scheme
A Karthikeyan, P Saranya, N Jayashree

Abstract: The role of compression is to reduce the bandwidth required for transmission and the memory required for storage of all forms of data, as it would not be practical to put images, audio, and video on websites without compression. The medical community has many applications for image compression, often involving various types of diagnostic imaging. The use of the wavelet transform is now well established owing to its multiresolution and scaling properties. Among the various techniques, we have used the lifting scheme, as its structure guarantees perfect reconstruction. The system is fully compatible with the JPEG 2000 standard. The most important property of this concept is the possibility of simple and fast implementation on an FPGA chip. It requires fewer operations and provides in-place computation of the wavelet coefficients. This paper presents a method that implements the 3-D lifting wavelet transform on an FPGA. The architecture has an efficient pipeline structure that achieves high-throughput processing without any on-chip memory or first-in-first-out (FIFO) accesses. The proposed VLSI architecture is more efficient than previously proposed architectures in terms of memory access, hardware regularity, simplicity, and throughput.
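
As an illustration only (the abstract does not give the filter or the FPGA data path), below is a minimal Python sketch of one level of the CDF 5/3 lifting decomposition used by JPEG 2000, assuming a 1-D signal of even length and periodic boundary extension; a 3-D transform would apply the same step along each axis of the volume. The inverse simply undoes the lifting steps in reverse order, which is why perfect reconstruction is automatic.

    import numpy as np

    def cdf53_lifting_forward(x):
        """One level of the CDF 5/3 lifting DWT on a 1-D signal of even length."""
        x = np.asarray(x, dtype=float)
        s, d = x[0::2].copy(), x[1::2].copy()        # split into even / odd samples
        d -= 0.5 * (s + np.roll(s, -1))              # predict step: detail coefficients
        s += 0.25 * (d + np.roll(d, 1))              # update step: approximation coefficients
        return s, d                                  # low-pass and high-pass sub-bands

    def cdf53_lifting_inverse(s, d):
        """Undo the lifting steps in reverse order for perfect reconstruction."""
        s = s - 0.25 * (d + np.roll(d, 1))
        d = d + 0.5 * (s + np.roll(s, -1))
        x = np.empty(2 * len(s))
        x[0::2], x[1::2] = s, d
        return x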

2

Breast Cancer Data Analysis Using Weighted Fuzzy Ant Colony Clustering
S. Nithya and R. Manavalan

Abstract: The performance of data partitioning using machine learning techniques is usually evaluated with distance measures alone, i.e., similarity between transactions is computed with distance measurement algorithms such as the Euclidean and cosine distance measures. These distance measures do not consider global connectivity. The Distance with Connectivity (DWC) model is used to estimate the distance between transactions with both local consistency and global connectivity information. Ant Colony Optimization (ACO) techniques are used for the data clustering process, and ACO is integrated with DWC to find spherical-shaped clusters. The global distance measure model, DWC, is enhanced with fuzzy logic: the transaction weights are updated using a fuzzification process, and all attribute weight values are updated with fuzzy-set weight values. The distance-with-connectivity model is tuned to estimate the distance between transactions using the fuzzy-set values, and the resulting distance measure efficiently handles uneven transaction distributions. The ant colony clustering algorithm is also improved with fuzzy logic: the similarity computations are carried out with fuzzy distance measurement models, and all fuzzified attribute values are updated with weight values. Uneven data distribution handling, accurate distance measurement, and cluster accuracy are the features of the proposed clustering algorithm.
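
The DWC model itself is not specified in the abstract; purely as a hedged illustration of the general idea of attribute-weighted fuzzy distance, the following Python sketch scales each attribute's contribution by a fuzzy membership weight (the transactions and weights shown are hypothetical):

    import numpy as np

    def fuzzy_weighted_distance(a, b, fuzzy_weights):
        """Euclidean distance where each attribute is scaled by a fuzzy weight in [0, 1]."""
        a, b, w = (np.asarray(v, dtype=float) for v in (a, b, fuzzy_weights))
        return float(np.sqrt(np.sum(w * (a - b) ** 2)))

    # hypothetical transactions and attribute weights obtained from a fuzzification step
    print(fuzzy_weighted_distance([2.0, 5.0, 1.0], [3.0, 4.0, 9.0], [0.9, 0.5, 0.1]))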

3

Optimized K-Means and Fuzzy C-Means for MRI Brain Image Segmentation
A. Pethalakshmi and A. Banumathi

Abstract: Image segmentation is an important aspect of medical image processing, where the K-Means and Fuzzy C-Means clustering approaches are widely used in biomedical applications, particularly for brain tumor detection. In this paper, the K-Means and Fuzzy C-Means clustering algorithms are analyzed and found to have two major drawbacks. First, they force the objects to be clustered into a user-defined number K of clusters; without prior knowledge of the database, it is difficult to predict the number of resultant clusters. Second, the quality of the resulting clusters depends on the initial seeds, which are selected randomly. With random selection there is a possibility of choosing nearby seeds as the centroids of different clusters; in this case, too, the algorithm forces all the data towards the fixed centroid of each cluster, and misdiagnosis becomes possible. Because the brain is a highly sensitive, control-centered organ, applying these algorithms with the above drawbacks may lead to wrong diagnoses, which is a life risk. To overcome these drawbacks, the current paper focuses on developing the Unique Clustering with Affinity Measure (UCAM) and Fuzzy-UCAM algorithms, which cluster without defining initial seeds or the number of resultant clusters. Unique clustering is obtained with the help of affinity measures.
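
To make the second drawback concrete, the following minimal K-Means sketch (plain NumPy, hypothetical data, not the UCAM algorithm) shows how the result depends on both the user-supplied K and the randomly chosen initial seeds:

    import numpy as np

    def kmeans(X, k, seed, iters=100):
        """Plain K-Means; the outcome depends on K and on the random initial centroids."""
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
            new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
                            for j in range(k)])
            if np.allclose(new, centroids):
                break
            centroids = new
        return labels

    X = np.random.default_rng(0).normal(size=(200, 2))      # hypothetical feature vectors
    print(kmeans(X, k=3, seed=1)[:10])                      # different seeds may give
    print(kmeans(X, k=3, seed=2)[:10])                      # different partitions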

4

High Performance and Low Power Modified Radix-2⁵ FFT Architecture for High Rate WPAN Application
B. Pushparaj and C. Paramasivam

Abstract: This paper presents a high-speed, low-complexity modified radix-2⁵ 512-point Fast Fourier Transform (FFT) architecture using an eight-data-path pipelined approach for high-rate wireless personal area network applications. A novel modified radix-2⁵ FFT algorithm that reduces the hardware complexity is proposed. This method reduces the number of complex multiplications and the size of the twiddle-factor memory, and it uses a complex constant multiplier instead of a complex Booth multiplier. The results demonstrate that the total gate count of the proposed FFT architecture is 17,201. Furthermore, the highest throughput rate is up to 2.4 GS/s at 300 MHz.
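
The modified radix-2⁵ hardware algorithm itself is not detailed in the abstract; purely as a reference point for where the twiddle factors and complex multiplications arise, here is a textbook radix-2 decimation-in-time FFT sketch in Python (not the proposed architecture):

    import numpy as np

    def fft_radix2(x):
        """Recursive radix-2 DIT FFT; len(x) must be a power of two."""
        x = np.asarray(x, dtype=complex)
        n = len(x)
        if n == 1:
            return x
        even, odd = fft_radix2(x[0::2]), fft_radix2(x[1::2])
        twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)   # twiddle factors W_N^k
        return np.concatenate([even + twiddle * odd, even - twiddle * odd])

    x = np.random.default_rng(0).normal(size=512)               # a 512-point example input
    print(np.allclose(fft_radix2(x), np.fft.fft(x)))            # sanity check against NumPy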

5

Enhancing the Security and Privacy of Multimodal Biometric Systems
A. Pethalakshmi and A. P. Caroline Hirudhaya

Abstract: Person authentication is done mostly by using one or more of the following means: text passwords, personal identification numbers, barcodes, and identity cards. Technology has since advanced to secure privacy via biometrics, and unimodal biometrics are normally used to provide personal authentication. In this paper, multimodal biometrics, a combination of palmprint, hand geometry, knuckle extraction, and speech, is applied to authentication with improved security. We propose a new approach to multimodal biometrics by applying the least mean square (LMS) algorithm, one of the adaptive filtering algorithms, to preserve privacy.
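
The abstract does not state how LMS is applied to the biometric traits; as a hedged sketch of the LMS algorithm itself, the following Python routine adapts a filter so that its output tracks a desired signal (the signals, filter length, and step size are placeholders):

    import numpy as np

    def lms_filter(x, d, num_taps=8, mu=0.01):
        """Least-mean-square adaptive filter: w <- w + mu * e * u at every sample."""
        w = np.zeros(num_taps)
        y, e = np.zeros(len(x)), np.zeros(len(x))
        for n in range(num_taps, len(x)):
            u = x[n - num_taps:n][::-1]        # most recent input samples, newest first
            y[n] = w @ u                       # filter output
            e[n] = d[n] - y[n]                 # error against the desired signal
            w += mu * e[n] * u                 # LMS weight update
        return y, e, w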

6

Adaptive Data Traffic Control with Wireless Sensor Networks
S. Kumaravel and S. Prabha

Abstract: With the growing demand for wireless sensor networks (WSNs), adversary attacks on sensor nodes have become a major issue in current WSN deployments. Replica nodes generated by attackers transmit inappropriate messages to the sink in the sensor network. Existing work presented the Sequential Probability Ratio Test (SPRT), which reduces the overhead of sensor node transmission under adversarial conditions. Our first work extends the probability-ratio-test replica detection scheme with a Finite Range Query (FRQ) technique to effectively identify mobile replica nodes and eliminate the varying query ranges of mobile sensor nodes. However, it is necessary to validate the query to improve the detection of mobile replica nodes, so our second work focuses on validating the functionality of the query scheme at the time of operation, thus reducing the risk of untimely failures. Numerous source nodes report data to a sink node, producing the funneling effect, in which the traffic load increases as the distance to the sink node decreases. A consequence of the funneling effect is network congestion, in which packet queues build up because packets arrive at nodes faster than the nodes can transmit them. Distinctive packet traffic patterns in a sensor network also allow an adversary examining the traffic to infer the position of the base station. To manage the packet creation rate at the source and intermediate nodes, in this work we present an adaptive traffic data control scheme. To avoid over-utilizing the network in terms of node packet buffers and wireless channels, the proposed scheme controls the data flow rate at the sink based on the source nodes present in the wireless sensor network. An experimental evaluation is conducted to estimate the performance of the proposed Adaptive Traffic Data Control Scheme in WSN (ATDCS) in terms of delay, traffic control rate, and reliability.
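
The replica detection builds on the Sequential Probability Ratio Test; as a generic illustration (not the exact replica-detection or FRQ mechanism), the following Python sketch runs Wald's SPRT over a stream of binary observations from a node, deciding between a benign hypothesis H0 and a replica hypothesis H1 (the probabilities and error bounds are placeholders):

    import math

    def sprt(observations, p0=0.1, p1=0.6, alpha=0.01, beta=0.01):
        """Wald's sequential probability ratio test on a stream of Bernoulli observations.

        H0: suspicious-event probability p0 (benign node); H1: probability p1 (replica).
        Returns 'H0', 'H1', or 'continue' after consuming the observations seen so far.
        """
        a = math.log(beta / (1 - alpha))       # lower decision threshold
        b = math.log((1 - beta) / alpha)       # upper decision threshold
        llr = 0.0
        for obs in observations:               # obs is 1 (suspicious event) or 0
            if obs:
                llr += math.log(p1 / p0)
            else:
                llr += math.log((1 - p1) / (1 - p0))
            if llr <= a:
                return "H0"
            if llr >= b:
                return "H1"
        return "continue"

    print(sprt([1, 1, 0, 1, 1, 1, 1]))         # accumulating evidence for a replica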

7

Efficient Cache Replacement Techniques Based on Information Density over Wireless Ad Hoc Networks
S. Vimala and R. Anbarasu

Abstract: We present a data caching strategy for ad hoc networks whose nodes exchange information items in a peer-to-peer fashion. Data caching is a fully distributed scheme in which each node, upon receiving requested information, determines the cache drop time of the information or which content to replace to make room for the newly arrived information. These decisions are made depending on the perceived “presence” of the content in the node’s proximity, whose estimation does not cause any additional overhead to the information-sharing system. We devise a strategy in which nodes, independently of each other, decide whether to cache some content and for how long. In the case of small-sized caches, we aim to design a content replacement strategy that allows nodes to successfully store newly received information while maintaining the good performance of the content distribution system. Under both conditions, each node takes decisions according to its perception of what nearby users may store in their caches and with the aim of differentiating its own cache content from that of the other nodes. The result is the creation of content diversity within a node’s neighborhood, so that a requesting user is likely to find the desired information nearby. We simulate our caching algorithms in different ad hoc network scenarios and compare them with other caching schemes, showing that our solution succeeds in creating the desired content diversity, thus leading to resource-efficient information access.
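
The abstract does not give the exact replacement rule; as a loose illustration of choosing a victim so as to increase content diversity, the following Python sketch evicts the locally cached item estimated to be most common in neighbouring caches (the content ids and presence estimates are hypothetical):

    def choose_victim(cache, presence):
        """Pick the cached item believed to be most 'present' nearby, so that evicting it
        keeps the local cache as different as possible from neighbouring caches.

        cache    : iterable of content ids currently stored locally
        presence : dict mapping content id -> estimated fraction of neighbours holding it
        """
        return max(cache, key=lambda item: presence.get(item, 0.0))

    print(choose_victim({"news", "map", "video"}, {"news": 0.8, "map": 0.1, "video": 0.4}))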

8

Detecting Outliers Using PAM with Normalization Factor on Yeast Data
P. Ashok, G. M. Kadhar Nawaz and E. Elayaraja

Abstract: Protein is a macronutrient composed of amino acids that is essential for the proper growth and function of the human body. While the body can construct several of the amino acids required for protein production, a set of fundamental amino acids must be obtained from animal and/or vegetable protein sources. Cluster analysis is one of the primary data analysis tools in data mining. Clustering is the process of grouping data into classes or clusters so that objects within a cluster have high similarity to one another but are very dissimilar to objects in other clusters. Clustering can be performed on nominal, ordinal, and ratio-scaled variables, and its main purpose is to reduce the size and complexity of the dataset. In this paper, we introduce the clustering method and its two types, K-Means and K-Medoids. The clustering algorithms are improved by implementing three initial centroid selection methods instead of selecting centroids randomly, and the methods are compared using the Davies-Bouldin index measure. Selecting the initial centroids with the initial centroid selection by systematic selection (ICSS) algorithm overcomes the drawbacks of the other methods for choosing initial cluster centers. In the yeast dataset, the defective proteins (objects) are considered outliers, which are identified by the clustering methods with the ADOC (Average Distance between Object and Centroid) function. The outlier detection methods and their computational complexity are studied and compared; the experimental results show that the K-Medoids method performs well compared with K-Means clustering.
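
As a hedged reading of the ADOC criterion described above, the following Python sketch flags, within each cluster, the objects whose distance to the cluster representative exceeds that cluster's average distance between object and centroid (the cluster labels and medoid indices are assumed to come from a K-Medoids/PAM run):

    import numpy as np

    def adoc_outliers(X, labels, medoid_indices):
        """Return indices of objects farther from their medoid than the cluster's ADOC value."""
        outliers = []
        for j, m in enumerate(medoid_indices):
            members = np.where(labels == j)[0]
            dists = np.linalg.norm(X[members] - X[m], axis=1)   # object-to-medoid distances
            adoc = dists.mean()                                  # average distance for cluster j
            outliers.extend(members[dists > adoc].tolist())      # farther than average -> outlier
        return sorted(outliers)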

9

Decision Tree Learning Technique for Selection of Appropriate Grid Scheduling Algorithm
D Ramyachitra, K Vivekanandan

Abstract: Different scheduling algorithms appropriate for the grid environment are discussed in the literature. Scheduling heuristics can be classified into constructive and improvement heuristics, and we have studied and analyzed several widely used heuristics of both kinds. The heuristics perform differently according to the nature of the users' tasks and the resources. In this paper, we propose a strategy for finding an appropriate scheduling heuristic from among the widely used ones. Based on our study, we classified the users' tasks and resources into seven categories using the decision tree learning technique, based on quantity and heterogeneity. When the user submits a job, the characteristics of the tasks and the resources given by the user are matched to one of the categories. Our proposed strategy then selects the scheduling heuristic that has the maximum occurrence of best makespan for that category and executes the user's tasks. The proposed strategy shows good accuracy and can be used to find the appropriate scheduling heuristic to execute users' tasks.
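
As an illustrative sketch of the selection idea (the actual features, seven categories, and candidate heuristics are not specified in the abstract; the names and numbers below are hypothetical), a decision tree can be trained on task/resource quantity and heterogeneity and then queried for a submitted job:

    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical training rows: (task count, task heterogeneity, resource count,
    # resource heterogeneity) labelled with the heuristic that most often gave the
    # best makespan for that kind of workload in prior experiments.
    X_train = [[1000, 0.9, 50, 0.2],
               [100, 0.1, 200, 0.8],
               [5000, 0.5, 20, 0.5],
               [300, 0.3, 30, 0.1]]
    y_train = ["min-min", "max-min", "sufferage", "min-min"]

    tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
    print(tree.predict([[800, 0.7, 60, 0.3]]))   # suggested heuristic for a new job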

10

Design for Testability and Performance of Arithmetic Logic Circuits
G. Ramachandran, N. Manikanda Devarajan, R. Ramani and S. Kannan

Abstract: A set of test vectors that detects all single stuck-at faults on all primary inputs of a fanout-free combinational logic circuit will detect all single stuck-at faults in that circuit. A set of test vectors that detects all single stuck-at faults on all primary inputs and all fanout branches of a combinational logic circuit will detect all single stuck-at faults in that circuit. The design of logic integrated circuits in CMOS technology is becoming more and more complex, since VLSI is of interest to many electronic IC users and manufacturers. A common problem to be solved by designers, manufacturers, and users is the testing of these ICs. Testing consists of checking whether the outputs of a functional system (a functional block, integrated circuit, printed circuit board, or complete system) correspond to the inputs applied to it. If the test of the functional system is positive, then the system is good for use. If the outputs differ from what is expected, then the system has a problem: either the system is rejected (go/no-go test), or a diagnosis is applied to it in order to point out and, if possible, eliminate the problem's causes.
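
To make the first statement concrete, here is a small Python sketch (the three-input fanout-free circuit and the test set are hypothetical) that checks whether a set of test vectors detects every single stuck-at fault on the primary inputs:

    from itertools import product

    def circuit(a, b, c):
        """Small fanout-free combinational circuit: y = (a AND b) OR c."""
        return (a & b) | c

    def detects_all_input_stuck_at_faults(tests):
        """Check whether the test set detects every single stuck-at fault on the primary inputs."""
        for idx, stuck in product(range(3), (0, 1)):          # fault site and stuck-at value
            detected = False
            for vec in tests:
                faulty = list(vec)
                faulty[idx] = stuck                           # inject the stuck-at fault
                if circuit(*vec) != circuit(*faulty):         # fault-free vs faulty response
                    detected = True
                    break
            if not detected:
                return False
        return True

    print(detects_all_input_stuck_at_faults([(1, 1, 0), (0, 1, 0), (1, 0, 0), (0, 0, 1)]))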