2013-2014 IEEE JAVA Project List With Abstract
Sanjeev Kumar. P # 9844641410, 9902752525 Email: [email protected]
IEEE 2012-13 JAVA/J2EE TITLES
CLOUD COMPUTING
1. A Stochastic Model to Investigate Data Center Performance and QoS
in IaaS Cloud Computing Systems
Abstract:
Cloud data center management is a key problem due to the numerous and
heterogeneous strategies that can be applied, ranging from the VM
placement to the federation with other clouds. Performance evaluation of
Cloud Computing infrastructures is required to predict and quantify the
cost-benefit of a strategy portfolio and the corresponding Quality of
Service (QoS) experienced by users. Such analyses are not feasible by
simulation or on-the-field experimentation, due to the great number of
parameters that have to be investigated. In this paper, we present an
analytical model, based on Stochastic Reward Nets (SRNs), that is both
scalable to model systems composed of thousands of resources and
flexible to represent different policies and cloud-specific strategies.
Several performance metrics are defined and evaluated to analyze the
behavior of a Cloud data center: utilization, availability, waiting time, and
responsiveness. A resiliency analysis is also provided to take into account
load bursts. Finally, a general approach is presented that, starting from the
concept of system capacity, can help system managers to opportunely set
the data center parameters under different working conditions.
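The metrics named above (utilization, availability, waiting time) can be illustrated with a toy Java sketch that computes them from plain counters rather than from a Stochastic Reward Net; every class and field name here is our illustrative assumption, not code from the paper.

```java
// Toy illustration of the metrics the model evaluates, computed from plain
// counters: utilization of the machines, availability (fraction of requests
// not rejected), and mean waiting time of accepted requests.
// All names are illustrative, not from the paper's SRN model.
public class DataCenterMetrics {
    long arrived, rejected;
    double busyMachineTime, totalMachineTime, totalWait;

    double utilization()  { return busyMachineTime / totalMachineTime; }
    double availability() { return 1.0 - (double) rejected / arrived; }
    double meanWait()     { return totalWait / (arrived - rejected); }

    public static void main(String[] args) {
        DataCenterMetrics m = new DataCenterMetrics();
        m.arrived = 1000; m.rejected = 50;
        m.busyMachineTime = 720; m.totalMachineTime = 1000;
        m.totalWait = 1900;
        System.out.println(m.utilization());   // 0.72
        System.out.println(m.availability());  // 0.95
        System.out.println(m.meanWait());      // 2.0
    }
}
```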
Mindsoft: 2nd Floor, 2nd Main, Service Road, Vijayanagara, Bangalore-40. Landmark: Near Hotel New Santhi Sagar or BSVP School
2. CloudMoV: Cloud-based Mobile Social TV
Abstract:
The rapidly increasing power of personal mobile devices (smart phones,
tablets, etc.) is providing much richer contents and social interactions to
users on the move. This trend however is throttled by the limited battery
lifetime of mobile devices and unstable wireless connectivity, making the
highest possible quality of service experienced by mobile users not
feasible. The recent cloud computing technology, with its rich resources
to compensate for the limitations of mobile devices and connections, can
potentially provide an ideal platform to support the desired mobile
services. Tough challenges arise on how to effectively exploit cloud
resources to facilitate mobile services, especially those with stringent
interaction delay requirements. In this paper, we propose the design of a
Cloud-based, novel Mobile social TV system (CloudMoV). The system
effectively utilizes both PaaS (Platform-as-a-Service) and IaaS
(Infrastructure-as-a-Service) cloud services to offer the living-room
experience of video watching to a group of disparate mobile users who
can interact socially while sharing the video. To guarantee good
streaming quality as experienced by the mobile users with time varying
wireless connectivity, we employ a surrogate for each user in the IaaS
cloud for video downloading and social exchanges on behalf of the user.
The surrogate performs efficient stream transcoding that matches the
current connectivity quality of the mobile user. Given the battery life as a
key performance bottleneck, we advocate the use of burst transmission
from the surrogates to the mobile users, and carefully decide the burst
size which can lead to high energy efficiency and streaming quality.
Social interactions among the users, in terms of spontaneous textual
exchanges, are effectively achieved by efficient designs of data storage
with Big Table and dynamic handling of large volumes of concurrent
messages in a typical PaaS cloud. These various designs for flexible
transcoding capabilities, battery efficiency of mobile devices and
spontaneous social interactivity together provide an ideal platform for
mobile social TV services. We have implemented CloudMoV on Amazon
EC2 and Google App Engine and verified its superior performance based
on real world experiments.
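The burst-size argument can be sketched with a common radio tail-energy approximation (an assumption of ours, not the paper's exact model): each transmission pays a fixed "tail" cost while the radio powers down, so energy per byte falls as the burst grows, and the surrogate should send the largest burst the client's playback buffer can absorb.

```java
// Back-of-the-envelope sketch of why large bursts save energy under a
// tail-energy model: active energy grows with transfer duration, while the
// tail cost is fixed per burst, so larger bursts amortize it better.
// All radio parameters below are assumed values for illustration.
public class BurstSizing {
    // Joules per byte for one burst of `bytes` at `bandwidth` bytes/s.
    static double energyPerByte(long bytes, double bandwidth,
                                double activePowerW, double tailEnergyJ) {
        double activeJ = (bytes / bandwidth) * activePowerW;
        return (activeJ + tailEnergyJ) / bytes;
    }

    // Under this model, the best burst is the largest the buffer can absorb.
    static long chooseBurst(long bufferBytes) {
        return bufferBytes;
    }

    public static void main(String[] args) {
        double bw = 1_000_000, p = 1.0, tail = 5.0;   // assumed radio parameters
        System.out.println(energyPerByte(100_000, bw, p, tail));
        System.out.println(energyPerByte(1_000_000, bw, p, tail)); // cheaper per byte
        System.out.println(chooseBurst(2_000_000));
    }
}
```

The real system must also weigh streaming quality, which this sketch ignores.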
3. Dynamic Resource Allocation Using Virtual Machines for Cloud
Computing Environment
Abstract:
Cloud computing allows business customers to scale up and down their
resource usage based on needs. Many of the touted gains in the cloud
model come from resource multiplexing through virtualization
technology. In this paper, we present a system that uses virtualization
technology to allocate data center resources dynamically based on
application demands and support green computing by optimizing the
number of servers in use. We introduce the concept of “skewness” to
measure the unevenness in the multidimensional resource utilization of a
server. By minimizing skewness, we can combine different types of
workloads nicely and improve the overall utilization of server resources.
We develop a set of heuristics that prevent overload in the system
effectively while saving energy used. Trace driven simulation and
experiment results demonstrate that our algorithm achieves good
performance.
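The skewness metric described above has a natural reading as the normalized deviation of a server's per-resource utilizations from their mean; a minimal Java sketch (helper and class names are ours):

```java
// Sketch of the "skewness" metric: for a server with utilizations r_i
// across n resource types (CPU, memory, I/O, ...),
//   skewness = sqrt( sum_i (r_i / rbar - 1)^2 ),  rbar = mean of the r_i.
// Even usage gives 0; a server hot on one resource scores high, so mixing
// complementary workloads lowers the combined skewness.
public class Skewness {
    public static double skewness(double[] utilizations) {
        double mean = 0.0;
        for (double u : utilizations) mean += u;
        mean /= utilizations.length;
        if (mean == 0.0) return 0.0;          // idle server: perfectly even
        double sum = 0.0;
        for (double u : utilizations) {
            double d = u / mean - 1.0;
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    public static void main(String[] args) {
        // Even usage across CPU/memory/network
        System.out.println(skewness(new double[]{0.6, 0.6, 0.6})); // 0.0
        // CPU-heavy server scores high; pairing it with a memory-heavy
        // workload would reduce the combined skewness
        System.out.println(skewness(new double[]{0.9, 0.2, 0.2}));
    }
}
```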
4. Error-Tolerant Resource Allocation and Payment Minimization for
Cloud System
Abstract:
With virtual machine (VM) technology becoming increasingly mature, compute
resources in cloud systems can be partitioned in fine granularity and
allocated on demand. We make three contributions in this paper: 1) we
formulate a deadline-driven resource allocation problem based on the
cloud environment facilitated with VM resource isolation technology, and
also propose a novel polynomial-time solution, which could
minimize users’ payment in terms of their expected deadlines. 2) By
analyzing the upper bound of task execution length based on the possibly
inaccurate workload prediction, we further propose an error-tolerant
method to guarantee task’s completion within its deadline. 3) We validate
its effectiveness over a real VM-facilitated cluster environment under
different levels of competition. In our experiment, by tuning algorithmic
input deadline based on our derived bound, task execution length can
always be limited within its deadline in the sufficient-supply situation;
the mean execution length still stays at about 70 percent of the
user-specified deadline even under severe competition. Under the
original-deadline-based solution, about 52.5 percent of tasks complete
within 0.95-1.0 times their deadline, which still conforms to the
deadline-guarantee requirement. Only 20 percent of tasks violate their
deadlines, and most of these (17.5 percent) still finish within 1.05 times
the deadline.
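The deadline-tuning step can be sketched under a simple assumed error model (not the paper's exact derived bound): if the workload prediction may underestimate the true execution length by up to a factor epsilon, tightening the scheduler's input deadline to D / (1 + epsilon) keeps even the worst case within the user's real deadline D.

```java
// Illustrative deadline tuning under an assumed multiplicative error bound.
// This is our simplification, not the bound derived in the paper.
public class DeadlineTuner {
    public static double tunedDeadline(double userDeadline, double epsilon) {
        if (epsilon < 0) throw new IllegalArgumentException("epsilon >= 0");
        return userDeadline / (1.0 + epsilon);
    }

    public static void main(String[] args) {
        double d = tunedDeadline(120.0, 0.2);  // allow 20% underestimation
        System.out.println(d);                 // 100.0
        // Worst case: actual length = 1.2 * 100.0 = 120.0 <= user deadline
    }
}
```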
5. Harnessing the Cloud for Securely Outsourcing Large-Scale Systems
of Linear Equations
Abstract:
Cloud computing economically enables customers with limited
computational resources to outsource large-scale computations to the
cloud. However, how to protect customers’ confidential data involved in
the computations then becomes a major security concern. In this paper,
we present a secure outsourcing mechanism for solving large-scale
systems of linear equations (LE) in cloud. Because applying traditional
approaches like Gaussian elimination or LU decomposition (a.k.a. the
direct method) to such large-scale LEs would be prohibitively expensive, we
build the secure LE outsourcing mechanism via a completely different
approach—iterative method, which is much easier to implement in
practice and only demands relatively simpler matrix-vector operations.
Specifically, our mechanism enables a customer to securely harness the
cloud for iteratively finding successive approximations to the LE
solution, while keeping both the sensitive input and output of the
computation private. For robust cheating detection, we further explore the
algebraic property of matrix-vector operations and propose an efficient
result verification mechanism, which allows the customer to verify all
answers received from previous iterative approximations in one batch
with high probability. Thorough security analysis and prototype
experiments on Amazon EC2 demonstrate the validity and practicality of
our proposed design.
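The iterative method the mechanism builds on can be sketched with plain Jacobi iteration, which needs only matrix-vector-style operations per step; the paper's protocol additionally masks the problem before handing it to the cloud, a hiding step omitted in this sketch.

```java
// Jacobi iteration: a minimal example of the iterative approach the
// abstract contrasts with direct methods. Each step costs only a
// matrix-vector pass; convergence assumes a diagonally dominant A.
public class Jacobi {
    // Approximate the solution of A x = b starting from x = 0.
    public static double[] solve(double[][] A, double[] b, int iters) {
        int n = b.length;
        double[] x = new double[n];
        for (int k = 0; k < iters; k++) {
            double[] next = new double[n];
            for (int i = 0; i < n; i++) {
                double s = b[i];
                for (int j = 0; j < n; j++)
                    if (j != i) s -= A[i][j] * x[j];
                next[i] = s / A[i][i];
            }
            x = next;
        }
        return x;
    }

    public static void main(String[] args) {
        double[][] A = {{4, 1}, {2, 3}};
        double[] b = {9, 13};                 // exact solution x = (1.4, 3.4)
        double[] x = solve(A, b, 50);
        System.out.printf("%.4f %.4f%n", x[0], x[1]);
    }
}
```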
6. Mona: Secure Multi-Owner Data Sharing for Dynamic Groups in the
Cloud
Abstract:
Cloud computing provides an economical and efficient solution for
sharing group resource among cloud users. Unfortunately, sharing data in
a multi-owner manner while preserving data and identity privacy from an
untrusted cloud is still a challenging issue, due to the frequent change of
the membership. In this paper, we propose a secure multi-owner data
sharing scheme, named Mona, for dynamic groups in the cloud. By
leveraging group signature and dynamic broadcast encryption techniques,
any cloud user can anonymously share data with others. Meanwhile, the
storage overhead and encryption computation cost of our scheme are
independent of the number of revoked users. In addition, we analyze
the security of our scheme with rigorous proofs, and demonstrate the
efficiency of our scheme in experiments.
7. PACK: Prediction-Based Cloud Bandwidth and Cost Reduction
System
Abstract:
In this paper, we present PACK (Predictive ACKs), a novel end-to-end
traffic redundancy elimination (TRE) system, designed for cloud
computing customers. Cloud-based TRE needs to apply a judicious use of
cloud resources so that the bandwidth cost reduction combined with the
additional cost of TRE computation and storage would be optimized.
PACK’s main advantage is its capability of offloading the cloud-server
TRE effort to end clients, thus minimizing the processing costs induced
by the TRE algorithm. Unlike previous solutions, PACK does not require
the server to continuously maintain clients’ status. This makes PACK
very suitable for pervasive computation environments that combine client
mobility and server migration to maintain cloud elasticity. PACK is based
on a novel TRE technique, which allows the client to use newly received
chunks to identify previously received chunk chains, which in turn can be
used as reliable predictors to future transmitted chunks. We present a
fully functional PACK implementation, transparent to all TCP-based
applications and network devices. Finally, we analyze PACK benefits for
cloud users, using traffic traces from various sources.
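The chunk-chain idea can be sketched on the receiver side as a map from a chunk's signature to the signature of the chunk that followed it last time; the SHA-1 choice and class layout are our illustrative assumptions (PACK itself uses content-based rather than fixed-size chunking).

```java
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

// Receiver-side sketch of PACK's core idea: remember chains of previously
// received chunks, and when an incoming chunk matches a stored one, predict
// the signature of the chunk that followed it last time.
public class ChunkPredictor {
    private final Map<String, String> nextOf = new HashMap<>();
    private String prev = null;

    static String sign(byte[] chunk) {
        try {
            byte[] d = MessageDigest.getInstance("SHA-1").digest(chunk);
            StringBuilder sb = new StringBuilder();
            for (byte b : d) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Record an arriving chunk, extending the chain from its predecessor.
    public void receive(byte[] chunk) {
        String sig = sign(chunk);
        if (prev != null) nextOf.put(prev, sig);
        prev = sig;
    }

    // Predicted signature of the chunk expected to follow `chunk`,
    // or null when no chain is known (the sender then sends real data).
    public String predictNext(byte[] chunk) {
        return nextOf.get(sign(chunk));
    }
}
```

On a match, the receiver would send the predicted signature to the sender, which replies with a short confirmation instead of retransmitting the chunk.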
8. Privacy-Preserving Public Auditing for Secure Cloud Storage
Abstract:
Using cloud storage, users can remotely store their data and enjoy the on-
demand high-quality applications and services from a shared pool of
configurable computing resources, without the burden of local data
storage and maintenance. However, the fact that users no longer have
physical possession of the outsourced data makes the data integrity
protection in cloud computing a formidable task, especially for users with
constrained computing resources. Moreover, users should be able to just
use the cloud storage as if it were local, without worrying about the need to
verify its integrity. Thus, enabling public auditability for cloud storage is
of critical importance so that users can resort to a third-party auditor
(TPA) to check the integrity of outsourced data and be worry free. To
securely introduce an effective TPA, the auditing process should bring in
no new vulnerabilities toward user data privacy, and introduce no
additional online burden to the user. In this paper, we propose a secure cloud
storage system supporting privacy-preserving public auditing. We further
extend our result to enable the TPA to perform audits for multiple users
simultaneously and efficiently. Extensive security and performance
analysis show the proposed schemes are provably secure and highly
efficient. Our preliminary experiment conducted on Amazon EC2
instance further demonstrates the fast performance of the design.
9. Scalable and Secure Sharing of Personal Health Records in Cloud
Computing Using Attribute-Based Encryption
Abstract:
Personal health record (PHR) is an emerging patient-centric model of
health information exchange, which is often outsourced to be stored at a
third party, such as cloud providers. However, there have been wide
privacy concerns as personal health information could be exposed to
those third party servers and to unauthorized parties. To assure the
patients’ control over access to their own PHRs, it is a promising method
to encrypt the PHRs before outsourcing. Yet, issues such as risks of
privacy exposure, scalability in key management, flexible access, and
efficient user revocation, have remained the most important challenges
toward achieving fine-grained, cryptographically enforced data access
control. In this paper, we propose a novel patient-centric framework and a
suite of mechanisms for data access control to PHRs stored in semi-trusted
servers. To achieve fine-grained and scalable data access control for
PHRs, we leverage attribute-based encryption (ABE) techniques to
encrypt each patient’s PHR file. Different from previous works in secure
data outsourcing, we focus on the multiple data owner scenario, and
divide the users in the PHR system into multiple security domains, which
greatly reduces the key management complexity for owners and users. A
high degree of patient privacy is guaranteed simultaneously by exploiting
multi-authority ABE. Our scheme also enables dynamic modification of
access policies or file attributes, supports efficient on-demand
user/attribute revocation and break-glass access under emergency
scenarios. Extensive analytical and experimental results are presented
which show the security, scalability, and efficiency of our proposed
scheme.
10. Enabling Dynamic Data and Indirect Mutual Trust for Cloud
Computing Storage Systems
Abstract:
Storage-as-a-Service offered by cloud service providers (CSPs) is a paid
facility that enables organizations to outsource their sensitive data to be
stored on remote servers. In this paper, we propose a cloud-based storage
scheme that allows the data owner to benefit from the facilities offered by
the CSP and enables indirect mutual trust between them. The proposed
scheme has four important features: (i) it allows the owner to outsource
sensitive data to a CSP, and perform full block-level dynamic operations
on the outsourced data, i.e., block modification, insertion, deletion, and
append, (ii) it ensures that authorized users (i.e., those who have the right
to access the owner’s file) receive the latest version of the outsourced
data, (iii) it enables indirect mutual trust between the owner and the CSP,
and (iv) it allows the owner to grant or revoke access to the outsourced
data. We discuss the security issues of the proposed scheme. In addition, we
justify its performance through theoretical analysis and a prototype
implementation on Amazon cloud platform to evaluate storage,
communication, and computation overheads.
11. A Load Balancing Model Based on Cloud Partitioning for the Public
Cloud
Abstract:
Load balancing in the cloud computing environment has an important
impact on performance. Good load balancing makes cloud computing
more efficient and improves user satisfaction. This article introduces an
improved load-balancing model for the public cloud, based on the cloud
partitioning concept, with a switch mechanism to choose different
strategies for different situations. The algorithm applies game theory to
the load balancing strategy to improve efficiency in the public
cloud environment.
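The switch mechanism can be sketched as a classifier over partition load; the thresholds and strategy names below are our illustrative assumptions, chosen only to show the shape of the dispatch.

```java
// Sketch of the "switch mechanism": the main controller classifies each
// partition by load and picks a strategy accordingly. Thresholds and the
// concrete strategy names are assumed for illustration; the idea is a
// simple strategy under light load and a game-theoretic one otherwise.
public class PartitionSwitch {
    enum Status { IDLE, NORMAL, OVERLOADED }

    static Status classify(double load) {        // load in [0, 1]
        if (load < 0.2) return Status.IDLE;
        if (load < 0.8) return Status.NORMAL;
        return Status.OVERLOADED;
    }

    static String chooseStrategy(double load) {
        switch (classify(load)) {
            case IDLE:   return "round-robin";     // cheap, fills idle partition
            case NORMAL: return "game-theoretic";  // balance within partition
            default:     return "forward-to-other-partition";
        }
    }

    public static void main(String[] args) {
        System.out.println(chooseStrategy(0.1));   // round-robin
        System.out.println(chooseStrategy(0.5));   // game-theoretic
        System.out.println(chooseStrategy(0.95));  // forward-to-other-partition
    }
}
```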
12. Load Rebalancing for Distributed File Systems in Clouds
Abstract:
Distributed file systems are key building blocks for cloud computing
applications based on the MapReduce programming paradigm. In such
file systems, nodes simultaneously serve computing and storage
functions; a file is partitioned into a number of chunks allocated in
distinct nodes so that MapReduce tasks can be performed in parallel over
the nodes. However, in a cloud computing environment, failure is the
norm, and nodes may be upgraded, replaced, and added in the system.
Files can also be dynamically created, deleted, and appended. This results
in load imbalance in a distributed file system; that is, the file chunks are
not distributed as uniformly as possible among the nodes. Emerging
distributed file systems in production systems strongly depend on a
central node for chunk reallocation. This dependence is clearly
inadequate in a large-scale, failure-prone environment because the central
load balancer is put under considerable workload that is linearly scaled
with the system size, and may thus become the performance bottleneck
and the single point of failure. In this paper, a fully distributed load
rebalancing algorithm is presented to cope with the load imbalance
problem. Our algorithm is compared against a centralized approach in a
production system and a competing distributed solution presented in the
literature. The simulation results indicate that our proposal is comparable
with the existing centralized approach and considerably outperforms the
prior distributed algorithm in terms of load imbalance factor, movement
cost, and algorithmic overhead. The performance of our proposal
implemented in the Hadoop distributed file system is further investigated
in a cluster environment.
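One rebalancing pass can be sketched centrally (the paper's algorithm performs the equivalent logic in a fully distributed fashion, without the global view assumed here): each node should hold about totalChunks / numNodes chunks, and overloaded nodes shed one chunk at a time to underloaded ones.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Map;
import java.util.TreeMap;

// Centralized sketch of one chunk-rebalancing pass. Node names and the
// one-chunk-at-a-time migration are illustrative simplifications.
public class Rebalance {
    public static Map<String, Integer> rebalance(Map<String, Integer> chunks) {
        int total = 0;
        for (int c : chunks.values()) total += c;
        int ideal = total / chunks.size();
        Map<String, Integer> out = new TreeMap<>(chunks);
        Deque<String> heavy = new ArrayDeque<>(), light = new ArrayDeque<>();
        for (String n : out.keySet()) {
            if (out.get(n) > ideal + 1) heavy.add(n);
            else if (out.get(n) < ideal) light.add(n);
        }
        while (!heavy.isEmpty() && !light.isEmpty()) {
            String h = heavy.peek(), l = light.peek();
            out.put(h, out.get(h) - 1);          // migrate one chunk h -> l
            out.put(l, out.get(l) + 1);
            if (out.get(h) <= ideal + 1) heavy.poll();
            if (out.get(l) >= ideal) light.poll();
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Integer> load = new TreeMap<>();
        load.put("n1", 10); load.put("n2", 1); load.put("n3", 4);
        System.out.println(rebalance(load));     // {n1=6, n2=5, n3=4}
    }
}
```

Movement cost in the real algorithm also accounts for network distance, which this sketch ignores.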
13. Optimizing Cloud Resources for Delivering IPTV Services Through
Virtualization
Abstract:
Virtualized cloud-based services can take advantage of statistical
multiplexing across applications to yield significant cost savings.
However, achieving similar savings with real-time services can be a
challenge. In this paper, we seek to lower a provider’s costs for real-time
IPTV services through a virtualized IPTV architecture and through
intelligent time-shifting of selected services. Using Live TV and Video-
on-Demand (VoD) as examples, we show that we can take advantage of
the different deadlines associated with each service to effectively
multiplex these services. We provide a generalized framework for
computing the amount of resources needed to support multiple services,
without missing the deadline for any service. We construct the problem as
an optimization formulation that uses a generic cost function. We
consider multiple forms for the cost function (e.g., maximum, convex and
concave functions) reflecting the cost of providing the service. The
solution to this formulation gives the number of servers needed at
different time instants to support these services. We implement a simple
mechanism for time-shifting scheduled jobs in a simulator and study the
reduction in server load using real traces from an operational IPTV
network. Our results show that we are able to reduce the load,
approaching the maximum reduction predicted by the optimization framework.
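A much-simplified version of the sizing question can be sketched as a lower bound: if all jobs are available at time 0 and job j needs work[j] server-seconds by deadline[j], then at least max over d of ceil((total work due by d) / d) servers are needed. The paper solves a far richer optimization with general cost functions; this bound is only our illustrative reduction.

```java
// Lower bound on servers needed to meet all deadlines, assuming every job
// is released at time 0. A tight Live TV deadline forces more servers than
// loose VoD deadlines, which is exactly what time-shifting exploits.
public class ServerBound {
    public static int minServers(double[] work, double[] deadline) {
        int n = work.length, best = 0;
        for (int i = 0; i < n; i++) {
            double due = 0;
            for (int j = 0; j < n; j++)
                if (deadline[j] <= deadline[i]) due += work[j];
            best = Math.max(best, (int) Math.ceil(due / deadline[i]));
        }
        return best;
    }

    public static void main(String[] args) {
        // One Live TV chunk (tight deadline) + two VoD chunks (loose deadlines)
        double[] work = {2.0, 6.0, 6.0};
        double[] dl   = {1.0, 10.0, 10.0};
        System.out.println(minServers(work, dl));  // 2
    }
}
```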
14. Privacy Preserving Delegated Access Control in Public Clouds
Abstract:
Current approaches to enforce fine-grained access control on confidential
data hosted in the cloud are based on fine-grained encryption of the data.
Under such approaches, data owners are in charge of encrypting the data
before uploading them on the cloud and re-encrypting the data whenever
user credentials change. Data owners thus incur high communication and
computation costs. A better approach should delegate the enforcement of
fine-grained access control to the cloud, so as to minimize the overhead at
the data owners, while assuring data confidentiality from the cloud. We
propose an approach, based on two layers of encryption, that addresses
this requirement. Under our approach, the data owner performs a
coarse-grained encryption, whereas the cloud performs a fine-grained encryption
on top of the owner encrypted data. A challenging issue is how to
decompose access control policies (ACPs) such that the two-layer
encryption can be performed. We show that this problem is NP-complete
and propose novel optimization algorithms. We utilize an efficient group
key management scheme that supports expressive ACPs. Our system
assures the confidentiality of the data and preserves the privacy of users
from the cloud while delegating most of the access control enforcement
to the cloud.
15. Attribute-Based Encryption With Verifiable Outsourced Decryption
Abstract:
Attribute-based encryption (ABE) is a public-key-based one-to-many
encryption that allows users to encrypt and decrypt data based on user
attributes. A promising application of ABE is flexible access control of
encrypted data stored in the cloud, using access policies and ascribed
attributes associated with private keys and ciphertexts. One of the main
efficiency drawbacks of the existing ABE schemes is that decryption
involves expensive pairing operations and the number of such operations
grows with the complexity of the access policy. Recently, Green et al.
proposed an ABE system with outsourced decryption that largely
eliminates the decryption overhead for users. In such a system, a user
provides an untrusted server, say a cloud service provider, with a
transformation key that allows the cloud to translate any ABE ciphertext
satisfied by that user’s attributes or access policy into a simple ciphertext,
and it only incurs a small computational overhead for the user to recover
the plaintext from the transformed ciphertext. Security of an ABE system
with outsourced decryption ensures that an adversary (including a
malicious cloud) will not be able to learn anything about the encrypted
message; however, it does not guarantee the correctness of the
transformation done by the cloud. In this paper, we consider a new
requirement of ABE with outsourced decryption: verifiability. Informally,
verifiability guarantees that a user can efficiently check if the
transformation is done correctly. We give the formal model of ABE with
verifiable outsourced decryption and propose a concrete scheme. We
prove that our new scheme is both secure and verifiable, without relying
on random oracles. Finally, we show an implementation of our scheme
and result of performance measurements, which indicates a significant
reduction on computing resources imposed on users.
16. A Secure Erasure Code-Based Cloud Storage System with Secure
Data Forwarding
Abstract:
A cloud storage system, consisting of a collection of storage servers,
provides long-term storage services over the Internet. Storing data in a
third party’s cloud system causes serious concern over data
confidentiality. General encryption schemes protect data confidentiality,
but also limit the functionality of the storage system because only a few
operations are supported over encrypted data. Constructing a secure
storage system that supports multiple functions is challenging when the
storage system is distributed and has no central authority. We propose a
threshold proxy re-encryption scheme and integrate it with a
decentralized erasure code such that a secure distributed storage system is
formulated. The distributed storage system not only supports secure and
robust data storage and retrieval, but also lets a user forward his data in
the storage servers to another user without retrieving the data back. The
main technical contribution is that the proxy re-encryption scheme
supports encoding operations over encrypted messages as well as
forwarding operations over encoded and encrypted messages. Our method
fully integrates encrypting, encoding, and forwarding. We analyze and
suggest suitable parameters for the number of copies of a message
dispatched to storage servers and the number of storage servers queried
by a key server. These parameters allow more flexible adjustment
between the number of storage servers and robustness.
17. Cloud Computing Security: From Single to Multi-Clouds
Abstract:
The use of cloud computing has increased rapidly in many organizations.
Cloud computing provides many benefits in terms of low cost and
accessibility of data. Ensuring the security of cloud computing is a major
factor in the cloud computing environment, as users often store sensitive
information with cloud storage providers but these providers may be
untrusted. Dealing with “single cloud” providers is predicted to become
less popular with customers due to risks of service availability failure and
the possibility of malicious insiders in the single cloud. A movement
towards “multi-clouds”, or in other words, “interclouds” or “cloud-of-
clouds” has emerged recently. This paper surveys recent research related
to single and multi-cloud security and addresses possible solutions. It is
found that the research into the use of multi-cloud providers to maintain
security has received less attention from the research community than has
the use of single clouds. This work aims to promote the use of
multi-clouds due to their ability to reduce security risks that affect the
cloud computing user.
18. Scalable and Secure Sharing of Personal Health Records in Cloud
Computing Using Attribute-Based Encryption
Abstract:
Personal health record (PHR) is an emerging patient-centric model of
health information exchange, which is often outsourced to be stored at a
third party, such as cloud providers. However, there have been wide
privacy concerns as personal health information could be exposed to
those third party servers and to unauthorized parties. To assure the
patients’ control over access to their own PHRs, it is a promising method
to encrypt the PHRs before outsourcing. Yet, issues such as risks of
privacy exposure, scalability in key management, flexible access, and
efficient user revocation, have remained the most important challenges
toward achieving fine-grained, cryptographically enforced data access
control. In this paper, we propose a novel patient-centric framework and a
suite of mechanisms for data access control to PHRs stored in semi-
trusted servers. To achieve fine-grained and scalable data access control
for PHRs, we leverage attribute-based encryption (ABE) techniques to
encrypt each patient’s PHR file. Different from previous works in secure
data outsourcing, we focus on the multiple data owner scenario, and
divide the users in the PHR system into multiple security domains that
greatly reduces the key management complexity for owners and users. A
high degree of patient privacy is guaranteed simultaneously by exploiting
multi-authority ABE. Our scheme also enables dynamic modification of
access policies or file attributes, supports efficient on-demand
user/attribute revocation and break-glass access under emergency
scenarios. Extensive analytical and experimental results are presented
which show the security, scalability, and efficiency of our proposed
scheme.
19. Ensuring Distributed Accountability for Data Sharing in the Cloud
Abstract:
Cloud computing enables highly scalable services to be easily consumed
over the Internet on an as-needed basis. A major feature of the cloud
services is that users’ data are usually processed remotely in unknown
machines that users do not own or operate. While enjoying the
convenience brought by this new emerging technology, users’ fears of
losing control of their own data (particularly, financial and health data)
can become a significant barrier to the wide adoption of cloud services.
To address this problem, in this paper, we propose a novel highly
decentralized information accountability framework to keep track of the
actual usage of the users’ data in the cloud. In particular, we propose an
object-centered approach that enables enclosing our logging mechanism
together with users’ data and policies. We leverage the JAR
programmable capabilities to both create a dynamic and traveling object,
and to ensure that any access to users’ data will trigger authentication and
automated logging local to the JARs. To strengthen user’s control, we
also provide distributed auditing mechanisms. We provide extensive
experimental studies that demonstrate the efficiency and effectiveness of
the proposed approaches.
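The object-centered logging idea can be sketched as a gate through which every data access must pass; the policy check below is a stub of ours, and the paper packages this logic inside a signed, self-contained JAR rather than a plain class.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of access-triggered logging: data travels with its policy and a
// log, and every access authenticates and appends a record before any
// content is released. The single-user policy is an illustrative stub.
public class LoggedData {
    private final byte[] content;
    private final String allowedUser;                 // stand-in for a real policy
    private final List<String> log = new ArrayList<>();

    public LoggedData(byte[] content, String allowedUser) {
        this.content = content.clone();
        this.allowedUser = allowedUser;
    }

    public byte[] access(String user) {
        boolean ok = user.equals(allowedUser);        // "authentication" stub
        log.add(System.currentTimeMillis() + " " + user + " "
                + (ok ? "GRANTED" : "DENIED"));
        if (!ok) throw new SecurityException("access denied for " + user);
        return content.clone();
    }

    public List<String> auditLog() { return new ArrayList<>(log); }
}
```

In the paper, such logs are periodically pushed back to the owner through the distributed auditing mechanisms.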
20. Cooperative Provable Data Possession for Integrity Verification in
Multi-Cloud Storage
Abstract:
Provable data possession (PDP) is a technique for ensuring the integrity of
data in storage outsourcing. In this paper, we address the construction of
an efficient PDP scheme for distributed cloud storage to support the
scalability of service and data migration, in which we consider the
existence of multiple cloud service providers to cooperatively store and
maintain the clients’ data. We present a cooperative PDP (CPDP) scheme
based on homomorphic verifiable response and hash index hierarchy. We
prove the security of our scheme based on multi-prover zero-knowledge
proof system, which can satisfy completeness, knowledge soundness, and
zero-knowledge properties. In addition, we articulate performance
optimization mechanisms for our scheme, and in particular present an
efficient method for selecting optimal parameter values to minimize the
computation costs of clients and storage service providers. Our
experiments show that our solution introduces lower computation and
communication overheads in comparison with non-cooperative approaches.
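The homomorphic-verifiable-response idea can be illustrated with a toy discrete-log construction: the owner keeps a tag g^m mod p per block, the server returns an aggregated response μ = Σ cᵢ·mᵢ for random challenge coefficients cᵢ, and the verifier checks g^μ against the product of tag powers. This is a minimal sketch with toy parameters, not the paper's CPDP scheme:

```java
import java.math.BigInteger;

public class HomomorphicCheck {
    static final BigInteger P = BigInteger.valueOf(467); // small prime, demo only
    static final BigInteger G = BigInteger.valueOf(2);

    // Owner precomputes one tag g^m mod p per data block.
    static BigInteger tag(long block) {
        return G.modPow(BigInteger.valueOf(block), P);
    }

    // Server aggregates the challenged blocks into a single response.
    static long response(long[] blocks, long[] coeffs) {
        long mu = 0;
        for (int i = 0; i < blocks.length; i++) mu += coeffs[i] * blocks[i];
        return mu;
    }

    // Verifier checks g^mu == product of tags[i]^coeffs[i] without the blocks.
    static boolean verify(BigInteger[] tags, long[] coeffs, long mu) {
        BigInteger lhs = G.modPow(BigInteger.valueOf(mu), P);
        BigInteger rhs = BigInteger.ONE;
        for (int i = 0; i < tags.length; i++)
            rhs = rhs.multiply(tags[i].modPow(BigInteger.valueOf(coeffs[i]), P)).mod(P);
        return lhs.equals(rhs);
    }

    public static void main(String[] args) {
        long[] blocks = {3, 5};
        BigInteger[] tags = { tag(3), tag(5) };       // owner keeps these
        long[] coeffs = {2, 4};                       // verifier's random challenge
        long mu = response(blocks, coeffs);           // server's answer
        System.out.println(verify(tags, coeffs, mu));     // true
        System.out.println(verify(tags, coeffs, mu + 1)); // false: tampered answer
    }
}
```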
21.HASBE: A Hierarchical Attribute-Based Solution for Flexible and
Scalable Access Control in Cloud Computing
Abstract:
Cloud computing has emerged as one of the most influential paradigms in
the IT industry in recent years. Since this new computing technology
requires users to entrust their valuable data to cloud providers, there have
been increasing security and privacy concerns over outsourced data.
Several schemes employing attribute-based encryption (ABE) have been
proposed for access control of outsourced data in cloud computing;
however, most of them suffer from inflexibility in implementing complex
access control policies. In order to realize scalable, flexible, and fine-
grained access control of outsourced data in cloud computing, in this
paper, we propose hierarchical attribute-set-based encryption (HASBE)
by extending ciphertext-policy attribute-set-based encryption (ASBE)
with a hierarchical structure of users. The proposed scheme not only
achieves scalability due to its hierarchical structure, but also inherits
flexibility and fine-grained access control in supporting compound
attributes of ASBE. In addition, HASBE employs multiple value
assignments for access expiration time to deal with user revocation more
efficiently than existing schemes. We formally prove the security of
HASBE based on the security of the ciphertext-policy attribute-based
encryption (CP-ABE) scheme by Bethencourt et al. and analyze its
performance and computational complexity. We implement our scheme and
show through comprehensive experiments that it is both efficient and
flexible in dealing with access control for outsourced data in cloud
computing.
22.Outsourced Similarity Search on Metric Data Assets
Abstract:
This paper considers a cloud computing setting in which similarity
querying of metric data is outsourced to a service provider. The data is to
be revealed only to trusted users, not to the service provider or anyone
else. Users query the server for the most similar data objects to a query
example. Outsourcing offers the data owner scalability and a low-initial
investment. The need for privacy may be due to the data being sensitive
(e.g., in medicine), valuable (e.g., in astronomy), or otherwise
confidential. Given this setting, the paper presents techniques that
transform the data prior to supplying it to the service provider for
similarity queries on the transformed data. Our techniques provide
interesting trade-offs between query cost and accuracy. They are then
further extended to offer an intuitive privacy guarantee. Empirical studies
with real data demonstrate that the techniques are capable of offering
privacy while enabling efficient and accurate processing of similarity
queries.
23.Toward Secure and Dependable Storage Services in Cloud
Computing
Abstract:
Cloud storage enables users to remotely store their data and enjoy
on-demand, high-quality cloud applications without the burden of local
hardware and software management. Though the benefits are clear, such a
service also relinquishes users’ physical possession of their
outsourced data, which inevitably poses new security risks to the
correctness of the data in the cloud. In order to address this new problem and
further achieve a secure and dependable cloud storage service, we
propose in this paper a flexible distributed storage integrity auditing
mechanism, utilizing the homomorphic token and distributed erasure-
coded data. The proposed design allows users to audit the cloud storage
with very lightweight communication and computation cost. The auditing
result not only provides a strong cloud storage correctness guarantee, but
also simultaneously achieves fast data error localization, i.e., the
identification of the misbehaving server(s). Considering that cloud data are
dynamic in nature, the proposed design further supports secure and
efficient dynamic operations on outsourced data, including block
modification, deletion, and append. Analysis shows the proposed scheme
is highly efficient and resilient against Byzantine failure, malicious data
modification attack, and even server colluding attacks.
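A much-simplified version of the token-based auditing and error-localization step: the user precomputes one lightweight token per server from a secret coefficient r, and an audit recomputes the tokens to pinpoint which server's stored data no longer matches. The names, the modulus, and the token formula are illustrative only, not the paper's construction:

```java
import java.util.Arrays;

public class StorageAudit {
    static final long Q = 1_000_003L;        // small prime modulus, demo only

    // Lightweight per-server token: t = sum(r^i * block_i) mod q.
    static long token(long[] blocks, long r) {
        long t = 0, pow = 1;
        for (long b : blocks) { t = (t + pow * b) % Q; pow = (pow * r) % Q; }
        return t;
    }

    // Audit every server; report the indices whose stored data no longer
    // matches its precomputed token (error localization).
    static int[] locateErrors(long[][] servers, long[] tokens, long r) {
        return java.util.stream.IntStream.range(0, servers.length)
                .filter(i -> token(servers[i], r) != tokens[i]).toArray();
    }

    public static void main(String[] args) {
        long[][] servers = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9} };
        long r = 17;
        long[] tokens = new long[servers.length];
        for (int i = 0; i < servers.length; i++) tokens[i] = token(servers[i], r);
        servers[1][0] = 99;                   // a Byzantine server tampers a block
        System.out.println(Arrays.toString(locateErrors(servers, tokens, r))); // [1]
    }
}
```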
DATA MINING
24.A Fast Clustering-Based Feature Subset Selection Algorithm for
High-Dimensional Data
Abstract:
Feature selection involves identifying a subset of the most useful features
that produces results comparable to those of the entire original set of features. A
feature selection algorithm may be evaluated from both the efficiency and
effectiveness points of view. While the efficiency concerns the time
required to find a subset of features, the effectiveness is related to the
quality of the subset of features. Based on these criteria, a fast clustering-
based feature selection algorithm (FAST) is proposed and experimentally
evaluated in this paper. The FAST algorithm works in two steps. In the
first step, features are divided into clusters by using graph-theoretic
clustering methods. In the second step, the most representative feature
that is strongly related to target classes is selected from each cluster to
form a subset of features. Because features in different clusters are
relatively independent, the clustering-based strategy of FAST has a high
probability of producing a subset of useful and independent features. To ensure the
efficiency of FAST, we adopt the efficient minimum spanning tree
(MST) clustering method. The efficiency and effectiveness of the FAST
algorithm are evaluated through an empirical study. Extensive
experiments are carried out to compare FAST and several representative
feature selection algorithms, namely, FCBF, ReliefF, CFS, Consist, and
FOCUS-SF, with respect to four types of well-known classifiers, namely,
the probability-based Naive Bayes, the tree-based C4.5, the instance-based
IB1, and the rule-based RIPPER before and after feature selection. The
results, on 35 publicly available real-world high-dimensional image,
microarray, and text data, demonstrate that FAST not only produces
smaller subsets of features but also improves the performances of the four
types of classifiers.
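The MST step at the heart of FAST can be sketched with Kruskal's algorithm; in FAST the nodes would be features and the edge weights a feature dissimilarity, whereas the example below uses a hand-made integer-weighted graph:

```java
import java.util.Arrays;

public class MstClustering {
    // Union-find with path halving, used by Kruskal's algorithm.
    static int find(int[] parent, int x) {
        while (parent[x] != x) { parent[x] = parent[parent[x]]; x = parent[x]; }
        return x;
    }

    // Kruskal's MST; edges are {u, v, weight}. Returns total MST weight.
    static int mstWeight(int n, int[][] edges) {
        Arrays.sort(edges, (a, b) -> Integer.compare(a[2], b[2]));
        int[] parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;
        int total = 0, used = 0;
        for (int[] e : edges) {
            int ru = find(parent, e[0]), rv = find(parent, e[1]);
            if (ru != rv) { parent[ru] = rv; total += e[2]; used++; }
            if (used == n - 1) break;
        }
        return total;
    }

    public static void main(String[] args) {
        int[][] edges = { {0, 1, 1}, {1, 2, 2}, {0, 2, 3}, {2, 3, 4} };
        System.out.println(mstWeight(4, edges)); // 1 + 2 + 4 = 7
    }
}
```

In FAST, the tree built this way is then cut at long edges so that each remaining subtree becomes one feature cluster.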
25.A New Algorithm for Inferring User Search Goals with Feedback
Sessions
Abstract:
For a broad-topic and ambiguous query, different users may have
different search goals when they submit it to a search engine. The
inference and analysis of user search goals can be very useful in
improving search engine relevance and user experience. In this paper, we
propose a novel approach to infer user search goals by analyzing search
engine query logs. First, we propose a framework to discover different
user search goals for a query by clustering the proposed feedback
sessions. Feedback sessions are constructed from user click-through logs
and can efficiently reflect the information needs of users. Second, we
propose a novel approach to generate pseudo-documents to better
represent the feedback sessions for clustering. Finally, we propose a new
criterion “Classified Average Precision (CAP)” to evaluate the
performance of inferring user search goals. Experimental results are
presented using user click-through logs from a commercial search engine
to validate the effectiveness of our proposed methods.
26.Annotating Search Results from Web Databases
Abstract:
An increasing number of databases have become web accessible through
HTML form-based search interfaces. The data units returned from the
underlying database are usually encoded into the result pages
dynamically for human browsing. For the encoded data units to be
machine-processable, which is essential for many applications such as
deep web data collection and Internet comparison shopping, they need to
be extracted out and assigned meaningful labels. In this paper, we present
an automatic annotation approach that first aligns the data units on a
result page into different groups such that the data units in the same
group have the same semantics. Then, we annotate each group from
different aspects and aggregate the annotations to predict a final
annotation label for it. An annotation wrapper for the search site is
automatically constructed and can be used to annotate new result pages
from the same web database. Our experiments indicate that the proposed
approach is highly effective.
27.Anomaly Detection via Online Over-Sampling Principal Component
Analysis
Abstract:
Anomaly detection has been an important research topic in data mining
and machine learning. Many real-world applications such as intrusion or
credit card fraud detection require an effective and efficient framework to
identify deviated data instances. However, most anomaly detection
methods are typically implemented in batch mode, and thus cannot be
easily extended to large-scale problems without prohibitive computation
and memory requirements. In this paper, we propose an online
over-sampling principal component analysis (osPCA) algorithm to address this
problem, and we aim at detecting the presence of outliers from a large
amount of data via an online updating technique. Unlike prior PCA based
approaches, we do not store the entire data matrix or covariance matrix,
and thus our approach is especially of interest in online or large-scale
problems. By over-sampling the target instance and extracting the
principal direction of the data, the proposed osPCA allows us to
determine the anomaly of the target instance according to the variation of
the resulting dominant eigenvector. Since our osPCA does not need to
perform eigen-analysis explicitly, the proposed framework is well suited
to online applications with computation or memory limitations.
Compared with the well-known power method for PCA and other popular
anomaly detection algorithms, our experimental results verify the
feasibility of our proposed method in terms of both accuracy and
efficiency.
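For contrast with osPCA, the classic power method, the baseline the abstract compares against, extracts the dominant eigenvector (principal direction) by repeated matrix-vector products. A minimal sketch on a toy covariance matrix:

```java
import java.util.Arrays;

public class PowerMethod {
    // Power iteration for the dominant eigenvector of a symmetric matrix.
    static double[] dominantEigenvector(double[][] a, int iters) {
        int n = a.length;
        double[] v = new double[n];
        Arrays.fill(v, 1.0 / Math.sqrt(n));          // arbitrary unit start
        for (int t = 0; t < iters; t++) {
            double[] w = new double[n];
            for (int i = 0; i < n; i++)              // w = A v
                for (int j = 0; j < n; j++) w[i] += a[i][j] * v[j];
            double norm = 0;
            for (double x : w) norm += x * x;
            norm = Math.sqrt(norm);
            for (int i = 0; i < n; i++) v[i] = w[i] / norm; // renormalize
        }
        return v;
    }

    public static void main(String[] args) {
        double[][] cov = { {2, 0}, {0, 1} };         // toy covariance matrix
        double[] v = dominantEigenvector(cov, 50);
        System.out.printf("%.3f %.3f%n", Math.abs(v[0]), Math.abs(v[1])); // ~1 0
    }
}
```

The osPCA of the paper avoids running this full iteration per instance; the point here is only what "extracting the principal direction" computes.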
28.Distributed Processing of Probabilistic Top-k Queries in Wireless
Sensor Networks
Abstract:
In this paper, we introduce the notion of sufficient set and necessary set
for distributed processing of probabilistic top-k queries in cluster-based
wireless sensor networks. These two concepts have very nice properties
that can facilitate localized data pruning in clusters. Accordingly, we
develop a suite of algorithms, namely, sufficient set-based (SSB),
necessary set-based (NSB), and boundary-based (BB), for intercluster
query processing with bounded rounds of communications. Moreover, in
responding to dynamic changes of data distribution in the network, we
develop an adaptive algorithm that dynamically switches among the three
proposed algorithms to minimize the transmission cost. We show the
applicability of sufficient set and necessary set to wireless sensor
networks with both two-tier hierarchical and tree-structured network
topologies. Experimental results show that the proposed algorithms
reduce data transmissions significantly and incur only small constant
rounds of data communications. The experimental results also
demonstrate the superiority of the adaptive algorithm, which achieves a
near-optimal performance under various conditions.
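The sufficient-set intuition in its simplest deterministic form: each cluster head forwards only its local top-k, and the sink merges those candidate sets, which suffices for an exact global top-k. The paper's probabilistic variant layers tuple probabilities on this skeleton; the sketch below covers only the deterministic case with made-up readings:

```java
import java.util.*;

public class TopKMerge {
    // A cluster head ships only its local top-k (its "sufficient set" in the
    // deterministic case) instead of all readings.
    static List<Integer> localTopK(List<Integer> readings, int k) {
        return readings.stream().sorted(Comparator.reverseOrder())
                .limit(k).collect(java.util.stream.Collectors.toList());
    }

    // The sink merges the candidate sets with a size-k min-heap.
    static List<Integer> globalTopK(List<List<Integer>> clusters, int k) {
        PriorityQueue<Integer> pq = new PriorityQueue<>();
        for (List<Integer> c : clusters)
            for (int v : localTopK(c, k)) {
                pq.offer(v);
                if (pq.size() > k) pq.poll();
            }
        List<Integer> out = new ArrayList<>(pq);
        out.sort(Comparator.reverseOrder());
        return out;
    }

    public static void main(String[] args) {
        List<List<Integer>> clusters = Arrays.asList(
                Arrays.asList(5, 9, 1), Arrays.asList(7, 3), Arrays.asList(8, 2, 6));
        System.out.println(globalTopK(clusters, 3)); // [9, 8, 7]
    }
}
```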
29.m-Privacy for Collaborative Data Publishing
Abstract:
In this paper, we consider the collaborative data publishing problem for
anonymizing horizontally partitioned data at multiple data providers. We
consider a new type of “insider attack” by colluding data providers who
may use their own data records (a subset of the overall data) to infer the
data records contributed by other data providers. The paper addresses this
new threat, and makes several contributions. First, we introduce the
notion of m-privacy, which guarantees that the anonymized data satisfies
a given privacy constraint against any group of up to m colluding data
providers. Second, we present heuristic algorithms exploiting the
monotonicity of privacy constraints for efficiently checking m-privacy
given a group of records. Third, we present a data provider-aware
anonymization algorithm with adaptive m-privacy checking strategies to
ensure high utility and m-privacy of anonymized data with efficiency.
Finally, we propose secure multi-party computation protocols for
collaborative data publishing with m-privacy. All protocols are
extensively analyzed and their security and efficiency are formally
proved. Experiments on real-life datasets suggest that our approach
achieves utility and efficiency better than or comparable to those of
existing and baseline algorithms while satisfying m-privacy.
30.Sensitive Labels in Social Network Data Anonymization
31.Tweet Analysis for Real-Time Event Detection and Earthquake
Reporting System Development
Abstract:
Twitter has received much attention recently. An important characteristic
of Twitter is its real-time nature. We investigate the real-time interaction
of events such as earthquakes in Twitter and propose an algorithm to
monitor tweets and to detect a target event. To detect a target event, we
devise a classifier of tweets based on features such as the keywords in a
tweet, the number of words, and their context. Subsequently, we produce
a probabilistic spatiotemporal model for the target event that can find the
center of the event location. We regard each Twitter user as a sensor and
apply particle filtering, which is widely used for location estimation.
The particle filter works better than other comparable methods for
estimating the locations of target events. As an application, we develop
an earthquake reporting system for use in Japan. Because of the numerous
earthquakes and the large number of Twitter users throughout the
country, we can detect an earthquake with high probability (93 percent of
earthquakes of Japan Meteorological Agency (JMA) seismic intensity
scale 3 or more are detected) merely by monitoring tweets. Our system
detects earthquakes promptly and notification is delivered much faster
than JMA broadcast announcements.
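The particle-filtering step can be illustrated by systematic resampling, the core operation that concentrates location hypotheses on high-weight particles; the positions and weights below are made up for the example:

```java
import java.util.Random;

public class ParticleResample {
    // Systematic resampling: draw n new particles in proportion to weights.
    static double[] resample(double[] particles, double[] weights, Random rng) {
        int n = particles.length;
        double[] cdf = new double[n];
        double sum = 0;
        for (int i = 0; i < n; i++) { sum += weights[i]; cdf[i] = sum; }
        double[] out = new double[n];
        double step = sum / n, u = rng.nextDouble() * step;
        int j = 0;
        for (int i = 0; i < n; i++) {
            while (cdf[j] < u) j++;   // advance to the particle owning slot u
            out[i] = particles[j];
            u += step;                // evenly spaced slots, one random offset
        }
        return out;
    }

    public static void main(String[] args) {
        double[] pos = {10.0, 20.0, 30.0};   // candidate event locations
        double[] w = {0.05, 0.9, 0.05};      // sensor evidence favors 20.0
        double[] next = resample(pos, w, new Random(42));
        double mean = 0;
        for (double p : next) mean += p / next.length;
        System.out.println(mean);            // at least two survivors equal 20.0
    }
}
```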
32.Document Clustering for Forensic Analysis: An Approach for
Improving Computer Inspection
Abstract:
In computer forensic analysis, hundreds of thousands of files are usually
examined. Much of the data in those files consists of unstructured text,
which is difficult for computer examiners to analyze. In
this context, automated methods of analysis are of great interest. In
particular, algorithms for clustering documents can facilitate the
discovery of new and useful knowledge from the documents under
analysis. We present an approach that applies document clustering
algorithms to forensic analysis of computers seized in police
investigations. We illustrate the proposed approach by carrying out
extensive experimentation with six well-known clustering algorithms (K-
means, K-medoids, Single Link, Complete Link, Average Link, and
CSPA) applied to five real-world datasets obtained from computers
seized in real-world investigations. Experiments have been performed
with different combinations of parameters, resulting in 16 different
instantiations of algorithms. In addition, two relative validity indexes
were used to automatically estimate the number of clusters. Related
studies in the literature are significantly more limited than our study. Our
experiments show that the Average Link and Complete Link algorithms
provide the best results for our application domain. If suitably initialized,
partitional algorithms (K-means and K-medoids) can also yield very
good results. Finally, we also present and discuss several practical results
that can be useful for researchers and practitioners of forensic computing.
33.A Method for Mining Infrequent Causal Associations and Its
Application in Finding Adverse Drug Reaction Signal Pairs
Abstract:
In many real-world applications, it is important to mine causal
relationships where an event or event pattern causes certain outcomes
with low probability. Discovering this kind of causal relationships can
help us prevent or correct negative outcomes caused by their antecedents.
In this paper, we propose an innovative data mining framework and apply
it to mine potential causal associations in electronic patient data sets
where the drug-related events of interest occur infrequently. Specifically,
we created a novel interestingness measure, exclusive causal-leverage,
based on a computational, fuzzy recognition-primed decision (RPD)
model that we previously developed. On the basis of this new measure, a
data mining algorithm was developed to mine the causal relationship
between drugs and their associated adverse drug reactions (ADRs). The
algorithm was tested on real patient data retrieved from the Veterans
Affairs Medical Center in Detroit, Michigan. The retrieved data included
16,206 patients (15,605 male, 601 female). The exclusive causal-leverage
was employed to rank the potential causal associations between each of
the three selected drugs (i.e., enalapril, pravastatin, and rosuvastatin) and
3,954 recorded symptoms, each of which corresponded to a potential
ADR. The top 10 drug-symptom pairs for each drug were evaluated by the
physicians on our project team. The numbers of symptoms considered as
likely real ADRs for enalapril, pravastatin, and rosuvastatin were 8, 7,
and 6, respectively. These preliminary results indicate the usefulness of
our method in finding potential ADR signal pairs for further analysis
(e.g., epidemiology study) and investigation (e.g., case review) by drug
safety professionals.
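The paper's exclusive causal-leverage builds on the classic leverage interestingness measure, P(drug, symptom) − P(drug)·P(symptom); the sketch below computes only the plain version, on made-up patient flags, to show the shape of the computation:

```java
public class Leverage {
    // Classic leverage over parallel boolean arrays of per-patient flags:
    // P(drug, symptom) - P(drug) * P(symptom). The exclusive causal-leverage
    // of the paper refines this with a fuzzy RPD model (not shown here).
    static double leverage(boolean[] drug, boolean[] symptom) {
        int n = drug.length, d = 0, s = 0, both = 0;
        for (int i = 0; i < n; i++) {
            if (drug[i]) d++;
            if (symptom[i]) s++;
            if (drug[i] && symptom[i]) both++;
        }
        return (double) both / n - ((double) d / n) * ((double) s / n);
    }

    public static void main(String[] args) {
        boolean[] drug    = {true, true, true, false, false, false};
        boolean[] symptom = {true, true, false, false, false, false};
        // 2/6 - (3/6)(2/6) = 1/6: symptom co-occurs more than chance predicts.
        System.out.println(leverage(drug, symptom));
    }
}
```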
34.Clustering Sentence-Level Text Using a Novel Fuzzy Relational
Clustering Algorithm
Abstract:
In comparison with hard clustering methods, in which a pattern belongs
to a single cluster, fuzzy clustering algorithms allow patterns to belong to
all clusters with differing degrees of membership. This is important in
domains such as sentence clustering, since a sentence is likely to be
related to more than one theme or topic present within a document or set
of documents. However, because most sentence similarity measures do
not represent sentences in a common metric space, conventional fuzzy
clustering approaches based on prototypes or mixtures of Gaussians are
generally not applicable to sentence clustering. This paper presents a
novel fuzzy clustering algorithm that operates on relational input data;
i.e., data in the form of a square matrix of pairwise similarities between
data objects. The algorithm uses a graph representation of the data, and
operates in an Expectation-Maximization framework in which the graph
centrality of an object in the graph is interpreted as a likelihood. Results
of applying the algorithm to sentence clustering tasks demonstrate that
the algorithm is capable of identifying overlapping clusters of
semantically related sentences, and that it is therefore of potential use in a
variety of text mining tasks. We also include results of applying the
algorithm to benchmark data sets in several other domains.
35.Crowdsourcing Predictors of Behavioral Outcomes
Abstract:
Generating models from large data sets—and determining which subsets
of data to mine—is becoming increasingly automated. However,
choosing what data to collect in the first place requires human intuition or
experience, usually supplied by a domain expert. This paper describes a
new approach to machine science which demonstrates for the first time
that nondomain experts can collectively formulate features and provide
values for those features such that they are predictive of some behavioral
outcome of interest. This was accomplished by building a Web platform
in which human groups interact to both respond to questions likely to
help predict a behavioral outcome and pose new questions to their peers.
This results in a dynamically growing online survey, but the result of this
cooperative behavior also leads to models that can predict users’
outcomes based on their responses to the user-generated survey questions.
Here, we describe two Web-based experiments that instantiate this
approach: The first site led to models that can predict users’ monthly
electric energy consumption, and the other led to models that can predict
users’ body mass index. As exponential increases in content are often
observed in successful online collaborative communities, the proposed
methodology may, in the future, lead to similar exponential rises in
discovery and insight into the causal factors of behavioral outcomes.
36.Facilitating Document Annotation using Content and Querying
Value
Abstract:
A large number of organizations today generate and share textual
descriptions of their products, services, and actions. Such collections of
textual data contain a significant amount of structured information, which
remains buried in the unstructured text. While information extraction
algorithms facilitate the extraction of structured relations, they are often
expensive and inaccurate, especially when operating on top of text that
does not contain any instances of the targeted structured information. We
present a novel alternative approach that facilitates the generation of
structured metadata by identifying documents that are likely to contain
information of interest, information that will subsequently be useful for
querying the database. Our approach relies on the idea that
humans are more likely to add the necessary metadata during creation
time, if prompted by the interface; or that it is much easier for humans
(and/or algorithms) to identify the metadata when such information
actually exists in the document, instead of naively prompting users to fill
in forms with information that is not available in the document. As a
major contribution of this paper, we present algorithms that identify
structured attributes that are likely to appear within the document, by
jointly utilizing the content of the text and the query workload. Our
experimental evaluation shows that our approach generates superior
results compared to approaches that rely only on the textual content or
only on the query workload, to identify attributes of interest.
37.A Generalized Flow-Based Method for Analysis of Implicit
Relationships on Wikipedia
Abstract:
We focus on measuring relationships between pairs of objects in
Wikipedia whose pages can be regarded as individual objects. Two kinds
of relationships between two objects exist: in Wikipedia, an explicit
relationship is represented by a single link between the two pages for the
objects, and an implicit relationship is represented by a link structure
containing the two pages. Some of the previously proposed methods for
measuring relationships are cohesion-based methods, which
underestimate objects having high degrees, although such objects could
be important in constituting relationships in Wikipedia. The other
methods are inadequate for measuring implicit relationships because they
use only one or two of the following three important factors: distance,
connectivity, and co-citation. We propose a new method using a
generalized maximum flow that reflects all three factors and does
not underestimate objects having high degrees. We confirm through
experiments that our method can measure the strength of a relationship
more appropriately than these previously proposed methods do. Another
remarkable aspect of our method is mining elucidatory objects, that is,
objects constituting a relationship. We explain that mining elucidatory
objects would open a novel way to deeply understand a relationship.
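A generalized maximum flow adds per-edge gains and losses, but the plain max-flow core can be sketched with the Edmonds-Karp algorithm; here the graph is a tiny hand-made example rather than a Wikipedia link structure:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

public class MaxFlow {
    // Edmonds-Karp maximum flow on an adjacency-matrix capacity graph.
    static int maxFlow(int[][] cap, int s, int t) {
        int n = cap.length, flow = 0;
        int[][] residual = new int[n][n];
        for (int i = 0; i < n; i++) residual[i] = Arrays.copyOf(cap[i], n);
        while (true) {
            int[] parent = new int[n];            // BFS for a shortest augmenting path
            Arrays.fill(parent, -1);
            parent[s] = s;
            Queue<Integer> q = new ArrayDeque<>();
            q.add(s);
            while (!q.isEmpty() && parent[t] == -1) {
                int u = q.poll();
                for (int v = 0; v < n; v++)
                    if (parent[v] == -1 && residual[u][v] > 0) { parent[v] = u; q.add(v); }
            }
            if (parent[t] == -1) return flow;     // no augmenting path left
            int bottleneck = Integer.MAX_VALUE;
            for (int v = t; v != s; v = parent[v])
                bottleneck = Math.min(bottleneck, residual[parent[v]][v]);
            for (int v = t; v != s; v = parent[v]) {
                residual[parent[v]][v] -= bottleneck;
                residual[v][parent[v]] += bottleneck;
            }
            flow += bottleneck;
        }
    }

    public static void main(String[] args) {
        int[][] cap = {
            {0, 3, 2, 0},
            {0, 0, 1, 3},
            {0, 0, 0, 2},
            {0, 0, 0, 0},
        };
        System.out.println(maxFlow(cap, 0, 3)); // 5
    }
}
```

In the paper's setting the maximum-flow value between two pages serves as the strength of their implicit relationship, and the saturated intermediate nodes are the "elucidatory objects".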
38.A System to Filter Unwanted Messages from OSN User Walls
Abstract:
One fundamental issue in today’s Online Social Networks (OSNs) is to
give users the ability to control the messages posted on their own private
space and thus prevent unwanted content from being displayed. Up to now,
OSNs have provided little support for this requirement. To fill the gap, in this paper, we
propose a system allowing OSN users to have a direct control on the
messages posted on their walls. This is achieved through a flexible rule-
based system, which allows users to customize the filtering criteria to be
applied to their walls, and a Machine Learning-based soft classifier
automatically labeling messages in support of content-based filtering.
39.Anonymization of Centralized and Distributed Social Networks by
Sequential Clustering
Abstract:
We study the problem of privacy preservation in social networks. We
consider the distributed setting in which the network data is split between
several data holders. The goal is to arrive at an anonymized view of the
unified network without revealing to any of the data holders information
about links between nodes that are controlled by other data holders. To
that end, we start with the centralized setting and offer two variants of an
anonymization algorithm which is based on sequential clustering (Sq).
Our algorithms significantly outperform the SaNGreeA algorithm due to
Campan and Truta, which is the leading algorithm for achieving
anonymity in networks by means of clustering. We then devise secure
distributed versions of our algorithms. To the best of our knowledge, this
is the first study of privacy preservation in distributed social networks.
We conclude by outlining future research proposals in that direction.
40.Intrusion Detection Technique Using K-means, Fuzzy Neural
Network, and SVM Classifiers
Abstract:
In the Internet era, network security has become the key foundation for
many financial and business web applications. Intrusion detection is one
approach to resolving the problem of network security. The imperfection
of intrusion detection systems (IDS) has given data mining the
opportunity to make several important contributions to the field of
intrusion detection. In recent years, many researchers have used data
mining techniques for building IDS. Here, we propose a new
approach by utilizing data mining techniques such as neuro-fuzzy and
radial basis support vector machine (SVM) for helping IDS to attain
higher detection rate. The proposed technique has four major steps:
primarily, k-means clustering is used to generate different training
subsets. Then, based on the obtained training subsets, different neuro-
fuzzy models are trained. Subsequently, a vector for SVM classification
is formed and, in the end, classification using radial SVM is performed
to detect whether an intrusion has occurred. To illustrate the
applicability and capability of the new approach, results of experiments
on the KDD CUP 1999 dataset are presented. Experimental results show
that our proposed approach performs better than BPNN, multiclass SVM,
and other well-known methods such as decision trees and the Columbia
model in terms of sensitivity, specificity, and, in particular, detection
accuracy.
41.Spatial Approximate String Search
Abstract:
This work deals with the approximate string search in large spatial
databases. Specifically, we investigate range queries augmented with a
string similarity search predicate in both Euclidean space and road
networks. We dub this query the spatial approximate string (SAS) query.
In Euclidean space, we propose an approximate solution, the MHR-tree,
which embeds min-wise signatures into an R-tree. The min-wise
signature for an index node u keeps a concise representation of the union
of q-grams from strings under the sub-tree of u. We analyze the pruning
functionality of such signatures based on the set resemblance between the
query string and the q-grams from the sub-trees of index nodes. We also
discuss how to estimate the selectivity of an SAS query in Euclidean
space, for which we present a novel adaptive algorithm to find balanced
partitions using both the spatial and string information stored in the tree.
For queries on road networks, we propose a novel exact method,
RSASSOL, which significantly outperforms the baseline algorithm in
practice. The RSASSOL combines the q-gram based inverted lists and the
reference nodes based pruning. Extensive experiments on large real data
sets demonstrate the efficiency and effectiveness of our approaches.
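The min-wise signature machinery can be sketched as follows: extract q-grams from each string, then keep the minimum hash under each of k hash functions; the fraction of matching minima between two signatures estimates the set resemblance used for pruning. The simple multiplicative hash family below is illustrative, not the paper's:

```java
import java.util.HashSet;
import java.util.Set;

public class MinWiseSignature {
    // All substrings of length q (the q-grams) of a string.
    static Set<String> qgrams(String s, int q) {
        Set<String> out = new HashSet<>();
        for (int i = 0; i + q <= s.length(); i++) out.add(s.substring(i, i + q));
        return out;
    }

    // Min-wise signature: for each of k (weak, demo-only) hash functions,
    // record the minimum hash value over the q-gram set.
    static int[] signature(Set<String> grams, int k) {
        int[] sig = new int[k];
        for (int j = 0; j < k; j++) {
            int min = Integer.MAX_VALUE;
            for (String g : grams) {
                int h = (g.hashCode() * (2 * j + 1) + j * 31) & 0x7fffffff;
                min = Math.min(min, h);
            }
            sig[j] = min;
        }
        return sig;
    }

    // Fraction of matching minima estimates the Jaccard resemblance.
    static double resemblance(int[] a, int[] b) {
        int same = 0;
        for (int j = 0; j < a.length; j++) if (a[j] == b[j]) same++;
        return (double) same / a.length;
    }

    public static void main(String[] args) {
        int[] s1 = signature(qgrams("main street", 2), 64);
        int[] s2 = signature(qgrams("maine street", 2), 64);
        int[] s3 = signature(qgrams("elm avenue", 2), 64);
        // Similar strings should score far higher than dissimilar ones.
        System.out.println(resemblance(s1, s2));
        System.out.println(resemblance(s1, s3));
    }
}
```

In the MHR-tree, an index node stores such a signature for the union of q-grams beneath it, so whole subtrees can be pruned when the estimated resemblance to the query string is too low.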
42.Preventing Private Information Inference Attacks on Social
Networks
Abstract:
Online social networks, such as Facebook, are increasingly utilized by
many people. These networks allow users to publish details about
themselves and to connect to their friends. Some of the information
revealed inside these networks is meant to be private. Yet it is possible to
use learning algorithms on released data to predict private information. In
this paper, we explore how to launch inference attacks using released
social networking data to predict private information. We then devise
three possible sanitization techniques that could be used in various
situations. Then, we explore the effectiveness of these techniques and
attempt to use methods of collective inference to discover sensitive
attributes of the data set. We show that we can decrease the effectiveness
of both local and relational classification algorithms by using the
sanitization methods we described.
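The relational classifiers the abstract refers to can be as simple as a majority vote over a user's friends. The toy sketch below is a hypothetical attacker of that kind, included only to show why sanitizing links and attribute details reduces inference accuracy; it is not the paper's algorithm.

```java
import java.util.*;

public class FriendInference {
    // Predict a user's hidden attribute by majority vote over the publicly
    // visible attribute values of their friends: a minimal relational
    // classifier of the kind link sanitization aims to defeat.
    static String predict(List<String> friendValues) {
        Map<String, Integer> counts = new HashMap<>();
        for (String v : friendValues) counts.merge(v, 1, Integer::sum);
        return counts.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse("unknown");
    }

    public static void main(String[] args) {
        // Three of four visible friends declare value "A".
        System.out.println(predict(Arrays.asList("A", "A", "B", "A"))); // prints "A"
    }
}
```

Removing friendship links or hiding friends' attribute values shrinks the vote, which is exactly the lever the sanitization techniques in the paper exploit.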
43.Multiparty Access Control for Online Social Networks Model and
mechanisms
ABSTRACT:
Online social networks (OSNs) have experienced tremendous growth in
recent years and become a de facto portal for hundreds of millions of
Internet users. These OSNs offer attractive means for digital social
interactions and information sharing, but also raise a number of security
and privacy issues. While OSNs allow users to restrict access to shared
data, they currently do not provide any mechanism to enforce privacy
concerns over data associated with multiple users. To this end, we
propose an approach to enable the protection of shared data associated
with multiple users in OSNs. We formulate an access control model to
capture the essence of multiparty authorization requirements, along with a
multiparty policy specification scheme and a policy enforcement
mechanism. Besides, we present a logical representation of our access
control model that allows us to leverage the features of existing logic
solvers to perform various analysis tasks on our model. We also discuss a
proof-of-concept prototype of our approach as part of an application in
Facebook and provide usability study and system evaluation of our
method.
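A multiparty authorization check of the kind described can be illustrated with one simple conflict-resolution strategy: a shared item is visible only if every associated controller permits the viewer. This is one of several possible resolution policies, shown only as a sketch, not the paper's specification scheme.

```java
import java.util.*;

public class MultipartyPolicy {
    // A shared item (e.g., a tagged photo) has several controllers, each
    // with its own allowed-viewer set. Under conservative conflict
    // resolution, a viewer gets access only if every controller permits it.
    static boolean canView(String viewer, List<Set<String>> controllerAllowLists) {
        for (Set<String> allowed : controllerAllowLists)
            if (!allowed.contains(viewer)) return false;
        return true;
    }

    public static void main(String[] args) {
        List<Set<String>> policies = Arrays.asList(
                new HashSet<>(Arrays.asList("alice", "bob")),
                new HashSet<>(Arrays.asList("bob", "carol")));
        System.out.println(canView("bob", policies));   // true: all controllers agree
        System.out.println(canView("alice", policies)); // false: one controller denies
    }
}
```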
44.Incentive Compatible Privacy-Preserving Data Analysis
Abstract:
In many cases, competing parties who have private data may
collaboratively conduct privacy-preserving distributed data analysis
(PPDA) tasks to learn beneficial data models or analysis results. Most
often, the competing parties have different incentives. Although certain
PPDA techniques guarantee that nothing other than the final analysis
result is revealed, it is impossible to verify whether participating parties
are truthful about their private input data. Unless proper incentives are
set, current PPDA techniques cannot prevent participating parties from
modifying their private inputs. This raises the question of how to design
incentive compatible privacy-preserving data analysis techniques that
motivate participating parties to provide truthful inputs. In this paper, we
first develop key theorems, then base on these theorems, we analyze
certain important privacy-preserving data analysis tasks that could be
conducted in a way that telling the truth is the best choice for any
participating party.
SECURE COMPUTING
45.EAACK—A Secure Intrusion-Detection System for MANETs
Abstract:
The migration from wired to wireless networks has been a global trend in
the past few decades. The mobility and scalability brought by wireless
networks have made them possible in many applications. Among all the
contemporary wireless networks, the Mobile Ad hoc NETwork (MANET) is
one of the most important and unique applications. In contrast to
traditional network architecture, MANET does not require a fixed
network infrastructure; every single node works as both a transmitter and
a receiver. Nodes communicate directly with each other when they are
both within the same communication range. Otherwise, they rely on their
neighbors to relay messages. The self-configuring ability of nodes in
MANET made it popular among critical mission applications like
military use or emergency recovery. However, the open medium and
wide distribution of nodes make MANET vulnerable to malicious
attackers. In this case, it is crucial to develop efficient intrusion-detection
mechanisms to protect MANET from attacks. With the improvements of
the technology and cut in hardware costs, we are witnessing a current
trend of expanding MANETs into industrial applications. To adjust to
such trend, we strongly believe that it is vital to address its potential
security issues. In this paper, we propose and implement a new intrusion-
detection system named Enhanced Adaptive Acknowledgment (EAACK)
specially designed for MANETs. Compared to contemporary approaches,
EAACK demonstrates higher malicious-behavior-detection rates in
certain circumstances while not greatly affecting network performance.
46.Identity-Based Secure Distributed Data Storage Schemes
Abstract:
Secure distributed data storage can shift the burden of maintaining a large
number of files from the owner to proxy servers. Proxy servers can
convert encrypted files for the owner to encrypted files for the receiver
without the necessity of knowing the content of the original files. In
practice, the original files will be removed by the owner for the sake of
space efficiency. Hence, the issues on confidentiality and integrity of the
outsourced data must be addressed carefully. In this paper, we propose
two identity-based secure distributed data storage (IBSDDS) schemes.
Our schemes can capture the following properties: (1) The file owner can
decide the access permission independently without the help of the
private key generator (PKG); (2) For one query, a receiver can only
access one file, instead of all files of the owner; (3) Our schemes are
secure against the collusion attacks, namely even if the receiver can
compromise the proxy servers, he cannot obtain the owner’s secret key.
Although the first scheme is only secure against the chosen plaintext
attacks (CPA), the second scheme is secure against the chosen cipher text
attacks (CCA). To the best of our knowledge, these are the first IBSDDS
schemes in which access permission is granted by the owner for an exact
file and collusion attacks are resisted in the standard model.
47.Modeling the Pair-wise Key Pre-distribution Scheme in the Presence of Unreliable Links
Abstract:
We investigate the secure connectivity of wireless sensor networks under
the random pairwise key predistribution scheme of Chan, Perrig, and
Song. Unlike recent work carried out under the assumption of full
visibility, here we assume a (simplified) communication model where
unreliable wireless links are represented as independent on/off
channels. We present conditions on how to scale the model parameters so
that the network 1) has no secure node that is isolated and 2) is securely
connected, both with high probability, when the number of sensor nodes
becomes large. The results are given in the form of zero-one laws, and
exhibit significant differences with corresponding results in the full-
visibility case. Through simulations, these zero-one laws are shown to
also hold under a more realistic communication model, namely the disk
model.
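The on/off channel model above can be exercised with a small Monte Carlo sketch: pair each node with K random partners (one unique key per pair), keep each link up independently with probability p, and count securely isolated nodes. The parameters below are illustrative and do not reproduce the paper's scaling regime.

```java
import java.util.*;

public class PairwiseOnOff {
    // Simulate the random pairwise key predistribution scheme under
    // independent on/off links and count nodes with no usable secure link.
    static int isolatedNodes(int n, int K, double p, long seed) {
        Random rnd = new Random(seed);
        boolean[][] keyed = new boolean[n][n];
        for (int u = 0; u < n; u++)
            for (int j = 0; j < K; j++) {
                int v = rnd.nextInt(n);               // random partner
                if (v != u) { keyed[u][v] = true; keyed[v][u] = true; }
            }
        int isolated = 0;
        for (int u = 0; u < n; u++) {
            boolean hasLink = false;
            for (int v = 0; v < n && !hasLink; v++)
                if (keyed[u][v] && rnd.nextDouble() < p) hasLink = true; // link is "on"
            if (!hasLink) isolated++;
        }
        return isolated;
    }

    public static void main(String[] args) {
        System.out.println(isolatedNodes(200, 8, 0.5, 7));
    }
}
```

The zero-one laws in the paper say how K must scale with n and p so that this count is zero with high probability as n grows.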
48.NICE: Network Intrusion Detection and Countermeasure Selection in Virtual Network Systems
Abstract:
Cloud security is one of the most important issues that have attracted a lot
of research and development effort in the past few years. Particularly, attackers
can explore vulnerabilities of a cloud system and compromise virtual
machines to deploy further large-scale Distributed Denial-of-Service
(DDoS). DDoS attacks usually involve early stage actions such as multi-
step exploitation, low frequency vulnerability scanning, and
compromising identified vulnerable virtual machines as zombies, and
finally DDoS attacks through the compromised zombies. Within the
cloud system, especially the Infrastructure-as-a-Service (IaaS) clouds, the
detection of zombie exploration attacks is extremely difficult. This is
because cloud users may install vulnerable applications on their virtual
machines. To prevent vulnerable virtual machines from being
compromised in the cloud, we propose a multi-phase distributed
vulnerability detection, measurement, and countermeasure selection
mechanism called NICE, which is built on attack graph based analytical
models and reconfigurable virtual network-based countermeasures. The
proposed framework leverages Open Flow network programming APIs to
build a monitor and control plane over distributed programmable virtual
switches in order to significantly improve attack detection and mitigate
attack consequences. The system and security evaluations demonstrate
the efficiency and effectiveness of the proposed solution.
49.Privacy Preserving Data Sharing With Anonymous ID Assignment
Abstract:
An algorithm for anonymous sharing of private data among N parties is
developed. The technique is used iteratively to assign the parties ID
numbers ranging from 1 to N. This assignment is anonymous in that the
identities received are unknown to the other members of the group.
Resistance to collusion among other members is verified in an
information theoretic sense when private communication channels are
used. This assignment of serial numbers allows more complex data to be
shared and has applications to other problems in privacy preserving data
mining, collision avoidance in communications and distributed database
access. The required computations are distributed without using a trusted
central authority. Existing and new algorithms for assigning anonymous
IDs are examined with respect to trade-offs between communication and
computational requirements. The new algorithms are built on top of a
secure sum data mining operation using Newton’s identities and Sturm’s
theorem. An algorithm for distributed solution of certain polynomials
over finite fields enhances the scalability of the algorithms. Markov chain
representations are used to find statistics on the number of iterations
required, and computer algebra gives closed form results for the
completion rates.
50.Securing Class Initialization in Java-like Languages
Abstract:
Language-based information-flow security is concerned with specifying
and enforcing security policies for information flow via language
constructs. Although much progress has been made on understanding
information flow in object-oriented programs, little attention has been
given to the impact of class initialization on information flow. This paper
turns the spotlight on security implications of class initialization. We
reveal the subtleties of information propagation when classes are
initialized, and demonstrate how these flows can be exploited to leak
information through error recovery. Our main contribution is a type-and-
effect system which tracks these information flows. The type system is
parameterized by an arbitrary lattice of security levels. Flows through the
class hierarchy and dependencies in field initializers are tracked by typing
class initializers wherever they could be executed. The contexts in which
each class can be initialized are tracked to prevent insecure flows of out-
of-scope contextual information through class initialization statuses and
error recovery. We show that the type system enforces termination-
insensitive noninterference.
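The leak through class initialization and error recovery can be demonstrated directly in Java: a class whose static initializer fails depending on a secret turns its initialization status into one observable bit. This is a minimal illustrative example of the flow the paper's type system rejects, not code from the paper.

```java
public class InitLeak {
    static int secret = 1; // the "high" value an attacker should not learn

    // A class whose initialization fails iff the secret is nonzero: the
    // class's initialization *status* now stores one bit of the secret.
    static class Probe {
        static { if (secret != 0) throw new RuntimeException("init failed"); }
        static int use() { return 0; }
    }

    // Error recovery turns that status into an observable public result:
    // the first failed access throws ExceptionInInitializerError, and
    // later accesses throw NoClassDefFoundError.
    static boolean leakedBit() {
        try {
            Probe.use();
            return false;
        } catch (ExceptionInInitializerError | NoClassDefFoundError e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(leakedBit()); // prints "true": the secret bit escaped
    }
}
```

Tracking the contexts in which each class may be initialized, as the type-and-effect system does, is what rules out programs of this shape.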
51.Security Analysis of a Single Sign-On Mechanism for Distributed Computer Networks
Abstract:
Single sign-on (SSO) is an authentication mechanism that enables
a legal user with a single credential to be authenticated by multiple
service providers in a distributed computer network. Recently, Chang and
Lee proposed a new SSO scheme and claimed its security by providing
well-organized security arguments. In this paper, however, we
demonstrate that their scheme is actually insecure as it fails to meet
credential privacy and soundness of authentication. Specifically, we
present two impersonation attacks. The first attack allows a malicious
service provider, who has successfully communicated with a legal user
twice, to recover the user’s credential and then to impersonate the user to
access resources and services offered by other service providers. In
another attack, an outsider without any credential may be able to enjoy
network services freely by impersonating any legal user or a nonexistent
user. We identify the flaws in their security arguments to explain why
attacks are possible against their SSO scheme. Our attacks also apply to
another SSO scheme proposed by Hsu and Chuang, which inspired the
design of the Chang–Lee scheme. Moreover, by employing an efficient
verifiable encryption of RSA signatures proposed by Ateniese, we
propose an improvement for repairing the Chang–Lee scheme. We
promote the formal study of the soundness of authentication as one open
problem.
52.SORT: A Self-Organizing Trust Model for Peer-to-Peer Systems
Abstract:
The open nature of peer-to-peer systems exposes them to malicious activity.
Building trust relationships among peers can mitigate attacks of malicious
peers. This paper presents distributed algorithms that enable a peer to
reason about trustworthiness of other peers based on past interactions and
recommendations. Peers create their own trust network in their proximity
by using local information available and do not try to learn global trust
information. Two contexts of trust, the service and recommendation contexts,
are defined to measure trustworthiness in providing services and giving
recommendations. Interactions and recommendations are evaluated based
on importance, recentness, and peer satisfaction parameters. Additionally,
recommender’s trustworthiness and confidence about a recommendation
are considered while evaluating recommendations. Simulation
experiments on a file sharing application show that the proposed model
can mitigate attacks on 16 different malicious behavior models. In the
experiments, good peers were able to form trust relationships in their
proximity and isolate malicious peers.
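The evaluation of interactions and recommendations described above can be sketched with a recency-weighted satisfaction average and a combination rule that discounts recommendations by the recommender's own trustworthiness. The weights below are illustrative placeholders, not SORT's exact formulas.

```java
public class TrustMetric {
    // Recency-weighted average of satisfaction scores in [0, 1];
    // newer interactions (later in the array) weigh more.
    static double recencyWeighted(double[] satisfaction, double fade) {
        double num = 0, den = 0, w = 1;
        for (int i = satisfaction.length - 1; i >= 0; i--) { // newest last
            num += w * satisfaction[i];
            den += w;
            w *= fade; // older interactions fade out
        }
        return den == 0 ? 0 : num / den;
    }

    // Combine the service context with a recommendation discounted by the
    // recommender's trustworthiness (0.7 / 0.3 split is illustrative).
    static double overallTrust(double serviceTrust, double recTrust, double recommenderTrust) {
        return 0.7 * serviceTrust + 0.3 * recommenderTrust * recTrust;
    }

    public static void main(String[] args) {
        double s = recencyWeighted(new double[]{0.2, 0.9, 1.0}, 0.5);
        System.out.println(overallTrust(s, 0.8, 0.6));
    }
}
```

Because each peer computes such metrics only from its own interactions and its neighbors' recommendations, no global trust information needs to be learned, matching the local design described in the abstract.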
53.WARNINGBIRD: A Near Real-time Detection System for Suspicious URLs in Twitter Stream
Abstract:
Twitter is prone to malicious tweets containing URLs for spam, phishing,
and malware distribution. Conventional Twitter spam detection schemes
utilize account features such as the ratio of tweets containing URLs and
the account creation date, or relation features in the Twitter graph. These
detection schemes are ineffective against feature fabrications or consume
much time and resources. Conventional suspicious URL detection
schemes utilize several features including lexical features of URLs, URL
redirection, HTML content, and dynamic behavior. However, evading
techniques such as time-based evasion and crawler evasion exist. In this
paper, we propose WARNINGBIRD, a suspicious URL detection system
for Twitter. Our system investigates correlations of URL redirect chains
extracted from several tweets. Because attackers have limited resources
and usually reuse them, their URL redirect chains frequently share the
same URLs. We develop methods to discover correlated URL redirect
chains using the frequently shared URLs and to determine their
suspiciousness. We collect numerous tweets from the Twitter public
timeline and build a statistical classifier using them. Evaluation results
show that our classifier accurately and efficiently detects suspicious
URLs. We also present WARNINGBIRD as a near real-time system for
classifying suspicious URLs in the Twitter stream.
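The core observation, that attackers reuse URLs across redirect chains, can be sketched by counting how many distinct chains each URL appears in and flagging heavily shared ones. This is only the correlation step in miniature; the real system feeds such features into a statistical classifier.

```java
import java.util.*;

public class RedirectCorrelation {
    // Count, for each URL, how many distinct redirect chains it appears in;
    // frequently shared URLs are candidate entry points attackers reuse.
    static Map<String, Integer> sharedUrlCounts(List<List<String>> chains) {
        Map<String, Integer> counts = new HashMap<>();
        for (List<String> chain : chains)
            for (String url : new HashSet<>(chain)) // count once per chain
                counts.merge(url, 1, Integer::sum);
        return counts;
    }

    // Flag URLs shared by at least minChains chains as suspicious.
    static Set<String> suspicious(List<List<String>> chains, int minChains) {
        Set<String> out = new TreeSet<>();
        for (Map.Entry<String, Integer> e : sharedUrlCounts(chains).entrySet())
            if (e.getValue() >= minChains) out.add(e.getKey());
        return out;
    }

    public static void main(String[] args) {
        // Hypothetical chains: two distinct shortened URLs funnel through one hub.
        List<List<String>> chains = Arrays.asList(
                Arrays.asList("t.co/a", "evil.example/hub", "spam1.example"),
                Arrays.asList("t.co/b", "evil.example/hub", "spam2.example"),
                Arrays.asList("t.co/c", "benign.example"));
        System.out.println(suspicious(chains, 2)); // prints [evil.example/hub]
    }
}
```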
54.Two tales of privacy in online social networks
Abstract:
Privacy is one of the friction points that emerges when communications
get mediated in Online Social Networks (OSNs). Different communities
of computer science researchers have framed the ‘OSN privacy problem’
as one of surveillance, institutional or social privacy. In tackling these
problems they have also treated them as if they were independent. We
argue that the different privacy problems are entangled and that research
on privacy in OSNs would benefit from a more holistic approach. In this
article, we first provide an introduction to the surveillance and social
privacy perspectives emphasizing the narratives that inform them, as well
as their assumptions, goals and methods. We then juxtapose the
differences between these two approaches in order to understand their
complementarily, and to identify potential integration challenges as well
as research questions that so far have been left unanswered.
55.Secure Encounter-based Mobile Social Networks: Requirements, Designs, and Tradeoffs
Abstract:
Encounter-based social networks and encounter-based systems link users
who share a location at the same time, as opposed to the traditional social
network paradigm of linking users who have an offline friendship. This
new approach presents challenges that are fundamentally different from
those tackled by previous social network designs. In this paper, we
explore the functional and security requirements for these new systems,
such as availability, security, and privacy, and present several design
options for building secure encounter-based social networks. To highlight
these challenges we examine one recently proposed encounter-based
social network design and compare it to a set of idealized security and
functionality requirements. We show that it is vulnerable to several
attacks, including impersonation, collusion, and privacy breaching, even
though it was designed specifically for security. Mindful of the possible
pitfalls, we construct a flexible framework for secure encounter-based
social networks, which can be used to construct networks that offer
different security, privacy, and availability guarantees. We describe two
example constructions derived from this framework, and consider each in
terms of the ideal requirements. Some of our new designs fulfill more
requirements in terms of system security, reliability, and privacy than
previous work. We also evaluate real-world performance of one of our
designs by implementing a proof-of-concept iPhone application called
MeetUp. Experiments highlight the potential of our system and hint at the
deployability of our designs on a large scale.
56.Twitsper: Tweeting Privately
Abstract:
While OSNs today provide some form of privacy controls to protect a
user’s shared content from other users, these controls are not sufficiently
expressive to provide fine grained protection. In this article, we introduce
Twitsper, to support fine-grained control over who sees a user’s
messages. Twitsper provides privacy controls to the users of Twitter
today without relying on Twitter to make changes. This is because it is a
wrapper around Twitter that enables private group communication while
preserving Twitter’s commercial interests. It preserves privacy both from
the Twitsper server as well as from undesired Twitsper users.
57.Combining Cryptographic Primitives to Prevent Jamming Attacks in Wireless Networks
Abstract:
The open nature of the wireless medium leaves it vulnerable to intentional
interference attacks, typically referred to as jamming. This intentional
interference with wireless transmissions can serve as a launch pad for
mounting Denial-of-Service attacks on wireless networks. Typically,
jamming has been addressed under an
external threat model. However, adversaries with internal knowledge of
protocol specification and network secrets can launch low-effort jamming
attacks that are difficult to detect and counter. In this work, we address
the problem of jamming attacks in which the adversary is active for a short
period of time, selectively targeting messages of high importance. We show
that the selective jamming attacks can be launched by performing real-
time packet classification at the physical layer. To mitigate these attacks,
we develop three schemes that prevent real time packet classification by
combining cryptographic primitives with physical-layer attributes. They
are Strong Hiding Commitment Schemes (SHCS), Cryptographic Puzzles
Hiding Schemes (CPHS), and All- Or-Nothing Transformation Hiding
Schemes (AONTSHS). Random key distribution methods are done along
with three schemes to give more secured packet transmission in wireless
networks.
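The commitment primitive behind SHCS can be sketched with a hash commitment: the sender first broadcasts H(key || message), so a jammer cannot classify the packet in real time, and releases the key only after transmission so receivers can verify the opening. This is a minimal sketch of the primitive only, not the full packet-hiding scheme.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

public class HidingCommitment {
    // Commit to a message under a temporary key: SHA-256 over key || message.
    static byte[] commit(byte[] key, byte[] message) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(key);
            md.update(message);
            return md.digest();
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }

    // After the key is released, receivers check the opening against the
    // earlier commitment.
    static boolean verify(byte[] commitment, byte[] key, byte[] message) {
        return Arrays.equals(commitment, commit(key, message));
    }

    public static void main(String[] args) {
        byte[] key = "k1".getBytes(StandardCharsets.UTF_8);
        byte[] msg = "route-update".getBytes(StandardCharsets.UTF_8);
        byte[] c = commit(key, msg);
        System.out.println(verify(c, key, msg));                                       // true
        System.out.println(verify(c, key, "forged".getBytes(StandardCharsets.UTF_8))); // false
    }
}
```

Binding and hiding here rest on the hash function; the paper's schemes additionally tie commitment opening to physical-layer timing so classification cannot finish before the packet does.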
58.Cross-Domain Privacy-Preserving Cooperative Firewall Optimization
Abstract:
Firewalls have been widely deployed on the Internet for securing private
networks. A firewall checks each incoming or outgoing packet to decide
whether to accept or discard the packet based on its policy. Optimizing
firewall policies is crucial for improving network performance. Prior
work on firewall optimization focuses on either intra-firewall or inter
firewall optimization within one administrative domain where the privacy
of firewall policies is not a concern. This paper explores inter firewall
optimization across administrative domains for the first time. The key
technical challenge is that firewall policies cannot be shared across
domains because a firewall policy contains confidential information and
even potential security holes, which can be exploited by attackers. In this
paper, we propose the first cross-domain privacy-preserving cooperative
firewall policy optimization protocol. Specifically, for any two adjacent
firewalls belonging to two different administrative domains, our protocol
can identify in each firewall the rules that can be removed because of the
other firewall. The optimization process involves cooperative
computation between the two firewalls without any party disclosing its
policy to the other. We implemented our protocol and conducted
extensive experiments. The results on real firewall policies show that our
protocol can remove as many as 49% of the rules in a firewall, whereas
the average is 19.4%. The communication cost is less than a few hundred
kilobytes. Our protocol incurs no extra online packet processing
overhead, and the offline processing time is less than a few hundred
seconds.
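The kind of redundancy the protocol identifies can be illustrated with a toy one-dimensional model: a downstream discard rule is removable when its range is fully covered by the upstream firewall's discard ranges, because those packets never reach it. In the sketch both policies are visible; the paper's contribution is doing this check without either side disclosing its policy.

```java
import java.util.*;

public class FirewallRedundancy {
    // 1-D model: each rule discards an inclusive port range [lo, hi].
    // Returns true if [lo, hi] is fully covered by the upstream firewall's
    // discard ranges, i.e. the downstream rule is removable.
    static boolean covered(int lo, int hi, int[][] upstreamDiscards) {
        int[][] iv = upstreamDiscards.clone();
        Arrays.sort(iv, (x, y) -> Integer.compare(x[0], y[0]));
        int need = lo; // smallest value not yet known to be covered
        for (int[] r : iv) {
            if (r[0] > need) break;            // gap before `need`: not covered
            if (r[1] >= need) need = r[1] + 1; // extend coverage
            if (need > hi) return true;
        }
        return need > hi;
    }

    public static void main(String[] args) {
        int[][] fw1 = {{0, 99}, {100, 200}};
        System.out.println(covered(50, 150, fw1));  // true: rule removable
        System.out.println(covered(150, 300, fw1)); // false: partly reachable
    }
}
```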
Networking
59.A Highly Scalable Key Pre-Distribution Scheme for Wireless Sensor Networks
Abstract:
Given the sensitivity of the potential WSN applications and because of
resource limitations, key management emerges as a challenging issue for
WSNs. One of the main concerns when designing a key management
scheme is the network scalability. Indeed, the protocol should support a
large number of nodes to enable a large scale deployment of the network.
In this paper, we propose a new scalable key management scheme for
WSNs which provides a good secure connectivity coverage. For this
purpose, we make use of the unital design theory. We show that the basic
mapping from unitals to key pre-distribution allows us to achieve high
network scalability. Nonetheless, this naive mapping does not guarantee a
high key sharing probability. Therefore, we propose an enhanced unital-
based key pre-distribution scheme providing high network scalability and
good key sharing probability, approximately lower bounded by
1 − e^(−1) ≈ 0.632. We conduct approximate analysis and simulations and compare
our solution to those of existing methods for different criteria such as
storage overhead, network scalability, network connectivity, average
secure path length and network resiliency. Our results show that the
proposed approach enhances the network scalability while providing high
secure connectivity coverage and overall improved performance.
Moreover, for an equal network size, our solution reduces significantly
the storage overhead compared to those of existing solutions.
60.Delay-Based Network Utility Maximization
Abstract:
It is well known that max-weight policies based on a queue backlog index
can be used to stabilize stochastic networks, and that similar stability
results hold if a delay index is used. Using Lyapunov optimization, we
extend this analysis to design a utility maximizing algorithm that uses
explicit delay information from the head-of-line packet at each user. The
resulting policy is shown to ensure deterministic worst-case delay
guarantees and to yield a throughput utility that differs from the optimally
fair value by an amount that is inversely proportional to the delay
guarantee. Our results hold for a general class of 1-hop networks,
including packet switches.
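The head-of-line-delay index policy described above can be sketched for a single server: each slot, serve the user maximizing head-of-line waiting time times current service rate. This is a minimal illustration of the delay-based index; the paper's algorithm adds the Lyapunov-based utility maximization around it.

```java
public class DelayMaxWeight {
    // Pick the user maximizing (head-of-line waiting time) x (service rate).
    // holArrival[i] is the arrival time of user i's head-of-line packet,
    // or -1 if the queue is empty. Returns -1 if all queues are empty.
    static int pick(long now, long[] holArrival, double[] rate) {
        int best = -1;
        double bestWeight = -1;
        for (int i = 0; i < rate.length; i++) {
            if (holArrival[i] < 0) continue; // empty queue
            double w = (now - holArrival[i]) * rate[i];
            if (w > bestWeight) { bestWeight = w; best = i; }
        }
        return best;
    }

    public static void main(String[] args) {
        long[] hol = {2, 0, -1};          // head-of-line arrival times
        double[] rate = {1.0, 0.6, 1.0};  // current per-user service rates
        System.out.println(pick(10, hol, rate)); // user 0: weight 8 beats 6
    }
}
```

Because the index grows with waiting time, no head-of-line packet can be starved indefinitely, which is the intuition behind the deterministic worst-case delay guarantees.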
61.Dynamic Control of Coding for Progressive Packet Arrivals in DTNs
Abstract:
In Delay Tolerant Networks (DTNs) the core challenge is to cope with
lack of persistent connectivity and yet be able to deliver messages from
source to destination. In particular, routing schemes that leverage relays’
memory and mobility are a customary solution in order to improve
message delivery delay. When large files need to be transferred from
source to destination, not all packets may be available at the source prior
to the first transmission. This motivates us to study general packet
arrivals at the source, derive performance analysis of replication based
routing policies and study their optimization under two hop routing. In
particular, we determine the conditions for optimality in terms of
probability of successful delivery and mean delay and we devise optimal
policies, so-called piecewise-threshold policies. We account for linear
block codes and rateless random linear coding to efficiently generate
redundancy, as well as for an energy constraint in the optimization. We
numerically assess the higher efficiency of piecewise-threshold policies
compared with other policies by developing heuristic optimization of the
thresholds for all flavors of coding considered.
62.Minimum Cost Blocking Problem in Multi-path Wireless Routing Protocols
Abstract:
We present a class of Minimum Cost Blocking (MCB) problems in
Wireless Mesh Networks (WMNs) with multi-path wireless routing
protocols. We establish the provable superiority of multi-path routing
protocols over conventional protocols against blocking, node-isolation
and network-partitioning type attacks. In our attack model, an adversary
is considered successful if he is able to capture/isolate a subset of nodes
such that no more than a certain amount of traffic from source nodes
reaches the gateways. Two scenarios, viz. (a) low mobility for network
nodes, and (b) high degree of node mobility, are evaluated. Scenario (a) is
proven to be NP-hard and scenario (b) is proven to be #P-hard for the
adversary to realize the goal. Further, several approximation algorithms
are presented which show that even in the best case scenario it is at least
exponentially hard for the adversary to optimally succeed in such
blocking-type attacks. These results are verified through simulations
which demonstrate the robustness of multi-path routing protocols against
such attacks. To the best of our knowledge, this is the first work that
theoretically evaluates the attack-resiliency and performance of multi-
path protocols with network node mobility.
63.On the Node Clone Detection in Wireless Sensor Networks
Abstract:
Wireless sensor networks are vulnerable to the node clone, and several
distributed protocols have been proposed to detect this attack. However,
they require too strong assumptions to be practical for large-scale,
randomly deployed sensor networks. In this paper, we propose two novel
node clone detection protocols with different tradeoffs on network
conditions and performance. The first one is based on a distributed hash
table (DHT), by which a fully decentralized, key-based caching and
checking system is constructed to catch cloned nodes effectively. The
protocol performance on efficient storage consumption and high security
level is theoretically deduced through a probability model, and the
resulting equations, with necessary adjustments for real application, are
supported by the simulations. Although the DHT-based protocol incurs
similar communication cost as previous approaches, it may be considered
a little high for some scenarios. To address this concern, our second
distributed detection protocol, named randomly directed exploration,
presents good communication performance for dense sensor networks, by
a probabilistic directed forwarding technique along with random initial
direction and border determination. The simulation results uphold the
protocol design and show its efficiency on communication overhead and
satisfactory detection probability.
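The witness principle behind the DHT-based protocol can be sketched as follows: location claims for a node ID are routed (by hashing the ID) to the same witness, so two clones announcing one ID from different locations collide there. A single map stands in for the distributed hash table in this simplification.

```java
import java.util.*;

public class CloneDetection {
    // Witness state: first recorded location claim per node ID. In the real
    // protocol this table is partitioned across nodes by a DHT.
    static final Map<String, String> witness = new HashMap<>();

    // Returns true if this claim reveals a clone: the same ID was already
    // claimed from a different location.
    static boolean reportClaim(String id, String location) {
        String seen = witness.putIfAbsent(id, location);
        return seen != null && !seen.equals(location);
    }

    public static void main(String[] args) {
        System.out.println(reportClaim("node-17", "cell-A")); // false: first claim
        System.out.println(reportClaim("node-17", "cell-B")); // true: clone detected
    }
}
```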
64.Opportunistic MANETs: Mobility Can Make Up for Low Transmission Power
Abstract:
Opportunistic mobile ad hoc networks (MANETs) are a special class of
sparse and disconnected MANETs where data communication exploits
sporadic contact opportunities among nodes. We consider opportunistic
MANETs where nodes move independently at random over a square of
the plane. Nodes exchange data if they are at distance at most r from each
other, where r is the node transmission radius. The flooding time is
the number of time-steps required to broadcast a message from a source
node to every node of the network. Flooding time is an important
measure of how fast information can spread in dynamic networks. We
derive the first upper bound on the flooding time, which is a decreasing
function of the maximal speed of the nodes. The bound holds with high
probability, and it is nearly tight. Our bound shows that, thanks to node
mobility, even when the network is sparse and disconnected, information
spreading can be fast.
65.Back-Pressure-Based Packet-by-Packet Adaptive Routing in Communication Networks
Abstract:
Back-pressure-based adaptive routing algorithms where each packet is
routed along a possibly different path have been extensively studied in
the literature. However, such algorithms typically result in poor delay
performance and involve high implementation complexity. In this paper,
we develop a new adaptive routing algorithm built upon the widely
studied back-pressure algorithm. We decouple the routing and scheduling
components of the algorithm by designing a probabilistic routing table
that is used to route packets to per-destination queues. The scheduling
decisions in the case of wireless networks are made using counters called
shadow queues. The results are also extended to the case of networks that
employ simple forms of network coding. In that case, our algorithm
provides a low-complexity solution to optimally exploit the routing–
coding tradeoff.
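As an illustrative sketch only (class and method names are hypothetical, and the paper's shadow-queue scheduling is not shown), routing a packet via a probabilistic routing table amounts to sampling a next hop from a per-destination distribution:

```java
import java.util.Map;
import java.util.Random;

/** Sketch of routing via a probabilistic routing table (hypothetical names). */
public class ProbabilisticRouter {
    // probs.get(destination) maps each candidate next hop to a probability; entries sum to 1.
    private final Map<String, Map<String, Double>> probs;
    private final Random rng;

    public ProbabilisticRouter(Map<String, Map<String, Double>> probs, Random rng) {
        this.probs = probs;
        this.rng = rng;
    }

    /** Samples a next hop for a packet bound for `destination`. */
    public String route(String destination) {
        Map<String, Double> dist = probs.get(destination);
        double u = rng.nextDouble(), cum = 0.0;
        String last = null;
        for (Map.Entry<String, Double> e : dist.entrySet()) {
            cum += e.getValue();
            last = e.getKey();
            if (u < cum) return e.getKey();
        }
        return last; // guard against floating-point rounding
    }
}
```

Sampled routes feed packets into per-destination queues; scheduling then operates on those queues separately.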
66.Fast Transmission to Remote Cooperative Groups: A New Key Management Paradigm
Abstract:
The problem of efficiently and securely broadcasting to a remote
cooperative group occurs in many newly emerging networks. A major
challenge in devising such systems is to overcome the obstacles of the
potentially limited communication from the group to the sender, the
unavailability of a fully trusted key generation center, and the dynamics
of the sender. The existing key management paradigms cannot deal with
these challenges effectively. In this paper, we circumvent these obstacles
and close this gap by proposing a novel key management paradigm. The
new paradigm is a hybrid of traditional broadcast encryption and group
key agreement. In such a system, each member maintains a single
public/secret key pair. Upon seeing the public keys of the members, a
remote sender can securely broadcast to any intended subgroup chosen in
an ad hoc way. Following this model, we instantiate a scheme that is
proven secure in the standard model. Even if all the non-intended
members collude, they cannot extract any useful information from the
transmitted messages. After the public group encryption key is extracted,
both the computation overhead and the communication cost are
independent of the group size. Furthermore, our scheme facilitates simple
yet efficient member deletion/addition and flexible rekeying strategies.
Its strong security against collusion, its constant overhead, and its
implementation friendliness without relying on a fully trusted authority
render our protocol a very promising solution to many applications.
67.Participatory Privacy: Enabling Privacy in Participatory Sensing
Abstract:
Participatory sensing is an emerging computing paradigm that enables the
distributed collection of data by self-selected participants. It allows the
increasing number of mobile phone users to share local knowledge
acquired by their sensor-equipped devices (e.g., to monitor temperature,
pollution level, or consumer pricing information). While research
initiatives and prototypes proliferate, their real-world impact often
hinges on comprehensive user participation. If users have no incentive,
or feel that their privacy might be endangered, it is likely that they will
not participate. In this article, we focus on privacy protection in
participatory sensing and introduce a suitable privacy-enhanced
infrastructure. First, we provide a set of definitions of privacy
requirements for both data producers (i.e., users providing sensed
information) and consumers (i.e., applications accessing the data). Then
we propose an efficient solution designed for mobile phone users, which
incurs very low overhead. Finally, we discuss a number of open problems
and possible research directions.
68.Using Fuzzy Logic Control to Provide Intelligent Traffic Management Service for High-Speed Networks
Abstract:
In view of the fast-growing Internet traffic, this paper proposes a
distributed traffic management framework, in which routers are deployed
with intelligent data rate controllers to tackle the traffic mass. Unlike
other explicit traffic control protocols that have to estimate network
parameters (e.g., link latency, bottleneck bandwidth, packet loss rate, or
the number of flows) in order to compute the allowed source sending rate,
our fuzzy-logic-based controller can measure the router queue size
directly; hence it avoids various potential performance problems arising
from parameter estimation while also reducing the computation and
memory resources consumed in routers. As a network parameter,
the queue size can be accurately monitored and used to proactively decide
if action should be taken to regulate the source sending rate, thus
increasing the resilience of the network to traffic congestion. The
communication QoS (Quality of Service) is assured by the good
performance of our scheme, such as max-min fairness, low queueing
delay, and robustness to network dynamics. Simulation results and
comparisons have verified its effectiveness and showed that our new
traffic management scheme achieves better performance than existing
protocols that rely on the estimation of network parameters.
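To make the queue-size-driven idea concrete, here is a minimal sketch of a fuzzy rate controller (the membership functions, thresholds, and names are assumptions for illustration, not the paper's exact rule base): the deviation of the measured queue from a target is fuzzified into "too long"/"too short" sets, and defuzzification yields a multiplicative adjustment to the allowed source rate.

```java
/** Sketch of a queue-size-driven fuzzy rate controller (hypothetical rules/names). */
public class FuzzyRateController {
    private final double targetQueue;  // desired router queue size (packets)
    private final double maxDev;       // deviation at which membership saturates

    public FuzzyRateController(double targetQueue, double maxDev) {
        this.targetQueue = targetQueue;
        this.maxDev = maxDev;
    }

    /** Triangular membership of the deviation in the "queue too long" set. */
    double tooLong(double dev)  { return clamp(dev / maxDev); }
    /** Membership in the "queue too short" set. */
    double tooShort(double dev) { return clamp(-dev / maxDev); }
    static double clamp(double v) { return Math.max(0, Math.min(1, v)); }

    /**
     * Returns a multiplicative adjustment to the allowed source rate:
     * decrease when the measured queue exceeds the target, increase otherwise.
     */
    public double rateFactor(double measuredQueue) {
        double dev = measuredQueue - targetQueue;
        // Defuzzify: weighted average of the "halve" (0.5) and "boost" (1.5) actions.
        double wLong = tooLong(dev), wShort = tooShort(dev), wOk = 1 - wLong - wShort;
        return 0.5 * wLong + 1.5 * wShort + 1.0 * wOk;
    }
}
```

Note that the controller needs only the measured queue size, not link latency, bottleneck bandwidth, loss rate, or flow counts.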
PARALLEL & DISTRIBUTED SYSTEMS
69.A Secure Protocol for Spontaneous Wireless Ad Hoc Networks Creation
Abstract:
This paper presents a secure protocol for spontaneous wireless ad hoc
networks which uses a hybrid symmetric/asymmetric scheme and the
trust between users in order to exchange the initial data and to exchange
the secret keys that will be used to encrypt the data. Trust is based on the
first visual contact between users. Our proposal is a complete self-
configured secure protocol that is able to create the network and share
secure services without any infrastructure. The network allows sharing
resources and offering new services among users in a secure
environment. The protocol includes all functions needed to operate
without any external support. We have designed and developed it for
devices with limited resources. Network creation stages are detailed, and
the communication, protocol messages, and network management are
explained. Our proposal has been implemented in order to test the
protocol procedure and performance. Finally, we compare the protocol
with other spontaneous ad hoc network protocols in order to highlight its
features and we provide a security analysis of the system.
70.Security Analysis of a Privacy-Preserving Decentralized Key-Policy Attribute-Based Encryption Scheme
Abstract:
In a decentralized attribute-based encryption (ABE) system, any party can
act as an authority by creating a public key and issuing private keys to
different users that reflect their attributes without any collaboration. Such
an ABE scheme can eliminate the burden of heavy communication and
collaborative computation in the setup phase of multi-authority ABE
schemes, and is thus considered preferable. Recently, in IEEE Trans.
Parallel Distrib. Syst., Han et al. [3] proposed an interesting privacy-
preserving decentralized key-policy ABE scheme, which was claimed to
achieve better privacy for users and to be provably secure in the standard
model. However, after carefully revisiting the scheme, we conclude that
their scheme cannot resist collusion attacks and hence fails to meet the
basic security definitions of the ABE system.
71.IP-Geolocation Mapping for Moderately Connected Internet Regions
Abstract:
Most IP-geolocation mapping schemes take a delay-measurement
approach, based on the assumption of a strong correlation between
networking delay and geographical distance between the targeted client
and the landmarks. In this paper, however, we investigate a large region
of the moderately connected Internet and find that the delay-distance
correlation is weak. Instead, we discover a more reliable rule: with high
probability, the shortest delay comes from the closest distance. Based on this closest-
shortest rule, we develop a simple and novel IP-geolocation mapping
scheme for moderately connected Internet regions, called GeoGet. In
GeoGet, we take a large number of webservers as passive landmarks and
map a targeted client to the geolocation of the landmark that has the
shortest delay. We further use JavaScript at targeted clients to generate
HTTP/Get probing for delay measurement. To control the measurement
cost, we adopt a multistep probing method to refine the geolocation of a
targeted client, finally to city level. The evaluation results show that when
probing about 100 landmarks, GeoGet correctly maps 35.4 percent clients
to city level, which outperforms current schemes such as GeoLim [16]
and GeoPing [14] by 270 and 239 percent, respectively, and the median
error distance in GeoGet is around 120 km, outperforming GeoLim and
GeoPing by 37 and 70 percent, respectively.
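The mapping step itself can be sketched in a few lines (names are hypothetical; the multistep refinement and JavaScript-based probing are not shown): under the closest-shortest rule, the client is simply assigned the location of the landmark with the shortest measured HTTP/Get delay.

```java
import java.util.Map;

/** Sketch of GeoGet's closest-shortest mapping step (hypothetical names). */
public class GeoGetMapper {
    /** A passive landmark (a web server) with a known city-level location. */
    public static class Landmark {
        public final String city;
        public Landmark(String city) { this.city = city; }
    }

    /**
     * Closest-shortest rule: with high probability the shortest delay comes from
     * the closest distance, so the client is mapped to the location of the
     * landmark with the shortest measured delay.
     */
    public static Landmark locate(Map<Landmark, Double> delayMs) {
        Landmark best = null;
        double bestDelay = Double.POSITIVE_INFINITY;
        for (Map.Entry<Landmark, Double> e : delayMs.entrySet()) {
            if (e.getValue() < bestDelay) { bestDelay = e.getValue(); best = e.getKey(); }
        }
        return best;
    }
}
```

In the full scheme this selection is repeated over progressively narrower landmark sets to reach city level at bounded measurement cost.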
72.Optimal Client-Server Assignment for Internet Distributed Systems
Abstract:
We investigate an underlying mathematical model and algorithms for
optimizing the performance of a class of distributed systems over the
Internet. Such a system consists of a large number of clients who
communicate with each other indirectly via a number of intermediate
servers. Optimizing the overall performance of such a system then can be
formulated as a client-server assignment problem whose aim is to assign
the clients to the servers in such a way as to satisfy some prespecified
requirements on the communication cost and load balancing. We show
that 1) the total communication load and load balancing are two opposing
metrics, and consequently, their tradeoff is inherent in this class of
distributed systems; 2) in general, finding the optimal client-server
assignment for some prespecified requirements on the total load and load
balancing is NP-hard; and therefore, 3) we propose a heuristic via relaxed
convex optimization for finding the approximate solution. Our simulation
results indicate that the proposed algorithm outperforms other
heuristics, including the popular Normalized Cuts
algorithm.
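The two opposing metrics can be illustrated with a toy cost model (an assumption for illustration: indirect traffic between co-located clients costs one server hop, traffic across two servers costs two):

```java
/** Sketch of the two opposing metrics for a client-server assignment (toy model). */
public class AssignmentMetrics {
    /**
     * Total communication load: each unit of traffic between clients i and j
     * costs 1 if both use the same server and 2 if it must cross two servers.
     */
    public static int totalLoad(int[][] traffic, int[] serverOf) {
        int load = 0;
        for (int i = 0; i < traffic.length; i++)
            for (int j = i + 1; j < traffic.length; j++)
                load += traffic[i][j] * (serverOf[i] == serverOf[j] ? 1 : 2);
        return load;
    }

    /** Load-balance metric: traffic handled by the busiest server. */
    public static int maxServerLoad(int[][] traffic, int[] serverOf, int numServers) {
        int[] perServer = new int[numServers];
        for (int i = 0; i < traffic.length; i++)
            for (int j = i + 1; j < traffic.length; j++) {
                perServer[serverOf[i]] += traffic[i][j];
                if (serverOf[i] != serverOf[j]) perServer[serverOf[j]] += traffic[i][j];
            }
        int max = 0;
        for (int l : perServer) max = Math.max(max, l);
        return max;
    }
}
```

Putting all clients on one server minimizes totalLoad but maximizes the busiest server's load, which is exactly the tradeoff the paper formalizes.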
73.SocialTube: P2P-Assisted Video Sharing in Online Social Networks
Abstract:
Video sharing has been an increasingly popular application in online
social networks (OSNs). However, its sustainable development is
severely hindered by the intrinsic limit of the client/server architecture
deployed in current OSN video systems, which is not only costly in terms
of server bandwidth and storage but also not scalable with the soaring
amount of users and video content. The peer-assisted Video-on-Demand
(VoD) technique, in which participating peers assist the server in
delivering video content, has been proposed recently. Unfortunately,
videos can only be disseminated through friends in OSNs. Therefore,
current VoD works that explore clustering nodes with similar interests or
close location for high performance are suboptimal, if not entirely
inapplicable, in OSNs. Based on our long-term real-world measurement
of over 1,000,000 users and 2,500 videos on Facebook, we propose
SocialTube, a novel peer-assisted video sharing system that explores
social relationship, interest similarity, and physical location between
peers in OSNs. Specifically, SocialTube incorporates four algorithms: a
social network (SN)-based P2P overlay construction algorithm, an
SN-based chunk prefetching algorithm, a chunk delivery and scheduling
algorithm, and a buffer management algorithm. Experimental results from
a prototype on PlanetLab and an event-driven simulator show that
SocialTube can improve the quality of user experience and system
scalability over current P2P VoD techniques.
74.A System for Denial-of-Service Attack Detection Based on Multivariate Correlation Analysis
Abstract:
Interconnected systems, such as Web servers, database servers, and cloud
computing servers, are now under threat from network attackers. As one
of the most common and aggressive means, Denial-of-Service (DoS)
attacks cause serious impact on these computing systems. In this paper,
we present a DoS attack detection system that uses Multivariate
Correlation Analysis (MCA) for accurate network traffic characterization
by extracting the geometrical correlations between network traffic
features. Our MCA-based DoS attack detection system employs the
principle of anomaly-based detection in attack recognition. This makes
our solution capable of detecting known and unknown DoS attacks
effectively by learning the patterns of legitimate network traffic only.
Furthermore, a triangle-area-based technique is proposed to enhance and
to speed up the process of MCA. The effectiveness of our proposed
detection system is evaluated using the KDD Cup 99 dataset, and the
influences of both non-normalized data and normalized data on the
performance of the proposed detection system are examined. The results
show that our system outperforms two other previously developed state-
of-the-art approaches in terms of detection accuracy.
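The triangle-area technique can be sketched as follows (a minimal sketch; the class name is hypothetical): for a traffic record x, the triangle formed by the origin and the record's projections on feature axes i and j has area |x_i|·|x_j|/2, and the map of all such areas characterizes the pairwise geometrical correlations between features.

```java
/** Sketch of the triangle-area map used to characterize feature correlations. */
public class TriangleAreaMap {
    /**
     * For every pair of features (i, j), the triangle formed by the origin and
     * the projections of record x on axes i and j has area |x_i| * |x_j| / 2.
     * The diagonal is zero (a feature paired with itself spans no area).
     */
    public static double[][] map(double[] x) {
        int n = x.length;
        double[][] ta = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                ta[i][j] = (i == j) ? 0.0 : Math.abs(x[i]) * Math.abs(x[j]) / 2.0;
        return ta;
    }
}
```

Anomaly detection then compares a record's map against profiles built from legitimate traffic only.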
MOBILE COMPUTING
75.DCIM: Distributed Cache Invalidation Method for Maintaining Cache Consistency in Wireless Mobile Networks
Abstract:
This paper proposes the distributed cache invalidation mechanism (DCIM), a
client-based cache consistency scheme that is implemented on top of a
previously proposed architecture for caching data items in mobile ad hoc
networks (MANETs), namely COACS, where special nodes cache the
queries and the addresses of the nodes that store the responses to these
queries. We have also previously proposed a server-based consistency
scheme, named SSUM, whereas in this paper we introduce DCIM, which
is totally client-based. DCIM is a pull-based algorithm that implements
adaptive time to live (TTL), piggybacking, and prefetching, and provides
near strong consistency capabilities. Cached data items are assigned
adaptive TTL values that correspond to their update rates at the data
source, where items with expired TTL values are grouped in validation
requests to the data source to refresh them, whereas unexpired items
with high request rates are prefetched from the server. In this paper,
DCIM is analyzed to assess the delay and bandwidth gains (or costs)
when compared to polling every time and push-based schemes. DCIM
was also implemented using ns2, and compared against client-based and
server-based schemes to assess its performance experimentally. The
consistency ratio, delay, and overhead traffic are reported versus several
variables, where DCIM proved to be superior to the other systems.
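The two cache-maintenance decisions described above can be sketched as follows (class names, fields, and thresholds are hypothetical): expired items are grouped into one validation request, while unexpired items with high request rates become prefetch candidates.

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of DCIM-style cache maintenance (hypothetical names/thresholds). */
public class DcimCache {
    public static class Item {
        public final String key;
        public final long expiresAt;      // adaptive TTL deadline (ms since epoch)
        public final double requestRate;  // observed queries per second
        public Item(String key, long expiresAt, double requestRate) {
            this.key = key; this.expiresAt = expiresAt; this.requestRate = requestRate;
        }
    }

    /** Expired items are grouped into one validation request to the data source. */
    public static List<String> validationBatch(List<Item> cache, long now) {
        List<String> batch = new ArrayList<>();
        for (Item it : cache) if (it.expiresAt <= now) batch.add(it.key);
        return batch;
    }

    /** Unexpired items with high request rates are candidates for prefetching. */
    public static List<String> prefetchCandidates(List<Item> cache, long now, double minRate) {
        List<String> out = new ArrayList<>();
        for (Item it : cache) if (it.expiresAt > now && it.requestRate >= minRate) out.add(it.key);
        return out;
    }
}
```

In DCIM each item's TTL is set adaptively from its observed update rate at the data source; a fixed deadline stands in for that here.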
76.Discovery and Verification of Neighbor Positions in Mobile Ad Hoc Networks
Abstract:
A growing number of ad hoc networking protocols and location-aware
services require that mobile nodes learn the position of their neighbors.
However, such a process can be easily abused or disrupted by adversarial
nodes. In the absence of a priori trusted nodes, the discovery and verification
of neighbor positions presents challenges that have been scarcely
investigated in the literature. In this paper, we address this open issue by
proposing a fully distributed cooperative solution that is robust against
independent and colluding adversaries, and can be impaired only by an
overwhelming presence of adversaries. Results show that our protocol
can thwart more than 99 percent of the attacks under the best possible
conditions for the adversaries, with minimal false positive rates.
77.Efficient Rekeying Framework for Secure Multicast with Diverse- Subscription-Period Mobile Users
Abstract:
Group key management (GKM) in mobile communication is important to
enable access control for a group of users. A major issue in GKM is how
to minimize the communication cost for group rekeying. To design the
optimal GKM, researchers have assumed that all group members have the
same leaving probabilities and that the tree is balanced and complete to
simplify analysis. In the real mobile computing environment, however,
these assumptions are impractical and may lead to a large gap between
the idealized analysis and measurements in real-life situations, thus
restricting GKM schemes to only a specific number of users. In this
paper, we propose a new GKM framework supporting more
general cases that do not require these assumptions. Our framework
consists of two algorithms: one for initial construction of a basic key-tree
and another for optimizing the key-tree after membership changes. The
first algorithm enables the framework to generate an optimal key-tree that
reflects the characteristics of users’ leaving probabilities, and the second
algorithm allows continual maintenance of communication with less
overhead in group rekeying. Through simulations, we show that our
GKM framework outperforms the previous one, which is known to be the
best among balanced and complete structures.
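Illustrative only: one way a key-tree can "reflect users' leaving probabilities" is a Huffman-like greedy build that places likely leavers near the root, so their departure invalidates fewer keys. This is an assumption about the flavor of such an algorithm, not the paper's exact construction, and the names are hypothetical.

```java
import java.util.PriorityQueue;

/** Huffman-like key-tree build: high-leaving-probability users end up shallow. */
public class KeyTreeBuilder {
    public static class Node {
        final double leaveProb; final Node left, right;
        Node(double p, Node l, Node r) { leaveProb = p; left = l; right = r; }
    }

    public static Node build(double[] leaveProbs) {
        PriorityQueue<Node> pq =
            new PriorityQueue<>((a, b) -> Double.compare(a.leaveProb, b.leaveProb));
        for (double p : leaveProbs) pq.add(new Node(p, null, null));
        while (pq.size() > 1) {
            Node a = pq.poll(), b = pq.poll();  // merge the two least-likely leavers deepest
            pq.add(new Node(a.leaveProb + b.leaveProb, a, b));
        }
        return pq.poll();
    }

    /** Depth of the leaf carrying the given probability (root = 0), or -1 if absent. */
    public static int depthOf(Node n, double p, int d) {
        if (n == null) return -1;
        if (n.left == null && n.right == null) return n.leaveProb == p ? d : -1;
        int l = depthOf(n.left, p, d + 1);
        return l >= 0 ? l : depthOf(n.right, p, d + 1);
    }
}
```

A shallow leaf means a short root-to-leaf key path, i.e. fewer keys to refresh when that user leaves.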
78.Toward a Statistical Framework for Source Anonymity in Sensor Networks
Abstract:
In certain applications, the locations of events reported by a sensor
network need to remain anonymous. That is, unauthorized observers must
be unable to detect the origin of such events by analyzing the network
traffic. Known as the source anonymity problem, this problem has
emerged as an important topic in the security of wireless sensor networks,
with a variety of techniques based on different adversarial assumptions
being proposed. In this work, we present a new framework for modeling,
analyzing, and evaluating anonymity in sensor networks. The novelty of
the proposed framework is twofold: first, it introduces the notion of
“interval indistinguishability” and provides a quantitative measure to
model anonymity in wireless sensor networks; second, it maps source
anonymity to the statistical problem of binary hypothesis testing with
nuisance parameters. We then analyze existing solutions for designing
anonymous sensor networks using the proposed model. We show how
mapping source anonymity to binary hypothesis testing with nuisance
parameters leads to converting the problem of exposing private source
information into searching for an appropriate data transformation that
removes or minimizes the effect of the nuisance information. By doing so,
we transform the problem from analyzing real-valued sample points to
binary codes, which opens the door for coding theory to be incorporated
into the study of anonymous sensor networks. Finally, we discuss how
existing solutions can be modified to improve their anonymity.
79.Mobile Relay Configuration in Data-Intensive Wireless Sensor Networks
Abstract:
Wireless Sensor Networks (WSNs) are increasingly used in data-
intensive applications such as microclimate monitoring, precision
agriculture, and audio/video surveillance. A key challenge faced by data-
intensive WSNs is to transmit all the data generated within an
application’s lifetime to the base station despite the fact that sensor nodes
have limited power supplies. We propose using low-cost disposable
mobile relays to reduce the energy consumption of data-intensive WSNs.
Our approach differs from previous work in two main aspects. First, it
does not require complex motion planning of mobile nodes, so it can be
implemented on a number of low-cost mobile sensor platforms. Second,
we integrate the energy consumption due to both mobility and wireless
transmissions into a holistic optimization framework. Our framework
consists of three main algorithms. The first algorithm computes an
optimal routing tree assuming no nodes can move. The second algorithm
improves the topology of the routing tree by greedily adding new nodes
exploiting mobility of the newly added nodes. The third algorithm
improves the routing tree by relocating its nodes without changing its
topology. This iterative algorithm converges on the optimal position for
each node given the constraint that the routing tree topology does not
change. We present efficient distributed implementations for each
algorithm that require only limited, localized synchronization. Because
we do not necessarily compute an optimal topology, our final routing tree
is not necessarily optimal. However, our simulation results show that our
algorithms significantly outperform the best existing solutions.
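The third algorithm's relocation step can be sketched under a simple cost model (an assumption for illustration: energy grows with the square of link length, names hypothetical). With the tree topology fixed, the position minimizing the sum of squared link lengths to a relay's tree neighbors is their centroid, and repeating the step node by node converges.

```java
/** Sketch of the relocation step under a squared-distance cost model (assumed). */
public class RelayRelocation {
    /** One relocation step: move the relay to the centroid of its tree neighbors. */
    public static double[] relocate(double[][] neighbors) {
        double sx = 0, sy = 0;
        for (double[] n : neighbors) { sx += n[0]; sy += n[1]; }
        return new double[] { sx / neighbors.length, sy / neighbors.length };
    }

    /** Sum of squared link lengths, the quantity each step decreases. */
    public static double cost(double[] pos, double[][] neighbors) {
        double c = 0;
        for (double[] n : neighbors) {
            double dx = pos[0] - n[0], dy = pos[1] - n[1];
            c += dx * dx + dy * dy;
        }
        return c;
    }
}
```

The paper's framework additionally charges for the energy of the move itself, which this sketch omits.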
80.Vampire Attacks: Draining Life from Wireless Ad Hoc Sensor Networks
Abstract:
Ad hoc low-power wireless networks are an exciting research direction in
sensing and pervasive computing. Prior security work in this area has
focused primarily on denial of communication at the routing or medium
access control levels. This paper explores resource depletion attacks at
the routing protocol layer, which permanently disable networks by
quickly draining nodes’ battery power. These “Vampire” attacks are not
tied to any specific protocol, but rather rely on the properties of many
popular classes of routing protocols. We find that all examined protocols
are susceptible to Vampire attacks, which are devastating, difficult to
detect, and easy to carry out using as few as one malicious insider
sending only protocol-compliant messages. In the worst case, a single
Vampire can increase network-wide energy usage by a factor of O(N),
where N is the number of network nodes. We discuss methods to mitigate
these types of attacks, including a new proof-of-concept protocol that
provably bounds the damage caused by Vampires during the packet
forwarding phase.
81.Toward Privacy Preserving and Collusion Resistance in a Location Proof Updating System
Abstract:
Today’s location-sensitive services rely on a user’s mobile device to
determine the current location. This allows malicious users to access a
restricted resource or provide bogus alibis by cheating on their locations.
To address this issue, we propose A Privacy-Preserving LocAtion proof
Updating System (APPLAUS), in which co-located Bluetooth-enabled
mobile devices mutually generate location proofs and send updates to a
location proof server. Periodically changed pseudonyms are used by the
mobile devices to protect source location privacy from each other, and
from the untrusted location proof server. We also develop a user-centric
location privacy model in which individual users evaluate their location
privacy levels and decide whether and when to accept the location proof
requests. In order to defend against colluding attacks, we also present
betweenness ranking-based and correlation clustering-based approaches for
outlier detection. APPLAUS can be implemented with existing network
infrastructure, and can be easily deployed in Bluetooth-enabled mobile
devices with little computation or power cost. Extensive experimental
results show that APPLAUS can effectively provide location proofs,
significantly preserve the source location privacy, and effectively detect
colluding attacks.
PATTERN ANALYSIS & MACHINE INTELLIGENCE
82.A Framework for Mining Signatures from Event Sequences and Its Applications in Healthcare Data
Abstract:
This paper proposes a novel temporal knowledge representation and
learning framework to perform large-scale temporal signature mining of
longitudinal heterogeneous event data. The framework enables the
representation, extraction, and mining of high order latent event structure
and relationships within single and multiple event sequences. The
proposed knowledge representation maps the heterogeneous event
sequences to a geometric image by encoding events as a structured
spatial-temporal shape process. We present a doubly constrained
convolutional sparse coding framework that learns interpretable and shift-
invariant latent temporal event signatures. We show how to cope with the
sparsity in the data as well as in the latent factor model by inducing a
double sparsity constraint on the β-divergence to learn an overcomplete
sparse latent factor model. A novel stochastic optimization scheme
performs large-scale incremental learning of group-specific temporal
event signatures. We validate the framework on synthetic data and on an
electronic health record dataset.
Service Computing (Web Service)
83.A Decentralized Service Discovery Approach on Peer-to-Peer Networks
Abstract:
Service-Oriented Computing (SOC) is emerging as a paradigm for
developing distributed applications. A critical issue of utilizing SOC is to
have a scalable, reliable, and robust service discovery mechanism.
However, traditional service discovery methods using centralized
registries can easily suffer from problems such as performance bottleneck
and vulnerability to failures in large-scale service networks, and may
thus function abnormally. To address these problems, this paper proposes a
peer-to-peer-based decentralized service discovery approach named
Chord4S. Chord4S utilizes the data distribution and lookup capabilities of
the popular Chord to distribute and discover services in a decentralized
manner. Data availability is further improved by distributing published
descriptions of functionally equivalent services to different successor
nodes that are organized into virtual segments in the Chord4S circle.
Based on the service publication approach, Chord4S supports QoS-aware
service discovery. Chord4S also supports service discovery with
wildcard(s). In addition, the Chord routing protocol is extended to support
efficient discovery of multiple services with a single query. This enables
late negotiation of Service Level Agreements (SLAs) between service
consumers and multiple candidate service providers. The experimental
evaluation shows that Chord4S achieves higher data availability and
provides efficient query with reasonable overhead.
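The Chord lookup that Chord4S builds on can be sketched with a simplified ring (real Chord uses finger tables for O(log N) hops; this linear-walk version only illustrates the successor rule): a service description hashes to a key, and the key is resolved to the first node clockwise on the identifier ring.

```java
import java.util.TreeMap;

/** Simplified Chord-style ring: keys resolve to their successor node. */
public class ChordRing {
    private final TreeMap<Integer, String> ring = new TreeMap<>(); // id -> node

    public void join(int nodeId, String node) { ring.put(nodeId, node); }

    /** The successor of a key: first node clockwise from the key's hash. */
    public String lookup(int keyHash) {
        Integer id = ring.ceilingKey(keyHash);
        if (id == null) id = ring.firstKey();  // wrap around the ring
        return ring.get(id);
    }
}
```

Chord4S additionally spreads descriptions of functionally equivalent services over different successors (virtual segments), so one node's failure does not lose a whole service category.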
84.Personalized QoS-Aware Web Service Recommendation and Visualization
Abstract:
With the proliferation of web services, an effective QoS-based approach
to service recommendation is becoming increasingly important. Although
service recommendation has been studied in the recent literature, the
performance of existing approaches is not satisfactory, since 1) previous
approaches fail to consider the QoS variance according to users’
locations; and 2) previous recommender systems are all black boxes
providing limited information on the performance of the service
candidates. In this paper, we propose a novel collaborative filtering
algorithm designed for large-scale web service recommendation.
Different from previous work, our approach employs the characteristics
of QoS and achieves considerable improvement in recommendation
accuracy. To help service users better understand the rationale of the
recommendation and remove some of the mystery, we use a
recommendation visualization technique to show how a recommendation
is grouped with other choices. Comprehensive experiments are conducted
using more than 1.5 million QoS records of real-world web service
invocations. The experimental results show the efficiency and
effectiveness of our approach.
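A minimal sketch of location-aware, user-based collaborative filtering in the spirit of the abstract (the restriction to same-region users, the names, and the weighting are assumptions, not the paper's exact algorithm): the QoS a target user would observe for a service is predicted as a similarity-weighted average over users in the same region who have invoked it.

```java
import java.util.Map;

/** Sketch of location-aware QoS prediction by user-based CF (assumed details). */
public class QosRecommender {
    /**
     * Predicts the QoS a user in `region` would see for a service as a
     * similarity-weighted average over same-region users who invoked it.
     */
    public static double predict(String region,
                                 Map<String, String> regionOf,     // user -> region
                                 Map<String, Double> observedQos,  // user -> QoS observed
                                 Map<String, Double> similarity) { // user -> similarity
        double num = 0, den = 0;
        for (Map.Entry<String, Double> e : observedQos.entrySet()) {
            String user = e.getKey();
            if (!region.equals(regionOf.get(user))) continue;  // QoS varies with location
            double w = similarity.getOrDefault(user, 0.0);
            num += w * e.getValue();
            den += w;
        }
        return den == 0 ? Double.NaN : num / den;
    }
}
```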
85.Privacy-Enhanced Web Service Composition
Abstract:
Data as a Service (DaaS) builds on service-oriented technologies to
enable fast access to data resources on the Web. However, this paradigm
raises several new privacy concerns that traditional privacy models do not
handle. In addition, DaaS composition may reveal privacy-sensitive
information. In this paper, we propose a formal privacy model in order to
extend DaaS descriptions with privacy capabilities. The privacy model
allows a service to define a privacy policy and a set of privacy
requirements. We also propose a privacy-preserving DaaS composition
approach that allows verifying the compatibility between privacy
requirements and policies in DaaS composition. We propose a negotiation
mechanism that makes it possible to dynamically reconcile the privacy
capabilities of services when incompatibilities arise in a composition. We
validate the applicability of our proposal through a prototype
implementation and a set of experiments.
SOFTWARE ENGINEERING
86.Whole Test Suite Generation
Abstract:
Not all bugs lead to program crashes, and not always is there a formal
specification to check the correctness of a software test’s outcome. A
common scenario in software testing is therefore that test data are
generated, and a tester manually adds test oracles. As this is a difficult
task, it is important to produce small yet representative test sets, and this
representativeness is typically measured using code coverage. There is,
however, a fundamental problem with the common approach of targeting
one coverage goal at a time: Coverage goals are not independent, not
equally difficult, and sometimes infeasible—the result of test generation
is therefore dependent on the order of coverage goals and how many of
them are feasible. To overcome this problem, we propose a novel
paradigm in which whole test suites are evolved with the aim of covering
all coverage goals at the same time while keeping the total size as small
as possible. This approach has several advantages; for example, its
effectiveness is not affected by the number of infeasible targets in the
code. We have implemented this novel approach in the EVOSUITE tool,
and compared it to the common approach of addressing one goal at a
time. Evaluated on open source libraries and an industrial case study for a
total of 1,741 classes, we show that EVOSUITE achieved up to 188 times
the branch coverage of a traditional approach targeting single branches,
with up to 62 percent smaller test suites.
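The whole-suite idea above can be illustrated with a minimal Java sketch (hypothetical class and method names, and a simplified, normalized branch-distance fitness; not the actual EVOSUITE implementation): the suite is scored against all branch goals at once, so a single infeasible goal cannot stall the search.

```java
// Hypothetical sketch of a whole-test-suite fitness in the spirit of
// EVOSUITE: the suite is evaluated against ALL branch goals at once rather
// than one goal at a time. A branch "distance" of 0 means covered; values
// are assumed normalized to [0, 1], with 1 meaning the branch was not reached.
public class SuiteFitness {
    // perTestBranchDistances[t][b] = distance of test t to branch goal b.
    // For each branch we take the best (minimum) distance achieved by any
    // test in the suite; an uncovered branch contributes its full distance.
    public static double fitness(double[][] perTestBranchDistances, int numBranches) {
        double total = 0.0;
        for (int b = 0; b < numBranches; b++) {
            double best = 1.0; // 1 = branch never reached by any test
            for (double[] test : perTestBranchDistances) {
                best = Math.min(best, test[b]);
            }
            total += best; // lower total = better suite
        }
        return total;
    }
}
```

A genetic algorithm would then evolve whole suites to minimize this fitness while penalizing suite size.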
MULTIMEDIA
87. Understanding the External Links of Video Sharing Sites:
Measurement and Analysis
Abstract:
Recently, many video sharing sites provide external links so that their
video or audio contents can be embedded into external web sites. For
example, users can copy the embedded URLs of the videos of YouTube
and post the URL links on their own blogs. Clearly, the purpose of such
a function is to increase the distribution of the videos and the associated
advertisements. Does this function fulfill its purpose, and can its effect
be quantified? In this paper, we provide a comprehensive measurement
study and analysis on these external links to answer these two questions.
With the traces collected from two major video sharing sites, YouTube
and Youku of China, we show that the external links have various
impacts on the popularity of the video sharing sites. More specifically, for
videos that have been uploaded for eight months in Youku, around 15%
of views can come from external links. Some contents are densely linked.
For example, comedy videos can attract more than 800 external links on
average. We also study the relationship between the external links and the
internal links. We show that there are correlations; for example, if a video
is popular itself, it is likely to have a large number of external links.
We also observe that external links usually have a
higher impact on Youku than on YouTube. We conjecture that
external links tend to have a higher impact for a regional site
than for a worldwide site.
88. Learn to Personalized Image Search
Abstract:
Rapidly developing social sharing websites, such as Flickr and YouTube,
allow users to create, share, annotate, and comment on media. The
large-scale user-generated metadata not only facilitate sharing and
organizing multimedia content, but also provide useful information to improve
media retrieval and management. Personalized search serves as one of
such examples where the web search experience is improved by
generating the returned list according to the modified user search intents.
In this paper, we exploit the social annotations and propose a novel
framework simultaneously considering the user and query relevance to
learn to personalized image search. The basic premise is to embed the
user preference and query-related search intent into user-specific topic
spaces. Since the users’ original annotation is too sparse for topic
modeling, we need to enrich users’ annotation pool before user-specific
topic spaces construction. The proposed framework contains two
components: 1) A Ranking based Multi-correlation Tensor Factorization
model is proposed to perform annotation prediction, which is considered
as users’ potential annotations for the images; 2) We introduce User-
specific Topic Modeling to map the query relevance and user preference
into the same user-specific topic space. For performance evaluation, two
resources involved with users’ social activities are employed.
Experiments on a large-scale Flickr dataset demonstrate the effectiveness
of the proposed method.
SERVICE COMPUTING
89. Design and Implementation of TARF: A Trust-Aware Routing
Framework for WSNs
Abstract:
The multihop routing in wireless sensor networks (WSNs) offers little
protection against identity deception through replaying routing
information. An adversary can exploit this defect to launch various
harmful or even devastating attacks against the routing protocols,
including sinkhole attacks, wormhole attacks, and Sybil attacks. The
situation is further aggravated by mobile and harsh network conditions.
Traditional cryptographic techniques or efforts at developing trust-aware
routing protocols do not effectively address this severe problem. To
secure the WSNs against adversaries misdirecting the multihop routing,
we have designed and implemented TARF, a robust trust-aware routing
framework for dynamic WSNs. Without tight time synchronization or
known geographic information, TARF provides trustworthy and energy-
efficient routes. Most importantly, TARF proves effective against those
harmful attacks developed out of identity deception; the resilience of
TARF is verified through extensive evaluation with both simulation and
empirical experiments on large-scale WSNs under various scenarios
including mobile and RF-shielding network conditions. Further, we have
implemented a low-overhead TARF module in TinyOS; as demonstrated,
this implementation can be incorporated into existing routing protocols
with the least effort. Based on TARF, we also demonstrated a proof-of-
concept mobile target detection application that functions well against an
antidetection mechanism.
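As a rough illustration of trust-aware next-hop selection of the kind TARF performs, the following Java sketch (hypothetical names; the trust update rule and the cost/trust metric are simplifying assumptions, not the paper's exact formulas) maintains a per-neighbor trust level from delivery feedback and picks the neighbor with the best energy-per-trust trade-off:

```java
// Hypothetical sketch of TARF-style neighbor selection: each node keeps a
// trust level and an energy cost per neighbor; routing weighs trustworthiness
// against energy cost, so a deceptive neighbor with low delivery feedback is
// gradually avoided.
public class TarfNeighbor {
    // Exponentially weighted trust update from end-to-end delivery feedback
    // (1.0 = delivered, 0.0 = lost); alpha controls how fast trust adapts.
    public static double updateTrust(double oldTrust, double feedback, double alpha) {
        return (1 - alpha) * oldTrust + alpha * feedback;
    }

    // Pick the index of the neighbor minimizing energyCost / trust
    // (an assumed combined metric, for illustration only).
    public static int bestNeighbor(double[] trust, double[] energyCost) {
        int best = 0;
        for (int i = 1; i < trust.length; i++) {
            if (energyCost[i] / trust[i] < energyCost[best] / trust[best]) best = i;
        }
        return best;
    }
}
```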
90. Packet-Hiding Methods for Preventing Selective Jamming Attacks
Abstract:
The open nature of the wireless medium leaves it vulnerable to
intentional interference attacks, typically referred to as jamming. This
intentional interference with wireless transmissions can be used as a
launch pad for mounting Denial-of-Service attacks on wireless networks.
Typically, jamming has been addressed under an external threat model.
However, adversaries with internal knowledge of protocol specifications
and network secrets can launch low-effort jamming attacks that are
difficult to detect and counter. In this work, we address the problem of
selective jamming attacks in wireless networks. In these attacks, the
adversary is active only for a short period of time, selectively targeting
messages of high importance. We illustrate the advantages of selective
jamming in terms of network performance degradation and adversary
effort by presenting two case studies: a selective attack on TCP and one
on routing. We show that selective jamming attacks can be launched by
performing real-time packet classification at the physical layer. To
mitigate these attacks, we develop three schemes that prevent real-time
packet classification by combining cryptographic primitives with
physical-layer attributes. We analyze the security of our methods and
evaluate their computational and communication overhead.
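The general idea of preventing real-time classification can be sketched as follows in Java (a toy illustration with assumed names; the XOR keystream stands in for a real cipher and is not secure): the sender transmits the hidden packet first and releases the key only after the transmission completes, so a jammer cannot classify the packet while it is still on the air.

```java
import java.security.MessageDigest;

// Hypothetical sketch of a commitment-style hiding scheme in the spirit of
// the paper: transmit hide(k, m) first, reveal k afterwards. The iterated
// SHA-256 keystream below is an illustration only, NOT a secure cipher; the
// paper's schemes combine proper cryptographic primitives with PHY attributes.
public class HiddenPacket {
    static byte[] keystream(byte[] key, int len) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] ks = new byte[len];
            byte[] block = key;
            for (int i = 0; i < len; i++) {
                if (i % 32 == 0) block = md.digest(block); // next 32-byte block
                ks[i] = block[i % 32];
            }
            return ks;
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    static byte[] xor(byte[] m, byte[] ks) {
        byte[] out = new byte[m.length];
        for (int i = 0; i < m.length; i++) out[i] = (byte) (m[i] ^ ks[i]);
        return out;
    }

    // Sender: transmit hide(key, msg) first, release key only afterwards.
    public static byte[] hide(byte[] key, byte[] msg) {
        return xor(msg, keystream(key, msg.length));
    }

    // Receiver (and any jammer) can only recover msg once key is revealed.
    public static byte[] reveal(byte[] key, byte[] cipher) {
        return xor(cipher, keystream(key, cipher.length));
    }
}
```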
91. Risk-Aware Mitigation for MANET Routing Attacks
Abstract:
Mobile ad hoc networks (MANETs) are highly vulnerable to
attacks due to the dynamic nature of their network infrastructure. Among
these attacks, routing attacks have received considerable attention since
they can cause the most devastating damage to a MANET. Even though there
exist several intrusion response techniques to mitigate such critical
attacks, existing solutions typically attempt to isolate malicious nodes
based on binary or naïve fuzzy response decisions. However, binary
responses may result in unexpected network partitions, causing
additional damage to the network infrastructure, and naïve fuzzy
responses could lead to uncertainty in countering routing attacks in
MANET. In this paper, we propose a risk-aware response mechanism to
systematically cope with the identified routing attacks. Our risk-aware
approach is based on an extended Dempster-Shafer mathematical theory
of evidence introducing a notion of importance factors. In addition, our
experiments demonstrate the effectiveness of our approach with the
consideration of several performance metrics.
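The Dempster-Shafer combination at the core of such a risk-aware approach can be sketched in Java (hypothetical class name; plain Dempster's rule over the frame {Attack, NoAttack}, without the paper's importance-factor extension):

```java
// Hypothetical sketch of Dempster's rule of combination for two pieces of
// evidence about a routing attack. Masses are given over the frame
// {Attack, NoAttack} as m = {m(A), m(N), m(Theta)}, where Theta represents
// uncertainty; conflicting mass is removed by normalization.
public class Dempster {
    // Combines two mass functions; returns the combined {m(A), m(N), m(Theta)}.
    public static double[] combine(double[] m1, double[] m2) {
        double conflict = m1[0] * m2[1] + m1[1] * m2[0]; // contradictory pairs
        double norm = 1.0 - conflict;
        double a = (m1[0] * m2[0] + m1[0] * m2[2] + m1[2] * m2[0]) / norm;
        double n = (m1[1] * m2[1] + m1[1] * m2[2] + m1[2] * m2[1]) / norm;
        double t = (m1[2] * m2[2]) / norm;
        return new double[]{a, n, t};
    }
}
```

Combining two independent pieces of attack evidence sharpens the belief in "Attack" while shrinking the uncertainty mass.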
92. DoubleGuard: Detecting Intrusions in Multitier Web Applications
Abstract:
Internet services and applications have become an inextricable part of
daily life, enabling communication and the management of personal
information from anywhere. To accommodate this increase in application
and data complexity, web services have moved to a multitier design
wherein the web server runs the application front-end logic and data are
outsourced to a database or file server. In this paper, we present
DoubleGuard, an IDS that models the network behavior of user sessions
across both the front-end web server and the back-end database. By
monitoring both web and subsequent database requests, we are able to
ferret out attacks that independent IDS would not be able to identify.
Furthermore, we quantify the limitations of any multitier IDS in terms of
training sessions and functionality coverage. We implemented
DoubleGuard using an Apache web server with MySQL and lightweight
virtualization. We then collected and processed real-world traffic over a
15-day period of system deployment in both dynamic and static web
applications. Finally, using DoubleGuard, we were able to expose a wide
range of attacks with 100 percent accuracy while maintaining 0 percent
false positives for static web services and 0.6 percent false positives for
dynamic web services.
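The underlying idea of correlating front-end and back-end requests can be sketched in Java (hypothetical names; a toy whitelist model, not the paper's container-based implementation): during training the IDS learns which SQL patterns each web-request type may trigger, and at detection time any unmapped database query raises an alert.

```java
import java.util.*;

// Hypothetical sketch of the DoubleGuard idea: during training, record which
// SQL query patterns each web-request type legitimately triggers; at
// detection time, a DB query with no matching front-end request in the same
// session is flagged as an intrusion (e.g., a direct SQL injection that
// bypasses the web tier's normal mapping).
public class SessionIds {
    private final Map<String, Set<String>> model = new HashMap<>();

    // Training phase: observe a (web request, SQL pattern) pair.
    public void train(String webRequest, String sqlPattern) {
        model.computeIfAbsent(webRequest, k -> new HashSet<>()).add(sqlPattern);
    }

    // Detection phase: true if the observed DB query was never seen for
    // this web-request type.
    public boolean isIntrusion(String webRequest, String sqlPattern) {
        Set<String> allowed = model.get(webRequest);
        return allowed == null || !allowed.contains(sqlPattern);
    }
}
```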
Image Processing
93. Adaptive Membership Functions for Hand-Written Character
Recognition by Voronoi-based Image Zoning
Abstract:
In the field of hand-written character recognition, image zoning is a
widespread technique for feature extraction since it is rightly considered
able to cope with hand-written pattern variability. As a matter of fact, the
problem of zoning design has attracted many researchers that have
proposed several image zoning topologies, according to static and
dynamic strategies. Unfortunately, little attention has been paid so far to
the role of feature-zone membership functions, which define the way in
which a feature influences different zones of the zoning method. The
result is that the membership functions defined to date follow non-
adaptive, global approaches that are unable to model local information on
feature distributions. In this paper, a new class of zone-based membership
functions with adaptive capabilities is introduced and its effectiveness is
shown. The basic idea is to select, for each zone of the zoning method,
the membership function best suited to exploit the characteristics of the
feature distribution of that zone. In addition, a genetic algorithm is
proposed to determine, in a single process, the most favorable
membership functions along with the optimal zoning topology, described
by a Voronoi tessellation. Experimental tests show the superiority of
the new technique with respect to traditional zoning methods.
Cloud Computing
94. A Gossip Protocol for Dynamic Resource Management in Large
Cloud Environments
Abstract:
We address the problem of dynamic resource management for a large-
scale cloud environment. Our contribution includes outlining a distributed
middleware architecture and presenting one of its key elements: a gossip
protocol that (1) ensures fair resource allocation among sites/applications,
(2) dynamically adapts the allocation to load changes and (3) scales both
in the number of physical machines and sites/applications. We formalize
the resource allocation problem as that of dynamically maximizing the
cloud utility under CPU and memory constraints. We first present a
protocol that computes an optimal solution without considering memory
constraints and prove correctness and convergence properties. Then, we
extend that protocol to provide an efficient heuristic solution for the
complete problem, which includes minimizing the cost for adapting an
allocation. The protocol continuously executes on dynamic, local input
and does not require global synchronization, as other proposed gossip
protocols do. We evaluate the heuristic protocol through simulation and
find its performance to be well-aligned with our design goals.
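The gossip principle such protocols build on can be shown with a minimal Java sketch (hypothetical names; plain pairwise averaging without the paper's CPU and memory constraints): each round, two sites average their load estimates, which preserves the total and drives all estimates toward the global mean without any central coordinator.

```java
// Hypothetical sketch of the gossip principle underlying such protocols:
// repeated pairwise averaging of load estimates preserves the system-wide
// total while driving every site's estimate toward the global mean, with no
// global synchronization required.
public class GossipAverage {
    // One gossip round between sites i and j: both adopt their average.
    public static double[] round(double[] load, int i, int j) {
        double avg = (load[i] + load[j]) / 2.0;
        double[] next = load.clone();
        next[i] = avg;
        next[j] = avg;
        return next;
    }
}
```

Repeating such rounds between random pairs converges every entry to the mean, which is the basis for fair allocation decisions.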
Parallel and Distributed Systems
95. Cut Detection in Wireless Sensor Networks
Abstract:
A wireless sensor network can get separated into multiple connected
components due to the failure of some of its nodes, which is called a
“cut”. In this article we consider the problem of detecting cuts by the
remaining nodes of a wireless sensor network. We propose an algorithm
that allows (i) every node to detect when the connectivity to a specially
designated node has been lost, and (ii) one or more nodes (that are
connected to the special node after the cut) to detect the occurrence of the
cut. The algorithm is distributed and asynchronous: every node needs to
communicate with only those nodes that are within its communication
range. The algorithm is based on the iterative computation of a fictitious
“electrical potential” of the nodes. The convergence rate of the
underlying iterative scheme is independent of the size and structure of the
network.
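A minimal Java sketch of the fictitious electrical-potential iteration (assumed names and a simplified update rule, not the paper's exact scheme): the designated source node holds a fixed positive potential, every other node repeatedly averages over its neighborhood, and a node cut off from the source sees its potential stay at zero.

```java
// Hypothetical sketch of the fictitious "electrical potential" iteration:
// node 0 is the specially designated source and holds a fixed positive
// potential; every other node repeatedly averages its neighbors' values.
// Nodes disconnected from the source converge to (or remain at) zero
// potential, which lets them detect the cut locally.
public class CutDetect {
    // adj[i] lists the neighbors of node i; s is the source potential.
    public static double[] iterate(int[][] adj, double s, int steps) {
        int n = adj.length;
        double[] v = new double[n]; // all potentials start at 0
        for (int t = 0; t < steps; t++) {
            double[] next = new double[n];
            next[0] = s; // source potential is clamped
            for (int i = 1; i < n; i++) {
                double sum = 0;
                for (int nb : adj[i]) sum += v[nb];
                next[i] = adj[i].length == 0 ? 0 : sum / (adj[i].length + 1);
            }
            v = next;
        }
        return v;
    }
}
```

Each update uses only a node's own neighbors, matching the distributed, asynchronous character of the algorithm.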
Image Processing & Secure Computing
96. Robust Video Data Hiding Using Forbidden Zone Data Hiding and
Selective Embedding
Abstract:
Video data hiding is still an important research topic due to the design
complexities involved. We propose a new video data hiding method that
makes use of erasure correction capability of Repeat Accumulate codes
and superiority of Forbidden Zone Data Hiding. Selective embedding is
utilized in the proposed method to determine host signal samples suitable
for data hiding. This method also contains a temporal synchronization
scheme in order to withstand frame drop and insert attacks. The proposed
framework is tested by typical broadcast material against MPEG-2, H.264
compression, frame-rate conversion attacks, as well as other well-known
video data hiding methods. The decoding error values are reported for
typical system parameters. The simulation results indicate that the
framework can be successfully utilized in video data hiding applications.
Image Processing & Knowledge and Data Engg
97. Sketch4Match – Content-based Image Retrieval System Using
Sketches
Abstract:
Content-based image retrieval (CBIR) is one of the most popular and
fastest-rising research areas in digital image processing. Most of the
available image search tools, such as Google Images and Yahoo! Image
search, are based on textual annotation of images. In these tools, images
are manually annotated with keywords and then retrieved using text-
based search methods. The performances of these systems are not
satisfactory. The goal of CBIR is to extract visual content of an image
automatically, like color, texture, or shape. This paper introduces the
problems and challenges involved in the design and creation of CBIR
systems based on a free-hand sketch (sketch-based image retrieval, or
SBIR). Building on existing methods, we describe a possible way to
design and implement a task-specific descriptor that can bridge the
information gap between a sketch and a colored image, enabling
efficient search. The descriptor is constructed through a special
sequence of preprocessing steps so that the transformed full-color
image and the sketch can be compared. We have studied the EHD, HOG,
and SIFT descriptors. Experimental results
on two sample databases showed good results. Overall, the results show
that the sketch-based system gives users intuitive access to search
tools. The SBIR technology can be used in several applications such as
digital libraries, crime prevention, and photo sharing sites. Such a system
has great value in apprehending suspects and identifying victims in
forensics and law enforcement. A possible application is matching a
forensic sketch to a gallery of mug shot images. Research on retrieving
images based on the visual content of a query picture has intensified
recently, drawing on a wide spectrum of image-processing methodology.
Networking
98. Denial of Service Attacks in Wireless Networks: The Case of
Jammers
Abstract:
The shared nature of the medium in wireless networks makes it easy for
an adversary to launch a Wireless Denial of Service (WDoS) attack.
Recent studies demonstrate that such attacks can be very easily
accomplished using off-the-shelf equipment. To give a simple example, a
malicious node can continually transmit a radio signal in order to block
any legitimate access to the medium and/or interfere with reception. This
act is called jamming and the malicious nodes are referred to as jammers.
Jamming techniques vary from simple ones based on the continual
transmission of interference signals, to more sophisticated attacks that
aim at exploiting vulnerabilities of the particular protocol used. In this
survey, we present a detailed up-to-date discussion on the jamming
attacks recorded in the literature. We also describe various techniques
proposed for detecting the presence of jammers. Finally, we survey
numerous mechanisms which attempt to protect the network from
jamming attacks. We conclude with a summary and by suggesting future
directions.
99. Optimal Bandwidth Assignment for Multiple-Description-Coded Video
Abstract:
In video streaming over a multicast network, user bandwidth requirements
are often heterogeneous, possibly with orders of magnitude difference (say,
from hundreds of kb/s for mobile devices to tens of Mb/s for high-
definition TV). Multiple description coding (MDC) can be used to
address this bandwidth heterogeneity issue. In MDC, the video source is
encoded into multiple independent descriptions. A receiver, depending on
its available bandwidth, joins different descriptions to meet its
bandwidth requirement. An important but challenging problem for MDC
video multicast is how to assign bandwidth to each description in order to
maximize overall user satisfaction. In this paper, we investigate this issue
by formulating it as an optimization problem, with the objective to
maximize user bandwidth experience by taking into account the encoding
inefficiency due to MDC. We prove that the optimization problem is NP-
hard. However, if the description number is larger than or equal to a
certain threshold (e.g., if the minimum and maximum bandwidth
requirements are 100 kb/s and 10 Mb/s, respectively, such threshold is
seven descriptions), there is an exact and simple solution to achieve
maximum user satisfaction, i.e., meeting all the bandwidth requirements.
For the case when the description number is smaller, we present an
efficient heuristic called simulated annealing for MDC bandwidth
assignment (SAMBA) to assign bandwidth to each description given the
distribution of user bandwidth requirement. We evaluate our algorithm
using simulations. SAMBA achieves virtually the same performance as the
optimum found by exhaustive search. By comparing with other
assignment algorithms, SAMBA significantly improves user satisfaction.
We also show that, if the coding efficiency decreases with the number of
descriptions, there is an optimal description number to achieve maximal
user satisfaction.
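The simulated-annealing skeleton that a heuristic like SAMBA builds on can be sketched in Java (hypothetical names and a stand-in objective; the real SAMBA objective scores user satisfaction over the bandwidth distribution): perturb the current assignment, always accept improvements, and accept worsenings with probability exp(-delta/T) as the temperature cools.

```java
import java.util.Random;

// Hypothetical sketch of the simulated-annealing loop a scheme like SAMBA
// builds on. The cost function is a placeholder for the (negated) user
// satisfaction of a candidate description-bandwidth assignment.
public class Annealer {
    public interface Cost { double of(double[] x); }

    // Returns the best assignment seen during the cooling schedule.
    public static double[] anneal(double[] x0, Cost cost, long seed) {
        Random rnd = new Random(seed);
        double[] x = x0.clone(), best = x0.clone();
        double c = cost.of(x), cBest = c;
        for (double t = 1.0; t > 1e-3; t *= 0.99) { // geometric cooling
            double[] y = x.clone();
            y[rnd.nextInt(y.length)] += (rnd.nextDouble() - 0.5) * t; // local move
            double cy = cost.of(y);
            // Accept improvements always; accept worsenings with
            // Boltzmann probability exp(-(cy - c) / t).
            if (cy < c || rnd.nextDouble() < Math.exp((c - cy) / t)) {
                x = y;
                c = cy;
            }
            if (c < cBest) { cBest = c; best = x.clone(); }
        }
        return best;
    }
}
```

Accepting occasional worsenings early on lets the search escape local optima that a greedy assignment would get stuck in.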
Image Processing & Secure Computing
100. Embedded Extended Visual Cryptography Schemes
Abstract:
A visual cryptography scheme (VCS) is a kind of secret sharing scheme
which allows the encoding of a secret image into shares distributed to
participants. The beauty of such a scheme is that a set of qualified
participants is able to recover the secret image without any cryptographic
knowledge and computation devices. An extended visual cryptography
scheme (EVCS) is a kind of VCS which consists of meaningful shares
(compared to the random shares of traditional VCS). In this paper, we
propose a construction of EVCS which is realized by embedding random
shares into meaningful covering shares, and we call it the embedded
EVCS. Experimental results compare some of the well-known EVCSs
proposed in recent years systematically, and show that the proposed
embedded EVCS has competitive visual quality compared with many of
the well-known EVCSs in the literature. In addition, it has several specific
advantages over these well-known EVCSs.
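The basic random-share (2,2) VCS that embedded EVCS starts from can be sketched in Java (hypothetical names; one-dimensional pixels and a pixel-expansion factor of two): the two shares get identical subpixel pairs for a white secret pixel and complementary pairs for a black one, so stacking the transparencies (a pixelwise OR) renders black pixels fully dark and white pixels half dark.

```java
import java.util.Random;

// Hypothetical sketch of a basic (2,2) visual cryptography scheme, the
// random-share VCS that embedded EVCS builds on. Each secret pixel
// (true = black) expands into a pair of subpixels on each share; neither
// share alone reveals anything, and stacking needs no computation.
public class Vcs22 {
    // Returns {share1, share2}, each twice the length of the secret.
    public static boolean[][] split(boolean[] secret, long seed) {
        Random rnd = new Random(seed);
        boolean[] s1 = new boolean[secret.length * 2];
        boolean[] s2 = new boolean[secret.length * 2];
        for (int i = 0; i < secret.length; i++) {
            boolean flip = rnd.nextBoolean(); // random pattern [B,W] or [W,B]
            s1[2 * i] = flip;
            s1[2 * i + 1] = !flip;
            if (secret[i]) { // black pixel: complementary pair on share 2
                s2[2 * i] = !flip;
                s2[2 * i + 1] = flip;
            } else {         // white pixel: identical pair on share 2
                s2[2 * i] = flip;
                s2[2 * i + 1] = !flip;
            }
        }
        return new boolean[][]{s1, s2};
    }

    // Physically stacking transparencies is a pixelwise OR.
    public static boolean[] stack(boolean[] a, boolean[] b) {
        boolean[] out = new boolean[a.length];
        for (int i = 0; i < a.length; i++) out[i] = a[i] || b[i];
        return out;
    }
}
```

An embedded EVCS would additionally hide such random shares inside meaningful covering images.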
Mobile Computing
101. Protecting Location Privacy in Sensor Networks against a Global
Eavesdropper
Abstract:
While many protocols for sensor network security provide confidentiality
for the content of messages, contextual information usually remains
exposed. Such contextual information can be exploited by an adversary to
derive sensitive information such as the locations of monitored objects
and data sinks in the field. Attacks on these components can significantly
undermine any network application. Existing techniques defend against
the leakage of location information from a limited adversary who can only
observe network traffic in a small region. However, a stronger adversary,
the global eavesdropper, is realistic and can defeat these existing
techniques. This paper first formalizes the location privacy issues in
sensor networks under this strong adversary model and computes a lower
bound on the communication overhead needed for achieving a given level
of location privacy. The paper then proposes two techniques to provide
location privacy to monitored objects (source-location privacy)—periodic
collection and source simulation—and two techniques to provide location
privacy to data sinks (sink-location privacy)—sink simulation and
backbone flooding. These techniques provide trade-offs between privacy,
communication cost, and latency. Through analysis and simulation, we
demonstrate that the proposed techniques are efficient and effective for
source and sink-location privacy in sensor networks.
Knowledge And Data Engineering
102. Slicing: A New Approach for Privacy Preserving Data Publishing
Abstract:
Several anonymization techniques, such as generalization and
bucketization, have been designed for privacy preserving microdata
publishing. Recent work has shown that generalization loses considerable
amount of information, especially for high dimensional data.
Bucketization, on the other hand, does not prevent membership disclosure
and does not apply to data that do not have a clear separation between
quasi-identifying attributes and sensitive attributes. In this paper, we
present a novel technique called slicing, which partitions the data both
horizontally and vertically. We show that slicing preserves better data
utility than generalization and can be used for membership disclosure
protection. Another important advantage of slicing is that it can handle
high-dimensional data. We show how slicing can be used for attribute
disclosure protection and develop an efficient algorithm for computing
the sliced data that obey the ℓ-diversity requirement. Our workload
experiments confirm that slicing preserves better utility than
generalization and is more effective than bucketization in workloads
involving the sensitive attribute. Our experiments also demonstrate that
slicing can be used to prevent membership disclosure.
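The core bucket operation of slicing can be sketched in Java (hypothetical names; a single bucket with two column groups): within a bucket, the sensitive column group is randomly permuted as a unit, breaking the linkage between quasi-identifier and sensitive values while preserving the values themselves.

```java
import java.util.*;

// Hypothetical sketch of slicing's per-bucket operation: attributes are
// partitioned vertically into column groups (qiCols = quasi-identifiers,
// sCols = sensitive attributes), and within a horizontal bucket the
// sensitive group's rows are permuted, so an attacker cannot link a
// quasi-identifier row to its exact sensitive value.
public class Slicer {
    public static String[][] sliceBucket(String[][] bucket, int[] qiCols,
                                         int[] sCols, long seed) {
        List<Integer> perm = new ArrayList<>();
        for (int i = 0; i < bucket.length; i++) perm.add(i);
        Collections.shuffle(perm, new Random(seed)); // random within-bucket permutation
        String[][] out = new String[bucket.length][];
        for (int r = 0; r < bucket.length; r++) {
            String[] row = new String[qiCols.length + sCols.length];
            for (int c = 0; c < qiCols.length; c++)
                row[c] = bucket[r][qiCols[c]];                       // QI group in place
            for (int c = 0; c < sCols.length; c++)
                row[qiCols.length + c] = bucket[perm.get(r)][sCols[c]]; // permuted sensitive group
            out[r] = row;
        }
        return out;
    }
}
```

The published bucket keeps every attribute value (good utility) while the permutation hides which tuple each sensitive value belongs to.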
IMAGE PROCESSING
111. Linear Distance Coding for Image Classification
112. Local Directional Number Pattern for Face Analysis: Face and
Expression Recognition
113. Log-Gabor Filters for Image-Based Vehicle Verification
114. Noise Reduction Based on Partial-Reference, Dual-Tree
Complex Wavelet Transform Shrinkage
115. Query-Adaptive Image Search With Hash Codes
116. Regional Spatially Adaptive Total Variation Super-Resolution
With Spatial Information Filtering and Clustering
117. Revealing the Traces of JPEG Compression Anti-Forensics
118. Reversible Data Hiding With Optimal Value Transfer
119. Reversible Watermarking Based on Invariant Image
Classification and Dynamic Histogram Shifting
120. Robust Hashing for Image Authentication Using Zernike
Moments and Local Features
121. Scene Text Detection via Connected Component Clustering and
Non-text Filtering
122. Robust Face Recognition for Uncontrolled Pose and Illumination Changes
Face recognition has made significant advances in the last decade, but robust commercial applications are still lacking. Current authentication/identification applications are limited to controlled settings, e.g., limited pose and illumination changes, with the user usually
aware of being screened and collaborating in the process. To address the challenges posed by looser restrictions, this paper proposes a novel framework for real-world face recognition in uncontrolled settings, named Face Analysis for Commercial Entities (FACE). Its robustness comes from normalization (“correction”) strategies that address pose and illumination variations. In addition, two separate image quality indices quantitatively assess pose and illumination changes for each biometric query before submitting it to the classifier. Samples with poor quality may be discarded, undergo manual classification, or, when possible, trigger a new capture. After this filtering step, template similarity for matching purposes is measured using a localized version of the image correlation index. Finally, FACE adopts reliability indices, which estimate the “acceptability” of the final identification decision made by the classifier.
123. Reversible Watermarking Based on Invariant Image Classification and Dynamic Histogram Shifting
In this paper, we propose a new reversible watermarking scheme. A first contribution is a histogram shifting modulation which adaptively takes care of the local specificities of the image content. By applying it to the image prediction-errors and by considering their immediate neighborhood, the proposed scheme inserts data in textured areas where other methods fail to do so. Furthermore, our scheme makes use of a classification process for identifying parts of the image that can be watermarked with the most suited reversible modulation. This classification is based on a reference image derived from the image itself, a prediction of it, which has the property of being invariant to the watermark insertion. In that way, the watermark embedder and extractor remain synchronized for message extraction and image reconstruction.
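The basic histogram-shifting modulation that this scheme refines can be sketched in Java (hypothetical names; the classic zero-bin embedding, without the paper's adaptive, neighborhood-aware classification): payload bits are embedded into prediction errors equal to zero, and strictly positive errors are shifted up by one to make room, a step that is exactly invertible.

```java
// Hypothetical sketch of basic histogram shifting on prediction errors:
// errors equal to 0 carry one payload bit (0 stays 0, 1 becomes 1), and all
// strictly positive errors are shifted up by one so the value 1 is free for
// embedding. The shift is exactly invertible, which makes the watermarking
// reversible: the original error is always recoverable.
public class HistShift {
    // Embed one bit into a prediction error.
    public static int embed(int error, int bit) {
        if (error == 0) return bit;      // zero bin carries the payload bit
        if (error > 0) return error + 1; // shift positives out of the way
        return error;                    // negatives untouched in this sketch
    }

    // Returns {originalError, bit}; bit is -1 when this error carried no bit.
    public static int[] extract(int marked) {
        if (marked == 0) return new int[]{0, 0};
        if (marked == 1) return new int[]{0, 1};
        if (marked > 1) return new int[]{marked - 1, -1};
        return new int[]{marked, -1};
    }
}
```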
124. Automatic Detection and Reconstruction of Building Radar Footprints From Single VHR SAR Images
The spaceborne synthetic aperture radar (SAR) systems Cosmo-SkyMed, TerraSAR-X, and TanDEM-X acquire imagery with very high
spatial resolution (VHR), supporting various important application scenarios, such as damage assessment in urban areas after natural disasters. To ensure a reliable, consistent, and fast extraction of the information from the complex SAR scenes, automatic information extraction methods are essential. Focusing on the analysis of urban areas, which is of prime interest for VHR SAR, in this paper, we present a novel method for the automatic detection and 2-D reconstruction of building radar footprints from VHR SAR scenes. Unlike most methods in the literature, the proposed approach can be applied to single images. The method is based on the extraction of a set of low-level features from the images and on their composition to more structured primitives using a production system. Then, the concept of semantic meaning of the primitives is introduced and used for both the generation of building candidates and the radar footprint reconstruction. The semantic meaning represents the probability that a primitive belongs to a certain scattering class (e.g., double bounce, roof, facade) and has been defined in order to compensate for the lack of detectable features in single images. Indeed, it allows the selection of the most reliable primitives and footprint hypotheses on the basis of fuzzy membership grades.
125. Interactive Segmentation for Change Detection in Multispectral Remote-Sensing Images
In this letter, we propose to solve the change detection (CD) problem in multitemporal remote-sensing images using interactive segmentation methods. The user needs to input markers related to change and no-change classes in the difference image. Then, the pixels under these markers are used by the support vector machine classifier to generate a spectral-change map. To enhance further the result, we include the spatial contextual information in the decision process using two different solutions based on Markov random field and level-set methods.
126. Estimating Information from Image Colors: An Application to Digital Cameras and Natural Scenes
The colors present in an image of a scene provide information about its constituent elements. But the amount of information depends on
the imaging conditions and on how information is calculated. This work had two aims. The first was to derive explicit estimators of the information available and the information retrieved from the color values at each point in images of a scene under different illuminations.
127. Airborne Vehicle Detection in Dense Urban Areas Using HoG Features
Vehicle detection has been an important research field for years, as there are many valuable applications, ranging from support for traffic planners to real-time traffic management. Detection of cars in dense urban areas is of particular interest due to the high traffic volume and the limited space. In city areas, many car-like objects (e.g., dormers) appear which might lead to confusion. Additionally, the inaccuracy of road databases supporting the extraction process has to be handled in a proper way. This paper describes an integrated real-time processing chain which utilizes multiple occurrences of objects in images.
128. Histology Image Retrieval in Optimized Multifeature Spaces
Content-based histology image retrieval systems have shown great potential in supporting decision making in clinical activities, teaching, and biological research. In content-based image retrieval, feature combination plays a key role. It aims at enhancing the descriptive power of visual features corresponding to semantically meaningful queries. It is particularly valuable in histology image analysis where intelligent mechanisms are needed for interpreting varying tissue composition and architecture into histological concepts. This paper presents an approach to automatically combine heterogeneous visual features for histology image retrieval. The aim is to obtain the most representative fusion model for a particular keyword that is associated with multiple query images. The core of this approach is a multiobjective learning method, which aims to understand an optimal visual-semantic matching function by jointly considering the different preferences of the group of query images. The task is posed as an optimization problem, and a multiobjective optimization strategy is employed in order to handle potential contradictions in the query images associated with the same keyword.
129. Automatic License Plate Recognition (ALPR)
Automatic license plate recognition (ALPR) is the extraction of vehicle license plate information from an image or a sequence of images. The extracted information can be used, with or without a database, in many applications, such as electronic payment systems (toll payment, parking fee payment) and freeway and arterial monitoring systems for traffic surveillance. ALPR uses either a color, black-and-white, or infrared camera to take images. The quality of the acquired images is a major factor in the success of ALPR. As a real-life application, ALPR has to quickly and successfully process license plates under different environmental conditions, such as indoors, outdoors, day, or night. It should also generalize to license plates from different nations, provinces, or states. These plates usually have different colors, are written in different languages, and use different fonts; some plates have a single-color background while others have background images. License plates can also be partially occluded by dirt, lighting, and towing accessories on the car.
130. Context-Based Hierarchical Unequal Merging for SAR Image Segmentation
This paper presents an image segmentation method named Context-based Hierarchical Unequal Merging for Synthetic Aperture Radar (SAR) Image Segmentation (CHUMSIS), which uses superpixels as the operation units instead of pixels. Based on the Gestalt laws, three rules that realize a new and natural way to manage different kinds of features extracted from SAR images are proposed to represent superpixel context. The rules are prior knowledge from cognitive science and serve as top-down constraints to globally guide the superpixel merging. The features, including brightness, texture, edges, and spatial information, locally describe the superpixels of SAR images and are bottom-up forces. For merging superpixels, a hierarchical unequal-merging algorithm is designed, which includes two stages: 1) a coarse merging stage and 2) a fine merging stage. The merging algorithm unequally allocates computation resources so as to spend less running time on superpixels without ambiguity and more running time on superpixels with ambiguity.
131. Context-Dependent Logo Matching and Recognition
We contribute, through this paper, to the design of a novel variational framework able to match and recognize multiple instances of multiple reference logos in image archives. Reference logos and test images are seen as constellations of local features (interest points, regions, etc.) and matched by minimizing an energy function mixing: 1) a fidelity term that measures the quality of feature matching, 2) a neighborhood criterion that captures feature co-occurrence/geometry, and 3) a regularization term that controls the smoothness of the matching solution.
132. Human Detection in Images via Piecewise Linear Support Vector Machines
Human detection in images is challenged by view and posture variation. In this paper, we propose a piecewise linear support vector machine (PL-SVM) method to tackle this problem. The motivation is to exploit a piecewise discriminative function to construct a nonlinear classification boundary that can discriminate multiview and multiposture human bodies from backgrounds in a high-dimensional feature space. PL-SVM training is designed as an iterative procedure of feature space division and linear SVM training, aiming at the margin maximization of the local linear SVMs. Each piecewise SVM model is responsible for a subspace, corresponding to a human cluster of a particular view or posture. Based on the PL-SVM, a cascaded detector is proposed with block orientation features and histogram of oriented gradient features. Extensive experiments show that, compared with several recent SVM methods, our method reaches the state of the art in both detection accuracy and computational efficiency, and it performs best when dealing with low-resolution human regions in cluttered backgrounds.
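As an illustration of the piecewise idea (not the paper's joint training procedure), the sketch below scores a sample with the linear model of the region it falls in, using a hypothetical nearest-centroid region assignment; the centroids, weights, and biases are assumed inputs:

```java
// Piecewise linear scoring sketch: pick the region whose centroid is closest
// to x, then evaluate that region's linear model w·x + b.
// Illustrative only; PL-SVM learns regions and SVMs jointly.
public class PiecewiseLinear {
    public static double score(double[] x, double[][] centroids,
                               double[][] w, double[] b) {
        int best = 0;
        double bestD = Double.POSITIVE_INFINITY;
        for (int k = 0; k < centroids.length; k++) {
            double d = 0;
            for (int j = 0; j < x.length; j++) {
                double e = x[j] - centroids[k][j];
                d += e * e;                      // squared distance to centroid k
            }
            if (d < bestD) { bestD = d; best = k; }
        }
        double s = b[best];
        for (int j = 0; j < x.length; j++) s += w[best][j] * x[j];
        return s;                                // score from the region's model
    }
}
```

With two 1-D regions around -1 and +1, each region gets its own linear boundary, so the combined decision surface is nonlinear even though every piece is linear.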
133. Learning-based, automatic 2D-to-3D image and video conversion
Despite a significant growth in the last few years, the availability of 3D content is still dwarfed by that of its 2D counterpart. In order to
close this gap, many 2D-to-3D image and video conversion methods have been proposed. Methods involving human operators have been most successful but also time-consuming and costly. Automatic methods, which typically make use of a deterministic 3D scene model, have not yet achieved the same level of quality, for they rely on assumptions that are often violated in practice. In this paper, we propose a new class of methods based on the radically different approach of learning the 2D-to-3D conversion from examples. We develop a method based on globally estimating the entire depth map of a query image directly from a repository of 3D images (image + depth pairs or stereopairs) using a nearest-neighbor regression idea. We demonstrate both the efficacy and the computational efficiency of our methods on numerous 2D images and discuss their drawbacks and benefits. Although far from perfect, our results demonstrate that repositories of 3D content can be used for effective 2D-to-3D image conversion. An extension to video is immediate by enforcing temporal continuity of computed depth maps.
134. Automated Biometric Voice-Based Access Control in ATM
An automatic teller machine requires a user to pass an identity test before any transaction can be granted. The current method available for access control in ATMs is based on the smartcard. Interviews with structured questions were conducted among ATM users, and the results proved that many problems are associated with ATM smartcards for access control. Among these problems: it is very difficult to prevent another person from obtaining and using a legitimate person's card, and a conventional smartcard can be lost, duplicated, stolen, or impersonated with accuracy. To address these problems, this paper proposes the use of a biometric voice-based access control system in the automatic teller machine. In the proposed system, access will be authorized simply by means of an enrolled user speaking into a microphone attached to the automatic teller machine. There are two phases in the implementation of the proposed system: a training phase and a testing (operational) phase.
135. Steganography using Genetic Algorithm along with Visual Cryptography
Image steganography is an emerging field of research for secure data hiding and transmission over networks. The proposed system provides the best approach for Least Significant Bit (LSB) based steganography using a Genetic Algorithm (GA) along with Visual Cryptography (VC). The original message is converted into cipher text by using a secret key and then hidden in the LSBs of the original image. The Genetic Algorithm and Visual Cryptography have been used for enhancing the security. The Genetic Algorithm is used to modify the pixel locations of the stego image so that detection of the hidden message becomes complex. Visual Cryptography is used to encrypt the visual information; it is achieved by breaking the image into two shares based on a threshold. The performance of the proposed system is evaluated by performing steganalysis and conducting benchmarking tests that analyse parameters such as Mean Squared Error (MSE) and Peak Signal to Noise Ratio (PSNR). The main aim of this paper is to design an enhanced secure algorithm that uses both steganography with a Genetic Algorithm and Visual Cryptography to ensure improved security and reliability.
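A minimal sketch of the LSB embedding step follows. The GA pixel-location modification and the VC share generation described above are not modeled; embedding bits in sequential pixel order and passing the message length to the extractor are simplifying assumptions:

```java
// Minimal LSB steganography sketch: hide message bits in the least
// significant bit of successive pixel values, and read them back.
public class LsbStego {
    // Embed each message bit into the LSB of one pixel (MSB-first per byte).
    public static int[] embed(int[] pixels, byte[] message) {
        int[] out = pixels.clone();
        for (int i = 0; i < message.length * 8; i++) {
            int bit = (message[i / 8] >> (7 - i % 8)) & 1;
            out[i] = (out[i] & ~1) | bit;   // clear the LSB, then set it
        }
        return out;
    }

    // Recover messageLen bytes by reading pixel LSBs in the same order.
    public static byte[] extract(int[] pixels, int messageLen) {
        byte[] msg = new byte[messageLen];
        for (int i = 0; i < messageLen * 8; i++) {
            msg[i / 8] |= (pixels[i] & 1) << (7 - i % 8);
        }
        return msg;
    }
}
```

Because only the lowest bit of each pixel changes, the stego image differs from the cover by at most one gray level per pixel, which is what keeps the embedding visually imperceptible.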
136. Human Skeleton Identification Methods to Reduce Uncomfortable Light from a Digital Projector
When a speaker stands in front of a projector screen for a presentation, the speaker's eyes are hurt by the direct light from the digital projector. This paper proposes a design to reduce the strong light by projecting a black round mask on the speaker's head. The black round mask is superimposed on the slide frame by the software of this design, and the mask traces the speaker's head. A webcam captures images of the speaker against the projector screen, and the location of the speaker's head is determined. The design efficiently continues to trace the head location. The computer uses this head location and superimposes a black round mask to reduce the uncomfortable feeling caused by the strong light of the projector.
137. Image Stitching with Combined Moment Invariants and SIFT Features
Image stitching is used to combine multiple photographic images from a camera network with overlapping fields of view to produce a panoramic view. With image stitching, the view is enlarged, and the amount of information increases with the number of images that are stitched. Existing methods consider the whole images from the adjacent views, which increases both time and computational complexity. In this paper, an approach for image stitching using invariant moments combined with SIFT features is presented to reduce time and computational complexity. It is observed that only a small portion of the adjacent-view images overlaps; hence, the proposed method aims at detecting the overlapping portion for extracting matching points. The overlapping regions are determined using gradient-based dominant edge extraction and invariant moments. In the deduced region, SIFT (Scale-Invariant Feature Transform) features are extracted to determine the matching features. Registration is carried out with the RANSAC (Random Sample Consensus) algorithm, and the final output mosaic is obtained by warping the images. The proposed approach results in reduced time and computational complexity when compared to existing methods.
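The RANSAC consensus step can be illustrated with a toy pure-translation model over putative point matches. This is not the paper's pipeline (which matches SIFT features and warps with a full transform); the one-point minimal sample, iteration count, and inlier threshold are assumptions for the sketch:

```java
import java.util.Random;

// Toy RANSAC sketch for registration: estimate a translation (dx, dy) from
// putative point matches, some of which are outliers. Each iteration fits a
// model to a minimal sample (one match) and counts consenting matches.
public class TranslationRansac {
    public static double[] estimate(double[][] src, double[][] dst,
                                    int iters, double tol, long seed) {
        Random rnd = new Random(seed);
        double[] best = {0, 0};
        int bestInliers = -1;
        for (int it = 0; it < iters; it++) {
            int k = rnd.nextInt(src.length);           // minimal sample
            double dx = dst[k][0] - src[k][0];
            double dy = dst[k][1] - src[k][1];
            int inliers = 0;
            for (int i = 0; i < src.length; i++) {     // count consenting matches
                double ex = src[i][0] + dx - dst[i][0];
                double ey = src[i][1] + dy - dst[i][1];
                if (ex * ex + ey * ey < tol * tol) inliers++;
            }
            if (inliers > bestInliers) {
                bestInliers = inliers;
                best = new double[]{dx, dy};
            }
        }
        return best;
    }
}
```

An outlier-contaminated sample yields a model few matches agree with, so the model drawn from a true correspondence wins the consensus vote.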
138. Vertical-Edge-Based Car-License-Plate Detection Method
This paper proposes a fast method for car-license-plate detection (CLPD) and presents three main contributions. The first contribution is that we propose a fast vertical edge detection algorithm (VEDA) based on the contrast between the grayscale values, which enhances the speed of the CLPD method. After binarizing the input image using adaptive thresholding (AT), an unwanted-line elimination algorithm (ULEA) is proposed to enhance the image, and then the VEDA is applied. The second contribution is that our proposed CLPD method processes very-low-resolution images taken by a web camera. After the vertical edges have been detected by the VEDA, the desired plate details based on color information are highlighted. Then, the candidate region based on statistical and logical operations will be extracted. Finally, an LP is detected. The third contribution is that we compare the VEDA to the
Sobel operator in terms of accuracy, algorithm complexity, and processing time.
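The underlying idea of contrast-based vertical edge detection can be reduced to a minimal sketch: mark a pixel wherever the horizontal grayscale difference exceeds a threshold. VEDA itself, as described above, also involves adaptive thresholding and unwanted-line elimination, which are not shown here:

```java
// Minimal vertical-edge map: a pixel is an edge when the contrast between
// it and its right neighbor exceeds a threshold. Vertical structures such
// as license-plate character strokes produce strong horizontal differences.
public class VerticalEdges {
    public static boolean[][] detect(int[][] gray, int threshold) {
        int h = gray.length, w = gray[0].length;
        boolean[][] edge = new boolean[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w - 1; x++)
                edge[y][x] = Math.abs(gray[y][x + 1] - gray[y][x]) > threshold;
        return edge;
    }
}
```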
Bio-Medical Based Image Processing
139. Lossless medical image compression by IWT
The proposed work is to compress medical data without any loss (i.e., lossless). Medical information is either in multidimensional or multi-resolution form, which creates an enormous amount of data. Efficient storage, retrieval, management, and transmission of this voluminous data are highly complex. This technique combines integer wavelet transforms and JPEG-LS prediction to enhance the performance of lossless compression.
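The JPEG-LS prediction mentioned above is the median edge detector (MED) predictor: each pixel is predicted from its left (a), upper (b), and upper-left (c) neighbors, and only the prediction residual is coded, so the reconstruction is exact:

```java
// JPEG-LS MED predictor: switches between the horizontal/vertical neighbor
// at an edge and a planar estimate (a + b - c) in smooth regions.
public class MedPredictor {
    public static int predict(int a, int b, int c) {
        if (c >= Math.max(a, b)) return Math.min(a, b); // edge above-left: pick min
        if (c <= Math.min(a, b)) return Math.max(a, b); // edge above-left: pick max
        return a + b - c;                               // smooth: planar prediction
    }
}
```

Coding the residual `x - predict(a, b, c)` instead of `x` concentrates values near zero, which is what makes the subsequent entropy coding effective while remaining lossless.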
140. Analyzing Macular Edema In Diabetic Patients
Diabetic macular edema (DME) is an advanced symptom of diabetic retinopathy and can lead to irreversible vision loss. In this paper, a two-stage methodology for the detection and classification of DME severity from color fundus images is proposed. DME detection is carried out via a supervised learning approach using the normal fundus images. A feature extraction technique is introduced to capture the global characteristics of the fundus images and discriminate the normal from DME images. Disease severity is assessed using the neural networks.
141. Wavelet Based Image Fusion for Detection of Brain Tumor
Brain tumor is one of the major causes of the increase in mortality among children and adults. Detecting tumor regions in the brain is the major challenge in tumor detection. In the field of medical image processing, multi-sensor images are widely used as potential sources to detect brain tumors. In this paper, a wavelet-based image fusion algorithm is applied to the Magnetic Resonance (MR) images and Computed Tomography (CT) images, which are used as primary sources to extract the redundant and complementary information in order to enhance the tumor detection in the resultant fused image. The main features taken into account for detection of brain tumor are the location of the tumor and the size of the tumor, which is further optimized
through fusion of images using various wavelet transforms parameters. We discuss and enforce the principle of evaluating and comparing the performance of the algorithm applied to the images with respect to various wavelets type used for the wavelet analysis. The performance efficiency of the algorithm is evaluated on the basis of PSNR values. The obtained results are compared on the basis of PSNR with gradient vector field and big bang optimization. The algorithms are analyzed in terms of performance with respect to accuracy in estimation of tumor region and computational efficiency of the algorithms.
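The PSNR evaluation used above can be computed as follows for 8-bit images. This is the generic definition (PSNR in dB from the mean squared error against a 255 peak), not tied to any particular fusion output:

```java
// MSE and PSNR for 8-bit images, flattened to 1-D pixel arrays.
public class Quality {
    public static double mse(int[] a, int[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            s += d * d;
        }
        return s / a.length;
    }

    // PSNR in decibels; diverges to infinity as the images become identical.
    public static double psnr(int[] a, int[] b) {
        return 10.0 * Math.log10(255.0 * 255.0 / mse(a, b));
    }
}
```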
Power Systems
142. Synchronous Detection and Digital Control of Shunt Active Power Filter in Power Quality Improvement
Power quality means maintaining a purely sinusoidal current waveform in phase with a purely sinusoidal voltage waveform. Power quality improvement using traditional compensation methods has many disadvantages, such as electromagnetic interference, possible resonance, fixed compensation, and bulkiness. So power system and power electronic engineers need to develop adjustable and dynamic solutions using custom power devices. This power conditioning equipment uses static power electronic converters to improve the power quality for distribution system customers. The devices include the Active Power Filter (APF), the Dynamic Voltage Restorer (DVR), and the Unified Power Quality Conditioner (UPQC). The APF is a compensator used to eliminate disturbances in current. There are basically two types of APFs: the shunt type and the series type. This paper examines the control of the Shunt Active Power Filter (SAPF) from two different aspects: the Synchronous Detection Method (SDM) and digital control based on instantaneous power theory (p-q theory). Simulation results using MATLAB SIMULINK demonstrate the application of these methods to the control of the APF. Moreover, this work shows that digital control provides better power quality improvement than SDM.
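The instantaneous power (p-q) theory referenced above can be sketched as a Clarke transform of the three-phase quantities followed by the instantaneous real power p and imaginary power q. A power-invariant Clarke transform is assumed here; the SAPF control loop itself is not modeled:

```java
// p-q theory sketch: transform three-phase voltages/currents to the
// alpha-beta frame, then form instantaneous real and imaginary powers.
public class PqTheory {
    // Power-invariant Clarke transform: returns {alpha, beta}.
    public static double[] clarke(double a, double b, double c) {
        double k = Math.sqrt(2.0 / 3.0);
        double alpha = k * (a - 0.5 * b - 0.5 * c);
        double beta  = k * (Math.sqrt(3.0) / 2.0) * (b - c);
        return new double[]{alpha, beta};
    }

    // Instantaneous real power p = v_a*i_a + v_b*i_b (alpha-beta components).
    public static double p(double[] v, double[] i) { return v[0] * i[0] + v[1] * i[1]; }

    // Instantaneous imaginary power q.
    public static double q(double[] v, double[] i) { return v[1] * i[0] - v[0] * i[1]; }
}
```

In an APF controller, the oscillating parts of p and q are extracted and used to synthesize the compensating current reference; for a balanced in-phase system, p is constant and q is zero, so no compensation is needed.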
ANDROID
1. T-Drive: Enhancing Driving Directions with Taxi Drivers’ Intelligence
Abstract
This paper presents a smart driving direction system leveraging the
intelligence of experienced drivers. In this system, GPS-equipped
taxis are employed as mobile sensors probing the traffic rhythm of
a city and taxi drivers’ intelligence in choosing driving directions
in the physical world. We propose a time-dependent landmark
graph to model the dynamic traffic pattern as well as the
intelligence of experienced drivers so as to provide a user with the
practically fastest route to a given destination at a given departure
time. Then, a Variance-Entropy-Based Clustering approach is
devised to estimate the distribution of travel time between two
landmarks in different time slots. Based on this graph, we design a
two-stage routing algorithm to compute the practically fastest and
customized route for end users. We build our system based on a
real-world trajectory data set generated by over 33,000 taxis in a
period of three months, and evaluate the system by conducting both
synthetic experiments and in-the-field evaluations. As a result, 60-
70 percent of the routes suggested by our method are faster than
the competing methods, and 20 percent of the routes share the
same results. On average, 50 percent of our routes are at least 20
percent faster than the competing approaches.
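The routing over a time-dependent graph can be illustrated with a toy Dijkstra variant in which an edge's travel time depends on the departure time slot. The two-slot travel-time table in the usage below is hypothetical, not the paper's Variance-Entropy-Based Clustering estimates, and the landmark-graph two-stage routing is not reproduced:

```java
import java.util.Arrays;
import java.util.PriorityQueue;

// Time-dependent shortest path sketch: Dijkstra on earliest arrival times,
// where each edge has one travel time per time slot.
public class TimeDependentRoute {
    // travel[u][v] holds one travel time per slot, or null if there is no edge.
    public static double earliestArrival(double[][][] travel, int src, int dst,
                                         double depart, double slotLen) {
        int n = travel.length;
        double[] best = new double[n];
        Arrays.fill(best, Double.POSITIVE_INFINITY);
        best[src] = depart;
        PriorityQueue<double[]> pq =
            new PriorityQueue<>((x, y) -> Double.compare(x[0], y[0]));
        pq.add(new double[]{depart, src});
        while (!pq.isEmpty()) {
            double[] top = pq.poll();
            double t = top[0];
            int u = (int) top[1];
            if (t > best[u]) continue;                    // stale queue entry
            for (int v = 0; v < n; v++) {
                if (travel[u][v] == null) continue;
                int slot = Math.min(travel[u][v].length - 1, (int) (t / slotLen));
                double w = travel[u][v][slot];            // cost depends on departure time
                if (t + w < best[v]) {
                    best[v] = t + w;
                    pq.add(new double[]{t + w, v});
                }
            }
        }
        return best[dst];
    }
}
```

Because edge costs change with the departure slot, the fastest route from the same origin to the same destination can differ by departure time, which is the core of time-dependent routing.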
2. Dynamic Personalized Recommendation on Sparse Data
Abstract:
Recommendation techniques are very important in the fields of E-
commerce and other Web-based services. One of the main
difficulties is dynamically providing high-quality recommendation
on sparse data. In this paper, a novel dynamic personalized
recommendation algorithm is proposed, in which information
contained in both ratings and profile contents are utilized by
exploring latent relations between ratings, a set of dynamic features
are designed to describe user preferences in multiple phases, and
finally a recommendation is made by adaptively weighting the
features. Experimental results on public datasets show that the
proposed algorithm has satisfying performance.
3. Spatial Query Integrity with Voronoi Neighbors
Abstract:
With the popularity of location-based services and the abundant
usage of smart phones and GPS-enabled devices, the necessity of
outsourcing spatial data has grown rapidly over the past few years.
Meanwhile, the fast-rising trend of cloud storage and cloud
computing services has provided a flexible and cost-effective
platform for hosting data from businesses and individuals, further
enabling many location-based applications. Nevertheless, in this
database outsourcing paradigm, the authentication of the query
results at the client remains a challenging problem. In this paper,
we focus on the Outsourced Spatial Database (OSDB) model and
propose an efficient scheme, called VN-Auth, which allows a client
to verify the correctness and completeness of the result set. Our
approach is based on neighborhood information derived from the
Voronoi diagram of the underlying spatial data set and can handle
fundamental spatial query types, such as k nearest neighbor and
range queries, as well as more advanced query types like reverse k
nearest neighbor, aggregate nearest neighbor, and spatial skyline.
We evaluated VN-Auth based on real-world data sets using mobile
devices (Google Droid smart phones with Android OS) as query
clients. Compared to the current state-of-the-art approaches (i.e.,
methods based on Merkle Hash Trees), our experiments show that
VN-Auth produces significantly smaller verification objects and is
more computationally efficient, especially for queries with low
selectivity.
4. Meet You -- Social Networking on Android
Abstract:
This paper aims to present a system that illustrates the social nature
of a human being – the need to be always in touch with family and
friends – taking into account facilities available on Android
platform. The role of this application is to create a social network
in which the users are being alerted when their friends are around.
This gives them the possibility to set up a meeting or to avoid one.
The users have the possibility to check in some locations and allow
their friends to follow their activity. Taking into account the
security of the users, we included in the facilities of the application
an option which allows close friends or family to check the user’s
location based on a keyword text message. For this purpose,
available Android location and messages services are used for
finding an approximate location of a mobile phone running this
program and then sharing it through Meet You or via SMS.
Information is being displayed using default components provided
by Android platform and also more complex elements including
heterogeneous lists CWAC, Google Maps and augmented reality
using Mixare Library.
5. A Proxy-Based Approach to Continuous Location-Based
Spatial Queries in Mobile Environments
Abstract:
Caching valid regions of spatial queries at mobile clients is
effective in reducing the number of queries submitted by mobile
clients and query load on the server. However, mobile clients
suffer from longer waiting time for the server to compute valid
regions. We propose in this paper a proxy-based approach to
continuous nearest-neighbor (NN) and window queries. The proxy
creates estimated valid regions (EVRs) for mobile clients by
exploiting spatial and temporal locality of spatial queries. For NN
queries, we devise two new algorithms to accelerate EVR growth,
leading the proxy to build effective EVRs even when the cache size
is small. On the other hand, we propose to represent the EVRs of
window queries in the form of vectors, called estimated window
vectors (EWVs), to achieve larger estimated valid regions. This
novel representation and the associated creation algorithm result in
more effective EVRs of window queries. In addition, due to the
distinct characteristics, we use separate index structures, namely
EVR-tree and grid index, for NN queries and window queries,
respectively. To further increase efficiency, we develop algorithms
to exploit the results of NN queries to aid grid index growth,
benefiting EWV creation of window queries. Similarly, the grid
index is utilized to support NN query answering and EVR
updating. We conduct several experiments for performance
evaluation. The experimental results show that the proposed
approach significantly outperforms the existing proxy-based
approaches.
6. PMSE: A Personalized Mobile Search Engine
ABSTRACT:
We propose a personalized mobile search engine (PMSE) that
captures the users’ preferences in the form of concepts by mining
their click through data. Due to the importance of location
information in mobile search, PMSE classifies these concepts into
content concepts and location concepts. In addition, users’
locations (positioned by GPS) are used to supplement the location
concepts in PMSE. The user preferences are organized in an
ontology-based, multifaceted user profile, which are used to adapt
a personalized ranking function for rank adaptation of future search
results. To characterize the diversity of the concepts associated
with a query and their relevance to the user’s need, four entropies
are introduced to balance the weights between the content and
location facets. Based on the client-server model, we also present a
detailed architecture and design for implementation of PMSE. In
our design, the client collects and stores locally the click through
data to protect privacy, whereas heavy tasks such as concept
extraction, training, and re-ranking are performed at the PMSE
server. Moreover, we address the privacy issue by restricting the
information in the user profile exposed to the PMSE server with
two privacy parameters. We prototype PMSE on the Google
Android platform. Experimental results show that PMSE
significantly improves the precision comparing to the baseline.
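The entropies used above to balance facet weights are, at their core, Shannon entropies of click distributions: a user whose clicks spread over many location concepts has high location entropy, so the location facet receives more weight. The generic computation is sketched below (the four specific PMSE entropies are not reproduced):

```java
// Shannon entropy (in bits) of a distribution given as non-negative counts.
public class Entropy {
    public static double shannon(double[] counts) {
        double total = 0;
        for (double c : counts) total += c;
        double h = 0;
        for (double c : counts) {
            if (c <= 0) continue;              // 0 * log(0) contributes nothing
            double p = c / total;
            h -= p * (Math.log(p) / Math.log(2));
        }
        return h;
    }
}
```

A uniform spread over four concepts gives 2 bits, while all clicks on one concept give 0 bits, so the entropy directly measures how diverse a facet is for a query.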
7. Crowdsourced Trace Similarity with Smartphones
ABSTRACT:
Smart phones are nowadays equipped with a number of sensors,
such as WiFi, GPS, accelerometers, etc. This capability allows
smart phone users to easily engage in crowd sourced computing
services, which contribute to the solution of complex problems in a
distributed manner. In this work, we leverage such a computing
paradigm to solve efficiently the following problem: comparing a
query trace Q against a crowd of traces generated and stored on
distributed smart phones. Our proposed framework, coined
SmartTrace+, provides an effective solution without disclosing any part
of the crowd traces to the query processor. SmartTrace+ relies on
an in-situ data storage model and intelligent top-K query
processing algorithms that exploit distributed trajectory similarity
measures, resilient to spatial and temporal noise, in order to derive
the most relevant answers to Q. We evaluate our algorithms on
both synthetic and real workloads. We describe our prototype
system developed on the Android OS. The solution is deployed
over our own Smart Lab test bed of 25 smart phones. Our study
reveals that computations over SmartTrace+ result in substantial
energy conservation; in addition, results can be computed faster
than competitive approaches.
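One classical trajectory similarity measure that is robust to temporal misalignment is dynamic time warping (DTW), sketched below over 2-D points. The paper's own distributed, spatially noise-resilient measures are different; this is only an illustration of comparing a query trace against another trace:

```java
import java.util.Arrays;

// Dynamic time warping between two 2-D traces: the minimum cumulative
// point-to-point distance over all monotone alignments of the two sequences.
public class TrajectorySim {
    public static double dtw(double[][] a, double[][] b) {
        int n = a.length, m = b.length;
        double[][] d = new double[n + 1][m + 1];
        for (double[] row : d) Arrays.fill(row, Double.POSITIVE_INFINITY);
        d[0][0] = 0;
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= m; j++) {
                double cost = Math.hypot(a[i - 1][0] - b[j - 1][0],
                                         a[i - 1][1] - b[j - 1][1]);
                // extend the cheaper of match, insertion, or deletion
                d[i][j] = cost + Math.min(d[i - 1][j - 1],
                                          Math.min(d[i - 1][j], d[i][j - 1]));
            }
        return d[n][m];
    }
}
```

Identical traces score 0, and traces sampled at different rates still align well because the warping path may repeat points in either sequence.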
8. Twitsper: Tweeting Privately
ABSTRACT:
While OSNs today provide some form of privacy controls to
protect a user’s shared content from other users, these controls are
not sufficiently expressive to provide fine grained protection. In
this article, we introduce Twitsper, to support fine-grained control
over who sees a user’s messages. Twitsper provides privacy
controls to the users of Twitter today without relying on Twitter to
make changes. This is because it is a wrapper around Twitter that
enables private group communication while preserving Twitter’s
commercial interests. It preserves privacy both from the Twitsper
server as well as from undesired Twitsper users.
9. Review of Behavior Malware Analysis for Android
ABSTRACT:
Android-based smartphones are nowadays gaining popularity.
Smartphone users must always be concerned about security
breaches and malicious attacks. Here we
introduce an approach for proactive malware detection working by
abstraction of program behaviors. Suspicious behaviors are
detected by comparing trace abstractions to reference malicious
behaviors. The power of abstraction allows us to capture
common suspicious behaviors rather than specific malware
code and, in turn, to recognize malware variants. We
present and discuss an implementation validating our approach.
First, programs or apps are analyzed and represented
as trace languages, which are abstracted with
respect to elementary behavior patterns, defined as regular string
rewriting systems. This paper reviews the state of the art on
threats and vulnerabilities, and classifies existing approaches to
protecting mobile devices against these classes of attacks into
different categories, based upon the detection principles,
architectures, collected data and operating systems, especially
focusing on IDS-based models and tools.
10.Research in Progress - Defending Android Smart phones from
Malware Attacks
ABSTRACT:
Smart phones are becoming enriched with confidential information
due to their powerful computational capabilities and attractive
communications features. The Android smart phone is one of the
most widely used platforms by businesses and users alike. This is
partially because Android smart phones use the free, open-source
Linux as the underlying operating system, which allows
development of applications by any software developer. This
research study aims to explore security risks associated with the
use of Android smart phones and the sensitive information they
contain; the researcher devised a survey questionnaire to
investigate and further understand security threats targeting
Android smart phones. The survey also intended to study the scope
of malware attacks targeting Android phones and the effectiveness
of existing defense measures. The study surveyed the average
Android users as the target population to understand how they
perceive security and what security controls they use to protect
their smart phones.
11.Secure Encounter-based Mobile Social Networks:
Requirements, Designs, and Tradeoffs
ABSTRACT:
Encounter-based social networks and encounter-based systems link
users who share a location at the same time, as opposed to the
traditional social network paradigm of linking users who have an
offline friendship. This new approach presents challenges that are
fundamentally different from those tackled by previous social
network designs. In this paper, we explore the functional and
security requirements for these new systems, such as availability,
security, and privacy, and present several design options for
building secure encounter-based social networks. To highlight
these challenges we examine one recently proposed encounter-
based social network design and compare it to a set of idealized
security and functionality requirements. We show that it is
vulnerable to several attacks, including impersonation, collusion,
and privacy breaches, even though it was designed specifically for
security. Mindful of the possible pitfalls, we construct a flexible
framework for secure encounter-based social networks, which can
be used to construct networks that offer different security, privacy,
and availability guarantees. We describe two example constructions
derived from this framework, and consider each in terms of the
ideal requirements. Some of our new designs fulfill more
requirements in terms of system security, reliability, and privacy
than previous work. We also evaluate real-world performance of
one of our designs by implementing a proof-of-concept iPhone
application called Meet Up. Experiments highlight the potential of
our system and hint at the deployability of our designs on a large
scale.
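The core primitive above, linking users who share a location at the same time, can be sketched as a simple proximity-and-time check. The class, field names, and thresholds below are illustrative assumptions, not part of the paper's actual (cryptographic) design:

```java
// Hypothetical sketch: two users are "encountered" when their check-ins
// fall within a small spatial distance and a short time window.
public class EncounterMatcher {
    public static class CheckIn {
        final String user; final double lat; final double lon; final long timeSec;
        public CheckIn(String user, double lat, double lon, long timeSec) {
            this.user = user; this.lat = lat; this.lon = lon; this.timeSec = timeSec;
        }
    }

    static final double MAX_DIST_DEG = 0.001; // roughly ~100 m at the equator
    static final long   MAX_GAP_SEC  = 600;   // 10-minute window

    /** True when two check-ins are close in both space and time. */
    public static boolean isEncounter(CheckIn a, CheckIn b) {
        double dLat = Math.abs(a.lat - b.lat);
        double dLon = Math.abs(a.lon - b.lon);
        long   dT   = Math.abs(a.timeSec - b.timeSec);
        return dLat <= MAX_DIST_DEG && dLon <= MAX_DIST_DEG && dT <= MAX_GAP_SEC;
    }
}
```

A real secure design would of course never compare raw coordinates in the clear; the point here is only the matching rule itself.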
12.Cloud FTP: A Case Study of Migrating Traditional
Applications to the Cloud
ABSTRACT:
Cloud computing is growing rapidly because it offers on-demand
computing power and capacity. The power of the cloud enables
dynamic scalability of applications facing varying business
requirements. However, challenges arise when considering the
large number of existing applications. In this work, we propose to
move the traditional FTP service to the cloud. We implement an FTP
service on the Windows Azure Platform along with the auto-scaling
cloud feature. Based on this, we implement a benchmark to
measure the performance of our Cloud FTP. This case study
illustrates the potential benefits and technical issues associated
with migrating traditional applications to the cloud.
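The auto-scaling feature mentioned above can be illustrated with a toy threshold rule. The class name and thresholds are invented for illustration and do not come from the Windows Azure API or the case study:

```java
// Hypothetical auto-scaling rule in the spirit of the Cloud FTP case study:
// adjust the instance count with the measured load, within fixed bounds.
public class AutoScaler {
    static final int MIN_INSTANCES = 1, MAX_INSTANCES = 8;
    static final double SCALE_UP = 0.75, SCALE_DOWN = 0.25;

    /** Next instance count given the current count and average load in [0, 1]. */
    public static int nextCount(int current, double avgLoad) {
        if (avgLoad > SCALE_UP   && current < MAX_INSTANCES) return current + 1;
        if (avgLoad < SCALE_DOWN && current > MIN_INSTANCES) return current - 1;
        return current;
    }
}
```

The bounds keep the rule stable: a saturated deployment stops growing at `MAX_INSTANCES`, and an idle one never drops below `MIN_INSTANCES`.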
13.Collaborative Policy Administration
ABSTRACT:
Policy-based management is a very effective method to protect
sensitive information. However, the over-claiming of privileges is
widespread in emerging applications, including mobile applications
and social network services, because the applications’ users
involved in policy administration have little knowledge of policy-
based management. The over-claiming can be exploited by malicious
applications, leading to serious privacy leakages and financial
loss. To resolve this issue, this paper proposes a novel policy
administration mechanism, referred to as Collaborative Policy
Administration (CPA for short), to simplify the policy
administration. In CPA, a policy administrator can refer to other
similar policies to set up their own policies to protect privacy and
other sensitive information. This paper formally defines CPA, and
proposes its enforcement framework. Furthermore, in order to
obtain similar policies more effectively, which is the key step of
CPA, a text-mining-based similarity measure method is presented.
We evaluate CPA with data from Android applications, and
demonstrate that the text-mining-based similarity measure method
is more effective in obtaining similar policies than the previous
category-based method.
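One common text-mining similarity measure that could serve as the "similar policy" step is cosine similarity over term-frequency vectors. This sketch assumes that choice (the paper's exact measure may differ), and all names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a text-mining similarity measure for policy texts:
// cosine similarity over whitespace-tokenized term-frequency vectors.
public class PolicySimilarity {
    /** Term-frequency vector of a policy description. */
    static Map<String, Integer> termFreq(String text) {
        Map<String, Integer> tf = new HashMap<>();
        for (String tok : text.toLowerCase().split("\\s+")) {
            if (!tok.isEmpty()) tf.merge(tok, 1, Integer::sum);
        }
        return tf;
    }

    /** Cosine similarity between two policy descriptions, in [0, 1]. */
    public static double cosine(String a, String b) {
        Map<String, Integer> ta = termFreq(a), tb = termFreq(b);
        double dot = 0, na = 0, nb = 0;
        for (Map.Entry<String, Integer> e : ta.entrySet()) {
            na += e.getValue() * e.getValue();
            Integer other = tb.get(e.getKey());
            if (other != null) dot += e.getValue() * other;
        }
        for (int v : tb.values()) nb += v * v;
        return (na == 0 || nb == 0) ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
    }
}
```

An administrator could rank existing policies by this score and reuse the closest ones as templates, which is the reuse step CPA relies on.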
14.SPOC: A Secure and Privacy-Preserving Opportunistic
Computing Framework for Mobile-Healthcare Emergency
ABSTRACT:
With the pervasiveness of smart phones and the advance of
wireless body sensor networks (BSNs), mobile Healthcare (m-
Healthcare), which extends the operation of Healthcare provider
into a pervasive environment for better health monitoring, has
attracted considerable interest recently. However, the flourish of
m-Healthcare still faces many challenges including information
security and privacy preservation. In this paper, we propose a
secure and privacy-preserving opportunistic computing framework,
called SPOC, for m-Healthcare emergency. With SPOC, smart
phone resources including computing power and energy can be
opportunistically gathered to process the computing-intensive
personal health information (PHI) during m-Healthcare emergency
with minimal privacy disclosure. Specifically, to balance PHI
privacy disclosure against the high reliability of PHI processing and
transmission in an m-Healthcare emergency, we introduce an efficient
user-centric privacy access control in the SPOC framework, which is
based on an attribute-based access control and a new privacy-
preserving scalar product computation (PPSPC) technique, and
allows a medical user to decide who can participate in the
opportunistic computing to assist in processing his overwhelming
PHI data. Detailed security analysis shows that the proposed SPOC
framework can efficiently achieve user-centric privacy access
control in an m-Healthcare emergency. In addition, performance
evaluations via extensive simulations demonstrate SPOC’s
effectiveness in terms of providing highly reliable PHI processing and
transmission while minimizing privacy disclosure during an m-
Healthcare emergency.
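Stripped of its cryptography, the user-centric access-control idea, where the patient decides who may join the opportunistic computation, reduces to an attribute check. The sketch below assumes plain attribute sets with invented attribute names; the real SPOC framework enforces this cryptographically via attribute-based access control and the PPSPC technique:

```java
import java.util.Set;

// Hedged sketch of an attribute-based participation check: a helper is
// admitted only if it holds every attribute the patient's policy requires.
public class OpportunisticAccess {
    /** True when a candidate helper satisfies the patient's attribute policy. */
    public static boolean mayParticipate(Set<String> required, Set<String> helperAttrs) {
        return helperAttrs.containsAll(required);
    }
}
```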
15.Scalable and Secure Sharing of Personal Health Records in
Cloud Computing Using Attribute-Based Encryption
ABSTRACT:
Personal health record (PHR) is an emerging patient-centric model
of health information exchange, which is often outsourced to be
stored at a third party, such as cloud providers. However, there
have been wide privacy concerns as personal health information
could be exposed to those third party servers and to unauthorized
parties. To assure the patients’ control over access to their own
PHRs, encrypting the PHRs before outsourcing is a promising
approach. Yet, issues such as risks of privacy exposure,
scalability in key management, flexible access and efficient user
revocation, have remained the most important challenges toward
achieving fine-grained, cryptographically enforced data access
control. In this paper, we propose a novel patient-centric
framework and a suite of mechanisms for data access control to
PHRs stored in semi-trusted servers. To achieve fine-grained and
scalable data access control for PHRs, we leverage attribute based
encryption (ABE) techniques to encrypt each patient’s PHR file.
Different from previous works in secure data outsourcing, we focus
on the multiple data owner scenario, and divide the users in the
PHR system into multiple security domains, which greatly reduces the
key management complexity for owners and users. A high degree
of patient privacy is guaranteed simultaneously by exploiting
multi-authority ABE. Our scheme also enables dynamic
modification of access policies or file attributes, and supports efficient
on-demand user/attribute revocation and break-glass access under
emergency scenarios. Extensive analytical and experimental results
are presented which show the security, scalability and efficiency of
our proposed scheme.
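The baseline step above, encrypting the PHR before outsourcing, can be illustrated with standard symmetric encryption. The JDK has no built-in ABE, so this sketch substitutes AES purely for illustration; in the proposed scheme the encryption is attribute-based and multi-authority, and the class below is an invented name:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;

// Illustrative stand-in for "encrypt the PHR before outsourcing":
// a freshly generated AES key encrypts the record so the storage
// server only ever sees ciphertext. (In an ABE deployment the key
// material would itself be governed by an attribute policy.)
public class PhrEncryptor {
    private SecretKey key;

    public PhrEncryptor() {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            key = kg.generateKey();
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    private byte[] run(int mode, byte[] data) {
        try {
            // "AES" defaults to ECB/PKCS5 padding: fine for a sketch, not production.
            Cipher c = Cipher.getInstance("AES");
            c.init(mode, key);
            return c.doFinal(data);
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    public byte[] encrypt(String phr) {
        return run(Cipher.ENCRYPT_MODE, phr.getBytes(StandardCharsets.UTF_8));
    }

    public String decrypt(byte[] ciphertext) {
        return new String(run(Cipher.DECRYPT_MODE, ciphertext), StandardCharsets.UTF_8);
    }
}
```

Only the key holder can recover the record, which is what lets the PHR live on a semi-trusted server in the first place.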
16.Collaborative Learning Assistant for Android
ABSTRACT:
The quantitative and qualitative increase in mobile devices that
reach the average user opens more and more topics for research. In
education, m-Learning has been an interesting topic for several
years. However, smart phones, which today offer an
unprecedented mix of computing capability, connectivity, and
interactivity, open up new possibilities for m-Learning
applications. Such applications can seamlessly connect remote
individuals, but can also provide access to various resources, such
as media, or interactive quizzes. We focus on collaborative
learning and peer-review learning, two closely related concepts,
promoting methods such as sharing educational resources,
organizing study sessions and giving feedback to fellow students.
We propose a client-server system to provide all these features and
we study it under performance considerations such as scalability
and mobility.