IEEE 2010 PROJECT


1. BINRANK: SCALING DYNAMIC AUTHORITY-BASED SEARCH

USING MATERIALIZED SUBGRAPHS - AUGUST 2010-J2EE

Dynamic authority-based keyword search algorithms, such as ObjectRank

and personalized PageRank, leverage semantic link information to provide high-quality, high-recall search in databases and on the Web. Conceptually, these

algorithms require a query-time PageRank-style iterative computation over the

full graph. This computation is too expensive for large graphs and not feasible

    at query time. Alternatively, building an index of precomputed results for some

    or all keywords involves very expensive preprocessing. We introduce BinRank,

    a system that approximates ObjectRank results by utilizing a hybrid approach

    inspired by materialized views in traditional query processing. We materialize a

    number of relatively small subsets of the data graph in such a way that any

    keyword query can be answered by running ObjectRank on only one of the

    subgraphs. BinRank generates the subgraphs by partitioning all the terms in

    the corpus based on their co-occurrence, executing ObjectRank for each

    partition using the terms to generate a set of random walk starting points, and

    keeping only those objects that receive non-negligible scores. The intuition is

    that a subgraph that contains all objects and links relevant to a set of related

    terms should have all the information needed to rank objects with respect to

    one of these terms. We demonstrate that BinRank can achieve subsecond

    query execution time on the English Wikipedia data set, while producing high-

    quality search results that closely approximate the results of ObjectRank on

the original graph. The Wikipedia link graph contains about 10^8 edges, which is at least two orders of magnitude larger than what prior state-of-the-art

    dynamic authority-based search systems have been able to demonstrate. Our

    experimental evaluation investigates the trade-off between query execution

    time, quality of the results, and storage requirements of BinRank.
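
As a rough illustration of the query-time computation these systems must avoid running on the full graph, here is a minimal sketch (the data layout, damping factor, and convergence test are assumptions, not BinRank's implementation) of a personalized PageRank-style iteration restricted to one materialized subgraph:

```java
// Sketch: personalized PageRank over a materialized subgraph. The base set
// holds the random-walk starting points generated from the query terms.
import java.util.*;

public class SubgraphObjectRank {
    /** adjacency: node -> outgoing neighbors within the materialized subgraph */
    static Map<Integer, List<Integer>> adjacency = new HashMap<>();

    static Map<Integer, Double> rank(Set<Integer> baseSet, double d, double eps) {
        Map<Integer, Double> score = new HashMap<>();
        for (int s : baseSet) score.put(s, 1.0 / baseSet.size());
        double delta = Double.MAX_VALUE;
        while (delta > eps) {
            Map<Integer, Double> next = new HashMap<>();
            for (int s : baseSet)                       // restart mass on the base set
                next.merge(s, (1 - d) / baseSet.size(), Double::sum);
            for (Map.Entry<Integer, Double> e : score.entrySet()) {
                List<Integer> out = adjacency.getOrDefault(e.getKey(), List.of());
                for (int v : out)                       // push damped mass along edges
                    next.merge(v, d * e.getValue() / out.size(), Double::sum);
            }
            Set<Integer> keys = new HashSet<>(score.keySet());
            keys.addAll(next.keySet());
            delta = 0;
            for (int k : keys)                          // L1 change between iterations
                delta += Math.abs(next.getOrDefault(k, 0.0) - score.getOrDefault(k, 0.0));
            score = next;
        }
        return score; // objects with non-negligible scores form the answer
    }
}
```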

2. CLOSENESS: A NEW PRIVACY MEASURE FOR DATA PUBLISHING - JULY 2010-J2EE

The k-anonymity privacy requirement for publishing microdata requires that each equivalence class (i.e., a set of records that are indistinguishable from

    each other with respect to certain identifying attributes) contains at least k

    records. Recently, several authors have recognized that k-anonymity cannot

prevent attribute disclosure. The notion of ℓ-diversity has been proposed to

address this; ℓ-diversity requires that each equivalence class has at least ℓ well-

represented values for each sensitive attribute. In this article, we


show that ℓ-diversity has a number of limitations. In particular, it is neither

    necessary nor sufficient to prevent attribute disclosure. Motivated by these

    limitations, we propose a new notion of privacy called closeness. We first

    present the base model t-closeness, which requires that the distribution of a

    sensitive attribute in any equivalence class is close to the distribution of the

    attribute in the overall table (i.e., the distance between the two distributions

    should be no more than a threshold t). We then propose a more flexible privacy

    model called (n, t)-closeness that offers higher utility. We describe our

    desiderata for designing a distance measure between two probability

    distributions and present two distance measures. We discuss the rationale for

    using closeness as a privacy measure and illustrate its advantages through

examples and experiments.
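
The following is a minimal sketch of the basic t-closeness check (the representation is assumed; the paper favors the Earth Mover's Distance, whereas this sketch uses the simpler variational distance between the two distributions):

```java
// Sketch: check that the sensitive-attribute distribution in an equivalence
// class is within distance t of the distribution in the overall table.
import java.util.*;

public class TCloseness {
    /** counts: sensitive value -> frequency, normalized to a distribution */
    static Map<String, Double> toDistribution(Map<String, Integer> counts) {
        double total = counts.values().stream().mapToInt(Integer::intValue).sum();
        Map<String, Double> dist = new HashMap<>();
        counts.forEach((v, c) -> dist.put(v, c / total));
        return dist;
    }

    static boolean satisfiesTCloseness(Map<String, Integer> eqClass,
                                       Map<String, Integer> table, double t) {
        Map<String, Double> p = toDistribution(eqClass), q = toDistribution(table);
        Set<String> values = new HashSet<>(q.keySet());
        values.addAll(p.keySet());
        double distance = 0;
        for (String v : values)
            distance += Math.abs(p.getOrDefault(v, 0.0) - q.getOrDefault(v, 0.0));
        return distance / 2 <= t; // variational distance = half the L1 difference
    }
}
```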

3. DATA LEAKAGE DETECTION - JUNE 2010-DOT NET

    We study the following problem: A data distributor has given sensitive data to a

    set of supposedly trusted agents (third parties). Some of the data is leaked and

found in an unauthorized place (e.g., on the web or somebody's laptop). The

    distributor must assess the likelihood that the leaked data came from one or

    more agents, as opposed to having been independently gathered by other

    means. We propose data allocation strategies (across the agents) that improve

    the probability of identifying leakages. These methods do not rely on alterations

    of the released data (e.g., watermarks). In some cases we can also inject

    realistic but fake data records to further improve our chances of detecting

    leakage and identifying the guilty party.

4. PAM: AN EFFICIENT AND PRIVACY-AWARE MONITORING FRAMEWORK FOR CONTINUOUSLY MOVING OBJECTS - MARCH 2010-J2EE

    Efficiency and privacy are two fundamental issues in moving object monitoring.

    This paper proposes a privacy-aware monitoring (PAM) framework that

addresses both issues. The framework distinguishes itself from existing work by being the first to holistically address the issues of location updating in

terms of monitoring accuracy, efficiency, and privacy; in particular, when and

    how mobile clients should send location updates to the server. Based on the

    notions of safe region and most probable result, PAM performs location

    updates only when they would likely alter the query results. Furthermore, by

    designing various client update strategies, the framework is flexible and able to


    optimize accuracy, privacy, or efficiency. We develop efficient query

    evaluation/reevaluation and safe region computation algorithms in the

    framework. The experimental results show that PAM substantially outperforms

    traditional schemes in terms of monitoring accuracy, CPU cost, and scalability

    while achieving close-to-optimal communication cost.
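
A minimal sketch of the safe-region idea on the client side (the circular region and all names are illustrative assumptions, not PAM's actual algorithms): the client suppresses updates while it remains inside the region the server computed for it, since only leaving the region could alter the query results.

```java
// Illustrative safe-region client: report a location only when the current
// position leaves the server-assigned safe region.
public class SafeRegionClient {
    // circular safe region: center (cx, cy) and radius, assigned by the server
    private double cx, cy, radius;

    void onServerAssignsRegion(double cx, double cy, double radius) {
        this.cx = cx; this.cy = cy; this.radius = radius;
    }

    /** called on every GPS fix; returns true if an update was sent */
    boolean onLocationFix(double x, double y) {
        double dx = x - cx, dy = y - cy;
        boolean outside = dx * dx + dy * dy > radius * radius;
        if (outside) {
            sendLocationUpdate(x, y); // server reevaluates queries, returns a new region
        }
        return outside;
    }

    private void sendLocationUpdate(double x, double y) { /* network call */ }
}
```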

    5. P2P REPUTATION MANAGEMENT USING

    DISTRIBUTED IDENTITIES AND DECENTRALIZED

RECOMMENDATION CHAINS - JULY 2010-JAVA

Peer-to-peer (P2P) networks are vulnerable to peers who cheat, propagate

    malicious code, leech on the network, or simply do not cooperate. The

traditional security techniques developed for centralized distributed

systems, such as client-server networks, are insufficient for P2P networks by

virtue of their centralized nature. The absence of a central authority in a P2P

    network poses unique challenges for reputation management in the network.

    These challenges include identity management of the peers, secure reputation

    data management, Sybil attacks, and above all, availability of reputation data.

    In this paper, we present a cryptographic protocol for ensuring secure and

    timely availability of the reputation data of a peer to other peers at extremely

    low costs. The past behavior of the peer is encapsulated in its digital

    reputation, and is subsequently used to predict its future actions. As a result,

a peer's reputation motivates it to cooperate and desist from malicious

activities. The cryptographic protocol is coupled with self-certification and cryptographic mechanisms for identity management and countering the Sybil

    attack. We illustrate the security and the efficiency of the system analytically

    and by means of simulations in a completely decentralized Gnutella-like P2P

    network.

6. MANAGING MULTIDIMENSIONAL HISTORICAL

    AGGREGATE DATA IN UNSTRUCTURED P2P NETWORKS

    SEPTEMBER 2010-JAVA

    A P2P-based framework supporting the extraction of aggregates from

    historical multidimensional data is proposed, which provides efficient and

    robust query evaluation. When a data population is published, data are

summarized in a synopsis, consisting of an index built on top of a set of sub-synopses (storing compressed representations of distinct data portions). The

index and the sub-synopses are distributed across the network, and suitable


    replication mechanisms taking into account the query workload and network

    conditions are employed that provide the appropriate coverage for both the

index and the sub-synopses.

    7. BRIDGING DOMAINS USING WORLD WIDE

    KNOWLEDGE FOR TRANSFER LEARNING-DOT NET

    A major problem of classification learning is the lack of ground-truth labeled

    data. It is usually expensive to label new data instances for training a model.

    To solve this problem, domain adaptation in transfer learning has been

    proposed to classify target domain data by using some other source domain

    data, even when the data may have different distributions. However, domain

    adaptation may not work well when the differences between the source and

    target domains are large. In this paper, we design a novel transfer learning

approach, called BIG (Bridging Information Gap), to effectively extract useful knowledge in a worldwide knowledge base, which is then used to link the

    source and target domains for improving the classification performance. BIG

    works when the source and target domains share the same feature space but

    different underlying data distributions. Using the auxiliary source data, we can

    extract a bridge that allows cross-domain text classification problems to be

solved using standard semi-supervised learning algorithms. A major

    contribution of our work is that with BIG, a large amount of worldwide

    knowledge can be easily adapted and used for learning in the target domain.

    We conduct experiments on several real-world cross-domain text classification

    tasks and demonstrate that our proposed approach can outperform several

    existing domain adaptation approaches significantly.

8. ON WIRELESS SCHEDULING ALGORITHMS FOR MINIMIZING

THE QUEUE-OVERFLOW PROBABILITY - JUNE 2010-JAVA

    In this paper, we are interested in wireless scheduling algorithms for the

    downlink of a single cell that can minimize the queue-overflow probability.

    Specifically, in a large-deviation setting, we are interested in algorithms that

maximize the asymptotic decay-rate of the queue-overflow probability, as the queue-overflow threshold approaches infinity. We first derive an upper bound

    on the decay-rate of the queue-overflow probability over all scheduling policies.

We then focus on a class of scheduling algorithms collectively referred to as the

α-algorithms. For a given α ≥ 1, the α-algorithm picks for service at each time

the user with the largest product of the transmission rate and the backlog

raised to the power α. We show that when the overflow metric is


appropriately modified, the minimum-cost-to-overflow under the α-algorithm

    can be achieved by a simple linear path, and it can be written as the solution of

    a vector-optimization problem. Using this structural property, we then show

that as α approaches infinity, the α-algorithms asymptotically achieve the

largest decay-rate of the queue-overflow probability. Finally, this result enables

    us to design scheduling algorithms that are both close-to-optimal in terms of

    the asymptotic decay-rate of the overflow probability, and empirically shown to

    maintain small queue-overflow probabilities over queue-length ranges of

    practical interest.
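
A minimal sketch of the α-rule described above (array layout and names are assumptions): at each slot, serve the user maximizing rate times backlog raised to the power α.

```java
// Illustrative α-algorithm scheduler: each time slot, pick the user whose
// current transmission rate times (queue backlog)^α is largest, for α ≥ 1.
public class AlphaScheduler {
    /** returns the index of the user to serve, or -1 if all queues are empty */
    static int pickUser(double[] rates, double[] backlogs, double alpha) {
        int best = -1;
        double bestWeight = 0;
        for (int i = 0; i < rates.length; i++) {
            double weight = rates[i] * Math.pow(backlogs[i], alpha);
            if (weight > bestWeight) { bestWeight = weight; best = i; }
        }
        return best; // α = 1 recovers classical max-weight scheduling
    }
}
```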

9. A DISTRIBUTED CSMA ALGORITHM FOR

THROUGHPUT AND UTILITY MAXIMIZATION IN WIRELESS NETWORKS - JUNE 2010-JAVA

In multihop wireless networks, designing distributed scheduling algorithms to

    achieve the maximal throughput is a challenging problem because of the

    complex interference constraints among different links. Traditional maximal-

    weight scheduling (MWS), although throughput-optimal, is difficult to

    implement in distributed networks. On the other hand, a distributed greedy

    protocol similar to IEEE 802.11 does not guarantee the maximal throughput.

    In this paper, we introduce an adaptive carrier sense multiple access (CSMA)

scheduling algorithm that can achieve the maximal throughput in a distributed manner.

    Some of the major advantages of the algorithm are that it applies to a very

    general interference model and that it is simple, distributed, and

asynchronous. Furthermore, the algorithm is combined with congestion control to achieve the optimal utility and fairness of competing flows. Simulations

    verify the effectiveness of the algorithm. Also, the adaptive CSMA scheduling is

    a modular MAC-layer algorithm that can be combined with various protocols in

    the transport layer and network layer. Finally, the paper explores some

    implementation issues in the setting of 802.11 networks.
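
A hedged sketch of the flavor of such an adaptive CSMA update (the step size, update rule, and names are assumptions, not the paper's exact algorithm): each link raises its transmission aggressiveness when arrivals outpace service and lowers it otherwise.

```java
// Illustrative adaptive CSMA link: an "aggressiveness" parameter grows when
// the queue builds up and shrinks when service exceeds arrivals; the mean
// backoff time then decreases exponentially in the aggressiveness.
public class AdaptiveCsmaLink {
    private double aggressiveness = 0;   // often written r_i in this literature
    private final double step = 0.01;    // small step size for stability (assumed)

    /** called once per control period with measured arrivals and service */
    void update(double arrivedBits, double servedBits) {
        aggressiveness += step * (arrivedBits - servedBits);
        aggressiveness = Math.max(0, aggressiveness);
    }

    /** mean backoff time shrinks as the link becomes more aggressive */
    double meanBackoff(double baseBackoff) {
        return baseBackoff * Math.exp(-aggressiveness);
    }
}
```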

10. A DYNAMIC EN-ROUTE FILTERING SCHEME FOR

DATA REPORTING IN WIRELESS SENSOR NETWORKS-JAVA

In wireless sensor networks, adversaries can inject false data reports via

    compromised nodes and launch DoS attacks against legitimate reports.

    Recently, a number of filtering schemes against false reports have been

    proposed. However, they either lack strong filtering capacity or cannot support

    highly dynamic sensor networks very well. Moreover, few of them can deal with

    DoS attacks simultaneously. In this paper, we propose a dynamic en-route


    filtering scheme that addresses both false report injection and DoS attacks in

    wireless sensor networks. In our scheme, each node has a hash chain of

    authentication keys used to endorse reports; meanwhile, a legitimate report

    should be authenticated by a certain number of nodes. First, each node

    disseminates its key to forwarding nodes. Then, after sending reports, the

    sending nodes disclose their keys, allowing the forwarding nodes to verify their

reports. We design a hill-climbing key dissemination approach that ensures

    the nodes closer to data sources have stronger filtering capacity. Moreover, we

    exploit the broadcast property of wireless communication to defeat DoS attacks

    and adopt multipath routing to deal with the topology changes of sensor

    networks. Simulation results show that compared to existing solutions, our

    scheme can drop false reports earlier with a lower memory requirement,

    especially in highly dynamic sensor networks.
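
A minimal sketch of the hash-chain keying described above (chain length, hash function, and names are assumptions): a node commits to the last element of a chain, discloses keys in reverse order after sending reports, and forwarders verify a disclosed key by re-hashing it back to the commitment.

```java
// Sketch: authentication-key hash chain. keys[0] is the commitment that is
// disseminated to forwarding nodes in advance; keys are disclosed from the
// far end of the chain, one per reporting round.
import java.security.MessageDigest;

public class KeyChain {
    static byte[] hash(byte[] x) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(x);
    }

    /** builds the chain: keys[i] = H(keys[i+1]); keys[0] is the commitment */
    static byte[][] buildChain(byte[] seed, int n) throws Exception {
        byte[][] keys = new byte[n + 1][];
        keys[n] = seed;
        for (int i = n - 1; i >= 0; i--) keys[i] = hash(keys[i + 1]);
        return keys;
    }

    /** forwarder check: hashing a disclosed key j times must give the commitment */
    static boolean verify(byte[] disclosed, int j, byte[] commitment) throws Exception {
        byte[] k = disclosed;
        for (int i = 0; i < j; i++) k = hash(k);
        return MessageDigest.isEqual(k, commitment);
    }
}
```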

11. EFFICIENT AND DYNAMIC ROUTING TOPOLOGY

INFERENCE FROM END-TO-END MEASUREMENTS-JAVA

Inferring the routing topology and link performance from a node to a set of

    other nodes is an important component in network monitoring and application

    design. In this paper we propose a general framework for designing topology

    inference algorithms based on additive metrics. The framework can flexibly fuse

    information from multiple measurements to achieve better estimation

    accuracy. We develop computationally efficient (polynomial-time) topology

    inference algorithms based on the framework. We prove that the probability of

correct topology inference of our algorithms converges to one exponentially fast in the number of probing packets. In particular, for applications where nodes

    may join or leave frequently such as overlay network construction, application-

    layer multicast, peer-to-peer file sharing/streaming, we propose a novel

    sequential topology inference algorithm which significantly reduces the probing

    overhead and can efficiently handle node dynamics. We demonstrate the

    effectiveness of the proposed inference algorithms via Internet experiments.

12. SECURE DATA COLLECTION IN WIRELESS SENSOR

NETWORKS USING RANDOMIZED DISPERSIVE ROUTES - JULY 2010-JAVA

Compromised-node and denial-of-service are two key attacks in wireless sensor

    networks (WSNs). In this paper, we study routing mechanisms that circumvent

    (bypass) black holes formed by these attacks. We argue that existing multi-

    path routing approaches are vulnerable to such attacks, mainly due to their

    deterministic nature. So once an adversary acquires the routing algorithm, it


    can compute the same routes known to the source, and hence endanger all

    information sent over these routes. In this paper, we develop mechanisms that

    generate randomized multipath routes. Under our design, the routes taken by

    the shares of different packets change over time. So even if the routing

    algorithm becomes known to the adversary, the adversary still cannot pinpoint

    the routes traversed by each packet. Besides randomness, the routes generated

    by our mechanisms are also highly dispersive and energy-efficient, making

    them quite capable of bypassing black holes at low energy cost. Extensive

    simulations are conducted to verify the validity of our mechanisms.
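
A hedged sketch of the randomized multipath idea (the XOR share splitting and uniform next-hop choice are illustrative assumptions, not the paper's exact mechanisms): a packet is split into shares that each take an independently randomized route, so no single captured route reveals the packet.

```java
// Sketch: split a packet into XOR shares and route each share via a
// randomly chosen neighbor; all n shares are needed to reconstruct.
import java.security.SecureRandom;
import java.util.List;

public class DispersiveRouting {
    static final SecureRandom rng = new SecureRandom();

    /** XOR secret sharing: the packet is the XOR of all n shares */
    static byte[][] split(byte[] packet, int n) {
        byte[][] shares = new byte[n][packet.length];
        for (int i = 0; i < n - 1; i++) rng.nextBytes(shares[i]);
        for (int b = 0; b < packet.length; b++) {
            byte acc = packet[b];
            for (int i = 0; i < n - 1; i++) acc ^= shares[i][b];
            shares[n - 1][b] = acc; // last share completes the XOR
        }
        return shares;
    }

    /** each share independently picks a random neighbor as its next hop */
    static int nextHop(List<Integer> neighbors) {
        return neighbors.get(rng.nextInt(neighbors.size()));
    }
}
```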

13. VEBEK: VIRTUAL ENERGY-BASED ENCRYPTION AND

KEYING FOR WIRELESS SENSOR NETWORKS - JULY

2010-DOT NET

    Designing cost-efficient, secure network protocols for Wireless Sensor Networks

    (WSNs) is a challenging problem because sensors are resource-limited wireless

    devices. Since the communication cost is the most dominant factor in a

sensor's energy consumption, we introduce an energy-efficient Virtual Energy-

    Based Encryption and Keying (VEBEK) scheme for WSNs that significantly

    reduces the number of transmissions needed for rekeying to avoid stale keys.

    In addition to the goal of saving energy, minimal transmission is imperative for

    some military applications of WSNs where an adversary could be monitoring

    the wireless spectrum. VEBEK is a secure communication framework where

sensed data is encoded using a scheme based on a permutation code generated via the RC4 encryption mechanism. The key to the RC4 encryption mechanism

    dynamically changes as a function of the residual virtual energy of the sensor.

    Thus, a one-time dynamic key is employed for one packet only and different

    keys are used for the successive packets of the stream. The intermediate nodes

    along the path to the sink are able to verify the authenticity and integrity of the

incoming packets using a predicted value of the key generated by the sender's

virtual energy, thus eliminating the need for specific rekeying messages. VEBEK is

    able to efficiently detect and filter false data injected into the network by

    malicious outsiders. The VEBEK framework consists of two operational modes

    (VEBEK-I and VEBEK-II), each of which is optimal for different scenarios. In

VEBEK-I, each node monitors its one-hop neighbors, whereas VEBEK-II

statistically monitors downstream nodes. We have evaluated VEBEK's

    feasibility and performance analytically and through simulations. Our results

    show that VEBEK, without incurring transmission overhead (increasing packet

    size or sending control messages for rekeying), is able to eliminate malicious


    data from the network in an energy efficient manner. We also show that our

    framework performs better than other comparable schemes in the literature

    with an overall 60-100 percent improvement in energy savings without the

    assumption of a reliable medium access control layer.
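
A hedged sketch of the virtual-energy keying idea (the key derivation and names are assumptions, not VEBEK's actual construction; Java's standard name for RC4 is "ARCFOUR"): the per-packet key is derived from the sender's residual virtual energy, which the receiver tracks and predicts, so no rekeying messages are exchanged.

```java
// Sketch: derive a one-time RC4 key from the current virtual energy value
// and encode a packet with it; the next packet uses the decremented energy.
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class VirtualEnergyCrypto {
    /** derive a per-packet key from the sender's residual virtual energy */
    static SecretKeySpec keyFromEnergy(long virtualEnergy) throws Exception {
        byte[] seed = Long.toString(virtualEnergy).getBytes(StandardCharsets.UTF_8);
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(seed);
        byte[] key = new byte[16];
        System.arraycopy(digest, 0, key, 0, 16);
        return new SecretKeySpec(key, "ARCFOUR");
    }

    static byte[] encode(byte[] payload, long virtualEnergy) throws Exception {
        Cipher rc4 = Cipher.getInstance("ARCFOUR");
        rc4.init(Cipher.ENCRYPT_MODE, keyFromEnergy(virtualEnergy));
        return rc4.doFinal(payload);
    }
}
```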

14. LOCALIZED MULTICAST: EFFICIENT AND DISTRIBUTED REPLICA DETECTION IN LARGE-SCALE

SENSOR NETWORKS-DOT NET

Due to the poor physical protection of sensor nodes, it is generally assumed

    that an adversary can capture and compromise a small number of sensors in

    the network. In a node replication attack, an adversary can take advantage of

    the credentials of a compromised node to surreptitiously introduce replicas of

    that node into the network. Without an effective and efficient detection

    mechanism, these replicas can be used to launch a variety of attacks that

    undermine many sensor applications and protocols. In this paper, we present a

    novel distributed approach called Localized Multicast for detecting node

    replication attacks. The efficiency and security of our approach are evaluated

    both theoretically and via simulation. Our results show that, compared to

previous distributed approaches proposed by Parno et al., Localized Multicast

    is more efficient in terms of communication and memory costs in large-scale

    sensor networks, and at the same time achieves a higher probability of

    detecting node replicas.

15. LAYERED APPROACH USING CONDITIONAL RANDOM

FIELDS FOR INTRUSION DETECTION-JAVA

Intrusion detection faces a number of challenges; an intrusion detection

    system must reliably detect malicious activities in a network and must perform

    efficiently to cope with the large amount of network traffic. In this paper, we

address these two issues of accuracy and efficiency using Conditional Random

Fields and a Layered Approach. We demonstrate that high attack detection accuracy can be achieved by using Conditional Random Fields and high

    efficiency by implementing the Layered Approach. Experimental results on the

    benchmark KDD 99 intrusion data set show that our proposed system based

    on Layered Conditional Random Fields outperforms other well-known methods

such as decision trees and naive Bayes. The improvement in attack

detection accuracy is very high, particularly for the U2R attacks (34.8 percent


    improvement) and the R2L attacks (34.5 percent improvement). Statistical

tests also demonstrate higher confidence in detection accuracy for our method.

    Finally, we show that our system is robust and is able to handle noisy data

    without compromising performance.

16. PRIVACY-CONSCIOUS LOCATION-BASED QUERIES IN MOBILE ENVIRONMENTS-JAVA

    In location-based services, users with location-aware mobile devices are able to

    make queries about their surroundings anywhere and at any time. While this

    ubiquitous computing paradigm brings great convenience for information

    access, it also raises concerns over potential intrusion into user location

    privacy. To protect location privacy, one typical approach is to cloak user

    locations into spatial regions based on user-specified privacy requirements,

and to transform location-based queries into region-based queries. In this paper, we identify and address three new issues concerning this location

    cloaking approach. First, we study the representation of cloaking regions and

    show that a circular region generally leads to a small result size for region-

    based queries. Second, we develop a mobility-aware location cloaking

    technique to resist trace analysis attacks. Two cloaking algorithms, namely

    MaxAccu_Cloak and MinComm_Cloak, are designed based on different

    performance objectives. Finally, we develop an efficient polynomial algorithm

    for evaluating circular-region-based kNN queries. Two query processing modes,

    namely bulk and progressive, are presented to return query results either all at

    once or in an incremental manner. Experimental results show that our

    proposed mobility-aware cloaking algorithms significantly improve the quality

    of location cloaking in terms of an entropy measure without compromising

    much on query latency or communication cost. Moreover, the progressive query

    processing mode achieves a shorter response time than the bulk mode by

    parallelizing the query evaluation and result transmission.
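
A minimal sketch of circular-region cloaking (the randomized center offset is an illustrative assumption, not the MaxAccu_Cloak or MinComm_Cloak algorithms): the reported region is a circle that contains the true position but whose center is displaced, so the server answers a region-based query instead of a point query.

```java
// Sketch: cloak a point location into a circular region of a user-specified
// minimum radius; the true point stays inside because offset < minRadius.
import java.security.SecureRandom;

public class CircularCloak {
    static final SecureRandom rng = new SecureRandom();

    /** returns {centerX, centerY, radius} for a circle covering (x, y) */
    static double[] cloak(double x, double y, double minRadius) {
        double angle = rng.nextDouble() * 2 * Math.PI;
        double offset = rng.nextDouble() * minRadius; // keep true point inside
        double cx = x + offset * Math.cos(angle);
        double cy = y + offset * Math.sin(angle);
        return new double[] { cx, cy, minRadius };
    }
}
```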

17. LOGOOT-UNDO: DISTRIBUTED COLLABORATIVE EDITING SYSTEM ON P2P NETWORKS-JAVA

Peer-to-peer systems provide cheap, scalable content distribution and resist

censorship attempts. However, P2P networks mainly distribute immutable

content and provide poor support for highly dynamic content such as that produced

    by collaborative systems. A new class of algorithms called CRDT (Commutative

    Replicated Data Type), which ensures consistency of highly dynamic content on

P2P networks, is emerging. However, while existing CRDT algorithms support the


"edit anywhere, anytime" feature, they do not support the "undo anywhere,

anytime" feature. In this paper, we present the Logoot-Undo CRDT algorithm,

    which integrates the undo anywhere, anytime feature. We compare the

    performance of the proposed algorithm with related algorithms and measure

    the impact of the undo feature on the global performance of the algorithm. We

    prove that the cost of the undo feature remains low on a corpus of data

    extracted from Wikipedia.
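
A hedged sketch of the position-identifier idea behind Logoot-style CRDTs (the digit scheme below is a simplification, not the published algorithm): each line receives an immutable identifier drawn strictly between its neighbors' identifiers, compared lexicographically with shorter prefixes sorting first, so concurrent inserts commute and no position ever has to be shifted.

```java
// Sketch: generate a digit-list identifier strictly between two neighbors.
// Identifiers are compared lexicographically, shorter prefixes first.
import java.util.*;

public class LogootSketch {
    static final Random rng = new Random();

    static List<Integer> between(List<Integer> left, List<Integer> right) {
        List<Integer> id = new ArrayList<>();
        for (int depth = 0; ; depth++) {
            int lo = depth < left.size() ? left.get(depth) : 0;
            int hi = depth < right.size() ? right.get(depth) : Integer.MAX_VALUE;
            if (hi - lo > 1) {                 // room at this depth: pick inside
                id.add(lo + 1 + rng.nextInt(hi - lo - 1));
                return id;
            }
            id.add(lo);                        // no room: copy and go deeper
        }
    }
}
```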

18. IRM: INTEGRATED FILE REPLICATION AND

    CONSISTENCY MAINTENANCE IN P2P SYSTEMS-JAVA

    In peer-to-peer file sharing systems, file replication and consistency

    maintenance are widely used techniques for high system performance. Despite

    significant interdependencies between them, these two issues are typically

addressed separately. Most file replication methods rigidly specify replica nodes, leading to low replica utilization, unnecessary replicas, and hence extra

    consistency maintenance overhead. Most consistency maintenance methods

    propagate update messages based on message spreading or a structure without

    considering file replication dynamism, leading to inefficient file update and

    hence high possibility of outdated file response. This paper presents an

    Integrated file Replication and consistency Maintenance mechanism (IRM) that

    integrates the two techniques in a systematic and harmonized manner. It

    achieves high efficiency in file replication and consistency maintenance at a

    significantly low cost. Instead of passively accepting replicas and updates, each

    node determines file replication and update polling by dynamically adapting to

    time-varying file query and update rates, which avoids unnecessary file

    replications and updates. Simulation results demonstrate the effectiveness of

    IRM in comparison with other approaches. It dramatically reduces overhead

and yields significant improvements in the efficiency of both file replication

    and consistency maintenance approaches.

19. ACTIVE RERANKING FOR WEB IMAGE SEARCH

    MARCH 2010-J2EE

Image search reranking methods usually fail to capture the user's intention

    when the query term is ambiguous. Therefore, reranking with user

    interactions, or active reranking, is highly demanded to effectively improve the

search performance. The essential problem in active reranking is how to target

the user's intention. To achieve this goal, this paper presents a structural

information based sample selection strategy to reduce the user's labeling


efforts. Furthermore, to localize the user's intention in the visual feature space,

    a novel local-global discriminative dimension reduction algorithm is proposed.

    In this algorithm, a submanifold is learned by transferring the local geometry

    and the discriminative information from the labelled images to the whole

    (global) image database. Experiments on both synthetic datasets and a real

    Web image search dataset demonstrate the effectiveness of the proposed active

    reranking scheme, including both the structural information based active

    sample selection strategy and the local-global discriminative dimension

    reduction algorithm.

20. AN IMPROVED LOSSLESS IMAGE COMPRESSION

ALGORITHM LOCO-R - 2010 International Conference On Computer Design And Applications (ICCDA 2010)-JAVA

This paper presents a state-of-the-art implementation of the lossless image

    compression algorithm LOCO-R, which is based on the LOCO-I (low complexity

lossless compression for images) algorithm developed by Weinberger, Seroussi,

and Sapiro. With modifications and improvements, the algorithm markedly

reduces the implementation complexity. Experiments illustrate that this

    algorithm is better than Rice Compression typically by around 15 percent.

21. A DWT-BASED APPROACH FOR STEGANOGRAPHY USING BIOMETRICS

2010 International Conference on Data Storage and

Data Engineering-DOT NET

    Steganography is the art of hiding the existence of data in another

    transmission medium to achieve secret communication. It does not replace

    cryptography but rather boosts the security using its obscurity features.

    Steganography method used in this paper is based on biometrics. And the

    biometric feature used to implement steganography is skin tone region of

    images [1]. Here secret data is embedded within skin region of image that will

    provide an excellent secure location for data hiding. For this skin tone

detection is performed using the HSV (Hue, Saturation, and Value) color space. Additionally, secret data embedding is performed using a frequency-domain

approach, DWT (Discrete Wavelet Transform), since DWT outperforms DCT

(Discrete Cosine Transform). Secret data is hidden in one of the high-frequency

    sub-band of DWT by tracing skin pixels in that sub-band. Different steps of

data hiding are applied by cropping an image interactively. Cropping provides

greater security than hiding data without cropping, i.e., in the whole


image, so the cropped region works as a key at the decoding side. This study shows

that by adopting an object-oriented steganography mechanism, in the sense

that we track skin-tone objects in the image, we get higher security, and a

satisfactory PSNR (Peak Signal-to-Noise Ratio) is also obtained.
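
A minimal sketch of embedding a bit in a high-frequency DWT coefficient (a one-level integer Haar transform on a single row; the actual system works on skin-tone regions of 2-D images and differs in many details):

```java
// Sketch: integer Haar transform of one row, LSB embedding in a detail
// (high-frequency) coefficient, and the exact inverse transform.
public class DwtEmbedSketch {
    /** forward integer Haar: first half = averages, second half = details */
    static int[] haar(int[] row) {
        int half = row.length / 2, out[] = new int[row.length];
        for (int i = 0; i < half; i++) {
            out[i] = (row[2 * i] + row[2 * i + 1]) >> 1;   // approximation
            out[half + i] = row[2 * i] - row[2 * i + 1];   // detail coefficient
        }
        return out;
    }

    /** hide one secret bit in the LSB of a chosen detail coefficient */
    static void embedBit(int[] coeffs, int detailIndex, int bit) {
        int half = coeffs.length / 2;
        coeffs[half + detailIndex] = (coeffs[half + detailIndex] & ~1) | (bit & 1);
    }

    /** inverse transform, recovering the (slightly modified) row */
    static int[] inverseHaar(int[] coeffs) {
        int half = coeffs.length / 2, row[] = new int[coeffs.length];
        for (int i = 0; i < half; i++) {
            int avg = coeffs[i], diff = coeffs[half + i];
            row[2 * i] = avg + ((diff + 1) >> 1);
            row[2 * i + 1] = row[2 * i] - diff;
        }
        return row;
    }
}
```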

22. ON EVENT-BASED MIDDLEWARE FOR LOCATION-AWARE MOBILE APPLICATIONS-JAVA

As mobile applications become more widespread, programming paradigms and

    middleware architectures designed to support their development are becoming

    increasingly important. The event-based programming paradigm is a strong

    candidate for the development of mobile applications due to its inherent

    support for the loose coupling between components required by mobile

    applications. However, existing middleware that supports the event-based

    programming paradigm is not well suited to supporting location-aware mobile

    applications in which highly mobile components come together dynamically to

    collaborate at some location. This paper presents a number of techniques

    including location-independent announcement and subscription coupled with

    location-dependent filtering and event delivery that can be used by event-based

    middleware to support such collaboration. We describe how these techniques

    have been implemented in STEAM, an event-based middleware with a fully

    decentralized architecture, which is particularly well suited to deployment in

    ad hoc network environments. The cost of such location-based event

    dissemination and the benefits of distributed event filtering are evaluated.

    23. INFERENCE FROM AGING INFORMATION JUNE

    2010-DOT NET

    For many learning tasks the duration of the data collection can be greater than

    the time scale for changes of the underlying data distribution. The question we

    ask is how to include the information that data are aging. Ad hoc methods to

    achieve this include the use of validity windows that prevent the learning

    machine from making inferences based on old data. This introduces the

    problem of how to define the size of validity windows. In this brief, a new

    adaptive Bayesian inspired algorithm is presented for learning drifting

    concepts. It uses the analogy of validity windows in an adaptive Bayesian way

    to incorporate changes in the data distribution over time. We apply a

    theoretical approach based on information geometry to the classification

    problem and measure its performance in simulations. The uncertainty about

    the appropriate size of the memory windows is dealt with in a Bayesian manner


    by integrating over the distribution of the adaptive window size. Thus, the

    posterior distribution of the weights may develop algebraic tails. The learning

    algorithm results from tracking the mean and variance of the posterior

    distribution of the weights. It was found that the algebraic tails of this posterior

    distribution give the learning algorithm the ability to cope with an evolving

    environment by permitting the escape from local traps.

24. MITIGATING SELECTIVE FORWARDING ATTACKS

    WITH A CHANNEL-AWARE APPROACH IN WMNS MAY

    2010-JAVA

    In this paper, we consider a special case of denial of service (DoS) attack in

wireless mesh networks (WMNs) known as a selective forwarding attack (a.k.a.

gray-hole attack). With such an attack, a misbehaving mesh router just

    forwards a subset of the packets it receives but drops the others. While most of

    the existing studies on selective forwarding attacks focus on attack detection

    under the assumption of an error-free wireless channel, we consider a more

    practical and challenging scenario that packet dropping may be due to an

    attack, or normal loss events such as medium access collision or bad channel

quality. Specifically, we develop a channel-aware detection (CAD) algorithm that

    can effectively identify the selective forwarding misbehavior from the normal

    channel losses. The CAD algorithm is based on two strategies, channel

    estimation and traffic monitoring. If the monitored loss rate at certain hops

exceeds the estimated normal loss rate, those nodes involved will be identified as attackers. Moreover, we carry out analytical studies to determine the

    optimal detection thresholds that minimize the summation of false alarm and

    missed detection probabilities. We also compare our CAD approach with some

    existing solutions, through extensive computer simulations, to demonstrate the

    efficiency of discriminating selective forwarding attacks from normal channel

    losses.
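
A minimal sketch of the CAD decision rule stated above (threshold handling and names are assumptions): flag a hop as an attacker when its monitored loss rate exceeds the estimated normal channel loss rate by more than a detection threshold.

```java
// Sketch: CAD decision rule combining traffic monitoring (observed losses)
// with channel estimation (expected losses under normal conditions).
public class ChannelAwareDetection {
    /**
     * @param monitoredLossRate fraction of packets observed dropped at the hop
     * @param estimatedLossRate normal loss rate predicted from channel estimation
     * @param threshold         slack trading false alarms against missed detections
     */
    static boolean isAttacker(double monitoredLossRate,
                              double estimatedLossRate,
                              double threshold) {
        return monitoredLossRate > estimatedLossRate + threshold;
    }
}
```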

25. NOVEL DEFENSE MECHANISM AGAINST DATA FLOODING ATTACKS IN WIRELESS AD HOC NETWORKS-JAVA

    Mobile users like to use their own consumer electronic devices anywhere and

at any time to access multimedia data. Hence, we expect that wireless ad hoc

    networks will be widely used in the near future since these networks form the

    topology with low cost on the fly. However, consumer electronic devices

    generally operate on limited battery power and therefore are vulnerable to

    security threats like data flooding attacks. The data flooding attack causes


Denial of Service (DoS) attacks by flooding many data packets. However, only

a few defense systems against data flooding attacks exist. Moreover, the

existing schemes may not guarantee the Quality of Service (QoS) of bursty traffic,

since multimedia data are usually bursty. Therefore, we propose a novel defense

    mechanism against data flooding attacks with the aim of enhancing the

    throughput. The simulation results show that the proposed scheme enhances

the throughput of bursty traffic.
