
Study of a Byzantine-tolerant P2P-TV infrastructure

Purpose

Security is usually a minor concern in the design of P2P streaming applications. Requirements such as low latency and high stream quality are addressed first, often neglecting the disruption that malicious users can bring into the system.

In file-sharing systems, elaborate incentive mechanisms have been developed and adopted in recent years [1]. Incentives, however, can hardly be considered a solution in a system where users come and go far more frequently than in file-sharing networks, where there is ample time to build up a user's reputation. The low-latency requirement calls for a faster method to identify malicious participants and ban them from the system.

On the other hand, the Distributed Systems literature offers many results on secure computation and on resilience to different flavours of attack. Recently, the research team led by prof. Alvisi [2] began applying these results to low-latency systems, with particular attention to Byzantine failures.

Starting from the BAR model [3], named after the three groups into which participants in a streaming session are divided (Byzantine, altruistic and rational), and using concepts from game theory (Nash equilibria), the researchers developed FlightPath, a system for streaming live events that is proven resilient to a known percentage of maliciously acting participants, and introduced the approximate $\varepsilon$-Nash equilibrium into system design.
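For reference, the solution concept mentioned above can be stated as follows; this is one common formulation, and the notation is ours rather than taken verbatim from [2]. A strategy profile $\sigma = (\sigma_1, \dots, \sigma_n)$ is an $\varepsilon$-Nash equilibrium if, for every rational player $i$ and every alternative strategy $\sigma_i'$,
\[
  u_i(\sigma_i', \sigma_{-i}) \le (1 + \varepsilon)\, u_i(\sigma_i, \sigma_{-i}),
\]
where $u_i$ is player $i$'s expected utility and $\sigma_{-i}$ denotes the strategies of the other players: no unilateral deviation gains a rational peer more than a factor $1 + \varepsilon$, so rational peers have no significant incentive to deviate from the protocol.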

The purpose of this project is to replicate the results presented in [2] and to investigate limitations of and possible improvements to the system, in order to gain useful insight into the resilience properties achievable by such a design.

Details about the experiments

We need about 500 nodes, on each of which we will deploy a client (ClientUI.py); results will be collected through the scripts provided with the source code. The modules composing the system are listed below (a minimal launch sketch follows the list):
  • BROADCASTER.PY: code for the source of the stream packets
  • AUTHORITY.PY: manages keys and the associated identities
  • CLIENT.PY: requests stream packets from other clients and/or from the source
  • TRACKER.PY: keeps the complete membership list
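As a rough illustration of a deployment, the following sketch launches the three central entities on dedicated Emulab nodes and leaves the clients to the remaining nodes. The host names and command lines are assumptions for illustration only; the actual invocation is the one documented with the FlightPath sources.

    import subprocess

    # Hosts reserved for the central entities; names are placeholders.
    CENTRAL_HOSTS = {
        "TRACKER.PY":     "node0.emulab.example",
        "AUTHORITY.PY":   "node1.emulab.example",
        "BROADCASTER.PY": "node2.emulab.example",
    }

    def launch(host, module):
        # Start one component on a remote node over ssh; returns the process handle.
        return subprocess.Popen(["ssh", host, "python " + module])

    central = [launch(host, module) for module, host in CENTRAL_HOSTS.items()]

    # Each of the ~500 experiment nodes then runs its own client, e.g.:
    #   launch("nodeN.emulab.example", "ClientUI.py")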

Tracker, Authority and Broadcaster are central entities and can reside on the same host, although this is not advisable. In a realistic scenario, placing these three entities on the same host is questionable: Authority and Tracker may well live at the same point of the system, but one usually expects the Broadcaster to be a role assumed by any Client who wants to stream something to its peers. We should therefore keep the FlightPath infrastructure maintainer (the entity that runs the Tracker and the Authority) distinct from the content provider (the Broadcaster).

Experiments to be replicated

General setup

Stream characteristics: a 200 Kbps stream delivered to 517 peers (443 after churn). Each experiment is repeated three times, and the results are averaged.


Table 1: General setup

  Stream rate (Kbps)              200
  Round duration (sec)            2
  Epoch duration (rounds)         40
  Content provided per round      100 RS updates + 2 linear digests
  Stream update size (B)          1072
  Linear digest size (B)          1153
  Secure hashes                   MD5
  Digital signatures              512-bit RSA-FDH
  Peer's budget (1)               100
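For convenience, the parameters of Table 1 can be kept in a single configuration object shared by all experiment scripts; the key names below are ours, the values are those of Table 1.

    # General setup of Table 1, collected in one place for the experiment scripts.
    GENERAL_SETUP = {
        "stream_rate_kbps": 200,
        "round_duration_s": 2,
        "epoch_duration_rounds": 40,
        "rs_updates_per_round": 100,        # RS (presumably Reed-Solomon) stream updates
        "linear_digests_per_round": 2,
        "stream_update_size_bytes": 1072,
        "linear_digest_size_bytes": 1153,
        "secure_hash": "MD5",
        "digital_signature": "RSA-FDH, 512-bit keys",
        "peer_budget": 100,                 # uploads a client grants per round (footnote 1)
    }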

Experiment 1: Steady State Operation

  • Goal: Evaluate the system with nothing happening other than streaming, thus finding an upper bound on performance.
  • Objectives:
    • measure average jitter
    • measure the worst 3 jitters (quantify number of peers affected)
    • measure the average bandwidth used by peers (peak bandwidth, CDF, 95th and 99th percentiles); a small analysis sketch follows this list.
  • Total experiments: 3.
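The bandwidth objectives could be post-processed as in the sketch below, assuming the collection scripts yield one bandwidth sample (in Kbps) per peer per round; the data layout is an assumption.

    import numpy as np

    def bandwidth_stats(samples_kbps):
        # samples_kbps: flat list of per-peer, per-round bandwidth samples.
        samples = np.asarray(samples_kbps, dtype=float)
        sorted_samples = np.sort(samples)
        cdf = np.arange(1, len(sorted_samples) + 1) / len(sorted_samples)
        return {
            "mean": samples.mean(),
            "peak": samples.max(),
            "p95": np.percentile(samples, 95),
            "p99": np.percentile(samples, 99),
            "cdf": list(zip(sorted_samples.tolist(), cdf.tolist())),
        }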

Experiment 2: Joins

  • Description: Start the session with 50 peers. At round 40, additional peers attempt to join the system; at round 120, more peers join.
  • Goal: Evaluate the system under joins.
  • Objectives:
    • confirm high bandwidth consumption when other peers join at round 40
    • confirm that the original 50 peers show a drop in bandwidth utilization at round 120
    • verify that the peak bandwidth is 482.5 Kbps
    • verify that no jitter is experienced by the original 50 peers (2)
    • confirm the results in Fig. 10 of [2]: compute the time a peer takes to stream reliably, i.e. with no jitter for 3 consecutive rounds (a sketch of this computation follows the list)
    • verify that the more peers join, the more performance improves; the paper attributes this to the tub algorithm
  • Setup: Peers joining are 50, 100, 200, 400.
  • Total experiments: 4 setups x 3 experiments per setup = 12 experiments
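The "time to stream reliably" metric above can be computed per peer as follows, assuming a per-peer boolean jitter log indexed by round since join (the log format is an assumption):

    def rounds_to_reliable(jitter_by_round, window=3):
        # jitter_by_round: list of booleans, True if the peer saw jitter in that
        # round (index 0 = the peer's join round).  Returns the round, relative to
        # the join, at which the first run of `window` jitter-free rounds starts,
        # or None if it never happens.  Multiply by the round duration (2 s) to
        # obtain the time to stream reliably.
        clean = 0
        for i, jittered in enumerate(jitter_by_round):
            clean = 0 if jittered else clean + 1
            if clean == window:
                return i - window + 1
        return None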

Experiment 3: Departures

  • Setup: make a random 70% or 75% of the peers depart, without completing trades or notifying the tracker (a sketch of the departure trigger follows this list).
  • Goal: Show that there exists a threshold between 70% and 75% beyond which FlightPath cannot tolerate the departures.
  • Objective: Monitor the performance of the trouble predictor component (Fig. 12 of [2]).
  • Total experiments: 2 setups x 3 experiments per setup = 6 experiments
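The abrupt departures could be triggered as in the sketch below, under the assumption that each peer runs CLIENT.PY on its own node and can simply be killed over ssh; the kill mechanism is an assumption, not part of the FlightPath scripts.

    import random
    import subprocess

    def abrupt_departures(peer_hosts, fraction, seed=0):
        # Pick a random fraction of the peers and kill their client processes,
        # simulating peers that vanish without completing trades or notifying
        # the tracker.
        rng = random.Random(seed)
        departing = rng.sample(peer_hosts, int(len(peer_hosts) * fraction))
        for host in departing:
            subprocess.run(["ssh", host, "pkill -9 -f CLIENT.PY"], check=False)
        return departing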

Experiment 4: Churning

  • Goal: Show that peers that stay 640 seconds in the system experience little jitter, even at 37% churn per minute.
  • Objectives:
    • Compute the probability of experiencing jitter; show that it is higher during the first 2 minutes after joining and nearly zero afterwards (a sketch of this computation follows the list).
    • Verify the conclusions of Fig. 12 of [2] (increased join delay as the visible effect of churn).
    • Find the amount of churn that makes the startup time unacceptable (3).
  • Suggestion from the paper: implement a bootstrapping service to shorten startup time in the presence of churn.
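The jitter-probability objective above (probability as a function of the time spent in the system) could be computed as follows, assuming per-peer boolean jitter logs indexed by round since join; the log format is an assumption.

    from collections import defaultdict

    ROUND_DURATION_S = 2  # from Table 1

    def jitter_probability_by_age(peer_logs, bucket_s=30):
        # peer_logs: dict peer_id -> list of booleans (jitter per round since join).
        # Returns {bucket start in seconds since join: empirical jitter probability}.
        jittered = defaultdict(int)
        total = defaultdict(int)
        for rounds in peer_logs.values():
            for i, j in enumerate(rounds):
                bucket = (i * ROUND_DURATION_S) // bucket_s * bucket_s
                total[bucket] += 1
                jittered[bucket] += int(j)
        return {b: jittered[b] / total[b] for b in sorted(total)}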

Experiment 5: Malicious attack

  • Setup: Malicious peers behave normally for the first 100 rounds. After round 100, they respond positively to any trade proposal and initiate as many trades as they can; the objective of the malicious nodes is to monopolize the trades in the system. Percentages of malicious peers: 12%, 14% and 16%.
  • Notes on the setup: malicious peers participate only in history exchanges and not in the rest of the trade. Byzantine peers claim to have the updates from the last 3 rounds and to miss all the others.
  • Goal: Find strategies that do more harm than those examined in the paper. Can malicious peers conceive a multi-round, collusive strategy?
  • Objective: Compute the average bandwidth of non-Byzantine peers (a sketch of this computation follows the list).
  • Total experiments: 3 setups x 3 experiments per setup = 9 experiments
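The objective above could be computed as follows, assuming the collected logs give per-round bandwidth samples per peer and the experiment configuration records which peer ids were malicious (both assumptions):

    def honest_average_bandwidth(bandwidth_kbps, byzantine_ids):
        # bandwidth_kbps: dict peer_id -> list of per-round bandwidth samples (Kbps).
        # byzantine_ids: set of peer ids configured as malicious in this run.
        honest = {p: s for p, s in bandwidth_kbps.items() if p not in byzantine_ids}
        samples = [x for s in honest.values() for x in s]
        return sum(samples) / len(samples) if samples else 0.0

The per-run averages are then averaged again over the 3 repetitions of each setup, as stated in the general setup.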

Bibliography

[1] Bram Cohen. Incentives build robustness in BitTorrent. In Workshop on Economics of Peer-to-Peer Systems, 2003.

[2] Harry C. Li, Allen Clement, Mirco Marchetti, Manos Kapritsos, Luke Robison, Lorenzo Alvisi, and Mike Dahlin. FlightPath: Obedience vs. choice in cooperative services. In OSDI '08: Proceedings of the 8th USENIX Symposium on Operating Systems Design and Implementation, 2008.

[3] Harry C. Li, Allen Clement, Edmund L. Wong, Jeff Napper, Indrajit Roy, Lorenzo Alvisi, and Michael Dahlin. BAR gossip. In OSDI '06: Proceedings of the 7th USENIX Symposium on Operating Systems Design and Implementation, Berkeley, CA, USA, 2006. USENIX Association.



Footnotes

(1) Peer's budget: how much a client decides to upload per round.
(2) Original 50 peers: what about the other peers?
(3) When is a startup time unacceptable?