PETS 2006
New York, USA - 18 June 2006

In Conjunction with IEEE Conference on Computer Vision and Pattern Recognition 2006

The electronic proceedings for this workshop can be downloaded here (PDF, 8Mb).

PETS 2006 Benchmark Data

Overview

The data-sets are multi-sensor sequences containing left-luggage scenarios with increasing scene complexity. The results of processing the datasets are to be submitted in XML format (details below).

Please e-mail datasets@pets2006.net if you require assistance obtaining these data-sets for the workshop.

Aims and Objectives

The aim of this workshop is to use existing systems for the detection of left (i.e. abandoned) luggage in a real-world environment. The scenarios are filmed from multiple cameras and involve multiple actors.

Definition of Left-Luggage

Left-luggage in the context of PETS 2006 is defined as items of luggage that have been abandoned by their owner. In the published scenarios each item of luggage has one owner and each person owns at most one item of luggage.

To implement a system based on this definition there are three additional components that need to be defined:

A. What items are classed as luggage? Luggage is defined to include all types of baggage that can be carried by hand, e.g. trunks, bags, rucksacks, backpacks, parcels, and suitcases.

Five common types of luggage are considered in this study:

  1. Briefcase
  2. Suitcase
  3. 25 litre rucksack
  4. 70 litre backpack
  5. Ski gear carrier

B. What constitutes attended and unattended luggage? In this study three rules are used to determine whether luggage is attended to by a person (or not):

  1. An item of luggage is owned and attended to by the person who enters the scene with it, up until the point at which the luggage is no longer in physical contact with that person (contextual rule).
  2. At this point the luggage is attended to by the owner ONLY when they are within a distance of a metres of the luggage (spatial rule). All distances are measured between object centroids on the ground plane (i.e. z=0). The image below shows a person within a (=2) metres of their luggage. In this situation no alarm should be raised by the system.
  3. A luggage item is unattended when the owner is further than b metres (where b >= a *) from the luggage. The image below shows a person crossing the line at b (=3) metres. In this situation the system should use the spatio-temporal rule in item C, below, to detect whether this item of luggage has been abandoned (an alarm event).
* If b > a, the distance between radii a and b is determined to be a warning zone where the luggage is neither attended to nor left unattended. This zone is defined to separate the detection points of the two states, reducing uncertainties introduced due to calibration / detection errors in the sensor system etc. The image below shows a person crossing the line at a (=2) metres, but within the radius b (=3) metres. In this scenario the system can be set up to trigger a warning event, using a rule similar to the spatio-temporal rule in item C, below. Both warning and alarm events will be given in the ground truth.
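The three-state logic above (attended within a, warning between a and b, unattended beyond b) can be sketched as a simple classifier over ground-plane centroids. This is an illustrative sketch, not the reference implementation used by the workshop; the function name and interface are assumptions.

```python
import math

# Sketch of the PETS 2006 spatial rules: classify luggage attendance from
# owner and luggage centroids measured on the ground plane (z = 0), in metres.
# Default thresholds match the ground-truth parameters a = 2 m, b = 3 m.

def classify_attendance(owner_xy, luggage_xy, a=2.0, b=3.0):
    """Return 'attended', 'warning', or 'unattended'."""
    dist = math.hypot(owner_xy[0] - luggage_xy[0],
                      owner_xy[1] - luggage_xy[1])
    if dist <= a:
        return "attended"      # within radius a: no event
    if dist <= b:
        return "warning"       # between a and b: warning zone (b > a)
    return "unattended"        # beyond b: spatio-temporal rule (item C) applies
```

With b > a, the band between the two radii acts as the warning zone described above, so small calibration or tracking errors do not flip the state directly between attended and unattended.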


C. What constitutes abandonment of luggage by the owner? The abandonment of an item of luggage is defined spatially and temporally. Abandonment (causing an alarm) is defined as:

  1. An item of luggage that has been left unattended by the owner for a period of t consecutive seconds, during which time the owner has not re-attended to the luggage, nor has the luggage been attended to by a second party (instigated by physical contact, in which case a theft / tampering event may be raised). The image below shows an item of luggage left unattended for t (=30) seconds, at which point the alarm event is triggered.
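The spatio-temporal rule amounts to a consecutive-frame counter that resets whenever the luggage is re-attended. A minimal sketch, assuming a per-frame update interface (the class and method names are illustrative, not part of the PETS submission format):

```python
# Sketch of the spatio-temporal rule (item C): raise an alarm once a luggage
# item has been continuously unattended for t seconds. Assumes the per-frame
# attendance state is supplied by an upstream tracker (e.g. the spatial-rule
# classifier); any contact by the owner or a second party resets the timer.

class AbandonmentDetector:
    def __init__(self, t=30.0, fps=25.0):
        self.frames_needed = int(t * fps)  # t consecutive seconds at 25 fps
        self.unattended_frames = 0
        self.alarmed = False

    def update(self, state):
        """state: 'attended', 'warning', or 'unattended' for this frame.
        Returns True on the single frame at which the alarm is triggered."""
        if state == "unattended":
            self.unattended_frames += 1
        else:
            # Luggage was (re-)attended to this frame: reset the counter.
            self.unattended_frames = 0
        if not self.alarmed and self.unattended_frames >= self.frames_needed:
            self.alarmed = True
            return True
        return False
```

At the PAL frame rate of 25 fps used for these sequences, t = 30 seconds corresponds to 750 consecutive unattended frames.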

Calibration Data

The geometric patterns on the floor of the station were used for calibration purposes. The following point locations were used as the calibration pattern:

All spatial measurements are in metres. The provided calibration parameters were obtained using the freely available Tsai Camera Calibration Software by Reg Willson. For instructions on how to use Reg Willson's software, visit Chris Needham's helpful page. More information on the Tsai camera model is available on CVonline.

An example of the provided calibration parameter XML file is given here. This XML file contains Tsai camera parameters obtained from Reg Willson's software (output file), using this reference image and this set of points. C++ code (available here) is provided to allow you to load and use the calibration parameters in your program (courtesy of project ETISEO).
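Since the spatial rule measures distances between centroids on the ground plane (z = 0), a typical use of the calibration is to back-project an image centroid onto that plane. The sketch below uses an undistorted pinhole model for clarity; the full Tsai model used for the provided calibration additionally includes radial lens distortion. The matrices K, R, t here are illustrative placeholders, not the PETS 2006 calibration values.

```python
import numpy as np

# Sketch: intersect the camera ray through pixel (u, v) with the world
# ground plane z = 0. Camera model: x ~ K (R X + t), with X in world metres.

def image_to_ground(u, v, K, R, t):
    """Back-project pixel (u, v) onto the plane z = 0 (world coordinates)."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in camera frame
    ray_world = R.T @ ray_cam                           # rotate ray to world frame
    cam_center = -R.T @ t                               # camera centre in world frame
    s = -cam_center[2] / ray_world[2]                   # scale so that z = 0
    return cam_center + s * ray_world
```

Given two such ground-plane points (owner and luggage centroids), their Euclidean distance in metres feeds directly into the a / b radius tests above.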

The DV cameras used to film all data-sets are:

Camera 1: Canon MV-1 1xCCD w/progressive scan

Camera 2: Sony DCR-PC1000E 3xCMOS

Camera 3: Canon MV-1 1xCCD w/progressive scan

Camera 4: Sony DCR-PC1000E 3xCMOS

All sequences are PAL standard resolution (768 x 576 pixels, 25 frames per second) and are compressed as JPEG image sequences (approx. 90% quality).

XML schema

All scenarios come with two XML files. The first of these contains camera calibration parameters; these are given in the sub-directory 'calibration'. See the previous section (Calibration Data) for information on this XML file format.

The second XML file (given in the sub-directory 'xml') contains both configuration and ground-truth information. This XML format is also used for submission of results.

The XML schema for the configuration / ground-truth / submission is given here.

The XML files provided contain scenario details, parameters and ground-truth information (e.g. the radii distances, luggage location, warning / alarm triggers etc). A fully commented example of the provided XML is given here.

For submitted XML not all details need to be provided. An example of the (minimum) data to be submitted is given here.

Dataset S1 (Take 1-C)

Scenario: left luggage

Elements: 1 person, 1 luggage item

Ground truth parameters: a = 2 metres, b = 3 metres, t = 30 seconds

Subjective Difficulty:

This scenario contains a single person with a rucksack who loiters before leaving the item of luggage unattended.

Sample Images

The following are representative images captured from cameras 1-4.

Download

The entire scenario, including the calibration and ground truth data: S1-T1-C.zip (1.10Gb)

Dataset S2 (Take 3-C)

Scenario: left luggage

Elements: 2 people, 1 luggage item

Ground truth parameters: a = 2 metres, b = 3 metres, t = 30 seconds

Subjective Difficulty:

This scenario contains two people who enter the scene from opposite directions. One person places a suitcase on the ground, before both people leave together (without the suitcase).

Sample Images

The following are representative images captured from cameras 1-4.

Download

The entire scenario, including the calibration and ground truth data: S2-T3-C.zip (0.93Gb)

Dataset S3 (Take 7-A)

Scenario: left luggage

Elements: 1 person, 1 luggage item

Ground truth parameters: a = 2 metres, b = 3 metres, t = 30 seconds

Subjective Difficulty:

This scenario contains a person waiting for a train; the person temporarily places their briefcase on the ground before picking it up again and moving to a nearby shop.

Sample Images

The following are representative images captured from cameras 1-4.

Download

The entire scenario, including the calibration and ground truth data: S3-T7-A.zip (0.88Gb)

Dataset S4 (Take 5-A)

Scenario: left luggage

Elements: 2 people, 1 luggage item

Ground truth parameters: a = 2 metres, b = 3 metres, t = 30 seconds

Subjective Difficulty:

This scenario contains a person placing a suitcase on the ground. Following this, a second person arrives and talks with the first person. The first person leaves the scene without their luggage. Distracted by a newspaper, the second person does not notice that the first person's luggage is left unattended.

Sample Images

The following are representative images captured from cameras 1-4.

Download

The entire scenario, including the calibration and ground truth data: S4-T5-A.zip (1.04Gb)

Dataset S5 (Take 1-G)

Scenario: left luggage

Elements: 1 person, 1 luggage item

Ground truth parameters: a = 2 metres, b = 3 metres, t = 30 seconds

Subjective Difficulty:

This scenario contains a single person with ski equipment who loiters before abandoning the item of luggage.

Sample Images

The following are representative images captured from cameras 1-4.

Download

The entire scenario, including the calibration and ground truth data: S5-T1-G.zip (1.25Gb)

Dataset S6 (Take 3-H)

Scenario: left luggage

Elements: 2 people, 1 luggage item

Ground truth parameters: a = 2 metres, b = 3 metres, t = 30 seconds

Subjective Difficulty:

This scenario contains two people who enter the scene together. One person places a rucksack on the ground, before both people leave together (without the rucksack).

Sample Images

The following are representative images captured from cameras 1-4.

Download

The entire scenario, including the calibration and ground truth data: S6-T3-H.zip (0.98Gb)

Dataset S7 (Take 6-B)

Scenario: left luggage

Elements: 6 people, 1 luggage item

Ground truth parameters: a = 2 metres, b = 3 metres, t = 30 seconds

Subjective Difficulty:

This scenario contains a single person with a suitcase who loiters before leaving the item of luggage unattended. During this event five other people move in close proximity to the item of luggage.

Sample Images

The following are representative images captured from cameras 1-4.

Download

The entire scenario, including the calibration and ground truth data: S7-T6-B.zip (1.22Gb)

Additional Information

The scenarios can also be downloaded from ftp://ftp.cs.rdg.ac.uk/pub/PETS2006/ (use anonymous login). Warning: ftp://ftp.pets.rdg.ac.uk is not listing files correctly on some ftp clients. If you experience problems you can connect to the http server at http://ftp.cs.rdg.ac.uk/PETS2006/.

Legal note: The UK Information Commissioner has agreed that the PETS 2006 data-sets described here may be made publicly available for the purposes of academic research. The video sequences are copyright of the ISCAPS consortium, and permission is hereby granted for free download for the purposes of the PETS 2006 workshop.