BARB TVPR Project


 

TV PLAYER REPORT


General Info for this Page

Any critical changes to this page will be communicated directly to Project Owners via BARB Dovetail Basecamp.

Latest updates to this page:

  • Released new JavaScript library 2.5.10 for Browser, TVML/TVJS (Apple TV), TAL (Smart-TV), WebMAF (Playstation), and Electron
  • Released new iOS library 1.14
  • Released new tvOS library 1.14
  • Released new Android library 1.9.0
  • Released new Roku library 1.4.4
  • Released new Xbox library 1.2.0



PROJECT OVERVIEW

Project Summary

The TV Player Report represents the first stage of BARB Project Dovetail.

As audiences continue to fragment across both content and delivery mechanisms, the stresses of using a sample survey alone to create and report audience estimates at a granular level increase. Project Dovetail sets out an ultimate aim of augmenting a panel approach with new, granular datasets. In short, raw absolute census data provides the totality of what has been consumed, while the panel provides the context of who has viewed. The resulting merged dataset will allow the full dissemination and analysis of viewing behaviour across viewing platforms: who is watching what, and what incremental and de-duplicated coverage is obtained.

The first stage of this unified vision is to standardise and centrally collect the site-centric IP data from all participating broadcasters, enabling a regular account of the size and delivery of content through a variety of devices. This means that all broadcaster data will be directly comparable, without differing definitions or collection methods hindering direct analysis.

To this end, BARB has chosen Kantar Media as its supplier of this data. Broadcasters are asked to implement the Kantar Media plugin library sensor into their media players. In addition to the integration of the Kantar Media measurement, BARB requires a consistent approach by broadcasters to the cataloguing and tracking of content. This parallel Content ID project is discussed at BARB TVPR Project#Content ID.


Participating Broadcasters

The following companies are currently part of the BARB TV Player Report project:

  • BBC
  • Channel 4
  • Channel 5
  • ITV
  • Sky
  • STV
  • S4C
  • UKTV

Current Project Status

For more information on the current project status and schedule, please contact Kantar UK and/or BARB.

Project Teams and Resources

The Kantar project team is led by Kantar Audiences in London.  Our technology specialists from Kantar Germany will assist with your technical queries as you begin and progress each player/platform implementation.

Before you start your implementation, we will discuss with you and your teams which of our libraries you should use, and we will be available to answer queries arising from your review of our documentation and testing framework.

We will be directly involved in the sign-off process for your implementation, undertaking a review of the data we receive from your implementation once it is on a staging environment.  You can find out more about how we verify your stream implementation together at BARB TVPR Project#FROM IMPLEMENTATION TO PUBLICATION - PROCESS.

Who Do I Contact and When?

General Queries

Please contact us at the following address: uk-tvpr-ops@kantarmedia.com

More about us

Kantar Online Data and Development Unit is a technology-oriented company located in Saarlouis, Germany. It was founded as a spin-off of the renowned German Research Center for Artificial Intelligence (DFKI), Saarbrücken.

Realizing as early as 1995 that the ever-growing global importance of web-based information dissemination and product marketing would require more and more sophisticated solutions for measuring online audiences, spring began to develop an integrated technology for measuring usage data, estimation algorithms, projection methodologies and reporting tools, which soon began to draw the attention of innovative corporations and national Joint Industry Committees.

Partnering with JICs, spring’s technology for collecting and measuring usage data is now well tried and tested, and has helped to define the standard for audience reach measurement of advertising media in Norway, Finland, the Baltic States, Germany, Switzerland, Romania and many other countries, as well as for a number of broadcasters, publishers and telecommunication companies.

In 2011, the company became a wholly-owned subsidiary of Kantar, a global industry leader in audience measurement for TV, radio and the web.

Today, we are one of the leading pan-European companies in site-centric and user-centric Internet measurement, online research and analysis.




LIBRARY CODE BASE AND ROADMAP

Our Technology Roadmap

This forward roadmap is constructed by our experts based on BARB-driven industry platform coverage priorities. Each project stakeholder is expected to engage in the process to agree ongoing solution design and prioritisation. We anticipate being able to build upon our existing technologies to meet many of the new target platforms.

The roadmap contains the following key phases:

  1. Discovery stage
    1. Internal project discovery workshop
    2. Identify the player “library product”, and potential alternatives (if any)
    3. Broadcaster workshop (conference call)
    4. Agree scope and boundaries of solution
  2. Initial library development
  3. POC with selected broadcasters
  4. Final standard library development (incorporating changes where necessary)
  5. Library official release
  6. Individual stakeholder implementations scheduled

Each project stakeholder will have the opportunity to critique and validate the roadmap, and in-house plans across the project lifecycle will be shared with Kantar. A regular three-monthly review process will be used to update project platform priorities between BARB, Kantar, and each broadcaster stakeholder.

Library Downloads

Which library do I need?

You can find our currently available libraries below.  We will work with you to identify which library you should use as part of your initial implementation planning discussions.

Streaming libraries and JavaScript downloads

Type: Desktop Player

  • Streaming JavaScript – for the web environment or other JavaScript-capable environments (not natively supported)
    Download: kantarmedia-streaming-js-barb-2.5.10.zip
  • Library for Flash/ActionScript 3
    Download: spring-appstreaming-as3-barb-1.4.0.zip
  • Library for Flash/OSMF2
    Download: spring-streaming-osmfplugin-barb-1.0.1.zip
  • Plugin for Brightcove
    Download: spring-streaming-brightcove-barb-1.2.0.zip

Type: Mobile Player

  • Library for iOS
    Download: kantarmedia-streaming-iOS-barb-1.14.zip
  • Library for Android – supports Android versions 4.4 and higher
    Download: kantarmedia-streaming-android-barb-1.9.0.zip

Type: Big Screen Player

  • Library for tvOS
    Download: kantarmedia-streaming-tvOS-barb-1.14.zip

Type: Game Console Player

  • Library for Xbox – supports Xbox One
    Download: kantarmedia-streaming-xbox-barb-1.2.0.zip

Type: Set-top Box

  • Library for Roku
    Download: kantarmedia-streaming-roku-barb-1.4.4.zip


Help with Adapters

Why do I need an Adapter?

When a library specifically suited to your player is not available, you can still use our measurement by implementing an "adapter". This is a small piece of code that connects the Kantar Media library to your player. It is typically written by you, and Kantar Media can provide consultancy to help with this.
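
As a minimal sketch of the idea only (not the actual Kantar adapter interface – the real base class and method names are defined in the documentation shipped with each library), an adapter wrapping an Android VideoView could expose the two values the library needs like this:

import android.widget.VideoView;

// Illustrative adapter: reads the playhead position and duration from the player.
// The class name and the contract it would have to fulfil are assumptions.
public class MyPlayerAdapter {

    private final VideoView player;

    public MyPlayerAdapter(VideoView player) {
        this.player = player;
    }

    // Current playhead position in seconds – the value the library polls continuously.
    public int getPosition() {
        return player.getCurrentPosition() / 1000; // VideoView reports milliseconds
    }

    // Total duration in seconds; "0" for live simulcast streams.
    public int getDuration() {
        return player.getDuration() / 1000;
    }
}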

Documentation about how to integrate the libraries by using an adapter is available in the general documentation Implementation of Stream Measurement for BARB TV Player Report.

Some examples of players and platforms that require an adapter are given below.

Microsoft Silverlight

Silverlight Implementation

Brightcove

For Flash: /wiki/spaces/KASRLCS/pages/159726836

A Brightcove library is not yet available natively for Android or iOS. Measurement can currently be achieved with the available Android and iOS Spring libraries plus a custom adapter connecting the library to the Brightcove API.

YouView

YouView runs a proprietary version of Flash and does not use "flash.net.NetStream" but "MediaRouter" instead. An adapter is needed in this case to extend flash.net.NetStream and read information from MediaRouter, such as the position and duration. It is similar to what is demonstrated in the NetStream adapter for the flash.media.Sound object example in the documentation.

The library does not rely on flash.external.ExternalInterface or cookies being available. The ExternalInterface is normally used to pass an "unload" call from the browser back to the library when the browser is closed. However, this "unload()" method can also be triggered from inside the Flash application itself.


UNDERSTANDING HOW TO IMPLEMENT THE SPRING LIBRARIES

Our Measurement Ethos

The Kantar Media technology was conceived and developed to be as platform-agnostic as possible.
Every video player can, at a minimum, report a playhead position and the duration of the video (this is what lets the user see how long the video is and keep track of their progress). These two variables form the basis of our measurement system, and they are common to every environment.
We capture a combination of environment-specific identifiers and additional metadata to attribute each streaming heartbeat to a session and a device.

Documentation

Our comprehensive Implementation of Stream Measurement document describes how the Kantar libraries work. It includes sample implementations and guidelines for writing a custom adapter for our libraries.

This is one of the most important documents because it describes in a general way how the Kantar measurement works. Reading and understanding it is crucial to taking full ownership of the implementation process.

In order to measure any streaming content, a sensor is necessary on the client side, which measures which sequences of the stream were played by the user. These sequences can be defined by time intervals in seconds; therefore, the player or playing instance must be able to provide a method for delivering the current position in the stream in seconds.

Regular reading of the current position allows the tracking of all actions on a stream. Winding (seek) operations can be identified by unexpected jumps in the position readings. Stop or pause operations are identified by the fact that the current position does not change.

User actions and operations like stop or pause are not measured directly (they are not based on player events) but are instead derived from measuring the current position in the stream.
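
To illustrate this principle only (this is not the library's internal code), the sketch below shows how pause and winding states could be derived from nothing more than periodic position samples:

// Conceptual sketch: derive playstates from position polling alone.
public class PlaystateInference {

    private int lastPositionSeconds = -1;

    // Called periodically (for this sketch, about once per second) with the
    // player's current position in seconds.
    public void onPositionSample(int positionSeconds) {
        if (lastPositionSeconds < 0) {
            System.out.println("first sample -> playback started at " + positionSeconds + "s");
        } else if (positionSeconds == lastPositionSeconds) {
            System.out.println("position unchanged -> treated as pause/stop");
        } else if (Math.abs(positionSeconds - lastPositionSeconds) > 1) {
            System.out.println("unexpected jump -> treated as winding (FFWD/RWD)");
        }
        lastPositionSeconds = positionSeconds;
    }
}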

Specific app library documentation is included in the library deliverables themselves.

Step-by-Step Guide

The step-by-step guide gives a graphical and concise overview of HOW, WHEN, and WHY to measure. We recommend having a look at it: here.

General metadata tagging instructions

It is essential that you ensure the standardised functions and values are passed in your library implementation.

How to map:


Each field below lists the metric/dimension, its description, the variable namespace in the library, whether it is required for library functionality, its source, and notes.

  1. sitename – unique Kantar system name per broadcaster, assigned by Kantar
     Variable: sitename | Mandatory | Source: assigned by Kantar
     Examples: "bbcios", "itvdotcom", "c4android", etc.
     NOTE: You will be assigned "test" sitenames for testing purposes!

  2. player – broadcaster website or app player being used
     Variable: pl | Mandatory | Source: free choice
     Examples: "skygo", "demand5", "4oD", etc.

  3. player version – version of the media player or app being used
     Variable: plv | Mandatory | Source: free choice
     Examples: "1.4.1.1", "1.0.5"

  4. window dimension width – width of the stream window, embedded or pop-out
     Variable: sx | Optional | Source: pass value on to library
     Recommended, although it can be blank where unavailable.

  5. window dimension height – height of the stream window, embedded or pop-out
     Variable: sy | Optional | Source: pass value on to library
     Recommended, although it can be blank where unavailable.

  6. content id – unique BARB system program ID
     Variable: cq | Mandatory | Source: following BARB convention
     Linked to the separate content ID database and master file:
       • where stream (see field 7) is "od" (on-demand) – the field should always be populated
       • where stream (see field 7) is "live" (live simulcast) – the field should always be populated if possible
       • where stream (see field 7) is "dwn" (download) – the field should always be populated
       • where stream (see field 7) is "ad" (advertisement) – the field should always be populated

  7. stream – description of the content stream (activity type / livestream channel id)
     Variable: stream | Mandatory | Source: following BARB convention
     Descriptors of the type and delivery of content, not an identifier of the content itself:
       • to identify an on-demand stream use "od"
       • to identify a simulcast stream, supply an indicator that the stream is live and identify the channel using the convention "live/channelname"
       • to identify playback of a download use "dwn"
       • BARB expects to use a different methodology to identify advertising; it is not necessary to tag ads with this library, only content

  8. content duration – duration of the video being played, reported in seconds
     Variable: dur | Mandatory | Source: pass value on to library
       • in the case of on-demand (od), the correct length of the video in seconds
       • in the case of download (dwn), the correct length of the video in seconds
       • in the case of live broadcast (live), set to "0" when the stream is live simulcast

  9. Physical Content Source – origination source of Sky content
     Variable: ct | Optional | Source: pass value on to library
       • This variable will only be populated by Sky.
       • Expected values are: "OTT", "STB" and "APP".

  10. Registration ID – broadcaster application user registration ID
      Variable: login | Optional | Source: pass value on to library
        • This is a unique identifier assigned by the broadcaster app to identify the user.
        • This will have different formats for different broadcaster apps.
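
As an illustration of the mapping above, the sketch below collects the per-stream attributes for an on-demand asset into a plain map. How the map is handed to the library differs per platform, so treat the surrounding code as a sketch only; sitename, pl and plv are normally configured once when the library instance is created, and the Content ID shown is the example format used elsewhere on this page.

import java.util.HashMap;
import java.util.Map;

public class TaggingExample {

    // Per-stream attributes for an on-demand asset, using the variable names from the list above.
    static Map<String, Object> onDemandAttributes() {
        Map<String, Object> atts = new HashMap<>();
        atts.put("stream", "od");       // activity type: on-demand
        atts.put("cq", "C4:12345/2");   // BARB Content ID (illustrative value)
        atts.put("dur", 1800);          // content duration in seconds ("0" for live simulcast)
        atts.put("sx", 1280);           // optional: stream window width
        atts.put("sy", 720);            // optional: stream window height
        return atts;
    }
}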

Before you start

Once you have read the documentation, and before you begin your implementation, please contact us so we may review together the behaviour of your player and therefore the scope of the implementation.

LIVE and TESTING environments are separated by the sitename. You will be given sitenames for both these purposes.

Live Stream Measurement – Instructions

The instructions for measuring your Live TV streams will vary depending on the features of your Player.
Instructions on how to map the “cq”, “stream”, and “dur” variables for Live TV streams are detailed in the General metadata tagging instructions section. Because your implementation also requires that you provide access to other functionality, this section sets out the Live TV stream requirements in full.
As part of your implementation, to ensure that all your Live TV streams are included in the BARB TV Player Report production, you must inform BARB and our UK Project Manager of the following information:

  1. Complete up-to-date list of your Player channel identifiers (or Service Key identifier) including regional programming identifiers
  2. The official “known as” name of each channel identifier
  3. Date from which the Live TV channel becomes available online
  4. Player platforms the channel is available on
  5. Identify whether the channel carries commercials, and how they are handled
  6. The agreed offset between broadcast time and streaming time for your channel.  A table of all existing offsets can be seen here


METHOD 1 – “BARB OFFICIAL CHANNEL METHOD”

This is the recommended method for implementing livestreams. This method assumes that for live simulcast streams, your Player will not provide any indicator of where a stream crosses a programme boundary. It also assumes your Player will not provide any unique content identifier.

You will call trackMethod whenever a Live TV channel stream begins; a new View will be created. Each time the Live TV stream is paused/resumed, no action is required. When the Live TV stream is stopped/started you will need to stop the library and call trackMethod again. By doing this you will correctly create a new View.

INSTRUCTIONS:
  1. Map the “dur” variable for Live TV streams as detailed in the General metadata tagging instructions section.
  2. Populate the channel name (or Service Key identifier) into the stream variable as instructed in the General metadata tagging instructions section.
  3. Set the Live TV channel identifier within the stream variable, e.g.
    1. live/channelone
    2. live/channeltwo
    3. live/channeltwosouth
    4. live/channelthree
    5. live/channelfour
    6. live/F230
    7. live/X234
  4. Supply the playhead position (“pst” variable) using the linear broadcast date and time as a timestamp value in seconds. You will need to account for the fact that the delay between linear broadcast and online playout may vary between platforms. Your Player should retain control over how any delay versus linear broadcast is handled.
  5. Handle FFWD/RWD (“tricks”) by updating the broadcast date and time in the pst variable, i.e. for the scenarios commonly understood as “Scrubbing”, “Live Rewind” and “Live Restart”.
Example tracking call:

http://tvplayerplugintest.2cnt.net/j0=,,,,v=A%201.1.0+app=mobiletestapp+pl=mobiletestapp+did=868e10589389fd35+aid=bb97262e90e4ef1f+sy=768+plv=1.0.0.0+sx=1196;+,vt=16+uid=3a8w7ib+stream=live/channelname+pst=,,0+0+na8vcw;+,1407932109+1407932126+na8vcw;;+sy=360+dur=0+sx=640;;;;?lt=hysmffp1
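
Under the Method 1 assumptions above, an adapter for a live channel could report the playhead as the linear broadcast date and time in epoch seconds, corrected by the agreed offset. This is a hedged sketch only: the class and method names are illustrative, and how your player exposes the offset is your own implementation detail.

// Illustrative live-channel adapter for Method 1.
public class LiveChannelAdapter {

    private final long broadcastOffsetSeconds; // agreed offset between broadcast and streaming time

    public LiveChannelAdapter(long broadcastOffsetSeconds) {
        this.broadcastOffsetSeconds = broadcastOffsetSeconds;
    }

    // Playhead ("pst") reported as the broadcast date/time, as a timestamp in seconds.
    public long getPosition() {
        return (System.currentTimeMillis() / 1000L) - broadcastOffsetSeconds;
    }

    // Live simulcast streams always report a duration of "0".
    public long getDuration() {
        return 0L;
    }
}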

METHOD 2 (DEPRECATED) – “CONTENT ASSET TRACKING METHOD”

This method must only be used if the Live TV channel stream can be restarted at each programme boundary, i.e. your Player must know the programme boundary and relevant EPG information, enabling you to create a new View for each programme. It assumes that your Player can handle the tracking of programmes delivered as part of a live stream in the same way as on-demand content.

INSTRUCTIONS:

You will call trackMethod whenever a new programme on your Live TV channel stream begins; a new View will be created every time. Pause/resume of a Live TV programme represents a continuation of the same stream View. Only when the stream is stopped/started or backgrounded/foregrounded will you reset the playhead position to “0” and create a new View.

  1. Map the “cq” variable using the unique BARB system program ID, as detailed above in the General metadata tagging instructions section,
  2. Map the “dur” variable for Live TV streams as detailed above in the General metadata tagging instructions section
  3. Set the Live TV channel identifier within the stream variable, e.g.
    1. live/channelone
    2. live/channeltwo
    3. live/channeltwosouth
    4. live/channelthree
    5. live/channelfour
    6. live/F230
    7. live/X234
  4. Supply the playhead position (“pst” variable) using an offset in seconds, in the same way as for on-demand streams. It is not necessary to expose a variable delay within your player or to pass it to the measurement system.
  5. Handle FFWD/RWD (“tricks”) by updating the offset, in the same way as for on-demand streams.
Example tracking call:

http://tvplayerplugintest.2cnt.net/j0=,,,,v=A%201.1.0+app=mobiletestapp+pl=mobiletestapp+did=868e10589389fd35+aid=bb97262e90e4ef1f+sy=768+plv=1.0.0.0+sx=1196;+,vt=16+uid=3a8w7ib+stream=live/channelname+cq=C4:12345/2+pst=,,0+0+na8vcw;+,1+16+na8vcw;;+sy=360+dur=0+sx=640;;;;?lt=hysmffp1

METHOD 3 – “CHANNEL-ONLY METHOD”

This method assumes that for live simulcast streams, your Player will not provide any indicator of where a stream crosses a programme boundary. It also assumes your Player will not provide any unique content identifier, and that the broadcast date and time cannot be exposed to the measurement tracking system.

INSTRUCTIONS:

You will call trackMethod whenever a Live TV channel stream begins; a new View will be created. Every time the Live TV stream is paused/resumed or stopped/started, the position in stream is reset to “0” and a new View is created.

  1. Map the “dur” variable for Live TV streams as detailed above in the General metadata tagging instructions section
  2. Populate the channel name (or Service Key identifier) into the stream variable as instructed in the General metadata tagging instructions section.
  3. Set the Live TV channel identifier within the stream variable, e.g.
    1. live/channelone
    2. live/channeltwo
    3. live/channeltwosouth
    4. live/channelthree
    5. live/channelfour
    6. live/F230
    7. live/X234
Example tracking call:

http://tvplayerplugintest.2cnt.net/j0=,,,,v=A%201.1.0+app=mobiletestapp+pl=mobiletestapp+did=868e10589389fd35+aid=bb97262e90e4ef1f+sy=768+plv=1.0.0.0+sx=1196;+,vt=16+uid=3a8w7ib+stream=live/channelname+pst=,,0+0+na8vcw;+,1+16+na8vcw;;+sy=360+dur=0+sx=640;;;;?lt=hysmffp1

New Kantar Media Library Code Releases

Each and every time our library sensor technology is updated, it is subject to a strict release checklist process as outlined below:

  1. Component Testing
    • We test the different components of the library with the help of a special test app (built by Kantar Media) that allows
      internal parameters of the library to be changed. With this app we deliberately overflow configured maximum values,
      in order to make sure that everything is handled correctly.
    • For iOS specifically, we search for potential memory leaks.
      Tests are conducted with the XCode and Instruments tools.
    • These tests are conducted for every new release of the library, and of course also for any new feature should
      there be one.
  2. Functional and System Testing
    Here we test whether the data from the libraries lead to correct results in reporting. All components of the system are
    exercised here.
    • Test 1 - Test Tool
      Test of Lib-Testapp with the help of our TestTool (internal and external)
      internal address: http://10.20.1.4:7779/doc/html5/mobile.html
      Currently the Test-Tool can be used to test the following:
      • Does the AUT (ApplicationUnderTest) send all requests?
        (STARTED, FOREGROUND, BACKGROUND, CLOSED (optional))

      • Heartbeat events can then be analyzed.
    • Test 2 - Unittests
      • Check the DID, AID, and AI in the logstream and in the Hadoop cluster.
        (Android ID, Apple Advertising ID, Device ID, ... 16 characters a-f0-9 (hex)).
        See also BARB TVPR Project#CUSTOMER DATA PRIVACY DOCUMENTATION.
      • Without connection
      • Via WiFi
        • With and without SIM card
      • Via 2/3/4/5G
      • With bad connection
        • WiFi at the edge of its range
      • Test if the cookie is stable (when applicable)
        • Does the cookie remain the same even after the app is restarted?
        • Does the cookie remain while the app is opened?
    • Test 3a - Integration Test in App Environment
      • Implement in different apps and test:
        • Panelapp
        • Lib-Testapp
        • Mobile-streaming-app
      • Scenario
        • start Lib-Testapp
          => Lib-Testapp shows DID (, AID), AI?
          => DID (, AID), AI in Logstream?
        • For Panelapp: log on as Testpanelist
          => same DID (, AID), AI in Logstream, and additionally the pid?

    • Test 3b - Integration Test in Web Environment
      • Start browser
        Log on to Panelwebsite as same Testpanelist
        => cookie and pid in Logstream?
         Also check mobile websites, e.g. heise.de (m-heisede.2cnt.net, surf to http://heise-online.mobi/)

      • Cookie in Logstream
        In the Hadoop cluster we have to be able to identify the DID (, AID), pid and cookie.

  3. Release Process and Documentation

    We will communicate the availability of new code releases in the following manner:

    1. Library downloads will be made publicly available on this wiki site, with the new version numbering process strictly applied.

    2. High-level communication about what has been changed, informing you whether a change is required or not on your side.
      This will include an explanation of why you should upgrade, and whether the change is critical or optional. Critical changes will also be communicated directly outside of this wiki site.

    3. Accompanying change log document inside the new library download and also mirrored on the project page here.

    4. Kantar QC report documenting that the new library has passed release tests.



FROM IMPLEMENTATION TO PUBLICATION - PROCESS

The process of verifying your stream implementation follows four main steps: unit testing, communications testing, go-live acceptance testing, and operational testing.

Unit Tests for Initial Implementation

By the time a broadcaster receives a library, the package has already been through unit tests conducted by Kantar (see above).

Communications Tests

In this phase the http-requests that are being sent from the Kantar library inside your player to the collection servers are tested and verified.

Desktop Player (“dotcom”) Streaming Measurement implementations:

For desktop player implementations, you will need to observe the log stream data and verify the content of the heartbeats. You do this by running a simple analysis of the http-requests sent from and to a browser whilst your webplayer is operating.

There are NO warnings or error messages produced using this method; it is a simple trace of the http requests sent to the Kantar measurement systems.

You can observe an example of a correct webplayer implementation using an http analyser such as the "httpfox" plugin in Firefox or "developer tools" in Chrome.

Step-by-step instructions for running an http-request Analyser:



1. On a standard laptop / desktop device – Open the browser.

2. Using Chrome: Press “CTRL+SHIFT+i”. This will bring up the information screen along the bottom of the browser window. (On other browsers you might need specific plugins, for example httpfox for Firefox.)

3. The info screen contains several tabs across the page, but the one that matters is “network”. Select “network”.

4. In the main browser window - Open the player being tested, i.e. your webplayer

5. As soon as you load the webpage, you will notice a stream of events occurring in the information screen along the bottom of the screen.

6. Click in the information screen to make sure it has focus, then click on the filter icon. This will allow you to enter a value in the search box.

7. Enter “2cnt.net” (this is the receiving server) and click the “filter” option. You will now see only the requests going to and coming from the TVPR project systems.

8. If you return focus to the player you can now test the various functions (pause, rewind, fast forward etc.) and watch the results in the http-request data scrolling along the information screen.

9. This data is NOT captured automatically so you MUST copy and paste ALL http-requests after the test has been completed; this data should be shared with us in order for the implementation to be signed off.


You can see an example of the heartbeats sent from the Kantar Media libraries here:
kantarmedia.atlassian.net/wiki/spaces/public/pages/159727225/BARB+TVPR+Test+Tool#BARBTVPRTestTool-Heartbeatssentfromthelibraries

We recommend that the verification be done using an http proxy, such as Charles Proxy or Fiddler.

Once verification is complete your implementation can move to the next QA stage.

Go-Live Acceptance Tests

Before live release, your desktop or mobile app player integration must pass tailored acceptance tests to ensure it adheres to the desired TV Player report project outcomes.

Every player works in subtly different ways and is often subject to customisation. Desired behaviours must be understood in the context of the functionality and user features specific to your player. The specific acceptance criteria will be determined as a result of discussions between developers and business analysts at both the Broadcaster and Kantar Media.

Kantar Media will support this process to assess desired behaviours, e.g. correct handling of stream interruptions. We will undertake acceptance testing of your desktop or mobile app player integration if you can make a staging/beta release available.
For mobile app streaming measurement implementations we recommend you share your build using the TestFlight platform:

https://testflightapp.com/

We will provide our device/account details. Other methods or products for providing access to a pre-release version of your implementation are of course also accepted.

The table below describes example test scenarios.


Scenario a – OD / DWN stream, uninterrupted by buffering or commercials
  Description: 30 minute stream is viewed completely in one unbroken session, no pausing or buffering and no commercial breaks.
  Outcome: This process should test that the library app is measuring the complete viewing stream; it will show both the start position and the final position in the log stream record. This will ensure the process is measuring the viewing until the end of the stream (NO false endings).
  Notes: i.e. the last reported position should be equal to the duration.

Scenario b – OD / DWN stream, interrupted only by commercials
  Description: 30 minute stream is viewed completely in one unbroken session, with only commercials interrupting the programme stream.
  Outcome: This process should test that the library app is measuring the complete viewing stream; it will show the start position, the breaks for the commercials and the final position in the log stream record. This will ensure the process is measuring the viewing until the end of the stream (NO false endings) and also continues measuring correctly after each commercial break (mid/pre rolls).
  Notes: i.e. the last reported position should be equal to the duration AND "uid" remains unchanged.

Scenario c – OD / DWN stream paused <30 minutes
  Description: 30 minute stream is viewed for 15 minutes. The viewer pauses the content for <30 minutes then resumes the stream from the same point (point of pause), watching to the end of the stream. Commercials viewed as normal.
  Outcome: This process should test that the library app is measuring the complete viewing stream; it will show both the start position and the final position in the log stream record. This will ensure the process is measuring the viewing until the end of the stream (NO false endings) and also resumes measuring at the correct position after the paused period (Minor).

Scenario d – OD / DWN stream paused >30 minutes
  Description: 30 minute stream is viewed for 15 minutes. The viewer pauses the content for >30 minutes then resumes the stream from the same point (point of pause), watching to the end of the stream. Commercials viewed as normal.
  Outcome: This process should test that the library app is measuring the complete viewing stream; it will show both the start position and the final position in the log stream record. This will ensure the process is measuring the viewing until the end of the stream (NO false endings) and also resumes measurement at the correct position after the paused period (Major).
  Notes: The 30 min period used for the test can/should be extended if the player has internal "sleep" functions enabled.

Scenario e – OD / DWN stream on mobile device, background/foreground <30 minutes
  Description: The user begins watching a stream on a mobile device. After 10 minutes (position in stream = 00:10:00) the user sends the app to the background and uses a different app. 5 mins later the user returns the player app to the foreground and continues viewing the same programme. The player app itself has remembered the position in stream and begins again from position 00:10:01. The user completes viewing of the programme stream uninterrupted until the end.
  Outcome: This process should test that the library app is measuring the complete viewing stream; it will show the start position, the breaks for the commercials and the final position in the log stream record. This will ensure the process is measuring the viewing until the end of the stream (NO false endings) and also continues measuring correctly after each commercial break (mid/pre rolls). It should also indicate if foreground/background activities are being measured correctly after a minor period of time.

Scenario f – OD / DWN stream on mobile device, background/foreground >30 minutes
  Description: The user begins watching a stream on a mobile device. After 10 minutes (position in stream = 00:10:00) the user sends the app to the background and uses a different app. 60 mins later the user returns the player app to the foreground and continues viewing the same programme. The player app itself has remembered the position in stream and begins again from position 00:10:01. The user completes viewing of the programme stream uninterrupted until the end.
  Outcome: This process should test that the library app is measuring the complete viewing stream; it will show the start position, the breaks for the commercials and the final position in the log stream record. This will ensure the process is measuring the viewing until the end of the stream (NO false endings) and also continues measuring correctly after each commercial break (mid/pre rolls). It should also indicate if foreground/background activities are being measured correctly after a major period of time.

Scenario g – OD / DWN stream on PC device, "Shutting the lid"
  Description: The user begins watching a stream on a PC device. After 10 minutes (position in stream = 00:10:00) the user closes the laptop lid (for a desktop you could simply press the power-off button). This should be similar to sending the app to the background on a mobile device. 5 mins later the user opens the laptop and resumes watching the same programme. The player app should/may remember the position in stream and begin again from position 00:10:01. The user completes viewing of the programme stream uninterrupted until the end.
  Outcome: This process should test that the library app is measuring the complete viewing stream; it will show the start position, the breaks for the commercials and the final position in the log stream record. This will ensure the process is measuring the viewing until the end of the stream (NO false endings) and also continues measuring correctly after each commercial break (mid/pre rolls). It should also indicate if "foreground/background" activities on a PC device are being measured correctly.
  Notes: This test may need refinement to cater for any Windows peculiarities that may be active on the laptop.

Scenario h – OD / DWN asset segment viewed multiple times ("the football goal")
  Description: The user begins viewing a 60 minute stream. After 5 minutes of viewing, the user FFWDs 45 minutes into the stream. The user then continues viewing the stream for 5 minutes of the programme (content minutes 45–49). The user then RWDs the content back to minute 45 and watches the same 5 minutes again. This behaviour is repeated 4 more times, leading to a total streaming of 35 minutes comprising 6 x 5 minute streaming of the same piece of content plus the original first 5 minutes, within a single asset.
  Outcome: This process should test that the library app is measuring the complete viewing stream; it will show the start position, the breaks for the commercials and the final position in the log stream record. This will ensure the process is measuring the viewing until the end of the stream (NO false endings) and also continues measuring correctly after each commercial break (mid/pre rolls). It should also indicate if multiple rewind activities are being measured correctly, covering the correct period of viewing.

Scenario i – OD / DWN commercial segment viewed multiple times
  Description: The user begins viewing a 30 minute stream. After 1 minute of viewing, the user FFWDs past the first commercial break. (The commercials should/may play out as normal.) The user then continues viewing the stream for 1 more minute. The user then RWDs the stream to a point immediately before the commercial break, then continues watching for 1 more minute. (The commercials should play out as normal.) This behaviour is repeated X more times, before stopping the stream.
  Outcome: This process should test that the library app is correctly measuring the asset viewing stream; it will show the start position, the breaks for the commercials and the final position in the log stream record. This will ensure the process is measuring correctly after each commercial break (mid/pre rolls). It should also indicate if multiple rewind activities are being measured correctly, covering the correct period of viewing.

Scenario j – OD / DWN cross-device
  Description: The user begins streaming a programme on a PC device. The stream is viewed for 15 minutes and then the content is paused (position in stream = 00:15:00). The user then moves to a tablet device and resumes the stream previously viewed on the PC, this time via the tablet player app. The tablet player should resume streaming of the same programme at the position of the pause on the PC device (position in stream = 00:15:01). The user completes viewing of the programme stream uninterrupted until the end.
  Outcome: This process should test that the library app is measuring the viewing stream correctly from both devices; it will show both start positions, the breaks for the commercials and the final position in the log stream record for each device used. This will ensure the process is measuring correctly after each commercial break (mid/pre rolls); it should also highlight whether the process is correctly measuring the device type change and the starting position of the resume activity.

Scenario k – OD asset finish/auto-restart feature
  Description: When an on-demand stream concludes, the player automatically returns to the beginning of the stream and restarts the stream.
  Outcome: This process should test what happens when a player automatically restarts a programme asset. It is important to make sure that the track method is correctly handled when the programme finishes (unload) and that tracking begins again when the asset auto-restarts (call trackMethod). A new View should be created. Where the auto-restarted content begins again and the stream progresses beyond the 3-sec point, a new AV Start will also be created.

Scenario l – Cookie persistency
  Description: The user plays back multiple long-tail streams. Observe the cookie's persistence across different streaming sessions.
  Outcome: The process should test that the cookie remains the same across all streaming sessions. Any changes to the cookie should be reported.


The table below describes example test scenarios for live streams.


Scenario 1 – Live: view one stream for 30 minutes
  Description: The user begins viewing a live stream and continues to play the stream for 30 minutes.
  Outcome: This process should test if the library app is correctly measuring live streaming.

Scenario 2 – Live: view multiple streams
  Description: The user begins viewing a live stream, continues to play the stream for 10 minutes, and then starts playing a different live stream. This is played for another 10 minutes before stopping.
  Outcome: This process should test if the library app is correctly measuring live streams when multiple streams are played.

Scenario 3 – Live: pause & resume
  Description: The user begins viewing a live stream, pauses the session after 5 minutes, and resumes the session 5 minutes later.
  Outcome: This process should test if the library app is correctly measuring a live stream on pause & resume events.

Scenario 4 – Live: app backgrounded & foregrounded
  Description: The user begins viewing a live stream, minimises the app after 5 minutes, and returns to the app after 5 minutes.
  Outcome: This process should test if the library app is correctly measuring a live stream on app background & foreground events.

Scenario 5 – Cookie persistency
  Description: The user plays back multiple long-tail streams. Observe the cookie's persistence across different streaming sessions.
  Outcome: The process should test that the cookie remains the same across all streaming sessions. Any changes to the cookie should be reported.


We will provide written confirmation once acceptance tests have been successfully completed, signing off your implementation.

Publishing your player

With acceptance tests complete, you may now schedule your implementation for publication.
Please:

  1. Inform us in advance of the go-live date
  2. Change your site-specific sitename from test value to live measurement, e.g. sitename "tvprdotcomtest" must be changed to "tvprdotcom". 

You will use your site-specific test sitename (e.g. "tvprdotcomtest") for testing future upgrades in staging environments.

Live Operational Calibration

The Live Operational Calibration process is used to “calibrate” the TVPR project metrics in the weeks immediately after your player implementation goes live.

The BARB JIC asks you to share data from your internal measures for comparison against the BARB metrics. These are reviewed between BARB, the Broadcaster, and Kantar. At the end of the process BARB will sign off the data for publication on an ongoing basis. All calibration-phase data and discussions are treated in the strictest confidence.

The diagram below details the optimum schedule. We recognise that iterations of each stage may be required before sign-off can take place, leading to an extended calibration phase.

Managing Future Updates

Once your integration is live, library sensor code changes will be infrequent. Changes will largely be driven by software environment changes (e.g. introduction of Apple IFA/IFV). When a new Kantar library becomes available, you will be notified whether the update is critical to the measurement and therefore mandatory, or whether an upgrade can be scheduled at your discretion.

Integration and testing of your player with the new Kantar library code must not take place on your live system – you must not use your live sitename at any point during the integration process. Your live service is protected during the testing stages by simply directing your test traffic to your alternative site-specific test sitename, e.g.

Test traffic directed to sitename: tvprdotcomtest
Live traffic directed to sitename: tvprdotcom
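
As a minimal illustration of keeping the two apart, the sitename can be selected from a build-time flag so that test builds can never emit traffic under the live sitename. The class and flag below are our own example, not part of the Kantar library.

public final class TvprConfig {

    private TvprConfig() { }

    // Test builds use the site-specific test sitename; release builds use the live one.
    public static String sitename(boolean isTestBuild) {
        return isTestBuild ? "tvprdotcomtest" : "tvprdotcom";
    }
}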

Once the upgrade implementation has been verified by all parties, you will be advised to switch to your standard sitename before publishing your upgraded player.




CUSTOMER DATA PRIVACY DOCUMENTATION

Here is an overview of all the types of data that are collected and/or processed by Kantar in the context of the TVPR project.

  1. Stream metadata that are explicitly defined in the video players by the broadcasters. See at:  General metadata tagging instructions.
  2. Stream metadata that are not explicitly defined in the video players by the broadcasters, but are instead collected automatically by the libraries' functionality:
    1. Device identifiers:
      1. From web players: 
        cookie-ID
      2. From iOS devices: 
        Apple Advertising ID (IFA) and the ID for Vendors (IFV) for iOS 6/7, MAC address for iOS 5. All are MD5 hashed and truncated to 16 characters.
        NOTE
        full unencrypted IDs are never sent to our systems!
      3. From Android devices: 
        Google Advertising ID, Android ID and Device ID (MD5 hashed and truncated to 16 characters; a sketch of this hashing follows the list)
        NOTE
        full unencrypted IDs are never sent to our systems!
    2. Screen resolution (!= stream resolution)
    3. Viewtime: contact time with the player, including all buffering, pausing and advertisements. This is different from "playtime"!
  3. Data that are inherent to internet traffic and the measurement process:
    1. IP-addresses are used for processing, but they are not saved. They are used for:
      1. Geo-location of users.
      2. When no cookie is accepted: creating a browser fingerprint based on IP+browser user agent.
    2. Browser user agent where applicable.
    3. Timestamps when heartbeats are received.
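
For illustration only, the sketch below shows the kind of hash-and-truncate transformation described above (MD5, reduced by half to 16 hex characters). The libraries do this internally – broadcasters do not implement it themselves – and keeping the first half of the digest is our assumption.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public final class IdHashingSketch {

    // Returns an MD5 hash of the identifier, truncated to 16 hex characters.
    public static String hashAndTruncate(String deviceId) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        byte[] digest = md5.digest(deviceId.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.substring(0, 16); // 32 hex characters reduced by half
    }
}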


Data Collection and Aggregation

Kantar publishes only aggregated results. There will be no device identification possible within the published results.


Informing Users

With regard to data privacy, there is an obligation to inform users that the application monitors their actions and transmits them to a measuring system. Furthermore, users must be informed that they have the possibility to switch off tracking in the application, and how to do this.

For this purpose, you can use the following example text in an appropriate place in your app implementation:

Our app uses the "mobile app streaming sensor" of Kantar GmbH, München, Germany, to gather statistics about the usage of our site. This data is collected anonymously.
This measurement of mobile usage relies on an anonymised device identifier for recognition purposes. To ensure that your device ID cannot be identified in our systems, it is encrypted and reduced by half. Only the encrypted and shortened device identifier is used in this measurement context.
This mobile measurement was developed in observance of data protection laws. The aim of the measurement is to determine the intensity of use, the extent of use and the number of users of a mobile application. At no time will individual users be identified. Your identity is always protected. You will receive no advertising through this system.
You can opt out of the measurement by our app with the following activation switch.

Please note that only the measurement of our app is disabled. You may continue to be measured by other broadcasters using the "mobile app streaming sensor".

Opt-Out on Mobile Applications

The application developer has to give users the ability to stop further tracking of their actions.

The library offers the following method to do this:

/**
 * When the value <code>false</code> is specified, the sending of
 * requests to the measuring system is switched off.
 * This value is <code>true</code> by default.
 */
public void setTracking(boolean tracking) { }
/**
 * Delivers the value <code>true</code> when the tracking
 * is activated otherwise the value is <code>false</code>.
 */
public boolean isTracking() { }

Persistent saving of the opt-out decision is not provided by the library and needs to be implemented by the app developer.
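
A minimal sketch of persisting the decision on Android is shown below, using SharedPreferences; the store class is our own example, not part of the library. On app start you would read the stored value and pass it to the library's setTracking method, and update both the store and the library whenever the user toggles the switch.

import android.content.Context;
import android.content.SharedPreferences;

public final class OptOutStore {

    private static final String PREFS = "tvpr_measurement";
    private static final String KEY_TRACKING = "tracking_enabled";

    private OptOutStore() { }

    // Persist the user's choice; tracking is on by default.
    public static void save(Context context, boolean trackingEnabled) {
        SharedPreferences prefs = context.getSharedPreferences(PREFS, Context.MODE_PRIVATE);
        prefs.edit().putBoolean(KEY_TRACKING, trackingEnabled).apply();
    }

    // Read the persisted choice and hand it to the library via setTracking(...).
    public static boolean load(Context context) {
        return context.getSharedPreferences(PREFS, Context.MODE_PRIVATE)
                      .getBoolean(KEY_TRACKING, true);
    }
}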

Opt-Out on Desktop Player

In the web environment, the opt-out mechanism uses specific cookie content to identify a client who does not want to be measured. Because no direct and/or standardised way of recognising the client is available, there is no other way to identify such clients.
A client can change their identity at any time (e.g. by deleting cookies). Such an identity change always leads to a loss of the initial client's specific settings (causing them to appear in the system again from the moment of change).

It is therefore necessary that a client who refuses measurement communicates this setting to the system constantly. This means the client must not delete this specific cookie.

The opt-out page for the TVPR project is located at http://optout.2cnt.net/. Any broadcaster can link to or embed that page on their own pages.

More information can be found at https://kantarmedia.atlassian.net/wiki/spaces/public/pages/159726816/OPT-OUT+and+Anonymization.




FAQ

The content of this FAQ section is based on previous correspondence and experience from UK broadcaster implementations of our library software. To help share knowledge between broadcasters, we will continue to add to this section as new questions arise.

General Info

What should I be tracking?
You are expected to track both content and advertising. You should call the library's track-Method only once per content-stream, and once per ad(-block).

Can you summarise how your libraries work in simple terms?
All libraries for our supported platforms offer a mechanism to adapt any player for measurement (on the supported platform). The only information the library needs is the current position in the stream in seconds. Besides that, the measurement also requires information about the stream itself (Content ID or live stream channel) and the duration of the stream in seconds ("0" for live simulcast streams).

Can you summarise how to use your libraries in simple terms?

The basic use of the library is:
1. Create a "SpringStreams" instance once; it contains basic information such as the sitename (required) and the application name when it's an app.
2. Choose or implement an adapter for the player or stream you want to measure.
3. Call the track method of the library, passing the adapter and the information about the stream itself.
From this point the stream will be measured. These steps are always the same on each supported platform.
(see also: https://confluence.spring.de/display/public/Implementation+of+Stream+Measurement#ImplementationofStreamMeasurement-BasicUseoftheAPI )

What metadata must I provide to the library?
See General metadata tagging instructions. Our library needs to receive mandatory information on the current position of the stream in seconds, a stream identifier (unique BARB Content ID and/or name of the stream) and, if the stream is VOD rather than live, information about the stream duration (the so-called duration of the stream).


How do I identify my programme content?
See General metadata tagging instructions. For on-demand you must provide the BARB standard Content ID. For live, this is still being revised because it might not always be possible to supply a standard Content ID in a live context.

What should I use to populate the Content ID (cq) variable?
Please discuss this with your Project Owner who will advise you which internal code you should use.

What is the relation between playtime and viewtime (as can be observed in the testtool)?
In the testtool, it is possible to observe values of the measured playstates and the viewtime (which is the elapsed time that the library was active). For example:
> "pst"=>",,0+22+njmad9;;",
> "vt"=>"23",
The viewtime (vt) has to be equal to or bigger than the playtime, otherwise the data are invalid!

Our Content ID contains slashes and other special characters. Does it need to be URI encoded to be sent correctly?
You don’t need to URL-encode it; the library does that. The value will not be truncated.

What should the duration of a live simulcast stream be?
Live streams should always have a duration of “0”.

What should I do if duration (media length) is not available at the start of my stream?
We have seen cases where the duration of a stream is not available to the library until after 10-20 seconds. This is not a problem, because it will later be updated naturally “in-play”. You must let the library have continuous access to reading this variable – it therefore does not matter if it updates from “0”. For each stream we will handle any change by taking the value assigned for the final data block, i.e. the final heartbeat. (See the sketch below.)
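
Extending the adapter sketch from the Help with Adapters section (same assumptions: Android VideoView, illustrative class names), a duration getter can simply report "0" until the player knows the real media length:

import android.widget.VideoView;

public class LateDurationAdapter {

    private final VideoView player;

    public LateDurationAdapter(VideoView player) {
        this.player = player;
    }

    // Report 0 until the real value is known; the library keeps polling and the
    // value from the final heartbeat is the one that is used.
    public int getDuration() {
        int ms = player.getDuration(); // may be <= 0 while metadata is still loading
        return ms > 0 ? ms / 1000 : 0;
    }
}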

For Live Simulcast, we have an existing rolling playhead parameter which updates every 0.25s. Is this sufficient for measurement, as I understand the implementation 'polls' for an update of the position every 0.2s?
This will not be a problem for the variable containing the position value. It will not create “pauses” in the stream. You do not need to do anything.

When should I call Start/Stop?
You should call Start (so-called "track-Method") only once at the beginning of a stream.
You should call Stop only at the end of a stream, to stop the measurement.

Should I call Pause/Resume?
The library does not have a ‘pause’ mechanism per se, but relies on polling the current playhead position (every 200 ms) from the player object supplied. If the playhead position does not change between subsequent polls, the library detects this as a ‘pause’ event automatically, and this will be evidenced in the pst variable in the tags.

We currently use a False stop system for detecting the end of program. Should we continue to use this for the BARB implementation?
You should not use a false end when implementing our libraries.

Once playback commences, the window dimension width and height can change if AirPlay is engaged. What should I do?
You should not try to actively set the sx and sy variables. Instead let the library continuously read these variables from the player and we will capture them as they arise.  Any changes in state should be automatically captured.

It is specified to use Stream *stream = [spring track:adapter atts:atts] to start tracking. When specifically should this happen?  Whilst we are preparing the content for playback (with possible buffering)? When the main content is ready to play frames?
The only difference this will make is in the captured viewtime (complete time of exposure), not the content playtime. However for the BARB implementation you should be implementing the first option (preparing content), starting tracking as soon as the contact begins.

We assume that we would call [stream stop] when the user has stopped playing the current item through either:
(1) Natural completion of playback,
(2) User exiting playback,
(3) User selecting another content item within the player to playback,
(4) Error occurs in the player causing it to exit,
(5) closing the browser,
(6) replay button re-appearing on screen

Yes these scenarios are correct.

We’re working with a StreamAdapter subclass. The class returns the position and duration, which in theory could be read from any structure rather than an actual player instance. Our video player instance is quite a few layers 'down the chain' of our hierarchy, and existing reporting is abstracted from player instances and takes place at a higher level. Is this a problem?
No. You will use a pointer to READ information from the player, but it does not have to be a player. Our library should be given the possibility to poll the position in the stream, the stream duration, and so on (screen width, screen height, …) permanently/constantly. These values can be built into the adapter, and it does not matter to the library where they come from. However, remember that the player instance is required and nil should not be passed.

How and when is data transmitted?
  • All transmission is over http
  • GET requests, not POST
  • Flushed on event

What happens during clock change (DST changeover)?
All measurement is done in GMT. Timezones are handled in post-processing.

Where can we find a copy of the privacy statement and any software licenses which may be necessary?
Please discuss this with your Project Owner who will advise you accordingly.

What can I do when my player is sending strange playstates? (Mainly Android, but this might also apply to other environments.)
Depending on how the lifecycle of the VideoView / MediaPlayer is managed by the app, compared to the point at which polling is stopped (via stopping the tracking Stream) you may get undefined results out of the getCurrentPosition method. You can defend against it by wrapping the call to getCurrentPosition in a check on VideoView.isPlaying().

It’s also worth noting that the original VideoViewAdapter class probably should not keep a hard reference to the VideoView class, as this increases the potential for leaking the view if the lifecycle of the Stream is not handled very tightly – for instance, when the base class relies on calling Stream.stop from the Activity.onStop method, and similarly SpringStream.unload from the Activity.onDestroy method. Neither onStop nor onDestroy is guaranteed to be called, so in any scenario where they are not called you may leak a VideoView and all of the associated resources that go with it.
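
A sketch of the defensive pattern described above is shown below, combining the isPlaying() guard with a weak reference so the adapter cannot leak the view if the Stream lifecycle is not torn down cleanly. The adapter class is illustrative only, not the library's VideoViewAdapter.

import android.widget.VideoView;
import java.lang.ref.WeakReference;

public class SafeVideoViewAdapter {

    private final WeakReference<VideoView> playerRef;
    private int lastKnownPositionSeconds = 0;

    public SafeVideoViewAdapter(VideoView player) {
        this.playerRef = new WeakReference<>(player);
    }

    // Only read getCurrentPosition() while the view reports that it is playing,
    // otherwise return the last defined value.
    public int getPosition() {
        VideoView player = playerRef.get();
        if (player != null && player.isPlaying()) {
            lastKnownPositionSeconds = player.getCurrentPosition() / 1000;
        }
        return lastKnownPositionSeconds;
    }
}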

Content and Advertising

How do I manage pre- and mid-rolls?
In most cases, it is enough to connect the Kantar library to the programme content object in your player, and connect it separately to advertisement objects. The library will automatically keep track of the programme content when it is being paused for mid-rolls. You should not try to micro-manage what the library is doing. The play head position reported via the main content won't change whilst commercials are playing (it's paused during this time).

What is the lifecycle of a Stream object returned in the Stream *stream = [spring track:adapter atts:atts] call? Should we retain it or is a weak pointer enough? If we retain, when should we release? After [stream stop]?
The object itself should be retained and only released after the stream stop. It can be changed mid-stream if the content itself has changed.  For example where mid-roll commercials are served you will need to pause the program content stream and retain that variable in order to return to it after the commercials have been delivered and the same program stream recommences, e.g. player reports to the sensor.

Our media player uses a different net stream for every piece of content. For example, a typical playlist could include:

Sting →Pre-roll adverts (2/3 adverts) →Content part 1 / program →Mid roll adverts (2/3 adverts) →Content part 2/ program →Post-roll adverts (2/3 adverts)

In total we could potentially have up to 11 net streams created in one session. If we were to start measuring from the first advert, the net stream created and passed to SpringStream would not be the same net stream that would be used for the actual content. We can call "stream.stop()" and then call "track()" again with new meta data and net stream, but would this be classed as the same user session?  Using the above example we would be calling the "stop()" "track()" combination quite a few times. Is this correct?
Unless commercials are being specifically tracked, you’ll need to find a way to ensure that the different content parts are delivered to the library as one. There should be no calling of the stop() and start() functions, because then you will lose the UID (a randomly generated session ID), and the same stream will in fact become another View (one View per content part, multiple Views for the entire content).

I am serving multiple advertisements during my pre- and mid-rolls. Is there any specific instruction?
You need to stop the library after every ad, and call the track method again for every new ad.

Mobile App Streaming

What happens if my app is sent to the background by the user?
A Stop event is called when the app goes to the background. This necessarily means that any continuation of the viewing by bringing the app back to the foreground will result in a new measurement session. Failure to call the library on the foreground event will result in no measurement.

Device type information is not visible to our player – we don't drill down to device versions.  Instead we can report the device string constant exposed by Apple's APIs, which could then be converted to a 'human' string outside of the player environment.
Is this suitable?

In this case the device string should be reported. This approach means if new devices are released, or Apple were to change the device strings, there is no need to update the App as the logic resides externally.

It is mentioned that the 'unload' call in SpringStreams is executed when the App is backgrounded.  We currently close our player down on a background event. Is there any implication to be aware of if we need to release Stream instances?
When the player is sent to the background it closes down and any stream is cancelled. When the user restarts the app and it is brought to the foreground, the previously viewed content will either resume or not, but in any case a brand new stream is started with a new session ID in the library. This should present no issues for the measurement. Make sure that on resumes like this, the library is started again (see the sketch below).
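
A sketch of that lifecycle handling on Android is shown below. startMeasurement() and stopMeasurement() are placeholders for your own wrappers around the library's track and stop calls – they are not library methods.

import android.app.Activity;

public class PlayerActivity extends Activity {

    @Override
    protected void onStart() {
        super.onStart();
        // Returning to the foreground: a brand new measurement session must be started.
        startMeasurement();
    }

    @Override
    protected void onStop() {
        super.onStop();
        // Going to the background: end the current measurement session.
        stopMeasurement();
    }

    private void startMeasurement() { /* call the library's track method here */ }

    private void stopMeasurement() { /* call the library's stop method here */ }
}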

 

 



FEEDBACK

We recognise that even with all the information presented here, there might still be questions or remarks from your side.
To address this, and if you feel that anything is still missing or not explained well enough, please send us an email at UK-TVPR-Ops@kantarmedia.com.
We will happily add your remarks to the document! It is, after all, a living document for a living project!