Barb TVPR FAQ: Streaming Implementation

General Info

What should I be tracking?
You are expected to track both content and advertising. You should call the library's track method only once per content stream, and once per ad (or ad block).

Can you summarise how your libraries work in simple terms?
Each of our libraries for a supported platform offers a mechanism to adapt any player on that platform for measurement. The only information the library needs from the player is the current position in the stream, in seconds. In addition, the measurement requires information about the stream itself (Content ID or live stream channel) and the duration of the stream in seconds ("0" for live simulcast streams).

Can you summarise how to use your libraries in simple terms?

The basic use of the library is:
1. Create a single "SpringStreams" instance, which contains basic information such as the site name (required) and, for apps, the application name.
2. Choose or implement an adapter for the player or stream you want to measure.
3. Call the track method of the library, passing the adapter and the information about the stream itself.
From this point the stream is measured. These steps are the same on every supported platform; a sketch in code follows below.
(see also: https://confluence.spring.de/display/public/Implementation+of+Stream+Measurement#ImplementationofStreamMeasurement-BasicUseoftheAPI )
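A minimal sketch of these three steps, in Android Java. The names SpringStreams, StreamAdapter, VideoViewAdapter, track and stop are taken from this FAQ; the exact factory call, signatures and attribute keys are assumptions and may differ per platform and library release.

    import java.util.HashMap;
    import java.util.Map;
    import android.widget.VideoView;

    public class MeasurementHelper {
        private SpringStreams spring; // step 1: created once per app
        private Stream stream;        // handle returned by track

        public void startTracking(VideoView videoView) {
            // Step 1: one SpringStreams instance, holding the site name
            // (required) and the application name (for apps). The factory
            // call shown here is an assumption.
            if (spring == null) {
                spring = SpringStreams.getInstance("mysitename", "MyApp");
            }
            // Step 2: an adapter the library can poll for the position.
            StreamAdapter adapter = new VideoViewAdapter(videoView);
            // Step 3: describe the stream and start tracking.
            Map<String, Object> atts = new HashMap<>();
            atts.put("stream", "channel/programmeName"); // stream name
            atts.put("cq", "BARB-CONTENT-ID");           // Barb Content ID
            stream = spring.track(adapter, atts);
        }

        public void stopTracking() {
            if (stream != null) {
                stream.stop(); // only at the real end of the stream
                stream = null;
            }
        }
    }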

What metadata must I provide to the library?
See General metadata tagging instructions. Our library needs to receive mandatory information on the current position in the stream in seconds, a stream identifier (the unique Barb Content ID and/or the name of the stream), and, if the stream is VOD rather than live, the duration of the stream in seconds.


How do I identify my programme content?
See General metadata tagging instructions. For on-demand you must provide the Barb standard Content ID. For live, this is still being revised because it might not always be possible to supply a standard Content ID in a live context.

What should I use to populate the Content ID (cq) variable?
Please discuss this with your Project Owner who will advise you which internal code you should use.

What is the relation between playtime and viewtime (as can be observed in the testtool)?
In the testtool, you can observe the values of the measured playstates and the viewtime (the elapsed time for which the library was active). For example:
> "pst"=>",,0+22+njmad9;;",
> "vt"=>"23",
The viewtime (vt) must be equal to or greater than the playtime; otherwise the data are invalid.

Our Content ID contains slashes and other special characters. Does it need to be URI encoded to be sent correctly?
You do not need to URI-encode it; the library does that for you. The value will not be truncated.

What should the duration of a live simulcast stream be?
Live streams should always have a duration of “0”.

What should I do if duration (media length) is not available at the start of my stream?
We have seen cases where the duration of a stream is not available to the library until 10-20 seconds into playback. This is not a problem, because the value is naturally updated later, "in-play". You must give the library continuous read access to this variable, so it does not matter if the value starts at "0" and is updated. For each stream we handle any change by taking the value assigned in the final data block, i.e. the final heartbeat.
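To illustrate, an adapter can simply return whatever the player currently reports on every poll, so a duration that arrives late is picked up automatically. This is a hypothetical sketch: the StreamAdapter base class and the getDuration/getPosition method names follow the pattern used elsewhere in this FAQ, not a confirmed API.

    import android.widget.VideoView;

    public class LazyDurationAdapter extends StreamAdapter {
        private final VideoView player;

        public LazyDurationAdapter(VideoView player) {
            this.player = player;
        }

        @Override
        public int getDuration() {
            // VideoView reports milliseconds, and -1 while still unknown;
            // returning 0 until then is fine, as the final heartbeat wins.
            int ms = player.getDuration();
            return ms > 0 ? ms / 1000 : 0;
        }

        @Override
        public int getPosition() {
            return player.getCurrentPosition() / 1000;
        }
    }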

For Live Simulcast, we have an existing rolling playhead parameter which updates every 0.25s. Is this sufficient for measurement, as I understand the implementation 'polls' for an update of the position every 0.2s?
This will not be a problem for the variable containing the position value, and it will not create artificial "pauses" in the stream. You do not need to do anything.

When should I call Start/Stop?
You should call Start (the so-called track method) only once, at the beginning of a stream.
You should call Stop only once, at the end of the stream, to stop the measurement.

Should I call Pause/Resume?
The library does not have a 'pause' mechanism per se, but relies on polling the current playhead position (every 200ms) from the player object supplied. If the playhead position does not change between subsequent polls, the library detects this as a 'pause' event automatically, and this is evidenced in the pst variable in the tags.
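A simplified illustration of this poll-based pause detection, in Java. This is not the library's actual code, only a sketch of the mechanism described above:

    public class PauseDetector {
        private int lastPosition = -1;
        private boolean paused = false;

        // Called every 200 ms with the adapter being polled.
        void onPollTick(StreamAdapter adapter) {
            int position = adapter.getPosition();
            if (position == lastPosition && !paused) {
                paused = true;   // playhead stalled: record a pause playstate
            } else if (position != lastPosition && paused) {
                paused = false;  // playhead moving again: back to playing
            }
            lastPosition = position;
        }
    }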

We currently use a false-stop system for detecting the end of a programme. Should we continue to use this for the BARB implementation?
You should not use a false stop when implementing our libraries.

Once playback commences, the window dimension width and height can change if AirPlay is engaged. What should I do?
You should not try to set the sx and sy variables actively. Instead, let the library continuously read these values from the player, and we will capture them as they arise. Any changes in state are captured automatically.

It is specified to use Stream *stream = [spring track:adapter atts:atts] to start tracking. When specifically should this happen?  Whilst we are preparing the content for playback (with possible buffering)? When the main content is ready to play frames?
The only difference this makes is in the captured viewtime (the complete time of exposure), not in the content playtime. For the BARB implementation, however, you should implement the first option: start tracking as soon as content preparation begins.

We assume that we would call [stream stop] when the user has stopped playing the current item through any of the following:
(1) Natural completion of playback,
(2) User exiting playback,
(3) User selecting another content item within the player to playback,
(4) Error occurs in the player causing it to exit,
(5) Closing the browser,
(6) Replay button re-appearing on screen.

Yes, these scenarios are all correct.
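As an illustration, some of these scenarios map naturally onto standard Android player callbacks. The listener names below are standard Android MediaPlayer API; stopTracking() is the hypothetical helper from the first sketch, and this snippet belongs in the class that owns the player:

    import android.media.MediaPlayer;

    void wireEndOfPlayback(MediaPlayer mediaPlayer) {
        // (1) natural completion and (6) the replay button re-appearing:
        mediaPlayer.setOnCompletionListener(mp -> stopTracking());

        // (4) an error in the player causing it to exit:
        mediaPlayer.setOnErrorListener((mp, what, extra) -> {
            stopTracking();
            return false; // let the player's own error handling continue
        });

        // (2), (3) and (5) have no single callback: call stopTracking()
        // from your own navigation, item-selection and teardown paths.
    }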

We’re working with a StreamAdapter subclass. The class returns the position and duration, which in theory could be read from any structure rather than an actual player instance. Our video player instance is quite a few layers 'down the chain' of our hierarchy, and existing reporting is abstracted from player instances and takes place at a higher level. Is this a problem?
No. You use a pointer to READ information from the player, but it does not have to be a player. Our library must be given the possibility to poll permanently/constantly for the position in the stream, the stream duration, and so on (screen width, screen height, ...). These values can be built into the adapter, and it does not matter to the library where they come from. However, remember that an instance is required: nil should not be passed.
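A hypothetical sketch of such an adapter, reading from an app-level playback-state object rather than a concrete player. PlaybackState is an assumed abstraction of your own; the StreamAdapter methods follow the pattern used elsewhere in this FAQ:

    public class PlaybackStateAdapter extends StreamAdapter {
        // Your own abstraction, possibly several layers above the real player.
        private final PlaybackState state;

        public PlaybackStateAdapter(PlaybackState state) {
            this.state = state;
        }

        @Override public int getPosition() { return state.positionSeconds(); }
        @Override public int getDuration() { return state.durationSeconds(); }
        @Override public int getWidth()    { return state.surfaceWidth(); }
        @Override public int getHeight()   { return state.surfaceHeight(); }
    }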

How and when is data transmitted?
All transmission is over HTTP;
GET requests, not POST;
flushed on event.

What happens during clock change (DST changeover)?
All measurement is done in GMT. Time zones are handled in post-processing.

Where can we find a copy of the privacy statement and any software licenses which may be necessary?
Please discuss this with your Project Owner who will advise you accordingly.

What should I do when my player is sending strange playstates? (mainly Android, but this might also apply to other environments)
Depending on how the lifecycle of the VideoView/MediaPlayer is managed by the app, compared to the point at which polling is stopped (via stopping the tracking Stream), you may get undefined results from the getCurrentPosition method. You can defend against this by wrapping the call to getCurrentPosition in a check on VideoView.isPlaying().
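A sketch of that guard inside an adapter. The StreamAdapter shape and method names are assumptions following the rest of this FAQ; isPlaying() and getCurrentPosition() are standard Android VideoView calls:

    import android.widget.VideoView;

    public class SafeVideoViewAdapter extends StreamAdapter {
        private final VideoView videoView;
        private int lastKnownPosition = 0;

        public SafeVideoViewAdapter(VideoView videoView) {
            this.videoView = videoView;
        }

        @Override
        public int getPosition() {
            // getCurrentPosition() can return undefined results once the
            // underlying MediaPlayer has been stopped or released, so only
            // read it while the view reports that it is playing.
            if (videoView.isPlaying()) {
                lastKnownPosition = videoView.getCurrentPosition() / 1000;
            }
            return lastKnownPosition; // fall back to the last sane value
        }
    }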

It is also worth noting that the original VideoViewAdapter class probably should not keep a hard reference to the VideoView, as this increases the potential for leaking the view if the lifecycle of the Stream is not handled very tightly. For instance, the base pattern relies on calling Stream.stop from the Activity.onStop method, and similarly SpringStream.unload from the Activity.onDestroy method. Neither onStop nor onDestroy is guaranteed to be called, so in any scenario where they are not called you may leak a VideoView and all of the resources associated with it.
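One way to avoid that hard reference is to hold the view weakly, so the adapter cannot keep it alive if tracking is not torn down cleanly. A sketch under the same assumed StreamAdapter shape as above:

    import java.lang.ref.WeakReference;
    import android.widget.VideoView;

    public class WeakVideoViewAdapter extends StreamAdapter {
        private final WeakReference<VideoView> viewRef;
        private int lastKnownPosition = 0;

        public WeakVideoViewAdapter(VideoView view) {
            this.viewRef = new WeakReference<>(view);
        }

        @Override
        public int getPosition() {
            VideoView view = viewRef.get(); // null once the view is collected
            if (view != null && view.isPlaying()) {
                lastKnownPosition = view.getCurrentPosition() / 1000;
            }
            return lastKnownPosition;
        }
    }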

Content and Advertising

How do I manage pre- and mid-rolls?
In most cases it is enough to connect the Kantar library to the programme content object in your player, and connect it separately to advertisement objects. The library automatically keeps track of the programme content while it is paused for mid-rolls; you should not try to micro-manage what the library is doing. The playhead position reported via the main content will not change while commercials are playing (it is paused during this time).
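A sketch of that shape: one long-lived Stream for the programme content and a separate, short-lived Stream per advert. The spring/track/stop names come from this FAQ; the callback names and attribute contents are hypothetical:

    import java.util.Map;

    public class AdBreakHandling {
        private final SpringStreams spring;
        private Stream contentStream;   // long-lived, for programme content
        private Stream currentAdStream; // short-lived, one per advert

        public AdBreakHandling(SpringStreams spring) {
            this.spring = spring;
        }

        void onContentStarted(StreamAdapter contentAdapter, Map<String, Object> atts) {
            contentStream = spring.track(contentAdapter, atts);
        }

        void onAdStarted(StreamAdapter adAdapter, Map<String, Object> adAtts) {
            // The content playhead stops moving, so the library records a
            // pause on contentStream automatically; do not stop it here.
            currentAdStream = spring.track(adAdapter, adAtts);
        }

        void onAdEnded() {
            currentAdStream.stop(); // one track/stop pair per advert
        }
    }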

What is the lifecycle of a Stream object returned in the Stream *stream = [spring track:adapter atts:atts] call? Should we retain it or is a weak pointer enough? If we retain, when should we release? After [stream stop]?
The object itself should be retained and only released after the stream stop. The underlying content can change mid-stream: for example, where mid-roll commercials are served, you need to pause the programme content stream and retain that variable in order to return to it after the commercials have been delivered and the same programme stream recommences (i.e. the player reports to the sensor again).

Our media player uses a different net stream for every piece of content. For example, a typical playlist could include:

Sting → Pre-roll adverts (2 or 3 adverts) → Content part 1 / programme → Mid-roll adverts (2 or 3 adverts) → Content part 2 / programme → Post-roll adverts (2 or 3 adverts)

In total we could potentially have up to 11 net streams created in one session. If we were to start measuring from the first advert, the net stream created and passed to SpringStream would not be the same net stream used for the actual content. We could call "stream.stop()" and then call "track()" again with new metadata and a new net stream, but would this be classed as the same user session? Using the above example we would be calling the "stop()"/"track()" combination quite a few times. Is this correct?
Unless commercials are being specifically tracked, you will need to find a way to deliver the different content parts to the library as one stream. There should be no calling of the stop() and track() functions between parts, because you would then lose the UID (a randomly generated session ID), and the same stream would in fact become another view (one view per content part, rather than one view for the entire content).
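One hypothetical way to deliver the parts as one stream is an adapter that reports the position within the whole programme, offsetting each part by the lengths of the parts already completed. PlaybackState and the method names are assumptions, as in the earlier sketches:

    public class MultiPartAdapter extends StreamAdapter {
        private final PlaybackState state;     // abstraction over the parts
        private int completedPartsSeconds = 0; // total length of finished parts

        public MultiPartAdapter(PlaybackState state) {
            this.state = state;
        }

        // Call when one content part ends and the next one begins,
        // e.g. around a mid-roll break served from a different net stream.
        public void onPartFinished(int partDurationSeconds) {
            completedPartsSeconds += partDurationSeconds;
        }

        @Override
        public int getPosition() {
            // Position within the whole programme, not the current part.
            return completedPartsSeconds + state.positionSeconds();
        }
    }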

I am serving multiple advertisements during my pre- and mid-rolls. Is there any specific instruction?
You need to stop the library after every ad, and call the track method again with every new ad.

Mobile App Streaming

What happens if my app is sent to the background by the user?
A Stop event is called when the app goes to the background. This necessarily means that any continuation of viewing after bringing the app back to the foreground results in a new measurement session. Failure to call the library on the foreground event will result in no measurement.
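A sketch of restarting measurement around background/foreground events on Android. onStop/onStart are standard Activity lifecycle callbacks; MeasurementHelper comes from the first sketch, and isPlaybackResuming() is an assumed check of your own:

    import android.app.Activity;
    import android.widget.VideoView;

    public class PlayerActivity extends Activity {
        private VideoView videoView;
        private final MeasurementHelper measurement = new MeasurementHelper();

        @Override
        protected void onStop() {
            super.onStop();
            measurement.stopTracking(); // measurement session ends here
        }

        @Override
        protected void onStart() {
            super.onStart();
            if (isPlaybackResuming()) {
                // New session with a new UID; without this call there
                // is no measurement of the continued viewing.
                measurement.startTracking(videoView);
            }
        }

        private boolean isPlaybackResuming() {
            return videoView != null; // placeholder for your own logic
        }
    }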

Device-type information is not visible to our player – we do not drill down to device versions. Instead we can report the device string constant exposed by Apple's APIs, which could then be converted to a 'human-readable' string outside of the player environment.
Is this suitable?

In this case the device string should be reported. This approach means that if new devices are released, or Apple were to change the device strings, there is no need to update the app, as the logic resides externally.

It is mentioned that the 'unload' call in SpringStreams is executed when the app is backgrounded. We currently close our player down on a background event. Are there any implications to be aware of if we need to release Stream instances?
When the player is sent to the background it closes down and any stream is cancelled. When the user restarts the app and it is brought to the foreground, the previously viewed content may or may not resume, but in either case a brand-new stream is started with a new session ID in the library. This presents no issues for the measurement. Make sure that the library is also started on resumes like this!