...
Can you summarise how your libraries work in simple terms?
All libraries for our supported platforms offer a mechanism to adapt any player for measurement. The only information the library strictly needs is the current position in the stream, in seconds. Beyond that, the measurement requires information about the stream itself (Content ID or live stream channel) and, optionally, the duration of the stream in seconds.
Can you summarise how to use your libraries in simple terms?
The basic use of the library is:
1. Create a SpringStreams instance once, with information such as the site name and, for apps, the application name
2. Choose or implement an adapter for the player or stream you want to use
3. Call the track method of the library, passing the adapter and the information about the stream itself
From this point on, the stream is measured. These steps are the same on every supported platform; a minimal sketch follows the link below.
(see also: https://confluence.spring.de/display/public/Implementation+of+Stream+Measurement#ImplementationofStreamMeasurement-BasicUseoftheAPI )
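As a minimal sketch of the three steps, in Objective-C for iOS: the track: and stop calls are quoted elsewhere in this FAQ, while the getInstance signature, MyPlayerAdapter and all attribute values are assumptions for illustration.

    - (void)startTracking {
        // 1. Create the SpringStreams instance once (site name and, for
        //    apps, the application name; exact signature assumed).
        SpringStreams *spring = [SpringStreams getInstance:@"mysite,myapp"];

        // 2. Choose or implement an adapter for your player; MyPlayerAdapter
        //    is a hypothetical StreamAdapter subclass returning the current
        //    position and duration of your player.
        MyPlayerAdapter *adapter = [[MyPlayerAdapter alloc] initWithPlayer:self.player];

        // 3. Call track, passing the adapter and the stream information.
        NSDictionary *atts = @{ @"stream": @"mychannel/mystream",   // stream name
                                @"cq":     @"content-id-123" };     // BARB Content ID
        self.stream = [spring track:adapter atts:atts];
    }

    // At the end of the stream:
    // [self.stream stop];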
What metadata must I provide to the library?
See Player monitoring requirements. Our library must receive the current position in the stream in seconds, a stream identifier (the unique BARB Content ID and/or the name of the stream) and, if the stream is VOD rather than live, the duration of the stream in seconds.
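As a sketch (the key names follow the cq and stream-name references in this FAQ; the values are hypothetical):

    // Mandatory stream identification goes into the atts dictionary:
    NSDictionary *atts = @{ @"stream": @"mychannel/mystream",   // name of the stream
                            @"cq":     @"content-id-123" };     // unique BARB Content ID

    // The current position and, for VOD, the duration are not passed here:
    // the library polls them continuously from your adapter (see the
    // adapter questions further down).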
How do I identify my program content?
See Player monitoring requirements. For on-demand content you must provide the standard BARB Content ID. For live, this is still being revised, because it might not always be possible to supply a standard Content ID in a live context.
What should I use to populate the Content ID (cq) variable?
Please discuss this with your Project Owner, who will advise you which internal code you should use.
Our Content ID contains slashes and other special characters. Does it need to be URI encoded to be sent correctly?
You don’t need to URL-encode it; the library does that for you, and the value will not be truncated.
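For example (hypothetical values), the raw value can be assigned directly:

    // Slashes and other special characters may be passed as-is; the library
    // URL-encodes the value when building the measurement request.
    NSDictionary *atts = @{ @"stream": @"mychannel/mystream",
                            @"cq":     @"drama/series-1/episode-2" };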
What should the duration of a live simulcast stream be?
Live streams should always have a duration of “0”.
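In adapter terms (the getDuration selector is an assumption based on the adapter pattern described below), a live-stream adapter simply reports 0:

    // Duration getter of a live-stream adapter: always report 0 seconds.
    - (int)getDuration {
        return 0;
    }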
...
When should I call Start/Stop?
You should call Start (the "track" method) only once, at the beginning of a stream.
You should call Stop only once, at the end of the stream, to stop the measurement. A minimal guard pattern is sketched below.
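This guard lives in your own code, not in the library; the spring, adapter and atts properties are assumptions.

    // Start measurement exactly once, at the beginning of the stream.
    - (void)startMeasurement {
        if (self.stream == nil) {
            self.stream = [self.spring track:self.adapter atts:self.atts];
        }
    }

    // Stop measurement exactly once, at the end of the stream.
    // (Messaging nil is a no-op in Objective-C, so a second call is harmless.)
    - (void)stopMeasurement {
        [self.stream stop];
        self.stream = nil;
    }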
Should I call Pause/Resume?
There is no functionality to "pause" or "resume" the measurement. The Kantar Spring library "lives and dies" together with the content being played.
We currently use a False stop system for detecting the end of program. Should we continue to use this for the BARB implementation?
You should not use a false stop when implementing our libraries.
Once playback commences, the window dimensions (width and height) can change if AirPlay is engaged. What should I do?
You should not try to actively set the sx and sy variables. Instead, let the library continuously read these values from the player; any changes in state are then captured automatically as they arise (see the sketch below).
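A sketch of such live dimension getters (the selector names are assumptions; the point is to return current values on every poll rather than caching them):

    // Read the player view's current size each time the library polls, so
    // AirPlay or resize changes are captured automatically.
    - (int)getWidth  { return (int)self.playerView.bounds.size.width;  }
    - (int)getHeight { return (int)self.playerView.bounds.size.height; }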
It is specified to use Stream *stream = [spring track:adapter atts:atts] to start tracking. When specifically should this happen? Whilst we are preparing the content for playback (with possible buffering)? When the main content is ready to play frames?
The only difference this makes is in the captured viewtime (the complete time of exposure), not the content playtime. However, for the BARB implementation you should implement the first option, starting tracking as soon as content preparation begins.
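As a sketch using standard AVFoundation calls (the spring, adapter and atts variables are assumed to be set up as in the earlier example):

    // BARB option: start tracking while the content is still preparing,
    // before the first frame is ready to play.
    AVPlayerItem *item = [AVPlayerItem playerItemWithURL:contentURL];
    [self.player replaceCurrentItemWithPlayerItem:item];   // buffering may follow

    self.stream = [spring track:adapter atts:atts];        // tracking starts now
    [self.player play];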
We assume that we would call [stream stop] when the user has stopped playing the current item through any of:
(1) Natural completion of playback,
(2) User exiting playback,
(3) User selecting another content item within the player to play back,
(4) An error occurring in the player causing it to exit,
(5) Closing the browser,
(6) A replay button re-appearing on screen
Yes, these scenarios are all correct. A sketch covering the first two follows below.
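This wires [stream stop] to scenarios (1) and (2) inside a hypothetical player view controller; the AVFoundation notification name is real, everything else is illustrative.

    // (1) Natural completion: observe the end-of-item notification.
    - (void)viewDidLoad {
        [super viewDidLoad];
        [[NSNotificationCenter defaultCenter]
            addObserver:self
               selector:@selector(playbackDidFinish:)
                   name:AVPlayerItemDidPlayToEndTimeNotification
                 object:self.player.currentItem];
    }

    - (void)playbackDidFinish:(NSNotification *)note {
        [self.stream stop];   // scenario (1)
        self.stream = nil;
    }

    // (2) User exiting playback:
    - (void)viewDidDisappear:(BOOL)animated {
        [super viewDidDisappear:animated];
        [self.stream stop];   // scenario (2); a no-op if already stopped
        self.stream = nil;
    }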
We’re working with a StreamAdapter subclass. The class returns the position and duration, which in theory could be read from any structure rather than an actual player instance. Our video player instance is quite a few layers 'down the chain' of our hierarchy, and existing reporting is abstracted from player instances and takes place at a higher level. Is this a problem?
No. You will use a pointer to READ information from the player, but it does not have to be a player. Our library must be given the ability to constantly poll the position of the player in the stream, the stream duration, and so on (screen width, screen height, …). These values can be built into the adapter, and it does not matter to the library where they come from. Remember, however, that the instance the adapter reads from is required; nil should not be passed. A sketch of such an adapter follows below.
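The sketch feeds the adapter from an abstract playback-state object rather than a player instance; all names except StreamAdapter are hypothetical, and the getters follow the position/duration description above.

    // A playback-state model that can live anywhere in your hierarchy.
    @interface PlaybackState : NSObject
    @property (nonatomic) int positionSeconds;
    @property (nonatomic) int durationSeconds;
    @end

    // The adapter reads from that model; the library neither knows nor cares
    // that no real player instance sits behind it. The state object handed
    // to the adapter must not be nil.
    @interface PlaybackStateAdapter : StreamAdapter
    @property (nonatomic, strong) PlaybackState *state;
    @end

    @implementation PlaybackStateAdapter
    - (int)getPosition { return self.state.positionSeconds; }
    - (int)getDuration { return self.state.durationSeconds; }
    @end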
How and when is data transmitted?
All transmission is over HTTP;
GET requests, not POST;
Flushed on event.
What happens during clock change (DST changeover)?
All measurement is done in GMT. Timezones are handled in post-processing.
...
What is the lifecycle of a Stream object returned in the Stream *stream = [spring track:adapter atts:atts] call? Should we retain it or is a weak pointer enough? If we retain, when should we release? After [stream stop]?
The Stream object should be retained and released only after the stream has been stopped. It can be changed mid-stream if the content itself has changed. For example, where mid-roll commercials are served, you will need to pause the program content stream and retain that variable in order to return to it after the commercials have been delivered and the same program stream recommences (at which point the player reports to the sensor again). A sketch of this lifecycle follows below.
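The property names and the ad-measurement flow here are assumptions; the retain-until-stop pattern follows the answer above.

    // Start and retain the program content stream.
    self.contentStream = [spring track:contentAdapter atts:contentAtts];

    // Mid-roll begins: keep the retained contentStream so you can return
    // to it; measure the commercial as its own stream if required.
    Stream *adStream = [spring track:adAdapter atts:adAtts];
    // ... commercial plays ...
    [adStream stop];

    // The same program stream recommences and the retained contentStream
    // continues reporting via its adapter. Release only after the final stop:
    [self.contentStream stop];
    self.contentStream = nil;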
...