Exp-lookit-preferential-looking Class
This class is deprecated.
Basic image display for looking measures (e.g., preferential looking, looking time). The trial consists of four phases, each of which is optional:
This is a composite trial very similar to exp-lookit-composite-video-trial except that it allows specifying either video stimuli or static images.
In general it may be simpler to use a combination of exp-lookit-calibration, exp-lookit-video, and exp-lookit-images-audio frames.
- Announcement: The audio in announcementAudio is played while the announcementVideo video is played centrally, looping as needed. This lasts for announcementLength seconds or the duration of the audio, whichever is longer. To skip this phase, set announcementLength to 0 and do not provide announcementAudio.
- Intro: The introVideo video is played centrally until it ends. To skip this phase, do not provide introVideo.
- Calibration: The video in calibrationVideo is played (looping as needed) in each of the locations specified in calibrationPositions in turn, remaining in each position for calibrationLength ms. At the start of each position the audio in calibrationAudio is played once. (Audio will be paused and restarted if it is longer than calibrationLength.) Set calibrationLength to 0 to skip calibration.
- Test: Test images are displayed or the video in testVideo and audio in testAudio (optional) are played until either testLength seconds have elapsed (with the video looping if needed) or the video has been played testCount times. If testLength is set, it overrides testCount; for example, if testCount is 1 and testLength is 30, a 10-second video will be played 3 times. If the participant pauses the study during the test phase, then after restarting the trial, the video in altTestVideo will be used instead (defaulting to the same video if altTestVideo is not provided).
To specify test images, you can provide leftImage, rightImage, and/or centerImage, or you can provide a list of possibleImages and give an index in that list for any of those three placements.
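For example, a snippet using possibleImages with indices (the image filenames here are hypothetical placeholders) might look like:

```json
{
    "possibleImages": ["familiar.jpg", "novel.jpg", "distractor.jpg"],
    "leftImageIndex": 0,
    "rightImageIndex": 1
}
```

This is equivalent to setting "leftImage": "familiar.jpg" and "rightImage": "novel.jpg" directly.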
Specifying media locations: For any parameters that expect a list of audio/video sources, you can EITHER provide a list of src/type pairs with full paths like this:
[
    {
        'src': 'http://.../video1.mp4',
        'type': 'video/mp4'
    },
    {
        'src': 'http://.../video1.webm',
        'type': 'video/webm'
    }
]
OR you can provide a string 'stub', which will be expanded based on the parameter baseDir. Expected audio/video locations will be based on either audioTypes or videoTypes as appropriate; images are all expected to be in an img/ subdirectory. For example, if you provide the audio source intro, baseDir is https://mystimuli.org/mystudy/, and audioTypes is ['mp3', 'ogg'], then this will be expanded to:
[
    {
        src: 'https://mystimuli.org/mystudy/mp3/intro.mp3',
        type: 'audio/mp3'
    },
    {
        src: 'https://mystimuli.org/mystudy/ogg/intro.ogg',
        type: 'audio/ogg'
    }
]
This allows you to simplify your JSON document a bit and also easily switch to a new version of your stimuli without changing every URL. You can mix source objects with full URLs and those using stubs within the same list. However, any stimuli specified using stubs MUST be organized as expected under baseDir/MEDIATYPE/filename.MEDIATYPE.
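The expansion rule above can be sketched as a small helper; expandStub is a hypothetical name for illustration, not part of the actual frame code:

```javascript
// Sketch of stub expansion under baseDir (hypothetical helper, for illustration).
// Given a base directory, a stub, a list of file types, and the media kind
// ('audio' or 'video'), produce the list of {src, type} source objects.
function expandStub(baseDir, stub, types, mediaKind) {
    if (!baseDir.endsWith('/')) {
        baseDir += '/'; // a trailing slash is added if missing
    }
    return types.map((t) => ({
        src: baseDir + t + '/' + stub + '.' + t,
        type: mediaKind + '/' + t
    }));
}

// expandStub('https://mystimuli.org/mystudy/', 'intro', ['mp3', 'ogg'], 'audio')
// yields the two-source list shown in the example above.
```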
This frame is displayed fullscreen; if the frame before it is not, that frame needs to include a manual "next" button so that there's a user-interaction event to trigger fullscreen mode. (Browsers don't allow switching to fullscreen without a user event.)
Example usage:
"sample-trial": {
    "kind": "exp-lookit-preferential-looking",
    "isLast": false,
    "baseDir": "https://s3.amazonaws.com/lookitcontents/labelsconcepts/",
    "leftImage": "stapler_test_02.jpg",
    "testLength": 8,
    "audioTypes": ["ogg", "mp3"],
    "pauseAudio": "pause",
    "rightImage": "novel_02.jpg",
    "videoTypes": ["webm", "mp4"],
    "announcementVideo": "attentiongrabber",
    "announcementAudio": "video_02",
    "introVideo": "cropped_book",
    "testAudio": "400Hz_tones",
    "unpauseAudio": "return_after_pause",
    "calibrationLength": 0,
    "calibrationAudio": "chimes",
    "calibrationVideo": "attentiongrabber",
    "loopTestAudio": false
}
Item Index
Methods
- destroyRecorder
- destroySessionRecorder
- exitFullscreen
- hideRecorder
- makeTimeEvent
- onRecordingStarted
- onSessionRecordingStarted
- serializeContent
- setupRecorder
- showFullscreen
- showRecorder
- startRecorder
- startSessionRecorder
- stopRecorder
- stopSessionRecorder
- whenPossibleToRecordObserver
- whenPossibleToRecordSessionObserver
Properties
- allowPauseDuringTest
- altTestVideo
- announcementAudio
- announcementLength
- announcementVideo
- assetsToExpand
- audioOnly
- audioTypes
- autosave
- baseDir
- calibrationAudio
- calibrationLength
- calibrationPositions
- calibrationVideo
- centerImage
- centerImageIndex
- displayFullscreen
- displayFullscreenOverride
- doRecording
- doUseCamera
- endSessionRecording
- fsButtonID
- fullScreenElementId
- generateProperties
- introVideo
- leftImage
- leftImageIndex
- loopTestAudio
- maxRecordingLength
- maxUploadSeconds
- parameters
- pauseAudio
- pauseText
- possibleImages
- recorder
- recorderElement
- recorderReady
- rightImage
- rightImageIndex
- selectNextFrame
- sessionAudioOnly
- sessionMaxUploadSeconds
- showWaitForRecordingMessage
- showWaitForUploadMessage
- startRecordingAutomatically
- startSessionRecording
- stoppedRecording
- testAudio
- testCount
- testLength
- testVideo
- unpauseAudio
- videoId
- videoList
- videoTypes
- waitForRecordingMessage
- waitForRecordingMessageColor
- waitForUploadMessage
- waitForUploadMessageColor
- waitForWebcamImage
- waitForWebcamVideo
Data collected
Methods
destroyRecorder ()
destroySessionRecorder ()
exitFullscreen ()
hideRecorder ()
makeTimeEvent ( eventName, [extra] )
Create the time event payload for a particular frame/event. This can be overridden to add fields to every event sent by a particular frame.
Returns: Event type, time, and any additional metadata provided.
onRecordingStarted ()
onSessionRecordingStarted ()
serializeContent ( eventTimings )
Each frame that extends ExpFrameBase will send at least an array eventTimings, a frame type, and any generateProperties back to the server upon completion. Individual frames may define additional properties that are sent.
Parameters:
- eventTimings (Array)
setupRecorder ( element )
Parameters:
- element (Node): A DOM node representing where to mount the recorder
showFullscreen ()
showRecorder ()
startRecorder ()
startSessionRecorder ()
stopRecorder ()
stopSessionRecorder ()
whenPossibleToRecordObserver ()
whenPossibleToRecordSessionObserver ()
Properties
allowPauseDuringTest
Boolean
Whether to allow participant to pause study during test. If no, study still pauses but upon unpausing moves to next trial. If yes, study restarts from beginning upon unpausing (with alternate sources).
Default: true
altTestVideo
Array
Array of objects specifying video src and type for alternate test video, as for testVideo. Alternate test video will be shown if the first test is paused, after restarting the trial. If alternate test video is also paused, we just move on. If altTestVideo is not provided, defaults to playing same test video again (but still only one pause of test video allowed per trial).
Default: []
announcementAudio
Array
List of objects specifying intro announcement src and type. If empty and announcementLength is 0, the announcement is skipped.
Example: [{'src': 'http://.../audio1.mp3', 'type': 'audio/mp3'}, {'src': 'http://.../audio1.ogg', 'type': 'audio/ogg'}]
Default: []
announcementLength
Number
Minimum amount of time to show the attention-getter, in seconds. The announcement phase (attention-getter plus audio) will last the maximum of announcementLength and the duration of any announcement audio.
Default: 2
announcementVideo
Array
Array of objects specifying attention-grabber video src and type, as for testVideo. The attention-grabber video is shown (looping) during the announcement phase and when the study is paused.
Default: []
audioOnly
Number
Default: 0
audioTypes
String[]
List of audio types to expect for any audio specified just with a string rather than src/type pairs. If audioTypes is ['typeA', 'typeB'] and an audio source is given as intro, the audio source will be expanded out to:
[
    {
        src: 'baseDir' + 'typeA/intro.typeA',
        type: 'audio/typeA'
    },
    {
        src: 'baseDir' + 'typeB/intro.typeB',
        type: 'audio/typeB'
    }
]
Default: ['mp3', 'ogg']
autosave
Number
private
Default: 1
baseDir
String
Base directory for where to find stimuli. Image src values that are not full URLs will be expanded based on baseDir + img/. Any audio/video src values provided as strings rather than objects with src and type will be expanded out to baseDir/avtype/[stub].avtype, where the potential avtypes are given by audioTypes and videoTypes.
baseDir should include a trailing slash (e.g., http://stimuli.org/myexperiment/); if a value is provided that does not end in a slash, one will be added.
Default: ''
calibrationAudio
Object[]
Array of {src: 'url', type: 'MIMEtype'} objects for calibration audio (played at each calibration position). Ignored if calibrationLength is 0.
Default: []
calibrationLength
Number
Length of a single calibration segment, in ms. Set to 0 to skip calibration.
Default: 3000
calibrationPositions
Array
Ordered list of positions to show calibration segment in. Options are "center", "left", "right". Ignored if calibrationLength is 0.
Default: ["center", "left", "right", "center"]
calibrationVideo
Object[]
Array of {src: 'url', type: 'MIMEtype'} objects for calibration video (played from the start at each calibration position). Ignored if calibrationLength is 0.
Default: []
centerImage
String
URL of image to show at center, if any. Can be a full URL or a stub that will be appended to baseDir + img/ (see baseDir).
centerImageIndex
String
Index in possibleImages for the center image. This will be overridden by any actual value provided for centerImage. The index must be in the range [0, len(possibleImages) - 1]. Omit, or set to -1, to not use.
displayFullscreenOverride
String
Set to true to display this frame in fullscreen mode, even if the frame type is not always displayed fullscreen. (For instance, you might use this to keep a survey between test trials in fullscreen mode.)
Default: false
doRecording
Boolean
Whether to do any video recording during this frame. Default true. Set to false for e.g. last frame where just doing an announcement.
Default: true
doUseCamera
Boolean
Default: true
endSessionRecording
Number
Default: false
fullScreenElementId
String
private
generateProperties
String
Function to generate additional properties for this frame (like {"kind": "exp-lookit-text"}) at the time the frame is initialized. Allows behavior of study to depend on what has happened so far (e.g., answers on a form or to previous test trials). Must be a valid Javascript function, returning an object, provided as a string.
Arguments that will be provided are: expData, sequence, child, pastSessions, conditions.
expData, sequence, and conditions are the same data as would be found in the session data shown on the Lookit experimenter interface under 'Individual Responses', except that they will only contain information up to this point in the study.
expData is an object consisting of frameId: frameData pairs; the data associated with a particular frame depends on the frame kind.
sequence is an ordered list of frameIds, corresponding to the keys in expData.
conditions is an object representing the data stored by any randomizer frames; keys are frameIds for randomizer frames and the data stored depends on the randomizer used.
child is an object that has the following properties - use child.get(propertyName) to access:
- additionalInformation: String; additional information field from child form
- ageAtBirth: String; child's gestational age at birth in weeks. Possible values are "24" through "39", "na" (not sure or prefer not to answer), "<24" (under 24 weeks), and "40>" (40 or more weeks).
- birthday: Date object
- gender: "f" (female), "m" (male), "o" (other), or "na" (prefer not to answer)
- givenName: String, child's given name/nickname
- id: String, child UUID
- languageList: String, space-separated list of languages child is exposed to (2-letter codes)
- conditionList: String, space-separated list of conditions/characteristics of child from registration form, as used in criteria expression, e.g. "autism_spectrum_disorder deaf multiple_birth"
pastSessions is a list of previous response objects for this child and this study, ordered starting from most recent (at index 0 is this session!). Each has properties (access as pastSessions[i].get(propertyName)):
- completed: Boolean, whether they submitted an exit survey
- completedConsentFrame: Boolean, whether they got through at least a consent frame
- conditions: Object representing any conditions assigned by randomizer frames
- createdOn: Date object
- expData: Object consisting of frameId: frameData pairs
- globalEventTimings: list of any events stored outside of individual frames - currently just used for attempts to leave the study early
- sequence: ordered list of frameIds, corresponding to keys in expData
- isPreview: Boolean, whether this is from a preview session (possible in the event this is an experimenter's account)
Example:
function(expData, sequence, child, pastSessions, conditions) {
    return {
        'blocks': [
            {
                'text': 'Name: ' + child.get('givenName')
            },
            {
                'text': 'Frame number: ' + sequence.length
            },
            {
                'text': 'N past sessions: ' + pastSessions.length
            }
        ]
    };
}
(This example is split across lines for readability; when added to JSON it would need to be on one line.)
Default: null
introVideo
Array
Array of objects specifying intro video src and type, as for testVideo. If empty, intro segment will be skipped.
Default: []
leftImage
String
URL of image to show on left, if any. Can be a full URL or a stub that will be appended to baseDir + img/ (see baseDir).
leftImageIndex
String
Index in possibleImages for the image to use on the left. This will be overridden by any actual value provided for leftImage. The index must be in the range [0, len(possibleImages) - 1]. Omit, or set to -1, to not use.
maxRecordingLength
Number
Default: 7200
maxUploadSeconds
Number
Default: 5
parameters
Object[]
An object containing values for any parameters (variables) to use in this frame.
Any property VALUES in this frame that match any of the property NAMES in parameters
will be replaced by the corresponding parameter value. For example, suppose your frame
is:
{
'kind': 'FRAME_KIND',
'parameters': {
'FRAME_KIND': 'exp-lookit-text'
}
}
Then the frame kind will be exp-lookit-text. This may be useful if you need to repeat values for different frame properties, especially if your frame is actually a randomizer or group. You may use parameters nested within objects (at any depth) or within lists.
You can also use selectors to randomly sample from or permute a list defined in parameters. Suppose STIMLIST is defined in parameters, e.g. a list of potential stimuli. Rather than just using STIMLIST as a value in your frames, you can also:
- Select the Nth element (0-indexed) of the value of STIMLIST (will cause an error if N >= STIMLIST.length):
  'parameterName': 'STIMLIST#N'
- Select (uniformly) a random element of the value of STIMLIST:
  'parameterName': 'STIMLIST#RAND'
- Set parameterName to a random permutation of the value of STIMLIST:
  'parameterName': 'STIMLIST#PERM'
- Select the next element in a random permutation of the value of STIMLIST, which is used across all substitutions in this randomizer. This allows you, for instance, to provide a list of possible images in your parameterSet, and use a different one each frame with the subset/order randomized per participant. If more STIMLIST#UNIQ parameters than elements of STIMLIST are used, we loop back around to the start of the permutation generated for this randomizer:
  'parameterName': 'STIMLIST#UNIQ'
Default: {}
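The #N, #RAND, and #PERM selectors can be illustrated with a sketch; resolveSelector is a hypothetical name for illustration, and #UNIQ (which needs shared state across substitutions) is omitted:

```javascript
// Illustrative sketch of parameter selector resolution (#N, #RAND, #PERM).
function resolveSelector(value, parameters) {
    const match = /^(\w+)#(\d+|RAND|PERM)$/.exec(String(value));
    if (!match || !(match[1] in parameters)) {
        return value; // not a selector; leave the value unchanged
    }
    const list = parameters[match[1]];
    if (match[2] === 'RAND') {
        // uniformly random element
        return list[Math.floor(Math.random() * list.length)];
    }
    if (match[2] === 'PERM') {
        // Fisher-Yates shuffle of a copy
        const out = [...list];
        for (let i = out.length - 1; i > 0; i--) {
            const j = Math.floor(Math.random() * (i + 1));
            [out[i], out[j]] = [out[j], out[i]];
        }
        return out;
    }
    return list[Number(match[2])]; // Nth element, 0-indexed
}
```

For example, resolveSelector('STIMLIST#1', {STIMLIST: ['a', 'b', 'c']}) returns 'b'.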
pauseAudio
Object[]
Array of {src: 'url', type: 'MIMEtype'} objects for audio played upon pausing the study.
Default: []
pauseText
String
Text to show under "Study paused / Press space to resume" when the study is paused.
Default: "(You'll have a moment to turn around again.)"
possibleImages
String
List of possible images that may be shown. Can be full URLs or stubs that will be appended to baseDir + img/ (see baseDir). If leftImageIndex, rightImageIndex, and/or centerImageIndex are provided, they indicate the index of the item in this list.
recorder
VideoRecorder
private
recorderReady
Boolean
private
rightImage
String
URL of image to show on right, if any. Can be a full URL or a stub that will be appended to baseDir + img/ (see baseDir).
rightImageIndex
String
Index in possibleImages for the image to use on the right. This will be overridden by any actual value provided for rightImage. The index must be in the range [0, len(possibleImages) - 1]. Omit, or set to -1, to not use.
selectNextFrame
String
Function to select which frame index to go to when using the 'next' action on this frame. Allows flexible looping / short-circuiting based on what has happened so far in the study (e.g., once the child answers N questions correctly, move on to next segment). Must be a valid Javascript function, returning a number from 0 through frames.length - 1, provided as a string.
Arguments that will be provided are: frames, frameIndex, expData, sequence, child, pastSessions.
frames is an ordered list of frame configurations for this study; each element is an object corresponding directly to a frame you defined in the JSON document for this study (but with any randomizer frames resolved into the particular frames that will be used this time).
frameIndex is the index in frames of the current frame.
expData is an object consisting of frameId: frameData pairs; the data associated with a particular frame depends on the frame kind.
sequence is an ordered list of frameIds, corresponding to the keys in expData.
child is an object that has the following properties - use child.get(propertyName) to access:
- additionalInformation: String; additional information field from child form
- ageAtBirth: String; child's gestational age at birth in weeks. Possible values are "24" through "39", "na" (not sure or prefer not to answer), "<24" (under 24 weeks), and "40>" (40 or more weeks).
- birthday: timestamp in format "Mon Apr 10 2017 20:00:00 GMT-0400 (Eastern Daylight Time)"
- gender: "f" (female), "m" (male), "o" (other), or "na" (prefer not to answer)
- givenName: String, child's given name/nickname
- id: String, child UUID
pastSessions is a list of previous response objects for this child and this study, ordered starting from most recent (at index 0 is this session!). Each has properties (access as pastSessions[i].get(propertyName)):
- completed: Boolean, whether they submitted an exit survey
- completedConsentFrame: Boolean, whether they got through at least a consent frame
- conditions: Object representing any conditions assigned by randomizer frames
- createdOn: timestamp in format "Thu Apr 18 2019 12:33:26 GMT-0400 (Eastern Daylight Time)"
- expData: Object consisting of frameId: frameData pairs
- globalEventTimings: list of any events stored outside of individual frames - currently just used for attempts to leave the study early
- sequence: ordered list of frameIds, corresponding to keys in expData
Example that just sends us to the last frame of the study no matter what:
"function(frames, frameIndex, frameData, expData, sequence, child, pastSessions) {return frames.length - 1;}"
Default: null
sessionAudioOnly
Number
Default: 0
sessionMaxUploadSeconds
Number
Default: 10
showWaitForRecordingMessage
Boolean
Default: true
showWaitForUploadMessage
Boolean
Default: true
startSessionRecording
Number
Default: false
stoppedRecording
Boolean
private
testAudio
Array
List of objects specifying test audio src and type, as for announcementAudio. If empty, no additional test audio is played besides any audio in testVideo.
Default: []
testCount
Number
Number of times to play test video before moving on. This is ignored if testLength is set to a finite value.
Default: 1
testLength
Number
Length to loop test videos, in seconds. Set if you want a time-based limit. E.g., setting testLength to 20 means that the first 20 seconds of the video will be played, with shorter videos looping until they get to 20s. Leave out or set to Infinity to play the video through to the end a set number of times instead. If a testLength is set, it overrides any value set in testCount.
Default: Infinity
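The interaction between testCount and testLength described above can be summarized in a sketch (testPhaseSeconds is a hypothetical helper for illustration, not part of the frame code):

```javascript
// How long the test phase runs, per the testLength/testCount rules
// (hypothetical helper for illustration).
function testPhaseSeconds(videoDurationSec, testCount, testLength) {
    if (Number.isFinite(testLength)) {
        return testLength; // a finite testLength overrides testCount
    }
    return videoDurationSec * testCount; // play the video through testCount times
}

// With a 10-second video, testCount 1, and testLength 30, the phase lasts
// 30 seconds (the video loops 3 times).
```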
testVideo
Array
Array of objects specifying video src and type for test video (these should be the same video, but multiple sources--e.g. mp4 and webm--are generally needed for cross-browser support). If none provided, skip test phase.
Example value:
[{'src': 'http://.../video1.mp4', 'type': 'video/mp4'}, {'src': 'http://.../video1.webm', 'type': 'video/webm'}]
Default: []
unpauseAudio
Object[]
Array of {src: 'url', type: 'MIMEtype'} objects for audio played upon unpausing the study. Unpausing audio will always be played before proceeding to the next trial, even if this trial will not be redone (e.g. because it was paused during test and allowPauseDuringTest is set to false).
Default: []
videoId
String
private
Recorded videos are named videoStream_<experimentId>_<frameId>_<sessionId>_timestampMS_RRR, where RRR are random numeric digits.
videoList
List
private
videoTypes
String[]
List of video types to expect for any video specified just with a string rather than src/type pairs. If videoTypes is ['typeA', 'typeB'] and a video source is given as intro, the video source will be expanded out to:
[
    {
        src: 'baseDir' + 'typeA/intro.typeA',
        type: 'video/typeA'
    },
    {
        src: 'baseDir' + 'typeB/intro.typeB',
        type: 'video/typeB'
    }
]
Default: ['mp4', 'webm']
waitForRecordingMessage
String
Default: 'Please wait... <br><br> starting webcam recording'
waitForRecordingMessageColor
String
Default: 'white'
waitForUploadMessage
String
Default: 'Please wait... <br><br> uploading video'
waitForUploadMessageColor
String
Default: 'white'
waitForWebcamImage
String
Image to display while waiting for the webcam connection, if any. Can be a full URL, or a filename relative to baseDir/img/ if this frame otherwise supports use of baseDir.
Default: ''
waitForWebcamVideo
String
Video to display while waiting for the webcam connection, if any. Can be an array of {'src': 'https://...', 'type': '...'} objects (e.g. providing both webm and mp4 versions at specified URLs), or a single string relative to baseDir/<EXT>/ if this frame otherwise supports use of baseDir.
Default: ''
Data keys collected
These are the fields that will be captured by this frame and sent back to the Lookit server. Each of these fields will correspond to one row of the CSV frame data for a given response: the row will have key set to the data key name, and value set to the value for this response. Equivalently, this data will be available in the exp_data field of the response JSON data.
eventTimings
Ordered list of events captured during this frame (oldest to newest). Each event is represented as an object with at least the properties {'eventType': EVENTNAME, 'timestamp': TIMESTAMP}.
See Events tab for details of events that might be captured.
frameType
Type of frame: EXIT (exit survey), CONSENT (consent or assent frame), or DEFAULT (anything else)
generatedProperties
Any properties generated via a custom generateProperties function provided to this frame (e.g., a score you computed to decide on feedback). In general will be null.
Events
enteredFullscreen
leftFullscreen
nextFrame
Move to next frame
pauseVideo
previousFrame
Move to previous frame
recorderReady
sessionRecorderReady
startCalibration
Start of EACH calibration segment
Event Payload:
- location (String): location of the calibration ball, relative to the child: 'left', 'right', or 'center'
startSessionRecording
stoppingCapture
stopSessionRecording
unpauseVideo
videoStreamConnection
Event Payload:
- status (String): status of the video stream connection, e.g. 'NetConnection.Connect.Success' if successful