
Helper Functions


Alongside the model we provide some helper functions that make it easier to use:

Conversion to luminance

To use the model, an image first has to be converted to a luminance scale [cd/m^2]. To make this step easier we provide three functions that convert RGB images into luminance images, based on measurements of the monitor (a sketch of the underlying idea follows the list):

  • convert_lum_fit: Uses measured luminances for the three channels to fit a power function, which is then used for the conversion
  • convert_lum_levels: Also uses measured luminances for the three channels, but expects measurements for all 256 levels and converts by lookup without any fitting
  • convert_lum_spectra: Uses measured spectra of the monitor to compute the luminances for all 256 levels and converts by lookup
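For illustration, the power-function fit behind convert_lum_fit works roughly like this minimal sketch; the measurement format, the example values, and all variable names are our assumptions, not the toolbox's actual interface:

```matlab
% Minimal sketch (assumed interface): fit L(v) = gain * (v/255)^gamma + offset
% to luminances measured at a few gray levels of one channel, then apply
% the fit to that channel of an RGB image.

levels = [0; 64; 128; 192; 255];        % gray levels that were measured
lumR   = [0.1; 2.1; 11.0; 30.5; 62.0];  % example luminances [cd/m^2], red channel

powerFun = @(p, v) p(1) * (v / 255).^p(2) + p(3);
sse      = @(p) sum((powerFun(p, levels) - lumR).^2);
pR       = fminsearch(sse, [60, 2.2, 0.1]);   % fit gain, gamma, and offset

img    = double(imread('stimulus.png'));      % RGB image with values 0..255
lumRed = powerFun(pR, img(:, :, 1));          % red-channel luminance image
% repeat for green and blue, then sum the three to get the full luminance image
```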

Image generation functions

Many psychophysical studies define their stimuli directly in luminance or contrast units. As a convenience for investigating such stimuli we provide the image_* functions, which can generate various grating-like stimuli. The exact input parameters vary by stimulus type and are explained in the help text of the functions. Inputs shared across many stimulus types are:

  • imSize : The size of the image in pixels
  • degSize : The size of the image in degrees of visual angle
  • AntiAliasing : To avoid aliasing, the image is first generated at a higher resolution and then downsized. This input gives the factor by which the intermediate image is larger and defaults to 4 (see the sketch after this list).
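The anti-aliasing step can be sketched as follows; the block-averaging downsampling and all parameter values are illustrative assumptions, not necessarily what the image_* functions do internally:

```matlab
% Sketch of the anti-aliasing idea: render the stimulus at
% AntiAliasing-times the target resolution, then average blocks of
% pixels down to imSize (illustrative; the toolbox may filter differently).

imSize       = [256, 256];  % target size in pixels
degSize      = [2, 2];      % size in degrees of visual angle
AntiAliasing = 4;           % oversampling factor (the documented default)

bigSize = imSize * AntiAliasing;
[x, ~]  = meshgrid(linspace(0, degSize(2), bigSize(2)), ...
                   linspace(0, degSize(1), bigSize(1)));
freq  = 4;                              % grating frequency [cyc/deg]
bigIm = sin(2 * pi * freq * x);         % vertical grating at high resolution

% Downsample by averaging AntiAliasing x AntiAliasing blocks (base MATLAB)
im = squeeze(mean(mean(reshape(bigIm, AntiAliasing, imSize(1), ...
                               AntiAliasing, imSize(2)), 1), 3));
```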

Simulating experiments

The low-level function to predict discriminability is compare_images, which takes two image representations with their noise levels as input and computes a predicted average difference, noise variance and percent correct.
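The relation between these three outputs can be illustrated with standard signal-detection arithmetic; this is a sketch of the idea, not the actual internals of compare_images:

```matlab
% Illustrative only: with a predicted average difference d between the
% two representations and a total noise variance s2, percent correct in
% a two-alternative task follows from d-prime.

d  = 1.2;                      % predicted average difference (example value)
s2 = 0.8;                      % predicted noise variance (example value)
dprime = d / sqrt(s2);
pc = 0.5 * erfc(-dprime / 2);  % = normcdf(dprime / sqrt(2)), 2AFC percent correct
```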

To simulate whole discrimination experiments we provide two convenience functions, detection_experiment and grating_experiment, which predict the discriminability of either arbitrary images or grating stimuli generated from parameters.

```matlab
[diff, noise, p] = detection_experiment(signal, background, degSize, timecourse, contrast, L, fixation, pars, sizePx, lumBorder, Gradidx, V1Mode)
```

This function computes the discriminability of [background] and [background + L * contrast * signal], where L defaults to the mean luminance of the background. The other parameters are passed on to the model.
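A usage sketch for a simple detection condition might look as follows; the Gaussian-blob signal, the luminance values, and the assumption that trailing arguments can be omitted to obtain defaults are ours, not documented behavior:

```matlab
% Usage sketch (illustrative values and a hypothetical minimal call):
nPx        = 128;
degSize    = [2, 2];
[x, y]     = meshgrid(linspace(-1, 1, nPx));
signal     = exp(-(x.^2 + y.^2) / (2 * 0.2^2));  % Gaussian blob, peak 1
background = 100 * ones(nPx);                    % uniform field at 100 cd/m^2
timecourse = ones(30, 1);                        % constant presentation over time
contrast   = 0.01;                               % 1% signal contrast

[diff, noise, p] = detection_experiment(signal, background, degSize, ...
                                        timecourse, contrast);
```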

```matlab
[diff, noise, p] = grating_experiment(imSize, degSize, timecourse, freq, contrast, contrast0, orientation, L, winSize, phase, fixation, window, pars, pedestalFreq, pedestalOrientation, pedestalPhase, sizePx, Gradidx, V1Mode)
```

This function computes the discriminability of a pedestal grating and a signal grating superimposed on that pedestal.

  • The parameters of the signal grating are given by: freq, contrast, orientation & phase
  • The parameters of the pedestal grating are given by: pedestalFreq, contrast0, pedestalOrientation, pedestalPhase, which all default to the values of the signal grating.
  • The window given with parameters winSize and window is assumed to be the same for both gratings.

All other parameters are the same as for the model and are passed on.
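For example, a pure detection condition (pedestal contrast zero) might be set up like this sketch; the parameter values and the omission of trailing arguments are illustrative assumptions:

```matlab
% Usage sketch for a detection condition (illustrative values):
imSize      = [256, 256];
degSize     = [2, 2];
timecourse  = ones(30, 1);
freq        = 4;       % signal spatial frequency [cyc/deg]
contrast    = 0.005;   % signal contrast
contrast0   = 0;       % pedestal contrast; 0 turns this into pure detection
orientation = 0;
L           = 100;     % mean luminance [cd/m^2]

[diff, noise, p] = grating_experiment(imSize, degSize, timecourse, freq, ...
                                      contrast, contrast0, orientation, L);
```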

Calculating thresholds

Based on the two experiment types we also provide functions to calculate thresholds. These functions vary the contrast in the experiment following a bisection method to find the contrast that yields a given percent correct: calc_threshold does this for grating_experiment and calc_threshold_image for detection_experiment. To do so they add two more inputs:

  • PC : which percent correct to aim for
  • tolerance : how close the predicted percent correct must be to the requested value

All other parameters specify the experiment and the exact model to be run and are passed on unmodified.
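The bisection search can be sketched as follows; pcAtContrast is a hypothetical stand-in for running one of the two experiment functions at a given contrast, and all details are illustrative, not the actual code of calc_threshold:

```matlab
% Sketch of the bisection search (illustrative, not the actual code of
% calc_threshold). pcAtContrast is a hypothetical stand-in for running
% grating_experiment or detection_experiment at a given contrast; here
% a toy psychometric function with its 75%-correct point at c = 0.01.
pcAtContrast = @(c) 0.5 + 0.5 ./ (1 + (0.01 ./ max(c, eps)).^2);

PC        = 0.75;    % target percent correct
tolerance = 0.001;   % acceptable deviation from PC
cLow      = 0;       % contrast known to perform below PC
cHigh     = 1;       % contrast known to perform above PC

while true
    c  = (cLow + cHigh) / 2;
    pc = pcAtContrast(c);
    if abs(pc - PC) < tolerance
        break                     % c approximates the threshold contrast
    elseif pc < PC
        cLow = c;                 % too hard: raise the contrast
    else
        cHigh = c;                % too easy: lower the contrast
    end
end
```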
