SPR1-3147: Phase up observation NMAD cal-report plots #75

Open · wants to merge 4 commits into base: master

Conversation

tamerakassie (Contributor)

This PR calculates the Normalised Median Absolute Deviation (NMAD) of the phases for the uncorrected and corrected track scan states of a phase-up observation, plots the NMAD across frequency, and adds these plots to the cal-report. JIRA ticket SPR1-3147 collects our investigation of the NMAD metric as a statistic for assessing phase stability across the two tracks of a phase-up observation, and in the sub-tickets we investigated using Katdal sensors to create a trigger condition for when these plots should be added to the cal-report.
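
As context, a minimal sketch of the NMAD statistic being plotted, not the pipeline code itself; the array shapes and variable names here are illustrative assumptions only:

import numpy as np

def nmad(x, axis=0):
    # Normalised Median Absolute Deviation: 1.4826 * median(|x - median(x)|)
    med = np.nanmedian(x, axis=axis, keepdims=True)
    return 1.4826 * np.nanmedian(np.abs(x - med), axis=axis)

# Toy stand-in for the visibilities: (ntimes, nchans, nbaselines) complex array
rng = np.random.default_rng(0)
vis = np.exp(1j * rng.normal(scale=0.1, size=(100, 64, 12)))

# NMAD of the phase over time, per channel and baseline; this per-channel
# statistic is what gets plotted against frequency for each track.
phase_nmad = nmad(np.angle(vis), axis=0)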

The slew scan states were being sent into the pipeline for correction because the previous code treated them as tracks. This was caused by adding the 'correct' sync_time to the simulator, which resulted in the timestamps being handled incorrectly. This commit fixes simulator.py by subtracting the sync_time from the timestamps to keep the times aligned with the observation.
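
For illustration only, a sketch of the kind of fix described here; the variable names and values are assumptions, not the actual simulator.py code:

import numpy as np

sync_time = 1_600_000_000.0                             # illustrative sync time (Unix seconds)
raw_timestamps = sync_time + np.arange(0.0, 32.0, 4.0)  # absolute timestamps from the file
timestamps = raw_timestamps - sync_time                 # keep times relative to the observation
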
In this commit we fix the failing test_control.py test. The test injects exceptions into the pipeline. In our case the test failed because the test telstate_cb does not have the stream type `sdp.vis` that `check_applied_gain()` needs in order to extract sensor information from Katdal. We work around this by adding a target with a `pointingcal` tag to the telstate, which ensures that the test checks that the reference pointing solutions are flushed from the pipeline. Phase-up observations have a `bfcal` tag, so the test will now check exceptions for observations that have a reference pointing tag.

In this commit we also debug the plotting of the tracks, to ensure that each track's respective NMAD phase is plotted. The issue was that the plot name was duplicated, which resulted in the same plot appearing in the cal_report for the different tracks.
In this commit we add y-axis clipping to the NMAD phase plots to improve their readability, and we have changed the third heading of these plots to expand on `NMAD` for a better understanding of the plotted metric.
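
A minimal matplotlib sketch of the y-axis clipping described above; the frequency range, limits, and variable names are illustrative and may differ from the actual cal-report plots:

import matplotlib.pyplot as plt
import numpy as np

freqs_mhz = np.linspace(856, 1712, 64)                               # illustrative frequency axis
phase_nmad_deg = np.abs(np.random.default_rng(1).normal(5, 3, 64))   # stand-in NMAD-of-phase data

fig, ax = plt.subplots()
ax.plot(freqs_mhz, phase_nmad_deg)
ax.set_xlabel('Frequency (MHz)')
ax.set_ylabel('NMAD of Phase (degrees)')
ax.set_ylim(0, 20)    # clip the y-axis so a few outliers do not dominate the scale
fig.savefig('phase_nmad_example.png')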
@KimMcAlpine (Contributor) left a comment

A few minor changes

Comment on lines +1333 to +1335
timestamp = int(np.average(self.timestamps)) # Convert to an integer for filename
filename = f"phase_nmad_{timestamp}.npy"
np.save(filename, phase_nmad)

Debug leftovers, best to remove these

Comment on lines +220 to +225
# append the nmad av_corr phase from each scan
elif key.endswith('_nmad'):
    sum_corr[key] += new_corr[key]
    del new_corr[key]
    keylist.append(key)

Do you really need this? It seems like it would be covered by the case below.

@@ -1290,6 +1290,51 @@ def summarize(self, av_corr, key, data=None, nchans=8, avg_ant=False,
val = (av_vis.compute(), np.average(self.timestamps))
av_corr[key].insert(0, val)

def summarize_stats(self, av_corr, key, data=None, nchans=8):

You have left nchans in as a parameter here, but I don't think you are actually using it anywhere in the function?

@@ -189,10 +189,24 @@ def setup_telstate(self, telstate):
raise ValueError('number of substreams must divide into the number of channels')
parameter_dict['sdp_l0_n_chans_per_substream'] = n_chans // self.n_substreams
# separate keys without times from those with times
obs_activity_key = 'obs_activity'

I think you can make this a lot more readable and compact.
You are trying to separate things that need to be added to telstate with a time (i.e. a sensor) from things that only need a value, without a time (i.e. an attribute). These need to be treated differently, but within each of the two categories they are added in the same way, so you don't need multiple subcategories here.
A handy tip: the endswith function will take a tuple of values and evaluate to True if any of the tuple values match.
So I think you can do something like

sensor_key_suffixes = ('obs_activity', '_eq', 'target_activity', 'obs_label', 'noise_diode')
sensor_keys = [k for k in parameter_dict.keys() if k.endswith(sensor_key_suffixes)]
for k in parameter_dict.keys():
    if k in sensor_keys:
        # sensors: a list of (value, time) pairs, each added with its timestamp
        for v, t in parameter_dict[k]:
            logger.info('Setting %s = %s', k, v)
            telstate.add(k, v, ts=t)
    else:
        # attributes: a single value, no timestamp
        logger.info('Setting %s = %s', k, parameter_dict[k])
        telstate[k] = parameter_dict[k]

@@ -830,6 +895,7 @@ async def tx_data(self, telstate, tx, max_scans):

# data values to transmit
tx_time = self.file.timestamps[i] # timestamp
# logger.info('Timestamp %d', tx_time)

More debug that needs to be removed

@@ -1364,6 +1364,9 @@ async def test_flush_pipeline_exception(self):
rs = np.random.RandomState(seed=1)
with mock.patch.object(control.Pipeline, 'flush_pipeline', side_effect=ZeroDivisionError):
await self.assert_sensor_value('pipeline-exceptions', 0)
target = ('J1331+3030_2, radec pointingcal, 13:31:08.29, +30:30:33.0, '

Add a short comment here explaining why it's necessary to change the target tag.

Comment on lines +1130 to +1134
if is_calibrator:
    suffix = (' and Phase', 'all gain-calibrated calibrators')
else:
    suffix = ('', 'all target fields')
"""

You don't need this anymore; you can remove it, along with the unnecessary parameter from the definition.

Comment on lines +899 to +908
# summarise phase_nmad
refant = parameters['refant']
applied_gain_check = check_applied_gain_sensor(telstate=ts, ref_ant=refant, pol='h')
if applied_gain_check > 1:
    logger.info('Is Phase Up: Calculate NMAD Phases')

    if any(k in phase_tag for k in taglist):

        s.summarize_stats(av_corr, target_name + '_nmad_phase')

We still want to do this only for the corrected track, so you will need to add another check for it.
You can add a quick helper function similar to the one used for the noise_diode, e.g. below:
def check_is_corrected(telstate, time_range):
    obs_label = telstate.get_range('obs_label', st=time_range[0], et=time_range[1], include_previous=True)
    values, times = zip(*obs_label)
    if 'corrected' in values:
        return True
    else:
        return False

Then you can check both that applied_gain_check > 1 and that check_is_corrected returns True before carrying on with your summarize_stats call.
Also, we want to do this on the data that doesn't have the pipeline solutions applied. You will see earlier that there is an s.apply_in_place function; you need to call summarize_stats before this happens.
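
A hedged sketch of how the two checks could gate the NMAD summary; the names (check_applied_gain_sensor, check_is_corrected, ts, refant, time_range, s, av_corr, target_name) are taken from the snippets above, and the exact placement relative to s.apply_in_place is for the author to decide:

# Summarise the NMAD phases on the uncorrected data, before the pipeline
# solutions are applied, and only when both conditions hold.
applied_gain_check = check_applied_gain_sensor(telstate=ts, ref_ant=refant, pol='h')
if applied_gain_check > 1 and check_is_corrected(ts, time_range):
    logger.info('Is Phase Up: Calculate NMAD Phases')
    s.summarize_stats(av_corr, target_name + '_nmad_phase')
# ... only after this should s.apply_in_place(...) be called to apply the pipeline solutions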
