I have noticed the same thing. In particular, if I provide a matrix of all zeros, loss_shape still comes back negative. In the function:
import numpy as np
from numba import njit

@njit
def compute_softdtw_batch_channel(D, gamma):
    # D: pairwise-distance tensor of shape (batch, channels, N, M)
    batch_size = D.shape[0]
    num_channels = D.shape[1]
    N = D.shape[2]
    M = D.shape[3]
    # R holds the accumulated costs, padded with a large value on the borders
    R = np.zeros((batch_size, num_channels, N + 2, M + 2), dtype=np.float32) + 1e8
    R[:, :, 0, 0] = 0
    for j in range(1, M + 1):
        for i in range(1, N + 1):
            # soft-min over the three predecessor cells, computed with a
            # max shift (log-sum-exp form) for numerical stability
            r0 = -R[:, :, i - 1, j - 1] / gamma
            r1 = -R[:, :, i - 1, j] / gamma
            r2 = -R[:, :, i, j - 1] / gamma
            rmax = np.maximum(np.maximum(r0, r1), r2)
            rsum = np.exp(r0 - rmax) + np.exp(r1 - rmax) + np.exp(r2 - rmax)
            softmin = -gamma * (np.log(rsum) + rmax)
            R[:, :, i, j] = D[:, :, i - 1, j - 1] + softmin
    return R
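The all-zeros observation can be reproduced with a quick script like the following (a sketch with arbitrary sizes, assuming the function above and reading the R[N, M] entry that soft-DTW implementations typically return):

import numpy as np

# Reproduction sketch (arbitrary sizes): with an all-zeros distance matrix
# every alignment path has zero cost, so the soft-DTW recursion evaluates to
# -gamma * log(number of alignment paths), which is negative as soon as
# there is more than one path.
batch_size, num_channels, N, M = 1, 1, 10, 10
gamma = 0.01
D = np.zeros((batch_size, num_channels, N, M), dtype=np.float32)

R = compute_softdtw_batch_channel(D, gamma)
print(R[0, 0, N, M])  # small negative value even though all distances are zero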
I have also noticed the addition of rmax to np.log(rsum). I do not see this step in the paper, and if |np.log(rsum)| < rmax, softmin becomes negative. I was wondering whether this could be the reason loss_shape is negative.
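For what it's worth, here is a small standalone check (with made-up numbers, not taken from the repository) of that rmax term: shifting by rmax inside the log and adding it back outside is the usual log-sum-exp stabilization and gives the same soft-min as the direct formula.

import numpy as np

# Standalone sketch with made-up numbers: the shifted (rmax) form of the
# soft-min agrees with the direct log-sum-exp form, since
# log(sum_k exp(r_k)) = rmax + log(sum_k exp(r_k - rmax)).
gamma = 0.01
r0, r1, r2 = -1.3, -0.2, -2.7  # stand-ins for -R[...]/gamma

rmax = max(r0, r1, r2)
rsum = np.exp(r0 - rmax) + np.exp(r1 - rmax) + np.exp(r2 - rmax)
softmin_shifted = -gamma * (np.log(rsum) + rmax)

softmin_direct = -gamma * np.log(np.exp(r0) + np.exp(r1) + np.exp(r2))

print(softmin_shifted, softmin_direct)  # identical up to floating-point error

If that check holds, the rmax shift by itself does not change the value; any negativity would instead come from the soft-min lying below the hard min (by up to gamma * log(3) per cell).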
I have a question about my loss_shape: I don't know why the value is negative. I think that's not right, but I can't figure out the cause.