How to tune parameters for lag estimation? #24
Dear Michael,
I am sorry to hear that TRENTOOL's lag estimation is causing trouble.
In my experience, the reasons may be fourfold:
1. An insufficient amount of data; yet, this typically renders the lag
estimate variable, whereas you are seeing a bias towards too short delays.
2. A low entropy (rate) of the source process, i.e. not much information
is produced in the first place, to be then transferred.
3. Wrong embedding parameters (for the target). Try to run the Ragwitz
optimization with a much larger range of tau and dim.
4. A pathological case that truly requires the use of embedded source
states instead of scalar values from the source.
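(For point 3, the Ragwitz criterion chooses the embedding dimension and delay that minimize the local one-step prediction error of the time series. This is not TRENTOOL's implementation; it is a minimal NumPy sketch of the idea, with an illustrative noisy-sine test signal and illustrative parameter ranges:)

```python
# Minimal sketch of the Ragwitz criterion: scan (dim, tau) and keep the
# embedding that gives the smallest local one-step prediction error.
import numpy as np

def ragwitz_error(x, dim, tau, k=5):
    """Mean squared one-step prediction error for embedding (dim, tau),
    using a local-constant (k-nearest-neighbour) predictor."""
    n = len(x)
    start = (dim - 1) * tau
    # Indices of embedded states; the last point is excluded so every
    # state has a successor to predict.
    idx = np.arange(start, n - 1)
    states = np.stack([x[idx - j * tau] for j in range(dim)], axis=1)
    preds = np.empty(len(idx))
    for i in range(len(idx)):
        d = np.sum((states - states[i]) ** 2, axis=1)
        d[i] = np.inf                        # exclude the point itself
        nbrs = np.argsort(d)[:k]             # k nearest neighbours in state space
        preds[i] = x[idx[nbrs] + 1].mean()   # predict next value locally
    return np.mean((x[idx + 1] - preds) ** 2)

rng = np.random.default_rng(0)
t = np.arange(400)
x = np.sin(0.3 * t) + 0.05 * rng.standard_normal(400)  # period ~21 samples
errs = {(d, tau): ragwitz_error(x, d, tau) for d in (1, 2, 3) for tau in (1, 2, 4)}
best = min(errs, key=errs.get)
print(best)
```

A too-narrow scan range can lock the search onto a poor embedding, which is why widening the tau and dim ranges is worth trying first.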
What kind of simulated processes do you use?
Best,
Michael
…On 25.07.2017 20:35, Michael J. Curry wrote:
The group I work for is trying to use transfer entropy to compare some
simulated signals, in order to eventually apply the same computational
pipeline to real data once it is available. In particular, our focus
is on estimating the lags. We've found that across a variety of simple
and complex simulated datasets, TRENTOOL consistently and severely
underestimates the delay time. Usually it picks the smallest lag from
its search range, no matter what.
This may be a problem with our choice of parameters for the estimator.
We're considering running a grid search over the parameter space, but
figured we should first ask:
1. Has this problem been observed before? Do you have any speculation
as to what caused it?
2. For successful uses of delay estimation, what parameters did
people choose?
3. When doing a grid search, are there any parameters that you think
might be more important than others?
--
----------------------------------
Prof. Dr. rer. nat. Michael Wibral
MEG Labor, Brain Imaging Center
Goethe Universität
Heinrich Hoffmann Strasse 10
60528 Frankfurt am Main
Phone: +49 69 6301 83193
Fax: +49 69 6301 83231
-----------------------------------
We had been using trajectories sampled from a set of stochastic differential equations that are supposed to simulate something like the dynamics observed when measuring EEG. In the interest of figuring out where we're going wrong, though, I've tried to simplify things. When simulating a simple AR(1) process generated from the code below (the output is just a channels x time points x trials matrix which is later converted to FieldTrip format) I can't even get the code to run due to failure to meet the autocorrelation threshold, even after quite a bit of tinkering. Are there parameters you recommend raising or lowering here? Or is this just a time series that shouldn't be expected to work?
While this problem is far simpler than the original process we were using, I think it may help me understand where we're going wrong and/or what the limitations of transfer entropy are. Thanks for your help, Michael
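(The original simulation script is not shown here. As a hypothetical stand-in for the kind of setup described, the sketch below simulates two coupled AR(1) channels with a known transfer delay in a channels x time points x trials array, and also computes an autocorrelation decay time, the quantity that, as we understand it, TRENTOOL's ACT threshold check is based on. All coefficients, the delay, and the 1/e criterion are illustrative assumptions, not values from the thread:)

```python
# Hypothetical stand-in for the simulation discussed above: channel 1 is an
# AR(1) source, channel 2 is an AR(1) target driven by the source at a
# known delay; output shape is channels x time points x trials.
import numpy as np

rng = np.random.default_rng(42)
n_time, n_trials, delay = 2000, 10, 5

data = np.zeros((2, n_time, n_trials))
for tr in range(n_trials):
    x = np.zeros(n_time)
    y = np.zeros(n_time)
    ex = rng.standard_normal(n_time)
    ey = rng.standard_normal(n_time)
    for t in range(1, n_time):
        x[t] = 0.5 * x[t - 1] + ex[t]
        drive = x[t - delay] if t >= delay else 0.0
        y[t] = 0.5 * y[t - 1] + 0.8 * drive + 0.1 * ey[t]
    data[0, :, tr] = x
    data[1, :, tr] = y

def act(sig, thresh=1.0 / np.e):
    """Autocorrelation decay time: first lag at which the normalized
    autocorrelation falls below 1/e (illustrative definition)."""
    sig = sig - sig.mean()
    ac = np.correlate(sig, sig, mode='full')[len(sig) - 1:]
    ac = ac / ac[0]
    below = np.nonzero(ac < thresh)[0]
    return int(below[0]) if below.size else len(sig)

# Sanity check: the lagged cross-correlation between source and target
# should peak at the simulated delay.
x, y = data[0, :, 0], data[1, :, 0]
xc = [np.corrcoef(x[:n_time - L], y[L:])[0, 1] for L in range(20)]
print('ACT(x) =', act(x), ' peak lag =', int(np.argmax(xc)))
```

If even a ground-truth delay like this one is not recovered, or the ACT check rejects such data, that points at the parameter choices (ACT threshold, embedding ranges, scanned lag range) rather than at the signals themselves.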