RQA comparisons

Everything about quantification of recurrence plots and recurrence networks.
Nikita
Junior
Posts: 3
Joined: Fri Apr 3, 2009 17:58
Affiliation (Univ., Inst., Dept.): University of Cincinnati
Location: Cincinnati

RQA comparisons

Post by Nikita »

Hi everyone,

How would one go about comparing the level of determinism between two conditions of a behavioral experiment? First, I used the FNN and AVD methods to find the embedding dimension and the appropriate delay for each trial. It turned out that they are fairly uniform across all trials, so I decided to use a fixed dimension and delay for further analysis. The data are quite noisy, and it seems to me that the radius is crucial for a valid description of the changes in %DET. Should I use mean or maximum normalization? What is the best norm? Is it appropriate to compare %DET while holding %REC constant? Would a fixed percentage of nearest neighbors be more appropriate for noisy behavioral data?

Thank you,
Nikita
stefan
Specialist
Posts: 2
Joined: Mon Apr 6, 2009 18:20
Affiliation (Univ., Inst., Dept.): Interdisciplinary Centre for Dynamics of Complex System
Location: Potsdam, Germany

Re: RQA comparisons

Post by stefan »

Hi Nikita,

One question up front: what kind of behavioural data are we talking about here? MEG, EEG, fMRI, RT, SCM? It does not necessarily change my answer, but I'd be curious.

Anyway, when comparing two datasets, the core issue is not to compare apples and oranges. What I mean is: however you come up with your RQA measures, make sure the way you get them is identical in both datasets (conditions). So it is good that you use a constant dimension and delay. Depending on the kind of data you have, you might not even need those.

As you said, the radius epsilon affects the outcome of the analysis, so do make sure you use the same one for both conditions. That need not be a fixed absolute value (I'd even avoid that) but rather something relative to the data. If the phase-space (PS) diameter in condition A is 1 and in condition B it is 100, and you use a fixed epsilon of, say, 0.1, it is quite probable that this epsilon will produce a lot of structures in the RP of condition A but hardly any in B. As a result you could end up saying: "In condition A the brain (which in the end produces all behaviour) is more deterministic than in B." That might be downright wrong; you simply mistreated the data.
In this paper ( http://dx.doi.org/10.1140/epjst/e2008-00833-5 ) we found that a relative epsilon of about 5% of the maximal PS diameter worked quite well for signal detection. Admittedly that was based on a very limited body of data, but my experience with other datasets showed that it's quite a good rule of thumb.
The core reasoning behind choosing epsilon as something relative to the data is to establish a frame for a fair comparison. (Simply put: the RPs were constructed equally, hence all differences are indeed due to the data.) The same logic applies to using a fixed RR (%REC) or a fixed number of neighbours. I am not quite sure, though, whether any particular norm is more appropriate for behavioural data.
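
To make that concrete, here is a rough MATLAB sketch of the 5%-of-diameter rule (my illustration only, not code from the paper; the time series x and the example values of m and tau are assumptions you would replace with your own):

Code:

% Sketch: epsilon as 5% of the maximal phase-space (PS) diameter.
% x is a time series (vector); m and tau are the chosen dimension/delay.
x = x(:);
m = 3; tau = 2;                       % example values, not from the paper
N = length(x) - (m-1)*tau;            % number of embedded points
X = zeros(N, m);
for k = 1:m
    X(:,k) = x((1:N) + (k-1)*tau);    % time-delay embedding
end
dmax = 0;                             % maximal pairwise distance
for i = 1:N
    d = sqrt(sum(bsxfun(@minus, X, X(i,:)).^2, 2));
    dmax = max(dmax, max(d));
end
epsilon = 0.05 * dmax;                % the 5% rule of thumb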

HTH,

stefan
---
No Microsoft products, child labor, or animal testing were used in the creation of this post.
Nikita
Junior
Posts: 3
Joined: Fri Apr 3, 2009 17:58
Affiliation (Univ., Inst., Dept.): University of Cincinnati
Location: Cincinnati

Re: RQA comparisons

Post by Nikita »

Stefan!

Thank you very much for your reply and the reference! I'm working with time series of force that people produced with their index finger. These tasks are called isometric force production and are mainly used in motor-control research.
stefan wrote: Anyway, when comparing two datasets, the core issue is not to compare apples and oranges. What I mean is: however you come up with your RQA measures, make sure the way you get them is identical in both datasets (conditions). So it is good that you use a constant dimension and delay. Depending on the kind of data you have, you might not even need those.

- What you are suggesting makes perfect sense to me. It seems plausible to keep the dimension and delay constant for relatively clean data. However, setting the delay and dimension constant in noisy data can sometimes be problematic, right? The dimension is not so much of a problem, because in my limited experience it's usually more or less stable across participants. The delay (calculated with the AVD method) is more variable, though, and I've been using individualized delays for each participant. It might seem that this leads to comparing apples and oranges, but I think it does not, as long as the AVD criteria for delay selection are uniform. Am I thinking right?
stefan wrote: In this paper ( http://dx.doi.org/10.1140/epjst/e2008-00833-5 ) we found that a relative epsilon of about 5% of the maximal PS diameter worked quite well for signal detection. Admittedly that was based on a very limited body of data, but my experience with other datasets showed that it's quite a good rule of thumb.
- Thank you! It makes sense to base the radius on the phase-space diameter. There could be a difference in the size of the PS between the conditions, leading to misrepresented results. So I can probably set the radius as a percentage of the standard deviation of the time series (SD is a good estimate of PS size, right?). Right now I have ended up using a fixed RR across the conditions, and I have also tried a number of other combinations of parameters (uniform and individualized). The basic pattern of results is still the same: %LAM is greater in one condition than in the other.

Nikita
stefan
Specialist
Posts: 2
Joined: Mon Apr 6, 2009 18:20
Affiliation (Univ., Inst., Dept.): Interdisciplinary Centre for Dynamics of Complex System
Location: Potsdam, Germany

Re: RQA comparisons

Post by stefan »

Hi Nikita,

sorry for taking a while to reply. I hadn't checked the "Notify me on reply" button; I have now.
Nikita wrote: However, setting the delay and dimension constant in noisy data can sometimes be problematic, right?
Well, yes and no. Of course the best would be to compute dimension/delay separately for each measurement. But on the one hand you somehow have to cope with reality (i.e., what is doable), and on the other hand delay and dimension are estimates; they are not necessarily right. I therefore think it is legitimate to use something like the median of the estimated dimension/delay values. As a justification, I'd reason along these lines: with every recording we take a snapshot of the system, not necessarily a good one. By repeating this we get a good approximation of what the actual dimension/delay of the system might be, assuming of course that the system does not change significantly, which I think can be assumed for a human brain, at least in a very controlled setup.
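
Just to illustrate the pooling idea, a rough MATLAB sketch (the 1/e autocorrelation criterion here is a simple stand-in, not the AVD method, and 'trials' is a hypothetical cell array holding your per-trial series):

Code:

% Pool per-trial delay estimates with the median.
nTrials = numel(trials);
delays = zeros(1, nTrials);
maxlag = 100;                          % assumed upper bound for the search
for t = 1:nTrials
    x = trials{t}(:);
    c = zeros(1, maxlag);
    for lag = 1:maxlag
        r = corrcoef(x(1:end-lag), x(1+lag:end));
        c(lag) = r(1,2);               % autocorrelation at this lag
    end
    idx = find(c < exp(-1), 1);        % first drop below 1/e
    if isempty(idx), idx = maxlag; end
    delays(t) = idx;
end
tau = median(delays);                  % one delay for all trials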

A core point, though, is to keep the values constant for each individual subject. In my experience people are pretty idiosyncratic, and cross-subject comparisons tend to mess things up. (I use quite different data, though.)
Nikita wrote: SD is a good estimate of PS size, right?
Yes, for a one-dimensional system conforming to a standardised normal distribution. Other than that, things get tricky and require, e.g., PCA or other means.
So, in reality: no.
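
If you want to see what I mean, here is a crude MATLAB sketch (my own illustration; X is an N-by-m embedded trajectory as in the sketch further up): look at the extent of the point cloud along its principal axes, obtained via SVD, instead of the SD of the raw signal.

Code:

% Extent of the embedded trajectory along its principal axes.
Xc = bsxfun(@minus, X, mean(X, 1));     % centre the point cloud
[~, ~, V] = svd(Xc, 'econ');            % principal directions (base-MATLAB PCA)
P = Xc * V;                             % projections onto those directions
extent = max(P, [], 1) - min(P, [], 1); % range along each principal axis
% max(extent) is a crude proxy for the PS diameter; the SD of the raw
% series ignores this geometry entirely.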
Nikita wrote: I have ended up using a fixed RR across the conditions, and I have also tried a number of other combinations of parameters (uniform and individualized). The basic pattern of results is still the same: %LAM is greater in one condition than in the other.
Using a fixed RR is, in my opinion, a very good thing. The only drawback is increased computing time; if you can afford it, that's fine. And if %LAM is larger in one condition across a range of parameters, that is actually the best you can hope for, unless you have very specific expectations. In the end, when tackling known data/paradigms with new or different techniques, the best you can hope for is consistency in results. You will of course have to cross-validate your findings somehow, but depending on your paradigm that might be difficult or infeasible, since relating the RQA measures to other well-known measures (say, the median of the data or something like that) is not necessarily possible and hardly ever trivial.
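
To spell out what fixing the RR amounts to (a rough sketch with made-up variable names; X is again the embedded trajectory): epsilon is simply the targetRR-quantile of the pairwise distance distribution.

Code:

% Choose epsilon so that the recurrence rate is about targetRR.
targetRR = 0.05;                       % e.g. 5% recurrence
N = size(X, 1);
D = zeros(N);
for i = 1:N
    D(:,i) = sqrt(sum(bsxfun(@minus, X, X(i,:)).^2, 2));
end
D(1:N+1:end) = [];                     % drop self-distances; D is now a vector
d = sort(D(:));
epsilon = d(max(1, round(targetRR * numel(d))));
% an RP thresholded at this epsilon has RR of about targetRR by construction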

hth,
stefan
---
No Microsoft products, child labor, or animal testing were used in the creation of this post.
Nikita
Junior
Posts: 3
Joined: Fri Apr 3, 2009 17:58
Affiliation (Univ., Inst., Dept.): University of Cincinnati
Location: Cincinnati

Re: RQA comparisons

Post by Nikita »

Thank you very much, Stefan! I really like your point about cross-validating the results with other methods!
Physaz
Junior
Posts: 5
Joined: Mon Dec 9, 2013 18:45
Affiliation (Univ., Inst., Dept.): University of Tours
Location: Tours, Central Region, France
Research field: Nonlinear Dynamics and Signal Processing

RQA comparisons

Post by Physaz »

Hello Everybody,

I am currently working on recurrence plot quantification and I have a question. Reading the article by L.L. Trulla et al. (1996), for d=3, tau=1 and epsilon=0.5 it was possible to calculate the %determinism versus 'a' of the embedded logistic map, and the plot in the paper was clear. However, in the article by N. Marwan et al. (2002), entitled 'Recurrence-plot-based measures of complexity and their application to heart-rate-variability data', for the same parameter values (d=3, tau=1, epsilon=0.5) the %determinism versus 'a' of the logistic map does not look the same as the one in Trulla et al. (1996). Can anyone lend me a hand to understand this?
Besides, the initial value x(0) of the logistic time series is not specified in either article; does anybody have an idea about this?

Thank you in advance.
Norbert
Expert
Posts: 194
Joined: Wed Jan 4, 2006 11:03
Affiliation (Univ., Inst., Dept.): Potsdam Institute for Climate Impact Research, Germany
Location: Potsdam, Germany

Re: RQA comparisons

Post by Norbert »

Hi,

the difference between the two papers is that in the first one the parameter a was slightly changed at each iteration step, whereas in the other paper a separate time series was calculated for each value of a and analysed by RQA.

The initial value should not be so important. You could start, e.g., with x(0) = 0.7.

Best regards
Norbert
Physaz
Junior
Posts: 5
Joined: Mon Dec 9, 2013 18:45
Affiliation (Univ., Inst., Dept.): University of Tours
Location: Tours, Central Region, France
Research field: Nonlinear Dynamics and Signal Processing

Re: RQA comparisons

Post by Physaz »

Norbert wrote: Hi,

the difference between the two papers is that in the first one the parameter a was slightly changed at each iteration step, whereas in the other paper a separate time series was calculated for each value of a and analysed by RQA.

The initial value should not be so important. You could start, e.g., with x(0) = 0.7.

Best regards
Norbert
Hi Norbert,

Thank you for this feedback; I really appreciate your work on complexity-analysis techniques. Yes, I understood that in the earlier paper the recurrence plots were obtained by changing 'a' inside the iteration, and that in your paper you showed, for four different values of 'a', the different time series and their corresponding reconstructed recurrence plots. However, when you then calculated the RQA, you did not calculate it for each of the four plots separately; if you had, you would have got four discrete values of DET. Rather, you calculated the RQA, for instance DET versus 'a', as in the 1996 paper: you varied 'a', and thus the time series; for each time series the recurrence plot (d=3, tau=1, epsilon=0.5) was calculated; you calculated the DET value from each plot; and finally you plotted DET versus 'a', Lmax versus 'a', and so on. That is the same procedure as in the earlier paper, yet I found that the results differ. Can you help me understand this?
Thank you, Norbert.
Norbert
Expert
Posts: 194
Joined: Wed Jan 4, 2006 11:03
Affiliation (Univ., Inst., Dept.): Potsdam Institute for Climate Impact Research, Germany
Location: Potsdam, Germany

Re: RQA comparisons

Post by Norbert »

Hi,

no, this is not correct. We fixed the control parameter a, as I wrote, and then calculated a time series for this one value of a. For this time series we calculated one recurrence plot, and from this recurrence plot the RQA measures. The value of a was not changed here; it is the same throughout. Thus, for this value of a we get one value of DET, L, etc. Next, we increased the value of a and repeated all the calculations, getting another value of DET, L, etc. Therefore, we get the RQA measures for different values of a, but each is calculated from a complete time series for a single value of a. This is completely different from the approach where a changes at each iteration step. You can simply test this by plotting the bifurcation diagram for the two approaches.

Here is some MATLAB code which should make this clear:

Code:

% range of control parameter
a = 3.5:0.0005:4;

% create a logistic map time series of length 2000 for each value of a
x = zeros(2000, length(a));
for i = 1:length(a)
    x(1,i) = 0.666;
    for j = 2:2000
        x(j,i) = a(i) * x(j-1,i) * (1 - x(j-1,i));
    end
end

% remove transients
x(1:1000,:) = [];

% plot bifurcation diagram
figure
plot(a, x, '.k', 'MarkerSize', 0.1)

% calculate the RQA measures using the CRP Toolbox
for i = 1:length(a)
    Y(i,:) = crqa(x(:,i), 3, 1, .1, 'euc', 'sil');
end

% plot DET (second column of the crqa output)
figure
plot(a, Y(:,2))

Best regards
Norbert
Physaz
Junior
Posts: 5
Joined: Mon Dec 9, 2013 18:45
Affiliation (Univ., Inst., Dept.): University of Tours
Location: Tours, Central Region, France
Research field: Nonlinear Dynamics and Signal Processing

Re: RQA comparisons

Post by Physaz »

Okay, I got your point. Thanks a lot for the helpful explanation.

Regards.
Physaz
Junior
Posts: 5
Joined: Mon Dec 9, 2013 18:45
Affiliation (Univ., Inst., Dept.): University of Tours
Location: Tours, Central Region, France
Research field: Nonlinear Dynamics and Signal Processing

Re: RQA comparisons

Post by Physaz »

Mr. Norbert, is there an advantage to computing the RQA separately for each fixed 'a' over computing the RQA values from one time series iterated through different 'a' values, if in the end both approaches are able to detect bifurcations?

Thanks a lot.
Norbert
Expert
Posts: 194
Joined: Wed Jan 4, 2006 11:03
Affiliation (Univ., Inst., Dept.): Potsdam Institute for Climate Impact Research, Germany
Location: Potsdam, Germany

Re: RQA comparisons

Post by Norbert »

Increasing a at each iteration step causes abrupt bifurcations, partly at unpredictable time points. You can see this by comparing the bifurcation diagrams derived from the two approaches. But usually we are more interested in the more subtle transitions; therefore, calculating a time series for a fixed value of a is better in this case. If we are only roughly interested in the subtle transitions but need one time series covering different dynamical characteristics, the changing-a approach is fine. You see, it depends on the question.
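
For illustration, here is a minimal sketch of the changing-a variant, to compare with the fixed-a code above (the step size and initial value are just examples):

Code:

% changing-a variant: a is increased a little at every iteration step,
% giving one long time series that drifts through the bifurcations
a = 3.5:0.0005:4;                      % a now changes along the series
x = zeros(1, length(a));
x(1) = 0.666;
for j = 2:length(a)
    x(j) = a(j) * x(j-1) * (1 - x(j-1));
end

% compare this diagram with the fixed-a one: transitions appear abruptly
% and at partly shifted positions
figure
plot(a, x, '.k', 'MarkerSize', 1)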
Physaz
Junior
Posts: 5
Joined: Mon Dec 9, 2013 18:45
Affiliation (Univ., Inst., Dept.): University of Tours
Location: Tours, Central Region, France
Research field: Nonlinear Dynamics and Signal Processing

Re: RQA comparisons

Post by Physaz »

Thank you, Mr. Norbert.