
Frequently Asked Questions

Please be sure to read the rules; your question might be answered there.

Why can't I have all of the data?

One of the common pitfalls of predictive modeling is the danger of over-fitting. Overfitting occurs when a predictive model is fit to the noise (or the stimulus correlations) in a small data set. This will tend to make predictions look better than they actually are. By holding back the response data from the validation set, we ensure that your predictive model cannot overfit to the noise in the validation set. That allows us to evaluate the predictive power of your model fairly.
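For concreteness, here is a minimal sketch (Python with NumPy, not the Challenge's actual pipeline; the toy stimulus and model are hypothetical) of why predictions must be scored on held-out data: a flexible model fit to a small, noisy data set looks better on the data it was fit to than on data it has never seen.

    import numpy as np

    rng = np.random.default_rng(0)

    stim = rng.normal(size=200)                                  # toy "stimulus"
    resp = 0.8 * stim + rng.normal(scale=1.0, size=stim.size)    # response = signal + trial noise

    fit_idx, val_idx = np.arange(100), np.arange(100, 200)       # fit / validation split

    # Deliberately over-flexible model fit to the small, noisy fit set
    coeffs = np.polyfit(stim[fit_idx], resp[fit_idx], deg=15)

    r_fit = np.corrcoef(np.polyval(coeffs, stim[fit_idx]), resp[fit_idx])[0, 1]
    r_val = np.corrcoef(np.polyval(coeffs, stim[val_idx]), resp[val_idx])[0, 1]

    print(f"correlation on the fit data:        {r_fit:.2f}")    # optimistic: partly fit to noise
    print(f"correlation on held-out validation: {r_val:.2f}")    # the honest measure of prediction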

Do I have to send you my model?

No! The Neural Prediction Challenge is designed so that you don't have to reveal any proprietary information unless you want to. We send you the fit data (stimuli and responses) and the validation data (stimuli only). You only return your predicted responses for the validation data to us. We do not need to see your model at all! However, if your model gives excellent predictions, the laboratory that contributed the data set you predicted may contact you about further collaboration.

How will I find out my prediction score, and will anyone else know?

When you submit a Challenge prediction it will be processed and evaluated by our computers. Your prediction score will be expressed as the percentage of explainable variance explained. This score will be returned to you, and it will be forwarded to the laboratory that contributed the Challenge data. (That laboratory may contact you to discuss further collaborations.) In addition, your prediction score will be posted anonymously on the neural prediction web site, using your nickname.

What is my nickname?

Your neural prediction nickname is your public face on the Challenge web site. Any predictions that you submit will be identified by your nickname. We will not reveal the identity behind your nickname to anyone, so your anonymity will be protected.

What is explainable variance?

Any neural signal can be divided into two distinct components: the deterministic part that is predictable from the stimulus, and the stochastic part that cannot be predicted from the stimulus alone. (The stochastic part reflects contributions of true noise, sampling limitations, and other uncontrolled factors.) Because the stochastic part of the response cannot be predicted from the stimulus alone, it will never be possible to predict 100% of the variance in responses. The explainable variance is the part of the variance that does not reflect noise, and is therefore potentially explainable.
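As an illustration, the explainable fraction of the response variance can be estimated from repeated presentations of the same stimulus: the trial-averaged response isolates the stimulus-driven component, once you correct for the noise that remains in the average. The sketch below (Python/NumPy) follows that logic; it is a rough illustration based on a standard signal/noise decomposition, not the Challenge's official formula, and the function name is ours.

    import numpy as np

    def explainable_fraction(trials):
        """Estimate the explainable (stimulus-driven) fraction of response variance
        from repeated presentations of the same stimulus.

        `trials` has shape (n_trials, n_timebins).  A rough sketch of a standard
        signal/noise decomposition, not the Challenge's official formula.
        """
        n_trials = trials.shape[0]
        trial_power = trials.var(axis=1, ddof=1).mean()      # average single-trial response variance
        mean_power = trials.mean(axis=0).var(ddof=1)         # variance of the trial-averaged response
        # The trial average still contains ~1/n_trials of the noise; correct for that bias.
        signal_power = (n_trials * mean_power - trial_power) / (n_trials - 1)
        return max(signal_power, 0.0) / trial_power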

Why do you use the correlation coefficient in calculating explainable variance?

The correlation coefficient (or the squared correlation, also called the percentage of variance explained and the proportional reduction in error) is insensitive to differences in the mean of two variables, and it assumes that the variables are Gaussian distributed. The first property seems undesirable for our purposes, and the second is clearly inappropriate for many neural data sets. Clearly, correlation-based measures are not the best metrics for describing the relationship between predicted and observed responses. On the other hand, most non-correlation-based measures of the relationship between predicted and observed neural responses are highly correlated with the correlation coefficient! Because most researchers are already familiar with the correlation coefficient, we use that measure in calculating the explainable and explained variance.
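As a hypothetical illustration (not the official scoring code), a correlation-based score of this kind can be computed by squaring the correlation between the predicted and observed validation responses and expressing it relative to the explainable fraction of the variance:

    import numpy as np

    def percent_explainable_variance_explained(pred, obs, explainable_frac):
        """Hypothetical helper (not the official scoring code): squared correlation
        between predicted and observed responses, expressed as a percentage of the
        explainable fraction of the response variance."""
        r = np.corrcoef(pred, obs)[0, 1]
        return 100.0 * (r ** 2) / explainable_frac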

How often can I enter the Neural Prediction Challenge?

We are still discussing what restrictions, if any, should be placed on entries. We generally believe that no restrictions should be placed on entries and that people should be allowed to enter as often as they like. On the other hand, we want to avoid any chance that the validation data might be overfit through repeated submissions.

Can I publish the data or results of my model predictions?

This is a complicated issue. The Neural Prediction Challenge is made possible by the generous contributions of raw data from several different neuroscience laboratories. Let us refer to the laboratory that contributed a particular data set as the contributing laboratory.

The only restriction on publication of Challenge data is that the data may not be published without the express prior permission of the contributing laboratory. This restriction is enforced for the protection of both you and the contributing laboratory. Obviously, the contributing laboratory has a vested interest in ensuring that their data are described accurately and completely; it would be in no one's interest to publish misinformation about the data themselves or how they were collected. In addition, some of the Challenge data may be covered by regulations or other restrictions. Because of the complexity of neural data, we believe that accuracy can only be assured if the contributing laboratory takes part in any publication of their data. We leave it to you and the contributing laboratory to negotiate the terms of publication. The contributing laboratory may simply require pre-review of the paper, they may request that you include citations to relevant papers, or in some cases they may require co-authorship. To ensure that this rule is followed, we have intentionally omitted many details about these data. We believe that Challenge data are unpublishable without these details, so if you want to publish Challenge data you will need to contact the contributing laboratory to obtain the necessary information.

On the other hand, we place no restrictions whatsoever on your publishing a theoretical paper that describes a predictive model you have used in the Neural Prediction Challenge but that does not contain data from the Challenge. You are free to publish such models just as you would have if the Challenge had never occurred. You may also cite the prediction score that your model achieved in the Challenge, as long as you cite this web site and state the name of the data set used in the prediction.

How do I contact the laboratory that contributed a particular Challenge data set?

If you submit a good prediction, they will contact you! Whenever you submit a prediction it is processed and evaluated by our computers, and the result is forwarded to the laboratory that supplied the original data. That laboratory can then contact contestants who make accurate predictions for further consultation.

Don't we already have many computational models of neurons?

There are many theoretical and computational models of neural systems that aim to account for neuronal function at a variety of levels: molecular models, compartmental models, network models, models of specific cell types, specific neural systems or specific circumstances. However, almost all of these models have been constructed to account for data collected under highly controlled experimental conditions. The aim of the Neural Prediction Challenge is to develop models that can account for sensory processing under naturalistic conditions.

Why does this contest only address models of sensory neurons?

One marked advantage of studying sensory neurons is that we can guess their general function: to process sensory input. Neurons in other regions of the brain are farther removed from the sensory input, so it is more difficult to predict their responses. However, we are in discussion with several laboratories around the world to obtain additional representative data from other neural systems.

How do I know if I won?

This is not a contest in the traditional sense, so you cannot really win. In the larger sense, no one really wins until some model explains 100% of the explainable variance, and that isn't likely to happen any time soon. On the other hand, early contestants have expressed interest in some form of acknowledgement or award for good predictions. We are looking into some form of prize or award and will make an announcement about this in the future.

Is it "neural" or "neuronal"?

We generally prefer to use "neural" to refer to more than one neuron, and "neuronal" to refer to a single neuron. But sometimes "neural" just sounds better.