Statistics
During our simulation runs, we saw that the α of the protein is not the only important quantity. In practice it is limited by a series of other factors, especially complex formation. Moreover, the timing of production is more strongly coupled to the basal expression, which reflects the strength of the promoter, than to the connection with the sender, which is what α itself represents. A good analysis of the system therefore has to be N-dimensional.
In addition, since we are working at a scale with strong stochastic influence, the statistics involved are far more complicated than for deterministic events. Combining both ideas, we propose here an analysis of the system that could shed more light on the results of work with Quorum Sensing.
Our proposal is to use a simple computational method: during the integration of the ODEs, we apply a stochastic perturbation at every step. With this simple approach we can run a Monte Carlo simulation to establish a confidence interval against which we can compare our data. The deviation captured this way is intrinsic to the process, not simply measurement error, and is therefore much harder to treat.
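As an illustration only, the sketch below shows this idea in Python: a toy protein-production ODE (the equation, parameter names, and noise amplitude are placeholders, not the actual Quorum Sensing model) is integrated with Euler steps, a small random perturbation is added at every step, and many repeated runs give a 95% confidence band.

```python
import numpy as np

# Minimal sketch of the proposed scheme, assuming an illustrative ODE
# dP/dt = basal + alpha*S - delta*P for a reporter protein P. The real
# Quorum Sensing equations and parameter values would replace these.

def perturbed_trajectory(basal=0.1, alpha=1.0, S=1.0, delta=0.5,
                         noise=0.05, dt=0.01, t_end=20.0, rng=None):
    """Euler integration with a stochastic perturbation at every step."""
    rng = rng or np.random.default_rng()
    n = int(t_end / dt)
    P = np.zeros(n + 1)
    for i in range(n):
        drift = basal + alpha * S - delta * P[i]
        # perturbation scaled by sqrt(dt), as in an Euler-Maruyama step
        P[i + 1] = P[i] + drift * dt + noise * np.sqrt(dt) * rng.normal()
    return P

def monte_carlo_envelope(n_runs=500, **kwargs):
    """Run many perturbed trajectories and return a 95% confidence band."""
    rng = np.random.default_rng(0)
    runs = np.array([perturbed_trajectory(rng=rng, **kwargs)
                     for _ in range(n_runs)])
    lower = np.percentile(runs, 2.5, axis=0)
    upper = np.percentile(runs, 97.5, axis=0)
    return lower, upper

lower, upper = monte_carlo_envelope()
# Experimental time courses can then be checked against [lower, upper].
```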
This is a well-known approach, especially in ecology, for comparing simulations with data. The key step, however, is a dimensional analysis of the models, in which we adjust parameters and variables to find the stable states of the system for each parameter of interest, such as the basal expression or the ratio between complex formation and dissociation, which limits the entire system. We then use the Monte Carlo simulation to create small perturbations around these values and combine them with the variations of the system itself. This yields a combination in an N-dimensional space that not only lets us test our data but, given a few pieces of chemical data, also lets us predict an entire family of systems.
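A second sketch, again with hypothetical parameter names and nominal values and reusing the helpers from the previous snippet, shows how such a scan of the N-dimensional parameter space could be assembled: each parameter of interest is perturbed around its nominal value, and the resulting steady-state confidence band is recorded for comparison with the data.

```python
import numpy as np
# assumes perturbed_trajectory / monte_carlo_envelope from the sketch above

# Hypothetical nominal values for the parameters of interest; the actual
# numbers would come from the chemical data available for the system.
nominal = {"basal": 0.1, "alpha": 1.0}
spread = 0.2          # relative size of the perturbation around each value
n_samples = 200       # sampled points in the N-dimensional parameter space

rng = np.random.default_rng(1)
results = []
for _ in range(n_samples):
    # perturb each parameter independently around its nominal value
    params = {k: v * (1 + spread * rng.uniform(-1, 1))
              for k, v in nominal.items()}
    lower, upper = monte_carlo_envelope(n_runs=100, **params)
    # use the late-time band as a rough estimate of the stable state
    results.append({**params,
                    "steady_low": lower[-1],
                    "steady_high": upper[-1]})

# `results` maps each sampled point in parameter space to the predicted
# confidence band of its steady state, which can be compared to the data.
```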