PROCESS

Details of the modelling work
Background
We have built a library of endogenous promoters of various strengths in Pseudomonas fluorescens Pf-5, and characterized their strength with a fluorescent reporter gene, firefly luciferase. With these data in hand, we asked which elements determine promoter strength. Strong composite promoters usually bind RNA polymerase with high affinity through elements such as the −35 and −10 sequences, so we wondered how a promoter's sequence relates to its strength. We therefore used the fluorescence data, together with the promoter sequences, to construct a predictive model. Our aims are:
1. To facilitate promoter engineering of Pseudomonas fluorescens at new sites;

2. To reveal the correlation between promoter weights and promoter strength;

3. To improve the parameter settings for promoter-strength prediction in species other than E. coli, using the position-weight method;

4. To provide a reference for promoter-strength prediction modelling in other microbes.
Assumptions
1. It is assumed that promoter strength is determined only by the core DNA sequence. In other words, the DNA sequence uniquely determines the promoter strength.

2. It is assumed that the laboratory environment has no impact on the test results. In other words, our test data fully reflect promoter strength.

3. It is assumed that there is a simple monotonic relationship between the measured fluorescence and promoter strength: the larger the fluorescence value, the stronger the promoter.
Modelling Process
DNA Sequence Encoding
The 5’-3’ region of each promoter comprises 70 bp of DNA, and we have so far obtained 25 promoter sequences. To establish a mathematical model we need to encode these 70 bases. Note that there are only four types of bases (A, T, C, and G), and simply counting each type is not enough to reflect the interactions between adjacent bases.[2] Thus, we consider three consecutive bases at a time (Fig 1).

Fig 1. Coding method of triple bases
We encode each three-base sequence as a quaternary number (A, T, C, G represent the quaternary digits 0, 1, 2, 3), and by running a C++ program (the full code is shown in the appendix) we obtain a 64-dimensional vector, indexed from AAA to GGG, that records the frequency of appearance of each triple in the DNA sequence.
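The full program lives in the appendix and is not reproduced on this page; the following is only a minimal sketch of the encoding step. It assumes a sliding window over overlapping triples and normalisation by the number of triples, and the function names are ours. For example, the triple ATC maps to the quaternary value 0·16 + 1·4 + 2 = 6.

```cpp
#include <array>
#include <iostream>
#include <string>

// Map a base to its quaternary digit: A=0, T=1, C=2, G=3.
int baseDigit(char b) {
    switch (b) {
        case 'A': return 0;
        case 'T': return 1;
        case 'C': return 2;
        case 'G': return 3;
    }
    return -1; // unexpected character
}

// Encode a promoter sequence as a 64-dimensional frequency vector:
// component k counts the triple whose quaternary value is k (AAA=0 ... GGG=63).
std::array<double, 64> encodePromoter(const std::string& seq) {
    std::array<double, 64> freq{};               // zero-initialised counts
    const std::size_t n = seq.size();            // 70 bp -> 68 overlapping triples
    for (std::size_t i = 0; i + 3 <= n; ++i) {
        int k = 16 * baseDigit(seq[i]) + 4 * baseDigit(seq[i + 1])
              + baseDigit(seq[i + 2]);           // quaternary value of the triple
        freq[k] += 1.0;
    }
    for (double& f : freq) f /= (n - 2);         // counts -> frequencies (assumption)
    return freq;
}

int main() {
    auto v = encodePromoter("ATCGGATC");         // toy sequence, not a real promoter
    std::cout << v[6] << '\n';                   // frequency of "ATC" (index 0*16+1*4+2)
}
```

Stacking the 25 encoded vectors row by row would then yield the 25×64 matrix used below.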
After encoding, every DNA sequence is transformed into a 64-dimensional vector. The complete data set is stored in the file “data.xlsx”. For example, the vectors of the promoters ampC and araA are shown below:

Table 1. Coded vector schematic table
At the end of the encoding step we obtain a 25×64 matrix that reflects the features of each promoter's DNA sequence. This encoding is closely tied to the characteristics of the DNA sequence: it captures both the order of the bases and their local detail. With this matrix, a mathematical model can be established.
Model establishment and solution
To simplify the problem, we assume that the connection between the encoding matrix and promoter strength is linear; in other words, there exists a vector \({x}\) satisfying the equation \({y=Ax}\).
Through repeated experiments we measured 24 values in total; the 22 valid values are shown below. Each value is the average of three parallel experiments.

Table 2. Corresponding table for natural logarithm processing of data
The spread of the original data is too wide, so we take the natural logarithm of the original data as the components of the vector \({y}\). Our task is now to recover the linear-relationship vector \({x}\) from the linear transformation matrix \({A}\) and the result vector \({y}\); \({A}\), \({x}\) and \({y}\) satisfy \({y=Ax}\), or formally \({x=A^{-1}y}\).
This is a linear mapping with 64 variables but only 22 data points. In other words, it is an underdetermined system that cannot be solved exactly as a set of inhomogeneous linear equations. We therefore adopt an iterative sparse-solution algorithm for underdetermined equations.[3]
This algorithm effectively recovers deterministic signals from fewer measured values. Our goal is to make the recovery as precise as possible. Consider the following optimization problem:
$$\min \left \| x \right \|_{0}\quad \text{s.t.}\;\; y=Ax$$

\(\left \| x \right \|_{0}\) is called the \(l^{0}\) norm; it is the number of nonzero components of the vector \(x\). This value is equivalent to the following expression:
$$\lim_{\varepsilon \rightarrow 0}\sum_{i=1}^{64}\frac{x_{i}^{2}}{x_{i}^{2}+\varepsilon },\qquad \text{because}\quad \frac{x_{i}^{2}}{x_{i}^{2}+\varepsilon }\rightarrow \begin{cases} 0, & x_{i}=0 \\ 1, & x_{i}\neq 0 \end{cases}\quad (\varepsilon \rightarrow 0)$$

The following iterative steps are obtained from the perspective of numerical calculation:
⑴ First, set the initial values \({x^{(0)}=0}\), \({\varepsilon ^{(0)}=1}\).
⑵ Then iterate: compute the current values \({x^{(n+1)}}\), \({\varepsilon ^{(n+1)}}\) from the previous values \({x^{(n)}}\), \({\varepsilon ^{(n)}}\) with these formulas:
$$x^{(n+1)}=\arg \min\left \{ \sum_{i=1}^{64}\frac{x_{i}^{2}}{(x_{i}^{(n)})^{2}+\varepsilon^{(n)} } \right\}\quad \text{s.t.}\;\; y=Ax;$$

$$\varepsilon ^{(n+1)}=\min\left \{ \varepsilon ^{(n)} ,\frac{\min_{i}(x_{i}^{(n)})}{N}\right \}$$

⑶ Finally, once \({\varepsilon ^{(n)}}\) falls below a given bound such as 1e-6, stop the program and take the current \({x^{(n)}}\) as the solution vector of the underdetermined system.
The explicit iterative form of the above steps is given below:
To find the minimizer we use the method of Lagrange multipliers, introducing a 22-dimensional vector \({\lambda }\) and forming the Lagrangian

$$ f(x,\lambda )=\sum_{i=1}^{64}\frac{x_{i}^{2}}{(x_{i}^{(n)})^{2}+\varepsilon ^{(n)}}+\lambda ^{T}(Ax-y)$$

Setting the partial derivative with respect to each component of \({x}\) to zero,

$$\frac{\partial f}{\partial x_{i}}=0,$$

we find that the vector \({x}\) can be iterated by this formula:
$$x^{(n+1)}=D^{(n)}{A}'(AD^{(n)}{A}')^{-1}y,$$

where the diagonal matrix \({D^{(n)} }\) is constructed as follows:

$$D^{(n)}_{ij}= \begin{cases} 0, & i\neq j \\ (x_{i}^{(n)})^{2}+\varepsilon ^{(n)}, & i=j \end{cases}$$
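Before turning to the numerical results, here is a minimal sketch of this iteration. It is an illustration in C++ (assuming the Eigen linear-algebra library is available), not the MATLAB script from the appendix, and it reads \(\min_i(x_i^{(n)})\) as the smallest magnitude \(|x_i^{(n)}|\).

```cpp
#include <Eigen/Dense>   // assumes the Eigen linear-algebra library
#include <algorithm>

// Iteratively reweighted sparse solution of the underdetermined system
// y = A x, following x_{n+1} = D_n A' (A D_n A')^{-1} y with
// D_n = diag((x_i^{(n)})^2 + eps_n). A sketch, not the appendix script.
Eigen::VectorXd sparseSolve(const Eigen::MatrixXd& A, const Eigen::VectorXd& y,
                            double tol = 1e-6, int maxIter = 500) {
    const int N = static_cast<int>(A.cols());         // 64 in our model
    Eigen::VectorXd x = Eigen::VectorXd::Zero(N);     // x^(0) = 0
    double eps = 1.0;                                 // eps^(0) = 1
    for (int it = 0; it < maxIter && eps >= tol; ++it) {
        // Diagonal of D_n, stored as a vector.
        Eigen::VectorXd d = (x.array().square() + eps).matrix();
        Eigen::MatrixXd ADA = A * d.asDiagonal() * A.transpose();
        x = d.asDiagonal() * A.transpose() * ADA.ldlt().solve(y);
        // eps^(n+1) = min{ eps^(n), min_i |x_i^(n)| / N }
        eps = std::min(eps, x.cwiseAbs().minCoeff() / N);
    }
    return x;
}
```

With the 25×64 encoding matrix restricted to the 22 valid promoters as \(A\) and the log-fluorescence values as \(y\), sparseSolve(A, y) would return the sparse weight vector \(x\).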
With the help of MATLAB, we wrote a program to compute the final value of \({x}\) (the full script and the vector \({x}\) are shown in the appendix); the corresponding \({y}\) values and the measured values are compared below:

Table 3. Corresponding table of equation solution and test data

The values not in bold were used to solve the equation, so they match exactly. The comparison of the values in bold shows that the fit is good, with an average fitting accuracy above 95%.
The good quality of the solution confirms the basic soundness of the assumed linear relationship. We will therefore use a neural network model to further improve the accuracy and stability of the prediction.
Model Improvement: Algorithm Optimization
From the sparse-solution results we can conclude that there is indeed a linear relationship between the 64 triple-base frequencies (AAA to GGG) and promoter strength. Note that we used merely 22 data sets to approximate the sparse solution of a system with 64 unknowns, with a fitting accuracy above 95%; as more data are provided, the fitting accuracy of the model should increase considerably.
However, there are still many problems with this algorithm:
(1) The known data cannot be reused. The sparse solution is uniquely determined once the data are fixed, so the data are not fully exploited, and it is difficult to improve the accuracy of the original results.
(2) The algorithm does not adapt well to extensions of the data set. Every time a new group of data is added, all of the matrix computations change noticeably, so the solution is unstable under data updates.
Building on the linear structure of the original algorithm, we add more parameters to make it more stable and reliable. The improved algorithm adopts a neural-network structure and raises the utilization rate of the data through supervised learning. The construction of the neural network is shown in Figure 2:

Fig 2. Schematic diagram of the neural network structure
There are 6 neurons in the hidden layer, and each is connected to 32 of the input terminals. The correspondence is as follows: the binary representation of the integers 1 to 63 has exactly 6 bits (padded with leading zeros where needed), and input \({i}\) is connected to hidden neuron \({j}\) when the \({ j^{th}}\) bit of \({i}\)'s binary representation is 1. For example, counting bits from the least significant end, input 5 (binary 000101) is connected to hidden neurons 1 and 3. Input 64 is treated separately. We then assign a weight \({\omega _{ij}}\) to each edge between input \({i}\) and hidden neuron \({j}\), and a weight \({p_{i}}\) to each edge between hidden neuron \({i}\) and the output terminal.
The final output can be obtained by the following formula:
$$y_{output}=\sum_{i=1}^{6}y_{i}p_{i},\qquad y_{i}=\sum_{j=1}^{64}\omega _{ji}x_{j},$$

where \({y_{i}}\) denotes the input (equivalently, the output) of hidden neuron \({i}\).
We use the gradient descent method to determine the learning direction. The objective function is the error between the output and the ideal value:
$$e=\frac{1}{2}(y_{output}-\bar{y})^{2}$$

Our goal is to minimize this function over successive iterations. The difference between the parameter vectors \({ \omega =(\omega _{11},...,\omega _{64\,6})}\), \({p=(p_{1},...,p_{6}) }\) and their next generation is calculated by this formula:
$$ \Delta \omega =-v\cdot \nabla _{\omega }e,\qquad \Delta p=-v\cdot \nabla _{p}e,$$

where \({v}\) is the learning rate. We use the chain rule to calculate the gradient explicitly:
$$\frac{\partial e}{\partial \omega_{ij} }= (y_{output}-\bar{y})\frac{\partial y_{output}}{\partial \omega _{ij}}=(y_{output}-\bar{y})x_{i}p_{j},$$

$$\frac{\partial e}{\partial p_{i}}= (y_{output}-\bar{y})\frac{\partial y_{output}}{\partial p_{i}}=(y_{output}-\bar{y})y_{i}.$$

A reasonable learning rate accelerates convergence while still guaranteeing it. After several convergence tests we set the learning rate to \({0.00045\cdot \left | y_{output}-\bar{y} \right |}\). This completes our mathematical model.
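Our training program is not reproduced here; the sketch below shows, under stated assumptions, how the pieces above fit together in C++. The struct name, the small random initialisation, the decision to connect input 64 to every hidden neuron (the text only says input 64 is treated separately), and counting bits from the least significant end are all our own illustration.

```cpp
#include <array>
#include <cmath>
#include <cstdlib>

// Sketch of the 64-6-1 network described above; an illustration, not the
// team's code. w[j][i] is the weight from input j+1 to hidden neuron i+1.
struct PromoterNet {
    std::array<std::array<double, 6>, 64> w{};
    std::array<double, 6> p{};   // hidden neuron i+1 -> output

    PromoterNet() {              // small random initial weights (assumption)
        for (auto& row : w)
            for (double& val : row) val = 0.01 * (std::rand() / double(RAND_MAX) - 0.5);
        for (double& val : p) val = 0.01 * (std::rand() / double(RAND_MAX) - 0.5);
    }

    // Input j (1..63) feeds hidden neuron i iff bit i of j's 6-bit binary
    // representation is 1; we assume input 64 feeds every hidden neuron.
    static bool connected(int j, int i) {
        return j == 64 || ((j >> (i - 1)) & 1);
    }

    double forward(const std::array<double, 64>& x, std::array<double, 6>& h) const {
        double out = 0.0;
        for (int i = 1; i <= 6; ++i) {
            double yi = 0.0;     // y_i = sum over connected inputs of w_ji x_j
            for (int j = 1; j <= 64; ++j)
                if (connected(j, i)) yi += w[j - 1][i - 1] * x[j - 1];
            h[i - 1] = yi;
            out += yi * p[i - 1];            // y_output = sum_i y_i p_i
        }
        return out;
    }

    // One gradient-descent step with v = 0.00045 * |y_output - y_bar|.
    void train(const std::array<double, 64>& x, double yBar) {
        std::array<double, 6> h{};
        double err = forward(x, h) - yBar;   // y_output - y_bar
        double v = 0.00045 * std::fabs(err);
        for (int i = 1; i <= 6; ++i) {
            for (int j = 1; j <= 64; ++j)
                if (connected(j, i))         // de/dw_ji = err * x_j * p_i
                    w[j - 1][i - 1] -= v * err * x[j - 1] * p[i - 1];
            p[i - 1] -= v * err * h[i - 1];  // de/dp_i = err * y_i
        }
    }
};
```

Looping train over the 22 encoded promoters for many rounds drives \(e\) toward zero; the trade-off between the number of rounds and running time is discussed in the evaluation below.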
Model analysis and evaluation
After the algorithmic improvements above, the advantages of the model are as follows:
1. High stability. The algorithm adopts a deep-learning style of training, which shortens the step length of each parameter update and expands the number of linear parameters to 206, ensuring the fitting accuracy and greatly improving the stability of the model.
2. The model's input can be extended conveniently. When the promoter library grows, the newly measured data need only be entered into our system; they are imported into the database automatically, and the model can immediately use them for training.
3. The method can serve as a reference for other projects. The coding scheme and mathematical treatment of the model reflect the effect of DNA sequence on promoter strength and can be transplanted to similar studies in other organisms such as E. coli.
At the same time, there are also many deficiencies in this model that need to be improved:
1. The model is not robust to erroneous data. A group of incorrect fluorescence values in the input greatly degrades the fitting accuracy of the whole program, although this interference weakens gradually as the amount of correct data grows.
2. Convergence speed and accuracy still need improvement. With too many training rounds the program runs too slowly; with too few, the results are often unsatisfactory. We need to find a balance between speed and accuracy.
References
[1] Lim, C. K., Hassan, K. A., Tetu, S. G., Loper, J. E., Paulsen, I. T. The effect of iron limitation on the transcriptome and proteome of Pseudomonas fluorescens Pf-5. PLoS One. 2012;7(6):e39139.

[2] Xuan Zhou, Xin Zhou, Zhaojie Zhong. Prediction of start-up strength of E. coli promoter. Computers and Applied Chemistry. 2014;31(1):101-103.

[3] Xiaoliang Cheng, Xuan Zheng, Weimin Han. An algorithm for solving sparse solutions of underdetermined linear equations. College Journal of Applied Mathematics. 2013;28(2):235-248.