Revision as of 10:21, 11 October 2018
FLUORESCENCE MODULE
In our Top10 competent bacteria, we previously inserted, through bacterial transformation, the pSB1C3-BBa_J04450 backbone, which contains a gene coding for mRFP1. Hence, when the bacteria express this gene, they start producing fluorescence, which indicates the presence of the pathogenic bacterium.
The fluorescence expressed by the Top10 bacteria comes from the mRFP1 protein, whose spectrum is given in Figure 1.
This fluorescence is detected by our home-made unit, shown here:
Light pathway through the fluorescence sensor
1. Excitation light
To excite the fluorophore, it is necessary to use a light source whose emission spectrum is as narrow as possible. LEDs are therefore well suited, as their emission spectra are usually very narrow. Moreover, they are easy to supply (they only need a resistor to protect them and keep the light intensity constant) and very small. We used 565 nm LNJ309GKGAD LEDs (Panasonic), whose spectrum is given in Figure 3.
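The protective resistor mentioned above can be sized from the supply voltage and the LED's forward voltage and current. A minimal sketch of the calculation (the 5 V supply and the electrical values below are illustrative assumptions, not figures from the LED's datasheet):

```python
def series_resistor(v_supply, v_forward, i_forward):
    """Return the series resistance (in ohms) that limits the LED
    current to i_forward (in amperes)."""
    return (v_supply - v_forward) / i_forward

# Illustrative values only: a 5 V supply, a green LED dropping 2.1 V,
# driven at 20 mA.
r = series_resistor(5.0, 2.1, 0.020)
print(round(r))  # 145 ohms; pick the next standard value above (150 ohms)
```

Choosing a slightly larger standard resistor keeps the current, and therefore the light intensity, safely below the LED's maximum rating.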
It is a small and cheap LED whose light is not very bright, but sufficiently focused to excite the bacteria, as showcased in Figure 4:
If the red circle is not obvious enough on your screen, you may check the histogram of the picture in the “Learn more” section; each picture is associated with its histogram, which displays the number of pixels sharing the same intensity, given as a grey level. It is a very useful tool in image processing to avoid being deceived by our eyes. The camera does not face such a problem: it sees the picture as it is.
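The histogram described above is straightforward to compute: for an 8-bit greyscale picture, count how many pixels take each of the 256 possible grey levels. A minimal sketch on a toy image (pure Python for clarity; on the actual camera frames a library such as OpenCV would do this far faster):

```python
def grey_histogram(pixels):
    """Count, for each grey level 0-255, the number of pixels
    having that intensity."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    return hist

# Toy 3x3 "image": mostly dark background, a few bright fluorescent pixels.
image = [0, 0, 0, 0, 12, 200, 0, 0, 210]
hist = grey_histogram(image)
print(hist[0])    # 6 dark pixels
print(hist[200])  # 1 bright pixel at grey level 200
```

A fluorescent sample shows a spread of non-zero grey levels, while a control sample piles every pixel at level 0, which is exactly what Figures 11 and 12 illustrate.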
Filtering
Before reaching the camera, the incoming light must be filtered, that is to say, rid of all the background light, mainly coming from the LEDs. A simple way to do this is to use an optical filter. Depending on its characteristics, this device keeps some colors and blocks others. Such a filter is highly recommended to reduce the noise level. For instance, our filter is theoretically able to completely block the yellowish-green LEDs (Figure 5):
The cut-off wavelength of the filter is 605 nm. It does transmit some light of slightly lower wavelengths, but the LED intensity is so low at the right end of its spectrum that it is totally filtered, as Figure 6 shows:
The picture in Figure 6 is completely dark, meaning that the LED light cannot be seen by the camera. This is an essential condition to verify, to make sure that no light is captured by the camera during a control measurement, which is indeed the case, as Figure 7 shows:
This proves that the light that is detected by the camera comes from fluorescence only, which makes the processing easier.
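To first order, the filtering step above can be modelled as a transmission that is zero below the cut-off wavelength and unchanged above it. A minimal sketch (the ideal step response is a simplifying assumption; a real dichroic filter has a gradual transition around the cut-off):

```python
CUTOFF_NM = 605  # cut-off wavelength of our longpass filter

def transmitted(wavelength_nm, intensity):
    """Idealized longpass filter: block light below the cut-off,
    pass it unchanged at or above it."""
    return intensity if wavelength_nm >= CUTOFF_NM else 0.0

print(transmitted(565, 1.0))  # 0.0 -> the green excitation LED is blocked
print(transmitted(610, 0.8))  # 0.8 -> the red fluorescence passes
```

This is why the 565 nm excitation light disappears from the pictures while the red mRFP1 emission, which lies above the cut-off, reaches the camera.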
2. Detecting fluorescence
All of these pictures were taken with the camera used in our module, a Pi NoIR Camera V2 [4]. This camera is perfectly suited to our application, as it can easily be controlled by a small computer called a Raspberry Pi [5], which also drives the touch screen. Moreover, to give an idea of the picture quality, this camera performs on par with a mid-range smartphone camera.
Finally, calculating the overall intensity of each picture gives the linear graph in Figure 8:
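The overall intensity of a picture can be computed as the sum of all its pixel grey levels, giving a single global measure of how much fluorescence the camera captured. A minimal sketch (assuming a plain sum over the greyscale image is the metric used; a mean would work identically up to a constant factor):

```python
def overall_intensity(pixels):
    """Sum of the grey levels of all pixels: a simple global
    measure of the captured fluorescence."""
    return sum(pixels)

dark_image = [0, 0, 0, 0]        # control sample: no fluorescence
fluorescent = [0, 50, 180, 30]   # fluorescent sample
print(overall_intensity(dark_image))   # 0
print(overall_intensity(fluorescent))  # 260
```

Plotting this value against the fluorophore concentration is what produces the linear graph of Figure 8.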
CONCLUSION
To conclude, here is some advice you should follow if you need to build your own fluorescence sensor, to avoid painful trouble:
- Unless you have a photomultiplier available with the appropriate supply, make a camera your first choice over a photosensitive device such as a photoresistor or a photodiode. Fluorescence analysis is highly limited by the fluorescence intensity: it is easier to see fluorescence than to detect it. Hence, with a photosensitive device, the minimal sensitivity will not suit your application. Photosensitive devices are helpful for quantitative measurements, as they directly link the light intensity to their output, but that is meaningless if you cannot detect low-level fluorescence. If a photosensitive sensor is mandatory for your application, you should use an optical fiber to focus the light onto the sensor without losses.
- Be careful with the materials you work with. The plastic you use, for instance, might be fluorescent as well, distorting your results. The material might also reflect the excitation light towards the sensor when you expected it to be confined.
- Look carefully at the characteristics of the excitation light. The brighter, the better for boosting the fluorescence intensity, obviously, but that is not enough. A wide viewing angle (more than 100°) can be interesting to illuminate a large surface, but beware not to illuminate your sensor. A small viewing angle ensures that this does not happen, but the resulting light intensity is usually lower.
- To reduce the noise level, you should consider using an excitation light with a central wavelength lower than the optimal one when the excitation spectrum of the fluorescent protein overflows into the emission spectrum, thereby taking advantage of the spread of the excitation spectrum (choosing a longpass filter with a higher cut-off wavelength works as well).
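The last piece of advice can be turned into a quick sanity check when choosing components: make sure the excitation LED's central wavelength sits well below the filter's cut-off wavelength, so that the LED's spectral tail is blocked. A minimal sketch (the 30 nm margin is an illustrative assumption, not a measured value; the real margin depends on the LED's spectral width):

```python
def excitation_is_safe(led_peak_nm, filter_cutoff_nm, margin_nm=30):
    """Return True when the LED peak is far enough below the longpass
    filter cut-off for its spectral tail to be blocked."""
    return led_peak_nm + margin_nm <= filter_cutoff_nm

# Our combination: 565 nm LED with a 605 nm longpass filter.
print(excitation_is_safe(565, 605))  # True  (40 nm of headroom)
print(excitation_is_safe(590, 605))  # False (the tail would leak through)
```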
LEARN MORE
About the geometry adopted
Figure 9: Block diagram of the fluorescence sensor
In practice, the prototype looks like the one shown in Figure 2. It is only the prototype that was used for the experiments; its parts do not suit the final device. However, this prototype was designed to resemble the final device as closely as possible. In particular, a sliding part was printed to place the fluorescent sample at the exact same location for each experiment, ensuring repeatability. Holes were drilled in the sample holder to place the LED as close as possible to the sample and orthogonally to the camera, to minimize the noise level.
About the camera performance
The camera is located above the sample, which means that the top surface of the sample is pictured, displaying a circle (Figure 10):
Figure 10: Fluorescence picture
Figure 11: Histogram of a fluorescent sample
Figure 12: Non-fluorescent sample histogram. Here, absolutely all the pixels are dark, as they all have a grey level of 0.
About the image processing
Figure 13: Sample picture before correction (left) and after correction (right)
REFERENCES
[1] Spectra from the online fluorescent protein database: https://www.fpbase.org/protein/mrfp1/
[2] Spectrum from the LNJ309GKGAD (Panasonic) datasheet, July 2012, p1
[3] Spectrum from the longpass filter, https://www.edmundoptics.fr/p/50mm-diameter-red-dichroic-filter/10607/#downloads
[4] Picamera datasheet: https://www.raspberrypi.org/documentation/hardware/camera/
[5] https://www.raspberrypi.org/
[6] https://digital-photography-school.com/understanding-depth-field-beginners/
[7] https://www.pyimagesearch.com/2017/09/04/raspbian-stretch-install-opencv-3-python-on-your-raspberry-pi/
[8] GitHub link
[9] https://www.w3.org/TR/AERT/#color-contrast