
Hardware
Overview

Background

Nowadays, most disease-diagnosis methods are confined to specialized, delicate testing apparatus that is expensive, time-consuming and of low sensitivity. The study of point-of-care testing (POCT), also called bedside testing (defined as medical diagnostic testing at or near the time and place of patient care), has attracted great attention because of its convenience, simplicity and high efficiency. The Internet of Things (IoT) is the network of physical devices, vehicles, home appliances and other items embedded with electronics, software, sensors, actuators and connectivity, which enables these objects to connect and exchange data.

Here we came up with a design: we combined the Aptamer-Based Cell-Free Detection system (ABCD system) of our project with IoT and the concept of POCT described above to develop a microfluidic device that is small yet convenient for real-time detection of cancer.

Given that the biomarkers of different cancers overlap and our time was limited, we took pancreatic cancer as an example to verify the feasibility of our testing theory as well as our device.

(A demonstration video of the device is embedded on the wiki page: https://static.igem.org/mediawiki/2018/9/9c/T--XMU-China--hardware.mp4)

Designs

We named our testing device "Fang"; the inspiration comes from the Time Stone owned by Dr. Strange (Marvel).

In general, Fang consists of four parts: the microfluidic chip, the fluorescence detection apparatus, a Raspberry Pi (RPi) and a mobile-phone application (the App). Among these four parts, the Raspberry Pi is the main operating system of the entire device; its functions include chip-drive control, image capture and server-client data transmission.

At the beginning, the sample is added to our microfluidic chip, which is designed around the ABCD system; there it reacts with Cas12a and gives out a fluorescence signal. The camera controlled by the RPi then captures an image, which is transmitted to the App through a socket connection. The App converts the image into a visual, readable analysis report based on its internal machine-learning sample database. Finally, the App also enables information sharing between user and doctor.
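
As an illustration of the image hand-off, here is a minimal Python sketch of how the RPi could push a captured photo over a TCP socket; the host, port and file name are placeholders we introduce here, not the actual values used by our device:

    import socket
    import struct

    HOST, PORT = "192.168.1.10", 9000  # placeholder address of the receiving server

    def send_image(path):
        # Read the captured photo and send it length-prefixed, so the
        # receiver knows where the image ends.
        with open(path, "rb") as f:
            data = f.read()
        with socket.create_connection((HOST, PORT)) as sock:
            sock.sendall(struct.pack(">I", len(data)) + data)

    send_image("chip.jpg")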

The overwhelming advantage of Fang is that it gives the best interpretation of POCT: it overcomes the drawback that some tumor/cancer tests can only be performed in large clinical laboratories with big, expensive equipment. It can be used more widely in community hospitals, especially in remote areas that are short of necessary medical resources. In addition, Fang can be used at home for risk monitoring by people with familial hereditary diseases.

Figure 1 (UP): The Whole Design of Our Hardware, Fang.

Microfluidic chips

Following the model of the Aptamer-Based Cell-Free Detection system (ABCD system), we put forward the following design, using PMMA, a traditional hardware material, as the layers.

There are two rooms and two pipelines in our microfluidic chip. The first room is the Incubation Room, which we coated in advance with a BAS (Biotin-Avidin System). Inspired by the 2017 EPFL project, we used the ternary affinity coating (TERACOAT) method[1] to pre-treat our microfluidic chip, aiming to coat it with the aptamers that will compete for the target protein. The second room is the Detection Room, where we placed Cas12a (Cpf1) and DNase Alert in advance so that the protein signal is converted into a fluorescence signal. Moreover, between the Incubation Room and the Detection Room there is a pipeline we named the Pneumatic Valve. It controls the liquid flow by changing the motor speed, which generates a differential air pressure caused by barometric pressure and centrifugal force.

When a sample is added into the loading slot, it flows into the first room (the Incubation Room). The biomarker competes for the aptamer with the complementary sequence, which then flows to the second room (the Detection Room) as the rotation speed increases. In the second room, the complementary sequence activates Cas12a (Cpf1) to cut DNase Alert: the short sequence between the quencher and the fluorophore is cleaved, so the quencher no longer suppresses the fluorophore. As a result, the Detection Room gives out the green fluorescence we want.

Figure 2: Cutaway View of the Microfluidic Chip (UP) and the Sandwich Structure of the BAS (Biotin-Avidin System) (DOWN).

References

[1] Piraino, F., et al. ACS Nano, 2016, 10: 1699-1710.

Fluorescence detection

Inspired by the principle of the ultraviolet gel tester, we came up with our fluorescence detection design. Given that the fluorophore of DNaseAlert (IDT) is excited by green light at 535 nm and emits light at 565 nm, we selected green LED lamp beads (which emit at around 535 nm) as the light source. The camera, fitted with a specific optical filter, is placed at the same level as the light source; it receives the emission light from the sample and then takes a photo of the whole microfluidic chip.

Figure 3 (UP): The Flow Path of Fluorescence Detection.

The figure below shows a photograph of our results. Wells No. 1 and No. 3, coated with EpCAM, showed significant green fluorescence compared with the other wells, which were coated with random sequences. This proves that our experimental method of converting protein signals into fluorescent signals is feasible.

Raspberry Pi

The Raspberry Pi (RPi) plays an important role as the server: it receives commands from the App and applies them to the peripherals (the motor and the camera). To run a test, all the user needs to do is tap a button on the mobile phone.
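
A minimal sketch of such a command server is shown below; the port and the command names (MOTOR_ON, MOTOR_OFF, PHOTO) are assumptions for illustration, and the handler bodies are stubs:

    import socket

    def motor_on():   pass  # stub: would start the motor via PWM
    def motor_off():  pass  # stub: would stop the motor
    def take_photo(): pass  # stub: would trigger the camera

    HANDLERS = {"MOTOR_ON": motor_on, "MOTOR_OFF": motor_off, "PHOTO": take_photo}

    with socket.create_server(("0.0.0.0", 9000)) as server:
        while True:
            conn, _ = server.accept()
            with conn:
                cmd = conn.recv(64).decode().strip()
                handler = HANDLERS.get(cmd)
                if handler:
                    handler()
                    conn.sendall(b"OK")
                else:
                    conn.sendall(b"UNKNOWN")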

Figure 4 (UP): Circuit Diagram of the Raspberry Pi Operating System.

Besides, it is worth mentioning that the device can execute the whole series of detection processes automatically. We used SSH to connect to the RPi at its IP address and programmed on it directly. We call the picamera library from Python to capture images and the WiringPi library from C to drive the GPIO pins, and we realize speed control with a PID speed loop.
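
For the image-capture step, a picamera call can be as short as the following sketch (the resolution, warm-up delay and file name are illustrative choices, not our exact settings):

    from time import sleep
    from picamera import PiCamera

    camera = PiCamera()
    camera.resolution = (1024, 768)  # illustrative resolution
    camera.start_preview()
    sleep(2)                         # give the sensor time to adjust exposure
    camera.capture("chip.jpg")       # photograph the whole microfluidic chip
    camera.stop_preview()
    camera.close()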

Speed acquisition is realized with the encoder and an interrupt callback. The RPi reads the square wave returned by the encoder on a GPIO pin: each level change triggers an external interrupt, and the interrupt function increments a counter that is reset after each sampling window. Counting pulses over a fixed time converts directly into the current motor speed. The maximum output voltage of the PWM pin is only 3.3 V, but this signal drives a transformer stage powered by a 12 V battery, which regulates the motor supply between 12 V and 0 V, that is, a speed range of 500 to 0 rpm. On the server side, NodeJS receives the signals and parameters, calls the corresponding file and executes the corresponding program, so the RPi responds to the mobile phone's commands. For more details about the Raspberry Pi code, please see our GitHub repository (link to be added).
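
The following Python sketch illustrates this encoder-plus-PID speed loop (our real implementation is the C/WiringPi code on GitHub; the pin numbers, PID gains and pulses-per-revolution value here are placeholder assumptions):

    import time
    import RPi.GPIO as GPIO

    ENCODER_PIN, PWM_PIN = 17, 18      # placeholder BCM pin numbers
    PULSES_PER_REV = 20                # placeholder encoder resolution
    KP, KI, KD = 0.5, 0.1, 0.05        # placeholder PID gains
    pulses = 0

    def on_pulse(channel):
        # Interrupt callback: count one pulse per rising edge of the
        # square wave returned by the encoder.
        global pulses
        pulses += 1

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(ENCODER_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
    GPIO.setup(PWM_PIN, GPIO.OUT)
    GPIO.add_event_detect(ENCODER_PIN, GPIO.RISING, callback=on_pulse)

    pwm = GPIO.PWM(PWM_PIN, 1000)      # 1 kHz PWM carrier on the 3.3 V pin
    pwm.start(0)

    target_rpm, integral, last_err = 300.0, 0.0, 0.0
    try:
        while True:
            pulses = 0
            time.sleep(0.1)                        # fixed sampling window
            rpm = (pulses / PULSES_PER_REV) * 600  # pulses per 0.1 s -> rev/min
            err = target_rpm - rpm
            integral += err
            duty = KP * err + KI * integral + KD * (err - last_err)
            last_err = err
            pwm.ChangeDutyCycle(max(0.0, min(100.0, duty)))
    finally:
        pwm.stop()
        GPIO.cleanup()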

Application

To let users control our hardware conveniently, we designed our own App to be used with Fang. It is worth mentioning that our App combines AIT (Artificial Intelligence Technology) and block-chain technology, and is equipped with functions that provide an excellent user experience. As the navigation bar shows, the App consists of four function modules: Control, Analysis, Interaction and Info.

Figure 5 (UP): Interface Design of Our Application.

The first function module, named Control, is used to control the operation of Fang. A switch turns the power-driven machine on or off, and the PHOTO button makes the camera capture the fluorescent signal and display it on the mobile phone. In addition, users are free to control the rotation speed (Auto/Set): they can press the SET button to change the speed and the REQUIRE button to query the current speed, as the sketch below illustrates.
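
On the wire, these buttons could map to simple text commands; the following client-side sketch is hypothetical (the address and the exact message format of our App differ):

    import socket

    def send_command(cmd):
        # Open a connection, send one text command, return the reply.
        with socket.create_connection(("192.168.1.10", 9000)) as sock:
            sock.sendall(cmd.encode())
            return sock.recv(64).decode()

    send_command("SET 300")          # set the target speed to 300 rpm
    print(send_command("REQUIRE"))   # query the current speed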

The second function module, named Analysis, takes advantage of AIT to analyze the images that Fang has photographed. The technology we chose is TensorFlow, the second-generation artificial-intelligence learning system developed by Google on the basis of DistBelief. When a user takes a photo, it is sent to our cloud server, where models have been trained on large numbers of samples with a CNN (Convolutional Neural Network). The uploaded photo is matched against a model by machine learning, and the user obtains accurate analysis results efficiently in the App.
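
As an illustration only, a small CNN of this kind can be written with TensorFlow's Keras API as below; the layer sizes, the 128x128 input shape and the two output classes are assumptions, not our trained model:

    import tensorflow as tf

    # Toy CNN for classifying fluorescence photos; shapes are placeholders.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(2, activation="softmax"),  # positive / negative
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_images, train_labels, epochs=10)  # training runs on the cloud server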

The third function module, named Interaction, is based on block-chain technology. On our cloud server we have established a private Ethereum chain, and we will issue our own digital currency so that users can deploy smart contracts of their own, including all of their diagnostic messages. There is a one-to-one correspondence between a user and a doctor: when a smart contract is deployed, only one doctor is allowed to confirm it. The doctor can then write therapeutic methods or suggestions into the smart contract for the patient to consult. Thanks to its transparency, openness and immutability, a medical certificate (smart contract) corresponds to one specific patient and one specific doctor, which effectively avoids medical disputes such as misdiagnosis and unscrupulous disavowal.
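
To sketch the idea, the record-keeping step could look like the following web3.py snippet; the node URL, contract address, ABI and the recordDiagnosis function are placeholders, since the real contract is our own:

    from web3 import Web3

    # Placeholder node URL for our private Ethereum chain.
    w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))

    CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder
    ABI = []  # the ABI of the deployed smart contract (omitted here)

    contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=ABI)

    # The patient writes a diagnostic record; the contract itself enforces
    # that only the one matched doctor may later confirm and add advice.
    tx_hash = contract.functions.recordDiagnosis(b"report-hash").transact(
        {"from": w3.eth.accounts[0]})
    w3.eth.wait_for_transaction_receipt(tx_hash)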

The last function module, named Info, contains an introduction of our team and some user information.

Beyond the functions introduced above, our App relies on a three-party communication mechanism among the Raspberry Pi, the App and the cloud server. An instruction sent from the App first arrives at the cloud server and is then forwarded to the Raspberry Pi; instructions sent from the Raspberry Pi travel the same path in reverse, as sketched below. Moreover, the machine learning and the mining mechanism of the block chain both run on the cloud server, which enhances operational efficiency and real-time performance.
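
The relay step can be sketched as a toy forwarder like the one below (hosts and ports are placeholders; the real server also runs the machine-learning and block-chain services):

    import socket

    RPI_ADDR = ("10.0.0.5", 9000)   # assumed address of the Raspberry Pi

    with socket.create_server(("0.0.0.0", 8000)) as server:  # App-facing port
        while True:
            app_conn, _ = server.accept()
            with app_conn:
                cmd = app_conn.recv(64)
                # Forward the App's command to the RPi and relay the reply back.
                with socket.create_connection(RPI_ADDR) as rpi:
                    rpi.sendall(cmd)
                    reply = rpi.recv(1024)
                app_conn.sendall(reply)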