Team:HUST-China/Software


Software

Sensing module

Algorithms used in sensing module

Gradient descent

Gradient descent is an iterative algorithm for finding a local minimum of a differentiable function. If we want to fit a function h(X) (where X is the set of independent variables x_i), the loss function is j(X). We calculate the gradient with respect to each x_i ∈ X, ∇j(X), and update x_(n+1) = x_n - α*∇j(X), where α is the learning rate (step size).
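As an illustration of this update rule, here is a minimal gradient-descent sketch in Python; the quadratic loss and the learning-rate value are illustrative assumptions, not part of our software.

import numpy as np

def gradient_descent(grad_j, x0, alpha=0.01, n_iter=1000, tol=1e-8):
    """Iterate x_(n+1) = x_n - alpha * grad_j(x_n) until the step becomes tiny."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        step = alpha * grad_j(x)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Example: minimise j(X) = ||X - c||^2, whose gradient is 2 * (X - c).
c = np.array([1.0, -2.0])
print(gradient_descent(lambda x: 2.0 * (x - c), x0=np.zeros(2)))  # converges towards c = [1, -2]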

Mean Shift

Mean shift is a non-parametric feature-space analysis technique for locating the maxima of a density function, a so-called mode-seeking algorithm (Cheng et al. 1995)[1]. The algorithm places a window (kernel) of radius r; every data point x_i inside the window contributes to the mean-shift (move) vector m(x) = (1/|N(x)|) * Σ_{x_i ∈ N(x)} x_i - x, where N(x) is the set of points within radius r of the window centre x.

Each iteration recomputes the move vector and shifts the window centre accordingly, until the centre no longer moves, i.e. it has converged to a mode.
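A minimal flat-kernel mean-shift sketch in Python (the radius and the toy data are placeholders chosen for illustration, not the parameters used in our software):

import numpy as np

def mean_shift_mode(data, start, r=1.0, max_iter=100, tol=1e-6):
    """Shift a window of radius r towards the local mean until its centre stops moving."""
    data = np.asarray(data, dtype=float)
    center = np.asarray(start, dtype=float)
    for _ in range(max_iter):
        in_window = data[np.linalg.norm(data - center, axis=1) <= r]  # points inside the window
        if len(in_window) == 0:
            break
        new_center = in_window.mean(axis=0)  # the mean-shift (move) step for a flat kernel
        if np.linalg.norm(new_center - center) < tol:
            break
        center = new_center
    return center

# Toy usage: points clustered around the origin; the window converges to that mode.
points = np.random.default_rng(0).normal(loc=0.0, scale=0.3, size=(200, 2))
print(mean_shift_mode(points, start=[0.5, 0.5]))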


Steps of the sensing module

First, we feed in the initial values of all variables (X_0^all) and use our model to predict the values of all variables at the next unit of time. Second, we feed in the true values of the variables that can be sensed (X^s). Then we calculate the Euclidean distance between each prediction and the true value, ||P_t^s[i] - X_t^s||, and use it as weight1. sp denotes the total number of current predictions.

After that, we run the mean-shift algorithm on weight1 (||P_t^s[i] - X_t^s||) to find several scores (SC). The next step is to sort the candidate predictions by weight2, apply gradient descent to the top np predictions to generate new candidate predictions, and kill the last dp predictions, where np = new_rate * (max_prediction - sp) and dp = dead_rate * sp. Finally, the software starts the next iteration.
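The sketch below outlines one such iteration in Python. It is only a schematic of the procedure described above: the candidate predictions, the refinement step, and the ranking are simplified stand-ins (in particular, the mean-shift scoring and weight2 ranking are collapsed into a plain sort on weight1), and all names follow the text rather than our actual code.

import numpy as np

def sensing_iteration(preds, x_true_sensed, refine, new_rate, dead_rate, max_prediction):
    """One iteration of the prediction-filtering loop described above.

    preds         : candidate predictions for the sensed variables, shape (sp, d)
    x_true_sensed : true sensed values X^s, shape (d,)
    refine        : stand-in for the gradient-descent refinement of one prediction
    """
    sp = len(preds)  # assumed: sp = current number of predictions

    # weight1: Euclidean distance between each prediction and the true sensed value
    weight1 = np.linalg.norm(preds - x_true_sensed, axis=1)

    # The full software runs mean shift on weight1 to obtain scores (SC) and sorts by
    # weight2; this sketch simply ranks candidates by weight1 (smaller distance first).
    preds = preds[np.argsort(weight1)]

    np_count = max(int(new_rate * (max_prediction - sp)), 0)
    dp_count = min(int(dead_rate * sp), sp)

    # Refine the top np predictions (gradient descent in the real software) to spawn new candidates
    new_preds = (np.array([refine(p) for p in preds[:np_count]])
                 if np_count else np.empty((0, preds.shape[1])))

    # Kill the last dp predictions, then append the newly generated candidates
    survivors = preds[:sp - dp_count]
    return np.vstack([survivors, new_preds])

# Toy usage: pull candidates halfway towards the true value as a stand-in refinement.
truth = np.array([1.0, 2.0])
candidates = np.random.default_rng(1).normal(size=(10, 2))
next_gen = sensing_iteration(candidates, truth,
                             refine=lambda p: p + 0.5 * (truth - p),
                             new_rate=0.5, dead_rate=0.2, max_prediction=20)
print(next_gen.shape)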

Learning Module

We use the Q-learning method to learn how to make decisions based on the environment. Q-learning is a reinforcement learning method used in machine learning. It learns a policy that tells us which action to take in which situation, and it can handle problems with stochastic transitions and rewards without requiring adaptations.
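To make the Q-learning update concrete, here is a tabular Q-learning sketch in Python on a toy 5-state chain; the environment, hyper-parameters, and reward are illustrative assumptions, not our project's actual decision problem.

import numpy as np

def q_learning(env_step, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1, max_steps=100):
    """Tabular Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    rng = np.random.default_rng(0)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = 0  # assumed start state
        for _ in range(max_steps):
            # epsilon-greedy action selection
            a = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[s]))
            s_next, r, done = env_step(s, a)
            Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
            s = s_next
            if done:
                break
    return Q

# Toy 5-state chain: action 1 moves right, action 0 moves left; reward 1 on reaching the last state.
def chain_step(s, a):
    s_next = min(s + 1, 4) if a == 1 else max(s - 1, 0)
    return s_next, float(s_next == 4), s_next == 4

Q = q_learning(chain_step, n_states=5, n_actions=2)
print(np.argmax(Q, axis=1))  # states 0-3 learn to choose "move right"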