How can I carry out a technical and theoretical study of the error introduced when implementing an algorithm on a specific hardware architecture? Function example:
%Expression 1:
y1 = exp(-(const1 + x)^2 / (const2^2))
y2 = y1 * const3
where x is the input variable, y2 is the output, and const1, const2 and const3 are constants. I need to determine the error I get as a function of the architecture I decide to develop; for example, the result will not be the same on an architecture with 11 bits for the exponent and 52 bits for the mantissa as on a smaller one. This is the concept of error I use:
Relative Error = (Real Data - Architecture Data) / (Real Data) * 100
I consider as 'Real Data' the output of my algorithm as computed in Matlab (Matlab uses IEEE 754 double-precision floating point: 52 bits for the mantissa, 11 bits for the exponent, and one sign bit) with Expression 1, and I consider as 'Architecture Data' the output of the same algorithm running on a particular architecture (for instance, an architecture that uses 12 bits for the mantissa, 5 bits for the exponent and 1 sign bit).
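To make the definition concrete, here is a minimal sketch in Python (used here only because the question is language-agnostic; the sample values are made up):

```python
def relative_error(real, arch):
    """Percent relative error between the double-precision reference
    ('Real Data') and the reduced-precision result ('Architecture Data')."""
    return (real - arch) / real * 100.0

# Hypothetical example: reference value vs. a lower-precision result
print(relative_error(2.0, 1.9))  # about 5 percent
```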
EDIT: NOTE: The kind of algorithms I am referring to are all those that use mathematical functions which can be decomposed in terms of additions, multiplications, subtractions and divisions.
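Since the algorithm decomposes into elementary operations, one way to estimate the error without building the hardware is to simulate the target format in software: round every intermediate result to the target mantissa/exponent widths and compare against the full double-precision result. A minimal Python sketch, assuming round-to-nearest on the mantissa, no denormals, flush-to-zero on underflow, and made-up constant values for illustration:

```python
import math

def quantize(x, mant_bits=12, exp_bits=5):
    """Round x to the nearest value representable with the given
    mantissa and exponent widths (simplified toy format: no
    denormals, underflow flushes to zero)."""
    if x == 0.0:
        return 0.0
    frac, exp = math.frexp(x)           # x = frac * 2**exp, frac in [0.5, 1)
    scale = 2.0 ** mant_bits
    frac = round(frac * scale) / scale  # keep mant_bits bits of fraction
    emax = 2 ** (exp_bits - 1)          # crude symmetric exponent range
    if exp > emax:
        return math.copysign(math.inf, x)  # overflow
    if exp < -emax + 1:
        return 0.0                         # underflow
    return math.ldexp(frac, exp)

def expr1(x, c1, c2, c3, q=lambda v: v):
    """Expression 1, applying q() after every elementary operation."""
    t = q(c1 + x)
    n = q(t * t)          # (const1 + x)^2
    d = q(c2 * c2)        # const2^2
    y1 = q(math.exp(q(-n / d)))
    return q(y1 * c3)     # y2

# Made-up constants, for illustration only
x, c1, c2, c3 = 0.3, 1.0, 2.0, 0.5
real = expr1(x, c1, c2, c3)              # full double precision
arch = expr1(x, c1, c2, c3, q=quantize)  # simulated 12/5-bit format
err = (real - arch) / real * 100.0       # relative error in percent
```

Sweeping `x` over the input range and taking the maximum of `err` would then give a worst-case error bound for the chosen bit widths.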
Thank you!