sin(PI*x)*sin(PI*x*.825)/(.825*x*x*PI*PI)

Percentage Accurate: 50.6% → 99.3%
Time: 6.5s
Alternatives: 2
Speedup: 96.0×

Specification

\[0 \leq x \land x \leq 2.17\]
\[\frac{\sin \left(\pi \cdot x\right) \cdot \sin \left(\left(\pi \cdot x\right) \cdot 0.825\right)}{\left(\left(\left(0.825 \cdot x\right) \cdot x\right) \cdot \pi\right) \cdot \pi} \]
(FPCore (x)
  :precision binary64
  (/
 (* (sin (* PI x)) (sin (* (* PI x) 0.825)))
 (* (* (* (* 0.825 x) x) PI) PI)))
double code(double x) {
	return (sin((((double) M_PI) * x)) * sin(((((double) M_PI) * x) * 0.825))) / ((((0.825 * x) * x) * ((double) M_PI)) * ((double) M_PI));
}
public static double code(double x) {
	return (Math.sin((Math.PI * x)) * Math.sin(((Math.PI * x) * 0.825))) / ((((0.825 * x) * x) * Math.PI) * Math.PI);
}
def code(x):
	return (math.sin((math.pi * x)) * math.sin(((math.pi * x) * 0.825))) / ((((0.825 * x) * x) * math.pi) * math.pi)
function code(x)
	return Float64(Float64(sin(Float64(pi * x)) * sin(Float64(Float64(pi * x) * 0.825))) / Float64(Float64(Float64(Float64(0.825 * x) * x) * pi) * pi))
end
function tmp = code(x)
	tmp = (sin((pi * x)) * sin(((pi * x) * 0.825))) / ((((0.825 * x) * x) * pi) * pi);
end
code[x_] := N[(N[(N[Sin[N[(Pi * x), $MachinePrecision]], $MachinePrecision] * N[Sin[N[(N[(Pi * x), $MachinePrecision] * 0.825), $MachinePrecision]], $MachinePrecision]), $MachinePrecision] / N[(N[(N[(N[(0.825 * x), $MachinePrecision] * x), $MachinePrecision] * Pi), $MachinePrecision] * Pi), $MachinePrecision]), $MachinePrecision]
\frac{\sin \left(\pi \cdot x\right) \cdot \sin \left(\left(\pi \cdot x\right) \cdot 0.825\right)}{\left(\left(\left(0.825 \cdot x\right) \cdot x\right) \cdot \pi\right) \cdot \pi}

Local Percentage Accuracy vs x

The average percentage accuracy by input value. The horizontal axis shows the value of an input variable; the variable is chosen in the title. The vertical axis is accuracy; higher is better. Red represents the original program, while blue represents Herbie's suggestion. These can be toggled with the buttons below the plot. The line shows the average, while the dots show individual samples.

Accuracy vs Speed

Herbie found 2 alternatives:

Alternative    Accuracy    Speedup
Alternative 1  99.3%       10.7×
Alternative 2  98.8%       96.0×
The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 50.6% accurate, 1.0× speedup

\[\frac{\sin \left(\pi \cdot x\right) \cdot \sin \left(\left(\pi \cdot x\right) \cdot 0.825\right)}{\left(\left(\left(0.825 \cdot x\right) \cdot x\right) \cdot \pi\right) \cdot \pi} \]
(FPCore (x)
  :precision binary64
  (/
 (* (sin (* PI x)) (sin (* (* PI x) 0.825)))
 (* (* (* (* 0.825 x) x) PI) PI)))
double code(double x) {
	return (sin((((double) M_PI) * x)) * sin(((((double) M_PI) * x) * 0.825))) / ((((0.825 * x) * x) * ((double) M_PI)) * ((double) M_PI));
}
public static double code(double x) {
	return (Math.sin((Math.PI * x)) * Math.sin(((Math.PI * x) * 0.825))) / ((((0.825 * x) * x) * Math.PI) * Math.PI);
}
def code(x):
	return (math.sin((math.pi * x)) * math.sin(((math.pi * x) * 0.825))) / ((((0.825 * x) * x) * math.pi) * math.pi)
function code(x)
	return Float64(Float64(sin(Float64(pi * x)) * sin(Float64(Float64(pi * x) * 0.825))) / Float64(Float64(Float64(Float64(0.825 * x) * x) * pi) * pi))
end
function tmp = code(x)
	tmp = (sin((pi * x)) * sin(((pi * x) * 0.825))) / ((((0.825 * x) * x) * pi) * pi);
end
code[x_] := N[(N[(N[Sin[N[(Pi * x), $MachinePrecision]], $MachinePrecision] * N[Sin[N[(N[(Pi * x), $MachinePrecision] * 0.825), $MachinePrecision]], $MachinePrecision]), $MachinePrecision] / N[(N[(N[(N[(0.825 * x), $MachinePrecision] * x), $MachinePrecision] * Pi), $MachinePrecision] * Pi), $MachinePrecision]), $MachinePrecision]
\frac{\sin \left(\pi \cdot x\right) \cdot \sin \left(\left(\pi \cdot x\right) \cdot 0.825\right)}{\left(\left(\left(0.825 \cdot x\right) \cdot x\right) \cdot \pi\right) \cdot \pi}

Alternative 1: 99.3% accurate, 10.7× speedup

\[\mathsf{fma}\left(x \cdot x, -2.7645173160968004, 1\right) \]
(FPCore (x)
  :precision binary64
  (fma (* x x) -2.7645173160968004 1.0))
double code(double x) {
	return fma((x * x), -2.7645173160968004, 1.0);
}
function code(x)
	return fma(Float64(x * x), -2.7645173160968004, 1.0)
end
code[x_] := N[(N[(x * x), $MachinePrecision] * -2.7645173160968004 + 1.0), $MachinePrecision]
\mathsf{fma}\left(x \cdot x, -2.7645173160968004, 1\right)
Derivation
  1. Initial program 50.6%

    \[\frac{\sin \left(\pi \cdot x\right) \cdot \sin \left(\left(\pi \cdot x\right) \cdot 0.825\right)}{\left(\left(\left(0.825 \cdot x\right) \cdot x\right) \cdot \pi\right) \cdot \pi} \]
  2. Taylor expanded in x around 0

    \[\leadsto \color{blue}{1 + \frac{4503599627370496}{3715469692580659} \cdot \left({x}^{2} \cdot \left(\frac{-3715469692580659}{27021597764222976} \cdot {\pi}^{2} + \frac{-51291000332774071962762600992856043193091131179}{548063113999088594326381812268606132370974703616} \cdot {\pi}^{2}\right)\right)} \]
  3. Step-by-step derivation
    1. lower-+.f64 (N/A)

      \[\leadsto 1 + \color{blue}{\frac{4503599627370496}{3715469692580659} \cdot \left({x}^{2} \cdot \left(\frac{-3715469692580659}{27021597764222976} \cdot {\mathsf{PI}\left(\right)}^{2} + \frac{-51291000332774071962762600992856043193091131179}{548063113999088594326381812268606132370974703616} \cdot {\mathsf{PI}\left(\right)}^{2}\right)\right)} \]
    2. lower-*.f64 (N/A)

      \[\leadsto 1 + \frac{4503599627370496}{3715469692580659} \cdot \color{blue}{\left({x}^{2} \cdot \left(\frac{-3715469692580659}{27021597764222976} \cdot {\mathsf{PI}\left(\right)}^{2} + \frac{-51291000332774071962762600992856043193091131179}{548063113999088594326381812268606132370974703616} \cdot {\mathsf{PI}\left(\right)}^{2}\right)\right)} \]
    3. lower-*.f64 (N/A)

      \[\leadsto 1 + \frac{4503599627370496}{3715469692580659} \cdot \left({x}^{2} \cdot \color{blue}{\left(\frac{-3715469692580659}{27021597764222976} \cdot {\mathsf{PI}\left(\right)}^{2} + \frac{-51291000332774071962762600992856043193091131179}{548063113999088594326381812268606132370974703616} \cdot {\mathsf{PI}\left(\right)}^{2}\right)}\right) \]
    4. lower-pow.f64 (N/A)

      \[\leadsto 1 + \frac{4503599627370496}{3715469692580659} \cdot \left({x}^{2} \cdot \left(\color{blue}{\frac{-3715469692580659}{27021597764222976} \cdot {\mathsf{PI}\left(\right)}^{2}} + \frac{-51291000332774071962762600992856043193091131179}{548063113999088594326381812268606132370974703616} \cdot {\mathsf{PI}\left(\right)}^{2}\right)\right) \]
    5. lower-fma.f64 (N/A)

      \[\leadsto 1 + \frac{4503599627370496}{3715469692580659} \cdot \left({x}^{2} \cdot \mathsf{fma}\left(\frac{-3715469692580659}{27021597764222976}, \color{blue}{{\mathsf{PI}\left(\right)}^{2}}, \frac{-51291000332774071962762600992856043193091131179}{548063113999088594326381812268606132370974703616} \cdot {\mathsf{PI}\left(\right)}^{2}\right)\right) \]
    6. lower-pow.f64 (N/A)

      \[\leadsto 1 + \frac{4503599627370496}{3715469692580659} \cdot \left({x}^{2} \cdot \mathsf{fma}\left(\frac{-3715469692580659}{27021597764222976}, {\mathsf{PI}\left(\right)}^{\color{blue}{2}}, \frac{-51291000332774071962762600992856043193091131179}{548063113999088594326381812268606132370974703616} \cdot {\mathsf{PI}\left(\right)}^{2}\right)\right) \]
    7. lower-PI.f64 (N/A)

      \[\leadsto 1 + \frac{4503599627370496}{3715469692580659} \cdot \left({x}^{2} \cdot \mathsf{fma}\left(\frac{-3715469692580659}{27021597764222976}, {\pi}^{2}, \frac{-51291000332774071962762600992856043193091131179}{548063113999088594326381812268606132370974703616} \cdot {\mathsf{PI}\left(\right)}^{2}\right)\right) \]
    8. lower-*.f64 (N/A)

      \[\leadsto 1 + \frac{4503599627370496}{3715469692580659} \cdot \left({x}^{2} \cdot \mathsf{fma}\left(\frac{-3715469692580659}{27021597764222976}, {\pi}^{2}, \frac{-51291000332774071962762600992856043193091131179}{548063113999088594326381812268606132370974703616} \cdot {\mathsf{PI}\left(\right)}^{2}\right)\right) \]
    9. lower-pow.f64 (N/A)

      \[\leadsto 1 + \frac{4503599627370496}{3715469692580659} \cdot \left({x}^{2} \cdot \mathsf{fma}\left(\frac{-3715469692580659}{27021597764222976}, {\pi}^{2}, \frac{-51291000332774071962762600992856043193091131179}{548063113999088594326381812268606132370974703616} \cdot {\mathsf{PI}\left(\right)}^{2}\right)\right) \]
    10. lower-PI.f64 (99.3%)

      \[\leadsto 1 + 1.2121212121212122 \cdot \left({x}^{2} \cdot \mathsf{fma}\left(-0.13749999999999998, {\pi}^{2}, -0.09358593749999998 \cdot {\pi}^{2}\right)\right) \]
  4. Applied rewrites (99.3%)

    \[\leadsto \color{blue}{1 + 1.2121212121212122 \cdot \left({x}^{2} \cdot \mathsf{fma}\left(-0.13749999999999998, {\pi}^{2}, -0.09358593749999998 \cdot {\pi}^{2}\right)\right)} \]
  5. Step-by-step derivation
    1. lift-+.f64 (N/A)

      \[\leadsto 1 + \color{blue}{\frac{4503599627370496}{3715469692580659} \cdot \left({x}^{2} \cdot \mathsf{fma}\left(\frac{-3715469692580659}{27021597764222976}, {\pi}^{2}, \frac{-51291000332774071962762600992856043193091131179}{548063113999088594326381812268606132370974703616} \cdot {\pi}^{2}\right)\right)} \]
    2. +-commutative (N/A)

      \[\leadsto \frac{4503599627370496}{3715469692580659} \cdot \left({x}^{2} \cdot \mathsf{fma}\left(\frac{-3715469692580659}{27021597764222976}, {\pi}^{2}, \frac{-51291000332774071962762600992856043193091131179}{548063113999088594326381812268606132370974703616} \cdot {\pi}^{2}\right)\right) + \color{blue}{1} \]
    3. lift-*.f64 (N/A)

      \[\leadsto \frac{4503599627370496}{3715469692580659} \cdot \left({x}^{2} \cdot \mathsf{fma}\left(\frac{-3715469692580659}{27021597764222976}, {\pi}^{2}, \frac{-51291000332774071962762600992856043193091131179}{548063113999088594326381812268606132370974703616} \cdot {\pi}^{2}\right)\right) + 1 \]
    4. lift-pow.f64 (N/A)

      \[\leadsto \frac{4503599627370496}{3715469692580659} \cdot \left({x}^{2} \cdot \mathsf{fma}\left(\frac{-3715469692580659}{27021597764222976}, {\pi}^{2}, \frac{-51291000332774071962762600992856043193091131179}{548063113999088594326381812268606132370974703616} \cdot {\pi}^{2}\right)\right) + 1 \]
    5. pow2 (N/A)

      \[\leadsto \frac{4503599627370496}{3715469692580659} \cdot \left(\left(x \cdot x\right) \cdot \mathsf{fma}\left(\frac{-3715469692580659}{27021597764222976}, {\pi}^{2}, \frac{-51291000332774071962762600992856043193091131179}{548063113999088594326381812268606132370974703616} \cdot {\pi}^{2}\right)\right) + 1 \]
    6. lift-*.f64 (N/A)

      \[\leadsto \frac{4503599627370496}{3715469692580659} \cdot \left(\left(x \cdot x\right) \cdot \mathsf{fma}\left(\frac{-3715469692580659}{27021597764222976}, {\pi}^{2}, \frac{-51291000332774071962762600992856043193091131179}{548063113999088594326381812268606132370974703616} \cdot {\pi}^{2}\right)\right) + 1 \]
    7. lower-*.f64 (N/A)

      \[\leadsto \frac{4503599627370496}{3715469692580659} \cdot \left(\left(x \cdot x\right) \cdot \mathsf{fma}\left(\frac{-3715469692580659}{27021597764222976}, {\pi}^{2}, \frac{-51291000332774071962762600992856043193091131179}{548063113999088594326381812268606132370974703616} \cdot {\pi}^{2}\right)\right) + 1 \]
    8. associate-*r* (N/A)

      \[\leadsto \left(\frac{4503599627370496}{3715469692580659} \cdot \left(x \cdot x\right)\right) \cdot \mathsf{fma}\left(\frac{-3715469692580659}{27021597764222976}, {\pi}^{2}, \frac{-51291000332774071962762600992856043193091131179}{548063113999088594326381812268606132370974703616} \cdot {\pi}^{2}\right) + 1 \]
    9. lift-fma.f64 (N/A)

      \[\leadsto \left(\frac{4503599627370496}{3715469692580659} \cdot \left(x \cdot x\right)\right) \cdot \left(\frac{-3715469692580659}{27021597764222976} \cdot {\pi}^{2} + \frac{-51291000332774071962762600992856043193091131179}{548063113999088594326381812268606132370974703616} \cdot {\pi}^{2}\right) + 1 \]
    10. lift-*.f64 (N/A)

      \[\leadsto \left(\frac{4503599627370496}{3715469692580659} \cdot \left(x \cdot x\right)\right) \cdot \left(\frac{-3715469692580659}{27021597764222976} \cdot {\pi}^{2} + \frac{-51291000332774071962762600992856043193091131179}{548063113999088594326381812268606132370974703616} \cdot {\pi}^{2}\right) + 1 \]
    11. distribute-rgt-out (N/A)

      \[\leadsto \left(\frac{4503599627370496}{3715469692580659} \cdot \left(x \cdot x\right)\right) \cdot \left({\pi}^{2} \cdot \left(\frac{-3715469692580659}{27021597764222976} + \frac{-51291000332774071962762600992856043193091131179}{548063113999088594326381812268606132370974703616}\right)\right) + 1 \]
    12. associate-*r* (N/A)

      \[\leadsto \left(\left(\frac{4503599627370496}{3715469692580659} \cdot \left(x \cdot x\right)\right) \cdot {\pi}^{2}\right) \cdot \left(\frac{-3715469692580659}{27021597764222976} + \frac{-51291000332774071962762600992856043193091131179}{548063113999088594326381812268606132370974703616}\right) + 1 \]
    13. lower-fma.f64 (N/A)

      \[\leadsto \mathsf{fma}\left(\left(\frac{4503599627370496}{3715469692580659} \cdot \left(x \cdot x\right)\right) \cdot {\pi}^{2}, \color{blue}{\frac{-3715469692580659}{27021597764222976} + \frac{-51291000332774071962762600992856043193091131179}{548063113999088594326381812268606132370974703616}}, 1\right) \]
  6. Applied rewrites (99.3%)

    \[\leadsto \mathsf{fma}\left(\left(\left(x \cdot x\right) \cdot 1.2121212121212122\right) \cdot \left(\pi \cdot \pi\right), \color{blue}{-0.23108593749999998}, 1\right) \]
  7. Evaluated real constant (99.3%)

    \[\leadsto \mathsf{fma}\left(\left(\left(x \cdot x\right) \cdot 1.2121212121212122\right) \cdot 9.869604401089358, -0.23108593749999998, 1\right) \]
  8. Step-by-step derivation
    1. lift-fma.f64 (N/A)

      \[\leadsto \left(\left(\left(x \cdot x\right) \cdot \frac{4503599627370496}{3715469692580659}\right) \cdot \frac{2778046668940015}{281474976710656}\right) \cdot \frac{-126649678507648749626158179449455301604649895723}{548063113999088594326381812268606132370974703616} + \color{blue}{1} \]
    2. lift-*.f64 (N/A)

      \[\leadsto \left(\left(\left(x \cdot x\right) \cdot \frac{4503599627370496}{3715469692580659}\right) \cdot \frac{2778046668940015}{281474976710656}\right) \cdot \frac{-126649678507648749626158179449455301604649895723}{548063113999088594326381812268606132370974703616} + 1 \]
    3. associate-*l* (N/A)

      \[\leadsto \left(\left(x \cdot x\right) \cdot \frac{4503599627370496}{3715469692580659}\right) \cdot \left(\frac{2778046668940015}{281474976710656} \cdot \frac{-126649678507648749626158179449455301604649895723}{548063113999088594326381812268606132370974703616}\right) + 1 \]
    4. lift-*.f64 (N/A)

      \[\leadsto \left(\left(x \cdot x\right) \cdot \frac{4503599627370496}{3715469692580659}\right) \cdot \left(\frac{2778046668940015}{281474976710656} \cdot \frac{-126649678507648749626158179449455301604649895723}{548063113999088594326381812268606132370974703616}\right) + 1 \]
    5. associate-*l* (N/A)

      \[\leadsto \left(x \cdot x\right) \cdot \left(\frac{4503599627370496}{3715469692580659} \cdot \left(\frac{2778046668940015}{281474976710656} \cdot \frac{-126649678507648749626158179449455301604649895723}{548063113999088594326381812268606132370974703616}\right)\right) + 1 \]
    6. lower-fma.f64 (N/A)

      \[\leadsto \mathsf{fma}\left(x \cdot x, \color{blue}{\frac{4503599627370496}{3715469692580659} \cdot \left(\frac{2778046668940015}{281474976710656} \cdot \frac{-126649678507648749626158179449455301604649895723}{548063113999088594326381812268606132370974703616}\right)}, 1\right) \]
    7. metadata-eval (N/A)

      \[\leadsto \mathsf{fma}\left(x \cdot x, \frac{4503599627370496}{3715469692580659} \cdot \frac{-351838717500497418955682415440338456092912421457722572692055845}{154266052248863066452028360864751609842131487403148112188932096}, 1\right) \]
    8. metadata-eval (99.3%)

      \[\leadsto \mathsf{fma}\left(x \cdot x, -2.7645173160968004, 1\right) \]
  9. Applied rewrites (99.3%)

    \[\leadsto \mathsf{fma}\left(x \cdot x, \color{blue}{-2.7645173160968004}, 1\right) \]
  10. Add Preprocessing
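The fma constant can be checked against the Taylor expansion in step 2 by hand: sin(a) · sin(b) ≈ a · b · (1 − (a² + b²)/6) with a = πx and b = 0.825πx, so dividing by 0.825 · π² · x² leaves 1 − (1 + 0.825²) · π² · x²/6. A quick sanity check, not part of Herbie's output:

```python
import math

# x² coefficient from sin(a)*sin(b) ≈ a*b*(1 - (a² + b²)/6)
# with a = pi*x, b = 0.825*pi*x, after dividing by 0.825*pi²*x²
coeff = -(1 + 0.825**2) * math.pi**2 / 6
print(coeff)  # ≈ -2.7645173160968, matching Herbie's fma constant
```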

Alternative 2: 98.8% accurate, 96.0× speedup

\[1 \]
(FPCore (x)
  :precision binary64
  1.0)
double code(double x) {
	return 1.0;
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = 1.0d0
end function
public static double code(double x) {
	return 1.0;
}
def code(x):
	return 1.0
function code(x)
	return 1.0
end
function tmp = code(x)
	tmp = 1.0;
end
code[x_] := 1.0
1
Derivation
  1. Initial program 50.6%

    \[\frac{\sin \left(\pi \cdot x\right) \cdot \sin \left(\left(\pi \cdot x\right) \cdot 0.825\right)}{\left(\left(\left(0.825 \cdot x\right) \cdot x\right) \cdot \pi\right) \cdot \pi} \]
  2. Taylor expanded in x around 0

    \[\leadsto \color{blue}{1} \]
  3. Step-by-step derivation
    1. Applied rewrites (98.8%)

      \[\leadsto \color{blue}{1} \]
    2. Add Preprocessing
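Because the function tends to 1 as x → 0 (each sine is approximately its argument, and the denominator cancels exactly), the constant alternative fits small inputs well but is far off near the right end of the range. A small check, assuming the Python translation of the initial program above:

```python
import math

def original(x):
    # Direct translation of the initial program
    return (math.sin(math.pi * x) * math.sin((math.pi * x) * 0.825)) / \
           ((((0.825 * x) * x) * math.pi) * math.pi)

print(original(1e-4))  # ≈ 0.99999997, close to the constant 1
print(original(2.0))   # ≈ 0, since sin(pi*x) vanishes at x = 2
```

This tradeoff is why Alternative 2 gives up accuracy (98.8%) for the full 96.0× speedup.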

    Reproduce

    herbie shell --seed 1 
    (FPCore (x)
      :name "sin(PI*x)*sin(PI*x*.825)/(.825*x*x*PI*PI)"
      :precision binary64
      :pre (and (<= 0.0 x) (<= x 2.17))
      (/ (* (sin (* PI x)) (sin (* (* PI x) 0.825))) (* (* (* (* 0.825 x) x) PI) PI)))