
Average Error: 0.0 → 0.0
Time: 7.8s
Precision: binary64
Cost: 6784


\[\left(\left(\left(-1.79 \cdot 10^{+308} \leq self_x \land self_x \leq 1.79 \cdot 10^{+308}\right) \land \left(-1.79 \cdot 10^{+308} \leq rhs_x \land rhs_x \leq 1.79 \cdot 10^{+308}\right)\right) \land \left(-1.79 \cdot 10^{+308} \leq self_y \land self_y \leq 1.79 \cdot 10^{+308}\right)\right) \land \left(-1.79 \cdot 10^{+308} \leq rhs_y \land rhs_y \leq 1.79 \cdot 10^{+308}\right)\]
\[self_x \cdot rhs_x - self_y \cdot rhs_y \]
\[\mathsf{fma}\left(-rhs_y, self_y, self_x \cdot rhs_x\right) \]
(FPCore (self_x rhs_x self_y rhs_y)
 :precision binary64
 (- (* self_x rhs_x) (* self_y rhs_y)))
(FPCore (self_x rhs_x self_y rhs_y)
 :precision binary64
 (fma (- rhs_y) self_y (* self_x rhs_x)))
double code(double self_x, double rhs_x, double self_y, double rhs_y) {
	return (self_x * rhs_x) - (self_y * rhs_y);
}
// fma is declared in <math.h>; link with -lm
double code(double self_x, double rhs_x, double self_y, double rhs_y) {
	return fma(-rhs_y, self_y, (self_x * rhs_x));
}
function code(self_x, rhs_x, self_y, rhs_y)
	return Float64(Float64(self_x * rhs_x) - Float64(self_y * rhs_y))
end
function code(self_x, rhs_x, self_y, rhs_y)
	return fma(Float64(-rhs_y), self_y, Float64(self_x * rhs_x))
end
code[self$95$x_, rhs$95$x_, self$95$y_, rhs$95$y_] := N[(N[(self$95$x * rhs$95$x), $MachinePrecision] - N[(self$95$y * rhs$95$y), $MachinePrecision]), $MachinePrecision]
code[self$95$x_, rhs$95$x_, self$95$y_, rhs$95$y_] := N[((-rhs$95$y) * self$95$y + N[(self$95$x * rhs$95$x), $MachinePrecision]), $MachinePrecision]
self_x \cdot rhs_x - self_y \cdot rhs_y
\mathsf{fma}\left(-rhs_y, self_y, self_x \cdot rhs_x\right)

Error

Derivation

  1. Initial program 0.0

    \[self_x \cdot rhs_x - self_y \cdot rhs_y \]
  2. Applied egg-rr 0.0

    \[\leadsto \color{blue}{self_y \cdot \left(-rhs_y\right) + \left(self_x \cdot rhs_x + \mathsf{fma}\left(-self_y, rhs_y, self_y \cdot rhs_y\right)\right)} \]
  3. Taylor expanded in self_y around 0 0.0

    \[\leadsto \color{blue}{\left(rhs_y + -2 \cdot rhs_y\right) \cdot self_y + self_x \cdot rhs_x} \]
  4. Simplified 0.0

    \[\leadsto \color{blue}{\mathsf{fma}\left(-rhs_y, self_y, self_x \cdot rhs_x\right)} \]
    Proof

    [Start] 0.0

    \[ \left(rhs_y + -2 \cdot rhs_y\right) \cdot self_y + self_x \cdot rhs_x \]

    distribute-rgt1-in [=>] 0.0

    \[ \color{blue}{\left(\left(-2 + 1\right) \cdot rhs_y\right)} \cdot self_y + self_x \cdot rhs_x \]

    metadata-eval [=>] 0.0

    \[ \left(\color{blue}{-1} \cdot rhs_y\right) \cdot self_y + self_x \cdot rhs_x \]

    fma-def [=>] 0.0

    \[ \color{blue}{\mathsf{fma}\left(-1 \cdot rhs_y, self_y, self_x \cdot rhs_x\right)} \]

    mul-1-neg [=>] 0.0

    \[ \mathsf{fma}\left(\color{blue}{-rhs_y}, self_y, self_x \cdot rhs_x\right) \]
  5. Final simplification 0.0

    \[\leadsto \mathsf{fma}\left(-rhs_y, self_y, self_x \cdot rhs_x\right) \]

Alternatives

Alternative 1
Error 15.6
Cost 1298
\[\begin{array}{l} \mathbf{if}\;self_x \cdot rhs_x \leq -7.8 \cdot 10^{+24} \lor \neg \left(self_x \cdot rhs_x \leq -160000000000\right) \land \left(self_x \cdot rhs_x \leq -1.02 \cdot 10^{-82} \lor \neg \left(self_x \cdot rhs_x \leq 8\right)\right):\\ \;\;\;\;self_x \cdot rhs_x\\ \mathbf{else}:\\ \;\;\;\;rhs_y \cdot \left(-self_y\right)\\ \end{array} \]
Alternative 2
Error 0.0
Cost 448
\[self_x \cdot rhs_x - rhs_y \cdot self_y \]
Alternative 3
Error 31.0
Cost 192
\[self_x \cdot rhs_x \]

Error

Reproduce

herbie shell --seed 1 
(FPCore (self_x rhs_x self_y rhs_y)
  :name "self_x * rhs_x - self_y * rhs_y"
  :precision binary64
  :pre (and (and (and (and (<= -1.79e+308 self_x) (<= self_x 1.79e+308)) (and (<= -1.79e+308 rhs_x) (<= rhs_x 1.79e+308))) (and (<= -1.79e+308 self_y) (<= self_y 1.79e+308))) (and (<= -1.79e+308 rhs_y) (<= rhs_y 1.79e+308)))
  (- (* self_x rhs_x) (* self_y rhs_y)))