Calculating Sines
January 12, 2010
[ Today’s exercise was written by guest author Bill Cruise, who blogs at Bill the Lizard, where he is currently describing his adventures studying SICP. Feel free to contact me if you have an idea for an exercise, or if you wish to contribute your own exercise. ]
We calculated the value of pi, and logarithms to seven digits, in two previous exercises. We continue that thread in the current exercise with a function that calculates sines.
Sines were first computed by the Indian astronomer Aryabhata in the sixth century, and further developed by the Persian mathematician Muhammad ibn Mūsā al-Khwārizmī (from whose name derives our modern word algorithm) in the ninth century. Sines were later studied by the European mathematicians Leibniz and Euler in the seventeenth and eighteenth centuries. The word “sine” itself derives from an earlier mis-translation (to the Latin “sinus”) of the word “jya” used by Aryabhata; Euler popularized the modern notation. The sine of an angle is the ratio of the length of the opposite side to the length of the hypotenuse in a right triangle. (You may remember the mnemonic SOHCAHTOA if you’ve ever taken a course in trigonometry.)
One way to calculate the sine of an angle expressed in radians is by summing terms of the Taylor series:

sin x = x - x^3/3! + x^5/5! - x^7/7! + ...
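To see how quickly the series converges for small arguments, here is a quick check (my numbers, not part of the original exercise): taking just the first three terms at x = 0.5 gives

0.5 - 0.5^3/3! + 0.5^5/5! = 0.5 - 0.0208333 + 0.0002604 = 0.4794271

which is already within 2e-6 of the true value sin 0.5 = 0.4794255.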
Another method of computing the sine comes from the triple-angle formula sin x = 3 sin(x/3) - 4 sin^3(x/3). Since lim(x->0) sin(x)/x = 1, sin x ~= x when x is near zero, so a recursion that drives x to zero can calculate the sine of x.
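As a quick numeric illustration (my numbers, not from the original post): three applications of the identity reduce sin 1 to expressions in sin(1/27), and 1/27 = 0.0370370… already agrees with sin(1/27) = 0.0370285… to four decimal places. Each further division by 3 shrinks the replacement error |x - sin x| ~= x^3/6 by a factor of 27, so once x falls below a tolerance like 1e-7 the base case sin x ~= x is essentially exact.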
Your task is to write two functions to calculate the sine of an angle, one based on the Taylor series and the other based on the recursive formula. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
Here is my version of taylor-sine, which keeps the numerator and denominator separately and calculates each by updating the previous value rather than recalculating each from scratch at each iteration:

(define epsilon 1e-7) ; convergence tolerance, for seven digits

(define (taylor-sine x)
  (let loop ((n x) (d 1) (k 1) (s x) (t -1))
    (let* ((next-n (* n x x))
           (next-d (* d (+ k 1) (+ k 2)))
           (next (* t (/ next-n next-d))))
      (if (< (abs next) epsilon)
          (exact->inexact (+ s next))
          (loop next-n next-d (+ k 2) (+ s next) (* t -1))))))
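For example, with the epsilon above (value truncated for display):

> (taylor-sine 1)
0.8414709846

This matches the true sin 1 = 0.8414709848… to nine decimal places; the residual is essentially the first omitted term, 1/13! ≈ 1.6e-10.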
I also eschew the range-reduction optimization, as http://dotancohen.com/eng/taylor-sine.php shows little improvement from it.
My solution using C:

#include <stdio.h>
#include <math.h>

#define PI 3.141592654

long double factorial(int n)
{
    return (n <= 1) ? 1 : n * factorial(n - 1);
}

double tylor_sine(double x)
{
    int n = 0, fac;
    double result = 0, term = 0;
    do {
        fac = (2 * n + 1);
        result = result + term;
        term = pow(-1, n) * (pow(x, fac) / factorial(fac));
        n++;
    } while (fabs(term) >= .00000001);
    return result;
}

double cube(double x)
{
    return x * x * x;
}

double sine(double angle)
{
    if (angle < 0.00000001) {
        return angle;
    }
    double s = sine(angle / 3.0); /* compute the recursive call once, not twice */
    return 3 * s - 4 * cube(s);
}

int main()
{
    printf("tylor-series sine(10): %f\n", tylor_sine(10.0));
    printf("triple-angle sine(10): %f\n", sine(10.0));
    printf("tylor-series sine(1): %f\n", tylor_sine(1.0));
    printf("triple-angle sine(1): %f\n", sine(1.0));
    printf("tylor-series sine(PI/2): %f\n", tylor_sine(PI / 2));
    printf("triple-angle sine(PI/2): %f\n", sine(PI / 2));
    return 0;
}
programmingpraxis: I’d say that the page you linked to gives a strong argument in favor of range reduction. It shows that when you reduce the range, only very few terms of the Taylor series are required to accurately approximate sine(x), even for large x. Your optimization of keeping the numerator and (particularly) the denominator, and updating each instead of recalculating on each iteration, helps a lot. However, as x grows, even just updating the factorial in the denominator can take quite a bit of time. For example, without using range reduction it takes my machine about 25 seconds to compute (taylor-sine 1000). With range reduction I get the answer immediately, with no significant loss of accuracy.
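For readers who want to try it, the reduction itself is a one-liner; a sketch in Scheme (my illustration, not the code used for the timing above):

(define pi 3.141592653589793)

; map x to the equivalent angle in [-pi, pi] by subtracting
; the nearest multiple of the period 2*pi
(define (reduce-range x)
  (- x (* 2 pi (round (/ x (* 2 pi))))))

With this, (taylor-sine (reduce-range 1000)) sums only a dozen or so terms, where the unreduced series at x = 1000 must sum thousands of terms before they shrink below epsilon.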
My Haskell solution (see http://bonsaicode.wordpress.com/2010/01/12/programming-praxis-calculating-sines/ for a version with comments):
import Data.Fixed

taylorSin :: Double -> Double
taylorSin x = sum . useful $ zipWith (/)
    (map (\k -> (mod' x (2*pi)) ** (2*k + 1) * (-1) ** k) [0..])
    (scanl (\a k -> a * k * (k - 1)) 1 [3,5..])
    where useful ~(a:b:c) = a : if abs (a-b) > 1e-7 then useful (b:c) else []

recSin :: Double -> Double
recSin = f . flip mod' (2 * pi)
    where f x = if abs x < 1e-7 then x
                else let s = f (x / 3) in 3 * s - 4 * s**3

Bill the Lizard: You are correct. My mistake.
My purely iterative solution, with range reduction, in 27 lines: http://codepad.org/HgaJbSbh
Just a nitpick: shouldn’t it be lim ( sin(x) / x ) = 1 when x -> 0?
My first attempt at programming in Haskell:

sin' :: (Double -> Double) -> Double -> Double
sin' f x = reduce1 f (x - fromIntegral (truncate (x / (2*pi))) * (2*pi))

reduce1 :: (Double -> Double) -> Double -> Double
reduce1 f x
    | x >= 0    = reduce2 f x
    | otherwise = negate (reduce2 f (negate x))

reduce2 :: (Double -> Double) -> Double -> Double
reduce2 f x
    | x <= pi/2   = f x
    | x <= pi     = f (pi - x)
    | x <= 3*pi/2 = negate (f (x - pi))
    | otherwise   = negate (f (2*pi - x))

taylor :: Double -> Double
taylor x = let (_, _, _, r) = foldl sumOdd (1, 1, -1, 0) [1..20] in r
    where sumOdd (n, d, s, p) el
              | even el   = (n*x, d * fromIntegral el, -s, p - s*n/d)
              | otherwise = (n*x, d * fromIntegral el, s, p)

sinIter :: Double -> Double
sinIter x
    | x < 1e-7  = x
    | otherwise = let s = sinIter (x/3) in 3*s - 4*s**3
And amazingly it works:

*Main> sin' taylor (pi*0.25)
0.7071067811865475
*Main> sin' sinIter (pi*0.25)
0.7071067811865475
*Main> sin (pi*0.25)
0.7071067811865475
And it took me only 10 times longer than it would have taken me in C …
Manish: Fixed. My fault, not Bill’s.
A version written in F#.
let epsilon = 1e-7
let pi = 3.141592654
let sin (x:float) =
    let rec series sum f d v k2 =
        let next = f * d / float v
        if abs(next) < epsilon then
            sum + next
        else
            series (sum + next) (-f) (d * x * x) (v * (k2 + 1) * (k2 + 2)) (k2 + 2)
    series 0.0 1.0 x 1 1 // the first term of the series is x, so the numerator d starts at x
let rec sin' x =
    if abs(x) < epsilon then x
    else
        let s = sin' (x / 3.0)
        3.0 * s - 4.0 * s ** 3.0

val sin : float -> float
val sin' : float -> float
> sin 1.0;;
val it : float = 0.8414709846
> sin' 1.0;;
val it : float = 0.8414709848
the comment section ate my code …
Here is a version written in F#.
http://pastie.org/777280
Python:
from math import pi

def reduce_range(x):
    # exploit periodicity and symmetry: map x into (-pi, pi]
    sign, x = (1, x) if x >= 0 else (-1, -x)
    if x >= 2*pi:
        x -= int(x / (2*pi)) * 2*pi
    if x > pi:
        x = x - 2*pi
    return x if sign > 0 else -x

def sin_taylor(x, eps=1.0e-7):
    x = reduce_range(x)
    x2 = x*x
    sinx = 0
    num, den, k, sign = (float(x), 1.0, 3.0, 1)
    while True:
        delta = num / den
        sinx += sign * delta
        if abs(delta) < eps:    # abs() so negative reduced angles terminate too
            break
        num, den, k, sign = (num * x2, den*k*(k-1), k+2, -sign)
    return sinx

def sin_3angle(x, eps=1e-7):
    def _3angle(x):
        if abs(x) < eps:        # abs() here as well
            return x
        else:
            f = _3angle(x/3.0)
            return f * (3 - 4*f*f)
    return _3angle(reduce_range(x))
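A quick sanity check of both functions (my addition; values approximate):

from math import sin
print sin_taylor(10.0), sin(10.0)   # both about -0.5440211
print sin_3angle(10.0)              # about -0.5440211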
Version in Factor. Interesting that the Taylor series is minimally accurate to the given precision (1e-7), but the recursive version is much more precise than our minimum of 1e-7. Probably the case, as for values that small, sin x ~= x.

USING: kernel math math.constants math.libm locals arrays sequences lists lists.lazy ;
IN: sines

CONSTANT: epsilon 1e-7

: normalize-theta ( x -- x )
    [let 1 :> sign!
        dup 0 < [ -1 * -1 sign! ] when
        pi 2 * fmod
        dup pi > [ 2 pi * - ] when
        sign * ] ;

! create the lazy list of taylor terms (not so simple :)
: numerators ( x -- lazy-list )
    dup 2array [ first2 over * over * -1 * 2array ] lfrom-by [ second ] lazy-map ;

: denominators ( -- lazy-list )
    { 1 1 } [ first2 [ 1 + ] dip over * [ 1 + ] dip over * 2array ] lfrom-by [ second ] lazy-map ;

: taylor-terms ( x -- lazy-list )
    numerators denominators lzip [ first2 / ] lazy-map ;

: taylor-sine ( x -- x )
    >float normalize-theta taylor-terms [ abs epsilon < ] luntil 0 [ + ] foldl ;

! sin x = 3 sin (x/3) - 4 sin^3 (x/3)
: sine ( x -- x )
    dup abs epsilon > [ 3.0 / sine [ 3 * ] [ dup sq * 4 * ] bi - ] when ;
My sin_taylor() works quite well, but my recursive sin_limit runs into numerical trouble if epsilon is too small…

#!/usr/bin/env python
from __future__ import division
from itertools import count, takewhile
from operator import mul

def factorial(n):
    return reduce(mul, xrange(2, n + 1), 1)

def sin_taylor(x, eps):
    a = lambda k: pow(x, 2 * k + 1) / factorial(2 * k + 1)
    return sum(pow(-1, k) * a(k)
               for k in takewhile(lambda k: a(k) > eps, count()))

def sin_limit(x, eps):
    # Loses accuracy if eps < 1e-6...
    # (note: the base case returns eps rather than x, and each level of the
    # triple-angle formula multiplies that base-case error by roughly 3)
    if x < eps:
        return eps
    else:
        return 3 * sin_limit(x / 3, eps) - 4 * pow(sin_limit(x / 3, eps), 3)
Like Matías Giovannini, I went for the purely iterative implementations:

use strict;
use warnings;

our $ABSTOL = 0.0000000001;
our $PI2    = 6.28318530717958648;

#
# evaluates the taylor expansion of sin(x) by
# first performing range reduction, and then
# updating the numerator, denominator, and
# partial sum at each iteration.
#
sub taylor_sine {
    my $x = shift;

    # sine is periodic, so map domain to [-x, x]
    $x -= int($x / $PI2) * $PI2;

    # local vars
    my ($s, $t, $n, $d, $k) = ($x, 0, $x, 1, 1);

    do {
        # save previous term
        $t = $s;

        # update numerator and denominator
        $n *= $x * $x;
        $d *= ++$k;
        $d *= ++$k;

        # divide (odd) iterator by 2 and check if odd
        if (($k >> 1) & 1) { $s -= $n / $d }
        else               { $s += $n / $d }
    } until abs($t - $s) <= $ABSTOL;

    return $s;
}

#
# iteratively calculates sin(x) through the triple
# angle formula, first performing range reduction
#
# MUCH faster than the recursive implementation
#
sub triple_sine {
    my $x = shift;

    # sine is periodic, so map domain to [-x, x]
    $x -= int($x / $PI2) * $PI2;

    # recursion depth counter
    my $n = 0;

    # find our base case
    while (abs($x) > $ABSTOL and ++$n) {
        $x /= 3;
    }

    # evaluate back up our expression tree
    while ($n--) {
        $x = 3 * $x - 4 * $x * $x * $x;
    }

    return $x;
}

die "\nusage:\n\t\tperl $0 <degrees>\n"
    unless scalar @ARGV and $ARGV[0] =~ /^[-0-9.]+$/;

printf("\ntaylor_sine(%f) = %4.12f\n", $ARGV[0], taylor_sine($ARGV[0]));
printf("\ntriple_sine(%f) = %4.12f\n", $ARGV[0], triple_sine($ARGV[0]));

oops, usage says to provide input in degrees, but that should be radians
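A sample run (hypothetical file name; the last digit or two depend on $ABSTOL and rounding):

$ perl sines.pl 1
taylor_sine(1.000000) = 0.841470984808
triple_sine(1.000000) = 0.841470984808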