One of the questions was whether a calculation method should return a rounded value, or whether the result should be precise and rounded by the caller. Another question was how to represent the values and which functionality to use for the actual rounding.

There was a suggestion to use `BigDecimal` objects everywhere instead of simple `double` types because this class provides convenient methods for rounding. Of course, when you need the higher precision, this might be a great choice. However, when you don't need that and are just using the class for its rounding capabilities, the solution is probably over-engineered. Well, I voted against that mainly for two reasons: performance and object size.
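For illustration, `BigDecimal` rounding is indeed convenient, but simple fixed-place rounding works with primitives too. The snippet below is just a sketch (the class name and the two-decimal-place rounding are my own example, not from the discussion):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundingExamples {
    public static void main(String[] args) {
        double value = 123.4567;

        // BigDecimal offers rounding directly via setScale:
        BigDecimal rounded = new BigDecimal(Double.toString(value))
                .setScale(2, RoundingMode.HALF_UP);
        System.out.println(rounded); // 123.46

        // The same result with primitives and Math.round:
        double primitiveRounded = Math.round(value * 100.0) / 100.0;
        System.out.println(primitiveRounded); // 123.46
    }
}
```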

#### 1) Performance

It's obvious that calculations with primitive data types are faster than with `BigDecimal`s (or `BigInteger`s). But... how much? A small Java code snippet helps to estimate the performance penalty:

```java
import java.math.BigDecimal;

public class Benchmark {
    public static void main(String[] args) {
        final long iterations = 1000000;

        // One million multiply+add operations with primitive doubles
        long t = System.currentTimeMillis();
        double d = 123.456;
        for (int i = 0; i < iterations; i++) {
            final double b = d * ((double) System.currentTimeMillis()
                    + (double) System.currentTimeMillis());
        }
        System.out.println("double: " + (System.currentTimeMillis() - t));

        // The same calculation with BigDecimal objects
        t = System.currentTimeMillis();
        BigDecimal bd = new BigDecimal("123.456");
        for (int i = 0; i < iterations; i++) {
            final BigDecimal b = bd.multiply(
                    BigDecimal.valueOf(System.currentTimeMillis()).add(
                            BigDecimal.valueOf(System.currentTimeMillis())));
        }
        System.out.println("java.math.BigDecimal: " + (System.currentTimeMillis() - t));
    }
}
```

We are not interested in absolute numbers here, but only in the comparison between `double`s and `BigDecimal`s. It turns out that one million operations (each one multiplication and one addition of a double value) take approximately 3-4 times longer with `BigDecimal` than with `double`s (on my poor old laptop with Java 5).

Interestingly, when trying the same for `BigInteger` and `long`, the factor is approximately 5, i.e. the performance difference is even higher.

With Java 6, the code runs faster for all types, but calculation with primitives shows the greater improvement, so the performance penalty for using `Big*` is even higher: 4-5 for `BigDecimal`, 6 for `BigInteger`.
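The `long` vs. `BigInteger` comparison can be measured with an analogous snippet, following the same pattern as above (a sketch; the class and method names are mine, and absolute numbers will differ per machine and JVM):

```java
import java.math.BigInteger;

public class BigIntegerBenchmark {
    // Time a loop of primitive long multiply+add operations (milliseconds).
    static long timeLong(int iterations) {
        long t = System.currentTimeMillis();
        long l = 123456L;
        for (int i = 0; i < iterations; i++) {
            long b = l * (System.currentTimeMillis() + System.currentTimeMillis());
        }
        return System.currentTimeMillis() - t;
    }

    // The same calculation with BigInteger objects.
    static long timeBigInteger(int iterations) {
        long t = System.currentTimeMillis();
        BigInteger bi = BigInteger.valueOf(123456L);
        for (int i = 0; i < iterations; i++) {
            BigInteger b = bi.multiply(
                    BigInteger.valueOf(System.currentTimeMillis()).add(
                            BigInteger.valueOf(System.currentTimeMillis())));
        }
        return System.currentTimeMillis() - t;
    }

    public static void main(String[] args) {
        int iterations = 1000000;
        System.out.println("long: " + timeLong(iterations));
        System.out.println("java.math.BigInteger: " + timeBigInteger(iterations));
    }
}
```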

#### 2) Object Size

Everybody would expect that a `BigDecimal` would need more memory than a primitive `double`, right? But how much is it? We are going to have big objects with up to hundreds of decimal values, so the bigger `BigDecimal`s might sum up to a critical value when thinking of transporting those objects between processes (web service calls) or holding them in the session (for web applications). It happened that I blogged about how to determine an object's size in my last post ;-) Hence, we can just move on to the actual figures:


- `double`: 8 bytes
- `Double`: 16 bytes (8 bytes overhead for the class, 8 bytes for the contained `double`)
- `BigDecimal`: 32 bytes
- `long`: 8 bytes
- `Long`: 16 bytes (8 bytes overhead for the class, 8 bytes for the contained `long`)
- `BigInteger`: 56 bytes
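As a rough cross-check, per-object cost can also be approximated by comparing used-heap snapshots around a bulk allocation. This is only a sketch (the class and method names are mine, and GC timing and JVM internals make the result approximate, not exact):

```java
import java.math.BigDecimal;

public class ObjectSizeEstimate {
    // Approximate the heap cost per BigDecimal by allocating many
    // instances and comparing used-heap snapshots before and after.
    static long approxBytesPerBigDecimal(int count) {
        Runtime rt = Runtime.getRuntime();
        System.gc();
        long before = rt.totalMemory() - rt.freeMemory();

        BigDecimal[] values = new BigDecimal[count];
        for (int i = 0; i < count; i++) {
            values[i] = new BigDecimal(i); // distinct instances, no caching
        }

        System.gc();
        long after = rt.totalMemory() - rt.freeMemory();

        // Note: the result also includes the array's own reference
        // slots (4-8 bytes per element), so it slightly overestimates.
        return (after - before) / count;
    }

    public static void main(String[] args) {
        System.out.println("approx. bytes per BigDecimal: "
                + approxBytesPerBigDecimal(1000000));
    }
}
```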

Wow. It seems that `BigDecimal` is 4 times as big as `double` and twice the size of `Double` – which is not that bad. As before, `BigInteger` has the bigger penalty with respect to object size as well.

#### 3) Conclusion

All in all, using `BigDecimal` instead of `double` means roughly a factor of 4 in both memory footprint and performance penalty. A good reason not to use `BigDecimal`s just for their rounding functionality...!
Wrapping always causes a performance and memory penalty.

However, wherever precise rounding is required (e.g. in finance), precise number storage is almost always required as well, and `double` simply does not provide that. I never use `float`/`double` types when operating on financial data.
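The storage-precision point is easy to demonstrate with a small snippet (an illustration of the well-known binary floating-point representation issue; the class name is mine):

```java
import java.math.BigDecimal;

public class DecimalPrecision {
    public static void main(String[] args) {
        // Binary floating point cannot represent 0.1 exactly,
        // so small errors show up even in trivial sums:
        System.out.println(0.1 + 0.2); // 0.30000000000000004

        // BigDecimal constructed from strings keeps exact decimal values:
        BigDecimal sum = new BigDecimal("0.1").add(new BigDecimal("0.2"));
        System.out.println(sum); // 0.3
    }
}
```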