[RndTbl] Problem with g++ optimization

Dan Martin ummar143 at cc.umanitoba.ca
Tue Sep 5 00:15:26 CDT 2006


Thanks for the advice, Sean.

I suspect it's a double-precision issue, but I cannot find which 
specific flags are involved.
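
For what it's worth, here is a minimal sketch of the effect I suspect 
(assuming a 32-bit x86 target using x87 floating point; the code is 
only an illustration, not from my program):

  #include <cstdio>

  int main() {
      volatile double a = 3.0, b = 7.0;  // volatile defeats constant folding
      double x = a / b;  // rounded to 64 bits if spilled to memory
      // On x87 hardware the re-computed quotient below can live in an
      // 80-bit register, so it need not equal the stored 64-bit value.
      if (x == a / b)
          std::printf("equal\n");
      else
          std::printf("not equal\n");
      return 0;
  }

On such a target this can print "not equal" at -O0 (x is spilled to a 
64-bit stack slot) but "equal" at -O2 (x can stay in an 80-bit 
register), which is the same kind of optimization-dependent behaviour 
I am seeing.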

Image column from the original program, unknown compiler optimizations:
  0:  9.40411e-05
  1:  0.000374236
  2:  0.000421015
  3:  0.000158423
  4:  0.000421016
  5: -6.64318e-05
  6:  0.000225337

Image column from my program, no compiler optimizations:
  0:  9.40411e-05
  1:  0.000388825
  2:  0.000421015
  3:  0.000158423
  4:  0.000421016
  5:  0.000374236
  6:  0.000356633

Difference:
  0:            0
  1: -1.45885e-05
  2:            0
  3:            0
  4:            0
  5: -0.000440668
  6: -0.000131296

Percentage-wise, these differences can be quite significant.
If I compile a single file from my version of the program 
(backprojectors.cpp) with the -O2 option, the differences disappear.
I combined all of the options that the gcc man page said were included 
in -O1 and -O2, and the differences did NOT disappear, so I still don't 
know which specific options make the difference in the original 
program. (The gcc documentation does note that not all optimizations 
are controlled directly by a flag, which may be why combining the 
listed options alone doesn't reproduce the -O2 behaviour.)
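
One source-level experiment that might pin this down independently of 
the flags: force each intermediate result through a 64-bit memory 
slot, which is roughly what -ffloat-store does for stores. 
round_to_double() below is a hypothetical helper, not anything from 
backprojectors.cpp:

  #include <cstdio>

  // Hypothetical helper: the volatile round trip discards any x87
  // excess precision, rounding the value to a 64-bit double no matter
  // what the optimization level is.
  static inline double round_to_double(double x) {
      volatile double v = x;
      return v;
  }

  int main() {
      volatile double a = 3.0, b = 7.0;
      double q1 = a / b;                   // may carry 80-bit precision
      double q2 = round_to_double(a / b);  // always rounded to 64 bits
      std::printf("q1 %s q2\n", q1 == q2 ? "==" : "!=");
      return 0;
  }

If wrapping the accumulations in the backprojector this way makes the 
unoptimized output match the -O2 output, that would point at excess 
precision rather than at a genuine code-generation bug.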

Sean Cody wrote:

> From the gcc man page...
>
>     Important notes: -ffast-math results in code that is not
>     necessarily IEEE-compliant.  -fstrict-aliasing is highly likely
>     to break non-standard-compliant programs.  -malign-natural only
>     works properly if the entire program is compiled with it, and
>     none of the standard headers/libraries contain any code that
>     changes alignment when this option is used.
>
> If you are having precision issues with the doubles, kill the above
> flags in your optimizations (e.g. -fno-fast-math).
> I'm going to assume that v is global, so it may not be a scoping
> problem; from your description it _sounds_ more like a precision issue.
> If it is a precision issue, start selectively turning off the options
> from the optimization flags and find out which one is mucking up the
> results.
>
> I wouldn't assume it is the generated code until you can prove it by
> finding a particular optimization strategy that is producing the issue.
> In integer-only cases I would argue the compiler is outputting
> correct code, but with reals the size/space optimizations can do a
> _whole lot_ of damage.
>
> If the values are within a valid range I would suggest liberal
> application of assertions. It is hard to say what you mean by correct
> values, but if the numbers are off by a certain amount, or are rounded
> up or down from what you expect, then I would put money on precision
> issues.
>
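
For reference, the kind of range assertion Sean suggests might look 
like the sketch below; the bound and tolerance are placeholders, not 
values from the real program:

  #include <cassert>
  #include <cmath>

  // Hypothetical checks, not from the real program.
  inline void check_pixel(double value) {
      assert(std::fabs(value) < 1.0);  // placeholder plausibility bound
  }

  inline bool close_enough(double got, double expected) {
      const double tol = 1e-9;  // placeholder mixed tolerance
      return std::fabs(got - expected) <= tol * (std::fabs(expected) + 1.0);
  }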

-- 
  -Dan

Dr. Dan Martin, MD, CCFP, BSc, BCSc (Hon)

GP Hospital Practitioner
Computer Science grad student
ummar143 at cc.umanitoba.ca



