You can change the rounding mode 
with http://docs.julialang.org/en/latest/stdlib/base/#Base.set_rounding
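
A hedged note on that link: the function was named `set_rounding` in the Julia of that era; current releases spell it `setrounding`, and after Julia 1.0 it only applies to `BigFloat` (hardware-float rounding control was removed). A minimal sketch under those assumptions:

```julia
# `set_rounding` was renamed `setrounding`; in Julia 1.x the rounding
# mode can only be changed for BigFloat.
lo = setrounding(BigFloat, RoundDown) do
    BigFloat(1) / 3    # quotient rounded toward -Inf
end
hi = setrounding(BigFloat, RoundUp) do
    BigFloat(1) / 3    # quotient rounded toward +Inf
end
lo < 1//3 < hi         # the two roundings bracket the true value
```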

At 08:16:20 UTC+1 on Monday, February 24, 2014, Jason Merrill wrote:
>
> Thanks for the kind words.
>
> Is the goal of the linked float range code to make things like 
> `1.0:1/n:2.0` work more reliably? Seems like a nice approach.
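
The kind of reliability in question can be seen in a small sketch (assuming the rational "lifting" behavior that eventually landed in Julia's range code):

```julia
# 1/49 is not exactly representable in Float64, yet the range still
# hits the endpoint exactly, because the start and step are "lifted"
# to nearby rationals before the points are generated.
r = 1.0:1/49:2.0
length(r)        # 50 points: 49 steps from 1.0 to 2.0
last(r) == 2.0   # the endpoint comes out exact
```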
>
> On Sunday, February 23, 2014 10:28:24 AM UTC-8, Stefan Karpinski wrote:
>>
>> This is a lovely blog post. I've given a few talks on floating-point 
>> arithmetic using Julia for live coding and demonstrations. The fact that 
>> there is a next and previous floating point number – with nothing in 
>> between – always blows people's minds, even though this is an immediate and 
>> fairly obvious consequence of there only being a finite number of floats. 
>> Internalizing that fact is, imo, the key to understanding many of the 
>> unintuitive aspects of floating point – and this post is an excellent 
>> exposition of that fact.
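
The next/previous-float fact is easy to demonstrate directly with Base's `nextfloat` and `prevfloat` (a minimal sketch):

```julia
x = 1.0
nextfloat(x)                   # the very next representable Float64
nextfloat(x) - x == eps(1.0)   # the gap at 1.0 is exactly one ulp
prevfloat(nextfloat(x)) == x   # there is literally nothing in between
```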
>>
>> I'm also increasingly convinced that if you're using eps, you're probably 
>> doing it wrong. You should instead rely on the quantized nature of floats, 
>> as your correct stopping criterion and _middle algorithm do. The pending 
>> "intuitive" float range 
>> algorithm<https://github.com/JuliaLang/julia/blob/adff4353ef3f50e8ffa1bebc857c40c10454f150/base/range.jl#L116-L157>, 
>> for example, is completely epsilon-free. Even the use of nextfloat and 
>> prevfloat is just an optimization, allowing the algorithm to skip trying to 
>> "lift" the start and step values when there's no possible chance of it 
>> working.
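
The epsilon-free stopping criterion alluded to here can be sketched as a bisection that halts exactly when no float lies strictly between the bracket endpoints (a hedged reconstruction, not necessarily the post's exact code; the post's `_middle` is replaced by a plain midpoint):

```julia
# Bisect f on [lo, hi] until no Float64 lies strictly between them.
# No eps-based tolerance is needed: the loop stops precisely when the
# interval can no longer be split.
function bisect(f, lo::Float64, hi::Float64)
    @assert f(lo) * f(hi) <= 0            # root must be bracketed
    while true
        mid = lo + (hi - lo) / 2          # midpoint; stays in [lo, hi]
        (mid == lo || mid == hi) && return (lo, hi)
        f(mid) * f(lo) <= 0 ? (hi = mid) : (lo = mid)
    end
end

bisect(x -> x^2 - 2, 1.0, 2.0)   # brackets sqrt(2) to the last bit
```

The returned pair is always a one-ulp bracket: `hi == nextfloat(lo)`, with the root pinned between adjacent floats.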
>>
>> Next time I give a floating point talk, I'm going to give this blog post 
>> as suggested further reading!
>>
>> On Sat, Feb 22, 2014 at 8:52 PM, Jason Merrill <[email protected]> wrote:
>>
>>> I'm working on a series of blog posts that highlight some basic aspects 
>>> of floating point arithmetic with examples in Julia. The first one, on 
>>> bisecting floating point numbers, is available at
>>>
>>>   http://squishythinking.com/2014/02/22/bisecting-floats/
>>>
>>> The intended audience is basically a version of me several years ago, 
>>> early in physics grad school. I wrote a fair amount of basic numerical 
>>> code then, both for problem sets and for research, but no one ever sat me 
>>> down and explained the nuts and bolts of how computers represent numbers. I 
>>> thought that floating point numbers were basically rounded-off real numbers 
>>> that didn't quite work right all the time, but were usually fine.
>>>
>>> In the intervening years, I've had the chance to work on a few 
>>> algorithms that leverage the detailed structure of floats, and I'd like to 
>>> share some of the lessons I picked up along the way, in case there's anyone 
>>> else reading who is now where I was then.
>>>
>>> Some of the material is drawn from a talk I gave at the Bay Area Julia 
>>> Users meetup in January, on the motivations behind PowerSeries.jl.
>>>
>>
>>
