Qt qFloor and double comparison

0.1 + 0.2 == 0.3 -> false 
0.1 + 0.2 -> 0.30000000000000004 

Any ideas why this happens?

Replies

Binary floating point math is like this. In most programming languages, it is based on the IEEE 754 standard. JavaScript uses 64-bit floating point representation, which is the same as Java's double. The crux of the problem is that numbers are represented in this format as a whole number times a power of two; rational numbers (such as 0.1, which is 1/10) whose denominator is not a power of two cannot be exactly represented.

For 0.1 in the standard binary64 format, the representation can be written exactly as

  • 0.1000000000000000055511151231257827021181583404541015625 in decimal, or
  • 0x1.999999999999ap-4 in C99 hexfloat notation.

In contrast, the rational number 0.1, which is 1/10, can be written exactly as

  • 0.1 in decimal, or
  • 0x1.99999999999999...p-4 in an analogue of C99 hexfloat notation, where the ... represents an unending sequence of 9's.

The constants 0.2 and 0.3 in your program will also be approximations to their true values. It happens that the closest double to 0.2 is larger than the rational number 0.2 but that the closest double to 0.3 is smaller than the rational number 0.3. The sum of 0.1 and 0.2 winds up being larger than the rational number 0.3 and hence disagreeing with the constant in your code.
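You can see those approximations directly from JavaScript (a quick illustrative sketch of mine, not part of the original answer; toPrecision accepts up to 100 significant digits):

// Print the exact doubles that the literals 0.1, 0.2, 0.3 and the sum denote.
console.log((0.1).toPrecision(60));
// 0.100000000000000005551115123125782702118158340454101562500000
console.log((0.2).toPrecision(60));
// 0.200000000000000011102230246251565404236316680908203125000000
console.log((0.1 + 0.2).toPrecision(60));
// 0.300000000000000044408920985006261616945266723632812500000000
console.log((0.3).toPrecision(60));
// 0.299999999999999988897769753748434595763683319091796875000000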

A fairly comprehensive treatment of floating-point arithmetic issues is What Every Computer Scientist Should Know About Floating-Point Arithmetic. For an easier-to-digest explanation, see floating-point-gui.de.

A Hardware Designer's Perspective

I believe I should add a hardware designer’s perspective to this since I design and build floating point hardware. Knowing the origin of the error may help in understanding what is happening in the software, and ultimately, I hope this helps explain the reasons for why floating point errors happen, and seem to accumulate over time.

1. Overview

From an engineering perspective, most floating point operations will have some element of error, since the hardware that does the floating point computations is only required to have an error of less than one half of one unit in the last place. Therefore, much hardware will stop at a precision that is only sufficient to yield an error of less than one half of one unit in the last place for a single operation, which is especially problematic in floating point division. What constitutes a single operation depends upon how many operands the unit takes. For most units it is two, but some take three or more operands. Because of this, there is no guarantee that repeated operations will result in a desirable error, since the errors add up over time.

2. Standards

Most processors follow the IEEE-754 standard, but some use denormalized modes or different standards. For example, there is a denormalized mode in IEEE-754 which allows the representation of very small floating point numbers at the expense of precision. The following, however, covers the normalized mode of IEEE-754, which is the typical mode of operation.

In the IEEE-754 standard, hardware designers are allowed any value of error/epsilon as long as it's less than one half of one unit in the last place, and the result only has to be less than one half of one unit in the last place for one operation. This explains why when there are repeated operations, the errors add up. For IEEE-754 double precision, this is the 54th bit, since 53 bits are used to represent the numeric part (normalized), also called the mantissa, of the floating point number (e.g. the 5.3 in 5.3e5). The next sections go into more detail on the causes of hardware error on various floating point operations.
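To connect this to JavaScript (my sketch, not part of the original answer): one unit in the last place at 1.0 is exposed as Number.EPSILON, which is 2^-52, so a single correctly rounded operation near that magnitude errs by less than half of it:

// Number.EPSILON is the gap between 1 and the next representable double (2^-52).
console.log(Number.EPSILON === Math.pow(2, -52));         // true
// The error of the single rounded sum 0.1 + 0.2 is below one ulp at 1.0:
console.log(Math.abs(0.1 + 0.2 - 0.3) < Number.EPSILON);  // true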

3. Cause of Rounding Error in Division

The main cause of the error in floating point division is the division algorithms used to calculate the quotient. Most computer systems calculate division using multiplication by an inverse: to compute Z = X/Y, the hardware computes Z = X * (1/Y). Division is computed iteratively, i.e. each cycle computes some bits of the quotient until the desired precision is reached, which for IEEE-754 is anything with an error of less than one unit in the last place. The table of reciprocals of Y (1/Y) is known as the quotient selection table (QST) in slow division, and the size in bits of the quotient selection table is usually the width of the radix (the number of bits of the quotient computed in each iteration) plus a few guard bits. For the IEEE-754 standard, double precision (64-bit), it would be the size of the radix of the divider, plus a few guard bits k, where k >= 2. So, for example, a typical quotient selection table for a divider that computes 2 bits of the quotient at a time (radix 4) would be 2 + 2 = 4 bits (plus a few optional bits).

3.1 Division Rounding Error: Approximation of Reciprocal

Which reciprocals appear in the quotient selection table depends on the division method: slow division such as SRT division, or fast division such as Goldschmidt division; each entry is modified according to the division algorithm in an attempt to yield the lowest possible error. In any case, though, all reciprocals are approximations of the actual reciprocal and introduce some element of error. Both slow division and fast division methods calculate the quotient iteratively, i.e. some number of bits of the quotient is calculated each step; the result is then subtracted from the dividend, and the divider repeats the steps until the error is less than one half of one unit in the last place. Slow division methods calculate a fixed number of digits of the quotient in each step and are usually less expensive to build; fast division methods calculate a variable number of digits per step and are usually more expensive to build. The most important part of the division methods is that most of them rely upon repeated multiplication by an approximation of a reciprocal, so they are prone to error.

4. Rounding Errors in Other Operations: Truncation

Another cause of the rounding errors in all operations is the different modes of truncation of the final answer that IEEE-754 allows: truncate, round-towards-zero, round-to-nearest (the default), round-down, and round-up. All methods introduce an element of error of less than one half of one unit in the last place for a single operation. Over time and repeated operations, truncation also adds cumulatively to the resultant error. This truncation error is especially problematic in exponentiation, which involves some form of repeated multiplication.

5. Repeated Operations

Since the hardware that does the floating point calculations only needs to yield a result with an error of less than one half of one unit in the last place for a single operation, the error will grow over repeated operations if not watched. This is the reason that, in computations that require a bounded error, mathematicians use methods such as the round-to-nearest even digit in the last place of IEEE-754 (because over time the errors are more likely to cancel each other out) and interval arithmetic combined with variations of the IEEE-754 rounding modes to predict rounding errors and correct them. Because of its low relative error compared to other rounding modes, round to nearest even digit (in the last place) is the default rounding mode of IEEE-754.

Note that the default rounding mode, round-to-nearest even digit in the last place, guarantees an error of less than one half of one unit in the last place for one operation. Using the truncation, round-up, and round down alone may result in an error that is greater than one half of one unit in the last place, but less than one unit in the last place, so these modes are not recommended unless they are used in Interval Arithmetic.
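A quick JavaScript sketch of how the error grows over repeated operations (my illustration, not part of the original answer): each addition below is individually rounded to within half an ulp, yet the total drifts away from the exact value.

// Sum 0.1 ten times; every single addition is correctly rounded,
// but the rounding errors accumulate across the repeated operations.
let sum = 0;
for (let i = 0; i < 10; i++) {
    sum += 0.1;
}
console.log(sum);        // 0.9999999999999999
console.log(sum === 1);  // false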

6. Summary

In short, the fundamental reason for the errors in floating point operations is a combination of the truncation in hardware, and the truncation of a reciprocal in the case of division. Since the IEEE-754 standard only requires an error of less than one half of one unit in the last place for a single operation, the floating point errors over repeated operations will add up unless corrected.

When you convert .1 or 1/10 to base 2 (binary) you get a repeating pattern after the decimal point, just like trying to represent 1/3 in base 10. The value is not exact, and therefore you can't do exact math with it using normal floating point methods.

Floating point rounding errors. 0.1 cannot be represented as accurately in base-2 as in base-10 due to the missing prime factor of 5. Just as 1/3 takes an infinite number of digits to represent in decimal, but is "0.1" in base-3, 0.1 takes an infinite number of digits in base-2 where it does not in base-10. And computers don't have an infinite amount of memory.

Most answers here address this question in very dry, technical terms. I'd like to address this in terms that normal human beings can understand.

Imagine that you are trying to slice up pizzas. You have a robotic pizza cutter that can cut pizza slices exactly in half. It can halve a whole pizza, or it can halve an existing slice, but in any case, the halving is always exact.

That pizza cutter has very fine movements, and if you start with a whole pizza, then halve that, and continue halving the smallest slice each time, you can do the halving 53 times before the slice is too small for even its high-precision abilities. At that point, you can no longer halve that very thin slice, but must either include or exclude it as is.

Now, how would you piece all the slices in such a way that would add up to one-tenth (0.1) or one-fifth (0.2) of a pizza? Really think about it, and try working it out. You can even try to use a real pizza, if you have a mythical precision pizza cutter at hand. :-)



Most experienced programmers, of course, know the real answer, which is that there is no way to piece together an exact tenth or fifth of the pizza using those slices, no matter how finely you slice them. You can do a pretty good approximation, and if you add up the approximation of 0.1 with the approximation of 0.2, you get a pretty good approximation of 0.3, but it's still just that, an approximation.

For double-precision numbers (which is the precision that allows you to halve your pizza 53 times), the numbers immediately less and greater than 0.1 are 0.09999999999999999167332731531132594682276248931884765625 and 0.1000000000000000055511151231257827021181583404541015625. The latter is quite a bit closer to 0.1 than the former, so a numeric parser will, given an input of 0.1, favour the latter.

(The difference between those two numbers is the "smallest slice" that we must decide to either include, which introduces an upward bias, or exclude, which introduces a downward bias. The technical term for that smallest slice is an ulp.)

In the case of 0.2, the numbers are all the same, just scaled up by a factor of 2. Again, we favour the value that's slightly higher than 0.2.

Notice that in both cases, the approximations for 0.1 and 0.2 have a slight upward bias. If we add enough of these biases in, they will push the number further and further away from what we want, and in fact, in the case of 0.1 + 0.2, the bias is high enough that the resulting number is no longer the closest number to 0.3.

In particular, 0.1 + 0.2 is really 0.1000000000000000055511151231257827021181583404541015625 + 0.200000000000000011102230246251565404236316680908203125 = 0.3000000000000000444089209850062616169452667236328125, whereas the number closest to 0.3 is actually 0.299999999999999988897769753748434595763683319091796875.
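If you want to inspect those "smallest slices" yourself, here is a small JavaScript sketch (my addition, not part of the original answer) that steps to the next representable double by nudging the raw 64-bit pattern; it assumes a finite, positive input:

// Return the next representable double above x (assumes x is finite and > 0).
function nextUp(x) {
    const view = new DataView(new ArrayBuffer(8));
    view.setFloat64(0, x);
    view.setBigUint64(0, view.getBigUint64(0) + 1n);
    return view.getFloat64(0);
}

console.log(nextUp(0.1));                // 0.10000000000000002 (one ulp above 0.1)
console.log(nextUp(0.3) === 0.1 + 0.2);  // true: 0.1 + 0.2 is the double just above 0.3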



P.S. Some programming languages also provide pizza cutters that can split slices into exact tenths. Although such pizza cutters are uncommon, if you do have access to one, you should use it when it's important to be able to get exactly one-tenth or one-fifth of a slice.

(Originally posted on Quora.)

In addition to the other correct answers, you may want to consider scaling your values to avoid problems with floating-point arithmetic.

For example:

var result = 1.0 + 2.0;     // result === 3.0 returns true

... instead of:

var result = 0.1 + 0.2;     // result === 0.3 returns false

The expression 0.1 + 0.2 === 0.3 returns false in JavaScript, but fortunately integer arithmetic in floating-point is exact, so decimal representation errors can be avoided by scaling.

As a practical example, to avoid floating-point problems where accuracy is paramount, it is recommended [1] to handle money as an integer representing the number of cents: 2550 cents instead of 25.50 dollars.



[1] Douglas Crockford: JavaScript: The Good Parts: Appendix A - Awful Parts (page 105).
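A minimal JavaScript sketch of the cents approach (the variable names are mine, purely illustrative):

// Keep money as integer cents; integers up to 2^53 are represented exactly.
const itemCents = 2550;                        // $25.50
const shippingCents = 495;                     // $4.95
const totalCents = itemCents + shippingCents;  // 3045, exact integer arithmetic

// Convert to dollars only when displaying.
console.log((totalCents / 100).toFixed(2));    // "30.45"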

A solution to tidy up the unsightly overflow

function strip(number) {
    return (parseFloat(number.toPrecision(12)));
}

Using 'toPrecision(12)' leaves trailing zeros which 'parseFloat()' removes. Assume it is accurate to plus/minus one on the least significant digit.
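For example (my usage sketch):

strip(0.1 + 0.2);  // 0.3
strip(0.1 + 0.7);  // 0.8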

My answer is quite long, so I've split it into three sections. Since the question is about floating point mathematics, I've put the emphasis on what the machine actually does. I've also made it specific to double (64 bit) precision, but the argument applies equally to any floating point arithmetic.

Preamble

An IEEE 754 double-precision binary floating-point format (binary64) number represents a number of the form

value = (-1)^s * (1.m51 m50 ... m2 m1 m0)_2 * 2^(e - 1023)

in 64 bits:

  • The first bit is the sign bit: 1 if the number is negative, 0 otherwise [1].
  • The next 11 bits are the exponent, which is offset by 1023. In other words, after reading the exponent bits from a double-precision number, 1023 must be subtracted to obtain the power of two.
  • The remaining 52 bits are the significand (or mantissa). In the mantissa, an 'implied' 1. is always [2] omitted, since the most significant bit of any binary value is 1.

[1] IEEE 754 allows for the concept of a signed zero - +0 and -0 are treated differently: 1 / (+0) is positive infinity; 1 / (-0) is negative infinity. For zero values, the mantissa and exponent bits are all zero. Note: zero values (+0 and -0) are explicitly not classed as denormal [2].

[2] This is not the case for denormal numbers, which have an offset exponent of zero (and an implied 0.). The range of denormal double precision numbers is dmin ≤ |x| ≤ dmax, where dmin (the smallest representable nonzero number) is 2^(-1023 - 51) (≈ 4.94 * 10^-324) and dmax (the largest denormal number, for which the mantissa consists entirely of 1s) is 2^(-1023 + 1) - 2^(-1023 - 51) (≈ 2.225 * 10^-308).



Turning a double precision number to binary

Many online converters exist to convert a double precision floating point number to binary (e.g. at binaryconvert.com), but here is some sample C# code to obtain the IEEE 754 representation for a double precision number (I separate the three parts with colons):

public static string BinaryRepresentation(double value)
{
    long valueInLongType = BitConverter.DoubleToInt64Bits(value);
    string bits = Convert.ToString(valueInLongType, 2);
    string leadingZeros = new string('0', 64 - bits.Length);
    string binaryRepresentation = leadingZeros + bits;

    string sign = binaryRepresentation[0].ToString();
    string exponent = binaryRepresentation.Substring(1, 11);
    string mantissa = binaryRepresentation.Substring(12);

    return string.Format("{0}:{1}:{2}", sign, exponent, mantissa);
}
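Since the question itself is about JavaScript, here is a rough JS equivalent of the same idea (my translation using DataView and BigInt, not the original author's code):

// Return the IEEE 754 bits of a double as "sign:exponent:mantissa".
function binaryRepresentation(value) {
    const view = new DataView(new ArrayBuffer(8));
    view.setFloat64(0, value);
    const bits = view.getBigUint64(0).toString(2).padStart(64, '0');
    return bits[0] + ':' + bits.slice(1, 12) + ':' + bits.slice(12);
}

console.log(binaryRepresentation(0.1));
// 0:01111111011:1001100110011001100110011001100110011001100110011010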



Getting to the point: the original question

(Skip to the bottom for the TL;DR version)

@CatoJohnston (the question asker) asked why 0.1 + 0.2 != 0.3.

Written in binary (with colons separating the three parts), the IEEE 754 representations of the values are:

0.1 => 0:01111111011:1001100110011001100110011001100110011001100110011010
0.2 => 0:01111111100:1001100110011001100110011001100110011001100110011010

Note that the mantissa is composed of recurring digits of 0011. This is key to why there is any error in the calculations - 0.1, 0.2 and 0.3 cannot be represented precisely in a finite number of binary bits any more than 1/9, 1/3 or 1/7 can be represented precisely in decimal digits.

Converting the exponents to decimal, removing the offset, and re-adding the implied 1 (in square brackets), 0.1 and 0.2 are:

0.1 = 2^-4 * [1].1001100110011001100110011001100110011001100110011010
0.2 = 2^-3 * [1].1001100110011001100110011001100110011001100110011010

To add two numbers, the exponent needs to be the same, i.e.:

0.1 = 2^-3 *  0.1100110011001100110011001100110011001100110011001101(0)
0.2 = 2^-3 *  1.1001100110011001100110011001100110011001100110011010
sum = 2^-3 * 10.0110011001100110011001100110011001100110011001100111

Since the sum is not of the form 2^n * 1.{bbb} we increase the exponent by one and shift the decimal (binary) point to get:

sum = 2^-2 * 1.0011001100110011001100110011001100110011001100110011(1)

There are now 53 bits in the mantissa (the 53rd is in parentheses in the line above). The default rounding mode for IEEE 754 is 'Round to Nearest' - i.e. if a number x falls exactly halfway between two values a and b, the value where the least significant bit is zero is chosen.

a = 2^-2 * 1.0011001100110011001100110011001100110011001100110011
x = 2^-2 * 1.0011001100110011001100110011001100110011001100110011(1)
b = 2^-2 * 1.0011001100110011001100110011001100110011001100110100

Note that a and b differ only in the last bit; ...0011 + 1 = ...0100. In this case, the value with the least significant bit of zero is b, so the sum is:

sum = 2^-2 * 1.0011001100110011001100110011001100110011001100110100

TL;DR

Writing 0.1 + 0.2 in an IEEE 754 binary representation (with colons separating the three parts) and comparing it to 0.3, this is (I've put the distinct bits in square brackets):

0.1 + 0.2 => 0:01111111101:0011001100110011001100110011001100110011001100110[100]
0.3       => 0:01111111101:0011001100110011001100110011001100110011001100110[011]

Converted back to decimal, these values are:

0.1 + 0.2 => 0.300000000000000044408920985006...
0.3       => 0.299999999999999988897769753748...

The difference is exactly 2^-54, which is ~5.5511151231258 × 10^-17 - insignificant (for many applications) when compared to the original values.

Comparing the last few bits of a floating point number is inherently dangerous, as anyone who reads the famous "What Every Computer Scientist Should Know About Floating-Point Arithmetic" (which covers all the major parts of this answer) will know.

Most calculators use additional guard digits to get around this problem, which is how 0.1 + 0.2 would give 0.3: the final few bits are rounded.

Floating point rounding error. From What Every Computer Scientist Should Know About Floating-Point Arithmetic:

Squeezing infinitely many real numbers into a finite number of bits requires an approximate representation. Although there are infinitely many integers, in most programs the result of integer computations can be stored in 32 bits. In contrast, given any fixed number of bits, most calculations with real numbers will produce quantities that cannot be exactly represented using that many bits. Therefore the result of a floating-point calculation must often be rounded in order to fit back into its finite representation. This rounding error is the characteristic feature of floating-point computation.

My workaround:

function add(a, b, precision) {
    var x = Math.pow(10, precision || 2);
    return (Math.round(a * x) + Math.round(b * x)) / x;
}

precision refers to the number of digits you want to preserve after the decimal point during addition.
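For example (my usage sketch):

add(0.1, 0.2);     // 0.3
add(2.3, 2.4, 1);  // 4.7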

All numbers in JavaScript are represented in binary as IEEE-754 doubles, which provide about 15 significant decimal digits of accuracy. Because they are floating point numbers, they do not always exactly represent real numbers, including fractions.

JavaScript syntax: Number

I found a solution: you can use this function to parse floats correctly, and you can also set your own precision.

function getFloat(value) {
    // Round to 2 significant digits, then let parseFloat drop trailing zeros.
    return parseFloat(Number(value).toPrecision(2));
}

Did you try the duct tape solution?

Try to determine when errors occur and fix them with short if statements. It's not pretty, but for some problems it is the only solution, and this is one of them.

if ((n * 0.1) < 100.0) { return n * 0.1 - 0.000000000000001; }
else                   { return n * 0.1 + 0.000000000000001; }

I had the same problem in a scientific simulation project in C#, and I can tell you that if you ignore the butterfly effect, it's going to turn into a big fat dragon and bite you in the a**.

Those weird numbers appear because computers use the binary (base 2) number system for calculation purposes, while we use decimal (base 10).

The majority of fractional numbers cannot be represented precisely in binary, in decimal, or in either. The result is a rounded number - one that is itself exact, just not the value you wrote.

A lot of good answers have been posted, but the short answer is that not every decimal number has an exact binary floating point representation. For example, the number 0.2 is represented as roughly 0.200000003 in single precision in the IEEE 754 floating point standard.

For those reading through this thread looking to get precision to a specific number of decimal places rather than significant digits: instead of num.toPrecision(2) you can use num.toFixed(2).

To cut a long story short...

For those who are using Java and having problems like that: use the BigDecimal class.

Some statistics related to this famous double precision question. I used this code.

When adding all values (a+b) using a step of 0.1 (from 0.1 to 100), we have a ~15% chance of a precision error. Here are some examples (for full .txt results here):

0.1 + 0.2 = 0.30000000000000004
0.1 + 0.7 = 0.7999999999999999
...
1.7 + 1.9 = 3.5999999999999996
1.7 + 2.2 = 3.9000000000000004
...
3.2 + 3.6 = 6.800000000000001
3.2 + 4.4 = 7.6000000000000005

When subtracting all values (a-b, where a>b) using a step of 0.1 (from 100 to 0.1), we have a ~34% chance of a precision error. Here are some examples (for full .txt results here):

0.6 - 0.2 = 0.39999999999999997
0.5 - 0.4 = 0.09999999999999998
...
2.1 - 0.2 = 1.9000000000000001
2.0 - 1.9 = 0.10000000000000009
...
100 - 99.9 = 0.09999999999999432
100 - 99.8 = 0.20000000000000284

I was surprised by these 15% and 34% figures; they are huge. So always use BigDecimal when precision is of big importance. With 2 decimal digits (step 0.01) the situation worsens a bit more (18% and 36%).

Taken from the PHP documentation: http://php.net/manual/en/language.types.float.php#language.types.float.comparison

To test floating point values for equality, an upper bound on the relative error due to rounding is used. This value is known as the machine epsilon, or unit roundoff, and is the smallest acceptable difference in calculations.

$a and $b are equal to 5 digits of precision.

<?php
$a = 1.23456789;
$b = 1.23456780;
$epsilon = 0.00001;

if(abs($a-$b) < $epsilon) {
    echo "true";
}
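The same idea in JavaScript (an illustrative sketch of mine; the tolerance is application-dependent, not a universal constant):

// Compare two doubles using an absolute tolerance.
function nearlyEqual(a, b, epsilon) {
    return Math.abs(a - b) < (epsilon || 0.00001);
}

console.log(nearlyEqual(0.1 + 0.2, 0.3));          // true
console.log(nearlyEqual(1.23456789, 1.23456780));  // true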

http://jsfiddle.net/ozbob/y4rak722/5/

var foo = 0.1;
var bar = 0.2;
function add(foo, bar, precision){
    return parseFloat((foo + bar).toFixed(precision));
}

Kudos to @Funkodebat.

Given that nobody has mentioned this...

Some high level languages such as Python and Java come with tools to overcome binary floating point limitations. For example:

  • Python's decimal module and Java's BigDecimal class, which represent numbers internally with decimal notation (as opposed to binary notation). Both have limited precision, so they are still error prone; however, they solve the most common problems with binary floating point arithmetic.

    Decimals are very nice when dealing with money: ten cents plus twenty cents are always exactly thirty cents:

    >>> from decimal import Decimal
    >>> 0.1 + 0.2 == 0.3
    False
    >>> Decimal('0.1') + Decimal('0.2') == Decimal('0.3')
    True
    
    

    Python's decimal module is based on IEEE standard 854-1987.

  • Python's fractions module and Apache Commons' BigFraction class. Both represent rational numbers as (numerator, denominator) pairs, and they may give more accurate results than decimal floating point arithmetic.

Neither of these solutions is perfect (especially if we look at performance, or if we require very high precision), but they still solve a great number of problems with binary floating point arithmetic.
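JavaScript's standard library has neither a decimal nor a fraction type, but the fraction idea can be sketched with BigInt numerator/denominator pairs (entirely my illustration):

// Exact rational addition: a/b + c/d = (a*d + c*b) / (b*d).
function addFractions(x, y) {
    return [x[0] * y[1] + y[0] * x[1], x[1] * y[1]];
}

const tenth = [1n, 10n];
const fifth = [1n, 5n];
const total = addFractions(tenth, fifth);      // [15n, 50n], i.e. exactly 3/10
console.log(total[0] * 10n === 3n * total[1]); // true: the sum is exactly 3/10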

The kind of floating-point math that can be implemented in a digital computer necessarily uses an approximation of the real numbers and operations on them. (The standard version runs to hundreds of pages of documentation and has a committee to deal with its errata and further refinement.)

This approximation is a mixture of approximations of different kinds, each of which can either be ignored or carefully accounted for due to its specific manner of deviation from exactitude. It also involves a number of explicit exceptional cases at both the hardware and software levels that most people walk right past while pretending not to notice.

If you need infinite precision (using the number π, for example, instead of one of its many shorter stand-ins), you should write or use a symbolic math program instead.

But if you're okay with the idea that sometimes floating-point math is fuzzy in value and logic and errors can accumulate quickly, and you can write your requirements and tests to allow for that, then your code can frequently get by with what's in your FPU.

Many of this question's numerous duplicates ask about the effects of floating point rounding on specific numbers. In practice, it is easier to get a feeling for how it works by looking at exact results of calculations of interest rather than by just reading about it. Some languages provide ways of doing that - such as converting a float or double to BigDecimal in Java.

Since this is a language-agnostic question, it needs language-agnostic tools, such as a Decimal to Floating-Point Converter.

Applying it to the numbers in the question, treated as doubles:

0.1 converts to 0.1000000000000000055511151231257827021181583404541015625,

0.2 converts to 0.200000000000000011102230246251565404236316680908203125,

0.3 converts to 0.299999999999999988897769753748434595763683319091796875, and

0.30000000000000004 converts to 0.3000000000000000444089209850062616169452667236328125.

Adding the first two numbers manually or in a decimal calculator such as Full Precision Calculator, shows the exact sum of the actual inputs is 0.3000000000000000166533453693773481063544750213623046875.

If it were rounded down to the equivalent of 0.3, the rounding error would be 0.0000000000000000277555756156289135105907917022705078125. Rounding up to the equivalent of 0.30000000000000004 gives exactly the same rounding error, so the result is a tie, and the round-to-even tie breaker applies.

Returning to the floating point converter, the raw hexadecimal for 0.30000000000000004 is 3fd3333333333334, which ends in an even digit and therefore is the correct result.
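You can confirm that raw pattern from JavaScript itself (a sketch of mine, reusing the raw-bits trick shown in earlier answers):

// Show the raw 64-bit pattern of a double as 16 hex digits.
function rawHex(x) {
    const view = new DataView(new ArrayBuffer(8));
    view.setFloat64(0, x);
    return view.getBigUint64(0).toString(16).padStart(16, '0');
}

console.log(rawHex(0.1 + 0.2));  // "3fd3333333333334"
console.log(rawHex(0.3));        // "3fd3333333333333"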

No, not broken, but most decimal fractions must be approximated

Floating point arithmetic is exact; unfortunately, it doesn't match up well with our usual base-10 number representation.

So we often ask it to do something that seems simple in base 10 but is a repeating fraction in base 2. And FP numbers almost always use base 2 fractions.

When we write in decimal, every fraction is a rational number of the form x/(2^n * 5^m). In binary, we only get the 2^n term.

So in decimal, we can't represent 1/3.

In binary, we can't do 1/10 or 1/3.

Worse than that, while every binary fraction can be written in decimal, the reverse is not true. And in fact most decimal fractions repeat in binary.

This isn't that hard to deal with in programs. While people are usually instructed to do < epsilon comparisons, better advice might be to round to integral values (in the C library: round() and roundf(), i.e., stay in the FP format) and then compare. Rounding to a specific decimal fraction length solves most problems with output.

I love the Pizza answer by Chris, because it describes the actual problem, not just the usual handwaving about "inaccuracy". If FP were simply "inaccurate", we could fix that and would have done it decades ago. The reason we haven't is because the FP format is compact and fast and it's the best way to crunch a lot of numbers. If you are just counting beans at a bank, software solutions that use decimal string representations in the first place work perfectly well. But you can't do quantum chromodynamics or aerodynamics that way.
