arpanmukherjee1
XLr8
Can you please elaborate on this? I think I didn't get you. What do you mean by not implicit?
By implicit I understand that the JDK/JRE converts it automatically; you don't have to do it manually (by invoking any method).
Did you open the link just after that line?
this :: Assignment, Arithmetic, and Unary Operators (The Java™ Tutorials > Learning the Java Language > Language Basics)
implicit conversion = Widening Primitive Conversion
explicit conversion = Narrowing Primitive Conversions
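To make the terminology concrete, here is a minimal sketch of my own (the variable names are just for illustration): the widening assignments compile with no cast and no method call, while the narrowing one needs an explicit cast.
Code:
class ConversionDemo {
    public static void main(String[] args) {
        int i = 123;
        long l = i;           // widening (int -> long): implicit, nothing to invoke
        double d = l;         // widening (long -> double): also implicit
        int back = (int) l;   // narrowing (long -> int): explicit cast required
        // int bad = l;       // without the cast this line would not compile
        System.out.println(l + " " + d + " " + back);
    }
}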
I will now quote a fragment from this page :: Conversions and Promotions
the important things are in bold
5.1.2 Widening Primitive Conversion
The following 19 specific conversions on primitive types are called the widening primitive conversions:
* byte to short, int, long, float, or double
* short to int, long, float, or double
* char to int, long, float, or double
* int to long, float, or double
* long to float or double
* float to double
Widening primitive conversions do not lose information about the overall magnitude of a numeric value. Indeed, conversions widening from an integral type to another integral type do not lose any information at all; the numeric value is preserved exactly. Conversions widening from float to double in strictfp expressions also preserve the numeric value exactly; however, such conversions that are not strictfp may lose information about the overall magnitude of the converted value.
Conversion of an int or a long value to float, or of a long value to double, may result in loss of precision; that is, the result may lose some of the least significant bits of the value. In this case, the resulting floating-point value will be a correctly rounded version of the integer value, using IEEE 754 round-to-nearest mode (§4.2.4).
A widening conversion of a signed integer value to an integral type T simply sign-extends the two's-complement representation of the integer value to fill the wider format. A widening conversion of a character to an integral type T zero-extends the representation of the character value to fill the wider format.
Despite the fact that loss of precision may occur, widening conversions among primitive types never result in a run-time exception (§11).
Here is an example of a widening conversion that loses precision:
Code:
class Test {
    public static void main(String[] args) {
        int big = 1234567890;
        float approx = big;
        System.out.println(big - (int)approx);
    }
}
which prints:
-46
thus indicating that information was lost during the conversion from type int to type float because values of type float are not precise to nine significant digits.
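To illustrate the sign-extension vs. zero-extension behaviour mentioned in the quote, here is a small sketch of my own (not from the JLS): a negative short keeps its value when widened to int because the sign bit is copied, while a char with the same bit pattern becomes a large positive int because it is zero-extended.
Code:
class WideningExtension {
    public static void main(String[] args) {
        short s = -1;          // bit pattern 0xFFFF
        char c = '\uffff';     // same bit pattern 0xFFFF
        int fromShort = s;     // sign-extended -> -1
        int fromChar = c;      // zero-extended -> 65535
        System.out.println(fromShort + " " + fromChar);   // prints: -1 65535
    }
}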
5.1.3 Narrowing Primitive Conversions
The following 23 specific conversions on primitive types are called the narrowing primitive conversions:
* byte to char
* short to byte or char
* char to byte or short
* int to byte, short, or char
* long to byte, short, char, or int
* float to byte, short, char, int, or long
* double to byte, short, char, int, long, or float
Narrowing conversions may lose information about the overall magnitude of a numeric value and may also lose precision.
A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits, where n is the number of bits used to represent type T. In addition to a possible loss of information about the magnitude of the numeric value, this may cause the sign of the resulting value to differ from the sign of the input value.
A narrowing conversion of a character to an integral type T likewise simply discards all but the n lowest order bits, where n is the number of bits used to represent type T. In addition to a possible loss of information about the magnitude of the numeric value, this may cause the resulting value to be a negative number, even though characters represent 16-bit unsigned integer values.
The example:
Code:
class Test {
    public static void main(String[] args) {
        float fmin = Float.NEGATIVE_INFINITY;
        float fmax = Float.POSITIVE_INFINITY;
        System.out.println("long: " + (long)fmin + ".." + (long)fmax);
        System.out.println("int: " + (int)fmin + ".." + (int)fmax);
        System.out.println("short: " + (short)fmin + ".." + (short)fmax);
        System.out.println("char: " + (int)(char)fmin + ".." + (int)(char)fmax);
        System.out.println("byte: " + (byte)fmin + ".." + (byte)fmax);
    }
}
produces the output:
long: -9223372036854775808..9223372036854775807
int: -2147483648..2147483647
short: 0..-1
char: 0..65535
byte: 0..-1
The results for char, int, and long are unsurprising, producing the minimum and maximum representable values of the type.
The results for byte and short lose information about the sign and magnitude of the numeric values and also lose precision. The results can be understood by examining the low order bits of the minimum and maximum int. The minimum int is, in hexadecimal, 0x80000000, and the maximum int is 0x7fffffff. This explains the short results, which are the low 16 bits of these values, namely, 0x0000 and 0xffff; it explains the char results, which also are the low 16 bits of these values, namely, '\u0000' and '\uffff'; and it explains the byte results, which are the low 8 bits of these values, namely, 0x00 and 0xff.
Despite the fact that overflow, underflow, or other loss of information may occur, narrowing conversions among primitive types never result in a run-time exception (§11).
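The same "keep only the low-order bits" rule is easy to see with small concrete values too; a quick sketch of my own (not part of the JLS example):
Code:
class NarrowingBits {
    public static void main(String[] args) {
        int a = 300;                   // binary 1 0010 1100
        int b = 200;                   // binary   1100 1000
        System.out.println((byte) a);  // low 8 bits 0010 1100 -> 44
        System.out.println((byte) b);  // low 8 bits 1100 1000 -> -56 (high bit becomes the sign bit)
    }
}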
The whole concept can actually be summed up in one line:
if the conversion is widening (the target type can hold the source's whole range of magnitudes), it is implicit; otherwise you need an explicit cast, or the compiler reports an error.
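One footnote to that rule: "widening" is about the range of magnitudes the target type can represent, not about its memory size. A sketch of my own showing that even long to float (64 bits to 32 bits) is implicit, while the reverse needs a cast:
Code:
class WideningNotAboutSize {
    public static void main(String[] args) {
        long big = 1234567890123L;   // 64-bit value
        float f = big;               // widening (long -> float): implicit even though float is only 32 bits
        long back = (long) f;        // narrowing (float -> long): explicit cast required
        System.out.println(f + " " + back);   // back may differ slightly from big: float lost precision
    }
}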