ruturaj3
Journeyman
Hi,
In C, int is signed by default, and on a 16-bit compiler it is the same size as short int, i.e. 2 bytes, so its range is -32768 to 32767.
And char uses 1 byte.
char ch = 1300; printf("%d", ch); prints 20,
because a char holds only 8 bits.
1300 = 0101 0001 0100 in binary.
So when storing 1300 in a char, only the low 8 bits are kept, i.e. 0001 0100, which is +20 in decimal (+ because its sign bit is 0).
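Here is the snippet I am testing for the char case (assuming char is signed and 8 bits wide, as on my compiler):

    #include <stdio.h>

    int main(void)
    {
        /* 1300 = 0101 0001 0100 in binary; a signed 8-bit char keeps
           only the low 8 bits: 0001 0100, which is 20 */
        char ch = 1300;     /* out of range for char, so it gets truncated */
        printf("%d\n", ch); /* prints 20 */
        return 0;
    }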
So int a = 32767; printf("%d", a); correctly prints 32767.
32772 = 1000 0000 0000 0100 in binary.
So if I take 16 bits, the sign bit is 1, which means a minus sign, so shouldn't it print -4?
But int a = 32772; printf("%d", a); prints -32764. Why? This is the only point I am not getting.
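And this is the snippet for the int case (again assuming a 16-bit compiler where int is 2 bytes; on a compiler with 32-bit int it would just print 32772):

    #include <stdio.h>

    int main(void)
    {
        /* 32772 = 1000 0000 0000 0100 in binary, which does not fit
           in a 16-bit signed int */
        int a = 32772;      /* out of range for a 16-bit int */
        printf("%d\n", a);  /* prints -32764 here, not -4 */
        return 0;
    }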