16.3852 is a decimal number, not a hexadecimal one. Hexadecimal numbers are based on the number 16, and the digits run 0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F. The letters stand for 10, 11, 12, 13, 14 and 15. Like the binary system, the digits are weighted by a power series: the rightmost digit is worth 16 to the zeroth power (that is, 1), the next is 16 to the first power, the next 16 squared, and so on. It is easy to write some very large binary numbers using hexadecimal notation, and it can be thought of as a form of binary "shorthand". All binary, octal and hexadecimal numbers are integers. One way computers deal with decimals is a code known as "BCD", or "binary coded decimal".
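If it helps, here is a quick Python sketch (my own illustration, nothing standard) that expands a hex string into its power-of-16 terms. The value "2AF3" is just an example I picked:

    # Expand a hexadecimal string into its power-of-16 terms.
    def hex_to_decimal(hex_string):
        digits = "0123456789ABCDEF"
        total = 0
        for position, ch in enumerate(reversed(hex_string.upper())):
            value = digits.index(ch)         # A..F stand for 10..15
            total += value * 16 ** position  # rightmost digit is 16**0
        return total

    print(hex_to_decimal("2AF3"))  # 10995, same as int("2AF3", 16)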
Numbers are physically stored inside the computer as 8 bit bytes. These are binary in nature, and each byte can hold a number as large as 255. Two bytes can be strung together into a 16 bit "word", which can hold a number as large as 65535. Modern computers use "32 bit" architecture, meaning they can handle numbers as large as 4294967295. Such a number is loaded as 4 consecutive 8 bit bytes of data in just one step. The binary equivalent would be "11111111111111111111111111111111". In hexadecimal notation, it would be "FFFFFFFF". This is why modern computers have so much memory and are so fast compared to the first PCs, which appeared in the 1980s.
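A short Python sketch can show the largest value each storage size holds, in decimal, hex and binary:

    # Largest unsigned value for 8, 16 and 32 bit storage.
    for bits in (8, 16, 32):
        largest = 2 ** bits - 1
        print(bits, "bits:", largest, "=", hex(largest), "=", bin(largest))
    # 8 bits:  255        = 0xff       = 0b11111111
    # 16 bits: 65535      = 0xffff     = 0b1111111111111111
    # 32 bits: 4294967295 = 0xffffffff = 0b111...1 (32 ones)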
BCD breaks up decimal numbers into 4 bit binary groups, packed two to a byte. 4 bits of binary data can encode any integer from 0 ("0000") to 9 ("1001"). For example, the number "16" would be "10" in hexadecimal, and "00010000" in ordinary binary (it fits in a single 8 bit byte). The BCD equivalent would be "0001_0110". Of course the "_" was added to help illustrate the fact that the 8 bits are two BCD encoded decimal digits: "0001" is the digit "1" and "0110" is the digit "6" in binary notation.
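A one-liner in Python shows the packing; the helper name to_bcd is just something I made up for illustration:

    # Pack a string of decimal digits into BCD, one digit per 4-bit nibble.
    def to_bcd(digit_string):
        return "_".join(format(int(d), "04b") for d in digit_string)

    print(to_bcd("16"))  # 0001_0110  ("0001" is 1, "0110" is 6)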
I'm afraid I'm not exactly sure how the BCD system determines the placement of the decimal point. Computer languages clue the system in as to the type of data by declaring it either "integer" (whole numbers) or "float" or "double" (fractional data). Most "float" (floating point) numbers are 4 bytes, or 32 bits, in length; the "double" type is 8 bytes, or 64 bits. If, as I suspect, the last 4 bits of a 32 bit value are used to indicate the placement of the decimal, then a 32 bit "float" can hold at most 7 decimal digits and is accurate to only 6 decimal places. Another complication is the way the data is read: some systems read left to right, and some right to left. Assuming left to right reading, the decimal number "16.3852" would be the BCD string:
0000_0001_0110_0011_1000_0101_0010_0010
The rightmost 4 bit code is the digit "2", meaning the string "163852" contains a decimal point after the second digit (reading left to right). Finally, note the leftmost 4 bit segment (these groups are actually called "nibbles") is padded with zeroes. This indicates the digits themselves are right justified, just like a line of text.
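Just to tie the example together, here is a Python sketch of the layout I described above: seven digit nibbles, right justified, followed by one nibble giving the decimal point position. I want to stress this is only my guess at a BCD layout, not the format real floating point hardware actually uses:

    # HYPOTHETICAL 32-bit BCD layout (my guess, not a real float format):
    # seven digit nibbles, left padded with zeroes, plus a final nibble
    # giving how many digits sit before the decimal point.
    def encode_hypothetical(number_string):
        whole, _, frac = number_string.partition(".")
        digits = (whole + frac).rjust(7, "0")  # right justify in 7 nibbles
        point_position = len(whole)            # point after this many digits
        return "_".join(format(int(d), "04b")
                        for d in digits + str(point_position))

    print(encode_hypothetical("16.3852"))
    # 0000_0001_0110_0011_1000_0101_0010_0010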
Finally, computers only deal with binary numbers. Hexadecimal and octal systems are shorthand notations humans use to avoid filling notebooks with endless strings of 1s and 0s.
Hope this helps (although it is rather complicated!)