Method 1: Using charCodeAt()
The string charCodeAt() method returns an integer between 0 and 65535, representing the UTF-16 code unit at the given index (or NaN if the index is out of range).
let char = 'K';
let asciiCode = char.charCodeAt(0);
console.log(asciiCode); // 75
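As a quick sketch, the same call can be applied at every index to collect all of a string's code units (the helper name here is illustrative, not part of the standard library):

```javascript
// Map each index of a string to its UTF-16 code unit.
function toCodeUnits(str) {
  const codes = [];
  for (let i = 0; i < str.length; i++) {
    codes.push(str.charCodeAt(i));
  }
  return codes;
}

console.log(toCodeUnits('Kite')); // [75, 105, 116, 101]
```

Note that this walks UTF-16 code units, not characters, so a string containing astral characters yields two entries per such character.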
Method 2: Using codePointAt()
The String codePointAt() method returns a non-negative integer: the Unicode code point value at the given position. It's similar to charCodeAt(), but it can return full code points beyond U+FFFF (up to U+10FFFF), whereas charCodeAt() only ever returns a single 16-bit code unit of the UTF-16 encoding.
For characters within the Basic Multilingual Plane (BMP) – those characters with code points between U+0000 and U+FFFF – codePointAt() and charCodeAt() return the same value.
However, for characters outside the BMP (those with code points between U+10000 and U+10FFFF), codePointAt() returns the correct full code point, whereas charCodeAt() at the same index returns only one half of the surrogate pair — the leading (high) surrogate.
For a character inside the BMP:
let char = 'K';
let codePoint = char.codePointAt(0);
console.log(codePoint); // 75
For a character outside the BMP:
let char = '𐍈';
let codePoint = char.codePointAt(0);
console.log(codePoint); // 66376
In the second example, using charCodeAt(0) instead would return the leading surrogate (55296, i.e. 0xD800), which on its own does not represent the character's full code point.
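To make the difference concrete, here is a small sketch that reads both surrogate code units with charCodeAt() and recombines them manually using the standard UTF-16 formula, then compares the result with what codePointAt() returns directly:

```javascript
const char = '𐍈'; // U+10348, outside the BMP

// charCodeAt() sees the two UTF-16 code units separately.
const high = char.charCodeAt(0); // 55296 (0xD800), leading surrogate
const low  = char.charCodeAt(1); // 57160 (0xDF48), trailing surrogate

// Recombine the pair with the standard surrogate-pair formula.
const combined = (high - 0xD800) * 0x400 + (low - 0xDC00) + 0x10000;

console.log(combined);            // 66376
console.log(char.codePointAt(0)); // 66376 — codePointAt() does this for you
```

In practice you would simply call codePointAt(); the manual recombination is shown only to illustrate what the method does under the hood.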