Spring Security
SEC-1752

Null character appended to TextEncryptor.decrypt(String) results

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 3.1.0.RC2
    • Fix Version/s: 3.1.0.RC3
    • Component/s: Crypto
    • Labels:
      None

      Description

      The following test fails:

      @Test
      public void test() {
      	TextEncryptor encryptor = Encryptors.queryableText("password", "salt");
      	String encrypted = encryptor.encrypt("6048b75ed560785c");
      	String decrypted = encryptor.decrypt(encrypted);
      	assertEquals("6048b75ed560785c", decrypted);
      }
      

      This is because the value of the decrypted variable has a null byte (%00) at the end. To get the test to pass, I called trim():

      @Test
      public void test() {
      	TextEncryptor encryptor = Encryptors.queryableText("password", "salt");
      	String encrypted = encryptor.encrypt("6048b75ed560785c");
      	String decrypted = encryptor.decrypt(encrypted).trim();
      	assertEquals("6048b75ed560785c", decrypted);
      }
      

      I'm not sure why this is happening at this stage, or what the appropriate resolution is. AesBytesEncryptor uses AES with PKCS5 padding, which should take care of padding for both the encrypt and decrypt operations, so I'm puzzled as to why a null byte is being tacked on.

      This problem manifested itself in the Greenhouse reference application as an OAuth signature verification failure. The NUL (%00) byte is the culprit.
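The padding behaviour the reporter expects can be checked against the JCE directly. This is a minimal sketch using plain javax.crypto (not Spring Security's AesBytesEncryptor); the class name, key, and IV are made up for illustration:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class PaddingDemo {

    // Encrypt and decrypt with AES/CBC/PKCS5Padding; doFinal on the decrypt side
    // strips the padding, so the round trip should add no extra bytes.
    public static String roundTrip(String plaintext) throws Exception {
        byte[] key = "0123456789abcdef".getBytes(StandardCharsets.UTF_8); // throwaway 128-bit key
        byte[] iv = new byte[16];                                         // all-zero IV, demo only

        Cipher enc = Cipher.getInstance("AES/CBC/PKCS5Padding");
        enc.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        byte[] ciphertext = enc.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));

        Cipher dec = Cipher.getInstance("AES/CBC/PKCS5Padding");
        dec.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        byte[] decrypted = dec.doFinal(ciphertext); // padding removed here
        return new String(decrypted, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        String out = roundTrip("6048b75ed560785c");
        System.out.println(out.length());                   // 16, not 17
        System.out.println(out.equals("6048b75ed560785c")); // true
    }
}
```

The round trip comes back byte-for-byte identical, which suggests the stray NUL is introduced before encryption rather than by the cipher itself.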

        Activity

        Keith Donald added a comment -

        I have confirmed that it's the doFinal(byte[]) call on the decrypt Cipher in AesBytesEncryptor that is appending this null byte. I don't know why it is doing this, though.

        Keith Donald added a comment -

        Upon further investigation, it appears the call in HexEncodingTextEncryptor#encrypt to Utf8.encode(String) causes the message bytes, once encrypted, to decrypt with an extra null byte. I'm not familiar with the Utf8 class and its use of the java.nio library to perform UTF-8 encoding. If I replace that with a standard String.getBytes("UTF-8") call, the extra byte goes away and the decrypted result is as expected. Luke, any insight into what's going on here, and into the usage of java.nio for UTF-8 encoding?

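The two encoding paths compared in the comment above can be sketched side by side. Utf8Demo and its method names are made up for illustration; the first method mimics the suspected bug pattern of returning a ByteBuffer's backing array wholesale, while the second copies only the bytes up to the buffer's limit:

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.StandardCharsets;

public class Utf8Demo {

    // Mimics the suspected bug: hand back the ByteBuffer's backing array wholesale.
    // The encoder over-allocates, so the array can be longer than the encoded
    // content, leaving trailing zero bytes.
    public static byte[] encodeIgnoringLimit(String s) throws CharacterCodingException {
        ByteBuffer bytes = StandardCharsets.UTF_8.newEncoder().encode(CharBuffer.wrap(s));
        return bytes.array();
    }

    // Correct: copy only the bytes between the buffer's position and limit.
    public static byte[] encodeRespectingLimit(String s) throws CharacterCodingException {
        ByteBuffer bytes = StandardCharsets.UTF_8.newEncoder().encode(CharBuffer.wrap(s));
        byte[] out = new byte[bytes.remaining()];
        bytes.get(out);
        return out;
    }

    public static void main(String[] args) throws Exception {
        String msg = "6048b75ed560785c"; // 16 chars, 16 UTF-8 bytes
        // The backing array's length is implementation-dependent and may exceed 16,
        // padding the tail with zero bytes.
        System.out.println(encodeIgnoringLimit(msg).length);
        System.out.println(encodeRespectingLimit(msg).length); // 16
    }
}
```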
        Keith Donald added a comment -

        Attached is a patch to HexEncodingTextEncryptor that resolves this issue.

        Luke Taylor added a comment - edited

        The reason for using CharsetEncoder is primarily to deal with the case where the bytes are not valid. We generally want to fail fast in that case, but the Javadoc for public String(byte[] bytes, String charsetName) says the behaviour is undefined — typically it will substitute a fixed "replacement string" for invalid input. String.getBytes() actually uses CharsetEncoder under the hood, but configures the encoder not to throw an exception for invalid input.

        The problem looks to be due to the fact that I haven't taken the limit of the ByteBuffer into account, but just pulled the underlying array contents directly (the array happens to be one byte larger than the actual content length, hence the null byte at the end).

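The fail-fast distinction described above can be demonstrated with the standard library alone (DecodeDemo and its method names are made up for illustration): a freshly created CharsetDecoder defaults to CodingErrorAction.REPORT and throws on malformed input, whereas new String(bytes, charset) silently substitutes the U+FFFD replacement character:

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.StandardCharsets;

public class DecodeDemo {

    // new String(...) never throws: malformed bytes become U+FFFD replacement chars.
    public static String decodeLenient(byte[] bytes) {
        return new String(bytes, StandardCharsets.UTF_8);
    }

    // A fresh CharsetDecoder defaults to CodingErrorAction.REPORT and fails fast.
    public static boolean strictDecodeRejects(byte[] bytes) {
        try {
            StandardCharsets.UTF_8.newDecoder().decode(ByteBuffer.wrap(bytes));
            return false;
        } catch (CharacterCodingException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        byte[] malformed = { (byte) 0xC3, (byte) 0x28 }; // 0xC3 requires a continuation byte
        System.out.println(decodeLenient(malformed));       // replacement char, then '('
        System.out.println(strictDecodeRejects(malformed)); // true
    }
}
```

This is why a strict CharsetEncoder/CharsetDecoder is worth keeping here; the fix is to respect the ByteBuffer's limit rather than to abandon java.nio.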

          People

          • Assignee:
            Luke Taylor
          • Reporter:
            Keith Donald