
Mismatch in chunk sizes after encryption #789

@ryssbowh


Question

I'm developing a website in a HIPAA environment where everything has to be encrypted. I'm trying to encrypt the chunks on the client side and decrypt them on the server side (PHP).

Code example on the client side:

import { Upload } from "tus-js-client";
// Type-only imports; assumes tus-js-client's type definitions export the
// FileSource and SliceResult interfaces used below.
import type { FileSource, SliceResult } from "tus-js-client";

class EncryptedBlobFileSource implements FileSource {
  private _file: Blob;
  size: number;
  encryptionKey: string;
  iv: Uint8Array;

  constructor(file: Blob, iv: Uint8Array, encryptionKey: string) {
    this._file = file;
    this.size = file.size;
    this.iv = iv;
    this.encryptionKey = encryptionKey;
  }

  // Encrypt each requested slice before handing it to tus-js-client.
  // `size` is the plaintext slice size; `value` is the (larger) encrypted blob.
  async slice(start: number, end: number): Promise<SliceResult> {
    const raw = this._file.slice(start, end);
    const value = await encryptBlob(raw, this.encryptionKey, this.iv);
    const size = raw.size;
    const done = end >= this.size;
    // @ts-ignore
    return { value, size, done };
  }

  close() {}
}
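
The encryptBlob helper isn't shown above, so here is a minimal sketch of what it could look like, assuming the key is a base64-encoded raw AES-256 key and the cipher is AES-GCM via the Web Crypto API (both are assumptions on my side):

async function encryptBlob(blob: Blob, encryptionKey: string, iv: Uint8Array): Promise<Blob> {
  // Assumption: encryptionKey is a base64-encoded 256-bit AES key.
  const rawKey = Uint8Array.from(atob(encryptionKey), (c) => c.charCodeAt(0));
  const key = await crypto.subtle.importKey("raw", rawKey, { name: "AES-GCM" }, false, ["encrypt"]);
  const plaintext = await blob.arrayBuffer();
  // AES-GCM output is the plaintext plus a 16-byte authentication tag, so the
  // encrypted blob is slightly larger than the original slice.
  const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, plaintext);
  return new Blob([ciphertext]);
}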

const upload = async (file: File) => {
  const options: any = {
    endpoint: "/api/tus",
    retryDelays: [],
    metadata: {
      filename: file.name,
      name: file.name,
      filetype: file.type,
    },
    // Headers must exist before the encryption branch below adds to them.
    headers: {},
    onError: function (error: Error) {
      console.log(error);
    },
  };
  if (process.env.NEXT_PUBLIC_ENCRYPT_PAYLOADS === "1" && encryptionKey) {
    const iv = crypto.getRandomValues(new Uint8Array(12));
    options.headers["X-Encryption-IV"] = Buffer.from(iv).toString("base64");
    options.headers["Tus-Encrypted"] = "1";
    options.fileReader = {
      // Hand tus-js-client a file source that encrypts every slice on the fly.
      openFile: (input: any, chunkSize: number): Promise<FileSource> => {
        return Promise.resolve(new EncryptedBlobFileSource(input, iv, encryptionKey));
      },
    };
  }

  const tusUpload = new Upload(file, options);
  tusUpload.start();
};

This throws the client error:

Error: tus: failed to upload chunk at offset 0, caused by Error: upload was configured with a size of 92652 bytes, but the source is done after 92668 bytes, originated from request ...

I'm guessing this is normal, since the blob size changes after encryption.
In theory the client could keep working with the original blob size, and the server, which decrypts the data, would be responsible for sending back the proper Upload-Offset header?
Curious to hear what you think of this, and whether there would be a native solution for it?
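
As a sanity check on the numbers in that error, assuming the encryption is AES-GCM (the 12-byte IV and the exact 16-byte difference both point that way): GCM appends a 16-byte authentication tag to every ciphertext, which matches 92668 - 92652 = 16. A tiny sketch to verify:

const checkGcmOverhead = async () => {
  // Hypothetical throwaway key, only to demonstrate the size arithmetic.
  const key = await crypto.subtle.generateKey({ name: "AES-GCM", length: 256 }, true, ["encrypt"]);
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const plaintext = new Uint8Array(92652);
  const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, plaintext);
  console.log(ciphertext.byteLength); // 92668, i.e. 92652 + 16-byte GCM tag
};
checkGcmOverhead();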

Thanks for your time.
